HOW TO DESTROY SURVEILLANCE CAPITALISM

By Cory Doctorow, OneZero

August 30, 2020

https://popularresistance.org/how-to-destroy-surveillance-capitalism/


Editor’s Note: Surveillance capitalism is everywhere. But it’s not the result of some wrong turn or a rogue abuse of corporate power — it’s the system working as intended. This is the subject of Cory Doctorow’s new book, which we’re thrilled to publish in whole here on OneZero. This is how to destroy surveillance capitalism.

The Net Of A Thousand Lies

The most surprising thing about the rebirth of flat Earthers in the 21st century is just how widespread the evidence against them is. You can understand how, centuries ago, people who’d never gained a high-enough vantage point from which to see the Earth’s curvature might come to the commonsense belief that the flat-seeming Earth was, indeed, flat.

But today, when elementary schools routinely dangle GoPro cameras from balloons and loft them high enough to photograph the Earth’s curve — to say nothing of the unexceptional sight of the curved Earth from an airplane window — it takes a heroic effort to maintain the belief that the world is flat.

Likewise for white nationalism and eugenics: In an age where you can become a computational genomics datapoint by swabbing your cheek and mailing it to a gene-sequencing company along with a modest sum of money, “race science” has never been easier to refute.

We are living through a golden age of both readily available facts and denial of those facts. Terrible ideas that have lingered on the fringes for decades or even centuries have gone mainstream seemingly overnight.

When an obscure idea gains currency, there are only two things that can explain its ascendance: Either the person expressing that idea has gotten a lot better at stating their case, or the proposition has become harder to deny in the face of mounting evidence. In other words, if we want people to take climate change seriously, we can get a bunch of Greta Thunbergs to make eloquent, passionate arguments from podiums, winning our hearts and minds, or we can wait for flood, fire, broiling sun, and pandemics to make the case for us. In practice, we’ll probably have to do some of both: The more we’re boiling and burning and drowning and wasting away, the easier it will be for the Greta Thunbergs of the world to convince us.

The arguments for ridiculous beliefs in odious conspiracies like anti-vaccination, climate denial, a flat Earth, and eugenics are no better than they were a generation ago. Indeed, they’re worse because they are being pitched to people who have at least a background awareness of the refuting facts.

Anti-vax has been around since the first vaccines, but the early anti-vaxxers were pitching people who were less equipped to understand even the most basic ideas from microbiology, and moreover, those people had not witnessed the extermination of mass-murdering diseases like polio, smallpox, and measles. Today’s anti-vaxxers are no more eloquent than their forebears, and they have a much harder job.

So can these far-fetched conspiracy theorists really be succeeding on the basis of superior arguments?

Some people think so. Today, there is a widespread belief that machine learning and commercial surveillance can turn even the most fumble-tongued conspiracy theorist into a Svengali who can warp your perceptions and win your belief by locating vulnerable people and then pitching them with A.I.-refined arguments that bypass their rational faculties and turn everyday people into flat Earthers, anti-vaxxers, or even Nazis. When the RAND Corporation blames Facebook for “radicalization” and when Facebook’s role in spreading coronavirus misinformation is blamed on its algorithm, the implicit message is that machine learning and surveillance are causing the changes in our consensus about what’s true.

After all, in a world where sprawling and incoherent conspiracy theories like Pizzagate and its successor, QAnon, have widespread followings, something must be afoot.

But what if there’s another explanation? What if it’s the material circumstances, and not the arguments, that are making the difference for these conspiracy pitchmen? What if the trauma of living through real conspiracies all around us — conspiracies among wealthy people, their lobbyists, and lawmakers to bury inconvenient facts and evidence of wrongdoing (these conspiracies are commonly known as “corruption”) — is making people vulnerable to conspiracy theories?

If it’s trauma and not contagion — material conditions and not ideology — that is making the difference today and enabling a rise of repulsive misinformation in the face of easily observed facts, that doesn’t mean our computer networks are blameless. They’re still doing the heavy work of locating vulnerable people and guiding them through a series of ever-more-extreme ideas and communities.

Belief in conspiracy is a raging fire that has done real damage and poses real danger to our planet and species, from epidemics kicked off by vaccine denial to genocides kicked off by racist conspiracies to planetary meltdown caused by denial-inspired climate inaction. Our world is on fire, and so we have to put the fires out — to figure out how to help people see the truth of the world through the conspiracies they’ve been confused by.

But firefighting is reactive. We need fire prevention. We need to strike at the traumatic material conditions that make people vulnerable to the contagion of conspiracy. Here, too, tech has a role to play.

There’s no shortage of proposals to address this. From the EU’s Terrorist Content Regulation, which requires platforms to police and remove “extremist” content, to the U.S. proposals to force tech companies to spy on their users and hold them liable for their users’ bad speech, there’s a lot of energy to force tech companies to solve the problems they created.

There’s a critical piece missing from the debate, though. All these solutions assume that tech companies are a fixture, that their dominance over the internet is a permanent fact. Proposals to replace Big Tech with a more diffused, pluralistic internet are nowhere to be found. Worse: The “solutions” on the table today require Big Tech to stay big because only the very largest companies can afford to implement the systems these laws demand.

Figuring out what we want our tech to look like is crucial if we’re going to get out of this mess. Today, we’re at a crossroads where we’re trying to figure out if we want to fix the Big Tech companies that dominate our internet or if we want to fix the internet itself by unshackling it from Big Tech’s stranglehold. We can’t do both, so we have to choose.

I want us to choose wisely. Taming Big Tech is integral to fixing the internet, and for that, we need digital rights activism.

Digital Rights Activism, A Quarter-Century On

Digital rights activism is more than 30 years old now. The Electronic Frontier Foundation turned 30 this year; the Free Software Foundation launched in 1985. For most of the history of the movement, the most prominent criticism leveled against it was that it was irrelevant: The real activist causes were real-world causes (think of the skepticism when Finland declared broadband a human right in 2010), and real-world activism was shoe-leather activism (think of Malcolm Gladwell’s contempt for “clicktivism”). But as tech has grown more central to our daily lives, these accusations of irrelevance have given way first to accusations of insincerity (“You only care about tech because you’re shilling for tech companies”) and then to accusations of negligence (“Why didn’t you foresee that tech could be such a destructive force?”). But digital rights activism is right where it’s always been: looking out for the humans in a world where tech is inexorably taking over.

The latest version of this critique comes in the form of “surveillance capitalism,” a term coined by business professor Shoshana Zuboff in her long and influential 2019 book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Zuboff argues that “surveillance capitalism” is a unique creature of the tech industry and that it is unlike any other abusive commercial practice in history, one that is “constituted by unexpected and often illegible mechanisms of extraction, commodification, and control that effectively exile persons from their own behavior while producing new markets of behavioral prediction and modification. Surveillance capitalism challenges democratic norms and departs in key ways from the centuries-long evolution of market capitalism.” It is a new and deadly form of capitalism, a “rogue capitalism,” and our lack of understanding of its unique capabilities and dangers represents an existential, species-wide threat. She’s right that capitalism today threatens our species, and she’s right that tech poses unique challenges to our species and civilization, but she’s really wrong about how tech is different and why it threatens our species.

What’s more, I think that her incorrect diagnosis will lead us down a path that ends up making Big Tech stronger, not weaker. We need to take down Big Tech, and to do that, we need to start by correctly identifying the problem.

Tech Exceptionalism, Then And Now

Early critics of the digital rights movement — perhaps best represented by campaigning organizations like the Electronic Frontier Foundation, the Free Software Foundation, Public Knowledge, and others that focused on preserving and enhancing basic human rights in the digital realm — damned activists for practicing “tech exceptionalism.” Around the turn of the millennium, serious people ridiculed any claim that tech policy mattered in the “real world.” Claims that tech rules had implications for speech, association, privacy, search and seizure, and fundamental rights and equities were treated as ridiculous, an elevation of the concerns of sad nerds arguing about Star Trek on bulletin board systems above the struggles of the Freedom Riders, Nelson Mandela, or the Warsaw ghetto uprising.

In the decades since, accusations of “tech exceptionalism” have only sharpened as tech’s role in everyday life has expanded: Now that tech has infiltrated every corner of our life and our online lives have been monopolized by a handful of giants, defenders of digital freedoms are accused of carrying water for Big Tech, providing cover for its self-interested negligence (or worse, nefarious plots).

From my perspective, the digital rights movement has remained stationary while the rest of the world has moved. From the earliest days, the movement’s concern was users and the toolsmiths who provided the code they needed to realize their fundamental rights. Digital rights activists only cared about companies to the extent that companies were acting to uphold users’ rights (or, just as often, when companies were acting so foolishly that they threatened to bring down new rules that would also make it harder for good actors to help users).

The “surveillance capitalism” critique recasts the digital rights movement in a new light again: not as alarmists who overestimate the importance of their shiny toys nor as shills for big tech but as serene deck-chair rearrangers whose long-standing activism is a liability because it makes them incapable of perceiving novel threats as they continue to fight the last century’s tech battles.

But tech exceptionalism is a sin no matter who practices it.

Don’t Believe The Hype

You’ve probably heard that “if you’re not paying for the product, you’re the product.” As we’ll see below, that’s true, if incomplete. But what is absolutely true is that ad-driven Big Tech’s customers are advertisers, and what companies like Google and Facebook sell is their ability to convince you to buy stuff. Big Tech’s product is persuasion. The services — social media, search engines, maps, messaging, and more — are delivery systems for persuasion.

The fear of surveillance capitalism starts from the (correct) presumption that everything Big Tech says about itself is probably a lie. But the surveillance capitalism critique makes an exception for the claims Big Tech makes in its sales literature — the breathless hype in the pitches to potential advertisers online and in ad-tech seminars about the efficacy of its products: It assumes that Big Tech is as good at influencing us as it claims to be when it’s selling influencing products to credulous customers. That’s a mistake because sales literature is not a reliable indicator of a product’s efficacy.

Surveillance capitalism assumes that because advertisers buy a lot of what Big Tech is selling, Big Tech must be selling something real. But Big Tech’s massive sales could just as easily be the result of a popular delusion or something even more pernicious: monopolistic control over our communications and commerce.

Being watched changes your behavior, and not for the better. It creates risks for our social progress. Zuboff’s book features beautifully wrought explanations of these phenomena. But Zuboff also claims that surveillance literally robs us of our free will — that when our personal data is mixed with machine learning, it creates a system of persuasion so devastating that we are helpless before it. That is, Facebook uses an algorithm to analyze the data it nonconsensually extracts from your daily life and uses it to customize your feed in ways that get you to buy stuff. It is a mind-control ray out of a 1950s comic book, wielded by mad scientists whose supercomputers guarantee them perpetual and total world domination.

What Is Persuasion?

To understand why you shouldn’t worry about mind-control rays — but why you should worry about surveillance and Big Tech — we must start by unpacking what we mean by “persuasion.”

Google, Facebook, and other surveillance capitalists promise their customers (the advertisers) that if they use machine-learning tools trained on unimaginably large data sets of nonconsensually harvested personal information, they will be able to uncover ways to bypass the rational faculties of the public and direct their behavior, creating a stream of purchases, votes, and other desired outcomes.

But there’s little evidence that this is happening. Instead, the predictions that surveillance capitalism delivers to its customers are much less impressive. Rather than the mind control they sell, surveillance capitalists like Mark Zuckerberg mostly do one or more of four things:

1. Segmenting

If you’re selling diapers, you have better luck if you pitch them to people in maternity wards. Not everyone who enters or leaves a maternity ward just had a baby, and not everyone who just had a baby is in the market for diapers. But having a baby is a really reliable correlate of being in the market for diapers, and being in a maternity ward is highly correlated with having a baby. Hence diaper ads around maternity wards (and even pitchmen for baby products, who haunt maternity wards with baskets full of freebies).

Surveillance capitalism is segmenting times a billion. Diaper vendors can go way beyond people in maternity wards (though they can do that, too, with things like location-based mobile ads). They can target you based on whether you’re reading articles about child-rearing, diapers, or a host of other subjects, and data mining can suggest unobvious keywords to advertise against. They can target you based on the articles you’ve recently read. They can target you based on what you’ve recently purchased. They can target you based on whether you receive emails or private messages about these subjects — or even if you speak aloud about them (though Facebook and the like convincingly claim that’s not happening — yet).
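To make the mechanics concrete, here’s a minimal sketch of what segmenting amounts to in code. Every field name and threshold in it is invented for illustration (no ad platform publishes its actual rules), but the shape is the same: correlated signals in, audience list out.

```python
# Hypothetical audience segmentation: invented field names and thresholds.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    articles_read: list = field(default_factory=list)    # topics recently read
    recent_purchases: list = field(default_factory=list)
    visited_places: list = field(default_factory=list)   # location-based signals

def in_diaper_segment(user: UserProfile) -> bool:
    """True if the user's signals correlate with being in the market for
    diapers. Each test is a correlate, not proof, so we require two."""
    signals = [
        "child-rearing" in user.articles_read,
        "diapers" in user.articles_read,
        any(p in ("crib", "stroller", "baby monitor") for p in user.recent_purchases),
        "maternity ward" in user.visited_places,
    ]
    return sum(signals) >= 2

users = [
    UserProfile("a", articles_read=["child-rearing", "diapers"]),
    UserProfile("b", articles_read=["gardening"]),
]
print([u.user_id for u in users if in_diaper_segment(u)])  # ['a']
```

Nothing in that filter changes anyone’s mind; it only finds people whose minds already point in a given direction.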

This is seriously creepy.

But it’s not mind control.

It doesn’t deprive you of your free will. It doesn’t trick you.

Think of how surveillance capitalism works in politics. Surveillance capitalist companies sell political operatives the power to locate people who might be receptive to their pitch. Candidates campaigning on finance industry corruption seek people struggling with debt; candidates campaigning on xenophobia seek out racists. Political operatives have always targeted their message whether their intentions were honorable or not: Union organizers set up pitches at factory gates, and white supremacists hand out fliers at John Birch Society meetings.

But this is an inexact and thus wasteful practice. The union organizer can’t know which worker to approach on the way out of the factory gates and may waste their time on a covert John Birch Society member; the white supremacist doesn’t know which of the Birchers are so delusional that making it to a meeting is as much as they can manage and which ones might be convinced to cross the country to carry a tiki torch through the streets of Charlottesville, Virginia.

Because targeting improves the yields on political pitches, it can accelerate the pace of political upheaval by making it possible for everyone who has secretly wished for the toppling of an autocrat — or just an 11-term incumbent politician — to find everyone else who feels the same way at very low cost. This has been critical to the rapid crystallization of recent political movements including Black Lives Matter and Occupy Wall Street as well as less savory players like the far-right white nationalist movements that marched in Charlottesville.

It’s important to differentiate this kind of political organizing from influence campaigns; finding people who secretly agree with you isn’t the same as convincing people to agree with you. The rise of phenomena like nonbinary or otherwise nonconforming gender identities is often characterized by reactionaries as the result of online brainwashing campaigns that convince impressionable people that they have been secretly queer all along.

But the personal accounts of those who have come out tell a different story where people who long harbored a secret about their gender were emboldened by others coming forward and where people who knew that they were different but lacked a vocabulary for discussing that difference learned the right words from these low-cost means of finding people and learning about their ideas.

2. Deception

Lies and fraud are pernicious, and surveillance capitalism supercharges them through targeting. If you want to sell a fraudulent payday loan or subprime mortgage, surveillance capitalism can help you find people who are both desperate and unsophisticated and thus receptive to your pitch. This accounts for the rise of many phenomena, like multilevel marketing schemes, in which deceptive claims about potential earnings and the efficacy of sales techniques are targeted at desperate people by advertising against search queries that indicate, for example, someone struggling with ill-advised loans.

Surveillance capitalism also abets fraud by making it easy to locate other people who have been similarly deceived, forming a community of people who reinforce one another’s false beliefs. Think of the forums where people who are being victimized by multilevel marketing frauds gather to trade tips on how to improve their luck in peddling the product.

Sometimes, online deception involves replacing someone’s correct beliefs with incorrect ones, as it does in the anti-vaccination movement, whose victims are often people who start out believing in vaccines but are convinced by seemingly plausible evidence that leads them into the false belief that vaccines are harmful.

But it’s much more common for fraud to succeed when it doesn’t have to displace a true belief. When my daughter contracted head lice at daycare, one of the daycare workers told me I could get rid of them by treating her hair and scalp with olive oil. I didn’t know anything about head lice, and I assumed that the daycare worker did, so I tried it (it didn’t work, and it doesn’t work). It’s easy to end up with false beliefs when you simply don’t know any better and when those beliefs are conveyed by someone who seems to know what they’re doing.

This is pernicious and difficult — and it’s also the kind of thing the internet can help guard against by making true information available, especially in a form that exposes the underlying deliberations among parties with sharply divergent views, such as Wikipedia. But it’s not brainwashing; it’s fraud. In the majority of cases, the victims of these fraud campaigns have an informational void filled in the customary way, by consulting a seemingly reliable source. If I look up the length of the Brooklyn Bridge and learn that it is 5,800 feet long, but in reality, it is 5,989 feet long, the underlying deception is a problem, but it’s a problem with a simple remedy. It’s a very different problem from the anti-vax issue in which someone’s true belief is displaced by a false one by means of sophisticated persuasion.

3. Domination

Surveillance capitalism is the result of monopoly. Monopoly is the cause, and surveillance capitalism and its negative outcomes are the effects. I’ll get into this in depth later, but for now, suffice it to say that the tech industry has grown up with a radical theory of antitrust that has allowed companies to grow by merging with their rivals, buying up their nascent competitors, and expanding to control whole market verticals.

One example of how monopolism aids in persuasion is through dominance: Google makes editorial decisions about its algorithms that determine the sort order of the responses to our queries. If a cabal of fraudsters have set out to trick the world into thinking that the Brooklyn Bridge is 5,800 feet long, and if Google gives a high search rank to this group in response to queries like “How long is the Brooklyn Bridge?” then the first eight or 10 screens’ worth of Google results could be wrong. And since most people don’t go beyond the first couple of results — let alone the first page of results — Google’s choice means that many people will be deceived.

Google’s dominance over search — more than 86% of web searches are performed through Google — means that the way it orders its search results has an outsized effect on public beliefs. Ironically, Google claims this is why it can’t afford to have any transparency in its algorithm design: Google’s search dominance makes the results of its sorting too important to risk telling the world how it arrives at those results lest some bad actor discover a flaw in the ranking system and exploit it to push its point of view to the top of the search results. There’s an obvious remedy to a company that is too big to audit: break it up into smaller pieces.

Zuboff calls surveillance capitalism a “rogue capitalism” whose data-hoarding and machine-learning techniques rob us of our free will. But influence campaigns that seek to displace existing, correct beliefs with false ones have an effect that is small and temporary while monopolistic dominance over informational systems has massive, enduring effects. Controlling the results to the world’s search queries means controlling access both to arguments and their rebuttals and, thus, control over much of the world’s beliefs. If our concern is how corporations are foreclosing on our ability to make up our own minds and determine our own futures, the impact of dominance far exceeds the impact of manipulation and should be central to our analysis and any remedies we seek.

4. Bypassing our rational faculties

This is the good stuff: using machine learning, “dark patterns,” engagement hacking, and other techniques to get us to do things that run counter to our better judgment. This is mind control.

Some of these techniques have proven devastatingly effective (if only in the short term). The use of countdown timers on a purchase completion page can create a sense of urgency that causes you to ignore the nagging internal voice suggesting that you should shop around or sleep on your decision. The use of people from your social graph in ads can provide “social proof” that a purchase is worth making. Even the auction system pioneered by eBay is calculated to play on our cognitive blind spots, letting us feel like we “own” something because we bid on it, thus encouraging us to bid again when we are outbid to ensure that “our” things stay ours.

Games are extraordinarily good at this. “Free to play” games manipulate us through many techniques, such as presenting players with a series of smoothly escalating challenges that create a sense of mastery and accomplishment but which sharply transition into a set of challenges that are impossible to overcome without paid upgrades. Add some social proof to the mix — a stream of notifications about how well your friends are faring — and before you know it, you’re buying virtual power-ups to get to the next level.

Companies have risen and fallen on these techniques, and the “fallen” part is worth paying attention to. In general, living things adapt to stimulus: Something that is very compelling or noteworthy when you first encounter it fades with repetition until you stop noticing it altogether. Consider the refrigerator hum that irritates you when it starts up but disappears into the background so thoroughly that you only notice it when it stops again.

That’s why behavioral conditioning uses “intermittent reinforcement schedules.” Instead of giving you a steady drip of encouragement or setbacks, games and gamified services scatter rewards on a randomized schedule — often enough to keep you interested and random enough that you can never quite find the pattern that would make it boring.
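A toy illustration of the difference, with an invented reward probability; the point is only that the second schedule offers no pattern to adapt to.

```python
import random

def fixed_schedule(pulls: int) -> list:
    # Reward every 4th action: predictable, so it soon fades into the background.
    return [i % 4 == 3 for i in range(pulls)]

def intermittent_schedule(pulls: int, p: float = 0.25) -> list:
    # Roughly the same average reward rate, but each action pays off at random.
    return [random.random() < p for _ in range(pulls)]

random.seed(1)
fixed = fixed_schedule(20)
random_drip = intermittent_schedule(20)
print(sum(fixed), fixed[:8])              # a similar number of rewards...
print(sum(random_drip), random_drip[:8])  # ...but no pattern to find
```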

Intermittent reinforcement is a powerful behavioral tool, but it also represents a collective action problem for surveillance capitalism. The “engagement techniques” invented by the behaviorists of surveillance capitalist companies are quickly copied across the whole sector so that what starts as a mysteriously compelling fillip in the design of a service — like “pull to refresh” or alerts when someone likes your posts or side quests that your characters get invited to while in the midst of main quests — quickly becomes dully ubiquitous. The impossible-to-nail-down nonpattern of randomized drips from your phone becomes a grey-noise wall of sound as every single app and site starts to make use of whatever seems to be working at the time.

From the surveillance capitalist’s point of view, our adaptive capacity is like a harmful bacterium that deprives it of its food source — our attention — and novel techniques for snagging that attention are like new antibiotics that can be used to breach our defenses and destroy our self-determination. And there are techniques like that. Who can forget the Great Zynga Epidemic, when all of our friends were caught in FarmVille’s endless, mindless dopamine loops? But every new attention-commanding technique is jumped on by the whole industry and used so indiscriminately that antibiotic resistance sets in. Given enough repetition, almost all of us develop immunity to even the most powerful techniques — by 2013, two years after Zynga’s peak, its user base had halved.

Not everyone, of course. Some people never adapt to stimulus, just as some people never stop hearing the hum of the refrigerator. This is why most people who are exposed to slot machines play them for a while and then move on while a small and tragic minority liquidate their kids’ college funds, buy adult diapers, and position themselves in front of a machine until they collapse.

But surveillance capitalism’s margins on behavioral modification suck. Tripling the rate at which someone buys a widget sounds great unless the base rate is way less than 1% with an improved rate of… still less than 1%. Even penny slot machines pull down pennies for every spin while surveillance capitalism rakes in infinitesimal penny fractions.
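The arithmetic is worth spelling out; all of these numbers are illustrative:

```python
impressions = 1_000_000
base_rate = 0.001             # 0.1% of viewers would have bought anyway
boosted_rate = base_rate * 3  # a "300% lift" from behavioral targeting

extra_sales = impressions * (boosted_rate - base_rate)
print(f"{boosted_rate:.1%} conversion after tripling")        # 0.3%, still under 1%
print(f"{extra_sales:.0f} extra sales per million ads shown") # 2000
```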

Slot machines’ high returns mean that they can be profitable just by draining the fortunes of the small rump of people who are pathologically vulnerable to them and unable to adapt to their tricks. But surveillance capitalism can’t survive on the fractional pennies it brings down from that vulnerable sliver — that’s why, after the Great Zynga Epidemic had finally burned itself out, the small number of still-addicted players left behind couldn’t sustain it as a global phenomenon. And new powerful attention weapons aren’t easy to find, as is evidenced by the long years since the last time Zynga had a hit. Despite the hundreds of millions of dollars that Zynga has to spend on developing new tools to blast through our adaptation, it has never managed to repeat the lucky accident that let it snag so much of our attention for a brief moment in 2009. Powerhouses like Supercell have fared a little better, but they are rare and throw away many failures for every success.

The vulnerability of small segments of the population to dramatic, efficient corporate manipulation is a real concern that’s worthy of our attention and energy. But it’s not an existential threat to society.

If Data Is The New Oil, Then Surveillance Capitalism’s Engine Has A Leak

This adaptation problem offers an explanation for one of surveillance capitalism’s most alarming traits: its relentless hunger for data and its endless expansion of data-gathering capabilities through the spread of sensors, online surveillance, and acquisition of data streams from third parties.

Zuboff observes this phenomenon and concludes that data must be very valuable if surveillance capitalism is so hungry for it. (In her words: “Just as industrial capitalism was driven to the continuous intensification of the means of production, so surveillance capitalists and their market players are now locked into the continuous intensification of the means of behavioral modification and the gathering might of instrumentarian power.”) But what if the voracious appetite is because data has such a short half-life — because people become inured so quickly to new, data-driven persuasion techniques — that the companies are locked in an arms race with our limbic system? What if it’s all a Red Queen’s race where they have to run ever faster — collect ever-more data — just to stay in the same spot?

Of course, all of Big Tech’s persuasion techniques work in concert with one another, and collecting data is useful beyond mere behavioral trickery.

If someone wants to recruit you to buy a refrigerator or join a pogrom, they might use profiling and targeting to send messages to people they judge to be good sales prospects. The messages themselves may be deceptive, making claims about things you’re not very knowledgeable about (food safety and energy efficiency or eugenics and historical claims about racial superiority). They might use search engine optimization and/or armies of fake reviewers and commenters and/or paid placement to dominate the discourse so that any search for further information takes you back to their messages. And finally, they may refine the different pitches using machine learning and other techniques to figure out what kind of pitch works best on someone like you.

Each phase of this process benefits from surveillance: The more data they have, the more precisely they can profile you and target you with specific messages. Think of how you’d sell a fridge if you knew that the warranty on your prospect’s fridge just expired and that they were expecting a tax rebate in April.

Also, the more data they have, the better they can craft deceptive messages — if I know that you’re into genealogy, I might not try to feed you pseudoscience about genetic differences between “races,” sticking instead to conspiratorial secret histories of “demographic replacement” and the like.

Facebook also helps you locate people who have the same odious or antisocial views as you. It makes it possible to find other people who want to carry tiki torches through the streets of Charlottesville in Confederate cosplay. It can help you find other people who want to join your militia and go to the border to look for undocumented migrants to terrorize. It can help you find people who share your belief that vaccines are poison and that the Earth is flat.

There is one way in which targeted advertising uniquely benefits those advocating for socially unacceptable causes: It is invisible. Racism is widely geographically dispersed, and there are few places where racists — and only racists — gather. This is similar to the problem of selling refrigerators in that potential refrigerator purchasers are geographically dispersed and there are few places where you can buy an ad that will be primarily seen by refrigerator customers. But buying a refrigerator is socially acceptable while being a Nazi is not, so you can buy a billboard or advertise in the newspaper sports section for your refrigerator business, and the only potential downside is that your ad will be seen by a lot of people who don’t want refrigerators, resulting in a lot of wasted expense.

But even if you wanted to advertise your Nazi movement on a billboard or prime-time TV or the sports section, you would struggle to find anyone willing to sell you the space for your ad partly because they disagree with your views and partly because they fear censure (boycott, reputational damage, etc.) from other people who disagree with your views.

Targeted ads solve this problem: On the internet, every ad unit can be different for every person, meaning that you can buy ads that are only shown to people who appear to be Nazis and not to people who hate Nazis. When there’s spillover — when someone who hates racism is shown a racist recruiting ad — there is some fallout; the platform or publication might get an angry public or private denunciation. But the nature of the risk assumed by an online ad buyer is different than the risks to a traditional publisher or billboard owner who might want to run a Nazi ad.

Online ads are placed by algorithms that broker between a diverse ecosystem of self-serve ad platforms that anyone can buy an ad through, so the Nazi ad that slips onto your favorite online publication isn’t seen as their moral failing but rather as a failure in some distant, upstream ad supplier. When a publication gets a complaint about an offensive ad that’s appearing in one of its units, it can take some steps to block that ad, but the Nazi might buy a slightly different ad from a different broker serving the same unit. And in any event, internet users increasingly understand that when they see an ad, it’s likely that the advertiser did not choose that publication and that the publication has no idea who its advertisers are.
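A minimal sketch of that brokering, with wholly invented brokers and bids: the slot goes to whichever upstream bid is highest, and the publisher only ever sees the winning broker, not the advertiser behind it.

```python
# Invented brokers and bids; real programmatic ad systems are far more layered.
brokers = {
    "broker_a": [("RefrigeratorCo", 1.20), ("ShoeCo", 0.80)],
    "broker_b": [("NaziRecruiter", 1.50)],  # slipped in via an upstream supplier
}

def fill_slot(brokers: dict) -> tuple:
    """Return (broker, winning_bid). The advertiser's identity stays upstream."""
    best_broker, best_bid = None, 0.0
    for broker, bids in brokers.items():
        for _advertiser, bid in bids:
            if bid > best_bid:
                best_broker, best_bid = broker, bid
    return best_broker, best_bid

print(fill_slot(brokers))  # ('broker_b', 1.5): the publisher never sees who paid
```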

These layers of indirection between advertisers and publishers serve as moral buffers: Today’s moral consensus is largely that publishers shouldn’t be held responsible for the ads that appear on their pages because they’re not actively choosing to put those ads there. Because of this, Nazis are able to overcome significant barriers to organizing their movement.

Data has a complex relationship with domination. Being able to spy on your customers can alert you to their preferences for your rivals and allow you to head off your rivals at the pass.

More importantly, if you can dominate the information space while also gathering data, then you make other deceptive tactics stronger because it’s harder to break out of the web of deceit you’re spinning. Domination — that is, ultimately becoming a monopoly — and not the data itself is the supercharger that makes every tactic worth pursuing because monopolistic domination deprives your target of an escape route.

If you’re a Nazi who wants to ensure that your prospects primarily see deceptive, confirming information when they search for more, you can improve your odds by seeding the search terms they use through your initial communications. You don’t need to own the top 10 results for “voter suppression” if you can convince your marks to confine their search terms to “voter fraud,” which throws up a very different set of search results.

Surveillance capitalists are like stage mentalists who claim that their extraordinary insights into human behavior let them guess the word that you wrote down and folded up in your pocket but who really use shills, hidden cameras, sleight of hand, and brute-force memorization to amaze you.

Or perhaps they’re more like pick-up artists, the misogynistic cult that promises to help awkward men have sex with women by teaching them “neurolinguistic programming” phrases, body language techniques, and psychological manipulation tactics like “negging” — offering unsolicited negative feedback to women to lower their self-esteem and pique their interest.

Some pick-up artists eventually manage to convince women to go home with them, but it’s not because these men have figured out how to bypass women’s critical faculties. Rather, pick-up artists’ “success” stories are a mix of women who were incapable of giving consent, women who were coerced, women who were intoxicated, self-destructive women, and a few women who were sober and in command of their faculties but who didn’t realize straightaway that they were with terrible men but rectified the error as soon as they could.

Pick-up artists believe they have figured out a secret back door that bypasses women’s critical faculties, but they haven’t. Many of the tactics they deploy, like negging, became the butt of jokes (just like people joke about bad ad targeting), and there’s a good chance that anyone they try these tactics on will immediately recognize them and dismiss the men who use them as irredeemable losers.

Pick-up artists are proof that people can believe they have developed a system of mind control even when it doesn’t work. Pick-up artists simply exploit the fact that one-in-a-million chances can come through for you if you make a million attempts, and then they assume that the other 999,999 times, they simply performed the technique incorrectly and commit themselves to doing better next time. There’s only one group of people who find pick-up artist lore reliably convincing: other would-be pick-up artists whose anxiety and insecurity make them vulnerable to scammers and delusional men who convince them that if they pay for tutelage and follow instructions, then they will someday succeed. Pick-up artists assume they fail to entice women because they are bad at being pick-up artists, not because pick-up artistry is bullshit. Pick-up artists are bad at selling themselves to women, but they’re much better at selling themselves to men who pay to learn the secrets of pick-up artistry.

Department store pioneer John Wanamaker is said to have lamented, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” The fact that Wanamaker thought that only half of his advertising spending was wasted is a tribute to the persuasiveness of advertising executives, who are much better at convincing potential clients to buy their services than they are at convincing the general public to buy their clients’ wares.

What Is Facebook?

Facebook is heralded as the origin of all of our modern plagues, and it’s not hard to see why. Some tech companies want to lock their users in but make their money by monopolizing access to the market for apps for their devices and gouging them on prices rather than by spying on them (like Apple). Some companies don’t care about locking in users because they’ve figured out how to spy on them no matter where they are and what they’re doing and can turn that surveillance into money (Google). Facebook alone among the Western tech giants has built a business based on locking in its users and spying on them all the time.

Facebook’s surveillance regime is really without parallel in the Western world. Though Facebook tries to prevent itself from being visible on the public web, hiding most of what goes on there from people unless they’re logged into Facebook, the company has nevertheless booby-trapped the entire web with surveillance tools in the form of Facebook “Like” buttons that web publishers include on their sites to boost their Facebook profiles. Facebook also makes various libraries and other useful code snippets available to web publishers that act as surveillance tendrils on the sites where they’re used, funneling information about visitors to the site — newspapers, dating sites, message boards — to Facebook.
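The mechanism is mundane: embedding any third-party resource makes your browser tell that third party which page you’re reading (the Referer header) and present whatever cookie it set on an earlier visit. Here’s a toy tracking endpoint that shows the principle; it’s an illustration, not Facebook’s actual code.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class TrackingPixel(BaseHTTPRequestHandler):
    def do_GET(self):
        page = self.headers.get("Referer", "unknown page")   # where you are
        visitor = self.headers.get("Cookie", "first visit")  # who you are
        print(f"visitor {visitor!r} is reading {page!r}")
        self.send_response(200)
        self.send_header("Set-Cookie", "uid=12345")  # tag them for next time
        self.end_headers()

# Uncomment to run: every page embedding http://localhost:8000/pixel reports in.
# HTTPServer(("", 8000), TrackingPixel).serve_forever()
```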

Facebook offers similar tools to app developers, so the apps — games, fart machines, business review services, apps for keeping abreast of your kid’s schooling — you use will send information about your activities to Facebook even if you don’t have a Facebook account and even if you don’t download or use Facebook apps. On top of all that, Facebook buys data from third-party brokers on shopping habits, physical location, use of “loyalty” programs, financial transactions, etc., and cross-references that with the dossiers it develops on activity on Facebook and with apps and the public web.

Though it’s easy to integrate the web with Facebook — linking to news stories and such — Facebook products are generally not available to be integrated back into the web itself. You can embed a tweet in a Facebook post, but if you embed a Facebook post in a tweet, you just get a link back to Facebook and must log in before you can see it. Facebook has used extreme technological and legal countermeasures to prevent rivals from allowing their users to embed Facebook snippets in competing services or to create alternative interfaces to Facebook that merge your Facebook inbox with those of other services that you use.

And Facebook is incredibly popular, with 2.3 billion claimed users (though many believe this figure to be inflated). Facebook has been used to organize genocidal pogroms, racist riots, anti-vaccination movements, flat Earth cults, and the political lives of some of the world’s ugliest, most brutal autocrats. There are some really alarming things going on in the world, and Facebook is implicated in many of them, so it’s easy to conclude that these bad things are the result of Facebook’s mind-control system, which it rents out to anyone with a few bucks to spend.

To understand what role Facebook plays in the formulation and mobilization of antisocial movements, we need to understand the dual nature of Facebook.

Because it has a lot of users and a lot of data about those users, Facebook is a very efficient tool for locating people with hard-to-find traits, the kinds of traits that are widely diffused in the population such that advertisers have historically struggled to find a cost-effective way to reach them. Think back to refrigerators: Most of us only replace our major appliances a few times in our entire lives. If you’re a refrigerator manufacturer or retailer, you have these brief windows in the life of a consumer during which they are pondering a purchase, and you have to somehow reach them. Anyone who’s ever registered a title change after buying a house can attest that appliance manufacturers are incredibly desperate to reach anyone who has even the slenderest chance of being in the market for a new fridge.

Facebook makes finding people shopping for refrigerators a lot easier. It can target ads to people who’ve registered a new home purchase, to people who’ve searched for refrigerator buying advice, to people who have complained about their fridge dying, or any combination thereof. It can even target people who’ve recently bought other kitchen appliances on the theory that someone who’s just replaced their stove and dishwasher might be in a fridge-buying kind of mood. The vast majority of people who are reached by these ads will not be in the market for a new fridge, but — crucially — the percentage of people who are looking for fridges that these ads reach is much larger than it is for any group that might be subjected to traditional, offline targeted refrigerator marketing.

Facebook also makes it a lot easier to find people who have the same rare disease as you, which might have been impossible in earlier eras — the closest fellow sufferer might otherwise be hundreds of miles away. It makes it easier to find people who went to the same high school as you even though decades have passed and your former classmates have all been scattered to the four corners of the Earth.

Facebook also makes it much easier to find people who hold the same rare political beliefs as you. If you’ve always harbored a secret affinity for socialism but never dared utter this aloud lest you be demonized by your neighbors, Facebook can help you discover other people who feel the same way (and it might just demonstrate to you that your affinity is more widespread than you ever suspected). It can make it easier to find people who share your sexual identity. And again, it can help you to understand that what you thought was a shameful secret that affected only you was really a widely shared trait, giving you both comfort and the courage to come out to the people in your life.

All of this presents a dilemma for Facebook: Targeting makes the company’s ads more effective than traditional ads, but it also lets advertisers see just how effective their ads are. While advertisers are pleased to learn that Facebook ads are more effective than ads on systems with less sophisticated targeting, advertisers can also see that in nearly every case, the people who see their ads ignore them. Or, at best, the ads work on a subconscious level, creating nebulous unmeasurables like “brand recognition.” This means that the price per ad is very low in nearly every case.

To make things worse, many Facebook groups spark precious little discussion. Your little-league soccer team, the people with the same rare disease as you, and the people you share a political affinity with may exchange the odd flurry of messages at critical junctures, but on a daily basis, there’s not much to say to your old high school chums or other hockey-card collectors.

With nothing but “organic” discussion, Facebook would not generate enough traffic to sell enough ads to make the money it needs to continually expand by buying up its competitors while returning handsome sums to its investors.

So Facebook has to gin up traffic by sidetracking its own forums: Every time Facebook’s algorithm injects controversial materials — inflammatory political articles, conspiracy theories, outrage stories — into a group, it can hijack that group’s nominal purpose with its desultory discussions and supercharge those discussions by turning them into bitter, unproductive arguments that drag on and on. Facebook is optimized for engagement, not happiness, and it turns out that automated systems are pretty good at figuring out things that people will get angry about.
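A sketch of what “optimized for engagement, not happiness” looks like as a ranking rule. The posts and scores are invented; the point is that nothing in the formula cares whether the result makes anyone miserable.

```python
posts = [
    {"text": "Local bake sale raises $400", "replies": 2, "anger": 0.1},
    {"text": "Inflammatory political take", "replies": 90, "anger": 0.9},
    {"text": "Conspiracy theory 'just asking questions'", "replies": 60, "anger": 0.8},
]

def engagement_score(post: dict) -> float:
    # "replies" stands in for predicted interactions. Angry arguments generate
    # replies, and replies are what's being maximized.
    return post["replies"] * (1 + post["anger"])

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["text"] for p in feed])  # the bake sale comes last
```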

Facebook can modify our behavior but only in a couple of trivial ways. First, it can lock in all your friends and family members so that you check and check and check with Facebook to find out what they are up to; and second, it can make you angry and anxious. It can force you to choose between being interrupted constantly by updates — a process that breaks your concentration and makes it hard to be introspective — and staying in touch with your friends. This is a very limited form of mind control, and it can only really make us miserable, angry, and anxious.

This is why Facebook’s targeting systems — both the ones it shows to advertisers and the ones that let users find people who share their interests — are so next-gen and smooth and easy to use as well as why its message boards have a toolset that seems like it hasn’t changed since the mid-2000s. If Facebook delivered an equally flexible, sophisticated message-reading system to its users, those users could defend themselves against being nonconsensually eyeball-fucked with Donald Trump headlines.

The more time you spend on Facebook, the more ads it gets to show you. The solution to Facebook’s ads only working one in a thousand times is for the company to try to increase how much time you spend on Facebook by a factor of a thousand. Rather than thinking of Facebook as a company that has figured out how to show you exactly the right ad in exactly the right way to get you to do what its advertisers want, think of it as a company that has figured out how to make you slog through an endless torrent of arguments even though they make you miserable, spending so much time on the site that it eventually shows you at least one ad that you respond to.

Monopoly And The Right To The Future Tense

Zuboff and her cohort are particularly alarmed at the extent to which surveillance allows corporations to influence our decisions, taking away something she poetically calls “the right to the future tense” — that is, the right to decide for yourself what you will do in the future.

It’s true that advertising can tip the scales one way or another: When you’re thinking of buying a fridge, a timely fridge ad might end the search on the spot. But Zuboff puts enormous and undue weight on the persuasive power of surveillance-based influence techniques. Most of these don’t work very well, and the ones that do won’t work for very long. The makers of these influence tools are confident they will someday refine them into systems of total control, but they are hardly unbiased observers, and the risks from their dreams coming true are very speculative.

By contrast, Zuboff is rather sanguine about 40 years of lax antitrust practice that has allowed a handful of companies to dominate the internet, ushering in an information age with, as one person on Twitter noted, five giant websites each filled with screenshots of the other four.

However, if we are to be alarmed that we might lose the right to choose for ourselves what our future will hold, then monopoly’s nonspeculative, concrete, here-and-now harms should be front and center in our debate over tech policy.

Start with “digital rights management.” In 1998, Bill Clinton signed the Digital Millennium Copyright Act (DMCA) into law. It’s a complex piece of legislation with many controversial clauses but none more so than Section 1201, the “anti-circumvention” rule.

This is a blanket ban on tampering with systems that restrict access to copyrighted works. The ban is so thoroughgoing that it prohibits removing a copyright lock even when no copyright infringement takes place. This is by design: The activities that the DMCA’s Section 1201 sets out to ban are not copyright infringements; rather, they are legal activities that frustrate manufacturers’ commercial plans.

For example, Section 1201’s first major application was on DVD players as a means of enforcing the region coding built into those devices. DVD-CCA, the body that standardized DVDs and DVD players, divided the world into six regions and specified that DVD players must check each disc to determine which regions it was authorized to be played in. DVD players would have their own corresponding region (a DVD player bought in the U.S. would be region 1 while one bought in India would be region 5). If the player and the disc’s region matched, the player would play the disc; otherwise, it would reject it.

However, watching a lawfully produced disc in a country other than the one where you purchased it is not copyright infringement — it’s the opposite. Copyright law imposes this duty on customers for a movie: You must go into a store, find a licensed disc, and pay the asking price. Do that — and nothing else — and you and copyright are square with one another.

The fact that a movie studio wants to charge Indians less than Americans or release in Australia later than it releases in the U.K. has no bearing on copyright law. Once you lawfully acquire a DVD, it is no copyright infringement to watch it no matter where you happen to be.

So DVD and DVD player manufacturers would not be able to use accusations of abetting copyright infringement to punish manufacturers who made noncompliant players that would play discs from any region or repair shops that modified players to let you watch out-of-region discs or software programmers who created programs to let you do this.

That’s where Section 1201 of the DMCA comes in: By banning tampering with an “access control,” the rule gave manufacturers and rights holders standing to sue competitors who released superior products with lawful features that the market demanded (in this case, region-free players).

This is an odious scam against consumers, but as time went by, Section 1201 grew to encompass a rapidly expanding constellation of devices and services as canny manufacturers realized certain things:
Any device with software in it contains a “copyrighted work” — i.e., the software.
A device can be designed so that reconfiguring the software requires bypassing an “access control for copyrighted works,” which is a potential felony under Section 1201.
Thus, companies can control their customers’ behavior after they take home their purchases by designing products so that all unpermitted uses require modifications that fall afoul of Section 1201.

Section 1201 then becomes a means for manufacturers of all descriptions to force their customers to arrange their affairs to benefit the manufacturers’ shareholders instead of themselves.

This manifests in many ways: from a new generation of inkjet printers that use countermeasures (which cannot be bypassed without legal risk) to block third-party ink, to similar systems in tractors that prevent third-party technicians from swapping in even the manufacturer’s own parts, which the tractor’s control system refuses to recognize until it is supplied with a manufacturer’s unlock code.

Closer to home, Apple’s iPhones use these measures to prevent both third-party service and third-party software installation. This allows Apple, rather than the iPhone’s purchaser, to decide when an iPhone is beyond repair and must be shredded and landfilled. (Apple is notorious for its environmentally catastrophic policy of destroying old electronics rather than permitting them to be cannibalized for parts.) This is a very useful power to wield, especially in light of CEO Tim Cook’s January 2019 warning to investors that the company’s profits are endangered by customers choosing to hold onto their phones for longer rather than replacing them.

Apple’s use of copyright locks also allows it to establish a monopoly over how its customers acquire software for their mobile devices. The App Store’s commercial terms guarantee Apple a share of all revenues generated by the apps sold there, meaning that Apple gets paid when you buy an app from its store and then continues to get paid every time you buy something using that app. This comes out of the bottom line of software developers, who must either charge more or accept lower profits for their products.
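The arithmetic, assuming an illustrative 30% commission (the text above only says “a share of all revenues”):

```python
commission = 0.30   # assumed for illustration, not a figure from the text
sticker_price = 9.99

developer_take = sticker_price * (1 - commission)        # about 6.99: lower profit
price_to_net_sticker = sticker_price / (1 - commission)  # about 14.27: higher price
print(f"{developer_take:.2f} kept, or charge {price_to_net_sticker:.2f} to break even")
```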

Crucially, Apple’s use of copyright locks gives it the power to make editorial decisions about which apps you may and may not install on your own device. Apple has used this power to reject dictionaries for containing obscene words; to limit political speech, especially from apps that make sensitive political commentary such as an app that notifies you every time a U.S. drone kills someone somewhere in the world; and to object to a game that commented on the Israel-Palestine conflict.

Apple often justifies monopoly power over software installation in the name of security, arguing that its vetting of apps for its store means that it can guard its users against apps that contain surveillance code. But this cuts both ways. In China, the government ordered Apple to prohibit the sale of privacy tools like VPNs with the exception of VPNs that had deliberately introduced flaws designed to let the Chinese state eavesdrop on users. Because Apple uses technological countermeasures — with legal backstops — to block customers from installing unauthorized apps, Chinese iPhone owners cannot readily (or legally) acquire VPNs that would protect them from Chinese state snooping.

Zuboff calls surveillance capitalism a “rogue capitalism.” Theoreticians of capitalism claim that its virtue is that it aggregates information in the form of consumers’ decisions, producing efficient markets. Surveillance capitalism’s supposed power to rob its victims of their free will through computationally supercharged influence campaigns means that our markets no longer aggregate customers’ decisions because we customers no longer decide — we are given orders by surveillance capitalism’s mind-control rays.

If our concern is that markets cease to function when consumers can no longer make choices, then copyright locks should concern us at least as much as influence campaigns. An influence campaign might nudge you to buy a certain brand of phone; but the copyright locks on that phone absolutely determine where you get it serviced, which apps can run on it, and when you have to throw it away rather than fixing it.
Search Order And The Right To The Future Tense

Markets are posed as a kind of magic: By discovering otherwise hidden information conveyed by the free choices of consumers, those consumers’ local knowledge is integrated into a self-correcting system that makes efficient allocations—more efficient than any computer could calculate. But monopolies are incompatible with that notion. When you only have one app store, the owner of the store — not the consumer — decides on the range of choices. As Boss Tweed once said, “I don’t care who does the electing, so long as I get to do the nominating.” A monopolized market is an election whose candidates are chosen by the monopolist.

This ballot rigging is made more pernicious by the existence of monopolies over search order. Google’s search market share is about 90%. When Google’s ranking algorithm puts a result for a popular search term in its top 10, that helps determine the behavior of millions of people. If Google’s answer to “Are vaccines dangerous?” is a page that rebuts anti-vax conspiracy theories, then a sizable portion of the public will learn that vaccines are safe. If, on the other hand, Google sends those people to a site affirming the anti-vax conspiracies, a sizable portion of those millions will come away convinced that vaccines are dangerous.

Google’s algorithm is often tricked into serving disinformation as a prominent search result. But in these cases, Google isn’t persuading people to change their minds; it’s just presenting something untrue as fact when the user has no cause to doubt it.

This is true whether the search is for “Are vaccines dangerous?” or “best restaurants near me.” Most users will never look past the first page of search results, and when the overwhelming majority of people all use the same search engine, the ranking algorithm deployed by that search engine will determine myriad outcomes (whether to adopt a child, whether to have cancer surgery, where to eat dinner, where to move, where to apply for a job) to a degree that vastly outstrips any behavioral outcomes dictated by algorithmic persuasion techniques.

Many of the questions we ask search engines have no empirically correct answers: “Where should I eat dinner?” is not an objective question. Even questions that do have correct answers (“Are vaccines dangerous?”) don’t have one empirically superior source for that answer. Many pages affirm the safety of vaccines, so which one goes first? Under conditions of competition, consumers can choose from many search engines and stick with the one whose algorithmic judgment suits them best, but under conditions of monopoly, we all get our answers from the same place.

Google’s search dominance isn’t a matter of pure merit: The company has leveraged many tactics that would have been prohibited under classical, pre-Ronald-Reagan antitrust enforcement standards to attain its dominance. After all, this is a company that has developed two major products: a really good search engine and a pretty good Hotmail clone. Every other major success it’s had — Android, YouTube, Google Maps, etc. — has come through an acquisition of a nascent competitor. Many of the company’s key divisions, such as the advertising technology of DoubleClick, violate the historical antitrust principle of structural separation, which forbade firms from owning subsidiaries that competed with their customers. Railroads, for example, were barred from owning freight companies that competed with the shippers whose freight they carried.

If we’re worried about giant companies subverting markets by stripping consumers of their ability to make free choices, then vigorous antitrust enforcement seems like an excellent remedy. If we’d denied Google the right to effect its many mergers, we would also have probably denied it its total search dominance. Without that dominance, the pet theories, biases, errors (and good judgment, too) of Google search engineers and product managers would not have such an outsized effect on consumer choice.

This goes for many other companies. Amazon, a classic surveillance capitalist, is obviously the dominant tool for searching Amazon — though many people find their way to Amazon through Google searches and Facebook posts — and obviously, Amazon controls Amazon search. That means that Amazon’s own self-serving editorial choices — like promoting its own house brands over rival goods from its sellers as well as its own pet theories, biases, and errors — determine much of what we buy on Amazon. And since Amazon is the dominant e-commerce retailer outside of China and since it attained that dominance by buying up both large rivals and nascent competitors in defiance of historical antitrust rules, we can blame the monopoly for stripping consumers of their right to the future tense and the ability to shape markets by making informed choices.

Not every monopolist is a surveillance capitalist, but that doesn’t mean they’re not able to shape consumer choices in wide-ranging ways. Zuboff lauds Apple for its App Store and iTunes Store, insisting that adding price tags to the features on its platforms has been the secret to resisting surveillance and thus creating markets. But Apple is the only retailer allowed to sell on its platforms, and it’s the second-largest mobile device vendor in the world. The independent software vendors that sell through Apple’s marketplace accuse the company of the same surveillance sins as Amazon and other big retailers: spying on its customers to find lucrative new products to launch, effectively using independent software vendors as free-market researchers, then forcing them out of any markets they discover.

Because of its use of copyright locks, Apple’s mobile customers are not legally allowed to switch to a rival app retailer on their iPhones, even if they would prefer one. Apple, obviously, is the only entity that gets to decide how it ranks the results of search queries in its stores. These decisions ensure that some apps are often installed (because they appear on page one) and others are never installed (because they appear on page one million). Apple’s search-ranking design decisions have a vastly more significant effect on consumer behaviors than influence campaigns delivered by surveillance capitalism’s ad-serving bots.
Monopolists Can Afford Sleeping Pills For Watchdogs

Only the most extreme market ideologues think that markets can self-regulate without state oversight. Markets need watchdogs — regulators, lawmakers, and other elements of democratic control — to keep them honest. When these watchdogs sleep on the job, then markets cease to aggregate consumer choices because those choices are constrained by illegitimate and deceptive activities that companies are able to get away with because no one is holding them to account.

But this kind of regulatory capture doesn’t come cheap. In competitive sectors, where rivals are constantly eroding one another’s margins, individual firms lack the surplus capital to effectively lobby for laws and regulations that serve their ends.

Many of the harms of surveillance capitalism are the result of weak or nonexistent regulation. Those regulatory vacuums spring from the power of monopolists to resist stronger regulation and to tailor what regulation exists to permit their existing businesses.

Here’s an example: When firms over-collect and over-retain our data, they are at increased risk of suffering a breach — you can’t leak data you never collected, and once you delete all copies of that data, you can no longer leak it. For more than a decade, we’ve lived through an endless parade of ever-worsening data breaches, each one uniquely horrible in the scale of data breached and the sensitivity of that data.

But still, firms continue to over-collect and over-retain our data for three reasons:

1. They are locked in the aforementioned limbic arms race with our capacity to shore up our attentional defense systems to resist their new persuasion techniques. They’re also locked in an arms race with their competitors to find new ways to target people for sales pitches. As soon as they discover a soft spot in our attentional defenses (a counterintuitive, unobvious way to target potential refrigerator buyers), the public begins to wise up to the tactic, and their competitors leap on it, hastening the day when all potential refrigerator buyers have been inured to the pitch.

2. They believe the surveillance capitalism story. Data is cheap to aggregate and store, and both proponents and opponents of surveillance capitalism have assured managers and product designers that if you collect enough data, you will be able to perform sorcerous acts of mind control, thus supercharging your sales. Even if you never figure out how to profit from the data, someone else will eventually offer to buy it from you to give it a try. This is the hallmark of all economic bubbles: acquiring an asset on the assumption that someone else will buy it from you for more than you paid for it, often to sell to someone else at an even greater price.

3. The penalties for leaking data are negligible. Most countries limit these penalties to actual damages, meaning that consumers who’ve had their data breached have to show actual monetary harms in order to recover anything. In 2014, Home Depot disclosed that it had lost credit-card data for 53 million of its customers, but it settled the matter by paying those customers about $0.34 each — and a third of that $0.34 wasn’t even paid in cash. It took the form of a credit to procure a largely ineffectual credit-monitoring service.

But the harms from breaches are much more extensive than these actual-damages rules capture. Identity thieves and fraudsters are wily and endlessly inventive. All the vast breaches of our century are being continuously recombined, the data sets merged and mined for new ways to victimize the people whose data was present in them. Any reasonable, evidence-based theory of deterrence and compensation for breaches would not confine damages to actual damages but rather would allow users to claim these future harms.

However, even the most ambitious privacy rules, such as the EU General Data Protection Regulation, fall far short of capturing the negative externalities of the platforms’ negligent over-collection and over-retention, and what penalties they do provide are not aggressively pursued by regulators.

This tolerance of — or indifference to — data over-collection and over-retention can be ascribed in part to the sheer lobbying muscle of the platforms. They are so profitable that they can handily afford to divert gigantic sums to fight any real change — that is, change that would force them to internalize the costs of their surveillance activities.

And then there’s state surveillance, which the surveillance capitalism story dismisses as a relic of another era when the big worry was being jailed for your dissident speech, not having your free will stripped away with machine learning.

But state surveillance and private surveillance are intimately related. As we saw when Apple was conscripted by the Chinese government as a vital collaborator in state surveillance, the only really affordable and tractable way to conduct mass surveillance on the scale practiced by modern states — both “free” and autocratic states — is to suborn commercial services.

Whether it’s Google being used as a location tracking tool by local law enforcement across the U.S. or the use of social media tracking by the Department of Homeland Security to build dossiers on participants in protests against Immigration and Customs Enforcement’s family separation practices, any hard limits on surveillance capitalism would hamstring the state’s own surveillance capability. Without Palantir, Amazon, Google, and other major tech contractors, U.S. cops would not be able to spy on Black people, ICE would not be able to manage the caging of children at the U.S. border, and state welfare systems would not be able to purge their rolls by dressing up cruelty as empiricism and claiming that poor and vulnerable people are ineligible for assistance. At least some of the states’ unwillingness to take meaningful action to curb surveillance should be attributed to this symbiotic relationship. There is no mass state surveillance without mass commercial surveillance.

Monopolism is key to the project of mass state surveillance. It’s true that smaller tech firms are apt to be less well-defended than Big Tech, whose security experts are drawn from the tops of their field and who are given enormous resources to secure and monitor their systems against intruders. But smaller firms also have less to protect: fewer users, whose data is fragmented across more systems, each of which has to be suborned one at a time by state actors.

A concentrated tech sector that works with authorities is a much more powerful ally in the project of mass state surveillance than a fragmented one composed of smaller actors. The U.S. tech sector is small enough that all of its top executives fit around a single boardroom table in Trump Tower in late 2016, shortly after Trump’s election. Most of its biggest players bid to win JEDI, the Pentagon’s $10 billion Joint Enterprise Defense Infrastructure cloud contract. Like other highly concentrated industries, Big Tech rotates its key employees in and out of government service, sending them to serve in the Department of Defense and the White House, then hiring ex-Pentagon and ex-DOD top staffers and officers to work in their own government relations departments.

They can even make a good case for doing this: After all, when there are only four or five big companies in an industry, everyone qualified to regulate those companies has served as an executive in at least a couple of them — because, likewise, when there are only five companies in an industry, everyone qualified for a senior role at any of them is by definition working at one of the other ones.

Industries that are competitive are fragmented — composed of companies that are at each other’s throats all the time and eroding one another’s margins in bids to steal their best customers. This leaves them with much more limited capital to use to lobby for favorable rules and a much harder job of getting everyone to agree to pool their resources to benefit the industry as a whole.

Surveillance combined with machine learning is supposed to be an existential crisis, a species-defining moment at which our free will is just a few more advances in the field from being stripped away. I am skeptical of this claim, but I do think that tech poses an existential threat to our society and possibly our species.

But that threat grows out of monopoly.

One of the consequences of tech’s regulatory capture is that it can shift liability for poor security decisions onto its customers and the wider society. It is absolutely normal in tech for companies to obfuscate the workings of their products, to make them deliberately hard to understand, and to threaten security researchers who seek to independently audit those products.

IT is the only field in which this is practiced: No one builds a bridge or a hospital and keeps the composition of the steel or the equations used to calculate load stresses a secret. It is a frankly bizarre practice that leads, time and again, to grotesque security defects on farcical scales, with whole classes of devices being revealed as vulnerable long after they are deployed in the field and put into sensitive places.

The monopoly power that keeps any meaningful consequences for breaches at bay means that tech companies continue to build terrible products that are insecure by design and that end up integrated into our lives, in possession of our data, and connected to our physical world. For years, Boeing has struggled with the aftermath of a series of bad technology decisions that made its 737 fleet a global pariah, a rare instance in which bad tech decisions have been seriously punished in the market.

These bad security decisions are compounded yet again by the use of copyright locks to enforce business-model decisions against consumers. Recall that these locks have become the go-to means for shaping consumer behavior, making it technically impossible to use third-party ink, insulin, apps, or service depots in connection with your lawfully acquired property.

Recall also that these copyright locks are backstopped by legislation (such as Section 1201 of the DMCA or Article 6 of the 2001 EU Copyright Directive) that bans tampering with (“circumventing”) them, and these statutes have been used to threaten security researchers who make disclosures about vulnerabilities without permission from manufacturers.

This amounts to a manufacturer’s veto over safety warnings and criticism. While this is far from the legislative intent of the DMCA and its sister statutes around the world, Congress has not intervened to clarify the statute, nor will it: to do so would run counter to the interests of powerful, large firms whose lobbying muscle is unstoppable.

Copyright locks are a double whammy: They create bad security decisions that can’t be freely investigated or discussed. If markets are supposed to be machines for aggregating information (and if surveillance capitalism’s notional mind-control rays are what make it a “rogue capitalism” because it denies consumers the power to make decisions), then a program of legally enforced ignorance of the risks of products makes monopolism even more of a “rogue capitalism” than surveillance capitalism’s influence campaigns.

And unlike mind-control rays, enforced silence over security is an immediate, documented problem, and it does constitute an existential threat to our civilization and possibly our species. The proliferation of insecure devices — especially devices that spy on us and especially when those devices also can manipulate the physical world by, say, steering your car or flipping a breaker at a power station — is a kind of technology debt.

In software design, “technology debt” refers to old, baked-in decisions that turn out to be bad ones in hindsight. Perhaps a long-ago developer decided to incorporate a networking protocol made by a vendor that has since stopped supporting it. But everything in the product still relies on that superannuated protocol, and so, with each revision, the product team has to work around this obsolete core, adding compatibility layers, surrounding it with security checks that try to shore up its defenses, and so on. These Band-Aid measures compound the debt because every subsequent revision has to make allowances for them, too, like interest mounting on a predatory subprime loan. And like a subprime loan, the interest mounts faster than you can hope to pay it off: The product team has to put so much energy into maintaining this complex, brittle system that they don’t have any time left over to refactor the product from the ground up and “pay off the debt” once and for all.
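As a toy illustration of how that debt compounds, consider a model (with entirely arbitrary numbers, offered only as an analogy) in which every release inflates the pile of workarounds faster than the team’s fixed refactoring capacity can shrink it:

    # Toy model of technology debt, with arbitrary made-up numbers.
    # Each release must accommodate all the old kludges (debt grows 20%),
    # while the team has a fixed capacity to pay some of it down.

    debt = 100.0              # accumulated workarounds, in arbitrary units
    for release in range(1, 11):
        debt *= 1.20          # interest: every revision compounds the kludges
        debt -= 15            # principal payment: fixed refactoring capacity
        print(f"release {release:2d}: debt = {debt:6.1f}")

Once the “interest” on the debt outgrows the team’s repayment capacity (here, once the debt exceeds 75 units), the balance grows without bound, which is the runaway dynamic the loan metaphor describes.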

Typically, technology debt results in a technological bankruptcy: The product gets so brittle and unsustainable that it fails catastrophically. Think of the antiquated COBOL-based banking and accounting systems that fell over at the start of the pandemic emergency when confronted with surges of unemployment claims. Sometimes that ends the product; sometimes it takes the company down with it. Being caught in the default of a technology debt is scary and traumatic, just like losing your house due to bankruptcy is scary and traumatic.

But the technology debt created by copyright locks isn’t individual debt; it’s systemic. Everyone in the world is exposed to this over-leverage, as was the case with the 2008 financial crisis. When that debt comes due — when we face a cascade of security breaches that threaten global shipping and logistics, the food supply, pharmaceutical production pipelines, emergency communications, and other critical systems that are accumulating technology debt in part due to the presence of deliberately insecure and deliberately unauditable copyright locks — it will indeed pose an existential risk.
Privacy And Monopoly

Many tech companies are gripped by an orthodoxy that holds that if they just gather enough data on enough of our activities, everything else is possible — the mind control and endless profits. This is an unfalsifiable hypothesis: If data gives a tech company even a tiny improvement in behavior prediction and modification, the company declares that it has taken the first step toward global domination with no end in sight. If a company fails to attain any improvements from gathering and analyzing data, it declares success to be just around the corner, attainable once more data is in hand.

Surveillance tech is far from the first industry to embrace a nonsensical, self-serving belief that harms the rest of the world, and it is not the first industry to profit handsomely from such a delusion. Long before hedge-fund managers were claiming (falsely) that they could beat the S&P 500, there were plenty of other “respectable” industries whose practitioners have been revealed, in hindsight, as quacks. From the makers of radium suppositories (a real thing!) to the cruel sociopaths who claimed they could “cure” gay people, history is littered with the formerly respectable titans of discredited industries.

This is not to say that there’s nothing wrong with Big Tech and its ideological addiction to data. While surveillance’s benefits are mostly overstated, its harms are, if anything, understated.

There’s real irony here. The belief in surveillance capitalism as a “rogue capitalism” is driven by the belief that markets wouldn’t tolerate firms that are gripped by false beliefs. An oil company that has false beliefs about where the oil is will, after all, eventually go broke digging dry wells.

But monopolists get to do terrible things for a long time before they pay the price. Think of how concentration in the finance sector allowed the subprime crisis to fester as bond-rating agencies, regulators, investors, and critics all fell under the sway of a false belief that complex mathematics could construct “fully hedged” debt instruments that could not possibly default. A small bank that engaged in this kind of malfeasance would simply have gone broke: it could neither outrun the inevitable crisis nor grow so big that it averted the reckoning altogether. But large banks were able to continue to attract investors, and when they finally did come a-cropper, the world’s governments bailed them out. The worst offenders of the subprime crisis are bigger than they were in 2008, bringing home more profits and paying their execs even larger sums.

Big Tech is able to practice surveillance not just because it is tech but because it is big. The reason every web publisher embeds a Facebook “Like” button is that Facebook dominates the internet’s social media referrals — and every one of those “Like” buttons spies on everyone who lands on a page that contains them (see also: Google Analytics embeds, Twitter buttons, etc.).

The reason the world’s governments have been slow to create meaningful penalties for privacy breaches is that Big Tech’s concentration produces huge profits that can be used to lobby against those penalties — and Big Tech’s concentration means that the companies involved are able to arrive at a unified negotiating position that supercharges the lobbying.

The reason that the smartest engineers in the world want to work for Big Tech is that Big Tech commands the lion’s share of tech industry jobs.

The reason people who are aghast at Facebook’s and Google’s and Amazon’s data-handling practices continue to use these services is that all their friends are on Facebook; Google dominates search; and Amazon has put all the local merchants out of business.

Competition would weaken the companies’ lobbying muscle by reducing their profits and pitting them against each other in regulatory forums. It would give customers other places to go to get their online services. It would make the companies small enough to regulate and pave the way to meaningful penalties for breaches. It would let engineers with ideas that challenged the surveillance orthodoxy raise capital to compete with the incumbents. It would give web publishers multiple ways to reach audiences and make the case against Facebook and Google and Twitter embeds.

In other words, while surveillance doesn’t cause monopolies, monopolies certainly abet surveillance.
Ronald Reagan, Pioneer Of Tech Monopolism

Technology exceptionalism is a sin, whether it’s practiced by technology’s blind proponents or by its critics. Both of these camps are prone to explaining away monopolistic concentration by citing some special characteristic of the tech industry, like network effects or first-mover advantage. The only real difference between these two groups is that the tech apologists say monopoly is inevitable so we should just let tech get away with its abuses while competition regulators in the U.S. and the EU say monopoly is inevitable so we should punish tech for its abuses but not try to break up the monopolies.

To understand how tech became so monopolistic, it’s useful to look at the dawn of the consumer tech industry: 1979, the year the Apple II Plus launched and became the first successful home computer. That also happens to be the year that Ronald Reagan hit the campaign trail for the 1980 presidential race — a race he won, leading to a radical shift in the way that antitrust concerns are handled in America. Reagan’s cohort of politicians — including Margaret Thatcher in the U.K., Brian Mulroney in Canada, Helmut Kohl in Germany, and Augusto Pinochet in Chile — went on to enact similar reforms that eventually spread around the world.

Antitrust’s story began nearly a century before all that with laws like the Sherman Act, which took aim at monopolists on the grounds that monopolies were bad in and of themselves — squeezing out competitors, creating “diseconomies of scale” (when a company is so big that its constituent parts go awry and it is seemingly helpless to address the problems), and capturing their regulators to such a degree that they can get away with a host of evils.

Then came a fabulist named Robert Bork, a former solicitor general whom Reagan appointed to the powerful U.S. Court of Appeals for the D.C. Circuit and who had created an alternate legislative history of the Sherman Act and its successors out of whole cloth. Bork insisted that these statutes were never targeted at monopolies (despite a wealth of evidence to the contrary, including the transcribed speeches of the acts’ authors) but, rather, that they were intended to prevent “consumer harm” — in the form of higher prices.

Bork was a crank, but he was a crank with a theory that rich people really liked. Monopolies are a great way to make rich people richer by allowing them to receive “monopoly rents” (that is, bigger profits) and capture regulators, leading to a weaker, more favorable regulatory environment with fewer protections for customers, suppliers, the environment, and workers.

Bork’s theories were especially palatable to the same power brokers who backed Reagan, and Reagan’s Department of Justice and other agencies began to incorporate Bork’s antitrust doctrine into their enforcement decisions (Reagan even put Bork up for a Supreme Court seat, but Bork flunked the Senate confirmation hearing so badly that, decades later, D.C. insiders use the term “borked” to refer to any catastrophically bad political performance).

Little by little, Bork’s theories entered the mainstream, and their backers began to infiltrate the legal education field, even putting on junkets where members of the judiciary were treated to lavish meals, fun outdoor activities, and seminars where they were indoctrinated into the consumer harm theory of antitrust. The more Bork’s theories took hold, the more money the monopolists were making — and the more surplus capital they had at their disposal to lobby for even more Borkian antitrust influence campaigns.

The history of Bork’s antitrust theories is a really good example of the kind of covertly engineered shifts in public opinion that Zuboff warns us against, where fringe ideas become mainstream orthodoxy. But Bork didn’t change the world overnight. He played a very long game, for over a generation, and he had a tailwind because the same forces that backed oligarchic antitrust theories also backed many other oligarchic shifts in public opinion. For example, the idea that taxation is theft, that wealth is a sign of virtue, and so on — all of these theories meshed to form a coherent ideology that elevated inequality to a virtue.

Today, many fear that machine learning allows surveillance capitalism to sell “Bork-as-a-Service,” at internet speeds, so that you can contract a machine-learning company to engineer rapid shifts in public sentiment without needing the capital to sustain a multipronged, multigenerational project working at the local, state, national, and global levels in business, law, and philosophy. I do not believe that such a project is plausible, though I agree that this is basically what the platforms claim to be selling. They’re just lying about it. Big Tech lies all the time, including in their sales literature.

The idea that tech forms “natural monopolies” (monopolies that are the inevitable result of the realities of an industry, such as the monopolies that accrue to the first company to run long-haul phone lines or rail lines) is belied by tech’s own history: In the absence of anti-competitive tactics, Google was able to unseat AltaVista and Yahoo; Facebook was able to head off Myspace. There are some advantages to gathering mountains of data, but those mountains of data also have disadvantages: liability (from leaking), diminishing returns (from old data), and institutional inertia (big companies, like science, progress one funeral at a time).

Indeed, the birth of the web saw a mass-extinction event for the existing giant, wildly profitable proprietary technologies that had capital, network effects, and walls and moats surrounding their businesses. The web showed that when a new industry is built around a protocol, rather than a product, the combined might of everyone who uses the protocol to reach their customers or users or communities outweighs even the most massive products. CompuServe, AOL, MSN, and a host of other proprietary walled gardens learned this lesson the hard way: Each believed it could stay separate from the web, offering “curation” and a guarantee of consistency and quality instead of the chaos of an open system. Each was wrong and ended up being absorbed into the public web.

Yes, tech is heavily monopolized and is now closely associated with industry concentration, but this has more to do with timing than with any intrinsically monopolistic tendency. Tech was born at the moment that antitrust enforcement was being dismantled, and tech fell into exactly the same pathologies that antitrust was supposed to guard against. To a first approximation, it is reasonable to assume that tech’s monopolies are the result of a lack of anti-monopoly action and not the much-touted unique characteristics of tech, such as network effects, first-mover advantage, and so on.

In support of this thesis, I offer the concentration that every other industry has undergone over the same period. From professional wrestling to consumer packaged goods to commercial property leasing to banking to sea freight to oil to record labels to newspaper ownership to theme parks, every industry has undergone a massive shift toward concentration. There are no obvious network effects or first-mover advantages at play in these industries. However, in every case, these industries attained their concentrated status through tactics that were prohibited before Bork’s triumph: merging with major competitors, buying out innovative new market entrants, horizontal and vertical integration, and a suite of anti-competitive tactics that were once illegal but no longer are.

Again: When you change the laws intended to prevent monopolies and then monopolies form in exactly the way the law was supposed to prevent, it is reasonable to suppose that these facts are related. Tech’s concentration can be readily explained without recourse to radical theories of network effects — but only if you’re willing to indict unregulated markets as tending toward monopoly. Just as a lifelong smoker can give you a hundred reasons why their smoking didn’t cause their cancer (“It was the environmental toxins”), true believers in unregulated markets have a whole suite of unconvincing explanations for monopoly in tech that leave capitalism intact.
Steering With The Windshield Wipers

It’s been 40 years since Bork’s project to rehabilitate monopolies achieved liftoff, and that is a generation and a half, which is plenty of time to take a common idea and make it seem outlandish and vice versa. Before the 1940s, affluent Americans dressed their baby boys in pink while baby girls wore blue (a “delicate and dainty” color). While gendered colors are obviously totally arbitrary, many still greet this news with amazement and find it hard to imagine a time when pink connoted masculinity.

After 40 years of studiously ignoring antitrust analysis and enforcement, it’s not surprising that we’ve all but forgotten that antitrust exists, that in living memory, growth through mergers and acquisitions was largely prohibited under law, and that market-cornering strategies like vertical integration could land a company in court.

Antitrust is a market society’s steering wheel, the control of first resort to keep would-be masters of the universe in their lanes. But Bork and his cohort ripped out our steering wheel 40 years ago. The car is still barreling along, and so we’re yanking as hard as we can on all the other controls in the car as well as desperately flapping the doors and rolling the windows up and down in the hopes that one of these other controls can be repurposed to let us choose where we’re heading before we careen off a cliff.

It’s like a 1960s science-fiction plot come to life: People stuck in a “generation ship,” plying its way across the stars, a ship once piloted by their ancestors; and now, after a great cataclysm, the ship’s crew have forgotten that they’re in a ship at all and no longer remember where the control room is. Adrift, the ship is racing toward its extinction, and unless we can seize the controls and execute emergency course correction, we’re all headed for a fiery death in the heart of a sun.
Surveillance Still Matters

None of this is to minimize the problems with surveillance. Surveillance matters, and Big Tech’s use of surveillance is an existential risk to our species, but that’s not because surveillance and machine learning rob us of our free will.

Surveillance has become much more efficient thanks to Big Tech. In 1989, the Stasi — the East German secret police — had the whole country under surveillance, a massive undertaking that recruited one out of every 60 people to serve as an informant or intelligence operative.

Today, we know that the NSA is spying on a significant fraction of the entire world’s population, and its ratio of surveillance operatives to the surveilled is more like 1:10,000 (that’s probably on the low side since it assumes that every American with top-secret clearance is working for the NSA on this project — we don’t know how many of those cleared people are involved in NSA spying, but it’s definitely not all of them).

How did the ratio of surveillance operatives to the surveilled stretch from 1:60 to 1:10,000 in less than 30 years? It’s thanks to Big Tech. Our devices and services gather most of the data that the NSA mines for its surveillance project. We pay for these devices and the services they connect to, and then we painstakingly perform the data-entry tasks associated with logging facts about our lives, opinions, and preferences. This mass surveillance project has been largely useless for fighting terrorism: The NSA can only point to a single minor success story in which it used its data collection program to foil an attempt by a U.S. resident to wire a few thousand dollars to an overseas terror group. It’s ineffective for much the same reason that commercial surveillance projects are largely ineffective at targeting advertising: The people who want to commit acts of terror, like people who want to buy a refrigerator, are extremely rare. If you’re trying to detect a phenomenon whose base rate is one in a million with an instrument whose accuracy is only 99%, then every true positive will come at the cost of 9,999 false positives.

Let me explain that again: If one in a million people is a terrorist, then there will only be about one terrorist in a random sample of one million people. If your test for detecting terrorists is 99% accurate, it will flag about 10,000 people in your million-person sample as terrorists (1% of one million is 10,000), but only one of them is actually a terrorist. For every true positive, you’ll get 9,999 false positives.
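Here is that arithmetic as a minimal sketch, using the essay’s simplified numbers (a one-in-a-million base rate, a “99% accurate” test read as flagging 1% of everyone screened, and the assumption that the one real terrorist is among those flagged):

    # Base-rate arithmetic from the paragraphs above, using the essay's
    # simplified numbers. "99% accurate" is read as: the test wrongly
    # flags 1% of the people screened, and the one real terrorist is caught.

    population = 1_000_000
    real_terrorists = 1                      # base rate: one in a million
    flag_rate = 0.01                         # the flip side of 99% accuracy

    flagged = round(population * flag_rate)  # 10,000 people flagged
    true_positives = real_terrorists         # the real terrorist, by assumption
    false_positives = flagged - true_positives   # 9,999 innocent people

    print(f"flagged: {flagged:,}")                                    # 10,000
    print(f"false positives per true positive: {false_positives:,}")  # 9,999
    print(f"share of flags that are real: {true_positives / flagged:.4%}")  # 0.0100%

Even on these generous assumptions, 9,999 of every 10,000 people the system accuses are innocent.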

In reality, the accuracy of algorithmic terrorism detection falls far short of the 99% mark, as does refrigerator ad targeting. The difference is that being falsely accused of wanting to buy a fridge is a minor nuisance while being falsely accused of planning a terror attack can destroy your life and the lives of everyone you love.

Mass state surveillance is only feasible because of surveillance capitalism and its extremely low-yield ad-targeting systems, which require a constant feed of personal data to remain barely viable. Surveillance capitalism’s primary failure mode is mistargeted ads while mass state surveillance’s primary failure mode is grotesque human rights abuses, tending toward totalitarianism.

State surveillance is no mere parasite on Big Tech, sucking up its data and giving nothing in return. In truth, the two are symbiotes: Big Tech sucks up our data for spy agencies, and spy agencies ensure that governments don’t limit Big Tech’s activities so severely that it would no longer serve the spy agencies’ needs. There is no firm distinction between state surveillance and surveillance capitalism; they are dependent on one another.

To see this at work today, look no further than Amazon’s home surveillance device, the Ring doorbell, and its associated app, Neighbors. Ring — a product that Amazon acquired and did not develop in house — makes a camera-enabled doorbell that streams footage from your front door to your mobile device. The Neighbors app allows you to form a neighborhood-wide surveillance grid with your fellow Ring owners through which you can share clips of “suspicious characters.” If you’re thinking that this sounds like a recipe for letting curtain-twitching racists supercharge their suspicions of people with brown skin who walk down their blocks, you’re right. Ring has become a de facto, off-the-books arm of the police without any of the pesky oversight or rules.

In mid-2019, a series of public records requests revealed that Amazon had struck confidential deals with more than 400 local law enforcement agencies through which the agencies would promote Ring and Neighbors and in exchange get access to footage from Ring cameras. In theory, cops would need to request this footage through Amazon (and internal documents reveal that Amazon devotes substantial resources to coaching cops on how to spin a convincing story when doing so), but in practice, when a Ring customer turns down a police request, Amazon only requires the agency to formally request the footage from the company, which it will then produce.

Ring and law enforcement have found many ways to intertwine their activities. Ring strikes secret deals to acquire real-time access to 911 dispatch and then streams alarming crime reports to Neighbors users, which serve as convincers for anyone who’s contemplating a surveillance doorbell but isn’t sure whether their neighborhood is dangerous enough to warrant it.

The more the cops buzz-market the surveillance capitalist Ring, the more surveillance capability the state gets. Cops who rely on private entities for law-enforcement roles then brief against any controls on the deployment of that technology while the companies return the favor by lobbying against rules requiring public oversight of police surveillance technology. The more the cops rely on Ring and Neighbors, the harder it will be to pass laws to curb them. The fewer laws there are against them, the more the cops will rely on them.
Dignity And Sanctuary

But even if we could exercise democratic control over our states and force them to stop raiding surveillance capitalism’s reservoirs of behavioral data, surveillance capitalism would still harm us.

This is an area where Zuboff shines. Her chapter on “sanctuary” — the feeling of being unobserved — is a beautiful hymn to introspection, calmness, mindfulness, and tranquility.

When you are watched, something changes. Anyone who has ever raised a child knows this. You might look up from your book (or more realistically, from your phone) and catch your child in a moment of profound realization and growth, a moment where they are learning something that is right at the edge of their abilities, requiring their entire ferocious concentration. For a moment, you’re transfixed, watching that rare and beautiful moment of focus playing out before your eyes, and then your child looks up and sees you seeing them, and the moment collapses. To grow, you need to be and expose your authentic self, and in that moment, you are vulnerable like a hermit crab scuttling from one shell to the next. The tender, unprotected tissues you expose in that moment are too delicate to reveal in the presence of another, even someone you trust as implicitly as a child trusts their parent.

In the digital age, our authentic selves are inextricably tied to our digital lives. Your search history is a running ledger of the questions you’ve pondered. Your location history is a record of the places you’ve sought out and the experiences you’ve had there. Your social graph reveals the different facets of your identity, the people you’ve connected with.

To be observed in these activities is to lose the sanctuary of your authentic self.

There’s another way in which surveillance capitalism robs us of our capacity to be our authentic selves: by making us anxious. Surveillance capitalism isn’t really a mind-control ray, but you don’t need a mind-control ray to make someone anxious. After all, another word for anxiety is agitation, and to make someone experience agitation, you need merely to agitate them. To poke them and prod them and beep at them and buzz at them and bombard them on an intermittent schedule that is just random enough that our limbic systems never quite become inured to it.

Our devices and services are “general purpose” in that they can connect anything or anyone to anything or anyone else and that they can run any program that can be written. This means that the distraction rectangles in our pockets hold our most precious moments with our most beloved people and their most urgent or time-sensitive communications (from “running late can you get the kid?” to “doctor gave me bad news and I need to talk to you RIGHT NOW”) as well as ads for refrigerators and recruiting messages from Nazis.

All day and all night, our pockets buzz, shattering our concentration and tearing apart the fragile webs of connection we spin as we think through difficult ideas. If you locked someone in a cell and agitated them like this, we’d call it “sleep deprivation torture,” and it would be a war crime under the Geneva Conventions.
Afflicting The Afflicted

The effects of surveillance on our ability to be our authentic selves are not equal for all people. Some of us are lucky enough to live in a time and place in which all the most important facts of our lives are widely and roundly socially acceptable and can be publicly displayed without the risk of social consequence.

But for many of us, this is not true. Recall that in living memory, many of the ways of being that we think of as socially acceptable today were once cause for dire social sanction or even imprisonment. If you are 65 years old, you have lived through a time in which people living in “free societies” could be imprisoned or sanctioned for engaging in homosexual activity, for falling in love with a person whose skin was a different color than their own, or for smoking weed.

Today, these activities aren’t just decriminalized in much of the world; they’re considered normal, and the fallen prohibitions are viewed as shameful, regrettable relics of the past.

How did we get from prohibition to normalization? Through private, personal activity: People who were secretly gay or secret pot-smokers or who secretly loved someone with a different skin color were vulnerable to retaliation if they made their true selves known and were limited in how much they could advocate for their own right to exist in the world and be true to themselves. But because there was a private sphere, these people could form alliances with their friends and loved ones who did not share their disfavored traits by having private conversations in which they came out, disclosing their true selves to the people around them and bringing them to their cause one conversation at a time.

The right to choose the time and manner of these conversations was key to their success. It’s one thing to come out to your dad while you’re on a fishing trip away from the world and another thing entirely to blurt it out over the Christmas dinner table while your racist Facebook uncle is there to make a scene.

Without a private sphere, there’s a chance that none of these changes would have come to pass and that the people who benefited from these changes would have either faced social sanction for coming out to a hostile world or would have never been able to reveal their true selves to the people they love.

The corollary is that, unless you think that our society has attained social perfection — that your grandchildren in 50 years will ask you to tell them the story of how, in 2019, every injustice had been righted and no further change had to be made — then you should expect that right now, at this minute, there are people you love, whose happiness is key to your own, who have a secret in their hearts that stops them from ever being their authentic selves with you. These people are sorrowing and will go to their graves with that secret sorrow in their hearts, and the source of that sorrow will be the falsity of their relationship to you.

A private realm is necessary for human progress.
Any Data You Collect And Retain Will Eventually Leak

The lack of a private life can rob vulnerable people of the chance to be their authentic selves and constrain our actions by depriving us of sanctuary, but there is another risk that is borne by everyone, not just people with a secret: crime.

Personally identifying information is of very limited use for the purpose of controlling people’s minds, but identity theft — really a catchall term for a whole constellation of terrible criminal activities that can destroy your finances, compromise your personal integrity, ruin your reputation, or even expose you to physical danger — thrives on it.

Attackers are not limited to using data from one breached source, either. Multiple services have suffered breaches that exposed names, addresses, phone numbers, passwords, sexual tastes, school grades, work performance, brushes with the criminal justice system, family details, genetic information, fingerprints and other biometrics, reading habits, search histories, literary tastes, pseudonymous identities, and other sensitive information. Attackers can merge data from these different breaches to build up extremely detailed dossiers on random subjects and then use different parts of the data for different criminal purposes.

For example, attackers can use leaked username and password combinations to hijack whole fleets of commercial vehicles that have been fitted with anti-theft GPS trackers and immobilizers or to hijack baby monitors in order to terrorize toddlers with the audio tracks from pornography. Attackers use leaked data to trick phone companies into giving them your phone number, then they intercept SMS-based two-factor authentication codes in order to take over your email, bank account, and/or cryptocurrency wallets.

Attackers are endlessly inventive in the pursuit of creative ways to weaponize leaked data. One common use of leaked data is to penetrate companies in order to access more data.

Like spies, online fraudsters are totally dependent on companies over-collecting and over-retaining our data. Spy agencies sometimes pay companies for access to their data or intimidate them into giving it up, but sometimes they work just like criminals do — by sneaking data out of companies’ databases.

The over-collection of data has a host of terrible social consequences, from the erosion of our authentic selves to the undermining of social progress, from state surveillance to an epidemic of online crime. Commercial surveillance is also a boon to people running influence campaigns, but that’s the least of our troubles.
