CHAPTER THREE
THE DISCOVERY OF BEHAVIORAL SURPLUS
He watched the stars and noted birds in flight;
A river flooded or a fortress fell:
He made predictions that were sometimes right;
His lucky guesses were rewarded well.
—W. H. AUDEN
SONNETS FROM CHINA, VI
I. Google: The Pioneer of Surveillance Capitalism
Google is to surveillance capitalism what the Ford Motor Company and General
Motors were to mass-production–based managerial capitalism. New economic
logics and their commercial models are discovered by people in a time and place
and then perfected through trial and error. In our time Google became the
pioneer, discoverer, elaborator, experimenter, lead practitioner, role model, and
diffusion hub of surveillance capitalism. GM and Ford’s iconic status as
pioneers of twentieth-century capitalism made them enduring objects of
scholarly research and public fascination because the lessons they had to teach
resonated far beyond the individual companies. Google’s practices deserve the
same kind of examination, not merely as a critique of a single company but
rather as the starting point for the codification of a powerful new form of
capitalism.
With the triumph of mass production at Ford and for decades thereafter,
hundreds of researchers, businesspeople, engineers, journalists, and scholars
would excavate the circumstances of its invention, origins, and consequences.1
Decades later, scholars continued to write extensively about Ford, the man and
the company.2 GM has also been an object of intense scrutiny. It was the site of
Peter Drucker’s field studies for his seminal Concept of the Corporation, the
1946 book that codified the practices of the twentieth-century business
organization and established Drucker’s reputation as a management sage. In
addition to the many works of scholarship and analysis on these two firms, their
own leaders enthusiastically articulated their discoveries and practices. Henry
Ford and his general manager, James Couzens, and Alfred Sloan and his
marketing man, Henry “Buck” Weaver, reflected on, conceptualized, and
proselytized their achievements, specifically locating them in the evolutionary
drama of American capitalism.3
Google is a notoriously secretive company, and one is hard-pressed to
imagine a Drucker equivalent freely roaming the scene and scribbling in the
hallways. Its executives carefully craft their messages of digital evangelism in
books and blog posts, but its operations are not easily accessible to outside
researchers or journalists.4 In 2016 a lawsuit brought against the company by a
product manager alleged an internal spying program in which employees are
expected to identify coworkers who violate the firm’s confidentiality agreement:
a broad prohibition against divulging anything about the company to anyone.5
The closest thing we have to a Buck Weaver or James Couzens codifying
Google’s practices and objectives is the company’s longtime chief economist,
Hal Varian, who aids the cause of understanding with scholarly articles that
explore important themes. Varian has been described as “the Adam Smith of the
discipline of Googlenomics” and the “godfather” of its advertising model.6 It is
in Varian’s work that we find hidden-in-plain-sight important clues to the logic
of surveillance capitalism and its claims to power.
In two extraordinary articles in scholarly journals, Varian explored the theme
of “computer-mediated transactions” and their transformational effects on the
modern economy.7 Both pieces are written in amiable, down-to-earth prose, but
Varian’s casual understatement stands in counterpoint to his often-startling
declarations: “Nowadays there is a computer in the middle of virtually every
transaction… now that they are available these computers have several other
uses.”8 He then identifies four such new uses: “data extraction and analysis,”
“new contractual forms due to better monitoring,” “personalization and
customization,” and “continuous experiments.”
Varian’s discussions of these new “uses” are an unexpected guide to the
strange logic of surveillance capitalism, the division of learning that it shapes,
and the character of the information civilization toward which it leads. We will
return to Varian’s observations from time to time in the course of our
examination of the foundations of surveillance capitalism, aided by a kind of
“reverse engineering” of his assertions, so that we might grasp the worldview
and methods of surveillance capitalism through this lens. “Data extraction and
analysis,” Varian writes, “is what everyone is talking about when they talk about
big data.” “Data” are the raw material necessary for surveillance capitalism’s
novel manufacturing processes. “Extraction” describes the social relations and
material infrastructure with which the firm asserts authority over those raw
materials to achieve economies of scale in its raw-material supply operations.
“Analysis” refers to the complex of highly specialized computational systems
that I will generally refer to in these chapters as “machine intelligence.” I like
this umbrella phrase because it trains us on the forest rather than the trees,
helping us decenter from technology to its objectives. But in choosing this
phrase I also follow Google’s lead. The company describes itself “at the
forefront of innovation in machine intelligence,” a term in which it includes
machine learning as well as “classical” algorithmic production, along with many
computational operations that are often referred to with other terms such as
“predictive analytics” or “artificial intelligence.” Among these operations
Google cites its work on language translation, speech recognition, visual
processing, ranking, statistical modeling, and prediction: “In all of those tasks
and many others, we gather large volumes of direct or indirect evidence of
relationships of interest, applying learning algorithms to understand and
generalize.”9 These machine intelligence operations convert raw material into
the firm’s highly profitable algorithmic products designed to predict the behavior
of its users. The inscrutability and exclusivity of these techniques and operations
are the moat that surrounds the castle and secures the action within.
Google’s invention of targeted advertising paved the way to financial success,
but it also laid the cornerstone of a more far-reaching development: the
discovery and elaboration of surveillance capitalism. Its business is characterized
as an advertising model, and much has been written about Google’s automated
auction methods and other aspects of its inventions in the field of online
advertising. With so much verbiage, these developments are both over-described
and under-theorized. Our aim in this chapter and those that follow in Part I is to
reveal the “laws of motion” that drive surveillance competition, and in order to
do this we begin by looking freshly at the point of origin, when the foundational
mechanisms of surveillance capitalism were first discovered.
Before we begin, I want to say a word about vocabulary. Any confrontation
with the unprecedented requires new language, and I introduce new terms when
existing language fails to capture a new phenomenon. Sometimes, however, I
intentionally repurpose familiar language because I want to stress certain
continuities in the function of an element or process. This is the case with “laws
of motion,” borrowed from Newton’s laws of inertia, force, and equal and
opposite reactions.
Over the years historians have adopted this term to describe the “laws” of
industrial capitalism. For example, economic historian Ellen Meiksins Wood
documents the origins of capitalism in the changing relations between English
property owners and tenant farmers, as the owners began to favor productivity
over coercion: “The new historical dynamic allows us to speak of ‘agrarian
capitalism’ in early modern England, a social form with distinctive ‘laws of
motion’ that would eventually give rise to capitalism in its mature, industrial
form.”10 Wood describes how the new “laws of motion” eventually manifested
themselves in industrial production:
The critical factor in the divergence of capitalism from all other forms of “commercial society”
was the development of certain social property relations that generated market imperatives and
capitalist “laws of motion”… competitive production and profit-maximization, the compulsion to
reinvest surpluses, and the relentless need to improve labour-productivity associated with
capitalism.… Those laws of motion required vast social transformations and upheavals to set them
in train. They required a transformation in the human metabolism with nature, in the provision of
life’s basic necessities.11
My argument here is that although surveillance capitalism does not abandon
established capitalist “laws” such as competitive production, profit
maximization, productivity, and growth, these earlier dynamics now operate in
the context of a new logic of accumulation that also introduces its own
distinctive laws of motion. Here and in following chapters, we will examine
these foundational dynamics, including surveillance capitalism’s idiosyncratic
economic imperatives defined by extraction and prediction, its unique approach
to economies of scale and scope in raw-material supply, its necessary
construction and elaboration of means of behavioral modification that
incorporate its machine-intelligence–based “means of production” in a more
complex system of action, and the ways in which the requirements of behavioral
modification orient all operations toward totalities of information and control,
creating the framework for an unprecedented instrumentarian power and its
societal implications. For now, my aim is to reconstruct our appreciation of
familiar ground through new lenses: Google’s early days of optimism, crisis, and
invention.
II. A Balance of Power
Google was incorporated in 1998, founded by Stanford graduate students Larry
Page and Sergey Brin just two years after the Mosaic browser threw open the
doors of the world wide web to the computer-using public. From the start, the
company embodied the promise of information capitalism as a liberating and
democratic social force that galvanized and delighted second-modernity
populations around the world.
Thanks to this wide embrace, Google successfully imposed computer
mediation on broad new domains of human behavior as people searched online
and engaged with the web through a growing roster of Google services. As these
new activities were informated for the first time, they produced wholly new data
resources. For example, in addition to key words, each Google search query
produces a wake of collateral data such as the number and pattern of search
terms, how a query is phrased, spelling, punctuation, dwell times, click patterns,
and location.
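To make the idea of collateral data concrete, the hypothetical sketch below imagines the by-product record that a single query might leave behind alongside its keywords. The field names and values are invented for illustration; they are not Google's actual schema.

```python
# Hypothetical sketch: the behavioral "wake" trailing one search query.
# Every field name and value here is invented for illustration.
query_event = {
    "keywords": ["carol", "brady", "maiden", "name"],
    "raw_query": "carol bradys maiden name",  # phrasing, spelling, punctuation
    "num_terms": 4,
    "dwell_time_ms": 5400,                    # time spent on the results page
    "clicks": [{"result_rank": 2, "ms_to_click": 3100}],
    "location": "US-CA",
}
```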
Early on, these behavioral by-products were haphazardly stored and
operationally ignored. Amit Patel, a young Stanford graduate student with a
special interest in “data mining,” is frequently credited with the groundbreaking
insight into the significance of Google’s accidental data caches. His work with
these data logs persuaded him that detailed stories about each user—thoughts,
feelings, interests—could be constructed from the wake of unstructured signals
that trailed every online action. These data, he concluded, actually provided a
“broad sensor of human behavior” and could be put to immediate use in
realizing cofounder Larry Page’s dream of Search as a comprehensive artificial
intelligence.12
Google’s engineers soon grasped that the continuous flows of collateral
behavioral data could turn the search engine into a recursive learning system that
constantly improved search results and spurred product innovations such as spell
check, translation, and voice recognition. As Kenneth Cukier observed at that
time,
Other search engines in the 1990s had the chance to do the same, but did not pursue it. Around
2000 Yahoo! saw the potential, but nothing came of the idea. It was Google that recognized the
gold dust in the detritus of its interactions with its users and took the trouble to collect it up.…
Google exploits information that is a by-product of user interactions, or data exhaust, which is
automatically recycled to improve the service or create an entirely new product.13
What had been regarded as waste material—“data exhaust” spewed into
Google’s servers during the combustive action of Search—was quickly
reimagined as a critical element in the transformation of Google’s search engine
into a reflexive process of continuous learning and improvement.
At that early stage of Google’s development, the feedback loops involved in
improving its Search functions produced a balance of power: Search needed
people to learn from, and people needed Search to learn from. This symbiosis
enabled Google’s algorithms to learn and produce ever-more relevant and
comprehensive search results. More queries meant more learning; more learning
produced more relevance. More relevance meant more searches and more
users.14 By the time the young company held its first press conference in 1999,
to announce a $25 million equity investment from two of the most revered
Silicon Valley venture capital firms, Sequoia Capital and Kleiner Perkins,
Google Search was already fielding seven million requests each day.15 A few
years later, Hal Varian, who joined Google as its chief economist in 2002, would
note, “Every action a user performs is considered a signal to be analyzed and fed
back into the system.”16 The PageRank algorithm, named after its founder, had
already given Google a significant advantage in identifying the most popular
results for queries. Over the course of the next few years it would be the capture,
storage, analysis, and learning from the by-products of those search queries that
would turn Google into the gold standard of web search.
The key point for us rests on a critical distinction. During this early period,
behavioral data were put to work entirely on the user’s behalf. User data
provided value at no cost, and that value was reinvested in the user experience in
the form of improved services: enhancements that were also offered at no cost to
users. Users provided the raw material in the form of behavioral data, and those
data were harvested to improve speed, accuracy, and relevance and to help build
ancillary products such as translation. I call this the behavioral value
reinvestment cycle, in which all behavioral data are reinvested in the
improvement of the product or service (see Figure 1).
The cycle emulates the logic of the iPod; it worked beautifully at Google but
with one critical difference: the absence of a sustainable market transaction. In
the case of the iPod, the cycle was triggered by the purchase of a high-margin
physical product. Subsequent reciprocities improved the iPod product and led to
increased sales. Customers were the subjects of the commercial process, which
promised alignment with their “what I want, when I want, where I want”
demands. At Google, the cycle was similarly oriented toward the individual as
its subject, but without a physical product to sell, it floated outside the
marketplace, an interaction with “users” rather than a market transaction with
customers.
This helps to explain why it is inaccurate to think of Google’s users as its
customers: there is no economic exchange, no price, and no profit. Nor do users
function in the role of workers. When a capitalist hires workers and provides
them with wages and means of production, the products that they produce
belong to the capitalist to sell at a profit. Not so here. Users are not paid for their
labor, nor do they operate the means of production, as we’ll discuss in more
depth later in this chapter. Finally, people often say that the user is the “product.”
This is also misleading, and it is a point that we will revisit more than once. For
now let’s say that users are not products, but rather we are the sources of raw-material supply. As we shall see, surveillance capitalism’s unusual products
manage to be derived from our behavior while remaining indifferent to our
behavior. Its products are about predicting us, without actually caring what we
do or what is done to us.
To summarize, at this early stage of Google’s development, whatever Search
users inadvertently gave up that was of value to the company they also used up
in the form of improved services. In this reinvestment cycle, serving users with
amazing Search results “consumed” all the value that users created when they
provided extra behavioral data. The fact that users needed Search about as much
as Search needed users created a balance of power between Google and its
populations. People were treated as ends in themselves, the subjects of a
nonmarket, self-contained cycle that was perfectly aligned with Google’s stated
mission “to organize the world’s information, making it universally accessible
and useful.”
Figure 1: The Behavioral Value Reinvestment Cycle
III. Search for Capitalism: Impatient Money and the State of Exception
By 1999, despite the splendor of Google’s new world of searchable web pages,
its growing computer science capabilities, and its glamorous venture backers,
there was no reliable way to turn investors’ money into revenue. The behavioral
value reinvestment cycle produced a very cool search function, but it was not yet
capitalism. The balance of power made it financially risky and possibly
counterproductive to charge users a fee for search services. Selling search results
would also have set a dangerous precedent for the firm, assigning a price to
indexed information that Google’s web crawler had already taken from others
without payment. Without a device like Apple’s iPod or its digital songs, there
were no margins, no surplus, nothing left over to sell and turn into revenue.
Google had relegated advertising to steerage class: its AdWords team
consisted of seven people, most of whom shared the founders’ general antipathy
toward ads. The tone had been set in Sergey Brin and Larry Page’s milestone
paper that unveiled their search engine conception, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” presented at the 1998 World Wide Web
Conference: “We expect that advertising funded search engines will be
inherently biased towards the advertisers and away from the needs of the
consumers. This type of bias is very difficult to detect but could still have a
significant effect on the market… we believe the issue of advertising causes
enough mixed incentives that it is crucial to have a competitive search engine
that is transparent and in the academic realm.”17
Google’s first revenues depended instead on exclusive licensing deals to
provide web services to portals such as Yahoo! and Japan’s BIGLOBE.18 It also
generated modest revenue from sponsored ads linked to search query
keywords.19 There were other models for consideration. Rival search engines
such as Overture, used exclusively by the then-giant portal AOL, or Inktomi, the
search engine adopted by Microsoft, collected revenues from the sites whose
pages they indexed. Overture was also successful in attracting online ads with its
policy of allowing advertisers to pay for high-ranking search listings, the very
format that Brin and Page scorned.20
Prominent analysts publicly doubted whether Google could compete with its
more-established rivals. As the New York Times asked, “Can Google create a
business model even remotely as good as its technology?”21 A well-known
Forrester Research analyst proclaimed that there were only a few ways for
Google to make money with Search: “build a portal [like Yahoo!]… partner with
a portal… license the technology… wait for a big company to purchase them.”22
Despite these general misgivings about Google’s viability, the firm’s
prestigious venture backing gave the founders confidence in their ability to raise
money. This changed abruptly in April 2000, when the legendary dot-com
economy began its steep plunge into recession, and Silicon Valley’s Garden of
Eden unexpectedly became the epicenter of a financial earthquake.
By mid-April, Silicon Valley’s fast-money culture of privilege was under
siege with the implosion of what came to be known as the “dot-com bubble.” It
is easy to forget exactly how terrifying things were for the valley’s ambitious
young people and their slightly older investors. Startups with outsized valuations
just months earlier were suddenly forced to shutter. Prominent articles such as
“Doom Stalks the Dotcoms” noted that the stock prices of Wall Street’s most-revered internet “high flyers” were “down for the count,” with many of them
trading below their initial offering price: “With many dotcoms declining, neither
venture capitalists nor Wall Street is eager to give them a dime.…”23 The news
brimmed with descriptions of shell-shocked investors. The week of April 10 saw
the worst decline in the history of the NASDAQ, where many internet
companies had gone public, and there was a growing consensus that the “game”
had irreversibly changed.24
As the business environment in Silicon Valley unraveled, investors’ prospects
for cashing out by selling Google to a big company seemed far less likely, and
they were not immune to the rising tide of panic. Many Google investors began
to express doubts about the company’s prospects, and some threatened to
withdraw support. Pressure for profit mounted sharply, despite the fact that
Google Search was widely considered the best of all the search engines, traffic to
its website was surging, and a thousand résumés flooded the firm’s Mountain
View office each day. Page and Brin were seen to be moving too slowly, and
their top venture capitalists, John Doerr from Kleiner Perkins and Michael
Moritz from Sequoia, were frustrated.25 According to Google chronicler Steven Levy, “The VCs were screaming bloody murder. Tech’s salad days were over, and it wasn’t certain that Google would avoid becoming another crushed radish.”26
The specific character of Silicon Valley’s venture funding, especially during
the years leading up to dangerous levels of startup inflation, also contributed to a
growing sense of emergency at Google. As Stanford sociologist Mark
Granovetter and his colleague Michel Ferrary found in their study of valley
venture firms, “A connection with a high-status VC firm signals the high status
of the startup and encourages other agents to link to it.”27 These themes may
seem obvious now, but it is useful to mark the anxiety of those months of sudden
crisis. Prestigious risk investment functioned as a form of vetting—much like
acceptance to a top university sorts and legitimates students, elevating a few
against the backdrop of the many—especially in the “uncertain” environment
characteristic of high-tech investing. Loss of that high-status signaling power
assigned a young company to a long list of also-rans in Silicon Valley’s fast-moving saga.
Other research findings point to the consequences of the impatient money
that flooded the valley as inflationary hype drew speculators and ratcheted up the
volatility of venture funding.28 Studies of pre-bubble investment patterns
showed a “big-score” mentality in which bad results tended to stimulate
increased investing as funders chased the belief that some young company
would suddenly discover the elusive business model destined to turn all their
bets into rivers of gold.29 Startup mortality rates in Silicon Valley outstripped
those for other venture capital centers such as Boston and Washington, DC, with
impatient money producing a few big wins and many losses.30 Impatient money
is also reflected in the size of Silicon Valley startups, which during this period
were significantly smaller than in other regions, employing an average of 68
employees as compared to an average of 112 in the rest of the country.31 This
reflects an interest in quick returns without spending much time on growing a
business or deepening its talent base, let alone developing the institutional
capabilities that Joseph Schumpeter would have advised. These propensities
were exacerbated by the larger Silicon Valley culture, where net worth was
celebrated as the sole measure of success for valley parents and their children.32
For all their genius and principled insights, Brin and Page could not ignore
the mounting sense of emergency. By December 2000, the Wall Street Journal
reported on the new “mantra” emerging from Silicon Valley’s investment
community: “Simply displaying the ability to make money will not be enough to
remain a major player in the years ahead. What will be required will be an ability
to show sustained and exponential profits.”33
IV. The Discovery of Behavioral Surplus
The declaration of a state of exception functions in politics as cover for the
suspension of the rule of law and the introduction of new executive powers
justified by crisis.34 At Google in late 2000, it became a rationale for annulling
the reciprocal relationship that existed between Google and its users, steeling the
founders to abandon their passionate and public opposition to advertising. As a
specific response to investors’ anxiety, the founders tasked the tiny AdWords
team with the objective of looking for ways to make more money.35 Page
demanded that the whole process be simplified for advertisers. In this new
approach, he insisted that advertisers “shouldn’t even get involved with choosing
keywords—Google would choose them.”36
Operationally, this meant that Google would turn its own growing cache of
behavioral data and its computational power and expertise toward the single task
of matching ads with queries. New rhetoric took hold to legitimate this unusual
move. If there was to be advertising, then it had to be “relevant” to users. Ads
would no longer be linked to keywords in a search query, but rather a particular
ad would be “targeted” to a particular individual. Securing this holy grail of
advertising would ensure relevance to users and value to advertisers.
Absent from the new rhetoric was the fact that in pursuit of this new aim,
Google would cross into virgin territory by exploiting sensitivities that only its
exclusive and detailed collateral behavioral data about millions and later billions
of users could reveal. To meet the new objective, the behavioral value
reinvestment cycle was rapidly and secretly subordinated to a larger and more
complex undertaking. The raw materials that had been solely used to improve
the quality of search results would now also be put to use in the service of
targeting advertising to individual users. Some data would continue to be applied
to service improvement, but the growing stores of collateral signals would be
repurposed to improve the profitability of ads for both Google and its
advertisers. These behavioral data available for uses beyond service
improvement constituted a surplus, and it was on the strength of this behavioral
surplus that the young company would find its way to the “sustained and
exponential profits” that would be necessary for survival. Thanks to a perceived
emergency, a new mutation began to gather form and quietly slip its moorings in
the implicit advocacy-oriented social contract of the firm’s original relationship
with users.
Google’s declared state of exception was the backdrop for 2002, the
watershed year during which surveillance capitalism took root. The firm’s
appreciation of behavioral surplus crossed another threshold that April, when the
data logs team arrived at their offices one morning to find that a peculiar phrase
had surged to the top of the search queries: “Carol Brady’s maiden name.” Why
the sudden interest in a 1970s television character? It was data scientist and logs
team member Amit Patel who recounted the event to the New York Times,
noting, “You can’t interpret it unless you know what else is going on in the
world.”37
The team went to work to solve the puzzle. First, they discerned that the
pattern of queries had produced five separate spikes, each beginning at forty-eight minutes after the hour. Then they learned that the query pattern occurred
during the airing of the popular TV show Who Wants to Be a Millionaire? The
spikes reflected the successive time zones during which the show aired, ending
in Hawaii. In each time zone, the show’s host posed the question of Carol
Brady’s maiden name, and in each zone the queries immediately flooded into
Google’s servers.
As the New York Times reported, “The precision of the Carol Brady data was
eye-opening for some.” Even Brin was stunned by the clarity of Search’s
predictive power, revealing events and trends before they “hit the radar” of
traditional media. As he told the Times, “It was like trying an electron
microscope for the first time. It was like a moment-by-moment barometer.”38
Google executives were described by the Times as reluctant to share their
thoughts about how their massive stores of query data might be commercialized.
“There is tremendous opportunity with this data,” one executive confided.39
Just a month before the Carol Brady moment, while the AdWords team was
already working on new approaches, Brin and Page hired Eric Schmidt, an
experienced executive, engineer, and computer science Ph.D., as chairman. By
August, they appointed him to the CEO’s role. Doerr and Moritz had been
pushing the founders to hire a professional manager who would know how to
pivot the firm toward profit.40 Schmidt immediately implemented a “belt-tightening” program, grabbing the budgetary reins and heightening the general
sense of financial alarm as fund-raising prospects came under threat. A squeeze
on workspace found him unexpectedly sharing his office with none other than
Amit Patel.
Schmidt later boasted that as a result of their close quarters over the course of
several months, he had instant access to better revenue figures than did his own
financial planners.41 We do not know (and may never know) what other insights
Schmidt might have gleaned from Patel about the predictive power of Google’s
behavioral data stores, but there is no doubt that a deeper grasp of the predictive
power of data quickly shaped Google’s specific response to financial emergency,
triggering the crucial mutation that ultimately turned AdWords, Google, the
internet, and the very nature of information capitalism toward an astonishingly
lucrative surveillance project.
Google’s earliest ads had been considered more effective than most online
advertising at the time because they were linked to search queries and Google
could track when users actually clicked on an ad, known as the “click-through”
rate. Despite this, advertisers were billed in the conventional manner according
to how many people viewed an ad. As Search expanded, Google created the self-service system called AdWords, in which a search that used the advertiser’s
keyword would include that advertiser’s text box and a link to its landing page.
Ad pricing depended upon the ad’s position on the search results page.
Rival search startup Overture had developed an online auction system for
web page placement that allowed it to scale online advertising targeted to
keywords. Google would produce a transformational enhancement to that model,
one that was destined to alter the course of information capitalism. As a
Bloomberg journalist explained in 2006, “Google maximizes the revenue it gets
from that precious real estate by giving its best position to the advertiser who is
likely to pay Google the most in total, based on the price per click multiplied by
Google’s estimate of the likelihood that someone will actually click on the ad.”42
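The rule the journalist describes reduces to an expected-value calculation: rank each advertiser by its price per click multiplied by the predicted probability of a click. A minimal sketch with invented numbers shows why a lower bidder with a better-targeted ad can win the best position:

```python
# A minimal sketch of the ranking rule quoted above: expected revenue per
# impression = price per click (bid) * estimated click probability.
# The advertisers and numbers are invented for illustration.
ads = [
    {"advertiser": "A", "bid": 2.00, "predicted_ctr": 0.01},
    {"advertiser": "B", "bid": 0.50, "predicted_ctr": 0.08},
]

# B wins the best position: 0.50 * 0.08 = $0.04 expected per impression
# beats A's 2.00 * 0.01 = $0.02, even though A bid four times as much.
winner = max(ads, key=lambda ad: ad["bid"] * ad["predicted_ctr"])
print(winner["advertiser"])  # B
```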
That pivotal multiplier was the result of Google’s advanced computational
capabilities trained on its most significant and secret discovery: behavioral
surplus. From this point forward, the combination of ever-increasing machine
intelligence and ever-more-vast supplies of behavioral surplus would become the
foundation of an unprecedented logic of accumulation. Google’s reinvestment
priorities would shift from merely improving its user offerings to inventing and
institutionalizing the most far-reaching and technologically advanced raw-material supply operations that the world had ever seen. Henceforth, revenues
and growth would depend upon more behavioral surplus.
Google’s many patents filed during those early years illustrate the explosion
of discovery, inventiveness, and complexity detonated by the state of exception
that led to these crucial innovations and the firm’s determination to advance the
capture of behavioral surplus.43 Among these efforts, I focus here on one patent
submitted in 2003 by three of the firm’s top computer scientists and titled
“Generating User Information for Use in Targeted Advertising.”44 The patent is
emblematic of the new mutation and the emerging logic of accumulation that
would define Google’s success. Of even greater interest, it also provides an
unusual glimpse into the “economic orientation” baked deep into the technology
cake by reflecting the mindset of Google’s distinguished scientists as they
harnessed their knowledge to the firm’s new aims.45 In this way, the patent
stands as a treatise on a new political economics of clicks and its moral universe,
before the company learned to disguise this project in a fog of euphemism.
The patent reveals a pivoting of the backstage operation toward Google’s
new audience of genuine customers. “The present invention concerns
advertising,” the inventors announce. Despite the enormous quantity of
demographic data available to advertisers, the scientists note that much of an ad
budget “is simply wasted… it is very difficult to identify and eliminate such
waste.”46
Advertising had always been a guessing game: art, relationships,
conventional wisdom, standard practice, but never “science.” The idea of being
able to deliver a particular message to a particular person at just the moment
when it might have a high probability of actually influencing his or her behavior
was, and had always been, the holy grail of advertising. The inventors point out
that online ad systems had also failed to achieve this elusive goal. The then-predominant approaches used by Google’s competitors, in which ads were
targeted to keywords or content, were unable to identify relevant ads “for a
particular user.” Now the inventors offered a scientific solution that exceeded
the most-ambitious dreams of any advertising executive:
There is a need to increase the relevancy of ads served for some user request, such as a search
query or a document request… to the user that submitted the request.… The present invention may
involve novel methods, apparatus, message formats and/or data structures for determining user
profile information and using such determined user profile information for ad serving.47
In other words, Google would no longer mine behavioral data strictly to
improve service for users but rather to read users’ minds for the purposes of
matching ads to their interests, as those interests are deduced from the collateral
traces of online behavior. With Google’s unique access to behavioral data, it
would now be possible to know what a particular individual in a particular time
and place was thinking, feeling, and doing. That this no longer seems
astonishing to us, or perhaps even worthy of note, is evidence of the profound
psychic numbing that has inured us to a bold and unprecedented shift in
capitalist methods.
The techniques described in the patent meant that each time a user queries
Google’s search engine, the system simultaneously presents a specific
configuration of a particular ad, all in the fraction of a moment that it takes to
fulfill the search query. The data used to perform this instant translation from
query to ad, a predictive analysis that was dubbed “matching,” went far beyond
the mere denotation of search terms. New data sets were compiled that would
dramatically enhance the accuracy of these predictions. These data sets were
referred to as “user profile information” or “UPI.” These new data meant that
there would be no more guesswork and far less waste in the advertising budget.
Mathematical certainty would replace all of that.
Where would UPI come from? The scientists announce a breakthrough. They
first explain that some of the new data can be culled from the firm’s existing
systems with its continuously accruing caches of behavioral data from Search.
Then they stress that even more behavioral data can be hunted and herded from
anywhere in the online world. UPI, they write, “may be inferred,” “presumed,”
and “deduced.” Their new methods and computational tools could create UPI
from integrating and analyzing a user’s search patterns, document inquiries, and
myriad other signals of online behaviors, even when users do not directly
provide that personal information: “User profile information may include any
information about an individual user or a group of users. Such information may
be provided by the user, provided by a third-party authorized to release user
information, and/or derived from user actions. Certain user information can be
deduced or presumed using other user information of the same user and/or user
information of other users. UPI may be associated with various entities.”48
The inventors explain that UPI can be deduced directly from a user’s or
group’s actions, from any kind of document a user views, or from an ad landing
page: “For example, an ad for prostate cancer screening might be limited to user
profiles having the attribute ‘male’ and ‘age 45 and over.’”49 They describe
different ways to obtain UPI. One relies on “machine learning classifiers” that
predict values on a range of attributes. “Association graphs” are developed to
reveal the relationships among users, documents, search queries, and web pages:
“user-to-user associations may also be generated.”50 The inventors also note that
their methods can be understood only among the priesthood of computer
scientists drawn to the analytic challenges of this new online universe: “The
following description is presented to enable one skilled in the art to make and
use the invention.… Various modifications to the disclosed embodiments will be
apparent to those skilled in the art.…”51
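As one hedged illustration of the “machine learning classifiers” that predict values on a range of attributes, the toy sketch below infers an undisclosed attribute from behavioral signals alone. The features, data, and attribute are invented; this shows the general technique, not Google's method.

```python
# Toy sketch of attribute inference from behavioral signals. All features,
# data, and labels are invented; this is not Google's method.
from sklearn.linear_model import LogisticRegression

# Each row: [searches_per_day, news_page_views, sports_page_views]
X = [[12, 30, 2], [40, 5, 25], [8, 22, 1], [35, 3, 30]]
# Label: 1 if the user disclosed the attribute "age 45 and over," else 0.
y = [1, 0, 1, 0]

clf = LogisticRegression().fit(X, y)

# For a user who never provided the attribute, a value is "deduced or
# presumed" from behavior alone, sidestepping the missing-UPI friction.
print(clf.predict([[10, 28, 3]]))  # inferred attribute value
```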
Of critical importance to our story is the scientists’ observation that the most
challenging sources of friction here are social, not technical. Friction arises
when users intentionally fail to provide information for no other reason than that
they choose not to. “Unfortunately, user profile information is not always
available,” the scientists warn. Users do not always “voluntarily” provide
information, or “the user profile may be incomplete… and hence not
comprehensive, because of privacy considerations, etc.”52
A clear aim of the patent is to assure its audience that Google scientists will
not be deterred by users’ exercise of decision rights over their personal
information, despite the fact that such rights were an inherent feature of the
original social contract between the company and its users.53 Even when users
do provide UPI, the inventors caution, “it may be intentionally or unintentionally
inaccurate, it may become stale.… UPI for a user… can be determined (or
updated or extended) even when no explicit information is given to the system.…
An initial UPI may include some expressly entered UPI information, though it
doesn’t need to.”54
The scientists thus make clear that they are willing—and that their inventions
are able—to overcome the friction entailed in users’ decision rights. Google’s
proprietary methods enable it to surveil, capture, expand, construct, and claim
behavioral surplus, including data that users intentionally choose not to share.
Recalcitrant users will not be obstacles to data expropriation. No moral, legal, or
social constraints will stand in the way of finding, claiming, and analyzing
others’ behavior for commercial purposes.
The inventors provide examples of the kinds of attributes that Google could
assess as it compiles its UPI data sets while circumnavigating users’ knowledge,
intentions, and consent. These include websites visited, psychographics,
browsing activity, and information about previous advertisements that the user
has been shown, selected, and/or made purchases after viewing.55 It is a long list
that is certainly much longer today.
Finally, the inventors observe another obstacle to effective targeting. Even
when user information exists, they say, “Advertisers may not be able to use this
information to target ads effectively.”56 On the strength of the invention
presented in this patent, and others related to it, the inventors publicly declare
Google’s unique prowess in hunting, capturing, and transforming surplus into
predictions for accurate targeting. No other firm could equal its range of access
to behavioral surplus, its bench strength of scientific knowledge and technique,
its computational power, or its storage infrastructure. In 2003 only Google could
pull surplus from multiple sites of activity and integrate each increment of data
into comprehensive “data structures.” Google was uniquely positioned with the
state-of-the-art knowledge in computer science to convert those data into
predictions of who will click on which configuration of what ad as the basis for a
final “matching” result, all computed in micro-fractions of a second.
To state all this in plain language, Google’s invention revealed new
capabilities to infer and deduce the thoughts, feelings, intentions, and interests of
individuals and groups with an automated architecture that operates as a one-way mirror irrespective of a person’s awareness, knowledge, and consent, thus
enabling privileged secret access to behavioral data.
A one-way mirror embodies the specific social relations of surveillance based
on asymmetries of knowledge and power. The new mode of accumulation
invented at Google would derive, above all, from the firm’s willingness and
ability to impose these social relations on its users. Its willingness was mobilized
by what the founders came to regard as a state of exception; its ability came
from its actual success in leveraging privileged access to behavioral surplus in
order to predict the behavior of individuals now, soon, and later. The predictive
insights thus acquired would constitute a world-historic competitive advantage
in a new marketplace where low-risk bets about the behavior of individuals are
valued, bought, and sold.
Google would no longer be a passive recipient of accidental data that it could
recycle for the benefit of its users. The targeted advertising patent sheds light on
the path of discovery that Google traveled from its advocacy-oriented founding
toward the elaboration of behavioral surveillance as a full-blown logic of
accumulation. The invention itself exposes the reasoning through which the
behavioral value reinvestment cycle was subjugated to the service of a new
commercial calculation. Behavioral data, whose value had previously been “used
up” on improving the quality of Search for users, now became the pivotal—and
exclusive to Google—raw material for the construction of a dynamic online
advertising marketplace. Google would now secure more behavioral data than it
needed to serve its users. That surplus, a behavioral surplus, was the game-changing, zero-cost asset that was diverted from service improvement toward a
genuine and highly lucrative market exchange.
These capabilities were and remain inscrutable to all but an exclusive data
priesthood among whom Google is the übermensch. They operate in obscurity,
indifferent to social norms or individual claims to self-determining decision
rights. These moves established the foundational mechanisms of surveillance
capitalism.
The state of exception declared by Google’s founders transformed the
youthful Dr. Jekyll into a ruthless, muscular Mr. Hyde determined to hunt his
prey anywhere, anytime, irrespective of others’ self-determining aims. The new
Google ignored claims to self-determination and acknowledged no a priori limits
on what it could find and take. It dismissed the moral and legal content of
individual decision rights and recast the situation as one of technological
opportunism and unilateral power. This new Google assures its actual customers
that it will do whatever it takes to transform the natural obscurity of human
desire into scientific fact. This Google is the superpower that establishes its own
values and pursues its own purposes above and beyond the social contracts to
which others are bound.
V. Surplus at Scale
There were other new elements that helped to establish the centrality of
behavioral surplus in Google’s commercial operations, beginning with its pricing
innovations. The first new pricing metric was based on “click-through rates,” or
how many times a user clicks on an ad through to the advertiser’s web page,
rather than pricing based on the number of views that an ad receives. The click-through was interpreted as a signal of relevance and therefore a measure of
successful targeting, operational results that derive from and reflect the value of
behavioral surplus.
This new pricing discipline established an ever-escalating incentive to
increase behavioral surplus in order to continuously upgrade the effectiveness of
predictions. Better predictions lead directly to more click-throughs and thus to
revenue. Google learned new ways to conduct automated auctions for ad
targeting that allowed the new invention to scale quickly, accommodating
hundreds of thousands of advertisers and billions (later it would be trillions) of
auctions simultaneously. Google’s unique auction methods and capabilities
earned a great deal of attention, which distracted observers from reflecting on
exactly what was being auctioned: derivatives of behavioral surplus. Click-through metrics institutionalized “customer” demand for these prediction
products and thus established the central importance of economies of scale in
surplus supply operations. Surplus capture would have to become automatic and
ubiquitous if the new logic was to succeed, as measured by the successful
trading of behavioral futures.
Another key metric called the “quality score” helped determine the price of
an ad and its specific position on the page, in addition to advertisers’ own
auction bids. The quality score was determined in part by click-through rates and
in part by the firm’s analyses of behavioral surplus. “The click-through rate
needed to be a predictive thing,” one top executive insisted, and that would
require “all the information we had about the query right then.”57 It would take
enormous computing power and leading-edge algorithmic programs to produce
powerful predictions of user behavior that became the criteria for estimating the
relevance of an ad. Ads that scored high would sell at a lower price than those
that scored poorly. Google’s customers, its advertisers, complained that the
quality score was a black box, and Google was determined to keep it so.
Nonetheless, when customers followed its disciplines and produced high-scoring
ads, their click-through rates soared.
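The text leaves the quality score as a black box, so the sketch below is only a hedged simplification showing how such a score could set position and price at once. The numbers are invented, and the second-price rule used here is a commonly described variant, not Google's disclosed formula:

```python
# Hedged sketch: a quality score that sets both ad position and price.
# Numbers are invented; the second-price rule below is one commonly
# described simplification, not Google's disclosed "black box" formula.
ads = [
    {"advertiser": "A", "bid": 4.00, "quality": 0.4},
    {"advertiser": "B", "bid": 2.00, "quality": 1.0},
]

# Position: rank by bid weighted by quality score.
ranked = sorted(ads, key=lambda ad: ad["bid"] * ad["quality"], reverse=True)
top, runner_up = ranked[0], ranked[1]

# Price: the winner pays just enough to outrank the runner-up. The
# high-scoring ad B takes the top slot (2.00 * 1.0 beats 4.00 * 0.4) yet
# pays only 1.6 / 1.0 = $1.60 per click, less than its $2.00 bid.
price = (runner_up["bid"] * runner_up["quality"]) / top["quality"]
print(f"{top['advertiser']} pays ${price:.2f} per click")
```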
AdWords quickly became so successful that it inspired significant expansion
of the surveillance logic. Advertisers demanded more clicks.58 The answer was
to extend the model beyond Google’s search pages and convert the entire
internet into a canvas for Google’s targeted ads. This required turning Google’s
newfound skills at “data extraction and analysis,” as Hal Varian put it, toward
the content of any web page or user action by employing Google’s rapidly
expanding semantic analysis and artificial intelligence capabilities to efficiently
“squeeze” meaning from them. Only then could Google accurately assess the
content of a page and how users interact with that content. This “content-targeted
advertising” based on Google’s patented methods was eventually named
AdSense. By 2004, AdSense had achieved a run rate of a million dollars per day,
and by 2010, it produced annual revenues of more than $10 billion.
So here was an unprecedented and lucrative brew: behavioral surplus, data
science, material infrastructure, computational power, algorithmic systems, and
automated platforms. This convergence produced unprecedented “relevance”
and billions of auctions. Click-through rates skyrocketed. Work on AdWords and
AdSense became just as important as work on Search.
With click-through rates established as the measure of relevance,
behavioral surplus was institutionalized as the cornerstone of a new kind of
commerce that depended upon online surveillance at scale. Insiders referred to
Google’s new science of behavioral prediction as the “physics of clicks.”59
Mastery of this new domain required a specialized breed of click physicists who
would secure Google’s preeminence within the nascent priesthood of behavioral
prediction. The firm’s substantial revenue flows summoned the greatest minds of
our age from fields such as artificial intelligence, statistics, machine learning,
data science, and predictive analytics to converge on the prediction of human
behavior as measured by click-through rates: computer-mediated fortune-telling
and selling. The firm would recruit an authority on information economics, and
consultant to Google since 2001, as the patriarch of this auspicious group and the
still-young science: Hal Varian was the chosen shepherd of this flock.
Page and Brin had been reluctant to embrace advertising, but as the evidence
mounted that ads could save the company from crisis, their attitudes shifted.60
Saving the company also meant saving themselves from being just another
couple of very smart guys who couldn’t figure out how to make real money,
insignificant players in the intensely material and competitive culture of Silicon
Valley. Page was haunted by the example of the brilliant but impoverished
scientist Nikola Tesla, who died without ever benefiting financially from his
inventions. “You need to do more than just invent things,” Page reflected.61 Brin
had his own take: “Honestly, when we were still in the dot-com boom days, I felt
like a schmuck. I had an internet startup—so did everybody else. It was
unprofitable, like everybody else’s.”62 Exceptional threats to their financial and
social status appear to have awakened a survival instinct in Page and Brin that
required exceptional adaptive measures.63 The Google founders’ response to the
fear that stalked their community effectively declared a “state of exception” in
which it was judged necessary to suspend the values and principles that had
guided Google’s founding and early practices.
Later, Sequoia’s Moritz recalled the crisis conditions that provoked the firm’s
“ingenious” self-reinvention, when crisis opened a fork in the road and drew the
company in a wholly new direction. He stressed the specificity of Google’s
inventions, their origins in emergency, and the 180-degree turn from serving
users to surveilling them. Most of all, he credited the discovery of behavioral
surplus as the game-changing asset that turned Google into a fortune-telling
giant, pinpointing Google’s breakthrough transformation of the Overture model,
when the young company first applied its analytics of behavioral surplus to
predict the likelihood of a click:
The first 12 months of Google were not a cakewalk, because the company didn’t start off in the
business that it eventually tapped. At first it went in a different direction, which was selling its
technology—selling licenses for its search engines to larger internet properties and to corporations.
… Cash was going out of the window at a feral rate during the first six, seven months. And then,
very ingeniously, Larry… and Sergey… and others fastened on a model that they had seen this
other company, Overture, develop, which was ranked advertisements. They saw how it could be
improved and enhanced and made it their own, and that transformed the business.64
Moritz’s reflections suggest that without the discovery of behavioral surplus
and the turn toward surveillance operations, Google’s “feral” rate of spending
was not sustainable and the firm’s survival was imperiled. We will never know
what Google might have made of itself without the state of exception fueled by
the emergency of impatient money that shaped those crucial years of
development. What other pathways to sustainable revenue might have been
explored or invented? What alternative futures might have been summoned to
keep faith with the founders’ principles and with their users’ rights to self-determination? Instead, Google loosed a new incarnation of capitalism upon the
world, a Pandora’s box whose contents we are only beginning to understand.
VI. A Human Invention
Key to our conversation is this fact: surveillance capitalism was invented by a
specific group of human beings in a specific time and place. It is not an inherent
result of digital technology, nor is it a necessary expression of information
capitalism. It was intentionally constructed at a moment in history, in much the
same way that the engineers and tinkerers at the Ford Motor Company invented
mass production in the Detroit of 1913.
Henry Ford set out to prove that he could maximize profits by driving up
volumes, radically decreasing costs, and widening demand. It was an unproven
commercial equation for which no economic theory or body of practice existed.
Fragments of the formula had surfaced before—in meatpacking plants, flour-milling operations, sewing machine and bicycle factories, armories, canneries,
and breweries. There was a growing body of practical knowledge about the
interchangeability of parts and absolute standardization, precision machines, and
continuous flow production. But no one had achieved the grand symphony that
Ford heard in his imagination.
As historian David Hounshell tells it, there was a time, April 1, 1913, and a
place, Detroit, when the first moving assembly line seemed to be “just another
step in the years of development at Ford yet somehow suddenly dropped out of
the sky. Even before the end of the day, some of the engineers sensed that they
had made a fundamental breakthrough.”65 Within a year, productivity increases
across the plant ranged from 50 percent to as much as ten times the output of the
old fixed-assembly methods.66 The Model T that sold for $825 in 1908 was
priced at a record low for a four-cylinder automobile in 1924, just $260.67
Much as with Ford, some elements of the economic surveillance logic in the
online environment had been operational for years, familiar only to a rarefied
group of early computer experts. For example, the software mechanism known
as the “cookie”—bits of code that allow information to be passed between a
server and a client computer—was developed in 1994 at Netscape, the first
commercial web browser company.68 Similarly, “web bugs”—tiny (often
invisible) graphics embedded in web pages and e-mail and designed to monitor
user activity and collect personal information—were well-known to experts in
the late 1990s.69
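The cookie mechanism described here can be sketched in a few lines. The header names below are real HTTP; the site and identifier values are invented:

```python
# A minimal sketch of the cookie mechanism: the server hands the browser a
# small piece of state, and the browser returns it on every later request.
# Header names are real HTTP; the identifier value is invented.

# First visit: the server's response assigns an identifier.
response_headers = {"Set-Cookie": "visitor_id=abc123; Path=/"}

# Every later visit: the browser sends the identifier back automatically,
# which is what lets a server (including a third-party ad server whose
# content is embedded in many pages) recognize the same user across requests.
request_headers = {"Cookie": "visitor_id=abc123"}

print(request_headers["Cookie"])  # visitor_id=abc123
```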
These experts were deeply concerned about the privacy implications of such
monitoring mechanisms, and at least in the case of cookies, there were
institutional efforts to design internet policies that would prohibit their invasive
capabilities to monitor and profile users.70 By 1996, the function of cookies had
become a contested public policy issue. Federal Trade Commission workshops
in 1996 and 1997 discussed proposals that would assign control of all personal
information to users by default with a simple automated protocol. Advertisers
bitterly contested this scheme, collaborating instead to avert government
regulation by forming a “self-regulating” association known as the Network
Advertising Initiative. Still, in June 2000 the Clinton administration banned
cookies from all federal websites, and by April 2001, three bills before Congress
included provisions to regulate cookies.71
Google brought new life to these practices. As had occurred at Ford a century
earlier, the company’s engineers and scientists were the first to conduct the entire
commercial surveillance symphony, integrating a wide range of mechanisms
from cookies to proprietary analytics and algorithmic software capabilities in a
sweeping new logic that enshrined surveillance and the unilateral expropriation
of behavioral data as the basis for a new market form. The impact of this
invention was just as dramatic as Ford’s. In 2001, as Google’s new systems to
exploit its discovery of behavioral surplus were being tested, net revenues
jumped to $86 million (more than a 400 percent increase over 2000), and the
company turned its first profit. By 2002, the cash began to flow and has never
stopped, definitive evidence that behavioral surplus combined with Google’s
proprietary analytics were sending arrows to their marks. Revenues leapt to $347
million in 2002, then $1.5 billion in 2003, and $3.5 billion in 2004, the year the
company went public.72 The discovery of behavioral surplus had produced a
stunning 3,590 percent increase in revenue in less than four years.
VII. The Secrets of Extraction
It is important to note the vital differences for capitalism in these two moments
of originality at Ford and Google. Ford’s inventions revolutionized production.
Google’s inventions revolutionized extraction and established surveillance
capitalism’s first economic imperative: the extraction imperative. The extraction
imperative meant that raw-material supplies must be procured at an ever-expanding scale. Industrial capitalism had demanded economies of scale in
production in order to achieve high throughput combined with low unit cost. In
contrast, surveillance capitalism demands economies of scale in the extraction of
behavioral surplus.
Mass production was aimed at new sources of demand in the early twentieth
century’s first mass consumers. Ford was clear on this point: “Mass production
begins in the perception of a public need.”73 Supply and demand were linked
effects of the new “conditions of existence” that defined the lives of my great-
grandparents Sophie and Max and other travelers in the first modernity. Ford’s
invention deepened the reciprocities between capitalism and these populations.
In contrast, Google’s inventions destroyed the reciprocities of its original
social contract with users. The role of the behavioral value reinvestment cycle
that had once aligned Google with its users changed dramatically. Instead of
deepening the unity of supply and demand with its populations, Google chose to
reinvent its business around the burgeoning demand of advertisers eager to
squeeze and scrape online behavior by any available means in the competition
for market advantage. In the new operation, users were no longer ends in
themselves but rather became the means to others’ ends.
Reinvestment in user services became the method for attracting behavioral
surplus, and users became the unwitting suppliers of raw material for a larger
cycle of revenue generation. The scale of surplus expropriation that was possible
at Google would soon eliminate all serious competitors to its core search
business as the windfall earnings from leveraging behavioral surplus were used
to continuously draw more users into its net, thus establishing its de facto
monopoly in Search. On the strength of Google’s inventions, discoveries, and
strategies, it became the mother ship and ideal type of a new economic logic
based on fortune-telling and selling—an ancient and eternally lucrative craft that
has fed on humanity’s confrontation with uncertainty from the beginning of the
human story.
It was one thing to proselytize achievements in production, as Henry Ford
had done, but quite another to boast about the continuous intensification of
hidden processes aimed at the extraction of behavioral data and personal
information. The last thing that Google wanted was to reveal the secrets of how
it had rewritten its own rules and, in the process, enslaved itself to the extraction
imperative. Behavioral surplus was necessary for revenue, and secrecy would be
necessary for the sustained accumulation of behavioral surplus.
This is how secrecy came to be institutionalized in the policies and practices
that govern every aspect of Google’s behavior onstage and offstage. Once
Google’s leadership understood the commercial power of behavioral surplus,
Schmidt instituted what he called the “hiding strategy.”74 Google employees
were told not to speak about what the patent had referred to as its “novel
methods, apparatus, message formats and/or data structures” or confirm any
rumors about flowing cash. Hiding was not a post hoc strategy; it was baked into
the cake that would become surveillance capitalism.
Former Google executive Douglas Edwards writes compellingly about this
predicament and the culture of secrecy it shaped. According to his account, Page
and Brin were “hawks,” insisting on aggressive data capture and retention:
“Larry opposed any path that would reveal our technological secrets or stir the
privacy pot and endanger our ability to gather data.” Page wanted to avoid
arousing users’ curiosity by minimizing their exposure to any clues about the
reach of the firm’s data operations. He questioned the prudence of the electronic
scroll in the reception lobby that displays a continuous stream of search queries,
and he “tried to kill” the annual Google Zeitgeist conference that summarizes the
year’s trends in search terms.75
Journalist John Battelle, who chronicled Google during the 2002–2004
period, described the company’s “aloofness,” “limited information sharing,” and
“alienating and unnecessary secrecy and isolation.”76 Another early company
biographer notes, “What made this information easier to keep is that almost none
of the experts tracking the business of the internet believed that Google’s secret
was even possible.”77 As Schmidt told the New York Times, “You need to win,
but you are better off winning softly.”78 The scientific and material complexity
that supported the capture and analysis of behavioral surplus also enabled the
hiding strategy, an invisibility cloak over the whole operation. “Managing search
at our scale is a very serious barrier to entry,” Schmidt warned would-be
competitors.79
To be sure, there are always sound business reasons for hiding the location of
your gold mine. In Google’s case, the hiding strategy accrued to its competitive
advantage, but there were other reasons for concealment and obfuscation. What
might the response have been back then if the public were told that Google’s
magic derived from its exclusive capabilities in unilateral surveillance of online
behavior and its methods specifically designed to override individual decision
rights? Google policies had to enforce secrecy in order to protect operations that
were designed to be undetectable because they took things from users without
asking and employed those unilaterally claimed resources to work in the service
of others’ purposes.
That Google had the power to choose secrecy is itself testament to the
success of its own claims. This power is a crucial illustration of the difference
between “decision rights” and “privacy.” Decision rights confer the power to
choose whether to keep something secret or to share it. One can choose the
degree of privacy or transparency for each situation. US Supreme Court Justice
William O. Douglas articulated this view of privacy in 1967: “Privacy involves
the choice of the individual to disclose or to reveal what he believes, what he
thinks, what he possesses.…”80
Surveillance capitalism lays claim to these decision rights. The typical
complaint is that privacy is eroded, but that is misleading. In the larger societal
pattern, privacy is not eroded but redistributed, as decision rights over privacy
are claimed for surveillance capital. Instead of people having the rights to decide
how and what they will disclose, these rights are concentrated within the domain
of surveillance capitalism. Google discovered this necessary element of the new
logic of accumulation: it must assert the rights to take the information upon
which its success depends.
The corporation’s ability to hide this rights grab depends on language as
much as it does on technical methods or corporate policies of secrecy. George
Orwell once observed that euphemisms are used in politics, war, and business as
instruments that “make lies sound truthful and murder respectable.”81 Google
has been careful to camouflage the significance of its behavioral surplus
operations in industry jargon. Two popular terms—“digital exhaust” and “digital
breadcrumbs”—connote worthless waste: leftovers lying around for the taking.82
Why allow exhaust to drift in the atmosphere when it can be recycled into useful
data? Who would think to call such recycling an act of exploitation,
expropriation, or plunder? Who would dare to redefine “digital exhaust” as
booty or contraband, or imagine that Google had learned how to purposefully
construct that so-called “exhaust” with its methods, apparatus, and data
structures?
The word “targeted” is another euphemism. It evokes notions of precision,
efficiency, and competence. Who would guess that targeting conceals a new
political equation in which Google’s concentrations of computational power
brush aside users’ decision rights as easily as King Kong might shoo away an
ant, all accomplished offstage where no one can see?
These euphemisms operate in exactly the same way as those found on the
earliest maps of the North American continent, in which whole regions were
labeled with terms such as “heathens,” “infidels,” “idolaters,” “primitives,”
“vassals,” and “rebels.” On the strength of those euphemisms, native peoples—
their places and claims—were deleted from the invaders’ moral and legal
equations, legitimating the acts of taking and breaking that paved the way for
church and monarchy.
The intentional work of hiding naked facts in rhetoric, omission, complexity,
exclusivity, scale, abusive contracts, design, and euphemism is another factor
that helps explain why, during Google’s breakthrough to profitability, few noticed
the foundational mechanisms of its success and their larger significance. In this
picture, commercial surveillance is not merely an unfortunate accident or
occasional lapse. It is neither a necessary development of information capitalism
nor a necessary product of digital technology or the internet. It is a specifically
constructed human choice, an unprecedented market form, an original solution to
emergency, and the underlying mechanism through which a new asset class is
created on the cheap and converted to revenue. Surveillance is the path to profit
that overrides “we the people,” taking our decision rights without permission and
even when we say “no.” The discovery of behavioral surplus marks a critical
turning point not only in Google’s biography but also in the history of
capitalism.
In the years following its IPO in 2004, Google’s spectacular financial
breakthrough first astonished and then magnetized the online world. Silicon
Valley investors had doubled down on risk for years, in search of that elusive
business model that would make it all worthwhile. When Google’s financial
results went public, the hunt for mythic treasure was officially over.83
The new logic of accumulation spread first to Facebook, which launched the
same year that Google went public. CEO Mark Zuckerberg had rejected the
strategy of charging users a fee for service as the telephone companies had done
in an earlier century. “Our mission is to connect every person in the world. You
don’t do that by having a service people pay for,” he insisted.84 In May 2007 he
introduced the Facebook platform, opening up the social network to everyone,
not just people with a college e-mail address. Six months later, in November, he
launched his big advertising product, Beacon, which would automatically share
transactions from partner websites with all of a user’s “friends.” These posts
would appear even if the user was not currently logged into Facebook, without
the user’s knowledge or an opt-in function. The howls of protest—from users but
also from some of Facebook’s partners such as Coca-Cola—forced Zuckerberg
to back down swiftly. By December, Beacon became an opt-in program. The
twenty-three-year-old CEO understood the potential of surveillance capitalism,
but he had not yet mastered Google’s facility in obscuring its operations and
intent.
The pressing question in Facebook’s headquarters—“How do we turn all
those Facebook users into money?”—still required an answer.85 In March 2008,
just three months after having to kill his first attempt at emulating Google’s logic
of accumulation, Zuckerberg hired Google executive Sheryl Sandberg to be
Facebook’s chief operating officer. The onetime chief of staff to US Treasury
Secretary Larry Summers, Sandberg had joined Google in 2001, ultimately
rising to be its vice president of global online sales and operations. At Google
she led the development of surveillance capitalism through the expansion of
AdWords and other aspects of online sales operations.86 One investor who had
observed the company’s growth during that period concluded, “Sheryl created
AdWords.”87
In signing on with Facebook, the talented Sandberg became the “Typhoid
Mary” of surveillance capitalism as she led Facebook’s transformation from a
social networking site to an advertising behemoth. Sandberg understood that
Facebook’s social graph represented an awe-inspiring source of behavioral
surplus: the extractor’s equivalent of a nineteenth-century prospector stumbling
into a valley that sheltered the largest diamond mine and the deepest gold mine
ever to be discovered. “We have better information than anyone else. We know
gender, age, location, and it’s real data as opposed to the stuff other people
infer,” Sandberg said. Facebook would learn to track, scrape, store, and analyze
UPI (user profile information) to fabricate its own targeting algorithms, and like Google it would not
restrict extraction operations to what people voluntarily shared with the
company. Sandberg understood that through the artful manipulation of
Facebook’s culture of intimacy and sharing, it would be possible to use
behavioral surplus not only to satisfy demand but also to create demand. For
starters, that meant inserting advertisers into the fabric of Facebook’s online
culture, where they could “invite” users into a “conversation.”88
VIII. Summarizing the Logic and Operations of Surveillance
Capitalism
With Google in the lead, surveillance capitalism rapidly became the default
model of information capitalism on the web and, as we shall see in coming
chapters, gradually drew competitors from every sector. This new market form
declares that serving the genuine needs of people is less lucrative, and therefore
less important, than selling predictions of their behavior. Google discovered that
we are less valuable than others’ bets on our future behavior. This changed
everything.
Behavioral surplus defines Google’s earnings success. In 2016, 89 percent of
the revenues of its parent company, Alphabet, derived from Google’s targeted
advertising programs.89 The scale of raw-material flows is reflected in Google’s
domination of the internet, processing over 40,000 search queries every second
on average: more than 3.5 billion searches per day and 1.2 trillion searches per
year worldwide in 2017.90
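The quoted volumes are internally consistent, as a quick back-of-the-envelope check confirms:

    # Sanity check on the rounded query-volume figures cited above.
    per_second = 40_000
    per_day = per_second * 60 * 60 * 24   # about 3.5 billion searches a day
    per_year = per_day * 365              # about 1.26 trillion a year
    print(f"{per_day:,} per day, {per_year:,} per year")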
On the strength of its unprecedented inventions, Google’s $400 billion
market value edged out ExxonMobil for the number-two spot in market
capitalization in 2014, only sixteen years after its founding, making it the
second-richest company in the world behind Apple.91 By 2016, Alphabet/Google
occasionally wrested the number-one position from Apple and was ranked
number two globally as of September 20, 2017.92
It is useful to stand back from this complexity to grasp the overall pattern and
how the puzzle pieces fit together:
1. The logic: Google and other surveillance platforms are sometimes
described as “two-sided” or “multi-sided” markets, but the mechanisms of
surveillance capitalism suggest something different.93 Google had discovered a
way to translate its nonmarket interactions with users into surplus raw material
for the fabrication of products aimed at genuine market transactions with its real
customers: advertisers.94 The translation of behavioral surplus from outside to
inside the market finally enabled Google to convert investment into revenue. The
corporation thus created out of thin air and at zero marginal cost an asset class of
vital raw materials derived from users’ nonmarket online behavior. At first those
raw materials were simply “found,” a by-product of users’ search actions. Later
those assets were hunted aggressively and procured largely through surveillance.
The corporation simultaneously created a new kind of marketplace in which its
proprietary “prediction products” manufactured from these raw materials could
be bought and sold.
The summary of these developments is that the behavioral surplus upon
which Google’s fortune rests can be considered as surveillance assets. These
assets are critical raw materials in the pursuit of surveillance revenues and their
translation into surveillance capital. The entire logic of this capital accumulation
is most accurately understood as surveillance capitalism, which is the
foundational framework for a surveillance-based economic order: a surveillance
economy. The big pattern here is one of subordination and hierarchy, in which
earlier reciprocities between the firm and its users are subordinated to the
derivative project of our behavioral surplus captured for others’ aims. We are no
longer the subjects of value realization. Nor are we, as some have insisted, the
“product” of Google’s sales. Instead, we are the objects from which raw
materials are extracted and expropriated for Google’s prediction factories.
Predictions about our behavior are Google’s products, and they are sold to its
actual customers but not to us. We are the means to others’ ends.
Industrial capitalism transformed nature’s raw materials into commodities,
and surveillance capitalism lays its claims to the stuff of human nature for a new
commodity invention. Now it is human nature that is scraped, torn, and taken for
another century’s market project. It is obscene to suppose that this harm can be
reduced to the obvious fact that users receive no fee for the raw material they
supply. That critique is a feat of misdirection that would use a pricing
mechanism to institutionalize and therefore legitimate the extraction of human
behavior for manufacturing and sale. It ignores the key point that the essence of
the exploitation here is the rendering of our lives as behavioral data for the sake
of others’ improved control of us. The remarkable questions here concern the
facts that our lives are rendered as behavioral data in the first place; that
ignorance is a condition of this ubiquitous rendition; that decision rights vanish
before one even knows that there is a decision to make; that there are
consequences to this diminishment of rights that we can neither see nor foretell;
that there is no exit, no voice, and no loyalty, only helplessness, resignation, and
psychic numbing; and that encryption is the only positive action left to discuss
when we sit around the dinner table and casually ponder how to hide from the
forces that hide from us.
2. The means of production: Google’s internet-age manufacturing process is
a critical component of the unprecedented. Its specific technologies and
techniques, which I summarize as “machine intelligence,” are constantly
evolving, and it is easy to be intimidated by their complexity. The same term
may mean one thing today and something very different in one year or in five
years. For example, Google has been described as developing and deploying
“artificial intelligence” since at least 2003, but the term itself is a moving target,
as capabilities have evolved from primitive programs that can play tic-tac-toe to
systems that can operate whole fleets of driverless cars.
Google’s machine intelligence capabilities feed on behavioral surplus, and
the more surplus they consume, the more accurate the prediction products that
result. Wired magazine’s founding editor, Kevin Kelly, once suggested that
although it seems like Google is committed to developing its artificial
intelligence capabilities to improve Search, it’s more likely that Google develops
Search as a means of continuously training its evolving AI capabilities.95 This is
the essence of the machine intelligence project. As the ultimate tapeworm, the
machine’s intelligence depends upon how much data it eats. In this important
respect the new means of production differs fundamentally from the industrial
model, in which there is a tension between quantity and quality. Machine
intelligence is the synthesis of this tension, for it reaches its full potential for
quality only as it approximates totality.
As more companies chase Google-style surveillance profits, a significant
fraction of global genius in data science and related fields is dedicated to the
fabrication of prediction products that increase click-through rates for targeted
advertising. For example, Chinese researchers employed by Microsoft Bing’s
research unit in Beijing published breakthrough findings in 2017. “Accurately
estimating the click-through rate (CTR) of ads has a vital impact on the revenue
of search businesses; even a 0.1% accuracy improvement in our production
would yield hundreds of millions of dollars in additional earnings,” they begin.
They go on to demonstrate a new application of advanced neural networks that
promises 0.9 percent improvement on one measure of identification and
“significant click yield gains in online traffic.”96 Similarly, a team of Google
researchers introduced a new deep-neural network model, all for the sake of
capturing “predictive feature interactions” and delivering “state-of-the-art
performance” to improve click-through rates.97 Thousands of contributions like
these, some incremental and some dramatic, equate to an expensive,
sophisticated, opaque, and exclusive twenty-first-century “means of
production.”
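The cited papers describe proprietary deep networks, but the underlying task is easy to state: estimate the probability that a given user-ad pairing yields a click. The following is a deliberately minimal sketch of that task using logistic regression rather than the published architectures; the feature values and training data are invented for illustration.

    import math

    # Toy data: each row is (feature vector, clicked?). In practice the
    # features would encode user and ad attributes; these are invented.
    data = [([1.0, 0.2, 0.7], 1), ([1.0, 0.9, 0.1], 0),
            ([1.0, 0.3, 0.8], 1), ([1.0, 0.8, 0.2], 0)]

    w = [0.0, 0.0, 0.0]   # one weight per feature
    lr = 0.5              # learning rate

    def predict(x):
        # Logistic model: squash a weighted sum into a click probability.
        z = sum(wi * xi for wi, xi in zip(w, x))
        return 1.0 / (1.0 + math.exp(-z))

    # Plain stochastic gradient descent on the prediction error.
    for _ in range(1000):
        for x, clicked in data:
            err = predict(x) - clicked
            for i in range(len(w)):
                w[i] -= lr * err * x[i]

    print(predict([1.0, 0.25, 0.75]))   # high estimated click probability
    print(predict([1.0, 0.85, 0.15]))   # low estimated click probability

The production systems differ vastly in scale and architecture, but the economics are the same: any model that nudges these probability estimates even fractionally closer to the truth translates directly into advertising revenue, which is why the 0.1 percent figure in the Bing paper matters.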
3. The products: Machine intelligence processes behavioral surplus into
prediction products designed to forecast what we will feel, think, and do: now,
soon, and later. These methodologies are among Google’s most closely guarded
secrets. The nature of its products explains why Google repeatedly claims that it
does not sell personal data. What? Never! Google executives like to claim their
privacy purity because they do not sell their raw material. Instead, the company
sells the predictions that only it can fabricate from its world-historic private
hoard of behavioral surplus.
Prediction products reduce risks for customers, advising them where and
when to place their bets. The quality and competitiveness of the product are a
function of its approximation to certainty: the more predictive the product, the
lower the risks for buyers and the greater the volume of sales. Google has
learned to be a data-based fortune-teller that replaces intuition with science at
scale in order to tell and sell our fortunes for profit to its customers, but not to
us. Early on, Google’s prediction products were largely aimed at sales of targeted
advertising, but as we shall see, advertising was the beginning of the surveillance
project, not the end.
4. The marketplace: Prediction products are sold into a new kind of market
that trades exclusively in future behavior. Surveillance capitalism’s profits derive
primarily from these behavioral futures markets. Although advertisers were the
dominant players in the early history of this new kind of marketplace, there is no
reason why such markets are limited to this group. The new prediction systems
are only incidentally about ads, in the same way that Ford’s new system of mass
production was only incidentally about automobiles. In both cases the systems
can be applied to many other domains. The already visible trend, as we shall see
in the coming chapters, is that any actor with an interest in purchasing
probabilistic information about our behavior and/or influencing future behavior
can pay to play in markets where the behavioral fortunes of individuals, groups,
bodies, and things are told and sold (see Figure 2).
Figure 2: The Discovery of Behavioral Surplus
CHAPTER FOUR
THE MOAT AROUND THE CASTLE
The hour of birth their only time in college,
They were content with their precocious knowledge,
To know their station and be right forever.
—W. H. AUDEN
SONNETS FROM CHINA, I
I. Human Natural Resources
Google’s former CEO Eric Schmidt credits Hal Varian’s early examination of the
firm’s ad auctions with providing the eureka moment that clarified the true
nature of Google’s business: “All of a sudden, we realized we were in the
auction business.”1 Larry Page is credited with a very different and far more
profound answer to the question “What is Google?” Douglas Edwards recounts a
2001 session with the founders that probed their answers to that precise query. It
was Page who ruminated, “If we did have a category, it would be personal
information.… The places you’ve seen. Communications.… Sensors are really
cheap.… Storage is cheap. Cameras are cheap. People will generate enormous
amounts of data.… Everything you’ve ever heard or seen or experienced will
become searchable. Your whole life will be searchable.”2
Page’s vision perfectly reflects the history of capitalism, marked by taking
things that live outside the market sphere and declaring their new life as market
commodities. In historian Karl Polanyi’s 1944 grand narrative of the “great
transformation” to a self-regulating market economy, he described the origins of
this translation process in three astonishing and crucial mental inventions that he
called “commodity fictions.” The first was that human life could be subordinated
to market dynamics and reborn as “labor” to be bought and sold. The second was
that nature could be translated into the market and reborn as “land” or “real
estate.” The third was that exchange could be reborn as “money.”3 Nearly eighty
years earlier, Karl Marx had described the taking of lands and natural resources
as the original “big bang” that ignited modern capital formation, calling it
“primitive accumulation.”4
The philosopher Hannah Arendt complicated both Polanyi’s and Marx’s
notion. She observed that primitive accumulation wasn’t just a one-time primal
explosion that gave birth to capitalism. Rather, it is a recurring phase in a
repeating cycle as more aspects of the social and natural world are subordinated
to the market dynamic. Marx’s “original sin of simple robbery,” she wrote, “had
eventually to be repeated lest the motor of capital accumulation suddenly die
down.”5
In our time of pro-market ideology and practice, this cycle has become so
pervasive that we eventually fail to notice its audacity or contest its claims. For
example, you can now “purchase” human blood and organs, someone to have
your baby or stand in line for you or hold a public parking space, a person to
comfort you in your grief, and the right to kill an endangered animal. The list
grows longer each day.6
Social theorist David Harvey builds on Arendt’s insight with his notion of
“accumulation by dispossession”: “What accumulation by dispossession does is
to release a set of assets… at very low (and in some instances zero) cost.
Overaccumulated capital can seize hold of such assets and immediately turn
them to profitable use.” He adds that entrepreneurs who are determined to “join
the system” and enjoy “the benefits of capital accumulation” are often the ones
who drive this process of dispossession into new, undefended territories.7
Page grasped that human experience could be Google’s virgin wood, that it
could be extracted at no extra cost online and at very low cost out in the real
world, where “sensors are really cheap.” Once extracted, it is rendered as
behavioral data, producing a surplus that forms the basis of a wholly new class
of market exchange. Surveillance capitalism originates in this act of digital
dispossession, brought to life by the impatience of over-accumulated investment
and two entrepreneurs who wanted to join the system. This is the lever that
moved Google’s world and shifted it toward profit.
Today’s owners of surveillance capital have declared a fourth fictional
commodity expropriated from the experiential realities of human beings whose
bodies, thoughts, and feelings are as virgin and blameless as nature’s once-plentiful meadows and forests before they fell to the market dynamic. In this
new logic, human experience is subjugated to surveillance capitalism’s market
mechanisms and reborn as “behavior.” These behaviors are rendered into data,
ready to take their place in a numberless queue that feeds the machines for
fabrication into predictions and eventual exchange in the new behavioral futures
markets.
The commodification of behavior under surveillance capitalism pivots us
toward a societal future in which market power is protected by moats of secrecy,
indecipherability, and expertise. Even when knowledge derived from our
behavior is fed back to us as a quid pro quo for participation, as in the case of so-called “personalization,” parallel secret operations pursue the conversion of
surplus into sales that point far beyond our interests. We have no formal control
because we are not essential to this market action.
In this future we are exiles from our own behavior, denied access to or
control over knowledge derived from its dispossession by others for others.
Knowledge, authority, and power rest with surveillance capital, for which we are
merely “human natural resources.” We are the native peoples now whose tacit
claims to self-determination have vanished from the maps of our own
experience.
Digital dispossession is not an episode but a continuous coordination of
action, material, and technique, not a wave but the tide itself. Google’s leaders
understood from the start that their success would require continuous and
pervasive fortifications designed to defend their “repetitive sin” from contest and
constraint. They did not want to be bound by the disciplines typically imposed
by the private market realm of corporate governance or the democratic realm of
law. In order for them to assert and exploit their freedom, democracy would have
to be kept at bay.
“How did they get away with it?” It is an important question that we will
return to throughout this book. One set of answers depends on understanding the
conditions of existence that create and sustain demand for surveillance
capitalism’s services. This theme was summarized in Chapter 2’s discussion of
the “collision.” A second set of answers depends upon a clear grasp of
surveillance capitalism’s basic mechanisms and laws of motion. This exploration
has begun and will continue through Part II.
A third set of answers requires an appreciation of the political and cultural
circumstances and strategies that advanced surveillance capitalism’s claims and
protected them from fatal challenge. It is this third domain that we pursue in the
sections that follow. No single element is likely to have done the job, but
together a convergence of political circumstances and proactive strategies helped
enrich the habitat in which this mutation could root and flourish. These include
(1) the relentless pursuit and defense of the founders’ “freedom” through
corporate control and an insistence on the right to lawless space; (2) the shelter
of specific historical circumstances, including the policies and juridical
orientation of the neoliberal paradigm and the state’s urgent interest in the
emerging capabilities of behavioral surplus analysis and prediction in the
aftermath of the September 2001 terror attacks; and (3) the intentional
construction of fortifications in the worlds of politics and culture, designed to
protect the kingdom and deflect any close scrutiny of its practices.
II. The Cry Freedom Strategy
One way that Google’s founders institutionalized their freedom was through an
unusual structure of corporate governance that gave them absolute control over
their company. Page and Brin were the first to introduce a dual-class share
structure to the tech sector with Google’s 2004 public offering. The two would
control the super-class “B” voting stock, shares that each carried ten votes, as
compared to the “A” class of shares, which each carried only one vote. This
arrangement inoculated Page and Brin from market and investor pressures, as
Page wrote in the “Founder’s Letter” issued with the IPO: “In the transition to
public ownership, we have set up a corporate structure that will make it harder
for outside parties to take over or influence Google.… The main effect of this
structure is likely to leave our team, especially Sergey and me, with increasingly
significant control over the company’s decisions and fate, as Google shares
change hands.”8
In the absence of standard checks and balances, the public was asked to
simply “trust” the founders. Schmidt would voice this theme on their behalf
whenever challenged on the subject. For example, at the Cato Institute in
December 2014, Schmidt was asked about the possibility of abuse of power at
Google. He simply assured the audience of the continuity of the firm’s dynastic
line. Page had succeeded Schmidt as CEO in 2011, and the current leaders would
handpick future leaders: “We’re fine with Larry… same circus, same clowns…
it’s the same people… all of us who built Google have the same view, and I am
sure our successors will have the same view.”9
By that year, Page and Brin had a 56 percent majority vote, which they used
to impose a new tri-class share structure, adding a “C” class of zero-voting-rights stock.10 As Bloomberg Businessweek observed, “The neutered ‘C’ shares
ensure Page and Brin retain control far into the future.…”11 By 2017, Brin and
Page controlled 83 percent of the super-voting-class “B” shares, which translated
into 51 percent of the voting power.12
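The arithmetic behind holding a minority of the shares while keeping a majority of the votes is straightforward. A sketch with invented share counts (only the 10:1 vote ratio is taken from the A/B design described above; the counts themselves are hypothetical):

    # Hypothetical share counts; only the 10:1 vote ratio comes from
    # the dual-class design described in the text.
    founders_b = 50_000_000    # class B: 10 votes each, founder-held
    public_a   = 280_000_000   # class A: 1 vote each
    public_c   = 350_000_000   # class C: 0 votes

    founder_votes = founders_b * 10
    total_votes = founder_votes + public_a * 1 + public_c * 0

    print(founders_b / (founders_b + public_a + public_c))  # ~7% of shares
    print(founder_votes / total_votes)                      # ~64% of votes

A small fraction of the equity thus secures durable majority control, and the zero-vote “C” class lets the company issue new stock without diluting that control at all.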
Many Silicon Valley founders followed Google’s lead. By 2015, 15 percent
of IPOs were introduced with a dual-class structure, compared to 1 percent in
2005, and more than half of those were for technology companies.13 Most
significantly, Facebook’s 2012 IPO featured a two-tiered stock structure that left
founder Mark Zuckerberg in control of voting rights. The company then issued
nonvoting class “C” shares in 2016, solidifying Zuckerberg’s personal control
over every decision.14
While financial scholars and investors debated the consequences of these
share structures, absolute corporate control enabled the Google and Facebook
founders to aggressively pursue acquisitions, establishing an arms race in two
critical arenas.15 State-of-the-art manufacturing depended on machine
intelligence, compelling Google and later Facebook to acquire companies and
talent representing its disciplines: facial recognition, “deep learning,” augmented
reality, and more.16 But machines are only as smart as the volume of their diet
allows. Thus, Google and Facebook vied to become the ubiquitous net
positioned to capture the swarming schools of behavioral surplus flowing from
every computer-mediated direction. To this end the founders paid outsized
premiums for the chance to corner behavioral surplus through acquisitions of an
ever-expanding roster of key supply routes.
In 2006, for example, just two years after its IPO, Google paid $1.65 billion
for a one-and-a-half-year-old startup that had never made any money and was
besieged by copyright-infringement lawsuits: YouTube. While the move was
called “crazy” and the company was criticized for the outsized price tag,
Schmidt went on the offensive, freely admitting that Google had paid a $1
billion premium for the video-sharing site, though saying little about why. By
2009, a canny Forrester Research media analyst had unpacked the mystery: “It
actually becomes worth the additional value because Google can tie all of its
advertising expertise and search traffic into YouTube… it ensures that these
millions and millions of viewers are coming to a Google-owned site rather than
someone else’s site.… As a loss leader goes, if it never makes its money back,
it’s still going to be worth it.”17
Facebook’s Zuckerberg pursued the same strategies, paying “astronomical”
prices for a “fast and furious” parade of typically unprofitable startups like
virtual reality firm Oculus ($2 billion) and the messaging application WhatsApp
($19 billion), thus ensuring Facebook’s ownership of the gargantuan flows of
human behavior that would pour through these pipes. Consistent with the
extraction imperative, Zuckerberg told investors that he would not consider
driving revenue until the service reaches “billions” of users.18 As one tech
journalist put it, “There’s no real need for Zuckerberg to chat with the board…
there’s no way for shareholders to check Zuckerberg’s antics.…”19
It’s worth noting that an understanding of this logic of accumulation would
have usefully contributed to the EU Commission’s deliberations on the
WhatsApp acquisition, which was permitted based on assurances that data flows
from the two businesses would remain separate. The commission would discover
later that the extraction imperative and its necessary economies of scale in
supply operations compel the integration of surplus flows in the quest for better
prediction products.20
Google’s founders constructed a corporate form that gave them absolute
control in the market sphere, and they also pursued freedom in the public sphere.
A key element of Google’s freedom strategy was its ability to discern, construct,
and stake its claim to unprecedented social territories that were not yet subject to
law. Cyberspace is an important character in this drama, celebrated on the first
page of Eric Schmidt and Jared Cohen’s book on the digital age: “The online
world is not truly bound by terrestrial laws… it’s the world’s largest ungoverned
space.”21 They celebrate their claim to operational spaces beyond the reach of
political institutions: the twenty-first-century equivalent of the “dark continents”
that drew nineteenth-century European speculators to their shores.
Hannah Arendt’s examination of British capitalists’ export of over-accumulated capital to Asia and Africa in the mid-nineteenth century helps to
develop this analogy: “Here, in backward regions without industries and political
organizations, where violence was given more latitude than in any Western
country, the so-called laws of capitalism were actually allowed to create realities.
… The secret of the new happy fulfillment was precisely that economic laws no
longer stood in the way of the greed of the owning classes.”22
This kind of lawlessness has been a critical success factor in the short history
of surveillance capitalism. Schmidt, Brin, and Page have ardently defended their
right to freedom from law even as Google grew to become what is arguably the
world’s most powerful corporation.23 Their efforts have been marked by a few
consistent themes: that technology companies such as Google move faster than
the state’s ability to understand or follow, that any attempts to intervene or
constrain are therefore fated to be ill-conceived and stupid, that regulation is
always a negative force that impedes innovation and progress, and that
lawlessness is the necessary context for “technological innovation.”
Schmidt, Page, and Brin have each been outspoken on these themes. In a
2010 interview with the Wall Street Journal, Schmidt insisted that Google
needed no regulation because of strong incentives to “treat its users right.”24 In
2011 Schmidt cited former Intel CEO Andy Grove’s antidemocratic formula to a
Washington Post reporter, commenting that Grove’s idea “works for me.”
Google was determined to protect itself from the slow pace of democratic
institutions:
This is an Andy Grove formula.… “High tech runs three-times faster than normal businesses. And
the government runs three-times slower than normal businesses. So we have a nine-times gap.…
And so what you want to do is you want to make sure that the government does not get in the way
and slow things down.”25
Business Insider covered Schmidt’s remarks at the Mobile World Congress
that same year, writing, “When asked about government regulation, Schmidt said
that technology moves so fast that governments really shouldn’t try to regulate it
because it will change too fast, and any problem will be solved by technology.
‘We’ll move much faster than any government.’”26
Both Brin and Page are even more candid in their contempt for law and
regulation. CEO Page surprised a convocation of developers in 2013 by
responding to questions from the audience, commenting on the “negativity” that
hampered the firm’s freedom to “build really great things” and create
“interoperable” technologies with other companies: “Old institutions like the law
and so on aren’t keeping up with the rate of change that we’ve caused through
technology.… The laws when we went public were 50 years old. A law can’t be
right if it’s 50 years old, like it’s before the internet.” When asked his thoughts
on how to limit “negativity” and increase “positivity,” Page reflected, “Maybe
we should set aside a small part of the world… as technologists we should have
some safe places where we can try out some new things and figure out what is
the effect on society, what’s the effect on people, without having to deploy kind
of into the normal world.”27
It is important to understand that surveillance capitalists are impelled to
pursue lawlessness by the logic of their own creation. Google and Facebook
vigorously lobby to kill online privacy protection, limit regulations, weaken or
block privacy-enhancing legislation, and thwart every attempt to circumscribe
their practices…
