Communications Question

The notes for each reading will be prefaced by the full citation for the article and include the thesis of the reading (The thesis will appear near the top of the page and be marked as “Thesis:”).


Received: 17 February 2022
Revised: 12 June 2022
Accepted: 5 August 2022
DOI: 10.1111/soc4.13032
REVIEW ARTICLE
What shapes the internet? An overview of social
science and interdisciplinary perspectives
Pauline Hoebanx
Sociology and Anthropology, Concordia
University, Montreal, Quebec, Canada
Correspondence
Pauline Hoebanx, Social and Cultural Analysis,
Concordia University, Montreal, Canada.
Email: pauline.hoebanx@mail.concordia.ca
Funding information
Fonds de Recherche du Québec-Société
et Culture, Grant/Award Number: B2;
Fonds de Recherche du Québec – Nature et
Technologies, Grant/Award Number: PBEEE
Doctoral Grant (1W)
Abstract
This overview of the social science and interdisciplinary
research about the internet focuses on how social, political, and economic contexts affect the internet. Drawing on
over 50 sources from 1964 to 2020, this paper categorizes
the history of internet research into three periods: (1) the
internet as a virtual reality, spanning from the 1990s to
2000; (2) the internet as a mirror of society, from 2000 to
2010; and (3) the privatized internet, from 2010 onwards.
The internet of the first phase was a new—virtual—reality
where geographical distances were abolished, and communities of strangers were coming together. Scholarship of
the first phase was characterized by speculations about the
future of the technology. The internet of the second phase,
accessible to a broader public, also became a tool of surveillance for governments and corporations. Internet studies
became more descriptive. The third phase was marked by
the widespread use of proprietary algorithms to collect user
data. Scholars of the third phase raised concerns about
highly privatized digital landscapes. Far from agreeing with
early dystopian scholarship about the internet, contemporary scholarship offers a nuanced understanding of the relationship between the internet and its political, economic,
and social contexts.
KEYWORDS
algorithms, datafication, internet governance, internet history,
intersectionality, literature review, platforms
© 2022 John Wiley & Sons Ltd.
Sociology Compass. 2022;16:e13032.
https://doi.org/10.1111/soc4.13032
wileyonlinelibrary.com/journal/soc4
1 | INTRODUCTION
The internet is a recent invention, yet it has undergone significant transformations since its creation. Scholars who
attempt to describe the internet’s complex history usually divide it into three or four phases, based on criteria
such as developments in internet regulation (Palfrey, 2010), new research interests in the field of internet studies
(Wellman, 2004), or popular metaphors used to describe the internet (Cavanagh, 2010).
Previous reviews have addressed the transformations in Internet research objects (Wellman, 2004), research
methods (Cavanagh, 2010), or followed the development of specific technologies, such as surveillance advertising
(Crain, 2021). This article reveals that scholars often focus on the internet’s impact on society, but more rarely on
the co-constitutive relationship between the internet and society. This critical overview of the scholarship about the
internet seeks to fill this gap in the literature. I focus here on research that reveals how social, political, and economic
contexts affect the internet. I look at research that has studied the internet as an object, rather than its social implications, highlighting the changes in research interests over time as the internet is transformed by the society into
which it is embedded. The following questions guide this review: (1) In what ways have social, political, and economic
contexts been studied in the social sciences since the 1990s? and (2) How have these contexts historically affected
the internet?
To answer these questions, I consider the historical development of internet studies by dividing them into three
phases, based on existing categorizations of internet history (e.g., Cavanagh, 2010; Curran, 2016b; Goldsmith &
Wu, 2006b; Mager, 2012), as well as on the observed similarities in research interests and methods between the texts
in my sample. Internet research does not always neatly fit in these phases and there are overlaps between phases.
Rather than a static typology, these phases are guidelines to help make sense of the transformations of the Internet
and internet research.
The first phase, the internet as a virtual reality, from the 1990s to 2000, begins with the explosion of social
science research about the internet in the 1990s (Cavanagh, 2010). The internet of the first phase was a publicly
funded American network developed by scholars and the U.S. military (Curran, 2016b). Early search engines were
simply academic directories (Mager, 2012). There was little to no legal regulation and users widely believed that the
internet was ungovernable (Palfrey, 2010). The internet was often described as an “information superhighway,” or a
“cyberspace” (Cavanagh, 2010, p. 50).
The second phase, the internet as a mirror of society, spans 2000 to 2010, when the internet became accessible
to a broader public (Curran, 2016b), but it also became a tool of surveillance for governments (Palfrey, 2010), as well as
for corporations (Mager, 2012). Internet studies reflected this change by becoming more descriptive and less predictive (Wellman, 2004). Spatial metaphors were shelved in favor of the metaphor of the network (Cavanagh, 2010). This
phase also saw the emergence of social media platforms.
The third phase, the privatized internet, from 2010 onwards, is characterized by the consolidation of influence
and resources in the hands of a few multinational corporations (Srnicek, 2017). For scholars, certain aspects of the
internet became difficult to study because of the culture of secrecy surrounding data collection and algorithms
(Gillespie, 2018; Zuboff, 2015). Researchers had to adapt their methods to study what had become a ‘black box
society’ (Pasquale, 2015).
The texts included in this purposive overview of the literature were selected according to the following criteria:
they focus on the internet itself as an object of study, were published in English since the early 1990s (an exception
was made for Marshall McLuhan's Understanding media, 1964), and were widely cited by other scholars in the field.
As a result, this review is focused on the Western context. Many sources are not considered here, but the aim of this
article is to provide an overview of the field and its historical transformations.
17519020, 2022, 10, Downloaded from https://compass.onlinelibrary.wiley.com/doi/10.1111/soc4.13032 by Tulane University, Wiley Online Library on [04/05/2023]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
2 | 1990S–2000S: THE INTERNET AS A VIRTUAL REALITY
The early internet held the promise of a better future. It was a new—virtual—reality where geographical distances
were abolished, and communities of strangers were coming together (Curran et al., 2016). Some scholars believed it
would allow individuals to take back power from institutions (Benkler, 2006), break down barriers between nation
states (Negroponte, 1995) and bring an end to the monopoly of mass media conglomerates (De Sola Pool, 1983).
The internet of the 1990s was only accessible to tech-savvy users (Negroponte, 1995), filled with chatrooms
(Rheingold, 1993) and multiplayer worlds (Turkle, 1995), and believed to be ungovernable (Palfrey, 2010). Some
studies in this phase were techno-deterministic (e.g., Negroponte, 1995), positing that advanced technology would
transform existing political, economic, and social regimes, and solve all human problems (Vaidhyanathan, 2012).
Others were more cautious, calling for more tempered expectations for this new communications medium (Gillespie
& Robins, 1989; Jones, 1995, 1997).
These early texts are often retrospectively characterized as utopian or dystopian (e.g. Cavanagh, 2010;
Gillespie, 2018; Vaidhyanathan, 2012). However, this simplified binary fails to account for the uncertainty expressed
by these early authors, who often identified potential obstacles to the realization of their predictions, such as struggles over the control of the Internet (Rheingold, 1993) or the difficult adaptation of legal systems to the new technology (De Sola Pool, 1983; Negroponte, 1995). Texts of the first phase focused on speculations about the distant
future, virtual communities, the emergence of digital networks, and policy concerns. It is to these themes that I now
turn.
2.1 | The distant future
Some early internet scholars speculated about the future of the internet, a technology brimming with potential—
though not all agreed (Stoll, 1995). The texts that would later be labeled as utopian or dystopian were often drawn
from the field of information technology studies. They usually describe an inevitable future, minimizing human influence on and agency over the course of the technological revolution (Winner, 1983/2014). These texts offered the
most far-reaching predictions about the future of the internet and were often based on personal accounts.
Being digital (Negroponte, 1995), Silicon snake oil (Stoll, 1995), and The future does not compute (Talbott, 1995)
were all published in the same year but predict different futures for the Internet. Negroponte (1995) argued that the
computer would transform society, while Stoll (1995) predicted that it would have no impact at all. Talbott (1995)
believed that computers would reproduce the same biases as the societies in which they were created.
Negroponte (1995) is often cited as an early example of techno-determinism (e.g. Farivar, 2011; Jenkins, 2014;
Pariser, 2011). In his book Being digital (1995), Negroponte imagined that computers would act as editors, producing the
‘Daily Me,’ a newspaper-like collection of content tailored to its user’s personal taste. Like other techno-determinists,
Negroponte also believed that the popularization of the internet would bring an end to national borders and transform society. Despite these utopian predictions, Negroponte (1995) did recognize some obstacles to the realization
of a truly digital world, such as material costs, copyright laws, and the risk of digital monopolies.
Stoll’s Silicon snake oil (1995) is a rare example of a truly pessimistic view of the internet. The author argued that
the internet was overhyped and that it was a detriment to education and critical thinking. For him, the internet of
1995 had reached its peak. E-commerce, e-books, and online news were doomed to fail. Aside from brash comments
about the future, Stoll’s book is interesting for its critique of the imperfect machine. Stoll (1995) focuses on the failures of computers and the internet, a topic that would be taken up again by algorithm studies in the 2010s. However,
technological limitations, glitches, and failures in the 2010s result in mundane user experiences rather than in the end
of the internet as predicted by Stoll (1995).
In The future does not compute, Talbott (1995) also discussed the imperfection of computers, though he did
not attribute failure to technological limitation. Rather, machines are limited because they are reflections of their
creators. For Talbott, the computer is the recreation of human intelligence, stripped of everything but its most mechanistic functions. As such, Talbott (1995) warned that we must fix biases within ourselves if we do not want computers
to perpetuate them.
Despite the speculative nature of these texts, their insights about personalization, failure, and bias should not be
overlooked, as these early accounts continue to inform our contemporary imaginations about the internet.
2.2 | Virtual communities
Virtual communities were another central theme in early Internet research. By collapsing geographic barriers in
unprecedented ways (Gillespie & Robins, 1989), the internet had the potential to create entirely new community
dynamics and social interactions. Though it predates the internet, Marshall McLuhan’s Understanding media (1964)
is often cited in early studies about virtual communities (e.g., Poster, 2001; Rheingold, 1993; Terranova, 2004).
McLuhan’s central thesis is that ‘the medium is the message’ (1964, p. 8), a crucial tenet for scholars of the internet,
where content is constantly updated, transformed, and circulated. For McLuhan, the popularization of electronic
technologies would lead to a ‘global village’ (1964, p. 106), where humanity would reach a level of understanding that
would transcend the need for words. He predicted that new technologies of communication would abolish time and
space, breaking down the barriers separating communities from each other all over the world.
Many predictions from this first phase of scholarship, such as those of Poster (2001) and Rheingold (1993), where
the internet is portrayed as a free, ungovernable, and global community, were based on McLuhan's (1964) ideas. Later
scholarship would revisit McLuhan’s global village, giving more importance to local influences on the global network
(e.g., Farivar, 2011; Goldsmith & Wu, 2006b). Other scholars questioned whether the concept of community itself
would be transformed by the new medium (e.g., Jones, 1998).
Studies about virtual communities often drew on sociological and anthropological methodologies—participant
observation, ethnography—and poststructuralist theorists, such as Deleuze, Lacan, and Guattari, to make sense of
this new technology (e.g., Poster, 1990; Rheingold, 1993; Turkle, 1995). Scholars working with sociological theories
often depicted the internet as an object affected by the offline world. This can be attributed, at least in part, to the
social-constructionist branch of sociological theory where the co-constituting influence of society and technology
has long been recognized.
Rheingold’s The virtual community (1993) and Turkle’s Life on the screen (1995) were ethnographies conducted in
virtual communities. While Turkle examined how the self is constituted through language and technology, Rheingold
questioned whether virtual communities could help revitalize democracy. Both authors rejected the then-prevalent
metaphor of cyberspace. For Turkle (1995), the internet is akin to a laboratory where users can constitute and reconstitute the self as a multiple and distributed system, through the object of the computer. The constitution of the
online self through language reflects the landscape of the internet at the time: text-based chatrooms, forums, and
multiplayer games, with minimal graphical interface.
Rheingold (1993) compared the internet to a petri dish, where communities grow in an organic and unpredictable manner, albeit confined to the constraints of the dish. Rheingold foresaw the tension between early users
of the internet and the vested interests of corporations, and the power both had to change the face of the internet. Turkle and Rheingold portrayed Internet users as having agency in relation to the future of the internet, unlike
techno-determinist scholars associated with the field of information studies such as Negroponte (1995).
2.3 | Digital networks
A. Gillespie and Robins (1989) and Castells (1996–1998) focused on the structure of the Internet, rather than user
experience. In The information age (1996–1998), Castells explains his theory of the network society. Castells defines
a network as ‘a set of interconnected nodes’ (2004, p. 3). For Castells (1996-1998), a network has no center, and
its nodes are connected by ‘flows,’ or streams of information. Prior to electronic communications, the efficiency of
networks was limited by their scale. With electronic communications, network societies expanded, benefitting from
the flexibility, scalability, and survivability of these new technological environments.
In a similar fashion, Gillespie and Robins (1989) showed the influence of social, economic, and political contexts
on the design and implementation of new communications technologies. The authors argued that digital communications networks were not public "electronic highways" democratizing access to information. Rather, the new networks
were affected by the same center-periphery models as previous communications technologies, but on a global scale,
and were composed of “proprietary systems available only to an authorized group of end-users” (p. 12).
While subsequent scholars would recognize the abundant empirical work conducted by scholars of the first
phase, they criticized its lack of theoretical innovation (Cavanagh, 2010; Jones, 2006).
2.4 | Policy concerns
De Sola Pool (1983) analyzes the American legal apparatus that controls the mass media and its potential impact
on digital technologies. His book is an early example of a study surveying the influence of offline institutions over
the internet. In Technologies of freedom, De Sola Pool (1983) described computers as technologies with the inherent
potential to increase freedom and openness in a society, limited only by policy. The author showed how conditional
freedom was granted to each of the major forms of mass media (print, telephone, telegraph, and broadcasting). He
argued that similar conditional freedom needed to be established for new electronic technologies, to avoid an online
landscape ruled by monopolies. De Sola Pool (1983) feared that policy makers would be reluctant to solve conflicts
of monopoly, copyright, and privacy with new methods designed for electronic technologies. This is
one of the first studies that recognized the effect that the law could have on the future of electronic technologies.
2.5 | Summary
The internet of the 1990s was a government-funded space that had yet to be integrated into the economy (Goldsmith
& Wu, 2006a), populated mostly by English-speaking users, often academics (Rheingold, 1993), and a space in which
media conglomerates had yet to invest (De Sola Pool, 1983). The early internet was shaped by its creators and its
techno-savvy, enthusiastic users. Scholars of the first phase placed their hopes in the unrealized potential of this
new medium, yet raised important concerns about the technological, geographical, and legal limitations this new
technology was bound to face.
3 | 2000S–2010S: THE INTERNET AS A MIRROR OF SOCIETY
The internet of the 2000s, no longer a distinct, digital reality, had become integrated into economic, legal, and
governmental systems. A series of events marked the transition from the internet as a cyberspace to the internet as a
mirror of society. The U.S.A. lifted its ban on the commercialization of the internet in 1991 (Curran, 2016b). In 1994,
AT&T commissioned the first clickable ad, ushering in the era of ‘impression’-based revenue models (Introna, 2016).
The U.S.A. passed the Internet Tax Freedom Act in 1998, preventing taxes on internet access and e-commerce,
marking the beginning of the race to commercialize the internet (Crain, 2021). In 2000, Yahoo! lost a case against
French courts and was required to filter their content to respect national laws (Goldsmith & Wu, 2006b). From then
on, geo-localized filtering of content became common practice.
Research of this era went from being primarily speculative to descriptive and increasingly interdisciplinary
(Cavanagh, 2010), often focusing on the shift of power from individual users to corporations. Core debates in the
field began to emerge, such as those relating to network society, community, democracy, governance, and capitalism.
I address these in what follows.
3.1 | Network society
Inquiries about the structure of the internet were developed in the second phase of internet studies. They built
on Castells's concept of the network society (1996–1998) from the first phase but incorporated empirical observations from the second to support the theory (Benkler, 2006; Terranova, 2004). In other words, there is an overlap
between the first and second phases of scholarship.
Terranova (2004) described network culture as a way to think about the interconnections of our global culture,
networks of power, and the influence of the analog world on the internet. While she did not deny the importance
of local cultures, she envisioned a global culture, reminiscent of McLuhan’s global village (1964). She argued that
the development of the internet is tied to the development of late postindustrial societies and is integrated into the
economy of late capitalism.
Unlike Terranova (2004), Benkler (2006) believed that in the early 2000s, the internet had not yet been incorporated into the capitalist economy. He hoped that the nonproprietary modes of production that flourished online, with
projects such as Wikipedia and the Search for Extraterrestrial Intelligence, could resist the capitalist takeover of the
internet. However, he warned that old actors, using tools such as proprietary law, could tip the scales in their favor.
Benkler’s argument is similar to De Sola Pool’s (1983), in that they both consider policy to be a limitation to the full
development of information technologies as technologies of freedom (De Sola Pool, 1983) or as tools of nonmarket,
collaborative production (Benkler, 2006).
Drawing on Castells's theory of the network society, Terranova (2004) and Benkler (2006) still held some of the beliefs
characterizing the first phase, such as the idea of a global culture or of a separate digital economy. However, they
also recognized that entire systems—such as copyright policies, and capitalism more generally—must be dismantled
to fully realize the internet’s potential: the existence of the internet alone was not enough.
3.2 | Community
Baym (2015) and boyd’s (2014) studies both focused on interpersonal relationships online. Baym (2015) argued
that interpersonal relationships—both online and off—are shaped by cultural forces. boyd (2014) held a similar position, defining networked publics as publics created by the affordances of networked technologies, but also as the
imagined communities that emerge from these networked spaces. In other words, these communities exist beyond
the material reality of the network. Baym (2015) and boyd (2014) both showed that the internet is an avenue for
personal connection but is not the driving force behind communities: users are. This departs from the observations
of earlier ethnographies, where online communities were described as distinct from the analog world and insulated
from its influences. Baym and boyd’s studies are indicative of the changes undergone by the internet of the 2000s: its
access no longer required high levels of technical knowledge, it was increasingly popular, and its use was becoming
normalized. The internet was no longer an exceptional technology that spontaneously created communities, but a
communication technology that facilitated them.
3.3 | Democracy
At the beginning of the new millennium, it became clear that the Internet would neither be the saving grace of
democracy nor the demise of dictatorships (Curran, 2016a). Deibert et al. (2010) found that states no longer hid that
they were regulating the Internet. In one of the most rigorous empirical studies that characterizes this phase, the
researchers systematically measured the magnitude of content filtering in 36 countries by evaluating whether users
around the world could connect to the same list of given websites.
Studies about democracy in the second phase rejected the techno-determinist idea that the internet would
revive and spread democracy. These studies were published at a time when the internet was increasingly controlled
by national governments, and at a time when this control was becoming visible to users, marking an end to the
rule-free cyberspace of the 1990s.
3.4 | Governance
Studies of the second phase indicate that the internet was in fact controllable. In his typology of modes of internet
regulation, Palfrey (2010) argued that researchers should no longer ask if the internet can be regulated. Instead, they
need to ask how it is being regulated. In agreement with Palfrey (2010), Goldsmith and Wu's (2006b) book Who controls the internet? is a detailed
historical account of the transformation of the internet from a space challenging nation-state rule to a place where
governments assert their control. Goldsmith and Wu (2006b) point out that the ideal of the ungovernable internet
is based on an American ideal of unabashed free speech. Drawing on court cases, policies, and the history of the
internet, the authors show how geographical borders were reconstituted online. Borders came to matter in the first
place because of differing language and cultural needs. Geographical targeting is also a cost-effective way of filtering
information that will have more chances of reaching interested consumers. Subsequently, borders became important
when national laws began to be enforced online (Goldsmith & Wu, 2006b).
Within the theme of governance, there was an effort to look beyond the U.S. and Western Europe to find different examples of internet control. In The internet of elsewhere (2011), Farivar studied the development and control of
the internet in South Korea, Estonia, Senegal, and Iran from the perspective of their local histories. For example,
Estonia and its neighboring countries had to rapidly build governments and infrastructures after
the collapse of the Soviet Union in 1991. Estonia emerged as a more advanced and connected society than its neighbors, which Farivar (2011) attributes to the cybernetics and astronautics institute that the Soviet Union had installed
in the country before its fall.
3.5 | Capitalism
Researchers in the first phase debated the role that capitalism would play online. Even the most optimistic warned that
capitalism could transform the open, democratic culture of the early internet (De Sola Pool, 1983; Rheingold, 1993).
The commercialization of the internet changed its nature (Curran et al., 2016), but it also accelerated its technological
development and its distribution (Poster, 2001). In her study based on 17 qualitative interviews with experts in the
development of search technology, Mager (2012) showed how search engines profit from websites’ need to reach
consumers through an intermediary. For Mager (2012), users and websites alike solidify the capitalist motives of
search engines. Users do so by continuing to click on ads and expecting a high quality of search results. Jenkins and
Deuze (2008) describe the Internet as a convergence culture, shaped by the contradictory forces of the bottom-up
democratization of media use and the top-down concentration of power in the hands of traditional media gatekeepers.
Fuchs (2017) is one of the few scholars who examined the global ecology of digital labor. Using a Marxist
framework, he called for the study of all forms of labor needed to create and distribute digital media. This includes
everything from mining in the Democratic Republic of the Congo for the minerals necessary to produce computers
and cell phones, to the unpaid labor of creating user-generated content. Fuchs pointed out that while society had
changed with the popularization of the internet, so had capitalism. This does not mean that relations of labor production have completely evolved beyond the capitalist mode of production. Rather, new activities such as spending time
on social media are integrated into the capitalist model of production. For Fuchs (2017), social media usage is a form
of labor producing surplus value that is exploited by platforms and advertisers. The solution to this exploitation is the
development of a ‘working class internet,’ controlled by users rather than corporations.
3.6 | Summary
As the internet became integrated into society, scholars realized that it would not spur social change simply by its
very existence. It was shaped less by its users, and more by those who controlled it. Scholarship of the second phase
turned its focus away from the potential benefits of the internet for humanity, instead focusing on power, as the
Internet became integrated into offline institutions.
4 | 2010S-PRESENT: THE PRIVATIZED INTERNET
The third phase is characterized by the growth of digital platforms. Platforms are digital infrastructures that facilitate
exchanges between two or more groups (Srnicek, 2017). These exchanges can be of content, such as on Facebook
and YouTube (Gillespie, 2018), or of services, such as with Uber or Airbnb (Srnicek, 2017). Platforms rarely produce
content themselves (Gorwa, 2019), but they moderate, organize, and circulate it (Gillespie, 2018). Platforms generate
their revenue by using customer data to individually target advertisements and services (Srnicek, 2017), a business
model directly grown out of the race to commercialize the Internet in the late 1990s, as well as lax regulations of
digital commerce (Crain, 2021). For Introna (2016), the quantitative assessment of users' behaviors online is a necessary condition for the ongoing existence of the internet. With platforms, information filtering and surveillance are
privatized. Mass data collection is normalized and is used to—among other things—inform content recommendation
algorithms. These algorithms help increase the time users spend on a single platform, through a “cyclical anticipation
of needs” (Roberge & Seyfert, 2016, p. 6).
Algorithms are decision-making systems that use input data to determine a desired output (Yeung, 2018). In
other words, a recommendation algorithm uses information about the user, such as past behaviors and identity
traits to suggest content that the user might like. Yeung (2018) identifies eight different types of algorithms, based
on how they set goals, gather information, and enforce behavior modification. The type of algorithm that most feeds
the anxieties of scholars and policy makers alike, is the machine-learning algorithm (Yeung, 2018). Machine-learning
algorithms optimize themselves and make decisions without needing human intervention (Mittelstadt et al., 2016;
Yeung, 2018). The inner workings of these technologies are mostly kept hidden from public view.
The key debates that emerged in this phase related to content moderation algorithms, their potential to discriminate, their regulation, and issues of platforms and surveillance.
4.1 | Content moderation
The increasing reliance on search engines is tied to the exponential growth of digital content. Users cannot easily sift
through all available content without an intermediary (Mager, 2012). Some scholars, however, grew concerned that
search engines would reduce users’ exposure to diverse worldviews (Pariser, 2011; Yeung, 2017). In turn, they feared
that this would fuel apathy toward political life (Pariser, 2011). For Pariser (2011), creativity is spurred by unexpected
outputs, which are lost in highly personalized environments. This is a concern shared by earlier scholars, for whom
the internet would inhibit creativity (Stoll, 1995), especially in children (Talbott, 1995).
In his book The filter bubble (2011), Pariser analyzed personalization practices online. Personalization creates
‘filter bubbles’: digital environments tailored to individual users, reminiscent of Negroponte’s (1995) ‘Daily
Me.’ Pariser (2011) feared that these filter bubbles would insulate users from political issues that do not directly
concern them. For him, the solution lies in the hands of coders, who need to build public life and citizenship into the worlds
they create—although Pariser does not specify how.
The concept of the filter bubble has seduced many academics critical of the personalization of the internet.
In their review of empirical studies about personalization and European policy documents, Zuiderveen Borgesius
et al. (2016) found that fears about the filter bubble are very present in academia and public discourse. However,
they found little empirical evidence to support these fears. Examples of such empirical studies include Haim et al.’s (2018)
and Nechushtai and Lewis’s (2019) analyses of Google News’s recommendation algorithm. Both found
no significant differences in the news articles recommended to different user profiles, but a high homogeneity in the
news sources appearing in search results, meaning that users were not presented with a wide variety of viewpoints.
For Zuiderveen Borgesius et al. (2016), the lack of empirical evidence does not mean we should cast away
concerns about the filter bubble. Algorithms are still a recent technology that will evolve over time. It is also of note
that filter bubbles, as invisible, unconsciously entered spaces, are difficult to study empirically (Pariser, 2011). For
example, Haim et al.’s (2018) and Nechushtai and Lewis’s (2019) studies focused on a single recommendation algorithm, over a limited span of time, and did not account for the curated environment in which users find themselves,
created by the intersection of many recommendation algorithms from different platforms.
Yeung (2017, 2018) expounded on Pariser’s theory in her study of algorithms using analysis based in legal and
design scholarship. She found that algorithms do not necessarily lock users into filter bubbles, but that they do
influence users’ choices. Yeung (2017) argues that the power of algorithms lies in their capacity to modify behavior.
Recommendation algorithms present several choices to the user, but all these choices benefit the owner of the
algorithm. Users are only given the illusion of choice. For example, a video hosting platform will never recommend a
competitor’s content. Hunt and McKelvey (2019) argue that algorithms can influence internet users’ personal tastes,
which become amalgams of personal preferences and algorithmic outputs. Content curation, while a necessary
process, poses questions of power and knowledge, such as whether filtering encourages political apathy, or whether
private content distributors should have the power to decide what content is relevant—especially if they profit from
these algorithms. These questions are especially important in environments where users are not privy to the decisions and criteria being used to filter the content.
4.2 | Algorithms
In the second half of this phase, there is a shift in focus away from the effects of personalization towards a focus on
the mechanism of personalization itself: the algorithm. In the same way that early discourses of the 1990s romanticized the internet (Curran, 2016b), algorithms continued to be treated as esoteric or arcane objects in scholarship as
late as the 2010s. As machine learning algorithms became ubiquitous, scholars studying them from 2015 onwards
called for their normalization in internet studies (e.g., Beer, 2017; Burrell & Fourcade, 2021; Roberge & Seyfert, 2016).
Beer (2017), using a Foucauldian analysis of power, examined how algorithms are understood in communications literature. He argued that the influence of algorithms extends beyond their material intervention in the world
to the discourse used to speak about them. In the same vein, Roberge and Seyfert (2016) argued that
the imaginaries constructed about algorithms as picture-perfect tools fail to account for their messiness, instability,
and routine failures. Users are accustomed to algorithmic failure, such as being advertised a product in which they
have no interest (e.g., pet food when they have no pets) (Hunt & McKelvey, 2019). Minor dysfunctions like these are
relatively benign; however, algorithmic failure becomes problematic when users are harmed.
Concern about the discriminatory power of algorithms only emerged later in the third phase of Internet studies.
Benjamin (2019) attributed this to a ‘one-size-fits-all’ approach to humanity in tech-enthusiastic literature. Additionally, coders and engineers are often held individually responsible for the failures of the programs they create (Hoffmann, 2019).
Benjamin (2019) argued instead that racist glitches are not outliers in an otherwise objective and fair recommendation system. They are in fact signals of a deeper issue: systemic discrimination.
Burrell and Fourcade (2021) reviewed the literature “concerned with the social implications of algorithms” (p. 213)
through a Marxist perspective. They cautioned readers against romanticizing algorithms and their capabilities, while also
acknowledging the immense social, economic, and political power wielded by the tech industry. Contemporary algorithms, they showed, still heavily rely on the data and training provided by the underpaid labor “of a global digital assembly line of silent, invisible men and women” (p. 218). This new social class, which Burrell and Fourcade name the ‘cybertariat,’ is dominated by the ‘coding elite’ who control the digital means of production. They concluded that algorithms are
not objective decision-making programs, but biased processes that are “deeply, inescapably human” (p. 231).
There is a growing body of literature on racism and algorithmic discrimination. Researchers most often cite the work of
Benjamin (2019), Noble (2018), O’Neil (2016), and Jefferson (2020) as prime examples of scholarship on
algorithmic discrimination.
Benjamin (2019), using critical race studies and science and technology scholarship, showed that new technologies reproduce existing inequities, despite being touted as more objective and progressive than the technologies of
the past. Benjamin (2019) names this process the ‘New Jim Code.’ The New Jim Code can be observed in Digitize and
punish, where Jefferson (2020) described the historical development of digital databases in American criminal justice.
He showed how digital tools have updated technologies of control, especially in racialized social management, such
as the use of predictive crime statistics to justify the policing of poorer neighborhoods.
O’Neil (2016) also showed how digital technologies reinforce bias in law enforcement. She demonstrated
how ‘weapons of math destruction,’ predictive technologies based on algorithms, impact the futures of individuals, such as when for-profit universities are advertised to poorer Internet users, or when credit scores are used to
decide whether to approve a lease.
Noble’s Algorithms of oppression (2018) focused on Google Search queries yielding racist results, such as the
query ‘black girls’ resulting in links to pornographic websites. Noble (2018) showed how the internet has become
an extension of American imperialism online, reproducing the stereotypes rooted in racist and sexist histories. The
solution she proposed is to make the mechanisms of search visible and to understand how stereotypes appear in
the first place. Benjamin (2019), in a similar fashion, called for the use of audits and the incorporation of abolitionist
tools into tech design.
It is difficult to design a regulatory framework for invisible algorithms that are increasingly autonomous from
their creators. Mittelstadt et al. (2016) suggested treating machine-learning algorithms as moral agents. In cases of
algorithmic failure, blame needs to be assigned simultaneously to several moral agents, including the decision-making
algorithms. The danger with this, however, is taking away accountability from algorithm designers (Mittelstadt
et al., 2016). Hunt and McKelvey (2019) instead proposed qualifying coding as policy making, to encourage coders
to take responsibility for algorithmic failure—though they did not offer a method to do so. Despite their differences,
both texts argued that algorithms should be held accountable, as they participate in the surveillance of internet user
behavior. These concerns for algorithmic regulation highlight the lag between new technologies and adequate policies to regulate them, reminiscent of Winner’s (1983/2014) ‘technological somnambulism’ as well as the issues raised
by De Sola Pool (1983) in the 1980s.
4.3 | Platforms and surveillance
By the 2010s, a few powerful platforms monopolized the digital landscape (Gillespie, 2018). Vaidhyanathan (2012)
ascribes this to a power vacuum in the early internet, in which the first platforms faced little competition. These
platforms now benefit from a quasi-monopoly of large user databases, collected over years of user surveillance
(Crain, 2021; Vaidhyanathan, 2012). These databases are used to train recommendation algorithms. Thus, by default,
the services proposed by smaller competitors, based on smaller training databases, will be less effective than those
offered by already powerful platforms (Pasquale, 2015). Crain (2021) traces the emergence of the “Google/Facebook
duopoly” (p. 6) back to an early laissez-faire attitude toward the internet in the U.S.A. Free trade agreements and lax
regulations allowed the private sector to rapidly commercialize the internet, relying on the development of global
surveillance advertising networks.
In this third phase, surveillance and data collection became a profitable market (Couldry & Mejias, 2019). In a
risky comparison, Couldry and Mejias (2019) likened the global appropriation of resources during historical colonialism to the current moment of data colonialism. Unlike historical colonialism, however, data colonialism does not
appropriate natural resources, but rather life itself, quantified as data. This appropriation is normalized in discourse.
For example, corporations claim that they are the sole actors with the necessary resources to collect and analyze data
in a way that benefits society (Couldry & Mejias, 2019). This argument is comparable to Van Dijck’s (2014), for whom
the exploitation of user data is veiled by the paradigm of datafication. Datafication is the belief that data is objective
and that the agents collecting, interpreting, and sharing this data are trustworthy (Van Dijck, 2014).
Users are not entirely oblivious to the collection of their data, and sometimes even willingly participate in their
own surveillance, such as by liking a post or sharing content they create on platforms (Lyon, 2018). For Zuboff (2015),
compliance with surveillance is not based on trust, but on a system of punishment and rewards. In other words, the
burden of resisting this constant surveillance falls on consumers, who risk losing access to platforms if they do not comply with their terms of service (Haim et al., 2018). In these circumstances, consumers’
consent to have their data collected cannot be freely given: it is coerced.
4.4 | Summary
The internet of the third phase is characterized by a concentration of power in the hands of American corporations
with global reach. These corporations, as well as their platforms and proprietary algorithms, have benefitted from
decades of lenient regulations, an increasing cause for concern in scholarship. Anxieties about the effects of data
collection and biased algorithms echo first phase anxieties about the impact of the internet on society. Studies in this
phase also highlight the need for firmer regulation of user data collection and online surveillance practices.
5 | HINTS OF A FOURTH PHASE
Today’s internet—the medium epitomized by social media platforms, predictive search engines, and personalized
ads—has evolved considerably since ARPANET, its predecessor, created in 1969 for the US Department of Defense (Goldsmith
& Wu, 2006b). The texts in this overview show that the contemporary internet—ubiquitous, populated by social
media platforms, reliant on user data collection and surveillance—was not just the result of technological innovations.
It was also shaped by social, cultural, and economic contexts. The Internet is a tool, a space, and an actor, shaped by
institutional regulation (or lack thereof), private corporate interests, and also user intervention and resistance. The
internet does not exist in a vacuum but is enmeshed in and co-created by our social contexts.
The trends in internet research have changed in accordance with the internet’s transformations. In the first phase
of scholarship, from the 1990s to 2000, the internet was still a new communications medium, accessible only to a few
and featuring predominantly text-based content (Rheingold, 1993). The internet seemed to be a space beyond the
reach of the law (Palfrey, 2010). While some acclaimed it as a revolutionary medium that would transform societies,
others were more cautious in their evaluation of this medium’s transformative potential (Jones, 1997; Gillespie &
Robins, 1989; Winner, 1983/2014).
When the Internet was commercialized, it attracted more users, and with this increase governmental monitoring and control became inevitable (De Sola Pool, 1983; Palfrey, 2010). Studies in the second phase of scholarship
focused on the shifts of power between users and governments. They also showed how the internet was becoming
enmeshed with offline life. These studies became more descriptive as more data about the internet became available
(Wellman, 2004).
From the 2010s onwards, in the third phase of scholarship, the landscape of the internet was marked by a
growing number of privately-owned platforms. As data collection became a lucrative business model, scholars of
the third phase were concerned with issues of a highly personalized and privatized digital landscape (Crain, 2021;
Pariser, 2011). Scholars of the third phase found that platforms and recommendation algorithms are as biased as
those who create them (Burrell & Fourcade, 2021). Subsequently, an emerging debate in internet studies is the regulation of this privatized digital landscape.
The outlines of a fourth phase in internet research are already taking shape. Some examples of new research
trends include misinformation and radicalization online (e.g., Dignam & Rohlinger, 2019; Smith & Graham, 2019), digital well-being (e.g., Büchi, 2021; Vanden Abeele & Nguyen, 2022), or the global shift to online work, education, and
socialization, accelerated during the COVID-19 pandemic (French & Monahan, 2020; Nguyen, et al., 2020). While it
would be hasty to attempt to discern the core debates of this emerging phase, these early trends illustrate internet
scholars’ interest in the impact of internet and social media use on the self, political opinions, and the organization of
society–all debates that are reminiscent of questions raised by early internet researchers.
This overview is not an exhaustive account of internet scholarship. It is also limited by its focus on
anglophone Western contexts. Additionally, because of its focus on texts about how society affects the internet,
this review is inherently biased against techno-determinist texts claiming that the internet, by its very existence, would transform society. Future scholarship needs to devise and describe clear methodologies and theories
to advance the study of an internet whose processes are increasingly secretive. Most importantly, there is a growing
demand for social scientists to work in collaboration with coders and policy makers to help create a fairer internet
for its users (Beer, 2017).
ACKNOWLEDGMENTS
I would like to thank Dr. Marc Lafrance, Dr. Bart Simon, and Dr. Nayrouz Abu Hatoum (Concordia University) for their
discussions and useful critiques during the development of this manuscript. Pauline Hoebanx acknowledges financial support for this research from the Fonds de recherche du Québec – Nature et technologies (FRQ-NT) PBEEE
Doctoral Grant (1W), and the Fonds de recherche du Québec – Société et culture (FRQ-SC) Doctoral Grant (B2).
CONFLICT OF INTEREST
The author has no conflicts of interest to disclose.
ORCID
Pauline Hoebanx https://orcid.org/0000-0002-1033-8289
REFERENCES
Baym, N. K. (2015). Personal connections in the digital age. John Wiley & Sons.
Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. https://doi.org/10.1080
/1369118X.2016.1216147
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. John Wiley & Sons.
Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom. Yale University Press.
Boyd, D. (2014). It’s complicated: The social lives of networked teens. Yale University Press.
Büchi, M. (2021). Digital well-being theory and research. New Media & Society, 1–18. https://doi.org/10.1177/1461
4448211056851
Burrell, J., & Fourcade, M. (2021). The society of algorithms. Annual Review of Sociology, 47(1), 213–241. https://doi.
org/10.1146/annurev-soc-090820-020800
Castells, M. (1996-1998). The information age: Economy, society and culture (Vols. 1–3). Blackwell.
Castells, M. (2004). The network society: A cross-cultural perspective. Edward Elgar. https://doi.org/10.4337/9781845421663
Cavanagh, A. (2010). Sociology in the age of the internet. Tata McGraw-Hill.
Couldry, N., & Mejias, U. A. (2019). Data colonialism: Rethinking Big Data’s relation to the contemporary subject. Television &
New Media, 20(4), 336–349. https://doi.org/10.1177/1527476418796632
Crain, M. (2021). Profit over privacy: How surveillance advertising conquered the Internet. University of Minnesota Press.
Curran, J. (2016a). Reinterpreting the internet. In J. Curran, N. Fenton, & D. Freedman (Eds.), Misunderstanding the Internet
(pp. 3–33). Routledge.
Curran, J. (2016b). Rethinking internet history. In J. Curran, N. Fenton, & D. Freedman (Eds.), Misunderstanding the Internet
(pp. 34–65). Routledge.
Curran, J., Fenton, N., & Freedman, D. (2016). Misunderstanding the internet. Routledge.
Deibert, R., Palfrey, J., Rohozinski, R., & Zittrain, J. (Eds.). (2010). Access controlled: The shaping of power, rights, and rule in
cyberspace. The MIT Press.
De Sola Pool, I. (1983). Technologies of freedom. Belknap Press.
Dignam, P. A., & Rohlinger, D. A. (2019). Misogynistic men online: How the red pill helped elect trump. Signs: Journal of Women
in Culture and Society, 44(3), 589–612. https://doi.org/10.1086/701155
Farivar, C. (2011). The Internet of elsewhere: The emergent effects of a wired world. Rutgers University Press.
French, M., & Monahan, T. (2020). Dis-ease surveillance: How might surveillance studies address COVID-19? Surveillance and
Society, 18(1), 1–11. https://doi.org/10.24908/ss.v18i1.13985
Fuchs, C. (2017). Digital labour and Karl Marx. Routledge.
Gillespie, A., & Robins, K. (1989). Geographical inequalities: The spatial bias of new communications technologies. Journal of
Communication, 39(3), 7–18. https://doi.org/10.1111/j.1460-2466.1989.tb01037.x
Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media.
Yale University Press. https://doi.org/10.12987/9780300235029
Goldsmith, J., & Wu, T. (2006a). Introduction: Yahoo. In J. Goldsmith & T. Wu (Eds.), Who controls the internet? Illusions of a
borderless world (pp. 1–10). Oxford University Press. https://doi.org/10.1093/oso/9780195152661.003.0004
Goldsmith, J., & Wu, T. (2006b). Who controls the internet? Illusions of a borderless world. Oxford University Press. https://doi.
org/10.1093/oso/9780195152661.001.0001
Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://doi.org/10.1
080/1369118X.2019.1573914
Haim, M., Graefe, A., & Brosius, H. B. (2018). Burst of the filter bubble? Effects of personalization on the diversity of Google
news. Digital Journalism, 6(3), 330–343. https://doi.org/10.1080/21670811.2017.1338145
Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information,
Communication & Society, 22(7), 900–915. https://doi.org/10.1080/1369118X.2019.1573912
Hunt, R., & McKelvey, F. (2019). Algorithmic regulation in media and cultural policy: A framework to evaluate barriers to
accountability. Journal of Information Policy, 9(1), 307–335. https://doi.org/10.5325/jinfopoli.9.1.0307
Introna, L. D. (2016). The algorithmic choreography of the impressionable subject. In R. Seyfert & J. Roberge (Eds.), Algorithmic
cultures (pp. 38–63). Routledge.
Jefferson, B. (2020). Digitize and punish: Racial criminalization in the digital age. University of Minnesota Press.
Jenkins, H. (2014). Rethinking ‘rethinking convergence/culture. Cultural Studies, 28(2), 267–297. https://doi.org/10.1080/0
9502386.2013.801579
Jenkins, H., & Deuze, M. (2008). Editorial: Convergence culture. Convergence, 14(1), 5–12. https://doi.org/10.1177/
1354856507084415
Jones, S. (Ed.). (1995). Cybersociety: Computer-mediated communications and community. Sage.
Jones, S. (Ed.). (1997). Virtual culture: Identity and communication in cybersociety. Sage.
Jones, S. (1998). Information, internet and community: Notes toward an understanding of community in the information age.
In S. Jones (Ed.), Cybersociety 2.0: Revising computer-mediated communication and community (pp. 1–34). Sage Publications.
Jones, S. (2006). Dreams of fields: Possible trajectories of internet studies. In D. Silver & A. Massanari (Eds.), Critical cyberculture studies (pp. ix–xv). New York University Press.
Lyon, D. (2018). The culture of surveillance: Watching as a way of life. John Wiley & Sons.
Mager, A. (2012). Algorithmic ideology: How capitalist society shapes search engines. Information, Communication & Society,
15(5), 769–787. https://doi.org/10.1080/1369118X.2012.676056
McLuhan, M. (1964). Understanding media: The extensions of man. MIT Press.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data
& Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679
Nechushtai, E., & Lewis, S. C. (2019). What kind of news gatekeepers do we want machines to be? Filter bubbles, fragmentation, and the normative dimensions of algorithmic recommendations. Computers in Human Behavior, 90, 298–307.
https://doi.org/10.1016/j.chb.2018.07.043
Negroponte, N. (1995). Being digital. Alfred A. Knopf.
Nguyen, M. H., Gruber, J., Fuchs, J., Marler, W., Hunsaker, A., & Hargittai, E. (2020). Changes in digital communication during
the COVID-19 global pandemic: Implications for digital inequality and future research. Social Media & Society, 6(3).
https://doi.org/10.1177/2056305120948255
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
O’Neil, C. (2016). Weapons of math destruction: How Big Data increases inequality and threatens democracy. Broadway Books.
Palfrey, J. (2010). Four phases of internet regulation. Social Research: International Quarterly, 77(3), 981–996. https://doi.
org/10.1353/sor.2010.0021
Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin.
Pasquale, F. (2015). The black box society. Harvard University Press.
Poster, M. (1990). The mode of information: Poststructuralism and social context. University of Chicago Press.
Poster, M. (2001). What’s the matter with the Internet? University of Minnesota Press.
Rheingold, H. (1993). The virtual community: Finding connection in a computerized world. Addison-Wesley Longman Publishing Co.
Roberge, J., & Seyfert, R. (2016). What are algorithmic cultures? In R. Seyfert & J. Roberge (Eds.), Algorithmic cultures
(pp. 1–37). Routledge.
Smith, N., & Graham, T. (2019). Mapping the anti-vaccination movement on Facebook. Information, Communication & Society,
22(9), 1310–1327. https://doi.org/10.1080/1369118X.2017.1418406
Srnicek, N. (2017). Platform capitalism. John Wiley & Sons.
Stoll, C. (1995). Silicon snake oil: Second thoughts on the information highway. Anchor Books.
Talbott, S. T. (1995). The future does not compute: Transcending the machines in our midst. O’Reilly Media.
Terranova, T. (2004). Network culture: Politics for the information age. Pluto Press.
Turkle, S. (1995). Life on the screen: Identity in the age of the internet. Simon & Schuster.
Vaidhyanathan, S. (2012). The Googlization of everything (and why we should worry). University of California Press.
Vanden Abeele, M., & Nguyen, M. H. (2022). Digital well-being in an age of mobile connectivity: An introduction to the
special issue. Mobile Media & Communication, 10(2), 174–189. https://doi.org/10.1177/20501579221080899
Van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance
and Society, 12(2), 197–208. https://doi.org/10.24908/ss.v12i2.4776
Wellman, B. (2004). The three ages of internet studies: Ten, five and zero years ago. New Media & Society, 6(1), 123–129.
https://doi.org/10.1177/1461444804040633
Winner, L. (2014). Technologies as forms of life. In R. L. Sandler (Ed.), Ethics and emerging technologies (pp. 48–61). Palgrave
Macmillan. (Original work published 1983).
Yeung, K. (2017). Hypernudge: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1),
118–136. https://doi.org/10.1080/1369118X.2016.1186713
Yeung, K. (2018). Algorithmic regulation: A critical interrogation. Regulation & Governance, 12(4), 505–523. https://doi.
org/10.1111/rego.12158
Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information
Technology, 30(1), 75–89. https://doi.org/10.1057/jit.2015.5
Zuiderveen Borgesius, F., Trilling, D., Möller, J., Bodó, B., De Vreese, C. H., & Helberger, N. (2016). Should we worry about
filter bubbles? Internet Policy Review: Journal on Internet Regulation, 5(1), 1–16. https://doi.org/10.14763/2016.1.401
AUTHOR BIOGRAPHY
Pauline Hoebanx is a PhD candidate in the department of Sociology and Anthropology at Concordia University.
Her research interests include the transformation of digital spaces and digital communications; the mobilization
of anti-feminist movements online; and the moderation of risky behaviors on social media platforms.
pauline.hoebanx@mail.concordia.ca
How to cite this article: Hoebanx, P. (2022). What shapes the internet? An overview of social science and
interdisciplinary perspectives. Sociology Compass, 16(10), e13032. https://doi.org/10.1111/soc4.13032
Online Identities and Gender Norms
MICHELE WHITE
Tulane University, USA
In contemporary society, especially in the West, individuals are thought to have
unique characteristics, to be coherently embodied, and to be agentive. Nevertheless, identity is often contextual. Individuals self-present and are understood in
terms of their environment. For example, a person’s self-presentation in the classroom or office is probably different than an individual’s actions and appearance
at a bar. Identity is associated with the individual’s understanding of the self, a
person’s self-presentations, other people’s view of the individual, state and cultural conceptions of the individual, and technological readings and renderings
of the person (Marwick, 2013). The internet has confirmed and challenged cultural conceptions of the unique, authentic, and unchanging individual because
people produce and engage with different versions of the online self. The notion
of the unchanging individual is complicated online by people’s ability to operate
numerous accounts, change names and profile descriptions, and self-present in
varied ways on platforms. Yet many social networking sites try to establish the
authenticity of participants and content by connecting members’ online identifiers,
including their name and avatar representation, to their legal name and physical
attributes.
In reality television programs and other media forms, individuals are often depicted
in a search for their authentic and best self. These notions of individuality and authenticity, as well as the production and mediation of the self, are also features of online
identities. While the individual’s identity is often thought to be stable and unique,
people’s online practices can result in fragmented, multiple, and shifting identities.
Describing the position of the individual can be difficult when a person is standing in a
physical location, is talking to a family member by using a mobile phone, and is sited in
a massively multiplayer online role-playing game because of an avatar, maps, and other
elements of the collaboratively supported reality. Due to these fragmented aspects
of identity and people’s connections to technologies, scholars studying digital media
often reference Donna Haraway’s (1991) conception of the cyborg. People’s cyborgian
connections to technologies, as considered in more detail below, challenge notions of
coherent individuals by making it difficult to distinguish among embodied beings, the
devices they employ, and the online identities that they produce and employ to engage
online.
Online identities can be defined as constructed personas, which are maintained by
system participants and technologies. These online identities can be associated with
aspects of people’s everyday identities, including their gender, race, sexuality, age, and
The International Encyclopedia of Gender, Media, and Communication. Karen Ross (Editor-in-Chief),
Ingrid Bachmann, Valentina Cardo, Sujata Moorti, and Marco Scarcelli (Associate Editors).
© 2020 John Wiley & Sons, Inc. Published 2020 by John Wiley & Sons, Inc.
DOI: 10.1002/9781119429128.iegmc026
class. Feminist and other addresses to these intersectional identities consider how a person is addressed, self-presents, and understands themselves through a combination of
characteristics. For instance, individual and cultural conceptions of gender are influenced by ideas about race. Online identities can also be enacted in ways that do not
acknowledge or reproduce aspects of individuals’ embodied identities. People establish
online identities through such internet engagements as their use of chat settings, gaming platforms, e-commerce shopping, social media, and websites. An online identity is
often constituted when an individual signs up for a site and it is developed in concert
with the options of the setting and the social standards and terms of service of membership. An online identity is also produced through email addresses, user names, profile
pictures, and the chronicling of age, gender, and other information that individuals are
required (or decide) to convey when signing up to use a site. While identity is often
thought to be authentic and under the control of the individual, online drop down forms
and other sign-up features dictate the kinds of information individuals share. Prompts
and other reminders from sites may cajole individuals into providing specific kinds of
information.
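The way a drop-down form can dictate which identity information is expressible is easy to illustrate in code. The sketch below is hypothetical (the field names and options are not drawn from any particular site); it shows how a registration schema that hard-codes a binary gender list silently makes any self-description outside its predefined options impossible to submit.

```python
# Hypothetical registration schema: the site, not the user,
# decides which identity categories are expressible.
SIGNUP_FIELDS = {
    "username": {"required": True},
    # A binary drop-down: any other self-description is simply not an option.
    "gender": {"required": True, "choices": ["male", "female"]},
    "age": {"required": False},
}

def validate_signup(submission):
    """Return a list of errors; an empty list means the form accepts the user."""
    errors = []
    for field, rules in SIGNUP_FIELDS.items():
        value = submission.get(field)
        if rules.get("required") and value is None:
            errors.append(f"{field} is required")
        elif value is not None and "choices" in rules and value not in rules["choices"]:
            errors.append(f"{field} must be one of {rules['choices']}")
    return errors

# A nonbinary self-description cannot pass validation at all.
print(validate_signup({"username": "sam", "gender": "nonbinary"}))
```

The rejection happens before any human moderator is involved: the constraint is baked into the schema itself, which is the sense in which sign-up features "dictate the kinds of information individuals share."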
Online identities are correlated with a specific individual through authentication,
which usually requires registration and signing in to a site. Some specifics of the
individual and the person’s online habits are also provided to sites through the installation of cookies and other tracking software on computers. Scholars consider how
the individual has an algorithmic identity that is produced by signing up for accounts,
using sites and systems, selecting articles and links, making purchases, and engaging
with other members. For instance, Facebook has a patent to allow lenders to reject
loans because of the credit scores of Facebook friends. An individual’s algorithmic
identity is garnered from online activities and the ways these practices relate to other
people’s interactions. However, the features of such attributes are not listed in profiles
or other accessible information and individuals cannot easily adjust the algorithmic
identities that are constituted by programmers’ conceptions and the functions of
software.
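A toy sketch can make the notion of an opaque algorithmic identity concrete. Everything below is invented for illustration (the event names, inference rules, and labels are not from any real system); the point is that the derived profile comes from behavior alone, is never shown to the user, and cannot be directly edited by the person it describes.

```python
from collections import Counter

# Hypothetical clickstream events logged by a site.
events = [
    {"user": "u1", "action": "view", "item": "wedding-dresses"},
    {"user": "u1", "action": "purchase", "item": "bridal-magazine"},
    {"user": "u1", "action": "view", "item": "engagement-rings"},
]

# Invented inference rules: the programmer's assumptions, not the user's words.
RULES = {
    "wedding-dresses": "likely-bride",
    "bridal-magazine": "likely-bride",
    "engagement-rings": "planning-engagement",
}

def algorithmic_identity(events, user):
    """Derive labels for a user from logged behavior alone."""
    labels = Counter()
    for e in events:
        if e["user"] == user and e["item"] in RULES:
            labels[RULES[e["item"]]] += 1
    return dict(labels)

# The user never sees or confirms these labels, yet they can shape
# what content, advertising, or pricing the user is served.
print(algorithmic_identity(events, "u1"))
# → {'likely-bride': 2, 'planning-engagement': 1}
```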
Algorithms perpetuate the kinds of social structures and identity categories that
are present in contemporary society. Algorithms can facilitate profiling and systems
of oppression that disenfranchise people because of their gender, race, and other
characteristics. For instance, Google’s algorithmic scans of content and its assessment
of what people want to know have resulted in intolerant autocomplete suggestions that
appear when people begin to perform searches. Google’s autocomplete suggestions
have included “Jews are evil,” “Islamists are terrorists,” and “Women need to know
their place.” The critical literature on algorithmic profiling and identity formation
raises questions about how these systems reproduce hierarchical identity categories,
where White heterosexual men from Western countries are privileged. Research on
algorithms also interrogates technologically supported systems of surveillance and
methods of getting individuals to control their own behavior because of fears of being
watched, algorithmically assessed, judged, and punished. These considerations of
self-regulation often reference Michel Foucault (1995) and his consideration of the
panopticon, which is a prison structure where the design of the building, people’s
knowledge that they can be watched, and people’s inability to know when they are
being observed and might be punished get everyone in the system to control their
behavior.
The features of internet sites convey expectations about online identities and
behaviors. Individuals are often greeted with their user name and may be gendered
through the pushing of internet content and site representations. The generic and
supposedly invisible individual who is browsing websites is constituted as a particular
kind of consumer and user by the ways that sites address that person. Thus, internet
identities are also formed by how sites depict individuals, including representations
of expected users, site mascots and greeters, design features, site taglines and logos,
fonts, color schemes and layouts, and the characteristics that are listed in drop down
sign-up forms. For instance, wedding sites often constitute their users as young White
heterosexual women. They feature representations of such individuals in banners and
photographic slide shows. They tend to mention brides (but not other participants) in
the name of the site and forum categories and use pink and purple design elements
and cursive fonts that are associated with women and femininity. They depict such
gendered items as wedding dresses and engagement rings, which are ordinarily
associated with and worn by women, in graphics and site favicons. These icons
visually represent the site and members and appear as part of the site tab. While some
wedding sites have forums for grooms, the few men who sign up for these sites often
find that their identity is listed as that of a new bride. Men’s forum requests to help
address this identification indicate how stable, traditional, and hierarchical gender
categories persist in internet settings.
Online identities are produced in collaboration and contestation with other participants and sites. People on wedding sites assert and challenge their gender identity
by having to protest that they are men and that their online identifier is incorrect.
Individuals render and elaborate on other people’s identities when they tag someone
in pictures, indicate that they are in the same place as another individual, include the
person’s name in posts, reuse or alter a member’s name, and share representations
of someone in the form of photographs, videos, or memes. For instance, alt-right,
anti-Semitic, and White supremacist Twitter participants have commented on and tried
to threaten and dismiss Jewish journalists and other Jewish individuals by using multiple parentheses, or the echo symbol, around members’ names. They use these graphics
as a method of indicating people’s religious and ethnic identities. These graphics virtually point to Jewish individuals and make them into targets. Jewish participants have
defused this dismissive commentary by adopting the graphic echo representation for
themselves—sometimes inverting the parenthesis to indicate their refusal of the cultural
dismissal. Some anti-Semitic participants then started using the inverted parentheses
to assert that they were not Jewish and that they were opposed to Jewish people.
Internet and computer identities are also shaped by software and operating system
design. Anna Everett (2002) has questioned how computer start-up messages once read
“Pri Master Disk, Pri. Slave Disk, Sec. Master, Sec. Slave” and rendered a “digitally configured ‘master/slave’ relationship” (p. 125). These and other messages also indicate that
the expected and ideal users are white. Lisa Nakamura (2002) provides intersectional
considerations of online identities and the ways male participants perpetuate stereotyped conceptions of gender and race by performing as servile women of color online.
Programmers continue to employ varied visual representations and erotic terms to support their desires and views of other people (White, 2006). For example, the “finger”
protocol was developed as a means of providing information about the people who
are logged into a system and has resulted in phrases like “I fingered her.” Information
sources like “The Jargon Lexicon” support hierarchical gender relations, the objectification of women, and nonconsensual sex by providing the following example of the finger
command: “OK, finger Lisa and see if she’s idle.” The continued tendencies to identify
empowered participants as male, sign-up forms that only list binary gender options, the
inclusion of male before or above female on required sign-up forms, and security questions that ask for your mother’s maiden name produce participants, hierarchical gender
structures, and presumptions about heterosexuality and marriage mores. Security questions that require individuals to provide information about their mother’s maiden name
suggest that everyone has one set of heterosexual married parents and women’s maiden
names are unimportant and not commonly known within society.
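For context, finger is a real protocol (RFC 1288): a client opens a TCP connection to port 79 on the remote host, sends the queried username followed by CRLF, and reads back whatever user information the server chooses to return. A minimal client sketch follows; the host in the usage comment is a placeholder, and few modern servers still run fingerd.

```python
import socket

def finger_query(username):
    """Format a finger request per RFC 1288: the username followed by CRLF."""
    return username.encode("ascii") + b"\r\n"

def finger(host, username, timeout=5.0):
    """Query a finger server on TCP port 79 and return its response text."""
    with socket.create_connection((host, 79), timeout=timeout) as sock:
        sock.sendall(finger_query(username))
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode("utf-8", errors="replace")

# Usage (only works against a host that still runs a finger daemon):
# print(finger("example.com", "lisa"))
```

The protocol itself is a neutral lookup mechanism; as the passage above notes, it is the verb "finger" and the surrounding documentation culture that carry the gendered and sexualized connotations.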
Feminist internet studies scholars have chronicled how science fiction literature
influences these cultural conceptions of online identities and technologies and figures
White men as ideal and empowered users. “Cyberpunk” literature from the 1980s
figures men’s purported escape from their bodies, conveys such gendered erotic
conceptions as “jacking in” to the network, and depicts men controlling technologies
and other people’s bodies. The term “avatar” is employed in online settings and some
cyberpunk literature to describe individuals’ online identities and the relationship
between empowered subjects and controllable objects. Neal Stephenson considers
power inequities and avatar-based computer use in his 1992 novel Snow Crash. He
depicts a setting where “people are pieces of software called avatars” and they communicate in the virtual metaverse (1992, p. 35). However, this is not an equitable setting
where distinctions between people have been effaced. In the novel, off-the-shelf avatars
and the people who use public terminals are ostracized. Stephenson underscores some
of the problems with online identities and communication but he has also tried to
establish his place in an internet and science fiction canon by claiming that he coined
the term “avatar.” Stephenson indicates that it was only after publishing Snow Crash
that he discovered that Randy Farmer and Chip Morningstar used the term to describe
characters in Habitat, which was an early internet setting designed in 1985. Stephenson
does not mention that the term also appears in Poul Anderson’s The Avatar, which was
published in 1978.
Science fiction literature and online settings suggest that there is a correlation
between physical bodies and representations and a sort of trace of the embodied
person is deposited in internet settings. For instance, the name of the Second Life
interface and aspects of the system, which is a massively multiplayer online setting,
correlate individuals’ embodied identities with their site representations. The online
experience is supposed to be the person’s alternative life. This is highlighted when
Second Life uses a twinned representation of an individual to illustrate the question
“Who Do You Want To Be?” In the doubled image, the photograph and the graphical
avatar image have the same features and clothing. The few distinctions between them,
which include differently shaped glasses, emphasize their association. Second Life
further argues, “Your avatar is the digital persona you create and customize. It’s you in
the virtual world—whoever you want to be.” While individuals are promised that they
can be anything, the associated images tend to feature young, thin, and light-skinned
women. Most of the female avatars also sport tight clothing and low cut tops that
reveal their prominent breasts. Second Life thus suggests that individuals want to
be associated with a very traditional set of attributes and practices, including being
feminine women.
The white hand-pointer, which appears as part of the graphical user interface
when individuals mouse over website links and allows people to manipulate digital
documents and files, also acts as an avatar (White, 2006). It supports other online
representations of the body and is correlated with depictions of White people. It
may seem that the white color of the hand is a design convention and is needed to
make the interface hand visible against varied screen backgrounds but the arrow and
cursor are black, outlined in white, and quite visible. Due to these attributes, the white
hand-pointer produces a racial inside and outside as a part of internet and computer
interfaces and engagements. The white hand seems to float over various landscapes
and promises that White people can possess all situations and terrain. The white
hand-pointer is supposed to be a representation of the individual’s options and actions
and thereby presents a raced image of ideal users and renders online identities as
white.
Diane Carr (2003) notes the intense correlation of the individual and the manipulated
representation. Viewers often respond to gaming with such physical gestures as “flinching when their avatar bangs their head” and moving when the avatar changes position.
Carr’s indication that avatars are “our emissaries and, at least to a degree, our doubles,”
which is also conveyed by Second Life, suggests how identification happens when using
and after disengaging from digital media. Online identities persist in systems after individuals have logged off and these representations inform individuals about who they are
and who they should be. While earlier literature about online identities proposed that
they offered opportunities for identity play and exploring different selves, algorithmic
identities and other structuring aspects of systems suggest how these technologies shape
and produce people.
Carr chronicles how people respond to online events with embodied reactions. People’s emotions and physiognomic states are often conveyed through embodied gestures
and tone of voice. These physical expressions are limited in many online engagements.
In written texts, including online communication, nuanced emotions and tone of voice
can be conveyed through such punctuation as exclamation and question marks. Graphic
representations of individuals’ facial gestures are also employed in digital settings to
elaborate on people’s feelings and identities. Emoticons, or emotional icons, are ordinarily produced by using ASCII keyboard characters and allow individuals to convey
aspects of their identity, including their embodied feelings. Classic emoticons were supposed to be read as sideways facial expressions. Scott E. Fahlman is often credited with
proposing ASCII emoticons on a bulletin board discussion in 1982 as a method of distinguishing between serious and joke posts. He proposed that the smiley face be used to
mark joke posts. While emoticon use continues, and in some systems the ASCII characters are translated into visual representations, emoji have also become a common
language for conveying individuals’ ideas, gestures, and feelings. Emoji were developed
in Japan in the late 1990s by Shigetaka Kurita to address the character limitations of cell
phone screens. Emoji were available to international users through add-on applications
but Apple mainstreamed emoji with its iOS 5 smartphone software, which came with
a library of emoji. These representations are supported by a Unicode standard. Popular
use has established white and yellow emoji as the norm and thus the standard kind of
online persona and participant. However, Unicode introduced a range of skin colors for
some emoji in 2015.
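The 2015 skin-tone mechanism works at the level of Unicode code points: each of the five Fitzpatrick modifiers (U+1F3FB through U+1F3FF) can be appended after a human emoji, and a supporting renderer draws the two-code-point sequence as a single recolored glyph, while an unmodified emoji falls back to the default yellow. A short illustration:

```python
import unicodedata

THUMBS_UP = "\U0001F44D"  # U+1F44D THUMBS UP SIGN (default yellow glyph)

# The five Fitzpatrick skin-tone modifiers, U+1F3FB through U+1F3FF.
MODIFIERS = [chr(cp) for cp in range(0x1F3FB, 0x1F400)]

# Appending a modifier yields a two-code-point sequence that renderers
# display as one recolored emoji.
medium_tone = THUMBS_UP + MODIFIERS[2]

print(unicodedata.name(MODIFIERS[2]))  # EMOJI MODIFIER FITZPATRICK TYPE-4
print(len(medium_tone))                # 2 code points, one visible glyph
```

Because the default glyph requires no modifier at all, the unmarked form remains the path of least resistance, which is one technical reason the yellow emoji persists as the norm the passage describes.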
In the early 1990s, Susan C. Herring (1992) considered online communication. She
and other scholars started interrogating claims that the gender and other identity characteristics of participants were imperceptible online, and that users were unconcerned
about traditional identities and treated everyone equitably. These feminists challenged
the promises of empowerment and anonymity that were often linked to internet use
during this time period. Some individuals used and continue to employ the internet
with the idea that personal information cannot be verified and, as Peter Steiner’s 1993
New Yorker cartoon asserts, “On the internet, nobody knows you’re a dog.” Steiner’s portrayal of a dog using a computer and advising another dog about internet anonymity is
repeated in numerous venues and it is one of the reasons that animals continue to be
associated with the computer and internet.
Steiner and other people’s early narratives about online engagement suggested that
people could explore and perform different identities. This promise of anonymity and
equity was never fully realized. Requests for individuals to share their ASL, or age, sex,
and location, were a common occurrence in text-based chat settings. Women webcam operators are similarly barraged with requests to hold up the names of individuals as a means of demonstrating that their video images are being delivered in real
time and that the depicted women are available to viewers. These demands indicate
the value that participants place on traditional conceptions of identity and realness in
online settings. The continued correlation of online identities to individuals’ names
is typified by Facebook’s emphasis on members’ real names and its removal from the
system of Native Americans, drag queens, and other people it deemed to have inappropriate identifiers.
Alice Marwick (2013) has chronicled the cultural shift from people identifying online
identities as a form of play to individuals correlating internet identities with specific
named individuals. Twitter has underscored the link between accounts and named individuals by offering a service to verify the accounts of celebrities, politicians, and other
well-known individuals. The Catfish (dir. Henry Joost and Ariel Schulman, 2010) film
and television show underscore people’s concerns about and the cultural fascination
with individuals who are self-presenting as something or someone else online. There is
a growing emphasis on verifying accounts with phone numbers and credit cards. Individuals may also have different accounts and manage their identities and versions of
the self such as creating finsta (fake Instagram) and rinsta (real Instagram) accounts.
While the articulation of finsta and rinsta may seem to suggest that there is a real and
inauthentic version of the individual, people often use finsta accounts to post more personal and embarrassing images and information.
Julian Dibbell’s 1993 Village Voice article, which considered “A Rape in Cyberspace,”
offers a critical account about experiences of online identity and embodiment.
He describes how textual enactments of sexual violence in the LambdaMOO programmable setting affectively influenced individuals. Dibbell chronicles how responses
to a series of graphic descriptions of participants being raped resulted in the development of community governance and, more disturbingly, in increased interest in the
system, including academic research. Articles on feminist news websites chronicle
women’s experiences with violent threats and having personal information posted in
internet settings, which is known as doxing and is a continuation of the behaviors
Dibbell describes. For instance, feminist journalists have chronicled a man’s use of
Twitter to inform reporter Amanda Hess that he was going to rape and decapitate her.
Feminist scholar Emma A. Jane (2016) argues that there has been too little feminist
research on such violent and oppressive practices, which are designed to constitute
women as vulnerable and too outspoken. Some women’s current and former domestic
partners use the internet as a means of conveying threats and instigating groups
of individuals to harass women. The goal of the men who employ these aggressive
practices is to make women’s online identities unbearable and to silence the very
feminist reporters and bloggers who address these issues.
Donna Haraway’s (1991) feminist articulation of the cyborg has challenged how we
understand the relationship between bodies and technologies and the characteristics
of identity and individuality. While Haraway’s 1985 manifesto proposes an intersectional politics and way of revisioning identity, community, romantic and reproductive
unions, and origin stories, the text is often employed in considering cyborgian connections with machines. Haraway proposes that individuals are experiencing border
and identity breakdowns and the purported distinctions between animal and human,
animal-human organisms and machines, and physical and nonphysical are ruptured.
She employs feminist and queer science fiction literature to refuse the idea that these
connections need to be contained. Feminist theory, including Haraway’s conception of
the cyborg, is employed in Theresa M. Senft’s (1996) introduction to an edited journal
issue on sexuality and cyberspace. Senft resists indications that people are liberated by
online gender performances and instead focuses on the ways technologies materialize
and efface bodies, including cultural and medical conceptions of women and the disabled. Senft’s interest in researching the internet is correlated with her mother’s illness
and encouragement for her mother to labor along with the ventilator and machine that
are sustaining her life. Senft thus articulates the intimate ways individuals are produced
through and connected to varied technologies.
Individuals produce their identities through online engagements and are produced
by them. Individuals are also identified as valuable commodities when online businesses can sell information about their members and offer algorithmic information
about these participants’ behaviors and buying habits. Thus, verified and tracked online
identities have a value for companies and sites. People also support the values and worth
of companies by developing brand identities and communities. Individuals labor for
corporations when they produce content for sites, wear branded goods, speak enthusiastically about products, reenact rituals associated with the company, and declare their
allegiance to the product and other brand community members. For instance, individuals extend Apple’s brand and identity when the Apple icon appears along with
their computer use. People also express a composite or cyborgian identity when they
deeply identify with the product and other members of the brand community. They are
attached to the company and device when they keep mobile phones in their hands and
cannot leave branded technologies behind. People’s practices can support notions of the
individual and their behaviors can have more monstrous implications when they do not
distinguish between the self and companies, products, technologies, and other individuals. Thus, people’s online identities can support traditional beliefs and structures and
they can challenge cultural investments in normative values and embodied positions.
SEE ALSO: Gendered Hate Online; Gendered Identities Online; Online Abuse and
Harassment; Sexism and Misogyny
References
Carr, D. (2003). Play dead: Genre and affect in Silent Hill and Planescape Torment. Game Studies: The International Journal of Computer Game Research, 3(1). Retrieved from http://www.gamestudies.org/0301/carr
Dibbell, J. (1993, December 21). A rape in cyberspace; or, how an evil clown, a Haitian trickster
spirit, two wizards, and a cast of dozens turned a database into a society. Village Voice, 36–42.
Everett, A. (2002). The revolution will be digitized: Afrocentricity and the digital public sphere.
Social Text, 20(2[71]), 125–146.
Foucault, M. (1995). Discipline and punish: The birth of the prison. New York, NY: Vintage.
Haraway, D. (1991). Simians, cyborgs, and women: The reinvention of nature. New York, NY:
Routledge.
Herring, S. C. (1992). Gender and participation in computer-mediated linguistic discourse. Paper
presented at the annual meeting of the Linguistic Society of America, Philadelphia, PA.
Jane, E. A. (2016). Online misogyny and feminist digilantism. Continuum, 30(3), 284–297.
Marwick, A. (2013). Online identity. In J. Hartley, J. Burgess, & A. Bruns (Eds.), A companion to
new media dynamics (pp. 355–364). Malden, MA: Blackwell.
Nakamura, L. (2002). Cybertypes: Race, ethnicity, and identity on the internet. New York, NY:
Routledge.
Senft, T. M. (1996). Introduction: Performing the digital body—A ghost story. Women and Performance: A Journal of Feminist Theory, 9(1), 9–33.
Stephenson, N. (1992). Snow crash. New York, NY: Bantam Books.
White, M. (2006). The body and the screen: Theories of internet spectatorship. Cambridge, MA:
MIT Press.
Further Reading
Baym, N. K. (2015). Personal connections in the digital age. Cambridge, UK: Polity Press.
Donath, J., & Boyd, D. (2004). Public displays of connection. BT Technology Journal, 22(4), 71–82.
Flanagan, M., & Booth, A. (Eds.). (2002). Reload: Rethinking women + cyberculture. Cambridge,
MA: MIT Press.
Hillis, K. (2009). Online a lot of the time: Ritual, sign, fetish. Durham, NC: Duke University Press.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York:
NYU Press.
Papacharissi, Z. (Ed.). (2010). A networked self: Identity, community, and culture on social network
sites. New York, NY: Routledge.
Ruberg, B. (2017). What is your mother’s maiden name? A feminist history of online security
questions. Feminist Media Histories, 3(3), 57–81.
Tiidenberg, K., & Cruz, E. G. (2015). Selfies, image and the re-making of the body. Body & Society, 21(4), 77–102.
Michele White is professor of internet and new media studies in the Department
of Communication at Tulane University. Her monographs include The Body and
the Screen: Theories of Internet Spectatorship (MIT Press, 2006); Buy It Now: Lessons
from eBay (Duke University Press, 2012); Producing Women: The Internet, Traditional
Femininity, Queerness, and Creativity (Routledge, 2015); and Producing Masculinity: The Internet, Gender, and Sexuality (Routledge, 2019). She is currently writing
Touch/Screen/Theory. Some of her ongoing research interests include online narratives
about expertise, internet aesthetics and beauty cultures, and how notions of touching
and feeling are employed to elide computer interfaces and representations.
