UT Asking Questions & Making Comments Based on Readings Paper
Pose a question, share a quote, or make a comment about something you found interesting, confusing, etc. in each of the readings that I provide.
Did you find it interesting? Boring? Outdated? Prescient? Contemporary? Relevant? Alarmist? Confusing? Impenetrable? Condescending? Persuasive? Why or why not? Feel free to draw connections between an idea or example from the reading and your own experiences, either to confirm or challenge what the author(s) have argued, or to express confusion or disagreement with the substance, form, or even style of the readings.
THE INFORMATION SOCIETY
2016, VOL. 32, NO. 5, 318–325
http://dx.doi.org/10.1080/01972243.2016.1212616
Superhero fan service: Audience strategies in the contemporary interlinked
Hollywood blockbuster
Bart Beaty
Department of English, University of Calgary, Calgary, Canada
ABSTRACT
This article explores the specific textual strategies employed by Marvel Studios to construct insider
and outsider audiences of the interlinked film series comprising the Marvel Cinematic Universe. It
argues that Marvel Studios is borrowing storytelling strategies developed by Marvel Comics in the
1960s as a means of growing audiences for film franchises in a modular fashion. These strategies
include the use of anticipatory postcredit sequences that serve as advertising teases for future
releases; “Easter eggs,” or semi-hidden onscreen elements intended to be noticed only by certain
viewers; crossovers, or the use of characters from one franchise in the film or television program of
other characters; linked repercussions, narrative consequences that play out across multiple media
properties; and modular story development, a development strategy intended to reduce economic
risk. It is noted, ironically, that the strategies that have led to the current fascination with superhero
films are the same ones that caused the collapse of interest in superhero comic books.
ARTICLE HISTORY Received 24 September 2014; Accepted 25 April 2016
KEYWORDS Serial continuity; superhero comic books; superhero movies; superheroes
The central paradox of superheroes today is not that they
have never been both more popular and less popular;
rather, it is that the very thing that has made them so popular is the thing that has made them so unpopular. To
unpack this conundrum, consider that at a time when
superhero movies have become the dominant economic
driver of the Hollywood film industry—selling hundreds
of millions of tickets and generating billions of dollars in
revenues—the sales of superhero comic books are reaching their historic nadir. Even the top-selling superhero
comic books today struggle to sell 100,000 copies, with
the typical titles selling less than half of that number.
Although, due to their relatively low production costs,
superhero comic books have remained generally profitable for their publishers (Marvel Comics, DC Comics,
Image Comics, and a number of smaller rivals), it has
become clear that they are no longer a significant popular cultural phenomenon in their own right; rather,
superhero comic books have become the source material
for more lucrative media forms ranging from movies
and television, to video games and licensed properties
like toys and t-shirts. At a time when the superhero has
moved to the center of global popular culture, so-called
“geek chic” has become increasingly prominent, establishing new social relations among audiences for this cultural form even as the central medium of American fan
culture fades quickly into irrelevance.
As Benjamin Woo has noted, the social spaces of
comic book fandom have long been structured by hierarchies of taste that are played out at the level of fan knowledge (Woo 2011). In various physical and online fan
communities, cultural capital is accrued through (among
other things) mastery of the arcane backstories that organize the fictional collaborative worlds inhabited by
superhero characters. The insider/outsider relationship
studied by Woo is frequently played as a distinction
between “hardcore” and “casual” audiences. Often deeply
gendered (particularly given the heightened visibility of
female fans over the past decade), the tension between
these social groups is one of affective intensity—the
hardcore superhero fan base (which is comprised of only
a few hundred thousand members worldwide) is the one
with a deep and abiding interest in superhero comic
books (and, by extension, films), while the casual audience (which numbers in the tens of millions) is frequently assumed to be solely interested in superhero
films, television, and licensed products. Frustration in
the hardcore superhero fans is often expressed toward
the casual audience (as in criticism of “fake geek girls”)
or toward cultural creators, who structure superhero
films around different continuities and characterizations
than the ones that predominate in the source material—
comic books. Generally speaking, this frustration is unidirectional. Hardcore superhero fans often resent casual
fans for their lack of deep involvement (the casual fan
“hasn’t done the work” of fandom). Casual fans, by way
of contrast, do not resent the hardcore; when they
acknowledge them it is usually to make light of their fannish knowledge and their affective intensity (see, e.g., the
treatment of fandom on the American television program The Big Bang Theory).
The insider/outsider dynamic that structures contemporary superhero fandom has its origin in the long,
slow decline of interest in superhero comic books and
the explosive rise of interest in superhero films. While
superhero titles regularly sold millions of copies per
month in the 1940s, and many hundreds of thousands
of copies as late as the 1990s, their economic decline
has been precipitous over the course of the past two
decades. Reasons for this are varied, but a prime cause
has been the ever-increasing complexification of superhero storytelling that has narrowed the audience to only
the most committed readers. Superman and Batman,
for example, have had their monthly adventures published for more than 75 years, amounting to literally
thousands of stories—all of which are, to a greater or
lesser degree, considered to be part of the “canon.”
Moreover, since the mid 1980s, superhero publishers
have frequently resorted to gimmick-driven sales events
in which “universe-wide” stories are told that have
repercussions across the titles of dozens of characters.
In this way, Superman’s adventures include not only
thousands of stories in which he is the featured character, but additional thousands in which he is a supporting character.
Given the vastness of this narrative, it is perhaps
unsurprising for publishers to discover that the extensive
backstories of their characters are off-putting to new
entrants into the field (casual readers). At the same time,
however, superhero films since about 2000 have increasingly and deliberately adopted this exact same story-building strategy (interlinked and extensive continuity
between titles) as a way of cross-marketing films (in the
X-Men and Wolverine franchises of Fox Studios, or,
more pointedly, in the Marvel Cinematic Universe films
that feature the adventures of Iron Man, Hulk, Thor,
Captain America, Ant-Man, and The Guardians of the
Galaxy in seven distinct but interconnected film franchises), albeit on a smaller scale and at a slower velocity
(seven films in 16 years for the X-Men and 12 films in
8 years for Marvel Studios, as opposed to the hundreds
of monthly titles published by Marvel or DC Comics
over the course of a single year). This storytelling strategy, arguably the most innovative development in Hollywood filmmaking of the past quarter century, has been
borrowed directly from comic book publishing strategies
developed originally in the 1960s.
Ironically, the very techniques that have conspired to
make Marvel Studios the most successful production
company in contemporary cinema are the exact same
ones that rendered Marvel Comics a marginal publishing
presence in the same era. This article focuses on the specific developments that have brought this to pass, the
narrative characteristics that it has created, including
postcredit scenes and linked repercussions, and the new
social relations that are likely to be one of the results.
Continuity and the shared universe in superhero
comic books
Continuity is central to all forms of ongoing narrative
development. In television, for instance, the daytime
soap opera format has long utilized a highly complex
form of continuity in which the life stories of a small
number of characters are shared over a period of decades. Because of this, soap operas are closed to new viewers. Producers seek to mitigate this problem through
repetition and slower-than-life pacing (Modleski 1983).
On the other end of the spectrum, certain sitcoms have a
tendency toward only a low level of continuity. In the
1950s and 1960s, sitcoms were constructed in such a way
that it was not necessary to watch some, or even most,
episodes to appreciate any specific episode. Frequently,
the necessary plot elements were explicitly recounted in
the show’s theme song (Gilligan’s Island, e.g., has a
theme song that lays out the entire backstory of the program, and introduces all of the characters and their relationships), allowing viewers to begin watching at any
time. Contemporary American television programming
has tended to integrate at least some level of continuity
even into episodic television (Friends has the series-long
through-line of the Ross/Rachel romantic relationship,
but most individual episodes can stand alone).
In American superhero comic books, the notion of
continuity developed over time. From the introduction
of the genre in 1938 through approximately 1960, there
was only a very low level of continuity in the majority of
superhero comic books, and individual stories were
crafted to be self-contained. Although characters like
Batman and Superman regularly interacted across the
so-called DC Universe, their stories were not integrated
in any meaningful way. Each story reset the relations
between characters, and specific knowledge of previous
events was never required to understand any given published story. One notable exception was the use of the character Mister Mind (a tiny alien inchworm) as a villain in
the adventures of Fawcett’s Captain Marvel from 1943 to
1945, wherein an ongoing story unfolded in a manner
akin to the cinematic adventure serials of that period. As
a narrative and industrial strategy, continuity, or the lack
of same, carries significant risks and rewards. Cultural
objects with a low degree of continuity are relatively
open—audiences can join at any time, and the audience
has the potential to grow easily—while at the same time
the low level of investment can allow the works to fade
quickly. Those with a high degree of continuity are relatively closed and often keep potential audiences from
joining unless they are willing to start from the beginning of
the narrative. That potentially limits the audience but
also generates a more affectively engaged one (consider,
e.g., how few people casually watch random episodes of
highly continuity-driven shows like Breaking Bad or
Lost).
For the most part, however, American superhero
comic books eschewed continuity during the immediate
post-World War II period, opting rather to feature
stand-alone stories that could reach the broadest possible
audience, which was generally assumed to be both young
and highly transient. One of the major innovations introduced to the superhero comic book by Marvel Comics in
the 1960s was the notion of heightened continuity within
a shared narrative universe. As early as Fantastic Four #4
(May 1962), Marvel ran announcements at the bottom
of its pages revealing that “The Hulk Is Coming!” in
anticipation of the launch of a new title that would debut
the following month, and in Fantastic Four #6 (September 1962), the title introduced footnotes referencing earlier appearances of the issue’s villains, Dr. Doom and the
Sub-Mariner. The first issue of The Amazing Spider-Man
(March 1963) had the character interacting with the Fantastic Four, and characters from that title appeared in
issues 5, 17, and 19. Charles Hatfield has argued that the
“tight fictive continuity” in Marvel Comics was developed ex post facto and not from a deliberate publishing
strategy, citing the lack of spin-off titles for characters
introduced in the pages of other titles (Hatfield 2013).
Nonetheless, from at least the introduction of Spider-Man in 1963, it was clear that Marvel Comics, and
writer–editor Stan Lee in particular, conceptualized the
Marvel Universe as a shared storytelling space in a much
different manner than had publishers like DC Comics
and Fawcett.
Hatfield correctly notes that the increasingly heightened levels of continuity at Marvel (and, later, DC) were
a function of the influence of organized superhero fandom. Hatfield specifically cites the influence of Roy
Thomas (a fan who became a staff writer in 1965, and
later became editor-in-chief), Peter Sanderson (hired as
Marvel’s archivist), and Mark Gruenwald (a fan who
rose to the level of executive editor). While Marvel Comics had always had a strong soap operatic element (in
Fantastic Four, the courtship of Reed Richards and Sue
Storm), it was increasingly heightened in the 1970s, and
complicated by a tendency to feature characters in multiple titles (Spider-Man in both The Amazing Spider-Man
and Peter Parker, The Spectacular Spider-Man beginning
in 1976). By the 1980s universe-wide storytelling was
increasingly the norm, with company-wide crossovers
that affected vast numbers of titles (the twinned “maxi-series” Marvel Super Heroes Secret Wars from May 1984 to
April 1985 at Marvel Comics and Crisis on Infinite
Earths from April 1985 to March 1986 at DC Comics
ushered in the model of heightened continuity complexity). By the 1990s, popular characters like the X-Men
and Spider-Man were featured in multiple monthly titles whose continuities were fraught with contradiction. Fantastical elements ranging from time travel to
cloning allowed creators to craft increasingly baroque
story lines that ended up limiting audience growth by
frustrating comprehensibility.
While the causes of declining superhero comic book
sales are complex (competition from other forms of
media, multiple economic recessions, significant changes
in distribution, and the shifting priorities of publishers,
to name but a few), it is nonetheless clear that a strong
correlation exists between the intensification of cross-company continuity in superhero stories and declining
sales of the titles as publishers increasingly catered to a
small, dedicated fandom rather than a broad, casual
readership. During the 1960s, monthly sales of Superman
(one of the few titles that stretches back all the way to the
1930s) regularly topped 800,000 copies. By the early
1980s, that figure had declined to only 200,000 copies
(Comichron 2015), and in a post-Crisis on Infinite Earths
context sales fell even more quickly. Indeed, a chart of
Superman sales from the 1940s to 2010 is an almost vertical line downward that depicts a catastrophic decline in
the fortunes of the title. Christian Hoffer has recently
studied the length of time between major DC Comics
reboots of the franchise (Hoffer 2015). Prior to Crisis on
Infinite Earths, the DC Universe had remained largely
unchanged for 293 months—almost two and a half decades. During this period, sales declined slowly but inexorably over time. Since that time, however, DC has
published Zero Hour (1994), Infinite Crisis (2005–2006), 52 (2006–2007), Flashpoint (2011), Convergence (2015), and other, smaller continuity-altering
storylines. As Hoffer notes, over the past three decades
DC has rebooted its universe on average every 6 years,
but in the decade since Infinite Crisis that has
fallen to every 40 months. During this period, sales of
their comics have plummeted. In a 2001 essay, Matthew
McAllister noted that comic book sales in 1997 (which
were overwhelmingly but not exclusively sales of superhero comics) had fallen to $425 million from
$850 million in 1993 (McAllister 2001). In 2013, The
New York Times reported that the comics market had
risen to $870 million in sales (Gustines 2014). Notably,
this figure included revenues from two new sources:
$90 million in digital sales (“Digital Comics”) and
$96 million in sales through traditional booksellers
(Hibbs 2014); both of these additional revenue streams
include significant conversions of publishers’ back catalogues rather than the sales of new, in-continuity comic
books. Nonetheless, even accounting for significant new
revenue streams, in inflation-adjusted dollars the comics
industry has shed half a billion dollars worth of annual
sales since 1993, when the continuities of corporate-owned superheroes entered into their baroque phase. In the 2000s, it has not been uncommon for certain cross-company special events to play out in literally hundreds
of issues and dozens of titles, even while monthly sales
fall to historic lows.
The development of the superhero film
Although the heroes of American comic books migrated
to the silver screen in the 1940s with serials featuring the
exploits of Superman, Batman, and Captain Marvel, the
first wave of big-budget Hollywood superhero films did
not begin until Superman (1978), starring Christopher
Reeve as the man from Krypton. The subsequent Superman franchise, currently encompassing six films, is
symptomatic of the complex web of approaches to continuity that can exist within the framework of a single film
franchise. The first Superman movie was a stand-alone
film. The story it tells is complete in itself, and there
need not have been any additions to the story to attain
narrative closure. Had the film failed at the box office
and the franchise been abandoned, the film itself would
have been sufficient. Superman II (1980) is a direct
sequel in close continuity with the first film, and parts of
it were filmed at the same time as the earlier work. While
enough of the action is recapped so as to allow viewers
unfamiliar with the original access to the story, it is a
clear continuation with the same cast and characters.
The third film (1983) bears only a small connection to
the previous two, adopting a different tone and new
threats, and retaining only a fraction of the original cast.
Superman IV: The Quest for Peace (1987) returns Gene
Hackman’s Lex Luthor to the storyline and continues the
continuity of the earlier films. While there is a continuity
across the four films, it is loose enough to allow the individual films to stand and fall on their own merits, and
the series is not constructed as a tight narrative in multiple parts (as are the Star Wars films, to use a contemporaneous example). With the declining fortunes of the
franchise, Superman was put on hiatus for nearly two
decades, resurrected in 2006 with Superman Returns
starring Brandon Routh. This film is a clear and strong
example of a “retcon,” or retroactive continuity change.
Retcons are accidental or deliberate shifts common in
superhero, science fiction, and soap opera storytelling in
which previously established narrative facts (or canon)
are altered by subsequent developments. The events of
Superman Returns, notably, take place after those of
Superman II, but proceed as if neither Superman III nor
Superman IV (and, indeed, large parts of Superman and
Superman II) had occurred. In this way, Superman
Returns sought to deliberately invalidate two of the previous Superman films, removing them from the canon of
the storyline. When this film did not perform to studio
expectations, the franchise was rebooted a second time
in 2013 with Man of Steel, starring Henry Cavill. Akin to
a remake, the reboot simply serves to begin the series
afresh, proceeding as if none of the previous films exist
in narrative terms.
A similar, but less complex, system defines the multiple series of Batman movie franchises. The series of four
films begun by director Tim Burton in 1989, and initially
starring Michael Keaton, proceeds, like the Superman
series, as loosely connected sequels, but does not tell an
ongoing or complete story. When the franchise was
rebooted by Christopher Nolan in 2005 it was as a three-part story cycle that largely revolves around Batman’s
relationship with Jim Gordon and Rachel Dawes. The
Nolan series proceeds as if the series created by Burton
does not exist, and both assume that the 1960s television
series has no particular influence on the development of
their story structures. While the Burton-initiated Batman
series has a very loose continuity across its four films, the
Nolan trilogy is far tighter, despite the numerous contradictions that are introduced into his series through poor
plotting. Nonetheless, while the Batman films move
from a situation of low to high continuity, even the
Nolan films eschew the kind of complex play with continuity that has become the hallmark of the films produced by Marvel Studios.
Continuity, the shared universe, and fan service
in superhero cinema
The strategy adopted by Marvel Studios in creating what
is now widely known as the Marvel Cinematic Universe
deliberately draws upon many of the innovations introduced into comic book storytelling by Marvel Comics in
the early and mid 1960s. Importantly, while the film
franchises involving Iron Man, Thor, Captain America,
The Hulk, Ant-Man, the Guardians of the Galaxy, and
the television series Marvel’s Agents of S.H.I.E.L.D. and
Marvel’s Agent Carter are closely interlinked, they can
also be understood as distinct entities. In this way,
Marvel Studios has crafted a modular system in which
parts of the whole can be emphasized and deemphasized
as circumstances warrant. Anticipation is created for
future films to the extent that each film not only is an
event/text in itself, but serves as a promotional tool for
future events/texts. The elements that serve this promotional function variously address hardcore and casual
audiences in different ways, establishing a hierarchy of
knowledge, connection, and intimacy within the consumer base to bring about the conversion of casual viewers into deeply committed hardcores. A number of
specific strategies have been used by Marvel Studios
toward this end, including postcredit scenes, Easter eggs,
crossovers, linked repercussions, and modular story
development:
1. Postcredit scenes. While postcredit scenes have
been a relative commonplace in American filmmaking for decades, particularly in comedies (e.g.,
Ferris Bueller’s Day Off), they have been widely
used in the Marvel Cinematic Universe since Iron
Man (2008). In that film, the postcredit sequence
introduces the central character of S.H.I.E.L.D.
director Nick Fury (Samuel L. Jackson), who
approaches Tony Stark (Robert Downey, Jr.) to discuss “the Avengers initiative.” Released only a
month later, The Incredible Hulk (2008) featured
an ending in which Downey, in a cameo as Tony
Stark, is introduced to discuss the Avengers. The
appearance of Downey in the second film firmly
established the continuity between them, in the
same way that early issues of the Fantastic Four
featured the rampaging monster as part of the
same world as the superhero family. Since those
initial films, postcredit sequences have increasingly become one of the hallmarks of the studio.
The arrival of the first Thor film was foreshadowed
by the discovery of his hammer, Mjolnir, at the
end of Iron Man 2 (2010), while Guardians of the
Galaxy was introduced with a cameo featuring The
Collector (Benicio Del Toro) at the conclusion of
Thor: The Dark World (2013). Some films in the
series have directed attention to presumed sequels
within the specific title: Captain America: The Winter Soldier (2014) hints at the future direction for
the character Bucky Barnes; The Avengers (2012),
which has both a midcredits scene and a postcredit
scene, introduced the character of Thanos, a major
villain in both Guardians of the Galaxy and, presumably, in future Avengers films. To date, most
postcredit scenes in the Marvel Cinematic Universe
have been forward-looking, promoting future
releases, although Guardians of the Galaxy, with its
appearance of cult character Howard the Duck,
may have been more clearly fan service than a
promise of material to come. Importantly, some
postcredit scenes also serve to segregate the audience into insider/outsider groups.
Thor: The Dark World, for example, introduced a
character who had not appeared in that film, and
who was unlikely to be familiar to anyone who had
not read extensively in the Marvel Comics
universe.
2. Easter eggs. The insider/outsider divide is heightened considerably by the tendency of Marvel Studios to introduce “Easter eggs,” or semi-hidden
visual clues to potential future plot directions. This
tendency began with Iron Man, in which a version
of Captain America’s shield is briefly visible in
scenes set in Tony Stark’s laboratory, and has continued throughout the development of the universe. These elements are innocuous to audience
members who do not catch them, but are important elements for more knowledgeable members of
the crowd. Thus, in Captain America: The Winter
Soldier when Agent Sitwell informs the hero of
Hydra’s plan to eliminate humans that they perceive to be potential threats, he enumerates Tony
Stark and Bruce Banner of The Avengers, but also
Stephen Strange, the alter ego of Dr. Strange, the
hero of a then long-rumored but unannounced
Marvel Cinematic Universe feature film.1 Guardians of the Galaxy is especially replete with Easter
eggs; in the scene in The Collector’s museum, hardcore fans believe that they have spotted both Adam
Warlock’s cocoon and the body of Beta Ray Bill.
The logic of these Easter eggs, whose appearances
are so brief as to be almost subliminal, is to activate
the imagination of the most dedicated readers and
to reward their brand loyalty and story knowledge.
The insider/outsider relationship is formed around
the ability to recognize obscure and often trivial
relationships, many of which may never be developed in a meaningful way.
3. Crossovers. Marvel Comics made the crossover a
hallmark of its storytelling as early as 1963. The
narrative approach has obvious mercantile attractions. A fan of the Fantastic Four may have no
interest in the adventures of Spider-Man, but might
be tempted to buy an issue of The Amazing Spider-Man that features a guest appearance of that fan’s
favorite superhero team. This not only temporarily
increases sales of The Amazing Spider-Man, but
opens the possibility of converting readers of one
title into readers of multiple titles. Not only do the
appearances of characters across titles help establish the sense of a shared universe, but they also
reinforce the necessity of buying all Marvel comic
books, or seeing all of the Marvel films. To date,
for example, Samuel L. Jackson has appeared in
seven Marvel Cinematic Universe films and in the
television show Marvel’s Agents of S.H.I.E.L.D., and
his appearances across the franchises (with the
exception of The Incredible Hulk and Guardians of
the Galaxy) are what most clearly connect them as
a series. More recently, Marvel Studios has
focused on the use of cameo appearances by the
characters across the franchises. Loki briefly takes
the form of Chris Evans’s Captain America while
walking with Thor in Thor: The Dark World, and
Mark Ruffalo appears as Bruce Banner in the postcredit sequence in Iron Man 3 to pay off a joke.
One effect of the use of crossovers in the Marvel
Cinematic Universe has been a tendency for fans
to draw attention to what might be only potential
crossovers. Thus, the actress Laura Haddock has a
2-second appearance in Captain America: The First
Avenger and plays the more significant role of Meredith Quill in Guardians of the Galaxy, leading
some to surmise that the unnamed autograph
seeker from the former will eventually be revealed
as the mother of the lead character in the later
franchise.
4. Linked repercussions. Perhaps the most important
narrative element of the Marvel Cinematic Universe is the notion of linked repercussions across
the elements. As these films take place in a shared
universe, events in one work have an impact on
characters in the others. Notably, the invasion of
New York in The Avengers is referred to in subsequent Thor and Captain America movies, is the
source of much of the drama in Marvel’s Agents of
S.H.I.E.L.D., and has a significant psychological
impact on Iron Man in the third film in that franchise, where he suffers from a form of posttraumatic stress disorder as a result of his near death in
the earlier film. It is also the central factor in Tony
Stark’s decision making in Avengers: Age of Ultron.
Marvel Studios has used the idea of linked repercussions to especially drive the action of Marvel’s
Agents of S.H.I.E.L.D. In one first-season episode of
that program, the agents clean up London after the
battle that unfolded there in Thor: The Dark World,
and when the agency is revealed to have been corrupted by Hydra in Captain America: The Winter
Soldier it is shown collapsing in the television
series. Notably, there is an exceptionally close connection between the latter film franchise and the
television show. Airing on Tuesday evenings, the
episode of Marvel’s Agents of S.H.I.E.L.D. dealing
with the Hydra revelation aired only 4 days after
the release of the second Captain America movie,
meaning that fans who had not seen that film during the first weekend of its release were likely to have the ending spoiled by viewing the
television show. In this way, Marvel Studios has
made it clear that the television show is not secondary to the films, but is tightly linked in continuity and an essential piece of the overall picture
being developed.
5. Modular story development. More an industrial factor than a narrative one, modular development has been a central element of the approach adopted by Marvel Studios. As
they are creating a series of linked films with
extraordinarily large budgets—in the hundreds of
millions of dollars for production and marketing—
the level of risk is considerable. While its $300+
million domestic box office returns made it the biggest hit of the summer 2014 movie season, Guardians of the Galaxy was originally perceived to be the
riskiest project that Marvel Studios had taken on
since the original Iron Man film—both because it
was an action comedy, and because it was based on
characters that were never particularly popular
even within organized comics fandom, and none of
the characters were familiar at all to casual audiences. By making Guardians of the Galaxy only tangentially related to the core Marvel Cinematic
Universe titles, Marvel Studios crafted a film that
could potentially be left behind had it not proved
so popular. Notably, The Incredible Hulk (the lowest grossing film released by Marvel Studios to
date) has been almost retconned out of existence.
Not only was the actor playing Bruce Banner/
The Hulk changed from Edward Norton to Mark
Ruffalo, but the events of the film have not been
incorporated into later films, except passingly in
The Avengers; planned sequels were dropped in
favor of more lucrative franchises and the character
of Betty Ross has been abandoned. The modular
nature of the series development means that the
studio can route around failure and develop unexpected successes. When it widely surpassed studio
expectations, Guardians of the Galaxy was not only
confirmed for a sequel, but will likely be more
forcefully integrated into the main storyline featuring the Avengers moving forward.
All of these elements work to construct a conception
of a shared fictional universe across a large number of
independent texts by developing an active fandom
around a set of texts. Again, this is a technique
completely familiar to readers of Marvel comic books,
which at one time worked to draw sharp distinctions
between Marvel products and those of the main competitor, DC Comics. Indeed, the stereotype of the “Marvel
Zombie” was a commonplace in the 1980s and 1990s,
used disparagingly to refer to superhero comic book fans
who read primarily—or even exclusively—the products
of Marvel Comics. These hardcore fans, who are now among the most knowledgeable fan base for the Marvel films, provide a model for the committed insider audiences that Marvel Studios seeks to cultivate, following the example of Marvel Comics. To do so, a series of narrative “rewards” is established in the films in a process known as “fan service.” This term, originating in the Western fandom for
Japanese manga, refers to the tendency of cultural creators to provide fans with story elements that they long to
see—to cater unabashedly to an audience’s expressed
desires. Typical examples of fan service in comics include
highly detailed images of robots and other forms of technology, or strongly eroticized and sexualized elements
akin to what Laura Mulvey has termed “visual pleasure”
in the domain of cinema studies (Mulvey 1975). In the
Marvel Cinematic Universe this type of fan service is
quite common, particularly when the muscled bodies of
the series stars are displayed. Yet an expanded notion of
fan service is a useful way to denote textual elements that
reward high levels of engagement with the franchise and
with its source materials. The frisson of excitement that
is generated in a knowledgeable fan when she spots
Cosmo the Spacedog in Guardians of the Galaxy, for
example, is a reward reserved for hardcore fans who can
be flatteringly positioned as connoisseurs or opinion
leaders within organic fan communities. That each Marvel Cinematic Universe film is greeted with dozens of
articles with titles like “Guardians of the Galaxy: All the
Easter Eggs REVEALED!” or “Sixteen Captain America:
The Winter Soldier Easter Eggs” is a way of training
audiences in the “proper” method of engaging with these
texts. As Marvel leaves certain plot points unexplained
within the films themselves (who is Peter Quill’s father?
who is the menacing cosmic character at the end of The
Avengers?), casual fans are encouraged to search for
answers to these questions—either by surfing the Internet or by turning toward more knowledgeable fans
amongst their acquaintance.
Complications and conclusions
The overwhelming economic and critical success of the
Marvel Cinematic Universe has had a profound impact
on the development strategies of Hollywood generally.
Now the most lucrative film franchise of all time (having surpassed the Harry Potter franchise with the release of Avengers: Age of Ultron, and notably ahead of James Bond, the Tolkien universe films, and Star Wars), the Marvel Cinematic Universe has seen its approach adopted by other Hollywood players.
Notably, the rights to several Marvel Comics characters
are owned by non-Disney studios. Spider-Man, owned
by Sony, is now in its second cycle of storytelling, having
been rebooted in 2012 to mixed reviews. Sony
announced elaborate plans to develop at least four Spider-Man films, plus spin-offs featuring The Sinister Six,
and, potentially, Venom and The Black Cat, but later
reached an agreement with Marvel to fold Spider-Man
into the Marvel Cinematic Universe beginning with Captain America: Civil War in 2016. The X-Men, whose film
rights are owned by 20th Century Fox, have appeared in
seven films since 2000, five under the X-Men name and
two featuring the solo exploits of Wolverine, which are
tied to the core continuity. In 2011, the franchise was
given a soft reboot with X-Men: First Class (whose action
was set during the Cuban Missile Crisis of the 1960s),
and then integrated with the earlier films in 2014’s
X-Men: Days of Future Past, with the promise of additional films to come. Fox has also hinted at integrating
The Fantastic Four into the narrative universe occupied
by the X-Men. Finally, Warner Brothers has plans to
directly mimic the success of The Avengers by building
to its own Justice League film, featuring Batman, Superman, Wonder Woman, and other heroes from the DC
Universe. This series will build on the foundation of
Christopher Nolan’s Batman trilogy and Man of Steel,
and the strategy took its first step in 2016 with the poorly
received Batman v Superman: Dawn of Justice.
While the interconnected film cycle is quickly becoming a Hollywood staple, the approach carries a great deal
of risk. Obviously, the financial implications of unpopular films in a franchise of this nature can be severe—
Sony slowed the rate of Spider-Man universe films when
Amazing Spider-Man 2 was poorly received. From a narrative standpoint, it is clear that franchises built without
a clear narrative pathway can be highly compromised.
Certainly the clearest example of this is the X-Men franchise, whose narrative continuity after seven films is
hopelessly flawed. Fox has created dozens of narrative
holes in its universe, including killing off Professor Xavier in the third X-Men film only to have him resurrected
with no explanation in The Wolverine, and the fact that
the same character is played in different films as an
imposingly large African-American man (Bill Duke) and
by Peter Dinklage, who is notable for his short stature.
Fox has proceeded with a continuity in which characters
have irreconcilable ages and races because they seemingly perceive these issues to be unimportant to the central thrills presented in their films—and they may be
right to do so: A foolish consistency is the hobgoblin of
little minds, as we learned from Ralph Waldo Emerson.
Marvel Studios has similarly been challenged by shifting
circumstances, notably replacing one of the lead actors
in the Iron Man franchise (Terrence Howard) with
another (Don Cheadle), and publicly fretting about the
possible departure of Robert Downey Jr. from the franchise. Moreover, the Marvel Cinematic Universe has
been plagued by the kind of illogic that enters into an
ongoing franchise when storylines are allowed to grow
complex. Why, for example, do none of The Avengers
assist Iron Man in saving the life of the American president in Iron Man 3? If S.H.I.E.L.D. was controlled by
Hydra for decades, as we learn in 2014’s Captain America: The Winter Soldier, why, in The Avengers, did Hydra
ever bring together a team of super-beings who might be
called upon to defeat them in the future, and why did
they ever promote the impossibly competent and incorruptible Nick Fury to a position of authority? These, of
course, are the questions asked by the hardcore fan, who
hopes to have insider knowledge rewarded with a tightly
controlled narrative machine that is unlikely to be created in the real world of Hollywood contracts and expansive creative teams.
At the current time, it is clear that this modular fan service filmmaking strategy is firmly in place. Marvel’s
Avengers: Age of Ultron broke box office records and set
up Captain America: Civil War as the next essential film
in the franchise series. Sony, hoping to cash in on the
series’ appeal, has lent its rights to Spider-Man, who duly
appeared in the trailer. The Warner film Batman v Superman: Dawn of Justice references several moments in the histories of both characters that are more readily available to comic book fans than to casual viewers—blink and you will miss the moments that include references to the comics Superman: Red Son and Crisis on Infinite Earths, and Batman’s appearance replicates the feel of The Dark Knight Returns—rather than trading on the sort of general nostalgia for the character that Warner has relied on in the past with winks to catchphrases and the like (Gordon 2003; Ndalianis 2009). All of this replicates the manner in which
Marvel promoted its comic books beginning in the
1960s, as a vast intertext, footnoting and cross-referencing each other, and with which readers could directly
engage through the letters pages. The serious fan engaged
with the totality of this material. Therefore, for the casual
fan of these interlinked Hollywood blockbusters the pleasure of the superhero film resides in its qualities as a film
exclusively; for the hardcore fan, however, it stems not
only from that pleasure, but from the promise of additional pleasure in the future, and from the elevated status
that comes from the way that filmmakers seek to flatter
fans who are “in the know.”
Note
1. A Doctor Strange film has now been announced for a
November 4, 2016 release.
References
Comichron. 2015. Superman sales figures. http://www.comichron.com/titlespotlights/superman.html (accessed June
23, 2015).
Gordon, I. 2003. Superman on the set: The market, nostalgia
and television audiences. In Quality popular television: Cult
TV, the industry and fans, ed. M. Jancovich and J. Lyons,
148–62. London, UK: British Film Institute and Berkeley:
University of California Press.
Gustines, G. 2014. Comics sales rise, in paper and pixels. New
York Times, July 20. http://www.nytimes.com/2014/07/21/
business/media/comics-sales-rise-in-paper-and-pixels.html
(accessed June 23, 2015).
Hatfield, C. 2013. Jack Kirby and the Marvel aesthetic. In The
superhero reader, ed. C. Hatfield, J. Heer, and K. Worcester,
136–54. Jackson, MS: University Press of Mississippi.
Hibbs, B. 2014. Digging through the 2013 BookScan numbers.
ComicBookResources.com. http://www.comicbookresources.com/?page=article&id=50992 (accessed June 23, 2015).
Hoffer, C. 2015. How long does a DC Comics reboot last? Comicbook.com. http://comicbook.com/2015/06/06/how-long-does-a-dc-comics-reboot-last- (accessed June 23, 2015).
McAllister, M. 2001. Ownership concentration in the U.S.
comic book industry. In Comics & ideology, ed. M. P.
McAllister, E. H. Sewell, Jr., and I. Gordon, 15–38. New
York, NY: Peter Lang.
Modleski, T. 1983. The rhythms of reception: Daytime television and women’s work. In Regarding television: Critical
approaches, ed. E. A. Kaplan, 67–75. New York, NY: University Publications of America.
Ndalianis, A. 2009. Enter the aleph: Superhero worlds and
hypertime realities. In The contemporary comic book superhero, ed. A. Ndalianis, 270–90. New York, NY: Routledge.
Woo, B. 2011. The Android’s dungeon: Comic-bookstores, cultural spaces, and the social practices of audiences. Journal of
Graphic Novels and Comics 2:125–36.
Television by the numbers: The challenges of audience measurement in the age of Big Data
JP Kelly
University of London, UK
Convergence: The International Journal of Research into New Media Technologies, 2019, Vol. 25(1), 113–132
© The Author(s) 2017
DOI: 10.1177/1354856517700854
Abstract
This article examines recent innovations in how television audiences are measured, paying
particular attention to the industry’s growing efforts to utilize the large bodies of data generated
through social media platforms – a paradigm of research known as Big Data. Although Big Data is
considered by many in the television industry as a more veracious model of audience research,
this essay uses boyd and Crawford’s (2011) ‘Six Provocations for Big Data’ to problematize and
interrogate this prevailing industrial consensus. In doing so, this article explores both the
affordances and the limitations of this emerging research paradigm – the latter having largely
been ignored by those in the industry – and considers the consequences of these developments
for the production culture of television more broadly. Although the full impact of the television
industry’s adoption of Big Data remains unclear, this article traces some preliminary connections
between the introduction of these new measurement practices and the production culture of
contemporary television. First, I demonstrate how the design of Big Data privileges real-time
analysis, which, in turn, encourages increased investment in ‘live’ and/or ‘event’ television.
Second, I argue that despite its potential to produce real-time insights, the scale of Big Data
actually limits its utility in the context of the creative industries. Third, building on this discussion
of the debatable value and applicability of Big Data, I describe how the introduction of social
media metrics is further contributing to a ‘data divide’ in which access to these new information
data sets is highly uneven, generally favouring institutions over individuals. Taken together, these
three different but overlapping developments provide evidence that the introduction of Big Data
is already having a notable effect on the television industry in a number of interesting and
unexpected ways.
Keywords
Algorithms, Apache, audience measurement, BARB, Big Data, data divide, Netflix, Nielsen, real-time analytics, social media, TV viewing, TV ratings
Introduction
On 24 August 2013, leading global market research firm A.C. Nielsen celebrated its 90th anniversary. To mark the occasion, the company launched an interactive web-based timeline highlighting the key achievements of its 90-year reign. According to a press release that
accompanied the launch (Nielsen, 2013), the website was designed to showcase the numerous
innovations introduced by Nielsen over the past nine decades. Scrolling through the timeline,
however, a different picture emerges; one that reveals a relatively consistent and conservative
approach to audience measurement. This alternative narrative is consonant with scholarly accounts
of the television ratings industry (Balnaves and O’Regan, 2011; Bermejo, 2009; Buzzard, 2012;
Meehan, 1990). Indeed, having established itself as the dominant force in market research in the
1940s, and with relatively limited competition since that time (Buzzard, 2012), Nielsen has had
little incentive to innovate and has therefore been able to maintain its dominant position in the
market by pursuing the same core strategy of sampling audiences. The logic that underpins this
approach dictates that a small sample, if truly representative of the television viewing audience,
can accurately apply to the entire market. As one ratings executive put it: ‘If you have a bowl of
soup in which all the ingredients are thoroughly mixed, you do not have to eat the entire bowl to
know how the soup tastes’ (Seiler in Buzzard, 2011: 47).
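To make the sampling logic concrete, the following is a minimal sketch of how a small random panel projects to a national audience, together with the margin of error that comes with it. This is only an illustration: the population figure, panel size and viewing share are assumptions, and nothing here reproduces Nielsen's actual methodology.

import math
import random

# Hypothetical illustration of panel-based audience estimation.
# Neither the numbers nor the method reflect Nielsen's proprietary system.
random.seed(1)

population = 120_000_000   # total TV households (assumed figure)
true_share = 0.18          # unknown "real" share of households watching

# Simulate a small representative panel, as in the "bowl of soup" analogy.
panel_size = 5_000
panel = [random.random() < true_share for _ in range(panel_size)]

estimated_share = sum(panel) / panel_size
# 95% margin of error for a simple random sample of this size.
margin = 1.96 * math.sqrt(estimated_share * (1 - estimated_share) / panel_size)

print(f"Estimated share: {estimated_share:.1%} (+/- {margin:.1%})")
print(f"Projected households: {estimated_share * population:,.0f}")

On these assumed figures, a panel of a few thousand households yields an estimate within roughly a percentage point of the true share, which is the statistical basis of the 'bowl of soup' claim quoted above.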
Today, however, small group sampling is not nearly as effective. Audiences are increasingly
fragmented, watching at different times and on different screens. To return to the analogy above, it
is hard to know how the soup tastes if the ingredients are in different bowls. Moreover, likening the
audience to a bowl of soup presupposes that the recipe is always the same, whereas in reality, the
demographic make-up of the audience is constantly changing. Thus, as a result of widespread
changes in the profile of the audience as well as the development of new viewing practices, the
long-established system of sampling audiences as they watch content in their living rooms is
clearly no longer as effective as it once was. To be fair, the Nielsen timeline does highlight some
examples of how the company has responded to these changes, such as the inclusion of time-shifted viewing figures and the introduction of a more comprehensive methodology that allows for
the monitoring of content across multiple screens. Yet, the timeline stops at a rather critical
juncture in the company’s history. Just 2 months after the website’s launch, Nielsen announced the
introduction of their latest and arguably most innovative measurement service: a new metric that
utilizes enormous quantities of data mined from Twitter, aptly named the Nielsen Twitter Television Ratings (NTTRs). In contrast to the company’s legacy system of monitoring audiences on
the basis of small group sampling, this new approach to audience research uses vast, if not quite
complete, data sets and thus represents a major shift in ratings methodologies; the implications of
which are the subject of this article.
In order to properly examine the significance of these recent developments, it is necessary to
begin by defining some key concepts. The NTTRs are an example of social media metrics; a
relatively new approach to market research and audience measurement that utilizes large quantities
of data generated by social media activity. Social media metrics, in turn, can be considered part of
the emerging science and industry of Big Data. Although the precise definition of Big Data is the
subject of ongoing scholarly debate (Kitchin, 2014; Kitchin and McArdle, 2016; Manovich, 2012),
in her pioneering study of its role within the media industries, Martha L Stone offers a broad but
useful working definition of Big Data, describing it as ‘an umbrella term for a variety of strategies
and tactics that involve massive data sets, and technologies that make sense out of these mind-boggling reams of data’ (2014: 2). While many definitions of Big Data have focussed on volume as
its defining feature, Rob Kitchin has produced a more nuanced definition in which he notes that
‘Big Data is characterized by being generated continuously, seeking to be exhaustive and fine-grained in scope, and flexible and scalable in its production’ (2014: 2). In many respects, Kitchin’s definition recalls one of the earliest attempts to describe this phenomenon, namely Laney’s (2001)
‘Three Vs’ of Big Data which identified volume, variety and velocity as its key features.1 Of
course, data have always played an important role in the television industry, particularly in relation
to ratings and market research, but the volume, variety and velocity of Big Data clearly marks it as
distinct from earlier forms of ‘small(er) data’. For instance, data generated through Twitter are
enormous in size (volume), are produced in real time (velocity) and comprise a range of
different data types such as text, URLs, geographical location, date published and retweet count
(variety). As such, the developments discussed throughout this article, such as the NTTRs, constitute a marked departure from earlier audience measurement practices and therefore have significant implications for how the television industry operates.
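As a hypothetical sketch of the 'variety' dimension described above, a single tweet-derived record might combine several data types along the following lines. The field names are illustrative only and do not reproduce the actual Twitter API or NTTR schema.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Hypothetical sketch of a tweet-derived record; the fields are assumptions
# chosen to mirror the data types named in the paragraph above.
@dataclass
class TweetRecord:
    text: str                # message content
    urls: List[str]          # links shared in the tweet
    geo: Optional[tuple]     # (latitude, longitude), if disclosed
    published_at: datetime   # date and time of posting (velocity)
    retweet_count: int       # a simple measure of circulation
    hashtags: List[str] = field(default_factory=list)

# A single record already mixes free text, structured numbers, timestamps
# and optional geographic data -- the "variety" of Big Data.
example = TweetRecord(
    text="Watching the finale live!",
    urls=[],
    geo=None,
    published_at=datetime(2013, 10, 7, 21, 4),
    retweet_count=12,
)

Even this toy record combines heterogeneous data types in a way that distinguishes such material from the uniform rows of a traditional ratings panel.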
Over the past decade or so, Big Data has been utilized by, and subsequently shaped, a number of
different fields including finance, retail, healthcare and policing among many others. Yet, as the
introduction of the NTTRs indicates, Big Data is also making significant inroads within the
entertainment industries. This move has been driven by a number of factors, including the proliferation of viewer-related data generated directly through services such as Netflix and the BBC
iPlayer or indirectly through platforms such as Twitter, Facebook or Telfie (formerly GetGlue).
Thus, Big Data in the context of the television industry refers to a range of data types and sources,
though it is the latter of these (‘indirect data’) that will be the primary focus of this article, in part
because of the limited availability of ‘direct data’. The volume, variety and velocity of data
described here, coupled with the increasing fragmentation of the television audience, have
therefore forced market researchers and the ratings industry to develop more innovative and
effective ways to accurately measure viewers and their increasingly diverse viewing habits.
In the context of this rapidly evolving television landscape, Allie Kosterich and Philip M Napoli
suggest that the industry has subsequently ‘become enamored with notions of “Big Data” and how
they can be harnessed to generate strategic insights and enhanced revenues’ (2015: 9). For many,
this new approach to audience research promises more stability, perhaps even predictability, for an
industry typically characterized by risk and uncertainty. In his keynote address at the 2014
Edinburgh Television Festival, Channel 4’s chief executive David Abraham took the opportunity
to underline his long-standing belief in Big Data: ‘Over the past few years I’ve been encouraging
the TV industry to embrace the power of data’, explained Abraham, before warning attendees that,
‘a TV channel without a data strategy is like a submarine without sonar’ (2014). Abraham’s faith in
Big Data is consistent with many of his industry peers. Shortly following her appointment as Chair
of the BBC trust in 2015, Rona Fairhead observed that ‘when it comes to using data to understand
its audiences the BBC is a long way behind the competition’ (Reevell, 2015).
Whether these attitudes are driven by a fear of being left behind – as indicated by Fairhead’s
comments – or whether there is a genuine belief that Big Data can have a positive creative and
cultural impact – as was the nature of Abraham’s keynote address – the emerging consensus
among network executives is that data have become an integral part of televisual culture; an
essential tool for survival in the increasingly fragmented, crowded and competitive marketplace
of digital TV.
Although Abraham and Fairhead’s comments specifically relate to the data strategies of public
service broadcasters in the United Kingdom, the growing international trade of television coupled
with the emergence of transnational services such as Netflix and Amazon Prime demonstrates that
Big Data is clearly a worldwide phenomenon and is therefore very much on the minds of network
executives across the globe. Thus, while many of the examples discussed below stem from the
United States, they are nevertheless indicative of how Big Data is being adapted and adopted
around the world.
In contrast to the widespread optimism shared by those within the television industry, there is a
noticeably more cautious and critical tone in much of the scholarly work on Big Data within the
humanities. While its proponents are quick to point out the economic and scientific2 efficacy of this
new research paradigm, more sceptical observers have stressed the need to acknowledge and
interrogate its limitations (see, for instance, Manovich, 2012). In their polemical ‘Six provocations
for Big Data’, boyd and Crawford (2011) describe how Big Data has been hailed by many –
including those in market research – as a more rigorous, veracious and objective system of producing knowledge. However, their account highlights a number of problems with Big Data, calling
the integrity of this emerging research paradigm into question. Big Data, they explain:
tempts some researchers to believe that they can see everything at a 30,000-foot view. It is the kind of
data that encourages the practice of apophenia: seeing patterns where none actually exist, simply
because massive quantities of data can offer connections that radiate in all directions. (boyd and
Crawford, 2011: 2)
In light of these epistemological concerns, boyd and Crawford maintain that ‘it is crucial to
begin asking questions about the analytic assumptions, methodological frameworks, and underlying biases embedded in the Big Data phenomenon’ (2011: 2); a sentiment shared by a number of
other critics including Kitchin (2014: 10). In contrast, then, to the television industry’s seemingly
uncritical acceptance of this new research paradigm, this article assumes a more cautious and
critical tone in examining the relationship between television and Big Data. Rather than focussing
exclusively on the potential value or benefits of Big Data (creative, cultural, economic or otherwise), this essay is concerned with also investigating the challenges and limitations that the
industry faces as it continues to integrate Big Data within its existing business model – challenges
and limitations that have largely been neglected in both industry discourse and, to a lesser extent,
academic work.
With the creative industries evidently poised on the cusp of a Big Data revolution, the need to
examine this emerging phenomenon has never been more pressing. In the context of these developments, this article adds to and advances a somewhat limited but growing body of research on social
media metrics and Big Data by examining how these innovative methodologies are being developed,
utilized and integrated specifically within the television industry. In doing so, it recognizes and
responds to Kosterich and Napoli’s (2016) call for research to more thoroughly examine the formal
processes, technologies and institutions that underpin this emerging industrial strategy.
In taking up this challenge, the article is divided into two parts. The first of these provides a
broad but necessary overview of the history of ratings in order to contextualize these more recent
developments. Following this, I map out the relevant critical terrain through a brief outline of key
research on TV ratings, social media metrics and Big Data. Although the article draws on a
relatively limited pool of TV studies oriented scholarship on Big Data, it combines this material
with a much larger and somewhat untapped body of research that sits at the intersection of
computer sciences and cultural studies. Having established the critical role of ratings and how this
industry has been conceptualized within academia, the second section of this article utilizes the
most pertinent of boyd and Crawford’s (2011) ‘six provocations’ in order to examine how the
television industry is responding to the challenges of Big Data. While these provocations provide
the main critical framework for the article, I also draw on trade press, marketing materials and
interviews with producers and television executives. In including these ‘industrial voices’, this
essay provides a more holistic, empirical and critical account of these recent developments that
traces preliminary connections between the introduction of new audience measurement practices
and the production culture of contemporary television.
Big Data and television
TV ratings have long been an integral component of television production yet are often overlooked
in scholarly analyses. This is not always a deliberate oversight. The mechanics of ratings are highly
complex and can be difficult if not impenetrable to those working in the humanities who are more
au fait with textual analysis than statistical analytics. Their complexity and (in)accessibility are
further exacerbated by the industry’s growing reliance on Big Data; a model of analysis that
requires a combination of sophisticated software and expensive hardware, not to mention a highly
specialized skill set. What is more, ratings, as well as the means through which they are attained,
are often closely guarded, holding monetary value only when their availability is restricted to certain
parties. This is especially true of more recent data-driven companies such as Netflix whose entire
business model relies upon the limited availability of such data.
As such, television ratings constitute an example of a ‘transparent intermediary’;3 a term that
Braun (2014) has used to describe the hidden or invisible (infra)structures that underpin and shape
the production, distribution and consumption of cultural goods. Whereas Braun’s account focusses
on the importance of institutions such as Nielsen and BARB, Lahey’s (2016) more recent analysis
of the critical role that application programming interfaces (APIs) play in the construction of
connected viewing experiences represents a more specific and technical example of an equally
invisible but influential force in the ecosystem of television, one that Lahey describes in similar
terms to Braun as ‘invisible actors’.4 Regardless of their different subject matter, both accounts
ultimately draw attention to a rich yet often overlooked array of institutions, technologies and
protocols that, despite their lack of visibility to consumers, play an integral role in every facet of
television culture. Indeed, it is fair to say that most viewers are largely unaware of how television
ratings work or their own role within such a system. Yet, the content we consume is ultimately
defined by this process. According to Braun, the inconspicuous nature of these systems is the very
reason that they should be subject to more rigorous scrutiny.
Braun suggests that we should pay closer attention to transparent intermediaries, not only
because they are too often overlooked in critical accounts but because they ultimately ‘facilitate the
exercise of structural power’ (2014: 124). However, the need to scrutinize ratings is also motivated
by the fact that the industry is currently undergoing a process of significant transformation in
which the gathering and analysis of information about audiences is increasingly performed by
computer algorithms. Of course, the increasing mechanization of market research isn’t necessarily
a recent phenomenon (see Striphas, 2015). Nevertheless, this trend is especially prevalent today
within the realm of television ratings and audience research, where companies such as Nielsen, in
the United States, and BARB, in the United Kingdom, as well as smaller start-ups including BluFin
Labs, SecondSync (both of which have since been acquired by Twitter), Canvs and Datasift, are
beginning to utilize larger data sets in an attempt to provide more comprehensive and complex
portraits of the audience.
This transition has taken place in the broader context of what some critics are calling the
‘computational’ or ‘algorithmic turn’ in media production in which the production and distribution
of media is increasingly determined by insights gathered via the harvesting and analysis of large
amounts of pertinent data (Napoli, 2014; Striphas, 2015; Uricchio, 2011, 2015). Drawing on
Bourdieu’s (1984) notion of cultural intermediaries, Morris (2015) has argued that the organization
and presentation of cultural goods is increasingly driven by algorithms, citing Amazon’s recommendation engine as a key example. This ‘curation by code’, Morris maintains, has profound
implications for the production and consumption of culture more widely. Whereas in Bourdieu’s
account the role of the intermediary is performed by an individual (or institution) acting as a
cultural gatekeeper, Morris suggests that the role of human agency within the (re)presentation of
digital culture is gradually diminishing, with creative and curatorial decisions increasingly delegated to computational processes and complex algorithms. It is important to stress that Morris is
not suggesting that the individual has become obsolete as a consequence of the development of
curatorial algorithms. Rather, his account makes the simple but valuable observation that we are
living in an era in which cultural production is subject to a combination of human (and institutional) intermediaries as well as computerized infomediaries.5 It is also important to note that
although algorithms are a distinct entity in and of themselves, they are an integral component of
Big Data and an increasingly significant part of the mechanics of television ratings today. Algorithms are used to gather and analyse data but, as Morris’ account makes clear, they are also
utilized when it comes to producing, organizing and recommending cultural goods.
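To make ‘curation by code’ concrete, the following deliberately minimal sketch implements an item-to-item co-occurrence recommender of the broad family popularized by Amazon. The viewing histories are invented and real systems are far more sophisticated, but it illustrates how a curatorial decision can be delegated to a computational process with no human gatekeeper in the loop.

```python
# A deliberately minimal sketch of 'curation by code': an item-to-item co-occurrence
# recommender of the broad family popularized by Amazon. Illustration only - the
# viewing histories are invented and real systems are far more complex.
from collections import defaultdict
from itertools import combinations

viewing_histories = [
    {"House of Cards", "The Crown", "Mindhunter"},
    {"House of Cards", "Mindhunter"},
    {"The Crown", "Downton Abbey"},
]

# Count how often each pair of titles is watched by the same account.
co_occurrence = defaultdict(lambda: defaultdict(int))
for history in viewing_histories:
    for a, b in combinations(sorted(history), 2):
        co_occurrence[a][b] += 1
        co_occurrence[b][a] += 1

def recommend(title, top_n=2):
    """Recommend the titles most often co-viewed with `title` - no human curator involved."""
    ranked = sorted(co_occurrence[title].items(), key=lambda kv: kv[1], reverse=True)
    return [t for t, _ in ranked[:top_n]]

print(recommend("House of Cards"))   # -> ['Mindhunter', 'The Crown']
```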
In bringing these two distinct but related lines of enquiry together, it would therefore be productive to think of ratings firms such as Nielsen and BARB as transparent infomediaries, as they are
highly transparent (Braun, 2014; Lahey, 2016) and increasingly automated (Morris, 2015;
Striphas, 2015). Conceiving of the ratings industry as transparent infomediaries has critical value
as it ultimately encourages us to re-evaluate the function and influence of these institutions while
drawing our attention to a slow but significant shift in how companies such as Nielsen and BARB
now operate – a shift that has profound implications for the production and consumption of
television that will be explored below.
Manufacturing and measuring audiences: The ratings effect
Given the contemporaneity of the developments discussed above, critics such as Braun (2014),
Lahey (2016), and Morris (2015) remain somewhat uncertain as to what the algorithmic turn and
the emergence of transparent infomediaries might mean for television and the production of
cultural goods more broadly. However, there seems to be very little doubt that transformations in
the way that television viewing is measured will have a profound effect upon the industry. Indeed,
regardless of their in/visibility, TV ratings play a crucial role within the delicate ecosystem of
television and the slightest change in how audiences are measured can have profound consequences. As Karen Buzzard explains:
the business implications of a shift or a change in the currency could be staggering. One truism in
media market research is that different methodologies produce different ratings and different portraits
of audiences. And these differences could mean millions of dollars won or lost if a new methodology
were adopted. (2012: 148)
Buzzard’s use of ‘could’ implies a hypothetical argument. However, there are a number of
precedents that demonstrate the significant role that ratings play within television culture and the
impact that even relatively minor changes can have on production practices. In the early 1970s, for
instance, Nielsen adopted a new approach to measurement in which they shifted their focus from
the quantity of the audience towards the quality of the audience, a process that Feuer has described
as involving ‘a de-emphasis on numbers and a greater emphasis on “demographics”’ (1984: 3).6
The consequences of this shift were profound. As Clarke explains:
changes in these ratings in the early 1970s that supported more granular, demographic-sensitive data,
famously encouraged CBS’s movement from broad-appeal, so-called hayseed comedies to the slick-modern MTM and socially conscious Norman Lear product, in an effort to capture more upscale, urban
ratings
adding that ‘a change in the metrics used to calculate consumption greatly changed the
picture of the television market – from one of a tremendously large, undifferentiated audience to
one serving elite niches – and, thereby, creative decision making’ (2013: 123). Clarke’s deliberate
use of ‘encouraged’ is important here. In reality, there were a number of other factors at play,
including the introduction of the Financial Interest and Syndication Rules in 1970 as well as significant sociopolitical shifts in the demographic make-up of US audiences around this time. Nevertheless, this
transformation in audience measurement practices significantly contributed to what Gitlin has
called the ‘turn to relevance’ (1994) within US prime time television – namely the production of
a more relevant style of programming that addressed a range of different niches and targeted
more lucrative demographics.
To date, this is one of the clearest examples of how a shift in the currency of TV ratings has had
a tangible impact upon the content and culture of television production. Naturally, this raises a
number of questions about television today. If the ‘turn to relevance’ was even partly inspired by a
shift in Nielsen’s ratings strategy, then the recent adoption of Big Data and social media metrics
should be of even greater interest to critics and industry figures alike. Indeed, as noted above, the
industry’s rather precipitous adoption of Big Data involves a much more dramatic reconfiguration
of the way that audiences are measured and manufactured, particularly when compared to
developments in the 1970s.
To suggest that this change will in fact produce a different portrait of the audience is probably
an understatement. However, while Big Data has become a buzzword of late within the media
industries, it is important not to exaggerate its influence. Using Greenwood et al.’s economic
model of ‘institutional change’ (2002), Kosterich and Napoli (2016) have produced a detailed
account of the processes through which social media metrics have become formally adopted within
the industry, concluding that they have not displaced the previous currency, or ‘market information
regime’. Rather, ‘Nielsen’s efforts to diversify into social TV analytics’ they note, ‘have clearly
been accompanied by a discursive effort to explicitly position social TV analytics as supplementary to traditional Nielsen ratings’ (2016: 12). The same can also be said of the ratings industry
in the United Kingdom where, at the time of writing, BARB are currently piloting a new system
dubbed Project Dovetail. As its name implies, Dovetail is an attempt to combine traditional
methods of small group sampling with newer data sets such as those acquired through social media
– an approach very similar to the NTTRs and one that even involves the assistance of Nielsen.
However, like Nielsen, BARB have been careful to stress the importance of more traditional
measurement practices. As the accompanying narration to a promotional video for Project Dovetail
explains:
Some argue that these new mountains of Big Data outperform more traditional forms of data. BARB
believes that they are complimentary to the strengths of our panel. In fact, we think it is the combination of the two that gives us the best possible way forward for measuring viewing. (BARB, 2015)
While organizations such as Nielsen and BARB are clearly working to envelop social media
metrics into their existing business models in order to nullify their disruptive potential, Big Data
continues to gather momentum as a new paradigm of market research and therefore warrants closer
critical attention.7
Although the above constitutes a somewhat broad overview of the history of TV ratings, it
nevertheless demonstrates the integral role that this industry plays within the precarious ecosystem
of television production. At the same time, this brief survey draws attention to more recent and
significant changes in the industry – namely the ‘computational’ or ‘algorithmic’ turn – and suggests
that a more productive way to understand and approach organizations such as Nielsen and BARB is
to conceive of them as transparent infomediaries – largely invisible forces yet increasingly automated and ever-more influential.
Six provocations
Having outlined some of the key works on TV ratings and the ‘algorithmic turn’ in the cultural
industries, the second part of this article uses several of boyd and Crawford’s (2011) ‘six provocations’ to explore these ideas in more concrete terms. Comprising a series of critiques of this new approach to research, these provocations draw attention to a number of different challenges associated with Big Data and thus function as a critical framework that encourages us to question
the prevailing industrial consensus that Big Data represents a superior model of knowledge production. Through this critical lens, I trace some preliminary connections between these burgeoning
methods of data-driven audience research and emerging trends in the production, promotion and
commissioning of television. Whereas much of the literature cited above has sought to map the
conceptual terrain around ratings, audience research and/or Big Data, I want to advance these
debates by considering some of the more tangible implications of these developments.
In their seminal account, boyd and Crawford (2011) use the following provocations to critique
Big Data:
1. Automating research changes the definition of knowledge;
2. Bigger data are not always better data;
3. Limited access to Big Data creates new digital divides;
4. Not all data are equivalent;
5. Claims to objectivity and accuracy are misleading; and
6. Just because it is accessible doesn’t make it ethical.8
Although all of these provocations highlight significant challenges for the industry, some are
more pertinent than others when it comes to television ratings and audience research. For that
reason, the final section of this article focusses on the first three of these provocations which, as
will become clear below, arguably present the biggest challenge for those working in the television
industry today.
Automating research changes the definition of knowledge
The first of boyd and Crawford’s six provocations echoes the concerns of critics such as Striphas
(2015) and Morris (2015) who have expressed a similar degree of wariness regarding the
increasing automation of culture. Comparing the emergence of Big Data to the development of
Fordist regimes of production in the first half of the 20th century, boyd and Crawford argue that
‘just as Ford changed the way we made cars – and then transformed work itself – Big Data has
emerged [as] a system of knowledge that is already changing the objects of knowledge, while also
having the power to inform how we understand human networks and community’ (2011: 3). What
is at stake, according to boyd and Crawford, is more than simply a change in the processes through
which knowledge is acquired, but rather a fundamental redefinition of knowledge itself. In other
words, the adoption of Big Data will not only deliver new insights but will also foster a research
culture more attuned to the specific properties and affordances of such a methodological approach.
Indeed, boyd and Crawford argue that Big Data privileges real-time analyses, which in turn will
determine the kinds of questions asked as well as the outcomes of said research (2011: 4).
There is evidence of this privileging of real-time analysis in the very design of some of the most
popular hardware and software configurations currently utilized in the market research industry.
For instance, the intense volume and velocity of Big Data – two of Laney’s (2001) infamous ‘three
Vs’ – has encouraged the development of analytics software such as Apache Storm and Apache
Kafka, both of which are designed to analyse information in real time and both of which are used
extensively in social media metrics. Depending on the particular software and server configuration,
they can analyse information on-the-fly, without data ever being committed to disk. These tools
therefore prioritize the analysis of data in the present over data from the past, making it difficult if
not impossible to perform retrospective analyses due to the technical limitations and prohibitive
costs associated with data retention. As the volume and velocity of data continues to grow at a rate
that exceeds the development of affordable storage space, it is safe to say that real-time analytics
will become an even more prevalent form of research in the future.
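To make this privileging of the present concrete, the following sketch illustrates the kind of windowed, in-memory counting that stream-processing tools of this type are built around. It is an illustration only: it uses none of the software named above, and the class, figures and timings are hypothetical, but it shows how a rolling count discards older events by design, which is precisely why retrospective analysis becomes an afterthought.

```python
# A minimal, illustrative sketch (not Storm or Kafka themselves) of the kind of
# real-time, windowed counting that stream-processing tools privilege: events are
# tallied as they arrive and old ones expire, so nothing needs to be kept on disk.
from collections import deque
import time

class SlidingWindowCounter:
    """Counts events (e.g. tweets mentioning a programme) over the last `window_seconds`."""

    def __init__(self, window_seconds=60):
        self.window_seconds = window_seconds
        self.events = deque()  # timestamps of events still inside the window

    def add(self, timestamp=None):
        self.events.append(timestamp if timestamp is not None else time.time())

    def count(self, now=None):
        now = now if now is not None else time.time()
        # Expire events that have fallen out of the window; past data simply disappears.
        while self.events and now - self.events[0] > self.window_seconds:
            self.events.popleft()
        return len(self.events)

# Hypothetical usage: feed tweet timestamps in as they arrive and read off a rolling
# per-minute volume, the sort of figure a 'trending topics' style metric depends on.
counter = SlidingWindowCounter(window_seconds=60)
for ts in [0, 10, 20, 70, 75]:          # simulated arrival times in seconds
    counter.add(ts)
print(counter.count(now=80))            # -> 3 (only events from the last 60 seconds remain)
```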
Although boyd and Crawford argue that Big Data privileges real-time analysis, and therefore
the production of a particular form of knowledge centred on real-time analytics, the situation is
more complicated than this. Services such as Twitter (which is used by Nielsen for its NTTRs)
utilize software packages such as Apache Storm to perform real-time analytics, which are necessary for providing features such as ‘trending topics’. At the same time, however, Tweets are
retained and remain available for retrospective analyses – an approach known as batch processing
(which is performed on clusters stored using cloud services such as Amazon’s S3). In short, Twitter
(and Netflix for that matter) have adopted a more complex approach to Big Data, in that they both
combine real-time analysis (which underpins certain key features including Netflix’s recommendation engine) with retrospective batch analysis (which allows analysts to go back and review historical data): a form of data-processing known as Lambda architecture. Even so, the preservation of
content via batch processing is often limited to a relatively short period of time so that space can be
made for the relentless stream of new incoming data. Despite the addition of retrospective batch
analysis to the arsenal of companies such as Nielsen and Netflix, there is still a methodological
emphasis on real-time or very recent data analytics, which in turn privileges and produces certain
forms of knowledge and user experiences. In relation to the examples discussed above, it may be too
premature to say that the definition of knowledge itself is changing, as boyd and Crawford (2011)
argue. However, these examples certainly indicate that certain methods and forms of knowledge are
privileged by the design, affordances and limitations of these emerging technologies.
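The Lambda pattern described above can be sketched schematically as follows. This is not Twitter’s or Netflix’s actual code; the event data and function names are invented, but the sketch shows how a ‘speed layer’ of recent, in-memory events is merged with a ‘batch layer’ periodically recomputed from an archive.

```python
# A schematic sketch of the Lambda pattern: a 'speed layer' holds only recent events
# in memory, a 'batch layer' periodically recomputes totals from an archive (here a
# plain list standing in for something like S3), and queries merge the two views.
# Illustrative only - not Twitter's or Netflix's implementation.

archive = []            # batch layer storage: the full (retained) history of events
batch_view = {}         # precomputed totals, rebuilt only occasionally
speed_view = {}         # incremental totals for events arriving since the last batch run

def ingest(event):
    """Every incoming event lands in the archive and updates the speed layer immediately."""
    archive.append(event)
    speed_view[event["show"]] = speed_view.get(event["show"], 0) + 1

def run_batch():
    """Periodically rebuild the batch view from the archive and reset the speed layer."""
    batch_view.clear()
    for event in archive:
        batch_view[event["show"]] = batch_view.get(event["show"], 0) + 1
    speed_view.clear()

def query(show):
    """A query merges the slower batch view with the fresher speed view."""
    return batch_view.get(show, 0) + speed_view.get(show, 0)

# Hypothetical usage with made-up events:
ingest({"show": "VMAs"}); ingest({"show": "VMAs"}); ingest({"show": "RAW"})
run_batch()                         # batch view now holds {"VMAs": 2, "RAW": 1}
ingest({"show": "VMAs"})            # arrives after the batch run, counted by the speed layer
print(query("VMAs"))                # -> 3
```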
But what does the increasing automation of audience research mean for television? For one
thing, if Big Data, or more precisely social media metrics, are more conducive to real-time analysis,
this could lead to a greater investment in content that can more effectively deliver this kind of
measurable data, such as live programming and event television (see Sørensen, 2016). Indeed,
NTTRs indicate that live programmes perform much better than their scripted counterparts when it
comes to generating social data. This, in turn, potentially changes the criteria of what might be
considered a success. The 2015 MTV Video Music Awards (VMAs), for example, dominated the
NTTRs for all television broadcasts during the week commencing 24 August 2015. According to the
figures, the VMAs generated tweets from just over 2,200,000 unique authors producing a total of
21,300,000 tweets, resulting in almost 680,000,000 impressions (in other words, the number of times
a VMA related Tweet was seen across the wider social media sphere – whether or not these were
actually read or had any positive economic effect is another matter altogether).9 To put the social
media success of the VMAs into some perspective, we need only consider the second most popular
series or special that week,10 WWE Monday Night RAW, which generated just 66,000 Tweets –
roughly 97% fewer Tweets than the VMAs. Even the most popular sports broadcast for that week
(MLB Baseball: Chicago Cubs at Los Angeles Dodgers) generated a comparably meagre 102,000
Tweets from unique authors – again, a small fraction of the social media activity generated by the
VMAs.
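Because the relationship between these three figures is not self-evident, the following sketch, built on invented sample data and a deliberately crude follower-count proxy for impressions (which is not Nielsen’s actual methodology), shows how tweets, unique authors and impressions measure the same activity in different ways.

```python
# A hypothetical sketch of how the three NTTR-style figures quoted above relate to one
# another: tweets are raw posts, unique authors de-duplicate the accounts behind them,
# and impressions weight each tweet by the audience (followers) it could reach.
# The sample data below are invented and deliberately tiny.

tweets = [
    {"author": "fan_01", "followers": 150, "text": "#VMAs opening!"},
    {"author": "fan_01", "followers": 150, "text": "That performance #VMAs"},
    {"author": "fan_02", "followers": 42_000, "text": "Watching the #VMAs"},
]

total_tweets = len(tweets)
unique_authors = len({t["author"] for t in tweets})
impressions = sum(t["followers"] for t in tweets)   # potential views, not confirmed reads

print(total_tweets, unique_authors, impressions)    # -> 3 2 42300
```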
Despite the VMAs’ dominant performance in social media ratings, these figures appear to
contradict Nielsen’s more established method of measuring TV viewership which revealed that the
broadcast itself was only the fourth most-watched cable series that week with just over 5,000,000
viewers – more than 3,000,000 short of the top cable broadcast that week, the pilot episode of Fear
the Walking Dead which proved to be stiff competition in more ways than one.11 In fact, if we take
into account the viewing figures that week for major networks, syndicated networks and cable, the
VMAs didn’t even feature in the top 20. This discrepancy between live viewing figures and social
media ratings raises a number of important questions related to the currency of these different
approaches. As Kosterich and Napoli’s (2016) study suggests, networks and advertisers still place
most of their faith in traditional ratings. Nevertheless, as Big Data continues to make further
inroads within the market research culture of the television industry, it is likely that networks,
producers and advertisers will feel more inclined to invest in or design programming that can
generate high levels of social media activity.
Although the impact of Big Data is, somewhat ironically, difficult to measure, research shows
that networks and advertisers are spending more on sports, event television and other genres of live
programming. ‘Call it the eventization of TV’, one journalist recently explained,
at a time when nearly half of all US homes have DVRs, networks are shelling out an estimated $7 billion
for rights to air NFL games, awards shows are popping up all over the dial, and there doesn’t seem to be a
major cable network that isn’t exploring a foray into topical late-night. (Guthrie and Rose, 2013)
The same article continues: ‘advertisers, too, are clamoring for such opportunities in a fractured,
ad-skipping environment, shelling out $444 million on awards shows and live non-sports events in
2012, up 22 percent compared with five years ago’ (ibid.). According to this account, the growing
investment in live programming is largely attributed to the threat of time-shifting and other
ad-skipping technologies. However, it could be argued that the popularity of social media metrics,
and their privileging of live/real-time analysis, is also encouraging the increasing ‘eventization of
television’.
If Big Data is privileging real-time analytics, which in turn encourages greater investment in
live programming, what does this mean for scripted television? As Mike Proulx and Stacey
Shepatin observe, ‘scripted dramas tend to produce lower volume backchannels during show
airings’ (2012: 118). In other words, people are less likely to Tweet during the types of programmes that demand greater viewer engagement. It is worth exploring this point further as these
are precisely the kinds of viewers that ad-supported networks are keen to attract: people highly
engaged in their programming and, by extension, their advertising. However, because of their
increased level of engagement in content, these kinds of viewers are more likely to be poorly
represented in social ratings. As a consequence, networks might then assume that scripted drama is
not producing sufficient ‘buzz’; a failure to provide data that not only translates to free
promotion but also forms the basis of the insights gathered through social media metrics. Of
course, it would be wrong to suggest that Big Data’s methodological emphasis on real-time
analytics will signal the end of scripted programming. On the contrary, many television networks and producers have explicitly sought to increase social media engagement in fictional
programming in ways that strategically reinforce viewer engagement while also soliciting their
feedback. AMC, for example, have developed a number of second screen apps for their major
scripted series, grouped together under the ‘Story Sync’ initiative. Although second screen
applications appear to be more conducive to live television genres (Lee and Andrejevic, 2014), the
continued investment in companion apps for scripted dramas provides further evidence of a
growing appetite for viewer data regardless of the genre.
In light of the above examples we can conclude that, contrary to the claims of boyd and
Crawford, we are not so much witnessing the wholesale emergence of a new epistemological
regime or a redefinition of knowledge, but rather we are seeing the adoption, exploitation and
co-option of new tools to reinforce existing practices and industrial lore, as demonstrated in
Kosterich and Napoli’s (2016) account of the institutionalization of social media metrics. For
example, the announcement of Nielsen’s NTTR service in 2013 caused much consternation across
the industry (Watercutter, 2013). Advertisers, network executives and journalists, among others, all
speculated as to what form these would take and how they might unsettle or disrupt the industry. In
the end, the numbers – at least those that are made available to the general public – appear to
simply reinforce Nielsen’s longstanding practice of ‘counting eyeballs’ (Gitlin, 1994: 49). Nevertheless, the introduction of the NTTRs and Project Dovetail suggests that the industry is
beginning to change tack. This is significant because, as several critics have already pointed out
(Braun, 2014; Buzzard, 2012; Kosterich and Napoli, 2015; Lee and Andrejevic, 2014), the
implications of a shift in how audiences are measured would be profound. In explaining why there
is a considerable amount at stake in how the ratings industry operates, Braun argues that:
the politics involved are the politics of representation and the risk of misrepresentation or, worse,
omission and invisibility. The worry here is that groups whose viewing activities are not accurately
recorded will not be sought after as audiences. Their interests and views may therefore be less readily
represented in the content of media, and thereby omitted from the public agenda. (2014: 136–137)
Although social media ratings have the potential for a fairer and more representative system of
audience measurement, there is also the possibility that we are simply seeing the development of a
culture in which viewers are being ‘interpellated into ever more convenient, instrumental, and
commercially viable social identities’ (ibid.). In the context of this analysis, it could be argued that
the allure of data generated through social media renders certain demographics, particularly those
who are less prolific users of these platforms, entirely invisible. If anything, then, the use
of social media metrics (i.e. ‘indirect data’) is very limited as it magnifies the potential for
misrepresentation by privileging certain types of audiences (younger, more technologically savvy)
and viewing behaviours (such as real-time commentary and interactions) at the expense of
others.12
Bigger data are not always better data
boyd and Crawford are not alone in their assertion that a profound epistemological shift is taking
place in the wake of Big Data. Andrejevic (2014), for example, has expressed a similar set of
concerns in response to the automation of research, arguing that the excess (or volume, to use
Laney’s (2001) terminology) of information associated with Big Data, and its subsequent need for
computational processing, has produced a culture more concerned with the results of research
rather than what those findings might actually mean; a quality that he describes as ‘knowing
without understanding’ (2014: 21). In many ways, Andrejevic’s concerns resonate with boyd and
Crawford’s claim that there is:
an arrogant undercurrent in many Big Data debates where all other forms of analysis can be sidelined
by production lines of numbers, privileged as having a direct line to raw knowledge. Why people do
things, write things, or make things is erased by the sheer volume of numerical repetition and large
patterns. (2011: 4)
When considered in the context of audience research, these arguments suggest that the industry’s growing reliance on Big Data may lead to a culture of production in which ‘correlation
supersedes causation’ (Kitchin, 2014: 4). However, this kind of logic is true of how the ratings
industry has always operated. Historically, organizations such as Nielsen and BARB have been
almost exclusively concerned with questions of quantity (how many people are watching, who is
watching) as opposed to questions of quality (how and why they are watching). Although Big Data
and social media metrics have the potential to reveal new insights pertaining to the latter, this may
not work to the advantage of the ratings industry as the breadth and depth of such data will
ultimately produce too many competing demands. As one television ratings executive explained
when asked about the prospect of a move towards such large scale research: ‘a census is not
[necessarily] a good thing. When they take your blood, they don’t take all of it’ (Bachman in
Buzzard, 2012: 148). This perspective speaks to Andrejevic’s observation that a ‘paradox of an era
of information glut emerges against the background of the new information landscape’ in which,
‘to inform ourselves as never before, we are simultaneously and compellingly confronted with the
impossibility of ever being fully informed’ (2013: 2). In other words, the more information there is,
the harder it can be to make sense of.
Despite the qualitative opportunities inherent in social media analytics, the ratings industry is
still primarily concerned with how many people are watching rather than why they are watching –
the latter of which is an approach that Lev Manovich has described as ‘cultural analytics’ (2009) –
and in this respect Big Data has an obvious advantage. Nevertheless, boyd and Crawford maintain
that the biggest strength of Big Data – its size – is also one of its main weaknesses. Indeed, the
volume and variety of Big Data produces a larger number of variables that can amplify the margin
of error. In detailing this particular frailty, boyd and Crawford note that ‘large data sets from
Internet sources are often unreliable, prone to outages and losses, and these errors and gaps are
magnified when multiple data sets are used together’ (2011: 5).
Although the problem of data excess has been the subject of a number of recent scholarly
studies, the notion that bigger data are not always better data is hardly new, especially in the
context of television ratings. For instance, as John Ellis observed in 2000:
Across the day, the evening, the week and the month, the level of detail provided by BARB is
extraordinary and even perhaps counterproductive. For the schedulers have to undertake a considerable
degree of interpretation in order to deal with the figures. From the plethora of detail, they first construct
a narrative of the audience for themselves. (2000: 136–137)
Clearly, information excess is a problem that has plagued the television industry since at least
the early 2000s. Even so, the volume of data generated at the beginning of the millennium pales
in comparison to the rate of information produced today. Since Ellis published his account, the
data generated, tracked and available has expanded exponentially – perhaps best exemplified by
Cisco’s study that found internet traffic had grown from 100 GB per hour in 1997 to 16,000 GB per second in
2014: an increase of more than 57,000,000% (Cisco, 2015).13 Naturally, this proliferation of data
changes the prospects and possibilities of television audience research. On the one hand, this
sudden explosion of information generates a near limitless pool of data from which numerous
correlations can be drawn and converted into potential economic gains. On the other hand, however, this data deluge ultimately produces a dissonance in consumer demand; a digital cacophony
of desires and preferences that can never be fully satiated. In other words, platforms such as Twitter
and Facebook are generating data that might be considered too granular, with market researchers,
network executives and television producers more inclined to construct their own preferred,
apophenic narratives in an attempt to make sense of this overwhelming information.
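As a brief aside, the scale of the Cisco figure quoted above follows from a simple unit conversion, which the short calculation below verifies; the variable names are mine.

```python
# A quick check of the Cisco figure quoted above: converting 100 GB per hour (1997)
# to a per-second rate and comparing it with 16,000 GB per second (2014).
gb_per_s_1997 = 100 / 3600          # ~0.0278 GB per second
gb_per_s_2014 = 16_000
increase_pct = (gb_per_s_2014 / gb_per_s_1997 - 1) * 100
print(round(increase_pct))          # -> 57599900, i.e. well over 57,000,000%
```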
If information excess can be counterproductive when it comes to audience research, then the
new breed of data-driven services such as Netflix arguably face a greater challenge than ratings
firms and traditional broadcast networks. As Tricia Jenkins notes, ‘using programs such as
Hadoop, Pig, Python, Cassandra, Hive, Presto, Teradata and Redshift, Netflix is able to process
10+ petabytes of data along with 400+ billion new events on a daily basis in order to learn about
its users’ viewing habits’ (forthcoming). These ‘events’ refer to user generated data including the
time, location and device used to access the service as well as a plethora of other interactions such as
pausing, rewinding, rewatching, search history and so forth. Based on the figures cited by Jenkins,
Netflix subscribers generate 2.8 trillion events per week – a figure that will have grown exponentially
following the subscription video on demand (SVOD) provider’s rollout to more than 130 new territories in early 2016 (Netflix, 2016). Although Netflix produces an unprecedented level of data, the
lack of access to this information makes it difficult for onlookers to ascertain the role that data play in
the creative process. However, CEO Reed Hastings offered some insight in a recent interview,
insisting that Netflix ‘start[s] with the data [ . . . ] but the final call is always gut. It’s informed
intuition’ (O’Brien, 2016). Such an approach to creativity is evocative of the paint-by-numbers motif
employed by pop artists such as Andy Warhol in the early 1960s which came to symbolize – and was
ultimately used to critique – the increasing mechanization of popular culture at the time. Although
Hastings insists that Netflix’s approach is more TV-with-the-assistance-of-numbers than TV-according-to-the-numbers, Big Data is nevertheless occupying a more prominent role within contemporary
television culture. Yet, given the various challenges associated with data excess outlined above, early
adopters should clearly be wary of the supposed truism that bigger is better.
Limited access to Big Data creates new digital divides
In addition to the problems of data excess, another common misconception surrounding Big Data is
that it is an inherently more democratic paradigm of research (see Baack, 2015). Such a fallacy is
based on the widespread assumption that individuals, businesses and/or governments all have
equal access to data. However, as has been pointed out numerous times before (Andrejevic, 2013;
boyd and Crawford, 2011; Zelenkauskaite and Bucy, 2016), this utopian version of data equality
does not reflect reality. Social media data are rarely free or accessible to the general public. Indeed,
popular services such as Twitter and Facebook use APIs to restrict access to their data ‘firehose’,
with permission usually only granted to those who are able to pay. Individuals and smaller
organizations therefore face significant economic barriers as the cost of data access is often too
prohibitive. Citing a recent independent academic study that used social media data, Dixon et al.
noted that in order to obtain data from Twitter, the researchers in question ‘paid thousands of
dollars and signed a contract that prevented them from sharing data with others’ (2015: 298).14
Conversely, larger and more established organizations such as Nielsen and BARB have ongoing
and long-standing contracts with Twitter through which they are able to access large quantities of
data (Nielsen, 2012). The high cost of access to data thus ultimately works in the favour of
established firms such as Nielsen and BARB who already have the necessary economic power and
industrial alliances firmly in place.
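What restricted access looks like in practice can be sketched from the researcher’s side. The example below uses the tweepy library against Twitter’s v2 recent-search endpoint; the credential is a placeholder, and the history and volume actually retrievable depend on the (paid) access tier attached to it, a small fraction of the full ‘firehose’.

```python
# A sketch of what 'restricted access' looks like in practice from the researcher's side,
# using the tweepy library against Twitter's v2 recent-search endpoint. The credential is
# a placeholder, and the volume and history you can actually retrieve depend entirely on
# the (paid) access tier attached to it - recent search only reaches back a matter of days.
import tweepy

BEARER_TOKEN = "YOUR-BEARER-TOKEN"   # issued per developer account / paid tier

client = tweepy.Client(bearer_token=BEARER_TOKEN)

# Count a small, recent sample of programme-related tweets; capped by the API itself.
response = client.search_recent_tweets(query="#VMAs -is:retweet", max_results=100)
tweets = response.data or []
print(f"Retrieved {len(tweets)} recent tweets (quota- and tier-limited sample)")
```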
It is worth noting that in the context of the television industry this data/digital divide has always
existed, with firms such as Nielsen and BARB restricting their data to a limited number of (paying)
clients. Rather than increasing this divide, it could be argued that Big Data has created new
opportunities to bridge this data gap. Indeed, despite the increasing barriers of access to information for individuals and smaller companies, there is evidence to suggest that networks and
producers are finding ways to complement if not bypass their long-standing relationships
with ratings providers by turning to Big Data and social media analytics on a more ad hoc basis.
A notable example of this occurred when the producers of Being Mary Jane (BET, 2013) decided
to conduct their own research using Adobe Social; a relatively inexpensive and widely available
social analytics software programme. In taking this step, the network was surprised to discover a
high degree of interest in one of the series’ more peripheral characters, Avery (played by Robinne
Lee). According to the network’s senior director of social media, this discovery led them to ‘amp
up [their] coverage of [the character] from a content perspective’ (Lespinasse, in Kuchinskas,
2014). Although it was far too late to change the direction of the main narrative in order to reflect
this discovery, the network was still able to recut commercials and create other original promotions
as a way to capitalize on these insights. Although this particular example refers to a promotion
rather than a programme that has been affected by the availability of social media analytics, it still
represents an important development. As Jonathan Gray has demonstrated in his study of media
paratexts, Show Sold Separately (2010), ancillary texts such as promotions play an important role
in framing our understanding and interpretation of the primary text.
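The kind of ad hoc, character-level analysis that the Being Mary Jane team ran can be approximated with very modest means. The sketch below does not use Adobe Social; the posts are invented and the supporting character names are included purely for illustration, but it captures the basic operation of surfacing which characters generate the most social media attention.

```python
# An illustrative sketch (not Adobe Social, and with invented posts) of the kind of
# ad hoc, character-level analysis described above: counting which characters are
# mentioned most in a batch of social media posts about a series.
from collections import Counter
import re

characters = ["Mary Jane", "Avery", "David", "Kara"]

posts = [
    "Avery is the most interesting character on this show",
    "Mary Jane and Avery scenes are the best part of the episode",
    "I need more Avery storylines next season",
    "Kara deserves her own spin-off",
]

mentions = Counter()
for post in posts:
    for name in characters:
        # Word-boundary match so e.g. 'Avery' is not found inside another word.
        if re.search(rf"\b{re.escape(name)}\b", post, flags=re.IGNORECASE):
            mentions[name] += 1

for name, count in mentions.most_common():
    print(f"{name}: {count}")
# With this invented sample, 'Avery' surfaces as the most-mentioned character -
# the kind of signal that prompted the network to recut its promotional material.
```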
Being Mary Jane is also a significant example as it demonstrates signs of a potential shift in the
power dynamics between ratings companies and their clients. In this instance, the network was able
to bypass Nielsen, discovering something that the market research giant might never have known
or thought to tell them. Of course, networks and producers have always conducted their own
independent research, but the quality and relative availability of social media data is allowing them
to do this more efficiently and cost effectively than ever before, not to mention on a larger scale
than was previously possible. Although critics such as Andrejevic are justifiably concerned when it
comes to the problem of data access, the example of Being Mary Jane is evidence of the more
democratic side of Big Data.
Although access to data remains a barrier for many – in particular producers and writers, but also
academics (Zelenkauskaite and Bucy, 2016) – the technologies, software and various protocols that
underpin this emerging industry are much more open to critical scrutiny. One key reason for this is
the influence and involvement of the open source community, which has played, and continues
to play, a pivotal role in the development of Big Data. The Netflix ‘tech blog’, for instance, regularly
publishes detailed posts about the key hardware and software developments that help deliver the
company’s streaming service. Although the tech blog openly details Netflix’s infrastructure and
therefore constitutes an invaluable resource when it comes to understanding the mechanics of Big
Data, the data itself continues to remain elusive. As such, the tech blog clearly has limited critical
value and ultimately stands as further evidence of Andrejevic’s data divide.
Data access remains an issue when it comes to private organizations such as Netflix, yet is less
of an issue in the case of public service broadcasters. The BBC, for instance, publishes a monthly
report about its iPlayer usage which contains a wide array of facts and figures. However, these
reports are an example of what Stefan Baack describes as an ‘interpretative monopoly’ (2015) in
which the information is only accessible in its final processed form, rather than as raw data, and
which may therefore reflect the biases of the relevant institution. In this example, the raw data are
processed and presented in a way that ultimately serves to justify the economic and cultural value
of the iPlayer by focussing primarily on issues of popularity. Were the raw data readily available,
who knows …