Research Paper


Final Project

Task 1: Project Proposal

The one-page proposal should include the following elements:

· The technology and/or social topic for the project

· The significance and importance of the topic, as well as your interest in it

· Key issues and resources for additional information or study

· A research plan, anticipated results, and timeline

The following is a list of suggested project topics:

Contemporary Technology:

3D printing, UAVs, robots, artificial intelligence, nanotechnology, blockchain, Bitcoin, cloud computing, big data technology, etc.

Technology in Society:

Technology and finance, Technology and food safety, Technology and agriculture, Technology and politics, Technology and government, Technology and education, etc.

Technology in History:

Technology Diffusion in Japan, Technology Diffusion in China, Technology Diffusion in India, Technology Diffusion in Saudi Arabia, Technology Diffusion in Israel, Technology Diffusion in Africa, etc.

Theories and Philosophy:

Technological Determinism and Artificial Intelligence, Constructivism and Big Data, Max Weber and C.S. Lewis, Adam Smith and Karl Marx, a Scriptural Perspective on Technological Advancement, etc.

Task 2 – Project Report

For report writing, please use APA format (maximum 10 pages, Times New Roman 12-point font, 1.5 line spacing, including front matter, body, and back matter).

Please use the following outline for the individual report:

· Introduction – the technology researched for this project, the significance and value of the technology, etc.

· Background – history, major developments, research questions, hypotheses, social and cultural settings, textbook relevance (e.g., driving forces, technology diffusion, global impact and convergence, technological determinism), etc.

· Results – literature review, personal interviews, major findings, data, pictures, videos, societal impacts, trends, points related to topics in the textbook, answers to research questions, possible theories and explanations of phenomena for the proposed hypotheses, etc.

· Conclusions and Discussion – summary of your work, surprises, serendipities, discussion of what you learned about the technology, the research process, teamwork, course goals, etc.

· References in APA style

Related links:

Purdue Online Writing Lab (OWL): https://owl.english.purdue.edu/owl/section/2/10/

SOCIETY AND TECHNOLOGICAL CHANGE
Seventh Edition

Rudi Volti
Pitzer College

Worth Publishers
A Macmillan Higher Education Company

Senior Vice President, Editorial and Production: Catherine Woods
Acquisitions Editor: Sarah Berger
Developmental Editor: Kirk Bomont
Executive Marketing Manager: Katherine Nurre
Marketing Assistant: Julie Tompkins
Director of Print and Digital Development: Tracey Kuehn
Associate Managing Editor: Lisa Kinne
Photo Research Manager: Ted Szczepanski
Photo Editor: Cecilia Varas
Art Director: Babs Reingold
Cover and Text Designer: Kevin Kall
Production Manager: Barbara Seixas
Composition: MPS Ltd.
Printing and Binding: RR Donnelley
Cover Art: ©James Brittain/View/Corbis
Library of Congress Control Number: 2012951470
ISBN-13: 978-1-4292-7897-3
ISBN-10: 1-4292-7897-8
©2014, 2009, 2001, 1995 by Worth Publishers
All rights reserved.
Printed in the United States of America
First printing
Worth Publishers
41 Madison Avenue
New York, NY 10010
www.worthpublishers.com

Contents

About the Author
Preface

Part One: Orientations

Chapter 1: The Nature of Technology
Defining Technology
Technological Advance and the Image of Progress
Technology as a Metaphor
Technology and Rationality
Technological Determinism
Living in a Technological Society
Questions for Discussion
Notes

Chapter 2: Winners and Losers: The Differential Effects of Technological Change
Technology as a Subversive Force
The Luddites
Neo-Luddism
Whose Technology?
What Technology Can Do—And What It Cannot Do

The Technological Fix
Why Technology Can’t Always Fix It
The Appeal of Technocracy
The Technocrat’s Delusion
Questions for Discussion
Notes

Part Two: The Process of Technological Change

Chapter 3: The Sources of Technological Change
Technological Change as a Social Process
The Great Breakthrough
The “D” in R&D
All Together Now
Push and Pull
Belated Demand
Market Economies and Technological Advance
Noneconomic Sources of Technological Advance
Questions for Discussion
Notes

Chapter 4: Scientific Knowledge and Technological Advance
The Historical Separation of Science and Technology
Studies of Contemporary Science–Technology Relationships
How Technology Differs from Science
How Technology Stimulates Scientific Discovery
Indirect Effects of Technology on Scientific Advance
The Commonalities of Science and Technology
The Translation of Science into Technology
Questions for Discussion
Notes

Chapter 5: The Diffusion of Technology
The International Diffusion of Technology
Clever Copyists
Adaptation and Adoption
Learning to Make Steel in Old Japan

Appropriate Technology
Business Firms and Technological Diffusion
A Risky Business
The NIH Syndrome
Efforts to Restrict the Diffusion of Technology
Patents and the Diffusion of Technology
Questions for Discussion
Notes

Part Three: How Technology Affects the Health of the Earth and Its Inhabitants

Chapter 6: Technology, Energy, and the Environment
Fossil Fuels, Air Pollution, and Climate Change
A Planet under Stress
Is Technology the Problem or the Solution?
Some Technological Fixes of the Past
Alternatives to Fossil Fuels
Doing More with Less
More Miles to the Gallon
Economic Systems, Government Policies, and the Environment
Questions for Discussion
Notes

Chapter 7: Medical Technologies
New Medical Technologies: Choices and Trade-offs
The Case of Kidney Dialysis
Replacing Broken Hearts
Diagnostic Technologies
Medical Technologies and Medical Ethics
New Ways of Making and Sustaining Babies
When Does Life End? When Should It?
Halfway Technologies
Questions for Discussion
Notes

Chapter 8: Genetic Technologies
The Genetic Fix
Discovering Genes and Patenting Them
Bioengineering on the Farm
Genetic Mapping and Screening
Cloning, Present and Future
Stem Cells and Future Therapies
The Ethics of Genetic Intervention
Questions for Discussion
Notes

Part Four: Technology and the Transformation of Work

Chapter 9: Work in Nonindustrial Societies
Working with the Earliest Tools
Work and Leisure in Technologically Primitive Societies
Work and the Development of Agriculture
Farming Techniques and Patterns of Work
The Ironies of Progress
Artisan and Craft Work
Guild Organization and Technological Change
Slavery and the Inhibition of Technological Development
The Measurement of Time and Changed Working Patterns
The Clock
Questions for Discussion
Notes

Chapter 10: Technology and Jobs: More of One and Less of the Other?
The Technological Threat in Historical Perspective
A Case for Optimism
How Technology Creates Jobs
The Indirect Effects of New Technologies on Employment

The Machines Aren’t Ready to Take Over
Technology, Jobs, and the Changing Structure of the Economy
Technology and the Distribution of Income
Technology, Globalization, and Jobs
Rebounding from Job Losses
Benefits, but Disruption Too
Questions for Discussion
Notes

Chapter 11: Technological Change and Life on the Job
Industrial Production
Machine-Paced Labor
Is Technology to Blame?
Industrial Technology and the Division of Labor
Scientific Management Once Again
Industrial Work and Recent Technological Developments
Technological Change and White-Collar Work
Telework
Smart Technologies and Dumb Jobs?
Questions for Discussion
Notes

Part Five: Communication

Chapter 12: Printing
The Printing Revolution
Printing and the Expansion of Knowledge
Printing and the Rise of Protestantism
Printing, Literacy, and Social Change
Psychological Effects of Printing
Newspapers
Circulation Wars and the Shaping of Public Opinion
Questions for Discussion
Notes

Chapter 13: The Electronic Media: From the Telegraph to Television
The Invention of Radio
The Origins of Commercial Radio
The Rise of Television
The Federal Government Steps In
Problems of Regulation
The Television-Viewing Public
Violence on Television and Its Consequences
Delivering the News
Television and Politics
Television and Thought
Questions for Discussion
Notes

Chapter 14: The Internet Age
The Birth and Growth of the Internet
E-Mail and the Network Effect
Mobile Communications
More Digital Connections: Social Networks
Social Media and Social Movements
Video Games
The Digital Divide
Intellectual Property
Privacy in the Digital Age
The Electronic Media in Modern Society
Questions for Discussion
Notes

Part Six: The Tools of Destruction

Chapter 15: Weapons and Their Consequences
Military Technology in the Ancient World
Military Technology and the Feudal Order
New Weapons and the Decline of Feudalism
The Gunpowder Revolution

War and the Centralized State
Technological Change and Naval Culture in the Era of the Battleship
Weapons and the Making of the Modern World
Questions for Discussion
Notes

Chapter 16: The Era of Smart Weapons
Cruise Missiles
Smart Bombs
High-Tech Surveillance
Drones
The Cost of Technological Sophistication
Asymmetrical Warfare
Technology and Terrorism
Cyberterrorism and Cyberattacks
Military Technologies in a Changing World
Questions for Discussion
Notes

Chapter 17: How New Weapons Emerge—And How They May Be Contained
Action and Reaction
Social Structure and the Development of Military Technologies
Organizational Interests and the Air Weapon
Social Revolution and the Enlargement of War
Industrial Technology in the Service of War
Controlling Military Technologies
Historical Attempts to Limit New Weapons
A Successful Example of Arms Control
Gun Control in Old Japan
The Control of Nuclear Weapons
Deterrence, but No More
The Perils of Proliferation
Questions for Discussion
Notes

Part Seven: The Shaping and Control of Technology

Chapter 18: Technology and Its Creators: Who’s in Charge of Whom?
Technological Advance and Cultural Lag
Technology, Globalization, and Cultural Convergence
Experts, Expertise, and the Shaping of Technology
Engineers and the Control of Technology
Questions for Discussion
Notes

Chapter 19: Organizations and Technological Change
Technology as a Cause of Organizational Structure
Technology as a Consequence of Organizational Structure
Organizations and New Information Technologies
Interorganizational Relations and Technological Development
Organizations and Technological Innovation
Entrepreneurs and Organizations
Questions for Discussion
Notes

Chapter 20: Governing Technology
Government Actions and the Shaping of Technology
But Is It Really Necessary?
Government Institutions for the Guidance of Technology
Processes
The Democratic Control of Technology
The Challenges of the Future
Questions for Discussion
Notes

Index

About the Author

Rudi Volti is Emeritus Professor of Sociology at Pitzer College, where he was a founding member of the program in Science, Technology, and Society of the Claremont Colleges. His books and articles have covered a variety of topics on the interaction of technology and society, including technology transfer to East Asia, the history of the engineering profession, the origin of frozen foods, and the history of automobile engines. His personal encounters with modern technology center on cars, motorcycles, and model railroading.

Preface
When the first edition of Society and Technological Change came out in 1988, Microsoft’s
initial public offering had occurred only two years earlier, tweets were something
birds did, and Mark Zuckerberg had not yet entered kindergarten. Since that time,
ongoing technological changes and new ways of interpreting the interaction of
technology and society have provided new opportunities to revise and expand
succeeding editions. Even so, the animating spirit of the book remains the same. This
seventh edition of Society and Technological Change continues to explore the many
ways in which various technologies have influenced our lives. At the same time, it
shows how these technologies have themselves been shaped by social, economic,
cultural, and political forces, and that the study of technology is important not just
for its own sake but also for what it tells us about the kinds of societies we make for
ourselves.
This book is intended to be used in the growing number of courses on
technology and society, as well as in other courses that take into account technology’s
role in human affairs. It presents perspectives, theories, and facts that should help
the reader to understand the consequences of technological changes, as well as the
forces that have produced these changes. Many specific examples of the interaction
between technological change and other changes are introduced, for general
processes are often best understood through references to particular instances.
The rapid pace of technological change during the opening years of the
twenty-first century may have led to an overuse of the word “revolutionary,” but
it also provides the basis for significant new discussions of the reciprocal
interactions of technology and society. In particular, the seventh edition of this book
now devotes an entire chapter to the Internet and digital communications media.
Chapter 14, “The Internet Age,” discusses mobile communications, social media
and social movements, the digital divide, and challenges to intellectual
property and personal privacy. Another new chapter, Chapter 16, “The Era of Smart
Weapons,” tracks advances in weaponry amid a changing military and political
environment. Among the topics covered are weapons such as cruise missiles,
smart bombs, and drones, which are raising remote-control warfare to a new level.
Also discussed are cyberattacks, terrorism, the financial costs of technologically
sophisticated weaponry, and the psychological distance that new weapons put
between those who deploy them and the consequences of their deployment.

One of modern technology’s strongest influences has been on the development
of the cluster of political, cultural, social, and economic changes that are subsumed
in the term “globalization.” New material in this edition covers offshoring and
technology transfer, appropriate technologies in poor countries, new media and social
movements in authoritarian societies, and the extent to which the world’s cultures
are converging toward a common pattern.
Some of the most important issues involving technology and society center
on health, both the health of humans and the health of the earth. In regard to
the latter, the broad issue of sustainability is addressed by expanded coverage of
climate change and the use of sources of energy other than fossil fuels. As far as
human health is concerned, advances in genetics research are giving rise to new
healing technologies. At the same time, however, DNA-based technologies also
pose many practical and ethical problems that are noted in an expanded chapter on
these technologies. Apart from human health concerns, genetic technologies offer
a number of benefits, everything from improved crop yields to ascertaining the guilt
or innocence of criminal suspects. These too present a number of concerns that will
be explored in this chapter.
The preparation of this new edition also has provided an opportunity to update
and extend many pertinent facts and statistics. These include new data on climate
change, the costs of medical care, unemployment, the distribution of income, video
game sales, the use of various media (including e-mail, mobile phones, and social
media), future employment prospects, and government support of research and
development.
Also new in this edition are short introductions to related chapter groupings
that preview some of the overarching themes of each chapter. In addition, new
discussion questions have been added at the end of every chapter, intended to
stimulate further consideration of how particular technologies interact with the societies
in which they emerge, are adopted, and mutate.
Although this edition has quite a lot of new material, no pretense is made that
it presents an all-encompassing view of technology and society. Much has been
left out because of space limitations and my own limitations of time, energy, and
expertise. At the same time, systematic study of the interactions between
technology and society is a relatively recent endeavor, and many gaps remain to be filled. It
can only be hoped that this book will provide a foundation for thought and future
study. If annoyance at the inadequacy of coverage leads the reader to undertake
more extensive explorations of some of the topics presented, then this book will
have served its purpose.
Acknowledgments
Writing can be a lonely activity. While I was putting this book together, some of
my loneliness was alleviated by being able to call on a number of colleagues for
assistance. I would like to thank the following people for reading portions of the
manuscript and making invaluable suggestions: Hugh G. J. Aitken, Newton Copp,
David Cressy, Stephen Cutcliffe, Paul Faulstich, Barbara Gutek, Margaret Hamilton,
Lamont Hempel, Christine Ilgen, Sue Mansfield, Meg Mathies, Richard Olsen,
Robert Post, Leonard Reich, Kathryn Rogers, Mark Rose, John Truxal, James C.
Williams, and Andrew W. Zanella.
I would also like to thank those who have reviewed this and previous editions:
Janet Abbate, University of Maryland; Patience Akpan, Arizona State University;
Elazar Barnette, North Carolina A&T University; Wenda K. Bauchspies,
Pennsylvania State University; Donald Beaver, Williams College; Paul Cesarini,
Bowling Green State University; Dave Conz, Arizona State University; Jennifer
Croissant, University of Arizona; Adam Driscoll, North Carolina State University;
Kerry Dugan, Northeastern University; R. Valentine Dusek, University of New
Hampshire; Anna Erwin, Appalachian University; Nora Foust, Alamance
Community College; Martin Friedman, SUNY Binghamton; Ted Gaiser, Boston
College; Gary Gappert, The University of Akron; James Gerhardt, Southern
Methodist University; Kenneth Gould, Northwestern University; James
P. Hamilton, Pennsylvania State University; Kurt Helgeson, St. Cloud State
University; Robert Hoffman, North Carolina State University; Charles Jaret,
Georgia State University; Richard Kahoe, University of Central Missouri; Felix
Kaufmann, Eastern Michigan University; Robert Keel, University of Missouri—
St. Louis; Mark Kelso, Embry-Riddle Aeronautical University; David Klein, Metro
State College of Denver; Diane N. Long, California Polytechnic University;
Carol MacLennan, Michigan Technological University; Toy McEvoy, Wayne
State College; Marilyn Mertens, Midwestern State University; Todd Morgan, De
Paul University; Karen Oslund, University of Maryland, College Park; Robert S.
Paradowski, Rochester Institute of Technology; Karin E. Peterson, NC-Asheville;
Dretha M. Phillips, Roanoke College; John Renzelman, Wayne State College;
Terry Richardson, Northern State College; Laurel Smith-Doerr, Boston University;
Donald Sorsa, DePaul University; James Steele, James Madison University; David
Swift, University of Hawaii; L. E. Trachtman, Purdue University; Yung-Mei
Tsai, Texas Tech University; Della M. Vanhuss, Tri-County Technical College;
Steve Vergara, Wayne State College; Rollin Williams III, East Tennessee State
University; and Thomas Zeller, University of Maryland, College Park. Their
knowledge and expertise exceed my ability to make complete use of the help they
have given me, and they are not responsible for any errors of fact or interpretation
that may be found in these pages.
I would also like to thank the editorial and production staffs of Worth
Publishers. Sarah Berger and Kirk Bomont have been terrific sources of guidance
and encouragement; although I am pleased to see the publication of this new
edition, I will miss our regular conferences regarding its style and content. I also
appreciate the able assistance of Cecilia Varas, Lisa Kinne, Edward Dionne, and Barbara
Seixas. Finally, special thanks go to my wife, Ann Stromberg, and our daughter,
Kate, for their unfailing support.
Rudi Volti


Part One: Orientations
The ability to create and use a great variety of technologies is one of the
distinguishing characteristics of humans, but what exactly is meant by “technology”?
The term is a familiar one, but like many words in current circulation it carries
with it a multitude of meanings. Chapter 1 offers a definition of technology that
is meant to be precise but elastic enough to cover the many connotations of the
word. Although technology is often associated with particular items of hardware,
the ultimate basis of technology is knowledge, and the chapter delineates the ways
of thinking that are associated with technological advance.
Chapter 1 also includes an effort to disentangle technological advance from an
even more slippery concept: “progress.” In Chapter 2 the discussion is continued
by noting that many technological changes do not necessarily make things better
for everyone, as is implied in the word “progress.” To the contrary, they may affect
individuals and groups in different ways, leaving some better off while others are left
in a worse position. This aspect of technological change is often ignored, making it
hard to resist the temptation to seek technological fixes for problems that require
more than the introduction of new devices and processes. This chapter describes the
kinds of situations where technological fixes are likely to be successful and others
where they are doomed to failure.

Chapter One: The Nature of Technology

Today’s technology leaves us both exhilarated and terrified. Recent technological
developments have presented us with such marvels as spacecraft leaving the solar
system, instant access to billions of Internet Web pages, and diseases cured through
gene therapy. At the same time, however, the seemingly inexorable march of
technology has produced global pollution, overpopulation, and the threat of nuclear
annihilation. On many occasions technological change has also produced social
disruptions, as when automation destroys jobs in a particular industry or a new
weapon upsets the balance of power between nations. And when technologies fail,
some of them do so in a big way, as exemplified by the loss of the Challenger and
Columbia space shuttles, the massive oil spill in the Gulf of Mexico, the catastrophic
failure of the Fukushima nuclear plant in Japan, and the disastrous breaching of the
levees in New Orleans in the wake of Hurricane Katrina.
Despite all the crises, disruptions, and disasters that have accompanied it,
modern technology is still viewed in a favorable light, according to public opinion
surveys. Although significant minorities of respondents express their disapproval of
certain technologies like nuclear power and genetically modified foods, the positive
achievements of technology as a whole are seen to substantially outweigh the negative
ones.1 But this support of technology is based more on faith than on understanding.
When confronting technology, most of us are poorly informed spectators, seemingly
incapable of understanding an esoteric realm of lasers, microprocessors, gene splicing,
and nanomaterials.
This inability to understand technology and perceive its effects on our society and
on ourselves is one of the greatest, if most subtle, problems of an age that has been so
heavily influenced by technological change.2 But ignorance need not be a permanent
condition. Although no one can hope to comprehend the inner workings of even a
small number of the most significant technologies, it is still possible to come to a better
understanding of the major causes and consequences of technological change. All
technologies, be they high-definition televisions or reinforced concrete bridges, have
some basic features in common. It will be the task of this chapter to show what they are.
Defining Technology
Gaining an understanding of the meaning of words is often the beginning of
knowledge. Before plunging into a discussion of the nature of technology, it is
necessary to provide a more precise definition of what is meant when we use the
term. The linguistic roots of the word “technology” can be traced to the
Indo-European stem tekhn-, which seems to have referred to woodworking. It is the
source of the Greek word tekhne, which can be variously translated as “art,” “craft,” or
“skill.” It is also the root of the Latin word texere, “to weave,” which eventually took
on the larger meaning of fabrication or construction. The term “technologist” was
occasionally used by Aristotle and others of his time, but in their usage it referred
to a grammarian or rhetorician. By the early eighteenth century the word had come
close to its present meaning when an English dictionary defined it as “a Description
of Arts, especially the Mechanical.” In 1831 Jacob Bigelow published Elements of
Technology, the first book in English with the word “technology” in its title. As he
defined it, technology consisted of “the principles, processes, and nomenclatures
of the more conspicuous arts, particularly those which involve applications of
science.”3
Technologies are developed and applied so that we can do things not otherwise
possible, or so that we can do them cheaper, faster, and more easily. The capacity
of human beings to employ technologies sets us apart from other creatures. To be
sure, beavers build dams, otters crack open shellfish with rocks, and chimpanzees
use sticks to extract termites from their nests. But no other animal comes close
to humans in the ability to create tools and techniques—the first two elements
in our definition of technology—and no other creature is so dependent on them.
The development of technology is in large measure responsible for the survival and
expansion of a species that lacks many of the innate abilities of other animals. Left
with only their innate physical capabilities, humans cannot match the speed of a
cheetah, the strength of an elephant, or the leaping ability of a kangaroo. They
do not possess the eyesight of an eagle or the defensive armament of a porcupine,
and they are among the 25 percent of all species that are incapable of flying. All
in all, humankind is a physically puny bunch. But compensating for this physical
weakness is an intelligence that is the ultimate source of technology. Humans
stand apart from all other animals in their ability to gain and transmit knowledge,
and to use this knowledge to develop tools and techniques. Without this capacity
to invent and use a great variety of technologies, members of the human species
would have never been able to establish themselves on virtually every part of the
globe.
Reliance on technology is as old as humanity itself. Whatever evils have
accompanied the use of particular technologies, it is pointless to indict technology
as being somehow “unnatural.” Our past as well as our future as a species is
inextricably linked to our capacity to shape our existence through the invention and
application of implements and techniques that allow us to transcend our meager
physical endowments. It is certainly true, as Jacob Bronowski observed, that “to
quarrel with technology is to quarrel with the nature of man—just as if we were to
quarrel with his upright gait, his symbolic imagination, his faculty for speech, or his
unusual sexual posture and appetite.”4
Tools and techniques have been of unquestioned importance in allowing
the physical survival of the human species. Still, they are not the whole story.
It is necessary to add some elements to our definition of technology that go
beyond the usual identification of technology with pieces of hardware and ways of
manipulating them. The first of these is organization. This follows from the fact that
the development, production, and employment of particular technologies require a
group effort. Even a relatively simple technology, such as one centering on the use
of earthenware pots, requires a complex network of material suppliers, potters, tool
makers, marketing agents, and consumers capable of making good use of the pots. Of
course, one person can learn all these skills adequately if not expertly, but the day is
not long enough for him or her to do them all on a scale that produces a reasonable
degree of efficiency. In the case of a complex technology like a computerized
manufacturing system, there is no possibility of a single individual developing even
a tiny fraction of the requisite skills. For a technology to be developed and used,
the energies and skills of many individuals have to be combined and coordinated
through some organizational structure. Organization may be likened to the software
that controls and guides a computer; without an operating system and application
programs, a computer is a useless arrangement of capacitors, transistors, resistors,
and other bits of hardware. In similar fashion, an organizational structure allows the
integration of diffuse human and material inputs for the attainment of particular
tasks. From this standpoint, there is considerable merit in Lewis Mumford’s
assertion that the first “machine” was not a physical object, but the organizational
structures that the Egyptian pharaohs employed to build the pyramids.5
When technology is seen as a combination of devices, skills, and organizational
structures, it becomes natural to think of it as a system, the next element in our
definition. For an individual technology to operate effectively, more is required
than the invention of a particular piece of hardware; it has to be supported by
other elements that are systematically interconnected. When Thomas Edison
began to work on electrical illumination, he realized that this technology would
require the development of such a system. The invention of a practical, long-
lasting light bulb rested on the development of a serviceable filament and the use
of an improved vacuum pump that evacuated the interior of the bulb, thereby
preventing the combustion of the filament. But by itself, a light bulb was useless.
An effective electrical generator was needed to supply the current that produced
the incandescence of the filament. A network of electrical lines had to be strung up
between the generator and individual homes, shops, and factories. And metering
devices were necessary so that users could be accurately billed for the electricity
they used. Edison and his associates worked out all of these problems, and in so
doing brought large-scale electrical illumination to the world.6
The development of all the elements of a technological system can be an
uneven process, for technological advance often entails the resolution of tensions
that are generated when one part of the technological system changes. This process
is exemplified by the development of the modern airplane. Early biplanes with
their drag-inducing wires and struts could not make effective use of more powerful
engines. The availability of these engines became a strong inducement to the
design of aerodynamically cleaner aircraft. The faster aircraft that resulted from the
marriage of streamlined airframes and powerful engines produced a new problem:
dangerously high landing speeds. This, in turn, stimulated the invention of wing
flaps and slots. By the 1940s it had become apparent that improved airframes could
achieve still higher speeds if provided with more powerful engines; this possibility
gave a strong stimulus to the development of the turbojet.7
For an example of the interplay of devices, skills, and organizational patterns,
we can take note of Lewis Mumford’s analysis of the technology of handwriting.8
Two hundred years ago, the standard writing instrument was a goose-quill pen.
Based on an organic product and sharpened by the user, it represented the handicraft
technologies typical of its time. Cheap and crude, it called for a fair degree of skill if
it was to be used effectively. In contrast, the steel-nib pen of the nineteenth century
was a typical artifact of the industrial age, the product of a complex manufacturing
process. Less adaptable than the quill, it was mass-produced in many different forms
in order to meet specialized needs. Although Mumford’s ideas were formulated
before the invention of the ballpoint pen in the 1940s, his analysis fits this
implement perfectly. Made from a variety of artificial materials and manufactured
to close tolerances, the ballpoint pen could only be produced through sophisticated
industrial processes. It is completely divorced from the organic world and requires
very little skill from its user. Indeed, the technological artistry embodied in the pen
itself stands in sharp contrast to the poor quality of the writing that so often comes
from the hand that wields it.
A technological system does not emerge all at once with every one of its
components neatly fitting together. In addition to changes in tools, techniques,
and organizational structures, many social, psychological, economic, and political
adjustments may be required for the support of a technological system. Technological
change is not always a smooth process, and many of the necessary changes may
entail considerable pain and disruption. Seeing technology as a system should help
us to understand that technological change is closely connected with a variety of
associated changes, and that the creation of a technological system may be fraught
with tension and discomfort.
Much of what has just been said can be incorporated into a schematic definition
of technology: a system created by humans that uses knowledge and organization
to produce objects and techniques for the attainment of specific goals.
Useful as it may be, this definition of technology is incomplete and possibly
misleading in one important respect. The last part of the definition implies that
technological change comes about as a response to existing needs: its purpose is
“the attainment of specific goals.” In the first place, one could legitimately ask whose
goals are to be attained. This is an important issue, but it is best left for the next
chapter. For now, we should note that although it is a human creation, technology
does not always respond to existing needs; a new technology may in fact create its
own needs. The development of technology on occasion exemplifies a phenomenon
that has been dubbed “the law of the hammer”: give a six-year-old a hammer, and
to the child everything starts looking like a nail.
The history of technology is replete with examples of inventions looking for
problems to solve. One example that illustrates this point is found in almost every
medicine chest: a bottle of aspirin. One of the most common uses of aspirin is
to suppress fevers that accompany various illnesses. But recent medical research
(as well as some ancient practices) has demonstrated that running a fever is a
therapeutic process that aids in a patient’s recovery; it is the body’s way of naturally
combating infection. Yet since the introduction of aspirin in the early 1900s, fever
has been seen as a problem requiring intervention. As one medical researcher has
noted, “It’s no surprise that society’s deep worries about fever closely followed the
synthesis of aspirin, the first drug that could safely reduce it.”9 In short, a new
technology created its own need.
It is also important to note that the goals achieved through the use of a
technology do not have to be “practical” ones. Some technologies have been
developed so that we can grow more food or construct more comfortable buildings,
but others have been developed simply for the challenge and enjoyment of
solving technological problems,10 a proclivity that Robert Post has described as
“technological enthusiasm.”11 The prodigious efforts that went into the Daedalus
Project, a successful attempt to build a human-powered aircraft capable of flying
forty miles across the open sea, were certainly not motivated by an effort to produce
a new form of transportation. A major reason for creating the aircraft was that its
construction posed an intriguing technological challenge to those who designed,
built, and flew it.
Flight seems to be a particularly attractive object for this kind of spirit.
Immensely expensive technological endeavors such as the supersonic Concorde
airliner and manned space exploration programs are hard to justify on practical
grounds, although their supporters have made valiant efforts to do so. Their primary
purpose seems to be the elevation of national prestige by demonstrating a nation’s
collective ability to solve daunting technological problems. At the same time,
many other technologies have a dual nature; they serve a practical purpose, but
they are not valued only for this reason. An outstanding example is the automobile.
It would be hard to justify the enormous resources employed for the building and
operation of cars if transportation were the only goal. For many people (the author
included), cars are objects of inherent fascination. Technological features like
variable valve timing and active suspension systems have little to do with utilitarian
transportation. The appeal is at least as much in the sophisticated technologies
themselves as in the purposes that they serve.
Technological Advance and the Image of Progress
The development of technology is an inherently dynamic and cumulative process.
It is dynamic because a technology is never perfect; there is always room for
improvement. As Henry Ford said of his firm, “If we have a tradition it is this:
Everything can always be done faster and better.”12 It is cumulative, for one
advance paves the way for another. The lessons learned in working with an existing
technology very often provide materials, tools, and, most importantly, a knowledge
base for the next stage of development.
The dynamic and cumulative nature of technological change sets it apart from
many other human endeavors. Ignoring for the moment the social consequences
of technology, the process of technological change is usually one of continuous
improvement in the internal workings of a particular technology: as they evolve,
engines develop more power and are more efficient, integrated electronic circuits
pack more components on a single chip, aircraft fly higher and faster.

[Figure: Sometimes we are inclined to look to technology for our salvation, as personified in this tongue-in-cheek rendition of a sanctified Steve Jobs. (© The Economist Newspaper Limited, London)]
The process of technological advance can be graphically portrayed according
to the following diagram, in which the horizontal axis represents time and the
vertical axis represents just about any aspect of technological advance: the speed
of commercial airliners, the production of synthetic materials, or the number of
articles in engineering journals. Although there are inevitable fits and starts over
time, the general trend can be depicted as a sigmoid, or S-shaped curve:

[Figure: Technological advance (vertical axis) plotted against time (horizontal axis), following an S-shaped curve.]
Note that at first the curve rises rather slowly, inclines steeply in the middle,
and then begins to slow down. That is, after an initial period of slow growth, the
rate of advance accelerates, reaches a maximum, and then begins to proceed at a
slower pace but never completely levels off. Although the rate of increase is smaller
as the curve moves toward the right, this rate is applied to an increasingly larger
base, so the actual addition is still substantial.
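
For concreteness, one standard functional form with this shape is the logistic curve; this is only an illustrative sketch, since the chapter does not commit to any particular equation:

\[
A(t) = \frac{L}{1 + e^{-k(t - t_0)}}
\]

Here A(t) is the level of technological advance at time t, L is an upper bound, k sets the steepness of the middle portion, and t_0 marks the inflection point. The growth rate dA/dt = kA(1 - A/L) is small when A is near zero, peaks when A = L/2, and tapers off thereafter, matching the slow start, steep middle, and gradual slowdown just described.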
Not all human endeavors can be fitted to this sort of curve. While technology
tends to be dynamic and cumulative, the same cannot always be said of other
manifestations of human creativity. Although there is ample room for debate, a
good case can be made that succeeding generations of writers, composers, and
painters have not produced works superior to the ones created by Shakespeare,
Beethoven, and Vermeer. And while we continue to take great pleasure in the
artistic creations of eras long past, few of us would be satisfied with the technologies
that were prevalent in those times. We also see few indications that people are
more humane than they were centuries ago. The present era certainly provides a
multitude of horrifying examples of human cruelty, many of them augmented by
enlisting technology in the service of slaughter and destruction.
Still, when judged solely according to internal criteria, technology is one of
the best examples of humankind’s largely unrealized dream of continual progress.
Technological progress, however, is not the same thing as progress in general. The
fact that a society is able to develop and make use of advanced technologies does
not guarantee that it will be equally advanced in other areas.13 Nazi Germany
produced many technological triumphs, such as the all-conquering Mercedes and
Auto Union grand prix racing cars of the late 1930s and the V-2 rocket used during
World War II, but in its ideology and treatment of people it can only be described
as barbaric. Conversely, many technologically primitive peoples have exhibited
a high level of sophistication in their artistic creations, religious beliefs, and
social relationships. The term “progress” can be used with some precision when
applied to the development of technology per se, although even here problems
can crop up because different standards of evaluation may lead to conflicting
conclusions. Is it really “progress” when a new medical technology maintains an
individual’s life, but does so only at enormous expense while preserving nothing
but the maintenance of organic functions? Does maintaining a “life” of this sort
justify expenditures that otherwise might be used for expanded prenatal care or
other preventative measures? Given all of the value judgments, ambiguities, and
complexities surrounding the word “progress,” its use is avoided here unless its
meaning is clearly defined.
[Figure: Built with slave labor, the V-2 rocket exemplified the technological advances achieved in Nazi Germany. (Hulton Archive/Getty Images)]
Technology as a Metaphor
Despite these qualifications, it is evident that beginning in the late eighteenth
century and continuing today, technology’s stunning advances have fueled a belief
in generalized human progress. In this way, technology has operated as a metaphor—
the transference of an idea from one area to another. Technology has provided many
other metaphors that have affected our way of looking at ourselves and the world,
as when human thought is made analogous to the operation of a digital computer.
A further example of the power of a technology to shape our way of thinking
comes from the late eighteenth century. At that time the designers of windmills
and steam engines discovered the important principle of feedback, which the
great twentieth-century mathematician Norbert Wiener defined as “a method of
controlling a system by reinserting in it the results of its past performance.”14 When
a steam engine begins to rotate too rapidly, a feedback device such as a flyball
governor closes the valve that admits the steam, thereby bringing the engine back
into its proper operating range. When it slows down, the reverse happens, and the
governor opens the valve to admit more steam.
[Figure: A steam engine with a flyball governor. Changes in the rotational speed of the vertical shaft at the top of the engine cause the two balls to move up or down, controlling the linkage that opens and closes the throttle. (Hulton-Deutsch Collection/CORBIS)]
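
To make the feedback loop concrete, the following minimal Python sketch simulates a governor of this kind. It is my own illustration rather than anything from the text: the set point, gain, and linear engine model are all assumed values chosen for clarity.

# Illustrative negative-feedback loop in the spirit of the flyball governor.
# The constants and the linear engine model are assumptions, not from the text.

SET_POINT = 100.0  # desired engine speed (arbitrary units)
GAIN = 0.2         # how strongly the governor responds to the speed error

def governor_step(speed, valve):
    """Close the valve a little when the engine runs fast; open it when slow."""
    error = speed - SET_POINT
    valve = valve - GAIN * error / SET_POINT
    return min(1.0, max(0.0, valve))  # a valve can only be 0 to 100 percent open

def engine_step(valve):
    """Crude engine model: speed is proportional to the steam admitted."""
    return 200.0 * valve

valve, speed = 0.8, 160.0  # start well above the set point
for _ in range(20):
    valve = governor_step(speed, valve)
    speed = engine_step(valve)

print(round(speed, 1))  # settles at the 100.0 set point

Because the result of each cycle (the new speed) is fed back into the next adjustment of the valve, the loop is a direct instance of Wiener’s “reinserting in it the results of its past performance”; flipping the sign of GAIN would turn it into a runaway positive-feedback loop instead.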
During the late eighteenth century the feedback principle offered a suggestive
metaphor for the workings of the economic system: instead of being guided by a
centralized authority, an economy might best be organized through the operation of
a self-regulating market, with the actions of independent buyers and sellers providing
the feedback. Thus, when buyers wanted a particular commodity, its price would be
high, motivating sellers to produce more of it. If the price were low, less would be
produced. In similar fashion, an increase in production would cause the price of a
commodity to fall, so more of it would be purchased, while a drop in production would
cause the price to rise, leading to a reduction of purchases. In this way, the actions of
buyers and sellers in the market provide a feedback mechanism through which supply
and demand are supposedly brought into equilibrium. It is probably no coincidence
that the Scottish economist Adam Smith developed this basic concept at the same
time that the steam engine was being put into service.15 Today, the widespread use
of the feedback principle makes its apparent applicability to the economic system
even more appealing, even though the real-world economy is hardly a neat closed
system like a steam engine. Laws and regulations as well as a host of other extraneous
elements may strongly affect individual feedback loops, thereby preventing a
complex economy from operating solely on the basis of supply-and-demand signals.
Technological development has supplied a useful metaphor in the feedback principle,
but like all metaphors it cannot be taken as a literal depiction of reality.
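
The same loop structure can be sketched for the self-regulating market described above. In this toy Python model, the linear demand and supply schedules and the adjustment rate are illustrative assumptions, not a description of any real economy:

# Toy price-adjustment loop: buyers and sellers supply the feedback.
# The linear schedules and adjustment rate are assumptions for illustration.

def quantity_demanded(price):
    """Buyers purchase less as the price rises."""
    return max(0.0, 100.0 - 2.0 * price)

def quantity_supplied(price):
    """Sellers offer more as the price rises."""
    return 4.0 * price

price = 5.0  # start well below equilibrium
for _ in range(200):
    excess_demand = quantity_demanded(price) - quantity_supplied(price)
    # Feedback: a shortage pushes the price up, a glut pushes it down.
    price += 0.01 * excess_demand

print(round(price, 2))  # converges toward the equilibrium price of 100/6, about 16.67

The tidy convergence is exactly what makes the metaphor seductive; as soon as taxes, regulations, or other outside forces perturb the loop, a real economy stops behaving like this closed system.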
Technology and Rationality
The development of technology has stimulated a belief that progress is a natural
part of human life. At the same time, the progressive development of technology
has itself been the product of a distinctive set of cultural values and mental
processes that are characterized by a rational approach to the world and how it is to
be controlled. Technological development is more than the random accumulation
of tools, techniques, and organizational forms. Underlying the process is a set of
attitudes and orientations that are collectively described as “rational.”
What makes a technologically progressive society different from others is that
its methods of problem solving are oriented toward an objective scrutiny of the
problem at hand, coupled with a systematic, empirically based examination of
possible solutions and a logical selection of the most appropriate ones. Beyond this
approach to the solution of problems lies another cultural attribute: the belief that
solutions are possible and that constant changes are necessary in order to realize
them. A society imbued with a rational ethos is dynamic and essentially optimistic,
and it exhibits the confidence necessary to alter existing ways of doing things in
order to gain particular benefits.
These abstract concepts may be illustrated through a simple example. All
societies are faced with the problem of coping with the capriciousness of the
weather. A great deal of human suffering has been the result of the vagaries of
rainfall, and history provides many examples of the tragic consequences of drought.
A number of responses are possible when people are confronted with this problem.
The simplest is to succumb to despair, and perhaps try to find meaning in it by
attributing the drought to fate or God’s will. A more active approach might be to
offer prayers, perform a special ceremony, or sacrifice a member of the community.
These latter activities are not likely to meet with success. There is no logical
or empirically verifiable connection between them and the circumstances that
produced the drought, a fact that could be demonstrated by a systematic inquiry
into the long-term connection between prayers, ceremonies, or human sacrifices
and the incidence of rainfall.
Attitudes and behaviors of this sort stand in sharp contrast with rational
ones. Through the use of logic and empirical observation, it is possible to develop
ways of dealing with problems like drought that are both more effective and more
closely connected to the way the world actually works. A systematic and empirical
observation of weather patterns might allow the prediction of a drought so that
necessary steps can be taken to alter farming practices and conserve water. Other
solutions could be the development of drought-resistant crops, improved methods
of conserving water, and the distillation of sea water. It might also be possible to
artificially stimulate rainfall through cloud seeding. In short, a rational approach
to problem solving is continuously concerned with identifying and developing
appropriate means for achieving particular ends.
These remarks are not meant to convey the ethnocentric belief that modern
Western culture is superior to all others. The intention here is not to ridicule the
beliefs and practices of people and societies that use nonrational approaches to
problem solving. There is no reason to believe that rationality has been and always
will be the special attribute of a particular group of people. Moreover, modern
societies often manifest behaviors and patterns of thought that are anything but
rational, as when large numbers of people continue to find value in astrology,
numerology, and the predictions of supposed psychics.
It is also important to recognize that rational ways of thinking do not confer
moral superiority. To the contrary, the rigorous development and use of rational
procedures can be accompanied by major moral and ethical transgressions. The
rational method of problem solving, with its overarching concern for devising
appropriate means for attaining particular ends, makes no distinction concerning
the ends being pursued. There is nothing in the rational approach to the world that
prevents the use of logically and empirically derived means in the service of goals
that are neither rational nor ethically justifiable. We can take note of the words
of Captain Ahab, the main figure in Herman Melville’s novel Moby Dick: “All my
means are sane, my motive and my object mad.” Nazi Germany provides many ghastly
historical examples of human destruction ensuing from rational thinking and its
resultant technologies. As Albert Speer, Hitler’s Minister of Armaments, ruefully
noted, “The criminal events of these years were not only an outgrowth of Hitler’s
personality. The extent of the crimes was also due to the fact that Hitler was the first
to be able to employ the implements of technology to multiply crime.”16
Even when rationality is not used for manifestly immoral purposes, it can still
leave a dubious spiritual legacy. The very strength of rationality and the scientific
and technological accomplishments that flow from it lie in their matter-of-fact
approach to the world. A rational approach to things is often accompanied by
a reluctance to admit there are any forces incapable of withstanding logical and
empirical scrutiny. As the great German sociologist Max Weber put it, the world
defined by rational thought processes had become “disenchanted,” for it was bereft
of the gods, genies, and spiritual forces that people not imbued with the spirit of
rationality used to explain their world.17 But “disenchantment” is a two-edged
sword, as the everyday meaning of the word makes clear. To be disenchanted is
to lose the sense of awe, commitment, and loyalty that is a necessary part of a
meaningful existence. Weber’s melancholy analysis of a world that has lost its
enchantment is summarized by the French sociologist Julien Freund:18
With the progress of science and technology, man has stopped believing in magic
powers, in spirits and demons; he has lost his sense of prophecy and, above all, his
sense of the sacred. Reality has become dreary, flat and utilitarian, leaving a great
void in the souls of men which they seek to fill by furious activity and through
various devices and substitutes.
Similar misgivings were voiced by the eighteenth-century political philosopher
Edmund Burke. Burke’s primary concern was the destruction of traditional
authority by modern mass movements, as exemplified by the French Revolution.
Burke attributed much of the demonic energy of that movement to the spread of
rational modes of thought that left no room for the traditional attitudes, values, and
political structures that had long sustained European civilization. Burke’s comment
on the downfall of the queen of France, Marie Antoinette, thus contains a sharp
indictment of the bearers of rational values who, in his estimation, were leading
Europe to its doom:19
Little did I dream that I should have lived to see such disasters fallen upon her in
a nation of gallant men, in a nation of men of honor and of cavaliers. I thought
ten thousand swords must have leaped from their scabbards to avenge even a
look that threatened her with insult. But the age of chivalry is gone. That of
sophisters, economists, and calculators, has succeeded; and the glory of Europe is
extinguished forever.
Rationality also implies objectivity; coolness and detachment are part of the
rational approach to understanding and changing the world. Guided by a rational
outlook, scientific inquiry and technological application are usually based on the
abstraction or isolation of the part of the natural world that is being studied or
manipulated. This isn’t always a good thing, for it can produce a sharp separation
between the individual and the rest of the world. The scientist or technologist
stands apart from the system that is being studied and manipulated, resulting in a
kind of tunnel vision that all too often ignores the larger consequences of gaining
and applying knowledge.20 For example, in discovering a genetic marker for a
serious disease, a researcher might not consider potential abuses of that discovery,
such as insurance companies refusing coverage of people with that marker.
It also may be argued that a logical, detached, and dispassionate approach to
the world is suffused with a “masculine” way of understanding and interacting
with the world. Some technologies have largely been a male domain, but
throughout history women also have made significant contributions to technological
advance.21 The complex relationship of gender and technology is illustrated by the
history of the technological artifact most strongly associated with the present era,
the digital computer. Its development has generally been viewed as the product
of hyper-rational male engineers, mathematicians, scientists, and technicians. In
reality, many of the programmers of first-generation computers were women whose
accomplishments have often been passed over in standard histories.22 More recently,
the development of computer technology has depended on thought processes that
are relentlessly rational, objective, and logical, but at the same time has required
an intuitive, interactive, and generally less structured approach.23 This is not to say
that either style is the exclusive province of men or women, only that technological
advance often requires both approaches. Equally important, although these modes
of thinking may be described in gender terms, they need not reflect the cognitive
approaches of individual men and women.
Technological Determinism
Nothing worthwhile in life comes without some costs attached. So it is with
technology; while it has expanded human power and made our lives materially
richer, the advance of technology has created many problems—environmental
degradation, alienation, and the threat of nuclear annihilation, to name only the
most obvious ones. And, most bothersome of all, there looms the possibility that
technology is out of control. If this is so, what began more than a million years
ago as a human creation has taken on a life of its own, with technology advancing
according to its own inner dynamic, unrestrained by social arrangements, systems
of governance, culture, and thought.24 The belief that technology acts as an
independent force in our lives, unaffected by social forces, is known as “technological
determinism,” and if it is true, we have become the servant of technology instead
of its master.
There can be little question that technology exerts a great influence on social,
political, and economic relationships. Everything from antibiotics to zippers has
affected our lives to some degree; many of these influences will be explored in
subsequent portions of this book. But that is not the end of the story. As will be explored
at greater length in Chapter 3, students of technology have given extensive
consideration to the opposite possibility, that instead of operating as an independent force,
technology is shaped by social arrangements. According to social constructivists
(adherents of the Social Construction of Technology approach), the emergence
of particular technologies, choices between competing technologies, and the way
these technologies are actually used owe a great deal to socially grounded forces like
political power, social class, gender, and organizational dynamics.
Asserting the supremacy of either technological determinism or social
constructivism is not a very useful activity. Such straightforward cause-and-effect
relationships can be found in some realms—Newtonian physics, for example—but
technological and social change is better understood in terms of probabilities,
reciprocal interactions, and feedback loops. Even William F. Ogburn, a sociologist
who is often characterized as a technological determinist, on occasion took a
more nuanced view of the subject: “The whole interconnected mass [i.e., social
institutions, customs, technology, and science] is in motion. When each part is
in motion and banging up against some other part, the question of origins seems
artificial and unrealistic. If one pushes the question to the extreme, origins are lost
in a maze of causative factors.”25
The wondrously complicated interactions of technology and society often
result in unimagined consequences when new technologies emerge. To take one
example, when the first digital computers appeared in the mid-1940s, they elicited
modest expectations about their future applications. Today, the world as we know
it is almost unimaginable without computers, as everything from air travel to the
mapping of genomes is totally dependent on the storage, retrieval, and manipulation
of information performed by computers. Accordingly, the history of the computer
would seem to lend credence to technological determinism. Nobody saw it coming
in the 1940s, but within a few decades the computer had become a universal and
essential part of contemporary life.
This is the story from a technological determinist standpoint, but social
constructivists would challenge it by noting that the technical development of the
computer in the 1950s and 1960s was heavily supported by military expenditures,
just as one of today’s major computer applications, the Internet, was initially a
creation of the U.S. Department of Defense. Someone taking a social constructivist
approach might also point out that the expansion of the market for computers
was also powerfully stimulated by commercial enterprises like banks and insurance
companies, and that this huge market supported the research and development that
rapidly advanced computer technology.
A similar story could be repeated for most successful technologies. New
technologies bring changes to many aspects of society, while at the same time social
forces do much to stimulate and shape these technologies. To try to assign primacy
to one or the other is to ignore a crucial feature of technological and social change.
Both are dynamic processes characterized by the reciprocal interaction of a host
of factors, some of them narrowly technical in nature, others not. No reasonable
person could deny that technology has been a major force in making the world we
live in, but it is important to always keep in mind that technology has not operated
as an agent independent of the society in which it is embedded.
Social constructivism therefore offers the possibility for more human agency
than technological determinism, but it is not likely that the ability to influence
the course of technological change will be evenly distributed among the population
as a whole. To the contrary, social constructivist analyses have often shown how
differences in power and access to resources have shaped technological change.
Particular technologies may be devised, selected, and disseminated because they
serve the interests of a particular group, possibly in opposition to the interests of
other groups. Technology confers power, but this power is not wielded over only the
nonhuman universe. As C. S. Lewis has reminded us, “Man’s power over nature is
really the power of some men over others with nature as their instrument.”26
Living in a Technological Society
The development and application of technologies suited to our needs require
the informed participation of a wide range of people. Unfortunately, the very nature of
modern technology places severe limits on popular understanding. The sophistication
and complexity of contemporary technologies preclude direct involvement by all
but those immediately concerned with them. The rest of us are passive consumers,
content to reap the benefits of rationally derived knowledge but woefully ignorant of
it. This creates the fundamental paradox of modern society: technology has generated
massive powers available to human society, while as individuals we exert very little of
that power. We have access to a wide range of powerful technologies, yet our inability
to understand them often leaves us with feelings of impotence and frustration, as
anyone who has experienced a computer crash will attest.27
As has been noted, the application of rationality for the solution of human
problems is both the consequence and the cause of optimism and a willingness to
accept constant change. Yet one cannot help but wonder if these characteristics can
be sustained in an environment that sharply limits participation and inculcates
widespread feelings of having little or no power over the process of technological change.
Strange notions can emerge when feelings of powerlessness are coupled with an
extravagant faith in technology. The consequences of this combination are sometimes
exhibited by fervent believers in alien spacecraft or UFOs (unidentified flying objects).
Although convincing evidence of UFOs is lacking, a belief in their existence does
not necessarily make one a crackpot. In some cases, however, a strident belief in the
existence of UFOs takes on the characteristics of membership in a religious cult where
the deities are superior beings who have produced an advanced technology. Alien
space ships represent a level of technical sophistication not attained on Earth, and
some UFO enthusiasts entertain the hope that the aliens that created them will take
over this planet and solve its problems. Faith in a higher technology may be combined
with a mistrust of the “establishment,” as a fair number of UFO adherents claim that
their government is engaged in a massive conspiracy to prevent the general public
from being aware of the existence of UFOs. There is no denying that on occasion
governments lie to their citizens, but a cover-up of the required magnitude would be
impossible for even the most well-organized government to pull off. Still, conspiracy
theories strike a resonant chord with people who feel that they have been excluded
from decision making, both political and technological. A quasi-religious belief in
UFOs may therefore combine an excessive confidence in technology in general with
a distrust of the people and organizations that control it in actual practice.
Distrust flourishes when people have no ability to participate in decisions that
shape their lives, and the inability to affect the course of technological change
can produce a mixture of naïve hope and paranoid reaction. A realistic sense of
control, including a sense of having some control over technology, is essential for
an individual’s mental health. No less important, widespread participation in the
shaping of technology is essential for democracy. Technology’s benefits cannot be
separated from its costs, and thus it becomes necessary to determine if the former
justify the latter. If a society is truly democratic, such decisions will be made with
as much citizen participation as possible. Moreover, the benefits and costs of
technology are not shared equally, and once again the apportioning of costs and
benefits should be done in as participatory a manner as possible. We will return to
these themes in Chapter 17, but first we will take a closer look at how technology
can affect people and groups in different ways.
Questions for Discussion
1. In your opinion, which recent technology has produced the greatest benefit?
Which has produced the most harm? Are there any harmful elements to the
beneficial technology, and has anything good come from the harmful one?
2. Do all technologies require material artifacts of some sort? Does it make any
sense to speak of bureaucracy as a kind of technology?
3. Are technologies “gendered”? Are some technologies identified with women
and others with men? On what bases do we make these distinctions? Will this
situation necessarily continue in the years to come?
4. Can you think of any technologies that were developed simply because of the
technical challenges involved? How can these “impractical” technologies be
justified?
5. How do you feel when a technological device upon which you depend
malfunctions? What do these feelings tell you about your attitude toward
technology in general?
6. It is sometimes asserted that the development and use of oral contraceptives
were responsible for the sexual revolution that began in the 1960s. Is there a
simple cause-and-effect relationship between the two? Have there been any other
forces that contributed to changing sexual mores?
Notes
1. National Science Foundation, “Science and Engineering Indicators: 2010,” accessed on
January 3, 2012, at http://www.nsf.gov/statistics/seind10/c7/c7i.htm.
2. James D. Carroll, “Participatory Technology,” in Thomas J. Kuehn and Alan L. Porter
(Eds.), Science, Technology, and National Policy (Ithaca, NY: Cornell University Press,
1981), p. 416.
3. This paragraph is derived from Carl Mitcham, Thinking Through Technology: The Path
Between Engineering and Philosophy (Chicago: University of Chicago Press, 1994),
pp. 117–134.
4. Jacob Bronowski, “Technology and Culture in Evolution,” Philosophy of the Social
Sciences 1, 3 (1971): 199.
5. Lewis Mumford, “Technics and the Nature of Man,” Technology and Culture 7,
3 (July 1966): 303–317.
6. Thomas P. Hughes, Networks of Power: Electrification in Western Society, 1880–1930
(Baltimore: Johns Hopkins University Press, 1983).
7. John B. Rae, Climb to Greatness: The American Aircraft Industry, 1920–1960 (Cambridge,
MA: MIT Press, 1968), p. 74; Edward Constant, Origins of the Turbojet Revolution
(Baltimore: Johns Hopkins University Press, 1980).
8. Lewis Mumford, Technics and Civilization (New York: Harcourt, Brace and World,
1934), p. 110.
9. Edwin Kiester, Jr., “A Little Fever Is Good for You,” Science 84 5, 9 (November 1984): 172.
10. Daedalus of New Scientist, “Pure Technology,” Technology Review 72, 7 (June 1970):
38–45.
11. Robert C. Post, “Technological Enthusiasm,” in Rudi Volti (Ed.), The Encyclopedia of
Science, Technology, and Society, vol. 3 (New York: Facts on File, 1999), pp. 999–1001.
12. Quoted in Edward Constant, op. cit., p. 12.
13. Michael Adas, Machines as the Measure of Man: Science, Technology, and Ideologies of
Western Domination (Ithaca and London: Cornell University Press, 1989).
14. Otto Mayr, “The Origins of Feedback Control,” Scientific American 223, 4 (October 1970):
110–118.
15. Otto Mayr, “Adam Smith and the Concept of the Feedback System,” Technology and
Culture 12, 1 (1971).
16. Albert Speer, Inside the Third Reich (New York: Macmillan, 1970), p. 212.
17. This concept is explored by Weber in “Science as a Vocation,” in H. H. Gerth and
C. Wright Mills (Eds.), From Max Weber: Essays in Sociology (New York: Oxford
University Press, 1958), pp. 129–156.
18. Julien Freund, The Sociology of Max Weber (New York: Pantheon, 1968), p. 24.
19. Edmund Burke, Reflections on the Revolution in France (New York: Holt, Rinehart and
Winston, 1959), p. 91.
20. Richard Schlegel, “Why Can Science Lead to a Malevolent Technology?” Centennial
Review 21, 1 (Winter 1977): 14.
21. For a narrative of the historical processes that have led to the perception that technology
is “men’s work,” see Ruth Oldenziel, Making Technology Masculine: Men, Women,
and Machines in America, 1870–1945 (Amsterdam: Amsterdam University Press,
1999).
22. Jennifer Light, “Programming,” in Nina E. Lerman, Ruth Oldenziel, and
Arwen Mohun (Eds.), Gender and Technology: A Reader (Baltimore and London: Johns
Hopkins University Press, 2003).
23. Sherry Turkle, The Second Self: Computers and the Human Spirit (New York: Simon
and Schuster, 1984); Paul N. Edwards, “Industrial Genders: Hard/Soft,” in Gender and
Technology: A Reader.
24. The most influential exploration of this idea is Langdon Winner, Autonomous
Technology: Technics-Out-of-Control as a Theme in Political Thought (Cambridge, MA,
and London: MIT Press, 1977).
25. William F. Ogburn, “Technology and Governmental Change,” in Otis Dudley Duncan
(Ed.), On Culture and Social Change: Selected Papers (Chicago: University of Chicago
Press, 1964), pp. 132–133.
26. Quoted in Ted Howard and Jeremy Rifkin, Who Should Play God? The Artificial Creation
of Life and What It Means for the Future of the Human Race (New York: Dell, 1977), p. 8.
27. N. Bruce Hannay and Robert E. McGinn, “The Anatomy of Modern Technology:
Prolegomenon to an Improved Public Policy for the Social Management of Technology,”
Daedalus 109, 1 (Winter 1980): 30.

Chapter Two

Winners and Losers: The Differential Effects of Technological Change

The last chapter may have seemed a bit negative in its assessment of technology and
the culture that supports it. In one regard, however, there is no denying technology’s
positive consequences: technological advance has been the greatest single source of
economic growth. If our material lives are better than those of our grandparents, it
is largely because technological development has boosted the production of goods
and services. Equally important, it has created entirely new products while at the
same time improving the quality of existing ones.
Curiously, economists were slow to grasp this seemingly obvious fact.
Conventional economic analysis identifies three basic “factors of production”: land
(which includes natural resources), labor, and capital. Any increase in production
is therefore taken to be the result of an increase of these factors. This view began
to change in the 1950s when the historical course of economic development in the
United States was analyzed through the use of sophisticated statistical techniques.
It then became apparent that increases in the traditional factors of production did
not adequately explain the actual record of economic growth. The amount of land
had remained constant, and capital accumulation and increases in the labor force
accounted for only 10 to 20 percent of economic growth during the first half of
the twentieth century.1 Accordingly, the major source of economic growth was a
“residual” factor of overwhelming importance. Most economists agree that
technological advance is the main element of this residual, although organizational
development and improved worker skills, along with economies of scale, are also
key components. Still, as we have already seen, organization and skill are integral
parts of technology, so it is reasonable to consider technological change as the major
source of economic growth.
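
To make the logic of this “residual” concrete, the following is a minimal sketch of the growth-accounting arithmetic, using round illustrative figures rather than the historical estimates cited above. If output grows at rate g_Y, capital at g_K, and labor at g_L, and capital receives a share α of total income, the residual g_A is whatever growth the traditional factors fail to account for:

\[ g_A = g_Y - \alpha\, g_K - (1 - \alpha)\, g_L . \]

With, say, g_Y = 3 percent, g_K = 2 percent, g_L = 1 percent, and α = 1/3, the residual comes to 3 − (1/3)(2) − (2/3)(1) ≈ 1.67 percentage points, more than half of measured growth. Arithmetic of this kind is what led economists to attribute the bulk of growth to technological advance.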
Technology as a Subversive Force
While technological development has been the primary source of economic advance,
it has not been cost-free. One of the most pleasant myths about technology is that
it can work its wonders without altering existing social arrangements. Americans in
particular have often seen technological progress as the surest basis for progress in
general, and have tended to believe that technological solutions to problems are less
painful than solutions that require political or social changes.2 These beliefs are not
easily sustained after an examination of the actual pattern of technological advance.
It is a truism that a particular technology can be used for either good or evil
purposes; a construction team employs explosives to build a road, while a terrorist
uses them for roadside bombs. But there is less appreciation for a more subtle point:
technological change is often a subversive process that results in the modification or
destruction of established social roles, relationships, and values. Even a technology
that is used exclusively for benign purposes will cause disruptions by altering existing
social structures and relationships. Many technological changes are small in
scope, and their effects are felt by only a few people; a few are massive, leading
to vast social restructuring. In either case,
technology does not yield its benefits without exacting a cost.
The disruptive effects of technological change can readily be seen in the
economic realm, where new technologies can lead to the destruction of obsolete
firms, as when the fabled Pony Express rapidly lost its customers after telegraph
wires had been strung across the West. Of course, sometimes the disruption is
less apparent when technological innovation results in the creation of entirely
new industries that are not in direct competition with existing ones. Many new
industries and individual firms owe their existence to the emergence of a new
technology. Witness, for example, the rapid growth of personal computer
manufacturing, peripheral equipment production, software publishing, and
app development that followed the invention of the integrated circuit. Even
so, lurking behind these successes were a number of failures, most notably the
manufacturers of vacuum tubes and transistors, who faced a diminished market
for their products.
Concerns about the disruptive effects of technological change are not new, as
can be seen in an English magazine editor’s fulminations against the first railroads
in 1835: “Railroads, if they succeed, will give an unnatural impetus to society,
destroy all the relations that exist between man and man, overthrow all mercantile
regulations, and create, at the peril of life, all sorts of confusion and distress.”3
Anyone convinced of the virtues of technological change could easily criticize
this reactionary view by noting how the railroad stimulated economic development
and produced many social benefits. Even so, there is more than a grain of truth in
the concerns expressed by the agitated magazine editor. Technological changes, both
major and minor, often lead to a restructuring of power relations, the redistribution
of wealth and income, and an alteration of human relationships.
The experiences of the Yir Yoront, a group of Australian aboriginals, give us
an excellent, albeit sad, example of the disruptive effects of a new technology.4 The
Yir Yoront were a truly paleolithic people whose highest technological achievement
was the stone axe. These axes were simple implements, but a considerable amount
of skill went into their production. Several different materials had to be gathered—
wood for the handle, bark for binding, and gum for fixing the head to the handle.
The stone itself was obtained through an elaborate trading network that involved
only adult males. The actual possession of the axes was also an exclusively male
prerogative. Women and children could only borrow an axe, and even then only
from close relatives. The axe also had an important symbolic value, for it was a
totemic symbol that was used in certain religious ceremonies performed by men
only. Thus, the production and use of the axes reflected and reinforced traditional
social relationships based on age, sex, and kinship.

[Figure: Technological change may contribute to the decline of many established products and organizations. The closure of the Borders bookstore chain was due in part to the growing popularity of online ordering and e-readers. (David L. Ryan/The Boston Globe via Getty Images; Richard B. Levine/Newscom)]
All this changed when steel axes began to be introduced into Yir Yoront society
during the early twentieth century. These axes were dispensed as gifts by
missionaries, and they were given to all “worthy” members of the society, including women,
young men, and even children. As a result, mature men lost an important indicator
of their distinctive status. At the same time, the trading networks between men of
different tribes were bypassed. In their place new trading relationships emerged,
with some men even prostituting their wives in return for the axes. The possession
and distribution of axes no longer symbolized traditional relationships; a certain
kind of freedom was achieved, but at the expense of confusion and insecurity. A
more general malaise spread through the entire tribe, for the steel axes had no clear
links with the religiously based explanations of how the world came to be as it was;
they were alien objects whose origin could not be explained. Symbolically, steel
axes represented a new world that the Yir Yoront could not comprehend. The result
was rapid cultural disintegration and a bewildered and apathetic populace.
To be sure, it wasn’t the axes themselves that produced these disruptions. Steel
axes were part of an outside world that was impinging on the traditional aboriginal
order. Stone axes were an integral part of the indigenous technological system,
while steel axes were alien intrusions that represented both a new technology and
a new pattern of social relationships. For the Yir Yoront, the two were so closely
intertwined that the introduction of a new artifact produced a social and cultural
crisis that could not be surmounted.
Preindustrial people are not the only ones subject to the unpleasant
consequences of technological change. On occasion, technological advance has fatally
disrupted modern communities and the people living in them. One such place was
Caliente, Nevada.5 Caliente was a small town with a variety of civic amenities—
schools, churches, a hospital, a theater, a park, and many prosperous small retail
businesses. Many of its inhabitants were proud members of civic organizations such
as the Chamber of Commerce, the Rotary, the Masons, and the American Legion.
It was a typical American small town, with typical American small-town values.
The life of the town was supported by a single industry: the servicing of steam
locomotives. Caliente was an important division point on a transcontinental railroad,
and many of the town’s people worked as machinists, boilermakers, and repairmen.
Their incomes in turn supported Caliente’s commercial and civic establishments.
Then, in the late 1940s, the diesel-electric locomotive rapidly replaced the steam
locomotive. Diesels had many advantages; they were more fuel-efficient, hauled
longer trains, and did less damage to the rails and roadbed. They also required less
frequent servicing. When servicing was required, it took place in large centralized
shops. As a result, service facilities were eliminated at many division points, and
Caliente was one of them. The town lost its economic base, and within a few years
it had become a shell of its former self. People moved out, homes were abandoned,
and shops were boarded up. The local newspaper sadly noted, “Employees who have
given the best years of their lives to this railroad are cut off without anything to
which they can turn, many of them with homes in which they have taken much
pride; while others, similarly with nice homes, are told to move elsewhere.”6
The tragedy of this small town has been repeated in many other communities
affected by technological change. Many places of employment have closed down as
new products and processes have replaced old ones, leaving communities and their
inhabitants in desperate straits. The technological advances that produced these
dislocations may have benefited society as a whole, but at great cost to the people
who were immediately affected.
Technological changes do not always result in the destruction or modification
of an existing social order; sometimes they may help to preserve it, as happened
when pneumatic molding machines were adopted by the McCormick reaper
manufacturing plant in the 1880s.7 These machines were not installed, as
conventional analysis would lead us to think, in order to reduce costs or to produce
a better product; in fact, they were deficient on both counts. They were installed
for the sole purpose of eliminating the skilled workers who formed the backbone
of the National Union of Iron Molders, an organization that was challenging
the entrenched authority of McCormick’s management. The molding machines
allowed the replacement of skilled workers by unskilled ones, and three years later,
having served their purpose, they were discarded by McCormick’s management.
[Figure: By providing many jobs, the servicing of steam locomotives formed the economic base of towns like Caliente, Nevada. (Jack Delano/Farm Security Administration—Office of War Information Photography Collection [Library of Congress])]
Groups that are threatened by a technological innovation are not always as
helpless as the iron molders apparently were. Many affected parties have been
able to defend themselves against changes in the way of doing things. To take
one example, prefabricated buildings were vigorously resisted by many local
construction workers’ unions because they threatened their members’ jobs. One sad
tale is narrated by Peter Blake:8
Shortly after the end of World War II, an enterprising manufacturer decided to
mass-produce a so-called service core: a complete “package” containing kitchen,
bathroom, and utility room, with all fixtures, pipes, ducts, and wires in place,
ready to be plonked down in any typical suburban house.
The first twenty of these beautifully designed and beautifully made “packages”
arrived on a site near Detroit; local union plumbers and electricians promptly
refused to install them. Finally, after nine months of heated debate (during which
the units, parked on a sidewalk, were exposed to weather and vandalism), the
local unions agreed to handle the “packages”—by disassembling them on the
sidewalk and then reassembling them, piece by piece, in each of the houses. The
manufacturer, needless to say, thereupon went out of business.
Nineteenth-century China provides another example of the efforts of a group of
people defending their interests in the face of a potentially disruptive technological
change.9 For centuries, the Chinese had produced silk thread by manually unwinding
silkworm cocoons. The technology employed, although unsophisticated, was
adequate to serve a substantial domestic and export market. Then, in 1859, a
representative of the British Jardine Matheson Trading Company arrived in Shanghai
with the intention of building a modern factory that would use steam-powered
machinery to reel the silk. The machinery required skilled labor for its operation,
and many problems were encountered in mustering an adequate labor force. This
obstacle was eventually overcome, and the factory enjoyed an adequate measure of
technical success. Unfortunately, it was not an economic success, for the high price
of its basic raw material, silkworm cocoons, was not offset by increased productivity,
and the enterprise suffered chronic losses until it closed down less than 10 years
after its founding. The significant point here is that the factory could not obtain
cocoons at reasonable prices due to the opposition of an entrenched silk-makers’
guild. Accustomed to monopolizing silk manufacture, the guild prevented most
individual cocoon producers from having any dealings with the foreign operation,
while the few who did were able to charge high prices for their wares. As happened
with the disgruntled construction workers, the Chinese guild members effectively
undermined a technology that threatened their established ways of doing things.
The Luddites
There have been many other occasions when individuals and groups have
recognized that certain technological changes were not working to their
advantage. In some cases, their reactions have taken a violent turn. The most
famous of these are the outbreaks of machine-smashing that occurred in early
nineteenth-century England.10 These attacks were the work of different groups
who were collectively known as Luddites, a name that was derived from one
Ned Ludlum, an apprentice stocking maker who, as legend had it, answered his
master’s reprimand by smashing his stocking frames with a hammer. There was
really nothing new about these attacks; the breaking of machines by disgruntled
workers had a long history in England, the earliest recorded episode taking
place in 1663. But the Luddite disturbances that began in 1811 did represent
a substantial increase in the scale of these attacks; by the following year, the
government had to deploy 12,000 troops to restore order to the parts of England
affected by the movement.
Since these attacks coincided with an era of rapid technological change, it is
easy to draw the conclusion that they were motivated by the fear of many
workers that their jobs would be lost to new machinery. The actual story is a bit more
complicated. Luddite attacks occurred in a number of separate branches of the
textile industry, and each was characterized by a distinctive set of motivations and
responses. The Luddite movement began in the hosiery trades, where there long had
been opposition to the use of wider stocking frames that allowed the employment of
poorly paid unskilled labor for the manufacture of an inferior product. The situation
might have been resolved in a peaceful manner had it not been for the dire
conditions encountered by many of England’s working people at the time. The Napoleonic
wars had resulted in the closure of many export markets, leading to a general trade
depression. To make matters worse, a series of bad harvests led to sharp increases in
the cost of food, and many workers found that their wages were insufficient to meet
their basic needs. These conditions produced a fertile ground for the spread of
“collective bargaining by riot,” and Luddite attacks were soon fomented by shearers in
the textile industry. Another occupational group, the handloom weavers, viewed the
advance of steam-powered weaving machinery with understandable apprehension,
and, following the example of workers in the hosiery trade, some of them attacked
the factories housing mechanized looms, as well as the houses of their owners. Only
in a few instances was the machinery itself directly attacked.
Luddite disturbances were expressly oriented toward the prevention of
technological change in the cropping trade. Wool cloth was traditionally finished by
raising the nap and then leveling the surface through the use of a heavy set of
shears. The growing use of the gig mill, a device for raising the nap, along with the
employment of a crude device for the mechanized cropping of cloth, threatened the
livelihood of the traditional hand workers. They responded with some of the most
severe attacks of the Luddite epoch. Although the machinery had been used for
many years in many textile establishments, the severe economic conditions of the
time brought matters to a head. More than the other instances of Luddite revolt,
the attacks on cropping equipment were motivated by a deep fear of unemployment
induced by technological change.
Within a few years the Luddite assaults came to an end due to the deployment
of government troops; the execution, imprisonment, and exile to Australia of a
number of the participants; and the general improvement in living conditions after
the defeat of Napoleon. The succeeding decades of the nineteenth century also
saw the replacement of the small manufacturing establishment by the large factory.
Machine-smashing by riotous crowds was a likely form of labor protest when
workers were scattered and lacking in permanent organizational linkages. In contrast,
the large factory served as a fertile ground for the development of labor unions and
other organizational vehicles for pressing the interests of workers. Industrial
sabotage did not come to an end, but it was generally superseded by unionization and
more effective forms of worker protest.
Neo-Luddism
These early episodes of machine-smashing have led to the application of the
“Luddite” label to anyone opposed to modern technology. But it is perhaps unfair
to impute to the original Luddites a hostility to technology per se. As we have
seen, most instances of Luddism were not motivated by a fear and hatred of new
machinery; their grievances were those of people suffering from the low wages and
unemployment caused by a generally depressed economy. The machines were seen
as convenient targets of their ire rather than the sources of it.
This is not to say that attacks on new technologies are always motivated by
concerns that transcend the technology in question. As the pace of technological
change has quickened and people have become more aware of its consequences,
numerous efforts have been made to prevent or restrict the spread of technologies
that are perceived as threats. For example, computerization in its initial stage posed
a threat to many established occupational roles and procedures, resulting in a fair
amount of resistance to computer installation and use. In one case that received a
good deal of national publicity during the mid-1970s, newspaper linotype operators
in Washington, D.C., demonstrated their opposition to computerized typesetting
equipment by engaging in large-scale industrial sabotage.
Another striking expression of Luddite sentiments appeared in 1995 when The
New York Times and the Washington Post published a lengthy critique of modern
society and the pivotal role of technology in creating and maintaining it. According
to its author, a society based on modern technology brings some material comforts,
but “all these technical advances taken together have created a world in which the
average man’s fate is no longer in his own hands or in the hands of his neighbors and
friends, but in those of politicians, corporation executives and remote, anonymous
technicians and bureaucrats whom he as an individual has no power to influence.”11
Regaining human freedom therefore required the total destruction of industrial
society and the technologies that made it possible. This would not be a peaceful
revolution, but one that required the destruction of factories, the burning of technical
books, and the eradication of all of the components of an industrial civilization. This
creed might have been dismissed as the agitated musings of a late twentieth-century
Luddite, but its author was not just a misguided critic of the modern world. Shortly
after its publication, the manifesto was revealed to be the work of Theodore
Kaczynski, dubbed by the media “the Unabomber,” an elusive figure who from
1978 to 1995 had been responsible for 16 bombings that killed three people and
wounded 23 others.
Whose Technology?
We have just seen how specific technologies have been used and resisted by
particular groups in accordance with their own needs and concerns. These examples
should help us to realize that technology does not proceed solely through its own
momentum, as implied by technological determinism; its development is strongly
influenced by existing social and political arrangements. Technological changes
may take place because they advance the interests of a particular group. Conversely,
some technologies may meet with stiff resistance because they threaten a group’s
interests. Technologies do not stand or fall solely on their intrinsic merits. The
decision to develop and deploy a new technology is often shaped by the distribution
of power in a society.
Social and political arrangements affect the course of technological change by
influencing the kinds of investments that are made, the research projects that are
funded, and the general priorities that are established.12 Large organizations, such
as corporations and government agencies, often wield disproportionate influence
over the process of technological change. As we will see in Chapter 17, the federal
government is a major source of financial support for research and development, with
the Department of Defense, the National Aeronautics and Space Administration
(NASA), and the Department of Energy (primarily for nuclear research and
development) accounting for a large share of these expenditures. Although we can
only speculate about alternative outcomes, it seems likely that American technology
would have diverged markedly from its historic path if financial resources had been
distributed differently.
Perhaps with a different set of sponsors, technological development might have
made greater contributions to the solution of a number of pressing social problems,
such as poverty and crime. At the same time, however, it can be argued that certain
kinds of problems are simply not amenable to technological solutions. Even with
significant changes in the funding of research, technological solutions to many
social problems will not be forthcoming. This is an important objection, and we will
examine it in the next section.
What Technology Can Do—And What It Cannot Do
The growth of technology has brought dazzling changes to our lives. At the same
time, we seem to be mired in problems that appear to have no solution. The
continued existence of these problems is all the more frustrating when contrasted
with the rapid progress of technology. For example, we can use all kinds of
sophisticated medical equipment and techniques to preserve the lives of sickly infants who
have been born many weeks premature, but we can’t seem to conquer the poverty
that often results in sick infants. Why, it is often asked, is there such a gulf between
technological progress and social progress? Why can’t technology be applied as a
solution for more, if not all, of our problems? If we can put a man on the moon,
why can’t we. . . ?
The Technological Fix
These are troubling paradoxes, and in recent years we have searched for ways of
finding technological solutions to a host of problems. The drug methadone has
been widely used to eliminate addicts’ cravings for heroin. As highway accidents
continue to result in tens of thousands of deaths and hundreds of thousands of
injuries each year, efforts have been mounted to develop and manufacture cars capable
of protecting their occupants from the consequences of incompetent driving. Cities
befouled by graffiti have turned to the use of new paints and cleaning solutions that
resist the endeavors of spray-can artists. Overweight men and women spend billions
of dollars annually on medications, diet books, and exercise apparatus in the hope
of shedding excess pounds.
The list of technologies that have been or could be applied to the alleviation of
social problems is an extensive one, and examples could be supplied almost
indefinitely. What they have in common is that they are “technological fixes,” for they
seek to use the power of technology in order to solve problems that are nontechnical
in nature. In this section we will briefly examine a few of these technologies
and consider the extent to which technology can alleviate these pressing problems.
One study of a number of technologies directed at the solution of social
problems bears the significant title “Technological ‘Shortcuts’ to Social Change.”13 The
authors examined a number of case studies, ranging from instructional television
to intrauterine devices for birth control. As might be expected, the application of
different technologies for the solution of social problems resulted in varying degrees
of success, but a few generalizations can be made about the efficacy of technological
solutions to social problems.
First, even if a technology “works” by producing the desired result, the actual
mechanisms through which the technology produces a change are often poorly
understood. This is particularly evident when the technology is used in conjunction
with other interventions, such as the coupling of methadone maintenance
with individual counseling. Technological shortcuts also produce uneven results;
they work when applied to some segments of the targeted population but do nothing
for the rest. Above all, technological solutions only eliminate the surface
manifestations of the problem and do not get at its roots. A methadone program does
not address the social and psychological causes of drug addiction, and improved
methods of removing graffiti do nothing to mitigate the anger and alienation
that may motivate the defacement of public spaces. These criticisms aside,
technological shortcuts may be effective in alleviating a range of problems, and even
though these problems may not be eliminated, their alleviation may at least come
at a lower price than would be the case if nontechnological efforts at solutions
were employed.
Many other technological fixes have been employed over time, although
not always with the conscious understanding that technology was being used in
lieu of some other method of achieving a desired end. To take one example, at
the beginning of the twentieth century the United States was undergoing severe
growing pains; the urban population was expanding at a rapid rate, accompanied
by congestion, pollution, and a host of other urban ills. In a nation steeped in the
Jeffersonian belief that cities were inherently evil and that the countryside was the
best location for virtuous living, the conversion of the American populace into a
race of unhealthy and disaffected city dwellers was viewed with alarm. A number of
technologies did make urban life more tolerable, most notably those concerned with
public health and sanitation, but these only served to ameliorate living conditions
without addressing the real issue: the desire of many Americans to escape the city
and return to a vaguely perceived rural idyll.
The pursuit of this goal gave a great impetus to the development of
transportation technologies that would allow the solution of urban problems by eliminating
the need for cities, at least as places of residence. Instead of comprehensively
addressing urban ills through planning and the development of social programs,
Americans pinned their hopes on new transportation technologies. The first of
these was the electric trolley. Through the construction of extensive networks of
interurban electric lines, it was hoped, America’s urban problems could be literally
left behind as a new generation of workers could commute from their places of work
to their rural or suburban homes.14
In many American cities the trolley was displaced by the automobile, yet a great
deal of automobile ownership was motivated by similar sentiments. Widespread
automobile ownership promised an escape from the harsh realities of America’s cit-
ies through individual commuting. As Henry Ford neatly summed things up, “We
shall solve the city problem by leaving the city.”15 Ford’s sentiments were taken to
rhapsodical levels by one early twentieth-century journalist:16
Imagine a healthier race of workingmen, toiling in cheerful and sanitary factories,
with mechanical skill and tradecraft developed to the highest, as the machinery
grows more delicate and perfect, who, in late afternoon, glide away in their own
comfortable vehicles to their little farms or houses in the country or by the sea
twenty or thirty miles distant! They will be healthier, happier, more intelligent
and self-respecting citizens because of the chance to live among the meadows and
flowers of the country instead of in crowded city streets.
It is hardly necessary to note that these hopes were not realized. The mushrooming
growth of suburbs spawned by trolleys and automobiles did not create a
harmonious social order based on rural values. All too often the legacy has been suburban
sprawl, the deterioration of city centers, visual blight, pollution, traffic fatalities,
and many other social costs. This is not to say that the automobile has been an
unmixed curse; the benefits of personal mobility, privacy, and a sense of power have
been too eagerly accepted to allow such a judgment. But the automobile, just like its
predecessor the trolley, was hardly the technological panacea that was envisioned.
The examples of the trolley and the automobile remind us that while some specific
problems may be amenable to technological solutions, larger issues rarely admit of
easy solutions through the application of technological fixes.
Why Technology Can’t Always Fix It
The main difficulty underlying the use of technology to solve social problems is that
these problems are fundamentally different from technical problems. In the first
place, social and technical problems differ in their specificity. If you intend to design
an air conditioner, you at least know what your goal is: to keep a space cool. In many
ways this problem is similar to the far more grandiose objective of landing a man
on the moon; although there may be daunting technical problems to overcome, at
least the goal is clear and unambiguous. But what if your goal is to reduce crime?
Crime, unlike air temperature, is a very diffuse concept, encompassing everything
from forgery to murder. Even when a particular crime is singled out for treatment,
its causes are likely to be manifold and not easily addressed by a single technology.
To make matters even more difficult, social problems are directly concerned
with human motivations and behaviors. It is one thing to change the temperature
of the air by inventing and installing an air conditioning system; it is quite another
to attempt to change human behavior through the same kind of technological
intervention. Human beings are wondrously intricate creatures whose actions are
governed by extremely complex motivations. Trying to understand, let alone change,
human actions is an exceedingly difficult task. And humans are likely to resist when
attempts are made to change their behavior.
[Figure: The trolley held out the promise of an escape from the noise, dirt, and congestion of the early twentieth-century city. (The Chicago Historical Society)]

It is also apparent that technological solutions work best when they operate
within closed systems—that is, when the issue to be addressed is sealed off from
outside influences. Of course, no technology exists in isolation from the surrounding
society. A transportation system based on private automobiles, for example, is
the result of choices exercised within the economic and political realm, such as a
government’s decision to build a highway network. But within a given technology
there are many specific matters that can be treated as purely technical problems.
In these cases, it is possible to approach the problem directly and not worry about
the influence of other factors. If your car fails to start one morning, you can be
sure that the problem lies only with its components; you need not concern yourself
with sunspot activity or a recent presidential election in Peru. When a problem is
not so easily isolated, a technological solution is much less likely. Today, millions
of children are diagnosed with attention deficit hyperactivity disorder (ADHD).
This behavioral problem undoubtedly has a neurological basis, at least for some
children, and stimulant medications such as Ritalin are routinely prescribed to alleviate
the symptoms of ADHD. It is likely, however, that many children afflicted with the
disorder have problems that go beyond the neurological. Dysfunctional
relationships and actions within a family can create stresses that produce ADHD. Under
these circumstances, the administration of a drug will be insufficient. As the ADHD
website of the National Institute of Mental Health notes, “Sometimes, the whole
family may need therapy.”17
As a final point, it should be noted that no problem, technical or otherwise,
is ever really “solved.” Not only are most solutions incomplete, they also generate
new (and sometimes very different) problems. These “residue problems” may be
considerably more intractable than the original problem.18 This process has been
dramatically illustrated by the rapid development of modern medical technologies,
a topic that will be explored in greater depth in Chapter 7. Technical solutions such
as the development of life-saving drugs, organ transplants, and sophisticated
diagnostic techniques have proliferated, but at the same time they have created a host
of new dilemmas. Given the expense of many of these new technologies, it may be
necessary either to spend more on medical care or to attempt to ration it. If these
technologies are to be rationed, will this take place through the price mechanism, or
will it be done according to some formalized procedure? In either case, serious
ethical issues will have to be faced. Life-extending technologies have also raised vexing
questions about the morality of prolonging a life under conditions that seem dismal
indeed. Moreover, a longer individual life span leads to an aging population and the
necessity for a wide range of adjustments to the society, the economy, and even the
culture. Without belaboring the point, it should be apparent that no set of technolo-
gies will make our lives better without requiring the enactment of other changes.
The Appeal of Technocracy
These inherent limitations have not deterred a number of individuals and groups
from trying to convert social problems into technical problems. There have been
numerous flirtations with technocracy—the governance of society by engineers
and other people with technical expertise, who attempt to develop policies based
on technical and “scientific” principles. There is no denying that the technocratic
vision is at first glance an appealing one. In a world too often governed by venal
and incompetent politicians, there is something very attractive about a system of
governance that supposedly bases itself on logic and the use of expertise. Moreover,
where conventional political systems of all types seem endlessly involved with
apportioning pieces of a small pie, adherents of some form of technocracy often
promise a social and economic order that produces an ever-expanding pie through
the application of the methods that have served technological development so well.
The promises and pitfalls of a technocratic approach to the solution of social
problems are well illustrated by the theories of Scientific Management, as developed
by Frederick W. Taylor (1856–1915) and his followers during the early decades of
the twentieth century.19 Scientific Management arose in an era marked by a
profound paradox: industrial production was increasing at a rapid pace, but at the same
time American society was racked by large-scale and potentially explosive conflicts
between workers and management. Many cures for labor unrest had been proposed,
but for Taylor all of them missed the mark. Taylor had earned an international
reputation as a metallurgical engineer, and his systematic studies on the cutting tools used
for machining metal had resulted in major technological advances. If obdurate metals
could be better controlled and shaped through the application of new technologies
guided by scientific principles, why couldn’t the same thing be done with workers?
To achieve this goal, Taylor and his colleagues developed a “scientific” regimen
for studying work. The main technique used for this task was the time-and-motion
study through which workers were systematically observed and their work motions
precisely timed. Through an analysis of these observations and measurements Taylor
came up with a supposedly optimum set of motions for a given job, all of them
subject to rigid time constraints. Equally important, the development and
administration of these motions were the business of management exclusively, and any
attempt by workers to go about their tasks independently would necessarily result
in wasted motions and general inefficiency. A basic tenet of Scientific Management
was that the planning and organization of work had to be separated from its actual
execution. Only specially trained managers had the time and expertise necessary for
the devising of optimal methods of production. The prime obligation of the workers
was to do what they were told to do.20
Although they had no power to plan and manage their own work, workers were
supposed to benefit from the system. Because their work activities were now optimized,
production would supposedly increase significantly. Workers would necessarily share
in these higher returns, for Taylor also advocated that workers be paid according to
piece rates rather than straight wages; the more they produced, the more they earned.
The technocratic spirit of Scientific Management is thus evident: the tasks
and prerogatives of management rested not upon the exercise of raw power but
on management’s technical superiority in guiding the production process. At the
same time, Scientific Management promised relief from continual squabbling
over relative shares of the fruits of production; an optimal system of organization
would result in more of everything for everybody. Taylor was not content with
using Scientific Management as a solution for the problems of the workplace; its
principles, he claimed, “can be applied with equal force to all social activities: to
the management of our homes; the management of our farms; the management of
the business of our tradesmen large and small; of our churches, our philanthropic
organizations, our universities; and our governmental departments.”21

Frederick Taylor believed that all kinds of work could be reduced to rationally derived actions, much as machining operations could be precisely timed through the use of a worksheet. (© 1911 by Frederick Winslow Taylor in Shop Management)
The appeal of Scientific Management was not confined to the United States,
or even to the capitalist world. No less a figure than Vladimir Lenin, the leader
of the Bolshevik Revolution in Russia, expressed a deep admiration for American
technology and American forms of industrial organization, and for Taylor’s ideas
in particular. Although he duly noted that Scientific Management embodied “the
refined cruelty of bourgeois exploitation,” Lenin made it clear that its basic prin-
ciples and procedures could contribute to the realization of Soviet economic goals:
“The possibility of building Socialism will be determined precisely by our success
in combining Soviet government and the Soviet organization of administration
with the modern achievements of capitalism. We must organize in Russia the study
and teaching of the Taylor System and systematically try it out and adapt it to our
purposes.”22
The Technocrat’s Delusion
Although some of its elements, such as the use of time-and-motion studies, can still
be found in contemporary managerial practices, Scientific Management in its pure
form never took hold in the United States, the Soviet Union, or anywhere else. A
number of technical problems impeded its use. Considerable skill was required for
the administration of time-and-motion studies, and they were especially difficult to
conduct in work settings not characterized by repetitious actions. But of equal or
greater importance, both management and labor realized that the implementation
of Taylor’s system posed fundamental threats to their own interests. Most manag-
ers were highly reluctant to delegate their authority to the dictates of “scientific”
procedures.23 Workers, on the other hand, resented the loss of what little autonomy
they had, and they widely believed—with considerable justification—that higher
levels of productivity would result in the downward adjustment of piece rates, leav-
ing them no better off than before the program had been enacted.
Scientific Management, like all technocratically inspired systems, ignored
the distinction between technical and sociopolitical problems. Even if Scientific
Management had generated the productive increases it promised—which is
unlikely—it would still have been strongly resisted by those who had to submit to
it. Scientific Management promised a conflict-free method of administration where
no such thing was possible. Workers and managers had their separate interests, and
each group was unwilling to entrust its fate to Taylor and his disciples.
The basic fallacy of Scientific Management, one shared by all other variants
of technocracy, is that administration can replace politics. Administration is based
on the application of rules that allow the realization of given ends. It is thus a
manifestation of the rational spirit of applying the best means for the achievement
of a particular goal. It does not, however, determine these ends. The Internal
Revenue Service officials who administer the tax system are not the authors of the
tax code. Around April 15 we may get angry about the perceived unfairness of the
tax code, but it is pointless to blame the officials at the local IRS office.
Tax codes and other policies are formulated through choices made in the politi-
cal arena. Neither technology nor administration can supply the values that form
the basis of these choices. They cannot tell us what we should do with our lives, nor
can they help us to resolve the fundamental issue that all societies confront: how
to distribute fairly life’s necessities and luxuries. The resolution of these issues will
always be marked by sizeable differences of opinion and a good deal of conflict. The
technocrat’s hope that society can be run on the basis of engineering principles will
always remain an illusion.
To summarize, technological changes inevitably produce social changes. These
changes, in turn, do not affect everyone equally. Although many technologies
produce widespread benefits, not everyone benefits to the same degree, and there
are instances where particular individuals and groups lose out completely. A choice
of technology is often a determination of who wins and who loses; it is therefore
proper that affected parties have the opportunity to participate in the process. This
issue will be taken up in greater depth in the last three chapters. At this point it
can at least be hoped that without deflating the very real achievements of technol-
ogy, some sense of its inherent limitations has been conveyed. Technology and
the procedures underlying its development have been immensely powerful in their
own realm; outside this realm, however, they are less likely to be effective. Equally
important, the methods that have been so successful in developing and applying
new technologies cannot be transferred to the governance of society. Technological
development may make some aspects of our lives better, but it can never substitute
for a just and effective political and social system.
Questions for Discussion
1. Technological advance has often undermined established businesses. Most
recently, the growth of Internet-based e-commerce has posed a threat to
conventional bricks-and-mortar retail firms. Can you think of other business
enterprises that the Internet may damage or even destroy? Should anything be
done to prevent this from happening?
2. The story of the distribution of steel axes to the Yir Yoront seems to be a prime
example of technological determinism. Is it possible that the story is not as
straightforward as presented in this chapter? Might there have been any non-
technological changes that contributed to the deterioration of Yir Yoront society?
3. Were the Luddites justified in mounting their attacks on machinery? How else
might they have expressed their grievances? Would other kinds of actions have
been more successful?
4. What examples of technological “fixes” can you think of? Have they been suc-
cessful or not? What are your criteria for judging success and failure?
5. Political leaders at home and abroad are occasionally described as “techno-
crats.” What are the implications of this description? Would you be more or
less likely to vote for somebody who was described in this way?
Notes
1. Moses Abramovitz, “Resources and Output Trends in the United States Since 1870,”
American Economic Review, Papers and Proceedings 46 (May 1956): 5–23;
John W. Kendrick, “Productivity Trends, Capital and Labor,” Review of Economics and
Statistics 38 (August 1956): 248–257; R. M. Solow, “Technical Change and the Aggregate
Production Function,” Review of Economics and Statistics 39 (August 1957): 312–320.
2. See Howard P. Segal, Technological Utopianism in American Culture 1830–1940 (Chicago:
University of Chicago Press, 1985).
3. Quoted in Herbert J. Muller, “Human Values and Modern Technology,” in Edwin T.
Layton, Jr. (Ed.), Technology and Social Change in America (New York: Harper & Row,
1973), p. 159.
4. Lauriston Sharp, “Steel Axes for Stone Age Australians,” in Edward H. Spicer (Ed.),
Human Problems in Technological Change: A Casebook (New York: John Wiley & Sons,
1967).
5. W. F. Cottrell, “Death by Dieselization: A Case Study in the Reaction to Technological
Change,” American Sociological Review 16 (June 1951): 358–365.
6. Ibid., p. 362.
7. Langdon Winner, “Do Artifacts Have Politics?” Daedalus 109, 1 (Winter 1980): 123–125.
8. Peter Blake, Form Follows Fiasco: Why Modern Architecture Hasn’t Worked (Boston:
Little, Brown, 1974), p. 59.
9. Shannon R. Brown, “The Ewo Filature: A Study in the Transfer of Technology to China
in the Nineteenth Century,” Technology and Culture 20, 3 (July 1979).
10. George Rudé, The Crowd in History: A Study of Popular Disturbances in France and
England, 1730–1848 (New York: John Wiley & Sons, 1965), pp. 66–92; Malcolm
I. Thomis, The Luddites: Machine-Breaking in Regency England (New York: Schocken
Books, 1972).
11. Paragraph 128 of “The Unabomber Manifesto.” This document is available on numer-
ous Internet sites.
12. Reinhard Rürup, “Reflections on the Development and Current Problems of the
History of Technology,” Technology and Culture 15, 2 (April 1974): 165.
13. Amitai Etzioni and Richard Remp, “Technological ‘Shortcuts’ to Social Change,”
Science 175, 4017 (7 January 1972): 31–38.
14. James C. Williams, “The Trolley: Technology and Values in Retrospect,” San Jose
Studies 3, 3 (November 1977): 74–90.
15. James J. Flink, The Car Culture (Cambridge, MA: The MIT Press, 1975), p. 39.
16. William F. Dix, “The Automobile as a Vacation Agent,” Independent 56 (2 June 1904):
1259–1260, quoted in Ibid., pp. 39–40.
17. National Institute of Mental Health, “Attention Deficit Hyperactivity Disorder
(ADHD)” accessed on January 4, 2012, at http://www.nimh.nih.gov/health/publications
/attention-deficit-hyperactivity-disorder/complete-index.shtml.
18. See Kan Chen et al., Growth Policy: Population, Environment, and Beyond (Ann Arbor:
University of Michigan Press, 1973), pp. 105–112.
19. For a recent biography of Taylor, see Robert Kanigel, The One Best Way: Frederick
Winslow Taylor and the Enigma of Efficiency (New York: Viking, 1997).
20. Harry Braverman, Labor and Monopoly Capital: The Degradation of Work in the Twentieth
Century (New York: Monthly Review Press, 1974), pp. 85–138.
21. Quoted in Samuel Florman, The Existential Pleasures of Engineering (New York:
St. Martin’s Press, 1976), p. 8.
22. V. I. Lenin, “The Immediate Tasks of the Soviet Government,” Izvestia, 28 April 1918,
translated in V. I. Lenin, Selected Works, vol. 2 (Moscow: Foreign Languages Publishing
House, 1947), p. 327.
23. Daniel Nelson, Managers and Workers: Origins of the New Factory System in the United States,
1880–1920 (Madison: University of Wisconsin Press, 1975), pp. 75–76.
Part Two
The Process of Technological Change
Much of the research, writing, and thinking about the relationship of technol-
ogy and society centers on how particular technologies have affected some aspect
of the latter. But how do new technologies come into being in the first place?
Chapter 3 describes how social processes are at the core of technological innova-
tion and examines two modes of technological change in that light—epochal
transformations and less dramatic incremental ones. It describes how economics,
politics, culture, and social arrangements have influenced the trajectory of techno-
logical change. A market-based economy is an important stimulus for technological
innovation, but nonmarket forces—ranging from human curiosity to government
institutions—are also stressed.
Scientific discoveries are often seen as the major source of technological
advance, but Chapter 4 argues that this is at best a half-truth. Although science
and technology have much in common, they do not interact in a straightforward
manner. When they do interact, the commonly accepted relationship between sci-
entific and technological advance may be inverted as technological advances propel
advances in science.
Chapter 5 shifts the focus from the creation of technologies to the ways in
which they spread, or diffuse. It pays particular attention to the pitfalls of transfer-
ring a technology from one social and cultural environment to another and from
one business firm to another. The patent system can be both a stimulus and an
impediment to the diffusion of new technologies; its role in technological advance
is examined in the final section.
Chapter Three
The Sources of Technological Change
What accounts for the emergence of particular technologies? Why do they appear
when they do? What sort of forces generate them? How is the choice of technology
exercised? To put it more concretely, why were digital computers developed only
during the second half of the twentieth century even though their basic principles
were understood more than a hundred years earlier? Why did photography undergo
rapid development during the nineteenth century? What were the inventors of
radio trying to accomplish, and how did their intentions differ from those of subse-
quent developers? These are some of the questions that this chapter will address as
it considers some of the most basic issues in the study of technology.
Technological Change as a Social Process
As a starting point, it is important to keep in mind that technological change does
not take place in a social vacuum. Technology is a human creation, and because
humans are social creatures, technological change is necessarily a social process.
In recent years, the study of technological change has been strongly influenced by
a perspective known as “social constructivism,” which we looked at in Chapter 1.
According to this approach, technological change does not occur because new
devices and processes demonstrate their clear-cut superiority over other ways of
doing things. For social constructivists, the analysis has to begin with the need
to explain why certain technologies are assumed to work better than others.1 As
Wiebe E. Bijker has noted, social constructivism is predicated on a belief in “the
malleability of technology, the possibility for choice, the basic insight that things
could have been otherwise.”2 To explain why things turned out the way they did,
social constructivists describe how social structures and processes have affected
choices of technologies. Since the presence of interest groups and unequal distribu-
tions of power are fundamental aspects of every society, social constructivists are
particularly interested in delineating the main actors involved in the development
and selection of particular technologies, and in noting how their actions reflect
their positions in society. Accordingly, for scholarly practitioners of social construc-
tivism, technological change is an inherently political process. New technologies
do not succeed or fail solely on the basis of narrow technical merits. Rather, the
achievement of technological “closure” (the point at which a particular technol-
ogy is recognized as the accepted way of doing things, while others disappear or are
marginalized) is closely tied to the presence of specific interest groups and their
ability to affect the selection process.
Some form of social constructivism informs most contemporary studies of tech-
nological change, but considerable variation can be found in the relative emphasis
put on social versus technical factors. Moreover, some students of technological
change, most notably Thomas P. Hughes, have argued that the strength of the social
constructivist approach may depend on the developmental stage of a particular
technology. According to Hughes, social constructivism is most valid when a tech-
nology is at an early stage of development. Social, political, and economic forces
are likely to exert the greatest influence when several alternative technologies
emerge at about the same time.3 Conversely, once a technology has become well
established, it becomes difficult to deviate from the path that has been laid out by
technical requirements. For example, at the end of the nineteenth and beginning of
the twentieth centuries, motorists could choose between cars with electric, steam,
or internal combustion power plants. The triumph of the latter was not simply a
matter of technical superiority, but was a reflection of the needs and expectations of
the individuals who were the prime purchasers of automobiles.4 Once automobiles
powered by internal combustion engines became well entrenched, the adoption
of another type of automobile engine became extraordinarily difficult because the
basic elements of our personal transportation infrastructure were firmly in place,
everything from the fuel used to the skills necessary for effective repair work. Under
these circumstances, technical requirements will prevail.
The Great Breakthrough
One benefit of the social constructivist approach is that it challenges the belief that
technological change largely stems from the insights and labors of a few supremely
talented individuals. Popular histories of technology have often looked to individual
genius as the chief source of technological advance; we have all heard or read sto-
ries of how the inspired labors of Thomas Edison or the Wright brothers produced
epochal inventions that transformed the world. Histories written from this point of
view are in essence biographies of great inventors whose brilliance is assumed to be
the sole source of technological advance. Other histories of technological advance
have remained within this framework but have looked to a different kind of genius
as the source of advance. The key players here are not the inventors of new tech-
nologies but rather the entrepreneurs who make inventions into commercial suc-
cesses by taking risks, moving into uncharted territory, and in general doing what
hadn’t been done before.5 There have been some individuals who have been both
inventor and successful entrepreneur—for example, Edwin Land, the inventor and
moving force behind the Polaroid camera—but they have been rare. Entrepreneurs
generally take other people’s inventions and make them into commercial successes.
From this perspective, the key figure in the development of the steel industry is not
Henry Bessemer, the co-inventor of the iron-refining furnace that bears his name,
but Andrew Carnegie, who laid the commercial and organizational foundations of
the industry.
Arriving at a definitive determination of the relative importance of “great men
and women” versus “social processes” in shaping the history of technology would
require much more space than is available here. It can be noted, however, that an
assessment of the relative contributions of the two has to take into account the fact
that a great deal of technological change is the result of small, incremental changes.
In contrast to the “heroic” approach to the history of technology, these involve
the work of largely anonymous inventors, engineers, mechanics, and technicians.
Although their individual contributions may seem modest, in aggregate they have
been an extremely important source of technological advance.
These incremental changes often are the result of a learning process that occurs
as a technology is used. Problems are identified and overcome, bugs are worked out,
and improvements are made. In many cases, the cumulative results of these efforts
are technological advances at least as important as those that stem from funda-
mental breakthroughs. In industries as different as petroleum refining and building
construction, the major source of productivity improvements has been a multitude
of small technological improvements that have resulted in large cumulative gains.6
This process is nicely illustrated by Louis Hunter’s narrative of how the impres-
sive development of nineteenth-century steamboats was the result7
of plodding progress in which invention in the formal sense counted far less than a
multitude of minor improvements, adjustments and adaptations. The story of the
evolution of steamboat machinery in the end resolves itself in a large part into such
seemingly small matters as, for instance, machining a shaft to hundredths instead
of sixteenths of an inch, or devising a cylinder packing which would increase the
effective pressure a few pounds, or altering the design of a boiler so that cleaning
could be accomplished in three hours instead of six and would be necessary
only every other instead of every trip. Matters such as these do not get into the
historical record, yet they are the stuff of which mechanical progress is made.
One can also witness the far-reaching consequences of numerous small improve-
ments in the development of railroad technology. According to one calcula-
tion, if the traffic loads borne in 1910 had been carried by railroads employing
the technologies of 1870, the additional costs in 1910 would have amounted to
$1.3 billion. Fortunately, by 1910 American railroads had benefited
from improvements in the size of cars and the power of locomotives, which in turn
were the result of steady evolution.8 And so it goes today. Even though the railroad
may be described as a “mature” industry, this pattern of incremental yet substan-
tial technological development continues. American railroads have significantly
lowered their costs through the implementation of a series of small improvements:
better insulation for electrical components (thereby allowing higher power loads),
improved turbochargers and fuel injection systems, higher compression ratios, more
efficient motors in cooling radiators, two-speed cooling fans, redesigned air ducts,
lower idling speeds, and the substitution of alternators for direct-current generators.
By themselves, none of these innovations is terribly significant. But when they are
all put together in a modern locomotive, the result is a 10 percent savings in fuel
costs, and a gain of 24 percent in the ton-miles of freight carried per gallon of fuel
consumed. When it is noted that railroads spend several billion dollars each year for
fuel, the consequences of these improvements are all the more impressive.9
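A rough reading of these figures may be helpful (the annual fuel budget below is an illustrative assumption, not a number from the study cited). A 24 percent gain in ton-miles per gallon means that the fuel burned per ton-mile falls to

\[ \frac{1}{1.24} \approx 0.81, \]

a reduction of roughly 19 percent; and if annual fuel spending were, say, $3 billion, a 10 percent saving in fuel costs would come to

\[ 0.10 \times \$3\ \text{billion} = \$300\ \text{million per year}, \]

all of it from improvements that are individually trivial.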
In a world where the ability to produce successful technological innovations
is increasingly a requirement for a firm’s success, if not its survival, each firm needs
to tap every source of technological advance, no matter how modest it may seem.
This is a point stressed by Anthony Athos and Richard Pascale in their book
about American and Japanese management: “Careful scrutiny reveals that despite
the exalted status of ‘strategy’ in the lexicon of American management, few great
successes stem from one bold-stroke strategic thrust. More often, they result from
one half-good idea that is improved upon incrementally. These improvements are
invariably the result of a lot of ‘little people’ paying attention to the product, the
customer, and the marketplace.”10
The “D” in R&D
The process of making a technology work is often summarized by the abbreviation
R&D, which stands for research and development. “Research” calls to mind images
of cutting-edge work in well-equipped laboratories, where great breakthroughs
produce dramatically new technologies. Research can be an exciting, even glamor-
ous activity, and we naturally look to it as the basis of technological progress. It is
the source of fundamental change in technology, like the invention of integrated
circuits, cloning, and composite materials. Still, this sort of research rarely results
in useable products. The realization of the potentialities created by research break-
throughs usually requires a lengthy process of development. Numerous problems
have to be resolved, and, equally important, the new material or device has to be
put into a form that allows it to be produced at a reasonable cost.
Here again we can see the importance of the slow, unspectacular improve-
ments that turn a good idea into a working product or process. And here, too, we
can often see a substantial outpouring of money. If basic research is an expensive
process, development is often even more so. The development work that goes into
preparing a new technology for actual production can entail massive expenditures
for equipment, material, manpower, pilot plants, and the like.
A great deal of development work is oriented toward “scaling up”—that is,
making the transition from a successful research result to large-scale production. It
is one thing to invent a device or process that works under laboratory conditions,
and quite another to produce it in an industrial setting where commercial success
is the goal. The development of penicillin provides an excellent illustration of the
many facets of the scaling-up process.11 Although the discovery of the bacteria-
killing properties of penicillin initiated a major technological breakthrough, the
development phase was certainly no less important.
Penicillin, a fermentation product of the mold Penicillium notatum, was discov-
ered in 1928 by Alexander Fleming, who observed that bacteria were destroyed in a
culture that had been accidentally contaminated by the mold. Penicillium cultures
grown in small quantities were the basis of laboratory and clinical research, but
this process could not yield the large quantities of the drug needed for widespread
therapeutic use. Large-scale production of penicillin was eventually done in huge
fermentation vats, a process that required the solution of many technical prob-
lems. The key step was the development of a submerged fermentation process that
allowed the mold to be grown directly in the nutrient medium. The success of this
process in turn required a number of other improvements, such as new tank designs
with special cooling systems and turbine mixers, which also had to be developed.
The use of corn steep liquor (a by-product of corn starch production) as a culture
medium increased yields tenfold, but it created a new problem. The Penicillium mold
requires air in order to grow, but severe foaming occurred when the culture was aerated.
Anti-foaming products therefore had to be developed to alleviate this problem.
The extraction of penicillin from the moldy brew in the vats also created problems
that were ultimately solved by the use of freeze drying, which was itself scaled up
from a process first used to preserve blood plasma. As a result of all of this develop-
ment work, production of penicillin had reached 650 billion units per month by the end of
World War II, and the cost of a dose was 55 cents instead of the $20 it had been
three years earlier.
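Simple arithmetic on the figures just cited underscores how much of penicillin’s affordability came from development rather than from the original discovery:

\[ \frac{\$20}{\$0.55} \approx 36, \]

a roughly thirty-six-fold reduction in the cost of a dose over three years of scaling-up work.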
All Together Now
The first chapter noted the importance of thinking of technologies as systems. This
point is reinforced by a consideration of how particular technologies develop. Quite
often, a technological leap forward takes place because of the availability of comple-
mentary technological developments that allow the resolution of fundamental prob-
lems. The history of the computer provides a good example of how complementary
changes are essential for the translation of an idea into a workable technology. Back
in the 1820s Charles Babbage began to develop an “analytical engine” that con-
tained a set of input devices, a processor, a control unit, a memory storage, and an
output mechanism—the essential elements of today’s computers. But Babbage’s com-
puter was operated by an exceedingly complex set of gears, rods, and other mechani-
cal linkages. Although he could draw on the talents of Ada Lovelace, who is often
described as the world’s first computer programmer, Babbage’s ambitions were not
fully realized for more than a century, when solid-state electronics, the cathode ray
tube, and magnetic storage devices allowed the development of practical computers.
A portion of Charles Babbage’s analytical engine, which drew on the assistance of Ada Lovelace, often described as the world’s first computer programmer. (Photos: Mary Evans Picture Library/Alamy; IBM Corporation)
Another illustration of the importance of complementary technological changes
can be drawn from the history of one of the twentieth century’s most important
devices, the internal combustion engine. When these engines were first produced
during the late nineteenth century, the spark that ignited the air–fuel mixture was
timed to occur at the top of the compression stroke. This did not allow a sufficient
time for the mixture to be completely ignited, and efficiency consequently suffered.
When, after much experimentation, the spark was timed to occur before the piston
reached the top of its stroke, the combustion process was greatly improved, and
much more power was consequently delivered. Still, early engines were deficient in
power because compression ratios were kept low in order to prevent the sudden det-
onation of the air–fuel mixture and resultant damage to the engine. This problem
was solved in part by conducting careful inquiries into the combustion process and
by reshaping the combustion chamber and piston crown in ways suggested by this
research. But this was not enough. In order to employ higher compression ratios
successfully, it was necessary to modify not just the engine but also the fuel it used.
In particular, the research of Thomas Midgley and Charles Kettering demonstrated
that the addition of tetraethyl lead to gasoline allowed higher compression ratios
and a subsequent improvement in both power and economy. It was thus through a
combination of cumulative improvements in both the engine and the fuel it burned
that the internal combustion engine reached an acceptably high level of efficiency.
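For readers who want the thermodynamic logic behind this episode, the ideal air-standard Otto cycle (a textbook idealization, not a description of any particular historical engine) ties efficiency directly to the compression ratio r:

\[ \eta = 1 - \frac{1}{r^{\gamma - 1}}, \]

where \( \gamma \approx 1.4 \) for air. Raising r from 4 to 8, for example, lifts the ideal efficiency from about 43 percent to about 56 percent, which is why a fuel additive that suppressed detonation at higher compression ratios translated directly into gains in both power and economy.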
It is important to note that these changes in engine technology did not take
place when the internal combustion engine was first invented, but rather during a
period when it was already being sold to customers. This illustrates a point made
by Nathan Rosenberg: “The idea that an invention reaches a stage of commercial
profitability first and is then ‘introduced’ is, as a matter of fact, simple minded.
It is during a (frequently protracted) shakedown period in its early introduction
that it becomes obviously worthwhile to bother making the improvements.”12 In
effect, consumers at times may be unwitting participants in the beta testing of new
products.
The commercial viability of a new technology may thus stimulate the develop-
ment of complementary technologies. A bottleneck that restricts the continued
development of a particular technology creates strong economic incentives to find
new technologies that clear the obstruction.13 Many obstacles are surmounted
through the use of technologies that have been developed for use by a different
industry or in a different sector. This can be seen today in the automobile indus-
try, where emissions requirements have necessitated the use of microprocessors
and computers for the regulation of spark advance and air-fuel ratios. Thus, a
nineteenth-century technology, the four-stroke internal combustion engine, owes
its survival to its marriage to a late twentieth-century technology.
Technological advance in one area is often stimulated by the emergence of new
technologies in different, but related, areas. This process can be seen in the devel-
opment of nineteenth-century metal-working industries. Although their products
differed substantially, the processes employed by these industries were basically the
same: turning, boring, drilling, milling, and planing. At the same time, they all
confronted similar technical problems of transmitting power, reducing friction, and
controlling the rate of feed. This meant that a technological solution arrived at by
one industry was often directly applicable to the problems of another industry.14
This occurred in the early automobile industry, which made abundant use of
the products and manufacturing techniques that had been developed by the bicycle
industry during the 1880s and 1890s. Ball bearings, spoke wheels, drive chains,
and the use of electrical resistance welding had been extensively employed for the
manufacture of bicycles in the decades immediately preceding large-scale automo-
bile production. One of the most novel and significant technologies entailed the use
of stamped components to take the place of forgings. Their use eliminated a great
deal of machining, with a consequent lowering of production costs. The cheap,
mass-produced automobile thus owed much to technologies initially developed to
make a product that it subsequently eclipsed.15
This is hardly the only example of technologies developed by an established
industry paving the way for a radically new one. Although the turbojet engine was
a novel method of propulsion when it first appeared in the 1940s, it drew heavily
on designs, components, and processes that had already been developed for steam
turbines. In Edward Constant’s summary, “All the work done on blade design, gas
f low, shaft and bearing loads, temperature distribution, lubrication systems, gover-
nors, blade-cutting machines, test procedures and instruments, and countless other
facets of design and production could be applied to gas turbine development.”16
Interindustry transfers of technology do not happen automatically. The effec-
tive transfer of hardware, information, or simply (but importantly) the belief that
a problem is solvable requires individuals and organizations that are capable of
functioning in both worlds and have the incentives to do so. It also requires that
these individuals and organizations are acceptable to the other individuals and
organizations with which they interact, and that they are capable of speaking the
same technical language. Technological innovation is, in Christopher Freeman’s
phrase, a “coupling process” that occurs at the interfaces between science, technol-
ogy, and the market. This does not take place solely through intuitive flashes: “It is
a continuous creative dialogue over a long period of research, experimental design,
and development.”17
Finally, it should be noted that sometimes the complementary changes neces-
sary for the success of a new technology are not technological. A successful techno-
logical change may require changes in basic habits and attitudes. This can be seen
in the failure of agricultural extension agents in New Mexico to get farmers to adopt
hybrid corn in the late 1940s. There was no question about the technical superiority
of the corn: demonstrations showed that its use resulted in a potential trebling of
yields. Impressed by this improvement, half of the farmers planted the new variety,
thereby immediately doubling their output. But after two years virtually all of the
farmers had abandoned hybrid corn and reverted to their traditional low-yielding
crop. The problem was that the cornmeal made from the hybrid variety could not
be made into good tortillas; it did not taste right and couldn’t be easily shaped. In
the absence of a change in culinary patterns, a technically superior product could
make no lasting impact.18
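The aggregate arithmetic behind these figures is worth a moment (on the simple assumption that adopters and non-adopters farmed comparable acreage). If half the farmers stayed with the traditional yield, call it 1 unit per acre, and half obtained the trebled yield, community output becomes

\[ \tfrac{1}{2}(1) + \tfrac{1}{2}(3) = 2, \]

an immediate doubling, exactly as reported, even though only half the fields had changed.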
Push and Pull
The rejection of hybrid corn in New Mexico demonstrates the perils in the tech-
nological determinist perspective, in which technological change is viewed as a
largely self-contained process. As we have seen, social constructivists have taken
issue with “internalist” histories that ignore the social, economic, and political
forces that shape technological change. And as was noted a few pages ago, contem-
porary scholarship has moved away from the “great person” approach to the history
of technology. To be sure, many technologies owe their existence to the inspiration
and hard work of individuals, including the unsung heroes who were responsible
for the myriad improvements necessary for the realization of a new technology. But
surely more is involved than their efforts. After all, human ability is presumably
spread evenly throughout cultures and historical epochs, yet significant technologi-
cal changes are not equally distributed over time and place. Thomas Edison’s genius
produced 1,093 patents, and some of his inventions transformed the world. Had
he been born in ancient Rome or dynastic China, he might have helped to design
aqueducts or sections of the Great Wall, but it is unlikely that his talents would
have changed the course of Roman or Chinese history. Geniuses require appropriate
social settings for the realization of their talents.
What kind of social system is required if inventive ability is to flourish? Why
does technological innovation occur in some places and times and not in oth-
ers? In beginning to answer these questions, it is helpful to apply to technological
change the concepts that have been so useful to economists: supply and demand.
Everything from fundamental scientific breakthroughs to minor refinements serves
to “push” new technologies into the world. Still, simply having an available supply
of new devices and techniques does not guarantee that they will be used. Many
examples of technologies that languished because they were “ahead of their time”
can be cited. The pneumatic tire was patented in 1845 and then forgotten until it
was reinvented by John Dunlop in 1888. DDT was first synthesized in 1874, but it
was not put to use as an insecticide until 1941. Several decades elapsed before the
laser passed from being a laboratory curiosity to a practical device used for every-
thing from supermarket scanners to instruments for microsurgery.
For a technology to make the transition from the potential to the actual requires
not just that it exist; there must also be a desire for it, coupled with the ability to
pay for it. Economists call this “effective demand.” Seen in this light, technology is
like any other good or service; it will not be produced unless some person, group, or
organization wants it and is willing to buy it. Technology is “pushed” by a variety of
forces, but it also has to be “pulled” by effective demand. To understand why certain
technologies have flourished while others have languished, it is therefore necessary
to consider the configuration of a society and the way in which it determines the
effective demand for particular technologies.
The most influential research on the importance of effective demand in inducing
technological change was done by Jacob Schmookler.19
of patents in various industries, Schmookler found that their emergence was closely
related to the level of demand for the products of these industries. To take one
rather obvious example, inventors’ interest in improving the horseshoe was strong
when the horse was a primary means of transportation, but interest evaporated
when the steam engine and the internal combustion engine began to displace it.20
Another illustration of the importance of demand in inducing technological
development can be extracted from the history of photography in the nineteenth
century.21 For centuries painters and scientists had made use of the camera obscura,
a darkened room into which light was admitted through a pinhole, resulting in the
projection of an inverted image of an outdoor scene on the opposite wall. Later
developments substituted optical lenses for the pinhole, which made for a sharper
image. Photographs were first produced during the 1820s and 1830s when pioneers
such as Niépce, Daguerre, and Fox Talbot devised chemical emulsions that preserved
the image on paper or a metal plate. But the rapid growth of photography cannot
be attributed simply to the supply of these inventions. At this time, social changes
were sweeping across Europe, resulting in the ascendance of a new social elite, the
property-owning commercial and industrial bourgeoisie. The members of this group
had a keen desire to flaunt their affluence by taking on characteristics of the old
aristocracy. For the latter, a key artifact of their status was the painted portrait; we
have all seen renditions of persons such as the Eighth Duke of Puddleswallop hang-
ing in our local museums. But many of the rising bourgeoisie lacked the money or
the time for such symbols of their importance, and, in any event, there were not
enough skilled portrait painters to serve the needs of this growing group of people.
Their aspirations were therefore met by the photographic portrait studio, where the
subject posed with the trappings of upper-class status, such as rich draperies and
elegant furniture. In the later decades of the century, new and cheaper methods
of photography emerged to meet the demands of poorer yet increasingly affluent
people, such as American immigrants who wanted portraits that could be sent
back home. Today, the effective demand produced by a great mass of consumers
has stimulated the development of a huge variety of photographic apparatus, rang-
ing from simple disposables to ubiquitous cell phone cameras and sophisticated
digital cameras.
Good business practice is often implicitly based on the realization that suc-
cessful technological development requires the presence of effective demand. One
English study found that the key determinant of a firm’s innovative success was an
understanding of customer requirements. This meant that from its very inception,
a new product or process had to be developed with an eye toward meeting the
needs of actual or potential customers. Similarly, a Canadian study found that
the commercial failure of many inventions was due to an inability to evaluate the
true extent of demand. A great deal of product development was done with little
consideration of market potential. It was often the case that entrepreneurs were so
infatuated with their product innovation that they were incapable of realistically
assessing opportunities and the nature of the market.22

The opulent interior of a nineteenth-century photographer’s portrait studio. (Alinari Archives/CORBIS)
Belated Demand
At the same time, however, gauging the potential demand for a new product can be
a tricky task. Many of today’s “essential” technologies were not at first recognized
as such. When in the late 1930s Chester Carlson attempted to interest established
business machine manufacturers in his photocopying device—the first Xerox
machine—they were of the uniform opinion that there was no point in employing
complicated apparatus and chemicals simply to replace carbon paper. And even
inventors can badly misjudge the ultimate consequences of their own creations.
Alexander Graham Bell initially thought that the telephone he invented would
primarily be used to transmit operas and other musical performances from the con-
cert hall to the home.23 Edison at first believed that one of the main applications of
his phonograph would be to record the last words of dying men. We can also take
note of the judgment of Howard Aiken, the director of the team that built one of
the world’s first computers, who in the early 1950s prophesied that in the foreseeable
future, the total need for computers in the United States could be met by no more
than a half-dozen machines. In similar fashion, before 1950 Thomas J. Watson,
the president of IBM, was of the opinion that there would be no commercial market
for computers.24
The history of radio also shows how a technology may be put to uses not envis-
aged by its inventors. When equipment for sending and receiving radio waves was
first developed during the late nineteenth century, no one imagined that it would be
used for commercial broadcasts. The first transmitters and receivers were devised for
purely intellectual purposes—in order to test the validity of James Clerk Maxwell’s
theories about the nature of electromagnetic waves. Only after the passage of more
than a decade did some visionaries perceive a commercial use for radio apparatus,
and then their imaginations were limited to the use of the radio for ship-to-shore
communications. Decades passed before the idea of broadcasting to a mass audience
emerged.25
What are we to make of these examples? They seem to refute the theory that
technologies are primarily the result of “demand-pull,” for all of the technological
developments just described emerged in the absence of apparent demand for them.
Perhaps we can salvage the demand-pull theory by distinguishing two different
kinds of technological advances. The first kind consists of refinements and improve-
ments to an existing way of doing things, while the second (and far less frequent)
is the truly revolutionary breakthrough—the Internet, the digital computer, radio,
the telephone, and the like. In cases such as these, the very novelty of a revolution-
ary breakthrough makes it difficult to determine what its ultimate uses will be and
who, if anyone, will want it. By contrast, advances of the first kind occur within
a known context; the basic technology is already in use, and there are likely to be
people and business firms that want, and are willing to pay for, the new wrinkles
that promise to improve an existing technology. Improvements of this sort therefore
have a predictable market. Conversely, radically new technologies confront a great
deal of uncertainty. They may satisfy a latent need, or they may create a new one.
They may also sink without leaving a trace. They are flights into the unknown, and
it is hazardous to guess what sort of a reception they will meet. If nothing else, they
confirm the old Chinese saying that it is dangerous to make predictions—especially
about the future.
Market Economies and Technological Advance
The forces that “push” and “pull” technological advance do not exist everywhere
or in equal measure. In many places and at many times, the distribution of wealth
and power retarded these forces, resulting in a slow pace of technological advance.
Moreover, the particular technological advances that do occur usually reflect a
society’s general configuration of wealth and power. In the European Middle Ages,
the landowning aristocracy and Church officials controlled most of the wealth and
wielded great power. The monastic orders often played an important role in land
clearing, farming, and the construction of mechanical devices, but for the most part
the religious and secular establishment showed little interest in such matters. While
the era gave rise to significant technological advances in water power, mechanical
clocks, and weaponry, its most evident technological triumph was the great symbol
of the traditional order: the Gothic cathedral.

ENIAC, a first-generation electronic digital computer, was programmed by plugging and unplugging cables. (CORBIS)
As European history unfolded, the interests and demands of a growing mer-
chant class led to the development of technologies that eventually surpassed even
the soaring cathedrals. The great technological innovations that began in the mid-
fifteenth century with improvements in shipbuilding and ocean navigation were
closely associated with the rise of capitalism and the emergence of a market system.
A market system organized around the principle of private property was of crucial
importance for the stimulation and guidance of inventive and innovative abilities,
as well as their application to production.26
One of the strongest accolades to the technological dynamism of capitalist soci-
ety can be found, of all places, in The Communist Manifesto. With unfeigned admira-
tion, Karl Marx and Friedrich Engels note the following:27
The bourgeoisie, during its rule of scarce one hundred years, has created more
massive and colossal productive forces than have all preceding generations
together. Subjection of Nature’s forces to man, machinery, application of
chemistry to industry and agriculture, steam-navigation, railways, electric
telegraphs, clearing of whole continents for cultivation, canalisation of rivers,
whole populations conjured out of the ground—what earlier century had even a
presentiment that such productive forces slumbered in the lap of social labour?
In the time of Marx and Engels, and in our own time, a market economy driven
by the activities of self-interested businessmen has produced the most receptive
environment for technological innovation. There are several reasons for this. A
market economy will stimulate inventive efforts, for it promises financial rewards to
those able to meet the needs of consumers. For example, somebody invents a bet-
ter mousetrap in the hope of selling it in the market. If the demand is there, eager
customers will buy it. Everybody is better off: consumers have a better mousetrap,
while the inventor gets rich and retires to Palm Springs, and nobody worries about
how technological advance has lowered the quality of life for mice. Second, a mar-
ket economy is characterized by the presence of numerous competitors. Under these
circumstances, a producer is strongly motivated to develop and apply new technolo-
gies in order to make better products and to reduce production costs. Failure to
do so may result in the eventual collapse of the enterprise, as the history of many
once-successful firms demonstrates. Finally, a market system is particularly effective
in eliciting the production of the auxiliary items necessary for technological inno-
vation. A new technology will require special materials, components, and services.
Because of its responsiveness to new sources of demand, a market economy is well
suited to meet these requirements.
The advantages of a market economy in stimulating technological advance
are further demonstrated by an examination of centrally planned economies. For
decades the economies of the Soviet Union and the People’s Republic of China
were organized through the mechanisms of central planning, but during the 1980s
it became painfully evident that these mechanisms were fatally flawed. One of the
chief manifestations of that failure has been a retarded technology. It cannot be
denied that the Soviet Union produced some impressive technological achieve-
ments, most notably in its space and military programs, while China made signifi-
cant progress in industrializing an impoverished country, but taken as a whole their
level of technological development remained stagnant while the United States,
Western Europe, and Japan moved rapidly forward.
The backwardness of centrally planned economies has had significant political
repercussions. In China, dissatisfaction with the pace of technological and eco-
nomic advance led to a retreat from centralized planning and a much greater scope
for market-based economic relations. In the Soviet Union, the fear of falling even
further behind the West motivated the Gorbachev regime to introduce some ele-
ments of a market economy and to attenuate the role of central planning. But these
efforts were not enough, and within a few years the Soviet Union ceased to exist. In
the countries of the former Soviet Union and in China, it has become evident that
market-based reforms have made major contributions to economic and technologi-
cal development, albeit at the cost of greater economic and social inequality, as the
recent histories of the two nations have shown.
Many of the difficulties experienced by centrally planned economies in achiev-
ing technological advance have been the result of a basic tension between their
system of economic management and the requirements of technological innova-
tion. Centrally planned economies rest on the assumption that economic activities
can be reduced to predictable routines. But the course of technological innovation
is notoriously difficult to predict. The bureaucratic procedures that work tolerably
well for the administration of routine productive tasks usually fail when they are
applied to technological innovation. A planning agency can set goals and quotas for
the production of established goods, and various ministries can oversee the actual
operation of individual enterprises through routine bureaucratic administration.
But these procedures work much less well when innovation is the goal. Innovation
is an activity full of risk and unpredictability, and it cannot easily be accommodated
to preprogrammed structures and activities.
To make matters worse, centrally planned economies attempt to motivate
workers and managers through the allocation of rewards that create disincentives
for technological innovation. A factory manager typically receives bonuses for
the fulfillment and overfulfillment of quotas for established products, as given by
the central plan. The production of an innovative product is not rewarded, for it
has not been stipulated by the plan. The uncertainties and unpredictabilities that
surround technological innovations create risks for those who seek to develop
and use them, but these risks are not matched by commensurate rewards for those
who take them.
Noneconomic Sources of Technological Advance
It is not the intention here to convey the impression that only market forces can
produce technological innovation; as we shall see, government institutions have
become increasingly important sources of technological advance. And no inference
should be made that the historical superiority of a market economy in promoting
technological advance makes it a superior system in general. There is more to life
than technological advance, and, as we have already seen and shall see some more,
both capitalism and the market have produced technologies that have been detri-
mental to large numbers of people.
Furthermore, technological innovation cannot always be traced to economic
motives or even to the desire to address practical problems. To be sure, we tend to
think of technology as the result of efforts to solve problems of this sort; after all,
technology has already been defined as the product of knowledge that is used in
order to get something done. The very word “technology” conjures up images of
useful devices, and technology’s practitioners—engineers, managers, and skilled
workers—are often viewed as a serious bunch, sitting rigidly in front of computer
terminals, making precise measurements, and, above all, applying their talents to
the solution of practical problems that are usually tied to economic concerns.
In fact, even the most practical of inventions may owe their origins to a spirit
that seems more closely connected to play than to “productive” work. When Willis
Whitney served as the first director of the research laboratory of the General
Electric Company, he often asked his scientists and technicians there if they were
“having fun.” For Whitney, “fun” was working on problems that had stumped every-
one. Pursuing these problems was nothing less than the most exciting thing that a
person could do.28
Consider, too, one of America’s most famous inventors, Benjamin Franklin.
With typical American pragmatism he wrote, “Utility is in my opinion the test of
value in matters of invention, and that a discovery which can be applied to no use,
or is not good for something is good for nothing.”29 Franklin’s inquiries into the
nature of electricity did result in one useful device: the lightning rod, which saved
many a building from destruction. But his other inquiries had a less immediate pay-
off. Although Franklin devised a number of devices that helped him to learn more
about the nature of electricity, decades passed before electricity had any practical
value. Indeed, he was “chagrined a little that we have been hitherto able to produce
nothing in this way of use to mankind.”30
Later events proved him wrong, although he never shared in that knowledge.
Still, the pursuit of useful innovations could not have been the prime motive for
Franklin’s inquiries. Franklin was an amateur in the literal sense of the word: a per-
son who pursues an activity for the sheer love of it. For many years the leisure-time
pursuits of amateur scientists such as Franklin sustained research into the nature of
electricity despite the absence of direct applications, yet these “idle” intellectual
efforts were essential to the invention of a great variety of useful devices.
A century after Franklin died, a young man of seventeen climbed a cherry tree
and turned his imagination to possibilities that only a few had dreamed of. The year
was 1899, and the young man was Robert Goddard, who was to be the inventor
of the liquid-fueled rocket. As he recalled in later years, “It was one of the quiet,
colorful afternoons of sheer beauty which we have in October in New England, and
as I looked toward the fields at the east, I imagined how wonderful it would be to
make some device which had even the possibility of ascending to Mars, and how
it would look on a small scale, if sent up from the meadow at my feet. . . . I was
a different boy when I descended the tree from when I ascended, for existence at
last seemed very purposive.”31 At that time, and for many years to come, Goddard
could scarcely have imagined the products of that October vision: orbiting satellites
for global communication links, weather prediction, global positioning systems,
and the development of terrifying new weapons. Throughout his life Goddard
continued to be energized by the dream of space travel for its own sake; practical
consequences were at best a secondary concern.
It also should be noted that even in a predominantly market-oriented, capital-
ist society such as the United States, not all technologies have been generated and
shaped by market forces. Chapter 18 will take up this issue by examining the role
of the government in promoting technological change. At this point it will only
be noted that technologies developed outside the constraints of the market system
are less likely to be shaped by concerns about costs.

The realization of a vision: Robert Goddard and his first liquid-fueled rocket. (UPI/Bettmann Newsphotos)

Firms operating in a market environment know that minimizing costs is essential to success. Technologies that
hold down production costs are quite appealing, as are technologies that expand
sales by lowering the cost of the product itself. The Ford Motor Company during
the glory years of the Model T is a striking example of a firm that owed its success to
the use of new technologies in order to lower costs. This process also has been dra-
matically demonstrated in recent years as technological advances in the design and
production of integrated circuits have led to sharp declines in the price of personal
computers and a concomitant expansion of this market. At the same time, however,
there are large sectors of the economy where prices and the costs of production are
secondary concerns. As a result, the pattern of technological development can be
quite different. This has been particularly evident in the defense industry, where the
presumed dictates of national security have allowed the deployment of staggeringly
expensive military technologies. To take a particularly striking example, a single Air
Force F-22 fighter costs U.S. taxpayers $412 million (when R&D and testing costs
are taken into account) and requires 30 hours of maintenance and an expenditure
of $44,000 for every hour it is in the air.32 In a climate where only the presumed
best will do, there are strong tendencies to “gold plate” weapons systems and to have
little concern for cost constraints.
This tendency is not confined to the military sector. In recent years there has
been a mounting concern about the financial costs of modern medical technolo-
gies. Some medical technologies, such as antibiotics, have undoubtedly lowered the
costs of medical care, but many others have had the opposite result. When faced
with a choice between controlling medical expenses on the one hand and saving lives and
alleviating pain through the use of sophisticated technologies on the other, it is difficult for any individual
or society concerned with the well-being of its members to put the former ahead of
the latter. It is thus no surprise that about half the growth of health care spending
that occurred during the second half of the twentieth century can be attributed to
the use of new technologies.33
The case of military and medical technologies brings us back to the necessity
to consider the political and social context of technological choice. Economic con-
siderations, important as they are, are not the sole basis of decisions regarding the
development, selection, and use of particular technologies. Nor does technology
develop according to its own internal dynamics. Technologies are social creations,
and any successful attempt at understanding why particular technologies are cre-
ated, chosen, and used must take into account their social context. Having made
this basic point, we will pursue the matter a bit further by looking at the processes
through which technologies spread and take hold. But before we do so, we need to
consider one more source of technological change: scientific advance. This will be
the topic of the next chapter.
Questions for Discussion
1. In your opinion, how well does the social construction approach explain tech-
nological innovation? What technologies seem well-suited to the application
of this perspective? Which do not?
2. In times past, inventors like Edison, Morse, Tesla, and the Wright brothers
were treated as celebrities. Can you name any contemporary inventors? Why
do individual inventors appear to be less prominent today?
3. A person with a heart problem needs a pacemaker but is unable to pay for it.
What, then, is the source of effective demand for this technology? Who ulti-
mately pays for it, and why?
4. As a would-be inventor or entrepreneur, how would you go about ascertaining
whether or not a new technology is likely to find a receptive market?
5. If you were the leader of a developing country, what sort of changes would you
try to institute in order to accelerate the pace of technological innovation?
Notes
1. Donald MacKenzie and Judy Wajcman (Eds.), The Social Shaping of Technology, 2nd ed.
(Buckingham, England, and Philadelphia: The Open University Press, 1999).
2. Wiebe E. Bijker, “Understanding Technological Culture through a Constructivist View
of Science, Technology, and Society,” in Stephen H. Cutcliffe and Carl Mitcham
(Eds.), Visions of STS: Counterpoints in Science, Technology, and Society Studies (Albany:
State University of New York Press, 2001), p. 27.
3. Thomas P. Hughes, “Technological Momentum,” in Merritt Roe Smith and Leo
Marx (Eds.), Does Technology Drive History? The Dilemma of Technological Determinism
(Cambridge, MA, and London: MIT Press, 1994), pp. 99–113.
4. Rudi Volti, “Why Internal Combustion?” American Heritage of Invention and Technology
6, 2 (Fall 1990).
5. Arnold Heertje, Economics and Technical Change (New York: John Wiley & Sons, 1977),
p. 98.
6. Nathan Rosenberg, Inside the Black Box: Technology and Economics (Cambridge, England:
Cambridge University Press, 1982), pp. 62–70.
7. Louis Hunter, Steamboats on the Western Rivers (Cambridge, MA: Harvard University
Press, 1949), pp. 121–122. Quoted in Rosenberg, Inside the Black Box, p. 64.
8. Albert Fishlow, “Productivity and Technological Change in the Railroad Sector, 1840–
1910,” in Studies in Income and Wealth No. 30: Output, Employment, and Productivity in
the United States After 1800 (New York: National Bureau of Economic Research, 1966),
pp. 635, 641.
9. Tom Shedd, “The Little Engine That Does,” Technology Review 87, 2 (February–March
1984): 66–67.
10. Richard Tanner Pascale and Anthony G. Athos, The Art of Japanese Management:
Applications for American Executives (New York: Warner Books, 1981), p. 306.
11. The following is based on John A. Heitman and David J. Rhees, Scaling Up: Science,
Engineering, and the American Chemical Industry (Philadelphia: Center for the History of
Chemistry, 1984), pp. 17–21.
12. Nathan Rosenberg, Perspectives on Technology (Armonk, NY: M.E. Sharpe, 1985), p. 167
[author’s emphasis].
13. Rosenberg, Inside the Black Box, pp. 60–61.
14. Peter George, The Emergence of Industrial America: Strategic Factors in American Economic
Growth Since 1870 (Albany: State University of New York Press, 1982), p. 51.
15. David A. Hounshell, From the American System to Mass Production, 1800–1932: The
Development of Manufacturing Technology in the United States (Baltimore: Johns Hopkins
University Press, 1984), pp. 189–215.
16. Edward Constant, The Origins of the Turbojet Revolution (Baltimore: Johns Hopkins
University Press, 1980), p. 82.
17. Christopher Freeman, “The Determinants of Innovation: Market Demand, Technology,
and the Response to Social Problems,” Futures 11, 3 (June 1979): 211.
18. Everett M. Rogers, Diffusion of Innovations (New York: The Free Press, 1962), pp. 148–149.
19. Jacob Schmookler, Inventions and Economic Growth (Cambridge, MA: Harvard
University Press, 1966).
20. Ibid., p. 93.
21. W. G. L. De Haas, “Technology as a Subject of Comparative Studies: The Case of
Photography,” Comparative Studies in Society and History 21, 3 (July 1979): 367–370.
22. Christopher Freeman, The Economics of Industrial Innovation, 2nd ed. (New York:
Cambridge University Press, 1982), p. 124.
23. Ibid., p. 127.
24. Wilson Dizard, The Coming Information Age: An Overview of Technology, Economics,
and Politics (New York: Longman, 1982), p. 33. See also Paul Ceruzzi, “An Unforeseen
Revolution: Computers and Expectations, 1935–1985,” in Joseph J. Corn (Ed.),
Imagining Tomorrow: History, Technology, and the American Future (Cambridge, MA: MIT
Press, 1986).
25. Hugh G. J. Aitken, Syntony and Spark: The Origins of Radio (Princeton, NJ: Princeton
University Press, 1985).
26. Robert Heilbroner, Between Capitalism and Socialism: Essays in Political Economics (New
York: Random House, 1970), p. 162.
27. Karl Marx and Frederick Engels, The Communist Manifesto, in Karl Marx and Frederick
Engels, Selected Works, vol. I (Moscow: Foreign Languages Publishing House, 1962),
p. 39.
28. Elting E. Morison, From Know-How to Nowhere: The Development of American Technology
(New York: New American Library, 1977), p. 126.
29. Quoted in Roger Burlingame, March of the Iron Men: A Social History of Union Through
Invention (New York: Grosset & Dunlap, 1938), p. 77.
30. Ibid., p. 77.
31. Barton C. Hacker, “Robert H. Goddard and the Origins of Space Flight,” in Carroll W.
Pursell, Jr. (Ed.), Technology in America: A History of Individuals and Ideas (Cambridge,
MA: MIT Press, 1981), p. 233.
32. R. Jeffrey Smith, “High-Priced F-22 Fighter Has Major Shortcomings,” Washington Post
(July 10, 2009), accessed on September 3, 2012, at http://www.washingtonpost.com/wp-dyn/content/article/2009/07/09/AR2009070903020.html.
33. Philip Aspden (Ed.), Medical Innovation in the Changing Healthcare Marketplace (Washington,
DC: National Academy Press, 2002), p. 16.
chapter four
Scientific Knowledge and Technological Advance
One of the most common beliefs about technology is that it is simply “applied sci-
ence.” There are certainly many examples that can be cited in support of this view.
Modern medical practices have been strongly influenced by fundamental discoveries
in biology. The development of the transistor depended on a thorough understand-
ing of quantum mechanics. Synthetic materials have been made possible by research
into polymer chemistry. But one should not be content to rest with these examples.
When the full spectrum of technological advance is considered, it becomes evident
that science does not always play the decisive role in the development of technology.
Indeed, many are the times when technological advances have taken place without
the benefit of scientific knowledge. Conversely, on some occasions scientific advance
has depended on prior technological achievements. In this chapter we will look at
the complex and shifting relationships between science and technology in order to
come to a better understanding of how they differ, as well as the ways in which they
have influenced each other.
The Historical Separation of Science and Technology
The definition of technology that was offered in the first chapter stressed that tech-
nology is based above all on the application of knowledge. But not all knowledge
need be derived from scientific research. It is certainly true that today much of the
knowledge required for technological advance is derived from scientific inquiries.
Still, when the full history of technology is surveyed, it is apparent that most tech-
nologies have been developed and applied with little scientific input. The ancient
Greeks made important contributions to many sciences—most notably astronomy,
optics, and acoustics—as well as producing major advances in mathematics. Greek
technology also progressed through innovations in agriculture, building construc-
tion, mining, the refining of metals, and military equipment. Yet none of these
innovations drew to any significant degree on Greek science. Moreover, the Greeks’
technological achievements were far less impressive than their scientific achieve-
ments, again indicating the lack of connection between the two. This lopsided
pattern of development continued with the Romans, although in reverse. Roman
contributions to science were minor, while Roman engineering (manifested in such
things as the construction of great aqueducts) reached a high level of development.
In any event, Roman technology had little to do with science.
The European Middle Ages was a time of slow but significant technological
advance. Improved agricultural practices were introduced, and the power of wind
and falling water was used for everything from grinding grain to polishing metal. An
effective horse collar allowed the literal harnessing of another important source of
power. Soaring cathedrals were built in many parts of Europe, where they continue
to be a source of awe and inspiration. Again, these achievements owed nothing to
the scientific inquiries of the time. In fact, the designers and builders of the cathe-
drals apparently did not even have knowledge of multiplication tables. Then, too,
there was little that technology could have drawn on, for medieval science exhib-
ited little of the dynamism of medieval technology.
At about the same time, blacksmiths in parts of the Middle East were using
steel superior to anything made in Europe. The swords and other edge weapons
that they made from the steel first produced in Damascus (the capital of present-
day Syria) combined a hard cutting edge with the flexibility necessary for an effec-
tive weapon. Yet it was only late in the twentieth century that the metallurgical
principles underlying Damascus steel were discovered. Although it was unknown to
the swordsmiths of the time, minute quantities of impurities, vanadium especially,
made an essential contribution to the unseen processes that gave the steel its desired
qualities. Consequently, when the composition of imported iron ore changed, the
steel made from it lacked the desired characteristics. Unable to draw on modern
metallurgical knowledge, traditional swordsmiths could not make the necessary
adjustments, and the “secret” of Damascus steel was lost for centuries.1
This disconnect between scientific and technological development continued
during the succeeding centuries. The sixteenth and seventeenth centuries were
the scene of epochal advances in science, yet technological change occurred at
a slower rate than it did during the preceding centuries when science had been
largely stagnant.2 In similar fashion, early nineteenth-century France boasted the
most advanced science in Europe, yet its technology was no more advanced than
that of England, its scientific inferior. In some key areas, such as mechanization
and steam power, France lagged well behind England.3 This historical record has
led one historian of science, Thomas Kuhn, to speculate that for the bulk of human
history, technology has flourished in societies where science has remained undevel-
oped, and vice versa.4 It is possible that our era is unique in its apparent ability to
simultaneously support scientific and technological advance.
Studies of Contemporary Science–Technology Relationships
Even today, when the connection between science and technology is much stronger
than it was in the past, a great deal of technological change takes place without
substantial inputs from science. The relative unimportance of science for many
technological developments was highlighted by a study that was conducted by
the Defense Department in the mid-1960s. Dubbed Project Hindsight, this study
assessed the extent to which pure scientific research was essential to the devel-
opment of 20 major weapons systems. In conducting their study, the Hindsight
researchers began with a weapon system and traced its history backward in order to
determine the key “events” that produced the knowledge that had been essential to
its creation and development. The results of the study gave little credence to the
commonly accepted view that scientific knowledge is the primary basis of techno-
logical development. Of the 710 events surveyed, only 2 were the result of basic
scientific research, a minuscule 0.3 percent of the total.5 Scientific research that was
specifically directed toward a particular military project was of greater importance,
accounting for 6.7 percent of events, while 2 percent were the result of scientific
research directed toward commercial or nondefense needs. The greatest portion of
events, the remaining 91 percent, owed little to concurrent scientific research, and
relied almost entirely on established concepts and principles.
Nearly 40 years later, a group of researchers conducted a similar study dubbed
Project Hindsight Revisited, in which they surveyed the processes involved in the
design of the Apache helicopter, the Abrams battle tank, the Stinger antiaircraft
missile, and the Javelin antitank missile. Although the researchers did not attempt
to determine the role of basic scientific research in the design of these weapons,
their report noted that most of the relevant research had been done well before
these projects were initiated, and that very little basic research was done in order to
address specific design issues.6
Similar conclusions emerged from a research project that was conducted in
England. This study examined the winners of the Queen’s Award for Industry,
which is given to British firms that have distinguished themselves by initiating
technologically innovative products and processes. The study found that very few of
these innovations were directly connected to basic scientific research. Accordingly,
the authors concluded that “the great bulk of basic science bears only tenuously if
at all on the operations of industry.”7
Although they show that technology’s connection to science is not as straight-
forward as is often assumed, one should not draw sweeping generalizations from
these studies. The authors of the original Hindsight study, as well as a number of
its critics, were quick to note that the long-term influences of scientific research
were not captured by the study’s methodology. Project Hindsight considered only
the effects of scientific research conducted for the most part after 1945, thereby
removing from consideration the immense body of scientific knowledge that had
accumulated before that time. The study’s researchers found that a median delay
of nine years separated the completion of a scientific research project from its
application, even when research efforts targeted at specific technological missions
were included. It was therefore not surprising that basic scientific research had few
technological consequences during the 20-year span covered by the study.8
By taking a longer chronological view, another study, entitled Technology in
Retrospect and Critical Events in Science (TRACES), contradicted the previously
cited studies by determining that a number of innovations, ranging from oral con-
traceptives to videocassette recorders, were directly tied to prior scientific research.9
But even here, the researchers were obliged to point out that the sequence from
scientific discovery to technological innovation is not linear, and that “a better
understanding needs to be achieved concerning the two-way influence between
science and technology.”10
This theme was elaborated by critics of Project Hindsight and TRACES, who
noted that both of these studies assumed linear processes in which scientific discov-
eries preceded technological applications.11 However, as will be explored later in
this chapter, the relationship can be inverted, with technology serving as a stimu-
lus for scientific inquiry. On many occasions, the development and application of
particular technologies have raised questions that the science of the time could not
answer. In successfully addressing these intellectual puzzles, researchers have made
important contributions to the advance of science.
How Technology Differs from Science
If nothing else, these studies show that the connection between science and tech-
nology is not adequately captured by the common belief that technology is simply
applied science, and that scientific discoveries quickly and easily give rise to techno-
logical applications. Some technologies draw directly on scientific research, while
others make little use of it. This is rather obvious. Of greater significance is the fact
that science and technology are quite different in their basic natures. This makes
the translation of scientific knowledge into technological application a difficult and
complex process.
Whereas science is directed at the discovery of knowledge for its own sake,
technology develops and employs knowledge in order to get something done. The
content of the knowledge may be rather similar, but different motivations underlie
its pursuit and application. Here, of course, we are on slippery ground; it is often
extremely difficult to discern the motivations underlying a person’s activities, and
it may well be the case that a particular engineer is driven by the same desire
to understand something for its own sake that animates the work of a pure scientist.
Motives are often mixed and complex.12
Much of the prestige accorded to science is the result of its supposed purity;
science is thought to be an intellectual venture free from political, organizational,
and economic constraints. The insulation of scientists from the demands of their
patrons confers a sense of higher ethical standards; scientists are beholden to noth-
ing but the internal demands of science. A great deal of recent scholarship has
sharply questioned this assumption. As has been the case with recent studies of
the history of technology, science has been described and analyzed as a social con-
struction. From this standpoint, scientific inquiry is not a disinterested, fact-driven
search for truth, but a human creation that has been shaped by cultural patterns,
economic and political interests, and gender-based ways of seeing the world.13 For
uncompromising social constructivists, successful scientific outcomes may have
more to do with negotiation, the support of designated authorities, and resonance
with prevailing attitudes than with theoretical elegance or experimental evidence.
The social construction of science remains controversial, and, in any event,
few social constructivists believe that scientific facts and theories are purely social
creations that have nothing to do with underlying realities. Moreover, social con-
structivism is a largely academic enterprise, and most laypeople still believe in the
objectivity of science and the purity of scientific motives. These qualities give
Volti7e_05_ch04_061-078_highres.indd 64 28/11/12 5:56 PM

Scientific Knowledge and Technological Advance 65
individual scientists a claim to autonomy not enjoyed by other employees. Scientists
are thus in a particularly favorable situation. The assumption that scientific progress
leads to material progress confers an aura of practicality on their work, while at the
same time they are in a good position to resist the overt control of their work by
their sponsors.
In contrast, most engineers work under tighter constraints. Their employers
expect results that have immediate applications and fall within a narrowly defined
range of possibilities. A scientist may abandon a theory or an experiment in order
to pursue a line of inquiry that unexpectedly arises during the course of his or her
research. An engineer, however, rarely has this opportunity; there may be some
room for serendipity, but the bridge has to be built within a given time frame and
under definite budget constraints. For this reason, what separates scientific and
technological inquiries may not be the motivations of individual practitioners but
the motivations of their employers and patrons.14

Superconductive magnets kept at extremely low temperatures are essential components of magnetic levitation, which is demonstrated by this high-tech skateboard. Although there are several other technological applications of high-temperature superconductivity, the underlying physics of the phenomenon is not well understood. (BERTRAND GUAY/AFP/Getty Images)
Technology also differs from science in the type and depth of knowledge that
is required. The ultimate question asked of scientific knowledge is “Is it true?” For
technological knowledge, the key issue is “Will it work?” Technological problems
can often be solved with no understanding of what is going on. As we have seen,
throughout history many technologies were effectively applied even though the
basic principles underlying their operation were poorly understood, if they were
understood at all. A similar situation can be found today; high-temperature (which
in this case means 130 K or minus 226°F) superconducting materials are beginning
to be used in motors and other devices, even though the physics of the process
remains something of a mystery. It is also instructive to consider the story of the
great scientist Johannes Kepler (1571–1630), who developed and employed the
calculus of variation in order to derive optimum dimensions of beer kegs—only to
discover that these dimensions were already being employed by the coopers who
actually built the kegs!15
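Kepler's problem can be reconstructed in simplified form. Suppose, idealizing the keg as a cylinder of radius r and height h, that a measuring rod of fixed length L runs from the bunghole at the middle of the side to the far rim of the lid, so that

L^{2} = \frac{h^{2}}{4} + (2r)^{2}, \qquad V(h) = \pi r^{2} h = \frac{\pi}{4}\left(L^{2}h - \frac{h^{3}}{4}\right).

Setting dV/dh = \frac{\pi}{4}\left(L^{2} - \frac{3h^{2}}{4}\right) = 0 gives h = 2L/\sqrt{3}, and hence h/(2r) = \sqrt{2}: the volume is greatest when the height is about 1.41 times the diameter, close to the proportions the coopers had already settled on by trial and error. (This is a reconstruction under the stated assumptions, not Kepler's own notation.)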
Many other technological innovations seem to fall into this pattern. Although
scientifically derived principles may emerge after the fact, many technologies have
been guided almost exclusively by trial and error, with the successful ones informed
by an intuitive sense of the right solution, and not by scientific truths. As Eugene
Ferguson has observed, at the end of the nineteenth century there were no scien-
tific principles that could be invoked during the design of the first motorcycles; the
placement of the engine, fuel tank, and other major components could be deter-
mined only through the actual construction and operation of motorcycles, without
the benefit of scientific principles or other forms of existing knowledge. Ferguson
therefore makes the point that “there is often no a priori reason to do one thing
rather than another, particularly if neither had been done before. No bell rings
when the optimum design comes to mind.”16
How Technology Stimulates Scientific Discovery
Although we tend to think of science as the leading factor in technological
advance, the reverse often occurs: scientific knowledge and discovery may be a by-
product of technological achievements, as when fundamental advances in biology
and chemistry were stimulated by the successful efforts of Pasteur, Lister, and Koch
to solve practical medical problems.17 There also have been cases where a technol-
ogy already in operation defied accepted scientific explanations and stimulated
the formulation of new theories. This process is exemplified by the story of how
the steam injector contributed to the abandonment of a popular scientific theory
regarding the nature of heat.18 In the mid-nineteenth century, many scientists
believed that heat was the result of the presence of a substance known as “caloric.”
According to this theory, when caloric combined with other materials those mate-
rials became hot. Also, caloric particles were supposedly self-repellent; thus, when
sufficient quantities of these particles came into contact with water, their repulsive
quality resulted in water turning into steam.
While this theory had its uses, it could not explain the operation of the steam
injector that was patented by Henri Giffard in 1858. The injector used steam from
the boiler to lift water into it, an operation that seemed to mimic perpetual motion
for those who subscribed to the caloric theory. In fact, Giffard, who was well-trained
in academic science, based his injector on the Bernoulli principle, which postulated
that the pressure of a fluid (in this case steam) drops as its velocity increases. The
operation of the injector was therefore the result of expanding steam producing a
partial vacuum that sucked water into the boiler.
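Giffard's reasoning can be illustrated with the textbook form of the principle. For steady flow of an idealized incompressible fluid along a streamline (a rough approximation for steam, but enough to show the effect),

p + \frac{1}{2}\rho v^{2} + \rho g z = \text{constant},

where p is the pressure, \rho the density, v the velocity, and z the height of the fluid. At a fixed height, any increase in velocity must be offset by a drop in pressure; the fast-moving jet of steam thus creates the region of low pressure that draws the feedwater toward the boiler.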
Giffard’s injector was no perpetual motion machine; it used a quantity of heat
that was equal to the quantity of work expended in raising water into the boiler,
plus the losses due to radiation and contact with surrounding surfaces. Its opera-
tion therefore made sense only when the interconvertibility of heat and work was
understood. This idea rested on the kinetic theory of heat, and it followed the first
law of thermodynamics (which stipulates that, quantitatively, energy cannot be cre-
ated or destroyed). The kinetic theory of heat was formulated several years before
Giffard’s invention but had been slow in winning acceptance. The rival caloric
theory had many adherents in the scientific community, and it took the apparent
anomaly of the injector to convert many of them to the now universally accepted
kinetic theory of heat.
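Put schematically in first-law terms (a modern gloss rather than Giffard's own notation), the injector's energy budget is

Q_{\text{steam}} = W_{\text{lift}} + Q_{\text{losses}},

with the heat surrendered by the condensing steam exactly accounting for the work of raising the feedwater plus the heat lost to radiation and to the surrounding surfaces. Nothing is created from nothing, and the appearance of perpetual motion dissolves.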
The steam injector illustrates the often subtle interactions between science
and technology. The operation of the injector provided a strong stimulus for the
acceptance of one scientific theory and the rejection of another. At the same time,
another scientific theory had been essential to the invention of the injector. But
scientific theories by themselves were not enough; the design and effective use of
the injector still depended on the experiments and modifications performed by
practicing engineers, for no set of theories was powerful enough to guide its design.
Again, we have an example of a technology that worked even though existing sci-
entific principles did not completely explain its operation.
This example and many others that could be cited indicate that science and
engineering are still separate enterprises, although there are certainly linkages
between them. Scientific knowledge can result in technological advances, while
at the same time many technologies create both opportunities and motivations
for new scientific inquiries. Many technological developments reach a plateau due
to a lack of scientific knowledge, thereby generating a clearly perceived need for
fundamental scientific research. The knowledge obtained through technological
practices and applications is thus the raw material of many scientists, whose work
centers on explaining technological practices at a deeper level.19
One example of this process is the invention of the laser. During World War II
the United States and other countries were engaged in a major effort to develop
radar as a means of detecting enemy ships and aircraft. While participating in the
development of radar technology, scientists used the knowledge that they had
gained to make significant advances in microwave spectroscopy, which allowed a
more accurate determination of molecular structures. One of the main developers of
microwave spectroscopy, Charles Townes, although nominally a physicist, contin-
ued to work on technologies for the generation of microwaves. In 1954 he and his
co-workers created a device they called the “maser” (for “microwave amplification
by stimulated emission of radiation”). In 1958 he and a former student published
a paper that outlined how the principle of the maser could be extended into the
region of infrared, visible, and ultraviolet light. These ideas were the foundation for
the laser (the acronym for “light amplification by stimulated emission of radiation”).
At first the laser was the classic example of an invention looking for an application.
But in succeeding years, the laser became the basis for a host of technologies ranging
from scanners used at checkout counters to devices used for the surgical rejoining of
detached retinas. In short, the development of one technology (radar) gave rise to
scientific advance (the determination of molecular structures through microwave
spectroscopy) and at the same time provided a scientific foundation for an entirely
new technology (the laser).20
As this example indicates, the relationship between science and technology, far
from being linear, may be one characterized by considerable back-and-forth move-
ment. This feedback between science and technology may be a fundamental source
of their dynamism.21 This reciprocal relationship can be seen in the highest accolade
for scientific achievement, the Nobel Prize. Although there is no prize for technol-
ogy per se, a large portion of the prizes for chemistry, physics, and medicine have in
fact been awarded for the invention of new devices and techniques. Some of them
eventually resulted in commercially viable products, while others were used for
further scientific inquiries.22 In sum, when science and technology have gone their
separate ways, as has been the case for most of human history, they develop more
slowly than when they interact with each other, as they have done to an increasing
degree during the present era.

One of the unanticipated uses of the laser is as a surgical instrument for the correction of faulty vision. (O. Louis Mazzatenta/National Geographic Society)
Indirect Effects of Technology on Scientific Advance
Technology’s role in stimulating scientific advance does not end with the provi-
sion of data, problems, and research opportunities. Technological development
also plays a vital role in scientific advance by supplying devices and instruments
that are essential for scientific inquiry. From early telescopes and galvanometers
to today’s electron microscopes and computers, the products of technology have
steadily increased our ability to observe and analyze the phenomena that science
takes as objects of inquiry.23 To take only a few recent examples, the scanning tun-
neling microscope has allowed much better imaging of plant and animal cells; the
Hubble space telescope has given us new insights into the age and size of the uni-
verse and how it was created; particle accelerators have enabled physicists to obtain
a better understanding of the basic constituents of matter; and magnetic resonance
imaging has played a key role in the rapid development of neuroscience, the study
of the brain and how it functions. It is thus no exaggeration to claim that scientific
“instruments shape research, determine what discoveries are made, and perhaps
even select the types of individuals likely to succeed as scientists.”24
There is a final and less immediately evident contribution that technology has
made to scientific progress. Although an effort has been made here to demonstrate
that science has not always been decisive in the development of new technologies,
the opposite is widely believed. To a significant degree, this faith in the practical
consequences of scientific research has given science the immense prestige and legiti-
macy that it enjoys today. Many areas of scientific inquiry have become increasingly
expensive propositions. To take two admittedly extreme examples, the James Webb
Space Telescope, which is slated to replace the Hubble Space Telescope in 2018,
is expected to cost $8.7 billion over a 5-year period, while Europe’s Large Hadron
Collider carried an initial price tag of $4.9 billion. Neither of these projects is
expected to generate technological spinoffs, but many other research programs
would die for lack of funding without the promise of some practical paybacks. Over
the years, quite a lot of sophisticated and expensive biological research has
been justified on the grounds of its potential contribution to curing cancer, a hope
that has yet to be realized. Biological research is hardly unique in this respect, as
scientists have become quite proficient in writing grant applications that stress the
potential useful outcomes of their abstract inquiries.
Financial support, however important it is to the maintenance of scientific
inquiry, is only part of the picture. The willingness of government agencies to
grant money for scientific research and of citizens to have their taxes used in this
manner is indicative of a widespread belief in the legitimacy of scientific research.
This legitimacy is in large measure the product of the presumed ability of science
to ultimately produce practical results. These ascribed powers of science have been
analyzed by Langdon Winner:25
[The ultimate success of science] must be accounted to its fulfillment of Baconian
ambitions—the delivery of power. Other modes of knowing have been able to give
an intelligible, systematic, aesthetically pleasing picture of reality. If science had
only been able to accomplish this and nothing more, it is likely that it would have
been supplanted by yet another philosophy of inquiry. But in the West at least,
the test is not so much what do you know? or how elegant is your interpretation of
worldly phenomena? but rather, what can you actually do? This is the conclusive
factor, the reason that, for instance, social science has never fully established its
credentials in the halls of science.
Science succeeds over rival ways of knowing—poetry, religion, art, philosophy,
and the occult—not by its ability to illuminate, not even by its ability to organize
knowledge, but by its ability to produce solid results. . . . In the last analysis, the
popular proof of science is technology.
This expected ability of science to “deliver the goods” is somewhat paradoxi-
cal, for science as a system unto itself responds rather poorly to economic needs.
This has even been made into a virtue by many scientists who pride themselves
on their insulation from the crass demands of the marketplace. As we have seen,
the autonomy of scientists has been legitimized by the conception of science as
a detached exercise in free inquiry. But it is also the case that the unpredictable
nature of scientific discovery often precludes the possibility of useful discoveries
being produced to order.
Scientific research, especially when directed at the discovery of basic prin-
ciples, is an uncertain endeavor that cannot be guided by schedules and routinized
procedures. This is illustrated by the response of one researcher who was offered
more research funds by an officer of his company in the hope that the conclusion of
a particular project could thereby be hastened. The researcher replied that it would
be just as logical to expect that eggs could be made to hatch in half the normal time
if twice as many hens were recruited to sit on them. Not only are the paths of sci-
entific discovery full of twists and turns, but many of them terminate in dead ends.
Of course, technology may also be incapable of solving the problems presented to it.
If it were otherwise, we would have a cure for cancer by now. But most technology
is directed toward the solution of specific problems, which narrows its scope and
makes it a more predictable enterprise than science.
Scientific knowledge often ends up being extremely useful to technology, but
in most instances that is not why it was produced. Scientists typically create knowl-
edge for other scientists. Their efforts are focused on the testing of theories and the
solution of problems that have been generated by previous scientific inquiries. If
scientific knowledge is used for technological applications, it is because engineers
and other technologists have appropriated it for their own use. In most places where
science and technology meet, engineers and technicians “pull” knowledge out of
science. Only in rare instances is knowledge directly relevant to technological
application “pushed” by science itself.26
When knowledge is “pushed” from science into technology, it often happens
indirectly. The transfer of knowledge from science to technology can be a subtle
process, with scientific research motivating technological change by pointing out
unseen problems and at the same time suggesting new opportunities. This happened
in the 1930s when the evolving science of aerodynamics showed how the behavior
of aircraft changed dramatically at high speeds. This research clearly indicated that
conventional propeller-driven airplanes would encounter an insurmountable veloc-
ity barrier as they approached the speed of sound. At the same time, aerodynamic
research indicated that proper streamlining could greatly increase the speed at
which airplanes could fly, provided they had a different method of propulsion. In
making these discoveries, aerodynamic researchers generated a powerful impetus
for the development of jet engines that produced more power and did not have the
inherent limitations of existing power systems.27
The Commonalities of Science and Technology
Up to now, this chapter has stressed the differences between science and technol-
ogy. At this point some mention should be made of the characteristics that they
have in common. Both are based on the gathering of knowledge, and they both
advance through the cumulative development of that knowledge. Isaac Newton is
reputed to have said that he could see farther because he stood on the shoulders
of giants. That is, his scientific discoveries were based on knowledge produced by
earlier scientists. The same holds true for modern technology. Just as the scientific
achievements of an individual chemist owe a great deal to the past research efforts
of other chemists, the work of an aerospace engineer draws upon the accomplish-
ments of other aerospace engineers.
More generally, science and technology have been nourished by a supportive
culture at least since the days of the early Industrial Revolution. Although science
provided few direct inputs into early industrialization, the values and attitudes of
engineers and mechanics had much in common with those of scientists. As Peter
Mathias has described this era:28
Together, both science and technology give evidence of a society increasingly
curious, questioning, on the move, on the make, having a go; increasingly seeking
to experiment, wanting to improve. So, much of the significance [of the cultural
climate] impinges at a more diffused level, affecting motivations, values, general
assumptions, the mode of approach to problem-solving, and the intellectual milieu
rather than a direct transfer of knowledge.
A key component of the shared culture of modern science and modern technology
is their reliance on the rational thought processes described in Chapter 1. Although
the development of both science and technology requires intuitive and other nonra-
tional modes of thought, rationality is essential to the general methodology of science
and technology. In general, a rational approach includes a propensity to challenge tra-
ditional intellectual authorities; a willingness to settle questions through observation,
testing, and experimentation; and a desire to develop exact methods of measurement.29
Some of the basic elements of this mode of inquiry are described by Robert
Pirsig in Zen and the Art of Motorcycle Maintenance, where he explains how even a
clearly technological task like determining why a motorcycle won’t start is addressed
through the use of procedures that have much in common with scientific inquiry.30
As a first step, a mechanic might formulate the hypothesis that the battery is dead;
he or she will then try to honk the horn to see if the battery is working. If the horn
honks, the mechanic concludes that the problem doesn’t lie with the battery and
proceeds to other parts of the electrical system. Should tests performed on these
components show them to be in good shape, the mechanic may hypothesize that the
problem lies with the fuel system and conduct tests (experiments) to check them
out. And so it goes, with the formulation of a series of hypotheses and the conduct-
ing of experiments to test them. In the end the problem is isolated and perhaps
fixed; if nothing else, you know what is wrong as you push your motorcycle along
the side of the road.
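The procedure Pirsig describes amounts to an ordered hypothesize-and-test loop, and its structure can be made explicit in a few lines of code. The following Python sketch is purely illustrative: the check functions and their hard-coded observations are hypothetical stand-ins for the mechanic's actual experiments.

# An illustrative hypothesize-and-test loop. Each experiment either
# exonerates a subsystem or leaves the hypothesized cause standing.

def battery_is_dead():
    horn_honks = True        # placeholder observation: try the horn
    return not horn_honks    # a working horn exonerates the battery

def ignition_is_faulty():
    spark_present = True     # placeholder observation: check for spark
    return not spark_present

def fuel_system_is_blocked():
    fuel_reaches_carburetor = False  # placeholder observation
    return not fuel_reaches_carburetor

def diagnose():
    # Hypotheses are tested in order; the first one that an experiment
    # fails to rule out is retained as the working explanation.
    hypotheses = [
        ("dead battery", battery_is_dead),
        ("faulty ignition", ignition_is_faulty),
        ("blocked fuel system", fuel_system_is_blocked),
    ]
    for cause, experiment in hypotheses:
        if experiment():
            return cause
    return "no cause isolated; formulate new hypotheses"

print(diagnose())  # with these placeholder observations: blocked fuel system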
Of course, one shouldn’t take this analysis too far. Although both science and
technology make heavy use of rational modes of thought, neither can be properly
characterized as the embodiment of rationality. Scientific theories must be logically
consistent and rationally articulated, but their ultimate source is human creativity
and imagination—qualities often at a considerable distance from rational thought
processes. At the other end of the scientific enterprise, the testing of these theories,
there are no perfectly rational means of determining the criteria through which
theories can be validated or disproved. Even empirically derived “facts” can be
subject to interpretation, and general world views can strongly affect what is accept-
able as “proof.”31 In similar fashion, a great deal of technological advance is also
the product of nonrational thought. And, as was noted earlier, the benefit or harm
of a particular technology cannot always be adjudged according to criteria based on
rationally determined principles; a great deal hinges on values and ethical standards
that are derived through other means.
Other commonalities between science and technology can be noted. Math-
ematics is important to both as a kind of language and as an analytical tool. The
practice of both science and technology requires university-based training that can
stretch out for many years. Also, engineers and other technological practitioners
employ organized knowledge that is presented and diffused through journals, books,
blogs, and professional meetings that have many similarities to those found in the
realm of science. And although engineers usually work for firms that try to retain
exclusive use of innovative products and processes that were developed in-house,
there can be a surprising willingness on the part of engineers to share their knowl-
edge with engineers employed elsewhere.32
Although the sharing of information has long been a characteristic of sci-
ence, in recent years an increasing number of scientific discoveries have come to
be treated as proprietary information. This tendency has been particularly evident
in biotechnology, where basic research is often essential for the rapid development
of biological and medical technologies. Under these circumstances, the usual dis-
tinction between basic science and technological application no longer has much
meaning.33 In this field, and in a growing number of others, the distinction between
science and technology has become so blurred that both can be subsumed under a
single rubric, “technoscience.” Since innovative, biologically based technologies
can generate very large profits for the firms that develop them, these firms are likely
to be reluctant to share their discoveries with the scientific community as a whole.
It is not just for-profit private firms that have a reason for keeping scientific knowl-
edge under wraps; universities are major players in industries based on cutting-edge
technologies. Consequently, the research conducted in their laboratories may even-
tually generate substantial revenues. For universities and private firms alike, the
lucrative coupling of basic research with technological application may seriously
inhibit the sharing of new information, substances, and devices. These restric-
tions violate a basic canon of scientific culture—the free distribution of ideas and
research findings—and, in the long run, they may result in a slower rate of progress
for both science and technology.
Finally, at the core of the common culture of science and technology is a sense
of optimism and progress within their own realms. Science and technology are
dynamic enterprises that build on past successes, but they also make profitable use
of their failures. An inadequate scientific theory may lead to the formulation of a
better one, and a collapsed bridge is likely to provide valuable lessons that help
to prevent future failures.34 Above all, science is predicated on the belief that the
world is knowable, while technology is animated by a conviction that it will always
be possible to do something better. Both of these beliefs contribute to the dynamic,
essentially optimistic spirits of science and technology.
Although there are broad similarities between science and technology today, their
coexistence is problematic, much as it has been in the past. For Melvin Kranzberg,
their coexistence has been marked by the same kind of tensions and attractions that
characterize the marriage of a man and a woman. In Kranzberg’s words:35
History suggests that science and technology, though wedded today, went through
a long, indifferent courtship. They grew up independently, either oblivious to
each other’s existence or taking scornful note of the other’s presence. When they
reached the age of puberty—the scientific revolution in the case of science and
the Industrial Revolution in the case of technology—a mild flirtation ensued.
The marriage, when it came at last, was a marriage of convenience and necessity,
certainly no love match. Insofar as military needs helped to bring about many a
daring and secretive meeting, the ceremonies, when finally reached, could be called
a shotgun wedding; and the couple, predictably, has not lived happily ever after.
Each partner has retained a good deal of independence, though lately both have
been having identity problems. There are constant bickerings about who is
contributing more to the marriage. They quarrel over mutual responsibilities, the
education of their offspring, and, as might be expected, the household budget.
It is a very modern marriage. Science and technology live independently, yet
coordinately, as if they had but one joint bank account and one car. Divorce is
frequently discussed. It is invariably rejected, however, because the scandal would
surely deface the public image of the parties, and because, I suspect, of the indis-
putable pleasures and the learned frivolities of the bed.
The Translation of Science into Technology
Today, many technologies make heavy use of the products of scientific inquiry.
Much of this use, however, is indirect. A great deal of scientific information finds its
way into technological practice through the education of engineers.36 The findings
of basic scientific research eventually appear in handbooks, university courses, and
textbooks. Much of the scientific knowledge presented in these ways is eventually
drawn on during the course of technological development.37
Even here there can be problems. It has been argued that a significant amount
of engineering education has been distorted by overreliance on science-based
instruction. This has led to a devaluation of nonverbal thought, an excessive stress
on mathematics, and an unwillingness to tackle problems that do not have a single
unique solution.38 Scientific thinking converges toward a single (if temporary) set
of theories, while the history of technology is replete with examples of the old say-
ing that there’s more than one way to skin a cat. An excessive focus on the prin-
ciples and methods of science may therefore restrict creativity and lead to an overly
rigid approach to the solving of technological problems.
There is no getting around the fact that despite all that they have in common,
science and technology usually operate in different worlds. If the two are to share
in a productive symbiosis, they must be sustained by continual efforts to span the
differences that separate them. In many cases, technological development has been
stimulated by the presence of individuals and organizations that simultaneously par-
ticipate in scientific and technological communities. Their primary role is to serve
as translators, “decoding information generated in one system and transforming it
into information usable in another.”39
This process can be seen in the events that culminated in the invention of
the vacuum tube, which in the pre-transistor era was an essential part of radio and
e–1
The Edison effect: Thomas Edison inserted a metal plate into a bulb and noted that when
the power was turned on, a meter attached to the plate indicated that current was flowing
through the air between the glowing filament and the plate.
Volti7e_05_ch04_061-078_highres.indd 74 28/11/12 5:56 PM

Scientific Knowledge and Technological Advance 75
television technology. The story began with Edison’s invention of the light bulb.40
While trying to determine why dark deposits were forming on the interior walls of
the bulbs, Edison found that the needle of a galvanometer deflected when a wire
probe was placed in a circuit between the galvanometer and the bulb’s glowing
filament. Edison did not understand what was producing the flow of electrical cur-
rent through thin air, although he patented the modified bulb for use as a voltage
indicator. (Many years later, the realization came that the current was produced by
the migration of electrons from the negatively charged filament to the positively
charged probe.)
The Edison effect: Thomas Edison inserted a metal plate into a bulb and noted that when the power was turned on, a meter attached to the plate indicated that current was flowing through the air between the glowing filament and the plate.
Nothing practical came of Edison’s discovery until John Ambrose Fleming
renewed his acquaintance with these specially equipped bulbs. During the 1880s
and 1890s Fleming had conducted a number of experiments using these bulbs; his
sporadic efforts produced useful scientific knowledge, but no technological appli-
cations. Things began to change in 1899 when he became technical advisor to
Guglielmo Marconi’s Wireless Telegraphy Company. At that time, the chief need of
the infant radio industry was for a detector that could efficiently convert the weak
oscillatory current of radio waves into direct current. After a few years’ work with
other devices, in 1904 Fleming came to the sudden realization that the specially
equipped light bulbs with which he had previously worked might be used for this
purpose. His hunch proved to be correct, and the “oscillation valve,” as he named
the device, began to be commercially used for the detection of radio signals a short
time later.
Fleming had not been the only one to experiment with modified light bulbs,
but he had been uniquely situated to act as a “translator” between science and
technology. He was not an inventor like Edison or a full-fledged scientist like other
experimenters. Rather, he was a scientifically trained engineer and teacher who
was closely associated with the electrical industry and with engineering-training
institutions. These separate but interrelated roles gave him the knowledge and the
motivation to convert a scientific curiosity into a practical technology.
John Ambrose Fleming with a vacuum tube diode. (UPI/Bettman/Corbis)
This chapter began with the assertion that technology is not applied science
and went on to provide some evidence for this statement. It is undeniable that
technology today makes extensive use of scientific knowledge. But as we have
seen, scientific knowledge often makes its way into technological practice in a very
roundabout way. At the same time, a considerable amount of scientific advance
stems from prior achievements in technology. Science and technology have evolved
along separate paths that often intersect. At these points of intersection each has
often contributed to the other’s development. Both science and technology seem to
do best when they remain in close contact, but this should not obscure the fact that
in most instances they remain very different enterprises.
Questions for Discussion
1. In 1993 the U.S. Congress canceled one of the most ambitious scientific research
projects of all time, the superconducting supercollider for high-energy physics.
One of the major reasons for canceling the project was its cost, which was esti-
mated to be at least $8.5 billion. In the years that followed, the U.S. government
continued to support the international space station, a project that will end
up costing more than $100 billion by the time it is completed. Why has one
project received financial support while the other was killed? Was the perceived
scientific value of the projects the paramount concern of congressional decision
makers?
2. Why have science and technology been so closely associated in popular
thought? How does each of them gain from this association?
3. Monetary considerations aside, which would you find more personally satisfy-
ing: making a scientific discovery or inventing a useful technology? Why?
4. Quite a few research projects in chemistry, physics, and biology receive grants
for millions of dollars, whereas most researchers in the social sciences and
humanities consider themselves lucky to receive a few thousand dollars in
grant aid. Why is this so? Does it represent a proper distribution of research
funds?
5. Students in engineering programs typically take a substantial number of sci-
ence and mathematics courses. Should some of these courses be eliminated and
replaced with different kinds of courses? If so, which courses should be taken
instead?
Notes
1. John D. Verhoeven, “The Mystery of Damascus Blades,” Scientific American 284,
1 (January 2001), pp. 74–79.
2. Thomas S. Kuhn, “Comment on the Principle of Acceleration,” Comparative Studies in
History and Society 11, 4 (1969): 427.
3. Thomas S. Kuhn, The Rate and Direction of Inventive Activity (Princeton, NJ: Princeton
University Press, 1963), pp. 450ff.
4. Kuhn, “Comment,” p. 428. See also Thomas S. Kuhn, The Essential Tension: Selected
Studies of Scientific Tradition and Change (Chicago: University of Chicago Press, 1977).
5. Chalmers W. Sherwin and Raymond S. Isenson, “Project Hindsight,” Science 156, 3782
(June 23, 1967): 1571–1577.
6. Richard Chait et al., Enhancing Army S&T: Lessons from Project Hindsight Revisited
(Washington, DC: National Defense University, Center for Technology and National
Security Policy, 2007), pp. 23–24. Accessed on June 18, 2008, at http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=ADA466795&Location=U2&doc=GetTRDoc.
7. James M. Utterback, “Innovation in Industry and the Diffusion of Technology,” Science
183, 4125 (February 15, 1974): 622.
8. Sherwin and Isenson, “Project Hindsight,” p. 1575. See also Wendy Faulkner,
“Conceptualizing Knowledge Used in Innovation: A Second Look at the Science–
Technology Distinction and Industrial Innovation,” Science, Technology, and Human
Values 19, 4 (Autumn 1994), pp. 435, 454.
9. J. E. S. Parker, “The Economics of Innovation,” p. 30.
10. Edwin Layton, “Mirror-Image Twins: The Communities of Science and Technology in
Nineteenth Century America,” Technology and Culture 12, 4 (October 1971): 564–565.
11. Donald E. Stokes, Pasteur’s Quadrant: Basic Science and Technological Innovation (Washington,
DC: Brookings Institution, 1997), pp. 55–57.
12. Otto Mayr, “The Science–Technology Relationship as a Historiographic Problem,”
Technology and Culture 17, 4 (October 1976): 668.
13. See, for example, Bruno Latour, Science in Action: How to Follow Scientists and Engineers
through Society (Cambridge, MA: Harvard University Press, 1987).
14. F. R. Jevons, “The Interaction of Science and Technology Today, or, Is Science the
Mother of Invention?” Technology and Culture 17, 4 (October 1976): 731.
15. Kuhn, “Comment,” p. 429.
16. Eugene S. Ferguson, “The Mind’s Eye: Nonverbal Thought in Technology,” Science 197,
4306 (August 26, 1977): 827.
17. Joseph Ben-David, “Roles and Innovation in Medicine,” American Journal of Sociology
65, 6 (1960): 557–568.
18. Eda Fowlks Kranakis, “The French Connection: Giffard’s Injector and the Nature of
Heat,” Technology and Culture 23, 1 (January 1982): 3–38.
19. Nathan Rosenberg, Inside the Black Box, pp. 146–147.
20. Frederik Nebeker, “Charles Townes, the Maser, and the Relationship between
Engineering and Science,” Engineering Science and Education Journal (December 1995):
S41–S46.
21. Edwin T. Layton, Jr., “American Ideologies of Science and Engineering,” Technology and
Culture 17, 4 (October 1976): 688.
22. Eric Drexler, “The Nobel Prize for Technology” (January 8, 2009), accessed on January 12,
2012, at http://metamodern.com/2009/01/08/the-nobel-prize-for-technology/.
23. John P. McKelvey, “Science and Technology: The Driven and the Driver,” Technology
Review 88, 1 (January 1985): 38–47.
24. Philip H. Abelson, “Instrumentation and Computers,” American Scientist 1 (1986).
Quoted in Paul DeHart Hurd, “Technology and the Advancement of Knowledge in
Science,” Bulletin of Science, Technology, and Society 14, 3 (1994): 126.
25. Langdon Winner, Autonomous Technology: Technics-Out-of-Control as a Theme in
Political Thought (Cambridge, MA: MIT Press, 1977), pp. 24–25.
26. Hugh G. J. Aitken, Syntony and Spark: The Origins of Radio (Princeton, NJ: Princeton
University Press, 1985), p. 316.
27. Edward Constant, The Origins of the Turbojet Revolution (Baltimore: Johns Hopkins
University Press, 1980), pp. 15–16.
28. Peter Mathias, “Resources and Technology,” in Peter Mathias and John A. Davis (Eds.),
Innovation and Technology in Europe: From the Eighteenth Century to the Present Day
(Oxford, England, and Cambridge, MA: Basil Blackwell, 1991), p. 37.
29. Peter Mathias, “Who Unbound Prometheus? Science and Technical Change, 1600–1800,”
in Peter Mathias (Ed.), Science and Society 1600–1900 (Cambridge, England: Cambridge
University Press, 1972), p. 79.
30. Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance (New York: Bantam Books,
1974), pp. 99–103.
31. John Ziman, An Introduction to Science Studies: The Philosophical and Social Aspects
of Science and Technology (Cambridge, England: Cambridge University Press, 1984),
p. 104.
32. Ann Johnson, Hitting the Brakes: Engineering Design and the Production of Knowledge (Durham and London: Duke University Press, 2009).
33. Rebecca S. Eisenberg and Richard R. Nelson, “Public vs. Proprietary Science: A
Fruitful Tension?” Daedalus 131, 2 (Spring 2002), p. 91.
34. See Henry Petroski, To Engineer Is Human: The Role of Failure in Successful Design (New
York: St. Martin’s Press, 1985).
35. Melvin Kranzberg, “Let’s Not Get Wrought Up About It,” Technology and Culture 25, 4
(October 1984): 742.
36. James M. Utterback, “Innovation in Industry and the Diffusion of Technology,” Science
183, 4125 (February 15, 1974): 622.
37. Faulkner, “Conceptualizing Knowledge,” p. 427.
38. John Rae and Rudi Volti, The Engineer in History, 2nd ed. (New York: Peter Lang,
2001), pp. 190–192.
39. Aitken, Syntony and Spark, pp. 335–336.
40. George Shiers, “The First Electron Tube,” Scientific American 220, 3 (March 1969):
104–112.
Chapter Five: The Diffusion of Technology

A technology is of no use unless it is put to use. This is a crushingly obvious state-
ment, but it does serve to remind us that a technology that never makes it out of
a laboratory or an inventor’s basement might just as well not exist. The last two
chapters have examined some of the ways in which technologies have come into
being; the task of this chapter is to describe how they are adopted by the individuals
and organizations that actually put them to use. Our concern here is the diffusion
of technologies—that is, the processes through which they spread from their initial
sources into homes, factories, offices, and so forth. To begin, we will consider the
diffusion of technologies from one country to another. In the second part the focus
will narrow as we look into the process of technological diffusion at the level of the
individual business firm.
The International Diffusion of Technology
There is an understandable human tendency to believe that progress, technologi-
cal or otherwise, is largely the result of efforts of one’s own people. We exalt native
scientists, entrepreneurs, and inventors, and see in their efforts the major source
of our technological advance. The Soviet Union in the Stalin era carried this
to ridiculous lengths when Soviet historians claimed that their countrymen had
invented everything from the steam engine to baseball, but many other countries
have exhibited strains of technological nativism. The truth of the matter is quite
different. Although indigenous skills and achievements are an indispensable part of
technological advance, few technologically dynamic countries have been isolated
from the influences of other lands. Continued technological advance requires an
infusion of ideas, tools, and materials from other places, coupled with an ability to
make good use of them.
Consider the world in the year 1500. There can be little doubt about what
country had produced the most technological innovations up to that time. It was
not England, nor was it France, Italy, or Germany. The country with the greatest
heritage of inventive achievements was China. By 1500, Chinese technologists
had produced not only the compass, paper, and gunpowder—the familiar textbook
examples of Chinese inventive ability—but also had been responsible for such
inventions as the segmental arch bridge, the chain drive transmission, the spinning
wheel, watertight bulkheads for ships, printing, fishing reels, paddlewheel boats, the
differential gear, earthquake detection devices, and of course the fine porcelain that
takes its name from its country of origin.1 Europe too could claim some significant
technological advances, most notably in mining and cathedral building, but on the
whole, there could be little question where the greater degree of inventiveness had
been residing.
Yet within less than two centuries China’s lead had dissipated, and the European
continent exhibited the greater degree of economic and technological vitality.
Much of this dynamism could be attributed to the successful adoption of Chinese
inventions by European countries. Paper was an essential complement to printing;
together they made possible the rapid circulation of new ideas and stimulated the
growth of literacy, which in turn directly contributed to major historical changes
such as the Protestant Reformation and the spread of capitalism. The magnetic
compass greatly aided transoceanic navigation and allowed the spread of European
economic power throughout the world. And gunpowder helped to conquer and
secure that world for the benefit of Europe.
The successful use of foreign technologies in early modern Europe was very
much in accordance with long-standing historical patterns. Many of the most
significant inventions used by the Western world up to that time did not originate
there. The stirrup, which initiated major changes in medieval warfare and in so
doing reshaped the social system, was not a Western invention, but diffused from
Persia sometime after the eighth century, although it too probably originated in
China.2 Horses could not be used effectively as draft animals until a practical collar
diffused from Asia to Europe in the eighth century.3
Much of the economic dynamism and prosperity of the Western world there-
fore can be traced to the willingness of its people to accept and make good use of
technologies that originated elsewhere.4 By contrast, China was far less open to
foreign inventions. Secure in the belief that they inhabited the “Middle Kingdom”
(the literal translation of Zhong Guo, the name by which the Chinese called their
country), the Chinese looked at the outside world with more than a trace of conde-
scension. This attitude was particularly evident when China came into contact with
the expansionary nations of Europe. During the late eighteenth century, English
merchants and officials scoured the world in search of customers for the products
of English factories, but they found no takers in China. The emperor made it very
clear to one envoy of King George III that “there is nothing we lack. . . . We have
never set much store on strange or ingenious objects, nor do we need any more of
your country’s manufactures.”5
China’s ruling elite persisted in this attitude. Toward the end of the nineteenth
century the scholar Kang Youwei advocated the combination of Western techniques
with traditional Chinese culture. Although he won the support of the emperor, the
reform movement that was initiated in 1898 was aborted in a few months when
the emperor was deposed and replaced by the reactionary dowager empress, Dzu Xi.
The early years of the Chinese Republic (1912–1949) saw a considerable infusion
of Western ideas and technologies, but these were eclipsed by Chiang Kai-shek’s
New Life Movement, which stressed a return to Confucian virtues as the cure for
China’s weakness.
Even the emergence of a Communist state in 1949 did not result in a whole-
hearted acceptance of foreign ideas and ways of doing things. Although a consider-
able amount of technology transfer from the Soviet Union took place during the
early 1950s, the Chinese were offended by Russian arrogance. The political rift
between the two countries closed off China’s main source of advanced technology,
while at the same time, Maoist ideology contributed to an increasing aloofness to
the outside world and stressed self-reliance in technological matters. Only in recent
decades has China exhibited an openness to the outside world. In the words of one
Chinese journalist, “No nation has a monopoly on culture. Cultural exchanges
between nations are inevitable and irresistible. . . . Whatever will facilitate China’s
material modernization and its cultural advancement is welcomed.”6
In contrast, people and business firms in the United States have a long history
of borrowing technologies that originated elsewhere. One could practically date
the start of America’s industrial revolution to the year 1790, when Samuel Slater,
who had recently emigrated from England, used his accumulated knowhow to
construct the first successful mechanized spinning factory in the United States. In
the years immediately following, the United States was so deficient in indigenous
technological capability that many implements and techniques had to be imported.
American canal builders found it necessary to secure from England drawings and
examples of devices as simple as a wheelbarrow in order to successfully construct
early canals.7
These were not simply instances of a young and economically immature nation
relying on imported technologies due to temporary incapability. Modern, quint-
essentially “American” innovations such as automobiles, radios, jet airplanes,
compact discs, and the World Wide Web all owe their existence to pioneering
efforts that were undertaken in other lands. And along with imported implements,
materials, and processes, American technological development has been spurred
on by an even more valuable import: people. Although a large reservoir of “Yankee
ingenuity” cannot be denied, many of America’s greatest technological achieve-
ments were produced by immigrants. It would take many pages simply to list some
of the most prominent of these, but a small sample can illustrate the debt that
American technology owes to immigrant scientists, inventors, and engineers. One
of the first plastics, Bakelite, was the work of a man born and educated in Belgium,
Leo Baekeland. The mathematical and engineering work of Charles Steinmetz, a
transplanted German, was essential to the development of the American electrical
industry. The science of aerodynamics owes much of its early development to an
immigrant from Hungary (via Germany), Theodore von Karman. The television
camera was invented by a Russian exile, Vladimir Zworykin. This pattern contin-
ues today. According to one study, between 1995 and 2005 more than a quarter of
new engineering and technology firms had foreign-born founders, CEOs, or chief
technology officers, as exemplified by Sergey Brin of Google and Pierre Omidyar of
eBay.8 Equally important, inventors and entrepreneurs, both immigrant and home-
grown, have benefited from the services provided by legions of unsung men and
women who brought their skills from other lands and in so doing helped to push
American technology to new heights.
Two key figures in the history of American technology, one native born, the other an immigrant: Thomas Edison and Charles Steinmetz. (From the collections of Henry Ford Museum and Greenfield Village)
A heavy reliance on foreign technologies also can be seen in the development
of another technological colossus, Japan. The aggressive industrialization effort
launched after the Meiji Restoration in 1868 was at first based on the importation
of foreign equipment such as locomotives and textile machinery. Foreign technical
advice was also avidly sought; during the early years of the new emperor’s reign,
the salaries of foreign technicians absorbed 6 percent of the central government’s
budget, and 40 to 50 percent of the budget of the Ministry of Industry for the dura-
tion of its existence.9
As Japan’s industrial enterprises began to emerge, their products were often
outright copies of foreign products. At times, Japanese attempts to copy foreign
artifacts produced comic results, as exemplified by the factory with a sawtooth roof
that faced the wrong way so no light shone through the windows, or the tailor who
stitched together a Western-style suit complete with a patch on the pants. But with
mounting experience, Japanese workers and managers were able to effect substantial
improvements until some of their products were the equal of Western manufactured
goods. An early example of this was the Toyoda automatic loom, which in 1929 was
exported to a textile mill in England, the birthplace of the Industrial Revolution.10
Certainly no one laughs at Japanese technology today, although the accusation is
occasionally still made that the country excels in the refinement of existing tech-
nologies but produces few completely new ones.
Today, a considerable amount of technology transfer occurs when firms in
the economically advanced nations of the world establish factories in developing
countries, some of them engaged in the manufacture of highly sophisticated elec-
tronic equipment such as computers and smartphones. For critics, the appeal of
these offshore enterprises centers on the availability of cheap labor; their output
consists of assembled products that require few skills on the part of the assemblers.
As a result, these enterprises are little more than enclaves of modernity with few
if any connections to the host country’s economy and society. In contrast, a more
optimistic assessment of these enterprises stresses their positive role in stimulat-
ing the development of workers’ skills and technological advance. Governments
that are committed to economic and technological advance know that labor gets
more expensive as a country develops and workers demand their share of a rising
national income. Under these circumstances, low-cost labor ceases to be a nation’s
main advantage. Consequently, countries like Taiwan, Korea, and China (as well
as Japan in an earlier era) have made concerted efforts to upgrade the skills of their
labor forces so that the quality of their labor makes up for the higher wages paid
to workers. No less important, these countries—China especially—have used mul-
tinational firms as sources of up-to-date technologies by requiring the transfer of
these technologies as preconditions for allowing these firms to set up shop on their
soil.11 And although it is hardly a commendable business practice, many firms, espe-
cially Chinese ones, have gained a fair amount of technological knowhow through
outright piracy.
A fair amount of technology transfer may occur when advanced production lines are established in developing countries, as with this tablet production line in China. However, the contributions of these enterprises to a country’s overall technological development have been a matter of debate. (Nelson Ching/Bloomberg via Getty Images)
Clever Copyists
In calling attention to piracy and to less flagrant forms of technological borrowing,
there is no intention to attribute the achievements of the United States, Japan, and
today’s China to technological plagiarism. It is not an easy matter to copy a tech-
nology that was produced somewhere else. Having an example of a machine or a
material in front of you helps, but it hardly guarantees success. Life would be simpler
and easier if technological advance were simply a matter of importing a prototype
and then engaging in “reverse engineering.” It does work occasionally, but all too
often the process ends in frustration, as when the Chinese government reportedly
spent $300 million in the 1980s in a failed attempt to produce a workable copy of
the Boeing 707 jetliner.12
The very nature of technology makes copying a difficult enterprise. As was
stressed in earlier chapters, technologies have to be considered as systems, and if one
element is missing or deficient, even seemingly simple tasks such as copying a par-
ticular item may be impossible. A successful technology transfer requires numerous
complementary inputs, many of which have to be developed and produced by the
recipient.13 There are occasional possibilities for “leapfrogging” over whole stages
of development through the importation of foreign technologies. For example, in
many parts of the world, cell phones have given millions of people access to tele-
phone service by eliminating the need for an expensive wired infrastructure. But
this is an exception; more often than not, the successful use of advanced imported
technologies depends on the preparation that comes through involvement with
well-established technologies.
This brings us to a related point. The effective use of foreign technologies
requires a labor force with a broad range of skills. It is essential that native manag-
ers, engineers, technicians, and ordinary workers are capable of making the best of
opportunities that imported technologies present. To do so often entails significant
modifications to an imported technology. Under these circumstances, the applica-
tion of imported technologies is combined with indigenous innovative efforts to the
point where the two are in fact part of a single process.14
A successful user of imported technologies must therefore have a reasonably
high level of indigenous capability. As noted by Vernon Ruttan, “The ability to
screen, borrow and adapt scientific knowledge and technology requires essentially
the same capacity as is required to invent new technology.”15 This is demonstrated
by one study that found that the countries most successful in using foreign tech-
nologies tended to be the ones with high levels of indigenous research and develop-
ment.16 It is also significant that countries with the highest innovation rate make
the quickest use of technologies developed elsewhere.17 A country (or an individual
firm) that draws on technologies developed elsewhere is spared the expense of
“reinventing the wheel,” but making effective use of imported technologies—even
if only direct copying is required—often requires a stock of indigenous skills that
cannot always be easily found.
Adaptation and Adoption
In some cases the diffusion process may result in a technology emerging as some-
thing fundamentally different. This is exemplified by the aforementioned transfer
of gunpowder from China to Europe. The Chinese at first regarded gunpowder as a
medicinal substance, and only after hundreds of years of experimentation did they
begin to employ it for fireworks and on occasion for military rockets, bombs, and
mines. By the end of the thirteenth century the Chinese were using gunpowder to
fire projectiles from vase-shaped guns, but development seems to have stopped at
this point. A few decades later, Europeans began using gunpowder for weapons of
steadily increasing destructive power, thereby transforming the nature of warfare
and ultimately society as a whole.18
The changes made in the course of the diffusion of gunpowder from China to
Europe were monumental, for the nature of the technology changed along with the
purposes to which it was put. In most cases, however, the modifications are far less
dramatic. A process of “fine tuning” takes place as the technology is transferred
from one place to another. As has been noted above, many successfully diffused
technologies succeed only because of alterations and improvements made during
the process of diffusion. Some of these changes may be done in order to adapt an
imported technology to local conditions. In the late nineteenth century, Japanese
entrepreneurs set up textile factories that used ring-spinning machines that had
been invented in Europe. Unlike in Europe, however, these machines were sup-
plied with a combination of high-quality long-staple cotton and cheaper short-
staple cotton. The thread that was produced was lower in quality, but it was less
expensive, an important consideration in a nation that was still poorer than most
European countries.19 Another adaptive change occurred when the railroad, an
English invention, crossed the Atlantic. Railroad systems were built in the United
States with great haste because there were huge expanses of frontier to be opened
up as quickly as possible. One consequence of these conditions was that curves were
much sharper than in England. This in turn necessitated changes in locomotive
design, as American engines were equipped with pilot wheels placed ahead of the
driving wheels to guide them through the curves.
The John Bull had to be fitted with leading wheels to guide it through the sharp curves of an American railroad. (Library of Congress)
Learning to Make Steel in Old Japan
The early development of the Japanese steel industry provides another illustration
of a successfully diffused technology that required many adaptations wrought by
native workers and technicians.20 Although the Japanese had been making small
batches of high-quality steel for centuries, their indigenous industry was incapable
of making iron and steel in large quantities. Japanese steelworkers therefore looked
to Europe for better ways of producing steel. The first Japanese attempts to draw on
foreign techniques came in the 1850s, when they tried to produce cannons
by using methods described in a Dutch book on steel production. Success was slow
in coming. Numerous trials resulted in the production of cannon that burst the
first time they were fired. Yet underlying these failures was a highly useful learning
process. At first, the native iron workers were incapable of melting even half of
the required pig iron, but by the fifth attempt a complete cannon was cast, albeit
not with complete success. Still, even this limited achievement required strenuous
efforts to learn how to treat the raw material and to develop devices and techniques
to bore the barrel of the cannon. Within about 20 years of the original attempt,
the foundry had successfully cast approximately 200 cannons. In accomplishing
this, the cannon founders had been able to draw on indigenous capabilities in such
things as the production of fire bricks for use in the reverberatory furnace in which
pig iron was melted, the construction of water-powered boring machines, and the
employment of high temperature furnaces for melting the iron.
Although their efforts represented an attempt at imitating foreign products
and techniques, more was involved than mere copying. What began as an effort
to imitate foreign techniques ended successfully only when indigenous capabili-
ties were applied to the solution of production and design problems. The process
continued in 1880 when foreign engineers were recruited to oversee an ambitious
effort to increase steel production. Initially this too was a failure, largely because the
foreign experts did not take into account the special characteristics of Japanese coal
and iron ore. But even this failed attempt to produce steel through the importation
of foreign materials and experts ultimately met with success 12 years later when a
Japanese engineer and one of his students reopened the dormant blast furnace after
solving the problems created by the use of native coke and iron ore.
The point of this brief narrative is simply that Japan’s successful nineteenth-century
attempt to create a modern iron and steel industry was the result of importing for-
eign equipment, techniques, and expertise while at the same time making full use
of the native capabilities that were required for their successful employment. This
example shows that technological diffusion is not a one-way process; without active
participation by the recipients of the foreign technology, an imported technology
may end up an expensive failure. In turn, effective participation often is the product
of previous efforts at indigenous technological transformation. The issue is not one
of “imitation” versus “self-reliance,” but of how to achieve an effective blending of
the two.
Appropriate Technology
Up to this point there has been an unspoken assumption that the use of technolo-
gies developed in advanced nations is essential to the technological and economic
modernization of developing countries. Modifications may have to be made to these
technologies, but, on the whole, technologies of foreign origin are a sound basis
for indigenous development. This assumption can be challenged by noting that
the countries receiving foreign technologies today are not like nineteenth-century
Japan and the United States. Most are far poorer, not just in terms of wealth, but
also in regard to human skills; this creates numerous problems in effectively using
the sophisticated technologies that have been created for the developed nations of
the world. Moreover, poor countries have large and growing populations, making the
creation of jobs one of their prime needs. Although in recent years the rate of popu-
lation growth has slowed in many poor countries, there still remains a huge cohort of
new entrants to the labor force. As a result, youth unemployment rates are still stub-
bornly high; according to a recent estimate by the International Labor Organization,
88 million young people around the world are struggling to find a job.21
Unfortunately, many of the technologies that have been developed in the eco-
nomically advanced nations have been designed to save labor, not to maximize its
use. Labor is often saved by using technologies that allow the substitution of capital
for labor, yet one of the distinguishing features of poor nations is that they are criti-
cally short of capital. And finally, is it reasonable to expect that the world as a whole
can sustain the global diffusion of the technologies that have been developed by the
rich nations of the world? It is hard to imagine China or India consuming resources
at the rate the United States does. To take one notable example, China at the end
of 2010 had an approximate ratio of one car per 17.2 persons, about one-tenth the
ratio in the developed countries.22 If China, a nation of 1.4 billion people, were to
achieve a car-to-population ratio equal to that of North America, Europe, and Japan, the
prospects for resource depletion and environmental pollution would be grim indeed.
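To put these figures in perspective, a back-of-the-envelope calculation can be made from the numbers cited above. The short Python sketch below is purely illustrative: the population and the one-car-per-17.2-persons ratio come from this paragraph, while the developed-world ratio of roughly one car per 1.7 persons is an assumed round number consistent with the “one-tenth” comparison.

    # Rough estimate of China's car fleet at different car-to-population
    # ratios. The population and the 1-per-17.2 ratio come from the text;
    # the developed-world ratio is an assumption for illustration.
    population = 1.4e9               # China's population (from the text)
    cars_per_person_now = 1 / 17.2   # ratio at the end of 2010 (from the text)
    cars_per_person_rich = 1 / 1.72  # assumed: about ten times higher

    cars_now = population * cars_per_person_now         # ~81 million cars
    cars_at_parity = population * cars_per_person_rich  # ~814 million cars

    print(f"{cars_now / 1e6:.0f} million cars today")
    print(f"{cars_at_parity / 1e6:.0f} million cars at developed-world parity")

On these assumptions, parity would mean adding something on the order of 700 million vehicles, which is the arithmetic behind the grim prospects just described.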
Women and children in poor countries often expend a lot of time and effort to bring water
to their homes. One appropriate technology, the Hipporoller, makes the task easier. (Courtesy
hipporoller.org)
Due to the drastically different circumstances of today’s underdeveloped coun-
tries, technologies that have worked well in the rich nations where they originated
may fail when they are taken to a different setting. Even worse, they may seriously
distort the course of a nation’s development and leave it poorer than when it started
out by concentrating human and financial resources in a few privileged segments of
the economy and society, leaving much of the population, especially those in rural
areas, as poor as ever. To take one particularly notorious example, after Western
experts introduced large tractors into Pakistan, farm owners replaced farm laborers
with the new machines. In one region, 40 percent of these workers lost their jobs
and thus were forced to migrate to the cities and lives of dire poverty. Per-acre crop
yields hardly increased at all.23
According to critics of conventional technology transfers, what poor countries
need is a set of technologies appropriate to the conditions prevailing in these coun-
tries.24 Sometimes they are called “alternative technologies,” sometimes they are
called “intermediate technologies” (because they stand in an intermediary position
between modern and traditional ways of doing things), and sometimes they are sim-
ply called appropriate technologies. These technologies, it is claimed, do more than
boost production; they increase employment, help to redress the maldistribution of
wealth and income, empower people, contribute to economic independence, and
preserve the environment.
Examples of appropriate technologies in developing countries include hand-
powered nut shellers, bicycle-powered pumps for irrigation and water distribution,
and rolling drums for transporting water. Appropriate technologies less directly
connected with production include shelters made out of concrete and canvas, solar-
powered lights, and inexpensive laptop computers for schoolchildren.25 The utility
of these technologies seems evident, and few could take issue with the general
goals of appropriate technology. The difficulty lies in their actual development and
implementation. Consider the first goal: increasing employment. As noted earlier,
in poor countries there is good reason to select technologies that use relatively more
labor than capital. But using labor-absorbing technologies may not result in overall
increases in employment. At first glance, small-size sugar mills in India seem to
be excellent examples of appropriate technology. In the late 1970s they produced
20 percent of India’s output—1.3 million tons of sugar per year—while providing
jobs for more than half a million people. Many jobs were created because the same
amount of capital required to build a large modern sugar mill could be used for the
construction of forty mini-mills; together these employed ten times more people
than the modern mill.26
Unfortunately, however, these apparent employment gains were lost elsewhere
in the system. Although the mini-mills produced two and a half times more sugar
per unit of capital, they incurred much higher labor costs (exactly how high
depended on wage rates in the small mills compared to the large ones). This resulted
in higher prices, lower profits, government subsidies, or some combination of all
three, which in turn prevented funds from entering other sectors of the economy.
Were this not the case, there would have been more money to spend on other goods
and services, and a consequent stimulation of employment in these industries. Thus,
the apparent creation of jobs in small sugar mills in all likelihood produced no net
increases in employment.
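The reasoning behind this conclusion can be made explicit with a small calculation. The Python sketch below uses only the ratios reported above; the normalization of the large mill to one unit of capital, jobs, and output is hypothetical.

    # Compare one large mill with forty mini-mills built for the same capital.
    # All quantities are normalized to the large mill; only the ratios
    # (10x the jobs, 2.5x the sugar per unit of capital) come from the text.
    jobs_large, sugar_large = 1.0, 1.0  # large mill: baseline jobs and output
    jobs_minis = 10 * jobs_large        # mini-mills employ ten times more people
    sugar_minis = 2.5 * sugar_large     # ...and produce 2.5x the sugar

    labor_per_unit_large = jobs_large / sugar_large  # 1.0
    labor_per_unit_minis = jobs_minis / sugar_minis  # 4.0

    print(f"Mini-mills use {labor_per_unit_minis / labor_per_unit_large:.0f}x "
          "as much labor per unit of sugar")

Four times as much labor per unit of output translates into much higher labor costs per ton, which is why the apparent employment gains were dissipated through higher prices, lower profits, or subsidies.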
Many economists would argue that a free market system, with its realistic reck-
oning of capital and labor costs, will naturally result in the use of labor-intensive
appropriate technologies. But the choice of technologies is not always governed
by economic calculations. Technologies may be selected because they meet the
requirements of powerful segments of the society, as when ultra-modern hospital
technologies are installed to serve the urban upper class of a country that lacks basic
medical services for most of its people.
At the same time, even practical, profit-oriented businessmen in underde-
veloped countries can be charmed by the technological sophistication of foreign
technologies that do not make good economic sense. According to one study of
business firms in Indonesia, intermediate technologies resulted in lower costs and
equal product quality, but they were often rejected in favor of advanced technolo-
gies of foreign origin. This was especially likely to happen in firms that had oligopo-
listic control over their market. Under these circumstances, the aesthetic appeal
of sophisticated technologies and the prospect of managing machines instead of
people overrode the economic advantages of more appropriate technologies.27
Finally, a consideration of alternative technologies that is confined to economic
matters is incomplete, for the appropriate technology movement has been moti-
vated by a deep concern with how technologies are connected to larger issues. In
the final analysis, a technology can be adjudged appropriate or inappropriate only by
reference to particular values, and these cannot be defined exclusively in economic
or technological terms. The appropriate technology movement also has directed
attention to the role played by powerful groups and institutions in the choice of
technologies. Choices are likely to be seriously constrained when donors of foreign
aid, international lending agencies, and multinational corporations occupy positions
of dominance. It is also likely that the emergence of appropriate technologies is seri-
ously hindered by the international distribution of R&D expenditures, only about
16 percent of which is spent in the underdeveloped countries of the world.28
Appropriate technologies, however defined, cannot be a panacea for poverty
and backwardness. At the same time, the transfer of sophisticated technologies
from developed countries may cause more harm than good. Choices must be made,
and in poor countries as well as in rich ones, the selection of technologies reflects
many larger issues. One of the most important of these is sustainability. Today’s
technologies have enriched our lives in many ways, but can we continue to degrade
the natural environment while also expecting the perpetual availability of energy
supplies and other resources? In this context, technologies are appropriate when
they save energy, require minimal resources, and relieve stresses on the environ-
ment. Examples of sustainable technologies include products that lend themselves
to recycling, building designs that require little energy for heating and cooling, and
road systems that are safe and convenient for cyclists and pedestrians.
Business Firms and Technological Diffusion
Many of the same processes that characterize the diffusion of technology from one
country to another also manifest themselves when new technologies are acquired
by business firms in the same country. Many factors affect the speed with which
new technologies diffuse. According to Christian P. Tanon and Everett M. Rogers,
these include relative advantages over existing technology, compatibility with exist-
ing values of the firm, the ease or difficulty of understanding and applying the new
technology, the ease in experimenting with the new technology or employing it on
a trial basis, and the extent to which positive results are apparent.29 In highlighting
the importance of ease of use, observability, and comprehensibility, this list carries the
implication that, after all is said and done, it is individual people who determine the
success of a technological innovation. The diffusion of technology is still essentially
a learning process, through which the recipient gains an understanding of how the
technology works while at the same time adapting the technology to the recipient’s
particular needs. In this, the activities of skilled people are usually essential. As Mira
Wilkins explains, “Often the product, or the description in the patent, or mere draw-
ings and instructions, are inadequate for transfers of technology; [people] are needed
to carry, explain, and facilitate the introduction of new processes or products.”30
When we speak of “people,” we mean special kinds of people. The effective
transfer of technology often requires a movement from one environment to another,
such as from a laboratory to a factory. Technology is fundamentally a system of
knowledge, but knowledge that is readily comprehensible in one setting may be a
great mystery in another. As with the conversion of scientific knowledge to tech-
nological practice noted in the previous chapter, successful diffusion sometimes
requires the services of a kind of “translator”—a person capable of functioning in
both settings so that information can be transferred from one to the other.
As Hugh Aitken’s study of the early days of radio indicates, the efforts of
“translators” were crucial to the development of that technology and its adoption
by commercial enterprises. There was the early work of Heinrich Hertz, who “trans-
lated” James Clerk Maxwell’s theories of electromagnetic radiation into a laboratory
technology that allowed an empirical examination of the theory. Following Hertz,
Oliver Lodge moved beyond the laboratory experiment through his development
of an imperfect but workable method of using radio waves for actual communica-
tion. The first stage in the development of radio was essentially completed by
Marconi, who took the embryonic technology and converted it into a practical (and
highly lucrative) commercial business.31 Throughout the course of radio’s history,
information was put into a new form, pushing the technology to the next stage of
development. In Aitken’s summary, “At each stage in the process of translation,
information generated in one system was converted into a form that ‘made sense’
in terms of another; and at each stage new information was blended with what was
already known to create something essentially new.”32
A Risky Business
Some technologies, especially those involving military matters, are sponsored by
governments, but the majority of technological innovations are adopted by private
businesses pursuing their economic goals. Firms may adopt new technologies in
order to cut their costs, improve their products, bolster their profits, penetrate new
markets, or achieve some combination of any or all of these. The speed and extent
of diffusion are therefore strongly influenced by expectations of costs and benefits.
Expectations, however, are not always met. Along with the opportunities pre-
sented by a new technology come uncertainties that cannot be assessed through
conventional economic analysis. No new technology is a “sure thing.” Some end up
embarrassing failures, and even with a technology that is ultimately successful, its
immediate advantages may be slight and its benefits may be a long time in coming.
On a more positive note, some new technologies may exceed expectations when they
are complemented by other technologies. Under these circumstances, new products
and processes are more than the sum of their parts, as when a combination of the laser
and the computer gave birth to supermarket scanners, printers, and robotic surgery.33
It is often the case that the potential of an emerging technology is what mat-
ters, and this may be the hardest thing of all to assess. Many years may elapse before
a technology passes from laboratory feasibility to commercial value, that is, from
invention to innovation. Although some inventions made the transition rapidly—
only one year in the case of Freon refrigerants—others have taken much more
time. In the case of the mechanical cotton picker it was 53 years, and for the zipper,
27 years. The fluorescent lamp took no less than 79 years to make the passage from
laboratory demonstration to commercial viability.34
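The variation in these lags is easier to see when the four examples just cited are tabulated; the following is a trivial sketch using only the figures given above.

    # Invention-to-innovation lags cited in the text, in years.
    lags = {"Freon refrigerants": 1, "zipper": 27,
            "mechanical cotton picker": 53, "fluorescent lamp": 79}
    average = sum(lags.values()) / len(lags)
    print(f"Average lag: {average:.0f} years; "
          f"range: {min(lags.values())} to {max(lags.values())} years")

An average lag of four decades in even this small sample is a reminder of how long the potential of an emerging technology may take to be realized.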
The decision to adopt a technology represents a firm’s assessment of the likely
benefits accruing from the use of the new technology, weighed against the uncer-
tainties that attend its use and the speed at which these uncertainties can be dis-
sipated.35 Given the nature of technological innovation, uncertainties will rarely
vanish completely, but as information is gathered and experience gained, uncertain-
ties can be converted into risks.36 More than terminology is involved here; according
to a distinction made by economists and game theorists, “uncertainty” indicates an
inability to predict an outcome, whereas “risk” implies at least a rough notion of the
probability of success or failure.37 The ability to make calculations of this sort does
not ensure accurate predictions of success or failure, but at least it enables managers
to make informed judgments of their likelihood.
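The practical force of this distinction can be illustrated with a small example. Every number in the Python sketch below is hypothetical; the point is only that once uncertainty has been converted into risk, an expected payoff can be calculated at all.

    # Once a rough probability of success is available (risk rather than
    # uncertainty), an expected payoff becomes calculable. All numbers
    # here are hypothetical.
    p_success = 0.3      # estimated chance the innovation succeeds
    payoff = 10_000_000  # return if it succeeds
    cost = 2_000_000     # cost of adopting the technology

    expected_value = p_success * payoff - cost
    print(f"Expected value: ${expected_value:,.0f}")  # $1,000,000 here
    # Under pure uncertainty, no probability could be assigned and this
    # calculation would be impossible.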
Not all risks are technological in nature. Although any new way of doing things
can be presumed to have its share of bugs and other unanticipated problems, these
may be minor when compared to the commercial uncertainties of a new venture.
Uncertainties of this sort can be particularly problematic, for decision makers usu-
ally find it easier to assess the probability of technical success than the probability
of marketing success.38 “Will it fly?” is easier to determine than “Will anybody pay
money to fly in it?” This was the literal problem for the Anglo–French supersonic
airliner, the Concorde. A stunning technological achievement that required the
solution of countless complex problems, the Concorde was a commercial failure. An
extraordinarily expensive aircraft to fly and maintain, it never generated revenues
commensurate with the costs of its development and operation. No private business
could have survived this sort of failure. Only the financial support of two govern-
ments, motivated by considerations of national pride as much as anything else, kept
the Concorde aloft until it was retired in 2004.
The NIH Syndrome
The inherently uncertain or at best risky nature of many technological innovations
may explain the presence of one persistent block to the diffusion of new technologies,
the Not Invented Here syndrome, or NIH for short. People and organizations exhib-
iting this syndrome are reluctant to make use of technologies that were invented else-
where. The corrosive effects of the NIH mentality were bemoaned by Henry Ford II:
“There’s too much NIH—not invented here [in the automobile industry]. . . . Lots
of times a guy brings something in, and unless the improvement is rather dramatic,
nothing happens. The status quo is a hell of a lot easier than making changes.”39
Internal resistance to technological innovation, while frustrating to some,
is certainly understandable. As was noted, innovation is at best a risky process
that can generate a great deal of discomfort, and this discomfort will be especially
intense when the innovation has not been the work of those who are affected by
it. It is also the case that internally generated technologies are likely to be more
compatible with established organizational structures and activities.40 Moreover,
an internally generated technology will probably have a “champion” within the
organization who seeks its utilization, and management may be more inclined to
pay attention to the ideas of their organization’s own personnel.
These are explanations for the presence of the NIH syndrome, but not excuses.
A dynamic firm can ill-afford to ignore useful new technologies just because they
were developed elsewhere. We have already seen that a receptivity to “foreign”
technologies has been crucial to the development of Europe, the United States,
and Japan, and that a resistance to them was a major reason for the economic
retardation of imperial China. An individual firm that chooses to resist or ignore
new technologies because of their source may ultimately find that security and pride
have been purchased at a very great price. Its comfortable routines and organiza-
tional structures are retained, right up to the day that it goes out of business.
Efforts to Restrict the Diffusion of Technology
The exclusive possession of a particular technology can confer great advantages on
those who have it. Individual firms or whole countries may therefore go to great
lengths to block the diffusion of these technologies. During the sixteenth century the
glassmakers of Venice sought to prevent others from learning the secrets that went
into the production of Venetian glass, and they even sent assassins to poison Venetian
expatriates who had set up glassworks abroad.41 England enacted a law in 1719 that
forbade the emigration of skilled artisans, such as the aforementioned Samuel Slater,
whose knowledge of British equipment and practices was a crucial element in the
early development of the American textile industry. The export of machinery also
was forbidden from the 1780s until the middle of the nineteenth century.
Today, technologically advanced nations attempt to prevent the export of
advanced technologies through export licensing requirements. Licenses may be
denied if national security could be compromised by the transfer of a particular
technology. The transmission of knowledge alone can also be deemed a threat. In
2012 the potential danger of scientific information falling into the wrong hands led
the U.S. federal government to ask two groups of researchers to
temporarily withhold the publication of some research results involving the genetic
manipulation of the virus responsible for bird flu. Although the research was con-
ducted in order to aid in the prevention of future epidemics, it was feared that the
data it generated could become the basis of a future bioterrorist attack.42
Past efforts to suppress the outflow of technology and the knowledge upon
which it rests have rarely been successful, and there is little likelihood that contem-
porary efforts will fare any better. The transfer of plans, equipment, data, and prod-
ucts is extremely difficult to stop, even with rigorous attempts at policing. Also, as
exemplified by the case of Samuel Slater, the movement of people has historically
been the main vehicle for the transfer of technologies.43 This sort of movement is
very hard to stop, especially in a free society.
Attempts to stem the outward flow of technology may even be counterproduc-
tive. A country that attempts to bar the export of technology may end up damaging
itself more than anyone else. Technological development often requires the inter-
change of information from numerous sources. The attempt to seal off a technology
will restrict the flow of information from both domestic and foreign sources, and
thereby result in a slower pace of technological advance.
Finally, it is often the case that the most important “secret” is that a technol-
ogy exists. During the late 1940s and early 1950s, there was a great deal of anguish
in the United States after the Soviet Union exploded its first nuclear bomb. The
general belief was that the Soviets had obtained our “atomic secrets,” and two
Americans were executed for their role in passing on these “secrets.” But the Soviets
had long known the most important thing: that a massive explosion could be
produced through the fissioning of atomic nuclei. Once this fact was known, it was
only a matter of time before the Soviets learned how to do it themselves, with or
without the clandestine transfer of American technology. Knowing that a problem
can be solved is often the most important step in its solution.
Patents and the Diffusion of Technology
An individual or a business firm naturally wants technological diffusion to take
place on its own terms. If it has developed a novel technology, it wants the benefits
of that technology to accrue only to itself, and not to others who might be able
to copy an invention that has required a great deal of effort and expense. This is
a legitimate desire, and it has been recognized as such by the establishment of the
patent system. The possession of a patent confers exclusive use of an invention; it
is a legal monopoly. Without the prospect of such a monopoly, it is believed, there
would be a diminished motivation to invent, and in the long run society as a whole
would suffer from a slower rate of technological advance.
Although it is aimed at restricting the sale or use of a particular technology
to a single individual or firm, there are some ways in which the patent system
may stimulate technological diffusion. The filing of a successful patent application
makes the invention public, for the basic design and specifications are open to
inspection by anyone willing to pay a small fee. In fact, the word “patent” is derived
from the Latin verb pateo, which means “to open.” One of the chief justifications for
awarding a patent is that the inventor has not attempted to keep his or her inven-
tion secret, but rather has revealed its workings to the public. As the U.S. Supreme
Court ruled in 1933, an inventor “may keep his invention secret and reap its fruits
indefinitely. In consideration of its disclosure and the consequent benefit to the
community, the patent is granted.”44
Although direct copying is of course forbidden, access to the public record
may give a clearer sense of how a technological problem may be addressed and may
stimulate alternative approaches to its solution.45 Then, too, patent holders often
pass their inventions to other firms in return for a licensing fee. This arrangement
may allow for the more rapid diffusion of a technology because the licensing agree-
ment often facilitates the transfer of supplementary material necessary for the best
use of the device or process.46 In some cases, a licensing arrangement may result in
a better product, as happened when DuPont developed waterproof cellophane after
being licensed to produce the original version.47
At the same time, however, patent protection may inhibit technological adv-
ance. A well-entrenched firm may hold a patent in order to suppress an invention
that could seriously shake up existing routines or even threaten the firm’s existence.
It seems as though everybody has a brother-in-law who knows a man who invented
a way to make his car run on water straight from the garden hose, but “of course the
big oil companies bought up all the patents.” One shouldn’t put much credence in
these stories, but it is true that dominant companies have at times used their control
of patents to restrict innovation, as happened with the design of light bulbs when
General Electric held the key patents.48
The use of patents to directly suppress certain technologies poses less of a
threat to innovation than the widespread use of patenting as a strategic and tactical
weapon. Patents are supposed to motivate individuals and business firms to create
innovative products and processes, but according to critics, their primary purpose
is to allow patentees to harass competitors, collect undeserved royalties, bolster
the value of financial portfolios, and block innovations that pose potential threats
to a firm’s market position.49 Critics of the present patent system have also called
attention to the monopoly power that has been conferred by a recent upsurge in the
patenting of entities that in the past were not entitled to patent protection, most
notably computer software, genes, and business methods.50
Although it may confer considerable power in the marketplace, the holding of
a patent does not necessarily confer an overwhelming competitive advantage, since
there may be considerable scope for “inventing around” the patent. Moreover, there
are no industries where a key technology is covered by a single “master patent” that
can be used to repulse potential competitors. There have been attempts to domi-
nate an emerging industry through the wielding of a patent, most notably when, in
1895, George B. Selden was granted a patent for the automobile. Selden and the
subsequent holders of the patent were able to extract licensing fees from most auto-
mobile manufacturers. One manufacturer who refused to go along was Henry Ford,
who successfully challenged the validity of the patent in court and won a great deal
of public acclaim as a result.51
The patent illustration for the automobile that George Selden claimed to have invented. (Culver Pictures)

It also has to be recognized that a patent never confers ironclad protection
for inventors; it may be little more than a “license to sue.” In recent years, most
patent holders have successfully waged court battles against individuals and firms
who have been accused of infringing upon their patents, but both sides may incur
sizeable costs in terms of time, talent, and money. As a result, many individuals
and firms have been content to exploit their unpatented invention until the copi-
ers are able to seize a large share of the market. In other cases, patented products
and processes are quickly outmoded by the advance of technology, and a temporary
monopoly position is soon lost. On the other side of the coin, parties who have been
charged with patent infringement may feel that they are in the right, but they will
nonetheless elect to pay royalties to the patentee in order to avoid an expensive and
drawn-out courtroom battle.
Whether the patent system serves as an overall stimulus or hindrance to tech-
nological advance is a matter of considerable debate, and one not likely ever to be
settled. Like many social institutions, it owes its existence to inertia and the efforts
of parties with a direct interest in maintaining it. Under these circumstances, the
status quo usually prevails.
Questions for Discussion
1. What sort of skills are required for the successful copying of a technology
developed elsewhere? Why do you think that the Japanese have been so suc-
cessful at making effective use of technologies that first appeared elsewhere?
How might other countries duplicate their success?
2. Most economically advanced countries have technical assistance programs
that are used to upgrade the technological levels of poorer countries. Under
what circumstances might these programs be harmful? How might these pro-
grams be shaped to better meet the needs of poor countries?
3. If you were the leader of a poor nation, what general policies would you for-
mulate in regard to the importation of technology? Where might you expect
to find opposition to these policies?
4. Suppose that you are the CEO of a firm that has a choice between two new
technologies: one that promises a modest profit with very little risk, and
another that may yield a very high profit but at considerable risk. What would
your choice be? Who in your company might support the first technology, and
who might support the second?
5. Does the patent system encourage or discourage technological innovation?
What would happen if it were abolished? Can you think of an alternative that
would better serve the needs of both inventors and the general public?
Notes
1. See Robert Temple, The Genius of China: 3,000 Years of Science, Discovery, and Invention
(New York: Simon and Schuster, 1989).
2. Lynn White, Jr., Medieval Technology and Social Change (New York: Oxford University
Press, 1966), pp. 14ff.
3. Frances and Joseph Gies, Cathedral, Forge, and Waterwheel: Technology and Invention in
the Middle Ages (New York: HarperCollins, 1994), pp. 45–47.
4. A. Rupert Hall, “The Rise of the West,” in Charles Singer et al. (Eds.), A History of
Technology, vol. III (Oxford: Clarendon Press, 1957), pp. 716–717.
5. Gilbert Rozman (Ed.), The Modernization of China (New York: The Free Press, 1981),
pp. 22–23.
6. Xin Xiangrong, “Open Policy, Import of Culture,” Beijing Review 29, 46 (November 17,
1986): 4.
7. Elting E. Morison, From Know-How to Nowhere: The Development of American
Technology (New York: New American Library, 1977), pp. 21–36.
8. “Immigrants Lead Many Tech Start-Ups, Study Says,” Los Angeles Times (January 4, 2007),
accessed on September 4, 2012, at http://articles.latimes.com/2007/jan/04/business/fi-execs4.
9. Angus Maddison, Economic Growth in Japan and the USSR (New York: W.W. Norton,
1969), pp. 16–17.
10. William Wirt Lockwood, The Economic Development of Japan: Growth and Structural
Change (Princeton, NJ: Princeton University Press, 1968), pp. 331–332. For a brief his-
tory of the Toyoda loom, see Tessa Morris-Suzuki, The Technological Transformation of
Japan: From the Seventeenth Through the Twenty-First Century (Cambridge: Cambridge
University Press, 1994), pp. 117–118.
11. Henry S. Rowen, Marguerite Gong Hancock, and William F. Miller (Eds.), Making IT:
The Rise of Asia in High Tech (Stanford: Stanford University Press, 2007), p. 10.
12. E. E. Bauer, China Takes Off: Technology Transfer and Modernization (Seattle: University
of Washington Press, 1986), pp. 82–86.
13. Paul Stoneman, The Economics of Technological Diffusion (Oxford and Malden, MA:
Blackwell, 2002), p. 75.
14. Richard G. Lipsey, Kenneth I. Carlaw, and Clifford T. Bekar, Economic Transformations:
General Purpose Technologies and Economic Growth (Oxford: Oxford University Press,
2005), p. 87.
15. Vernon W. Ruttan, “Towards a Global Agricultural Research System: A Personal View,”
Research Policy 15, quoted in Edward J. Malecki, Technology and Economic Development:
The Dynamics of Local, Regional, and National Change (New York: John Wiley & Sons,
1991), p. 142.
16. Tuvia Blumenthal, “A Note on the Relationship between Domestic Research and
Development and Imports of Technology,” Economic Development and Cultural Change
27, 2 (January 1979): 303–306.
17. Christopher Freeman, The Economics of Industrial Innovation, 2nd ed. (Cambridge,
MA: MIT Press, 1982), p. 61.
18. Zhou Jiahua, “Gunpowder and Firearms,” in Ancient China’s Technology and Science,
compiled by the Institute of the History of Natural Sciences, Chinese Academy
of Science (Beijing: Foreign Languages Press, 1983), pp. 184–191; Arnold Pacey,
Technology in World Civilization: A Thousand-Year History (Cambridge, MA: MIT Press,
1991), pp. 44–50.
19. Keijiro Otsuka, Gustav Ranis, and Gary Saxonhouse, Comparative Technology Choice
in Development: The Indian and Japanese Cotton Textile Industries (London: Macmillan,
1988), pp. 23–24, in Morris-Suzuki, op. cit., p. 87.
20. The example is based on Nakaoka Tetsuo, “Imitation or Self-Reliance: A Lesson from
the Early History of Modern Japanese Iron Manufacturing,” in Japanese Foundation
Newsletter 8, 4 (October–November 1979): 1–6.
21. U.N. Office for the Coordination of Humanitarian Affairs, “In-Depth: Youth in Crisis:
Coming of Age in the 21st Century,” accessed August 27, 2012, at
http://www.irinnews.org/IndepthMain.aspx?InDepthID=28&ReportID=69977.
22. John Sousanis, “World Vehicle Population Tops One Billion Units,” WardsAuto,
accessed February 8, 2012, at http://wardsauto.com/ar/world_vehicle_population_110815.
23. Robert C. Toth, “Fitting Technology to Need Held Critical in Third World,” Los
Angeles Times (June 18, 1978), part 1: 1, 32.
24. The most influential book on this subject is E. F. Schumacher, Small Is Beautiful:
Economics As If People Mattered (New York: Harper & Row, 1973). For some examples
of current appropriate technologies, see “Ten Cases of Appropriate Technology,”
at http://listverse.com/2010/06/12/10-cases-of-appropriate-technology/ (accessed on
February 7, 2012).
25. “Ten Cases of Appropriate Technology” (June 12, 2010), accessed on August 7, 2012,
at http://listverse.com/2010/06/12/10-cases-of-appropriate-technology/.
26. George McRobie, Small Is Possible (New York: Harper & Row, 1981), p. 200.
27. Louis T. Wells, Jr., “Economic Man and Engineering Man: Choice and Technology in
a Low-Wage Country,” Public Policy 21, 3 (Summer 1973): 319–342.
28. Bertil Tungodden, Nicholas Herbert Stern, and Ivar Kolstad (Eds.), Toward Pro-
Poor Policies: Aid, Institutions, and Globalization (Oslo, Norway, Bank Conference on
Development Economics 2002), p. 319.
29. Christian P. Tanon and Everett M. Rogers, “Diffusion Research Methodology: Focus on
Health Care,” in Gerald Gordon and G. Leonard Fisher (Eds.), The Diffusion of Medical
Technology (Cambridge, MA: Ballinger, 1975).
30. Mira Wilkins, “The Role of Private Business in the International Diffusion of
Technology,” Journal of Economic History 34, 1 (March 1974): 176.
31. Hugh G. J. Aitken, Syntony and Spark: The Origins of Radio (Princeton, NJ: Princeton
University Press, 1985), pp. 329–335.
32. Ibid., p. 335.
33. Lipsey et al., op. cit., pp. 86–87.
34. See John Enos, “Invention and Innovation in the Petroleum Refining Industry,” in
Richard Nelson (Ed.), The Rate and Direction of Inventive Activity (Princeton, NJ:
Princeton University Press, 1962).
35. Edwin Mansfield, Technological Change (New York: W.W. Norton, 1971), p. 88.
36. Donald A. Schon, Technology and Change: The New Heraclitus (New York: Dell, 1967),
pp. 24–25.
37. Richard R. Wilk, Economies and Cultures: Foundations of Economic Anthropology
(Boulder, CO: Westview, 1996), p. 63.
38. Freeman, Economics of Industrial Innovation, p. 154.
39. Henry Ford II, quoted in James C. Jones, “Dr. Schweitzer’s Dilemma: Not Invented
Here,” Ward’s Auto World (November 1974): 41.
40. J. E. S. Parker, The Economics of Innovation: The National and Multinational Enterprise in
Technological Change, 2nd ed. (London: Longman, 1978), p. 111.
41. John U. Nef, The Conquest of the Material World (Chicago: University of Chicago Press,
1964), p. 153.
42. Dan Vergano, “Scientists Engineer Bioterror Fears,” USA Today (January 26, 2012), 1A–2A.
43. Carlo M. Cipolla, “The Diffusion of Innovations in Early Modern Europe,” Comparative
Studies in Society and History 14 (1972): 48.
44. United States vs. Dubilier (289 U.S. 178), quoted in C. Lyle Cummins, Internal Fire
(Lake Oswego, OR: Carnot Press, 1976), p. 47.
45. Parker, Economics of Innovation, p. 310.
46. Ibid., p. 327.
47. Freeman, Economics of Industrial Innovation, p. 61.
48. Leonard S. Reich, “Lighting the Path to Profit: GE’s Control of the Electric Lamp
Industry, 1892–1941,” Business History Review 66 (Summer 1992): 310, 312.
49. Stuart Macdonald, “When Means Become Ends: Considering the Impact of Patent
Strategy on Innovation,” Information Economics and Policy 16 (2004): 135–158.
50. Adam B. Jaffe and Josh Lerner, Innovation and Its Discontents: How Our Broken Patent
System Is Endangering Innovation and Progress, and What To Do About It (Princeton and
Oxford: Princeton University Press, 2004).
51. James Flink, America Adopts the Automobile, 1895–1910 (Cambridge, MA: MIT Press,
1970), pp. 318–328.
Part Three
How Technology Affects the Health of the Earth and Its Inhabitants
The following three chapters illustrate how technological change can create both
problems and solutions. Chapter 6 describes how technological innovations have at
times been the source of two general categories of environmental damage: pollution and
depletion. The record isn’t altogether depressing, however, as the chapter presents sev-
eral examples of successfully addressing these problems through the development of new
technologies. As is often the case, though, technological fixes are only part of the story.
Any fix must be carried out within a context of social forces, among which governmental
policies can both help and hinder the solution of environmental problems.
Chapter 7 turns from environmental health to human health. It describes the
many ways in which medical technologies have cured our ills and extended our lives
while at the same time creating new problems and challenges. The most serious of
these is escalating health-care costs. Many medical procedures can be described as
“halfway technologies” that extend lifespans and alleviate pain but leave patients in a
state of permanent—and expensive—dependence on medical care. On another front,
sophisticated diagnostic technologies give physicians more data, but at the expense of
marginalizing important diagnostic skills based on personal relationships with patients.
Chapter 8 looks into the development of medical technologies based on advances
in genetics and assesses recent developments in the genetic engineering of food crops,
once again noting that technological advances are not necessarily unmixed bless-
ings. The chapter explores how these technologies interact with social, political,
economic, and cultural forces, and examines the legal and ethical issues they raise.


Chapter Six
Technology, Energy, and the Environment

Although technological advance has been blamed for a variety of ills, its most obvi-
ous and long-lasting negative consequence has been the alteration and even the
destruction of the natural environment. For most of human existence, people left the
environment pretty much as they found it. But beginning with sedentary agriculture
and accelerating with industrialization, the use of new technologies has at times left
a ruined environment as its legacy. And there is a strong possibility that environmen-
tal problems will worsen as more nations make greater use of industrial technologies.
Today, the United States, with less than 5 percent of the world’s population, con-
sumes about 25 percent of the world’s resources. If the poor countries of the world,
which contain the majority of the earth’s population, used resources at the current
U.S. level, the consequences for the environment could be dire. This will not hap-
pen in the near future, but the general trend is inescapable. For example, according
to the U.S. Department of Energy, global consumption of energy from 2008 to 2035
is projected to increase by 53 percent, with China, India, and other developing coun-
tries expected to account for 85 percent of the total increase.1
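A 53 percent rise over the 27 years from 2008 to 2035 corresponds to a compound growth rate of roughly 1.6 percent per year, as the short calculation below (a sketch using only the figures just cited) shows.

```python
# Implied compound annual growth rate behind the projected 53 percent
# increase in world energy consumption between 2008 and 2035.
years = 2035 - 2008
cagr = 1.53 ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.2%}")  # roughly 1.6% per year
```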
The deleterious consequences of technological advance can be grouped into
two broad categories: pollution and depletion. In the former, the environment is
damaged by the addition of harmful substances, resulting in polluted skies, acid
rain, poisoned soil, contaminated water, and likely climate change. In the case of
the latter, the unchecked application of technology leads to the permanent loss of
resources, deforestation, and the extinction of plant and animal species. On many
occasions, of course, the two go together; a mining operation could extract all of
the available ore and leave a devastated environment in its wake. Either separately
or together, pollution and depletion threaten the sustainability of the environment.
But there is some cause for optimism. Technology is not just a source of envi-
ronmental problems; it can also be part of the solution. In this chapter we will look
at some of the environmental consequences of technological advance, review some
technologies that can alleviate the degradation of the environment, and briefly
consider how governmental policies have affected the natural environment.
Fossil Fuels, Air Pollution, and Climate Change
The use of fossil fuels (petroleum, coal, and natural gas) has vastly extended the
amount of available energy. At the same time, cars, power plants, factories, and
other sources emit immense quantities of carbon compounds, sulfur, and oxides of
nitrogen. Every year, more than six billion tons of carbon fuel exhaust are discharged
into the earth’s atmosphere—approximately one ton for every human being on
earth.2

Beijing’s historic Temple of Heaven can barely be discerned through the thick smog enveloping the city. (Feng Li/Getty Images)
The blanket of smog now choking many of the world’s cities makes the conse-
quences of fossil-fuel use all too evident. Potentially more dangerous in the long run is
the accumulation of the main product of carbon combustion: carbon dioxide (CO2).
Carbon dioxide is produced when the carbon in gasoline, diesel fuel, coal,
or natural gas unites with oxygen during the process of combustion. This creates
quantities of CO2 significantly greater than the original carbon; burning 1 pound of
gasoline generates more than 3 pounds of carbon dioxide.3 To look at it in a differ-
ent way, a car traveling 100 miles produces about 100 pounds of CO2.
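These figures follow from simple combustion chemistry. The sketch below treats gasoline as octane (C8H18) and assumes, purely for illustration, a fuel density of about 6.2 pounds per gallon and a car that achieves 20 miles per gallon; the density and mileage are assumptions, not values from the text.

```python
# Rough check of the CO2-from-gasoline figures.
C, H, O = 12.011, 1.008, 15.999          # atomic masses, g/mol

m_octane = 8 * C + 18 * H                # C8H18, ~114.2 g/mol
m_co2 = C + 2 * O                        # CO2, ~44.0 g/mol

# Complete combustion: C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O
co2_per_lb_fuel = (8 * m_co2) / m_octane
print(f"CO2 per pound of gasoline: {co2_per_lb_fuel:.2f} lb")  # ~3.1 lb

# Hypothetical 100-mile trip at an assumed 20 mpg and ~6.2 lb/gallon.
fuel_lb = (100 / 20) * 6.2
print(f"CO2 for a 100-mile trip: {fuel_lb * co2_per_lb_fuel:.0f} lb")  # ~96 lb
```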
Carbon dioxide is a clear, odorless gas that by itself presents no threat to health.
It is the normal product of any animal’s respiration, and without it drinks like beer
and soda pop would be unpalatably flat. The danger presented by CO2 emerges
only when large quantities of it accumulate in the atmosphere along with two other
products of an industrial society, methane (CH4) and nitrous oxide (N2O). As light
rays from the sun pass through the atmosphere containing these gases, the radiant
energy warms the surface of the earth, but most of the heat cannot pass back through
the atmosphere. This is known as the “greenhouse effect.” Just as a greenhouse is
warmed by the light of the sun on a cold winter’s day, the earth’s temperature begins
to rise as greenhouse gases accumulate in the atmosphere. Concentrations of these
three greenhouse gases increased to a marked degree as industrialization advanced.
According to the United Nations’ World Meteorological Organization, in 2009 the
earth’s atmosphere contained concentrations of 386.8 parts per million of CO2,
1803 parts per billion of methane (CH4), and 322.5 parts per billion of N2O. These
figures represent increases of 38, 158, and 19 percent, respectively, over concentration
ratios prior to the beginning of the industrial revolution in the mid-eighteenth century.4
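Working backward from these numbers yields the implied preindustrial baselines, a quick consistency check on the percentages; the calculation below uses only the figures just quoted.

```python
# Preindustrial concentrations implied by the 2009 levels and the
# percentage increases cited above.
levels = {
    "CO2 (ppm)": (386.8, 0.38),
    "CH4 (ppb)": (1803.0, 1.58),
    "N2O (ppb)": (322.5, 0.19),
}
for gas, (now, rise) in levels.items():
    print(f"{gas}: preindustrial ~{now / (1 + rise):.0f}")
# ~280 ppm CO2, ~699 ppb CH4, ~271 ppb N2O, in line with commonly
# cited preindustrial values.
```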
Continued warming due to increasing CO2 emissions seems likely. As noted
by the United Nations Intergovernmental Panel on Climate Change, all but one
of the years from 1995 to 2006 ranked among the 12 warmest years since 1850, while
the linear warming trend from 1956 to 2005 was nearly twice as high as that for the
period from 1906 to 2005.5 Projections of future trends indicate substantially
greater increases, but as a general principle, the reliability of any kind of prediction
diminishes as it moves further into the future. In regard to the earth’s climate, pre-
dicting the course of global warming is a complex exercise because the many vari-
ables affecting the earth’s climate have complex interactions. Predictions are further
complicated by the need to estimate increases in the production of greenhouse gases
resulting from economic growth and energy use in Third World countries in the
decades to come. Yet for all the uncertainties surrounding global warming, it still
seems likely that significant changes to the earth’s climate will take place if pres-
ent trends continue. Temperature increases will not be uniform, and some regions
may even end up with colder climates due to changes in ocean currents. Rainfall
patterns would shift to the advantage of some areas and to the detriment of others.
In conjunction with increases in CO2 and other greenhouse gases, global warming
could increase agricultural yields in certain regions but reduce them elsewhere. It also
seems certain that climate changes will result in the loss of some plant and animal
species, and the relocation of others. One of the consequences of the latter could
be the spread of insect-borne diseases to previously unaffected parts of the world.
Most troubling, significant global warming could lead to partial melting of glaciers
and the polar ice caps. It would also cause water to expand in volume as the oceans
warmed. The result would be a rise in sea levels, and potentially catastrophic floods
in low-lying areas. One particularly vulnerable place is the nation of Bangladesh;
according to a United Nations report, a three-foot rise in sea level would inundate a
quarter of its coastline, resulting in the displacement of 30 million people.6
Atmospheric scientists are learning more and more about the complex inter-
actions that govern the earth’s climate, but many unanswered questions remain.
In the meantime, the buildup of greenhouse gases could reach a potentially cata-
strophic tipping point in a few decades. Reducing the emissions that contribute
to climate change will require the expenditure of billions if not trillions of dollars
over the decades to come, but the costs of doing nothing will likely be even greater.
Effectively addressing the causes of climate change also requires taking some pain-
ful actions today in order to forestall problems that lie well in the future, a dif-
ficult course of action for business and governmental leaders who necessarily have
short time horizons. Moreover, decisions (or nondecisions) must be made amid
considerable uncertainty, as scientific understanding lags behind past and present
technological applications. Effectively addressing climate change will require the
development and application of new technologies along with the more extensive
use of some existing ones. It is possible that the emerging field of geoengineering
will result in some technological fixes, but as we have seen, such technological
fixes usually need complementary political actions if they are to be effective.
The development of technological fixes to prevent or mitigate climate change is known as
geoengineering. One example is a proposed fleet of wind-powered vessels that would gener-
ate clouds of seawater droplets, which in turn would block some of the sunlight that is the
source of the greenhouse effect. (John MacNeill Illustration)
A Planet under Stress
Paralleling global warming, and probably contributing to it, has been the widespread
destruction of forests. Particularly serious has been the loss of forested areas in Africa
and Latin America, much of it occurring in tropical regions. From 1990 to 2005
the world was being deforested at a rate of 0.18 percent a year, while losses in
Africa and Latin America came to 0.62 and 0.51 percent, respectively.7 Wherever
it occurs, deforestation contributes to the extinction of plant and animal species
through the destruction of their natural habitats. The loss of woodlands may also
contribute to climate change, although it is not certain that tropical forests function
as carbon sinks, which effectively remove carbon dioxide from the atmosphere.8
Even so, deforestation can contribute to global warming when carbon stored in
wood is released into the atmosphere as the trees are burned. Subsequently, the soil
itself can become a large source of carbon emissions, depending on how farmers and
ranchers manage the land after it has been cleared.
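Because these are annual rates, their effect compounds over time. A short sketch, using only the rates quoted above, gives the cumulative loss implied for the fifteen years from 1990 to 2005.

```python
# Cumulative forest loss implied by the annual deforestation rates.
rates = {"World": 0.0018, "Africa": 0.0062, "Latin America": 0.0051}
years = 2005 - 1990

for region, r in rates.items():
    lost = 1 - (1 - r) ** years
    print(f"{region}: ~{lost:.1%} of forest cover lost")
# World ~2.7%, Africa ~8.9%, Latin America ~7.4%
```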
At the same time that a variety of technologies have been assaulting the land
and air, the ground below is also being changed for the worse. Industrial societies
generate massive amounts of refuse, a significant portion of which is hazardous.
According to the U.S. Environmental Protection Agency, in 2010 Americans
generated 249.9 million tons of municipal solid waste (an average of
4.43 pounds per person every day) and a much greater volume of industrial and
other waste products.9 About a third of household wastes are composted or recycled,
and most of the remainder is nontoxic. However, some of the materials discarded
by households and industrial enterprises reside in inadequate disposal facilities that
threaten adjacent communities. Already, some areas have been so badly contami-
nated that they had to be abandoned for a number of years while the mess was being
cleaned up. Many other places contain dump sites that are only slightly less hazardous.
Restoring toxic sites is and will be an expensive proposition; according to the U.S.
Environmental Protection Agency, cleaning up existing and yet-to-be-discovered
hazardous waste sites may cost as much as $280 billion over a 35-year period.10
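The per-person and total waste figures are mutually consistent, as the check below shows; it assumes the EPA tonnage is in U.S. short tons of 2,000 pounds, which is an assumption rather than something stated in the text.

```python
# Consistency check on the EPA municipal solid waste figures.
total_lb = 249.9e6 * 2000               # annual waste, assuming short tons
per_person_per_day = 4.43               # pounds
implied_population = total_lb / (per_person_per_day * 365)
print(f"Implied population: {implied_population / 1e6:.0f} million")
# ~309 million, close to the 2010 U.S. population
```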
While pollution, global warming, and the other negative consequences of
industrial production cause problems on one front, depletion of the resources
that provide energy and raw materials poses another set of challenges. Fossil fuels
have been essential to the process of industrialization since at least the middle
of the nineteenth century, and petroleum-based fuels are the basis of the world’s
transportation systems. Fossil-fuel supplies are necessarily finite, although many
uncertainties attend efforts to estimate their size, especially in the case of petroleum.
According to some experts, we have already reached “peak oil” and inevitably face a
future of declining petroleum extraction.11 Other experts, however, take a contrary
view. Determining the amount of oil still remaining underground entails consider-
able guesswork, and there is not even universal agreement on how to define petro-
leum “reserves.” Moreover, the amount of oil that can be economically extracted
from a known reservoir depends on the prevailing market price, while at the same
time new recovery technologies allow the extraction of more oil from the earth.12
Finally, the price of oil will increase as supplies dwindle, so the rate at which oil
is extracted will depend on what users are willing to pay. Given all these vagaries,
projections of future supplies of oil have to be taken with a grain of salt. Oil short-
ages will likely still occur, just as they did during the energy crises of 1973–1974 and
1979, but it has to be remembered that they were triggered by political actions, and
not by an abrupt decrease in the world’s petroleum supplies.
The energy crises of the 1970s were a painful reminder of our dependence on
key natural resources. No similar drama has attended the decline of another vital
resource, the earth’s topsoil. Since 1945, 1.2 billion hectares of agricultural land
have been degraded by moderate to extreme soil erosion, the equivalent of the total
land area of China and India. This trend has not abated, and somewhere between
24 and 75 billion tons of soil are being lost annually around the world.13 In the past,
the loss of topsoil could be offset by moving to virgin lands, but this option is no
longer available in most parts of the world. Not only is new land unavailable, but ero-
sion and land degradation are also removing about one percent of existing arable land
annually. In many parts of the world the loss of topsoil has increased the costs of
farming, but in some places the consequences have been far more serious. Severe
loss of topsoil has converted some regions into deserts, bringing widespread famine
in its train. Elsewhere, topsoil loss coupled with climate changes induced by global
warming could have dire consequences for agriculture, even as the continual growth
of the world’s population calls for the production of more food.
Is Technology the Problem or the Solution?
The previous pages have presented a depressing catalog of environmental ills.
Pollution, climate change, species extinction, and resource depletion pose multiple
threats to our standard of living and, perhaps, even to our continued existence. The
reliance on fossil fuels has left our atmosphere contaminated by various pollutants,
and the earth threatened by global warming. Irresponsible agricultural practices
have poisoned groundwater with pesticides and other chemicals while depleting
the topsoil. Industrial processes produce thousands of tons of toxic wastes, and
nuclear power plants leave behind radioactive wastes that will pose potential health
hazards for thousands of years. In the opinion of some critics, the technological
advances of the past few centuries seem to have produced only temporary benefits
that will ultimately be overwhelmed by the consequences of environmental stress.
In the long run, it may be argued, technology generates more harm than good.
Is a modern economy, supported by advanced technologies, doomed to destruc-
tion, leaving us with no choice but to retreat to the simpler technologies of the
past, perhaps even back to the era of hunting and gathering? Or can technology
itself provide us with the solutions to technologically induced problems? In address-
ing these questions, it is useful to first gain some historical perspective. If nothing
else, some acquaintance with the past should convince us that damage to the
environment is not solely a phenomenon of modern times and modern technolo-
gies. Entire civilizations have collapsed due to excessive land clearance, overgraz-
ing, withdrawal of groundwater, and eventual desertification. Epidemic diseases of
catastrophic proportions have been spread by the careless disposal of household
wastes. There are some scholars who believe that a major factor in the collapse of
the Roman Empire was sterility and premature death brought on by the widespread
use of lead in pipes and utensils. Air pollution existed long before the invention of
fossil fuel-consuming engines; those suffering from twenty-first-century smog can
find a seventeenth-century counterpart in John Evelyn, who decried the growing
use of coal for energy and warmth:14
In London we see people walk and converse pursued and haunted by that
infernal smoake. The inhabitants breathe nothing but an impure and thick mist,
accompanied by a fuliginous and filthy vapour, which renders them obnoxious to a
thousand inconveniences, corrupting the lungs and disordering the entire habit of
their bodies, so that catarrs, phtisicks, coughs and consumption rage more in that
one city than in the whole earth besides.
Some Technological Fixes of the Past
These examples of past environmental ills provide little comfort, for they can be
seen as a preview of what might be in store for us. But there are other examples of
environmental threats that have been successfully countered. One example is the
deforestation that had become a serious problem in England during the sixteenth
century. By that time, vast quantities of wood had been consumed by the demands
of an expanding population and the growth of shipbuilding, construction, and iron
manufacture (which required large quantities of charcoal). Within a century the
depletion of timber was perceived as a serious problem, as seen in the complaint
of one contemporary writer that “at this present, through the great consuming of
wood as aforesaid, and the neglect of planting of woods, there is so great a scarcitie
of wood through the whole kingdom.”15
England’s forests were never fully restored, but fuel shortages were alleviated
by burning coal in the place of wood. Although there were misgivings about the
noxious vapors given off by burning coal, it came to be widely used for domestic
heating and as a source of process heat for the production of beer, sugar, bricks,
soap, glass, and iron. More than simply a substitute for wood, by the end of the
nineteenth century coal had become the basis of industrial civilization, as the rich
coal deposits of Britain significantly contributed to that country’s unique position
as “the Workshop of the World.” Much of the industrial age was the era of coal, as
coal-fired steam engines powered factories, hauled railroad trains, generated elec-
tricity, and propelled ships to distant destinations.
Yet just when coal had established its primacy as the most important energy
source for industrial society, hard questions were being asked about the continued
viability of coal-based technologies. By the end of the nineteenth century it was
becoming evident that stocks of coal, while still large, were being depleted at ever-
increasing rates. The projection of established trends seemed to offer indisputable
proof that the day of reckoning was not far off: Britain was running out of coal. In
the words of the contemporary English economist, W. Stanley Jevons, “There is no
reasonable prospect of any relief from a future want of the main agent of industry.
We must lose that which constitutes our particular energy.”16
Coal was king, and in Jevons’ estimation, as well as those of other informed stu-
dents of the British economy, there was no hope that anything could take its place.
In Jevons’ gloomy appraisal, “All things considered, it is not reasonable to suppose
or expect that the power of coal will ever be superseded by anything better.”17 His
pessimistic assessment of potential substitutes is exemplified by his quick dismissal
of petroleum as a fuel: “Its natural supply is far more limited and uncertain than
that of coal, and an artificial supply can only be made by the distillation of some
kind of coal at considerable cost. To extend the use of petroleum, then, is only a new
way of pushing the consumption of coal. It is more likely to be an aggravation of
the drain than a remedy.”18 Natural gas, another possible substitute, was an equally
forlorn hope. Jevons approvingly quoted the assessment of an American steel execu-
tive: “Of late years the supply of gas has been decreasing . . . and it would seem that
before many years this fuel would cease to be a factor in the large operations of a
steel works.”19
One can smile at the remarkable wrongheadedness of these assessments, but
it is easy to be wise after the fact. After all, the true extent of oil and gas reserves
was only dimly perceived at the time Jevons was coming to his gloomy conclusions.
The third edition of Jevons’ book that contains the passages quoted was published
in 1905, four years after the Spindletop field demonstrated the vast oil reserves of
East Texas. There, a single well produced twice as much oil as the entire state of
Pennsylvania, until then the center of the American oil industry.20 And it was not
until three decades later that the immense oil deposits of the Middle East began to
be explored.
The essential point here is that a problem such as resource depletion can
often be solved by the use of substitutes, just as coal substituted for wood, and oil
replaced a great amount of coal. This does not happen easily or automatically, of
course; it requires the invention, development, and application of many new ways
of doing things. The large-scale employment of petroleum fuels required a host of
new technologies: seismic exploration devices, casings and bits for rotary drilling,
new compounds for the cementation of bore holes, and so on. Equally important,
the use of a new source of energy must be complemented by the emergence of new
energy-using technologies. In the early 1900s, the coal-fired reciprocating steam
engine was a proven technology, while the gasoline-fueled internal combustion
engine was cranky and unreliable, and the diesel engine had scarcely emerged from
the laboratory. The rapid strides made by these new engine types in the ensuing
years were both a product of the availability of new fuels and a stimulus to
their accelerated extraction.
Alternatives to Fossil Fuels
The example of the widespread substitution of petroleum-based energy for coal-
based energy applies to many other diminishing resources. It demonstrates that
at one level of analysis the cure for depletion and environmental damage can be
found in new technologies that successfully address the problems generated by
old technologies. Still, this is not the end of the matter. Historical analogies have
their place, but like all analogies they must be treated with caution. The fact that
technological solutions have been found in the past is no guarantee that they will
be found in the future.
The limitations of technological solutions to energy shortages are all too appar-
ent when nuclear energy is examined. Beginning in the 1950s, nuclear energy was
heralded as the next stage in the evolution of energy sources, the logical solution
for fossil-fuel depletion. In the words of a former head of the U.S. Atomic Energy
Commission, nuclear power would produce electricity that was “too cheap to
meter.”21 This, of course, never came to pass, and many problems still attend the
widespread use of nuclear energy.
First, in addition to costing twice as much per megawatt as a coal-fired plant
and five times as much as one using natural gas,22 the generation of nuclear power
requires considerable expenditure of energy for uranium mining and transporta-
tion, equipment manufacture, plant construction, maintenance and administra-
tion, and waste disposal. When all of these energy inputs are taken into account,
nuclear plants make a much smaller contribution to net energy supplies than their
output indicates, and a number of years will go by before the cumulative contribu-
tions of a nuclear facility exceed the energy that went into its construction and
operation.
Second, the product of nuclear plants is electricity, and electricity has its limi-
tations. Electricity is not a source of primary energy; it is a means of transmitting
energy, and as much as 10 percent of the energy used to generate electricity is lost in
transmission. This means that, if possible, it is always better to produce energy close
to where it will be used. Although rooftop solar panels are not the most efficient
way of generating electricity, they may outperform a large power plant situated hun-
dreds of miles away when transmission losses are taken into account. Also, electrical
energy cannot be used for all purposes. Although fossil fuels are an important source
of electrical power, the substitution of nuclear plants for conventional generating
facilities would not come close to eliminating fossil-fuel use. In 2011, nuclear power
plants accounted for only 13.5 percent of the world’s electricity output, and a much
smaller percentage of total energy production.23 Increased electrical generation by
new nuclear plants would diminish but would not come close to eliminating the
need for energy derived from fossil fuels. Nuclear power is sometimes offered as an
effective way of slowing global warming through the replacement of power plants
fueled by coal or gas. Again, its contribution would be significant, but limited.
Depending on the type of conventional power plant being replaced, tripling the
present generating capacity supplied by nuclear power would reduce greenhouse
gases by between 11 and 26 percent.24
Finally, accidents at nuclear plants are rare but potentially catastrophic. In
2011 an earthquake and subsequent tsunami caused a partial core meltdown and
hydrogen explosion at the Dai-ichi nuclear facility near Fukushima, Japan. In
addition to the massive economic losses incurred by the region and the nation
as a whole, the disaster resulted in hundreds of immediate deaths and will lead to
the eventual loss of thousands of lives due to greater-than-normal incidences of
cancer. Less dramatically, nuclear wastes present serious long-term problems. The
processing of ores into nuclear fuel has left a residue of hundreds of millions of tons
of radioactive waste, while thousands of tons of radioactive materials remain as by-
products of civilian and military nuclear programs. In addition, large quantities of
low-level radioactive wastes are generated through medical and industrial applica-
tions of nuclear technologies. In the United States, these dangerously radioactive
materials now reside in 131 “temporary” storage sites while efforts to open a perma-
nent waste facility near Yucca Mountain, Nevada, have been stymied by technical
problems and political opposition.25
Problems with the disposal of nuclear wastes are not confined to the United
States, and they will only intensify with the expansion of nuclear power. If global pro-
duction of nuclear energy went from its current output of 350,000 megawatts
to 1 million megawatts, the resultant waste material would fill a disposal facility
the size of Yucca Mountain in only three-and-a-half years.26

Although nuclear power plants do not produce greenhouse gases, they present other dangers, as was demonstrated when a tsunami led to a catastrophic explosion at the Fukushima Dai-ichi nuclear power plant in 2011. The severely damaged No. 4 reactor building, shown here, housed 460 tons of nuclear fuel in a storage pool in its upper floors. (Kyodo via AP Images)

Alternatively, nuclear plants could use a “closed cycle” process through which nuclear wastes are recycled
into nuclear fuel. This method, however, entails additional costs. More ominously,
it results in the production of large quantities of plutonium that could be the basis
of a nuclear weapon if only a small amount fell into the hands of terrorists or hostile
states.
Lest it be thought that nuclear energy is being singled out as an especially
problematic source of energy, it should also be noted that other alternative energy
sources have their own shortcomings. The energy of the sun offers a fantastically
large source of potential energy; the solar energy that strikes the earth in 40 minutes
equals the world’s annual consumption of energy. The trick, of course, lies in effec-
tively using even a modest fraction of that energy. Significant strides have been
made in that direction. In 1970, electricity produced by photovoltaic cells cost
$60 per kilowatt-hour. By 1980 the cost had fallen to $1 per kilowatt-hour, and
by 2009 it stood at 46.9–70.5 cents per kilowatt-hour.27 Still, this amounts to four
to six times the present cost of electricity from coal-fired power plants. A more
economically competitive form of solar technology uses the sun to heat troughs of
water or oil that is then routed to a heat exchanger that in turn produces steam to
drive a turbine. An alternative technology runs a steam-powered turbine by focus-
ing sunlight on a “power tower” filled with liquid sodium. Up to now, the economic
and technical limitations of solar technologies have inhibited the widespread adop-
tion of power derived from the sun, but it may become more significant as these
technologies improve and conventional sources become increasingly costly.
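The claim that 40 minutes of sunlight equals a year of world energy use can be sanity-checked with a back-of-the-envelope calculation. The solar constant, the earth's radius, and world annual consumption (taken here as roughly 5 × 10^20 joules) are outside assumptions, not figures from the text.

```python
# Rough check: time for intercepted sunlight to match a year of
# world energy consumption (all inputs are assumed round numbers).
import math

solar_constant = 1361.0        # W per square meter, top of atmosphere
earth_radius = 6.371e6         # meters
annual_energy = 5e20           # joules per year, rough 2010-era figure

intercepted_power = solar_constant * math.pi * earth_radius ** 2
minutes = annual_energy / intercepted_power / 60
print(f"Sunlight matches annual consumption in ~{minutes:.0f} minutes")
# ~48 minutes, the same order as the 40-minute figure cited above
```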
Wind power, a very old technology, may also increase in importance in the
years to come. When sited in the proper location, the most efficient wind turbines
can produce electricity for 6.1–8.4 cents per kilowatt-hour, a price that makes them
competitive with conventional power plants.28 Although wind power supplies only
a small fraction of electrical energy in the United States, generating capacity has
grown steadily in recent years. As with solar energy, the use of wind power will
increase as more efficient turbines are developed and the costs of carbon-based
sources steadily rise, although complaints about noise and the degradation of land-
scape aesthetics will limit or prevent the construction of wind turbines in some
areas. Wind power also shares with solar cells the problem of episodic operation.
This means that some way to store energy is needed for times when the sun doesn’t
shine and the wind doesn’t blow. It may be possible to use wind- and solar-generated
power to compress air in order to run electrical generators when needed, but con-
struction costs would be very high in terms of both time and money.29 Wind and
solar also suffer from a fundamental mismatch: areas of maximum sunlight or wind
are usually far from the places that consume the most electricity. Redressing this
situation will require very sizeable investments in new electrical transmission lines.
The same can be said about hydropower, which accounts for 10 percent of electrical
generation in the United States. Hydropower also has the further drawback of alter-
ing the ecology of river systems, often to the detriment of the plants and animals
that live in them.
Many other sources of energy might become significant as reserves of fossil fuels
are depleted and their environmental consequences become unacceptable: biomass
(plants and other organic sources), geothermal, methanol (methyl alcohol) pro-
duced from coal and gas, ethanol (ethyl alcohol) produced from plants, and nuclear
fusion (although not for many decades, if ever). Still, for the immediate future, none
of them can rival petroleum as a relatively cheap and convenient source of energy.
Petroleum-based energy sources will be of central importance for many decades.
The key issue will be using them efficiently.
Doing More with Less
The generation and use of massive quantities of energy is the cornerstone of
industrial society. Much of the expansion of production (and of course consump-
tion) that has taken place since the Industrial Revolution has come through the
development of technologies dependent on external sources of energy. Countering
the depletion of fossil fuels as well as their adverse consequences will require the
use of new sources of energy. Still, it must be remembered that new sources will
generate new problems, as will the employment of effective pollution-control
strategies; in both cases, there are few, if any, cost-free technological fixes. In
the early twentieth century, the replacement of the horse by the automobile was
widely applauded because it promised a far cleaner urban environment—in those
days horses in New York City deposited 2.5 million tons of manure annually.30
One might speculate on which source of pollution is preferable, the emissions of
cars or of horses.
Wind turbines and nuclear power plants are alternatives to fossil fuel use, but each carries disadvantages as well as advantages. (© Sylvain Sonnet/Corbis)
Instead of substituting one source of pollution for another, a better course of
action is to try to reduce pollution by cutting back on energy use. This would not
be a new effort, for the long-term historical trend has been to use energy more
efficiently. In the United States, energy intensity (the ratio of energy used to pro-
ductive output) peaked during the second decade of the twentieth century and
has been declining ever since.31 The total amount of energy used has increased,
but the use of more fuel-efficient sources of energy (primarily the shift from
wood and coal to oil and natural gas), changes in the structure of the economy,
and the development of more energy-efficient technologies allowed the rate of
economic growth to outstrip by a comfortable margin the rate of energy use. A
heightened interest in promoting energy efficiency emerged in 1973, when the
first oil embargo put a serious crimp in energy supplies. The world’s industrial
economies went into a temporary tailspin as a result of skyrocketing oil prices,
but major efforts were made to use energy more efficiently, and today the indus-
trially developed countries use considerably less energy relative to gross domestic
product than they did in 1973.
A portion of the improved output-to-energy ratio can be attributed to the
growth of services (which use relatively little energy) and the relative decline of
industrial production (which uses a lot of energy). It should be pointed out, how-
ever, that the growth of the service sector has been accompanied by increasing
energy use in developing countries, most notably China, that have become major
producers of manufactured items bought by economically advanced nations. The
rest of the improvement in energy intensity has been the result of increased effi-
ciency. New technologies along with better management of heating, lighting, and
ventilation systems have reduced energy costs by billions of dollars.32 Relatively
modest changes in everyday items such as refrigerators and freezers have produced
large benefits. Refrigerators sold today use only one-quarter the energy of those
made in 1974. Were it not for this substantial improvement, operation of the
150 million refrigerators and freezers in the United States would require the gen-
eration of an additional 40,000 megawatts of electrical power.33 Similar savings
have been gained by replacing incandescent bulbs with compact fluorescent lamps
(CFLs). CFLs use about one-third the energy of conventional incandescent bulbs,
and they produce far less heat than incandescents. Hundreds of millions of these
lights have been installed around the world, where the energy they save equals
the generating capacity of dozens of power plants. For individual consumers,
CFLs last longer and reduce electrical bills, but they require a minute or two to
produce maximum illumination. As with all technological advances, some costs
come along with the benefits. The manufacture of CFLs requires more materials
and energy, offsetting some of the savings gained through their operation. Also,
CFLs contain tiny amounts of mercury, a highly toxic element, so special care is
required for their disposal.
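The refrigerator figure can be unpacked by spreading 40,000 megawatts across 150 million appliances. Treating that capacity as though it ran continuously, which ignores the difference between generating capacity and average consumption, gives a rough per-unit saving.

```python
# Implied per-unit saving behind the refrigerator example (a
# simplification: capacity is treated as continuous average draw).
avoided_w = 40_000e6                   # 40,000 MW, in watts
units = 150e6                          # refrigerators and freezers

per_unit_w = avoided_w / units
annual_kwh = per_unit_w * 8760 / 1000  # hours per year, Wh -> kWh
print(f"~{per_unit_w:.0f} W per unit, ~{annual_kwh:.0f} kWh per year")
# ~267 W, roughly 2,300 kWh per appliance per year
```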
A multitude of energy conservation measures can also be applied to commercial
and residential buildings. Many of them require nothing in the way of advanced
technology. Intelligent site selection and proper orientation of buildings and the
windows in them can result in structures that stay warmer in the winter and cooler in the
summer while making use of natural light. Adequate sealing and insulation prevents
the loss of hot air in the winter and cool air in the summer, while the provision
of thermal storage in buildings allows the use of electricity for air conditioning to
be shifted to off-peak hours, thereby obviating the need to build new generating
facilities.34 At the same time, there are many new technologies that can produce
significant energy savings. Microprocessor-based integrated controls for heating,
ventilation, and cooling can keep temperatures within narrow limits that are opti-
mal for different rooms in a building, so during the summer a computer room can
be kept cooler than a storage room.35
Additional gains could come through more intensive recycling efforts. One
noteworthy example comes from a prodigious user of energy, the aluminum
industry. Aluminum smelters consume 3 percent of the world’s electrical supply,
and the electricity used to produce one ton of aluminum would run a typical
U.S. household for nearly 18 months. Much less energy is required to convert
aluminum beverage cans back into raw aluminum, yet only 44 percent of these
cans were recycled in the United States in 2003, leaving 2.5 million tons of
aluminum to be plowed into landfills. The recycling of these cans would have
saved 36.7 billion kilowatt-hours of electricity, enough to supply a year’s worth of
electricity to 3.5 million households.36
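These figures are internally consistent, as a quick check shows: the implied energy per ton of recycled aluminum works out to about what a household uses in the "nearly 18 months" cited above.

```python
# Consistency check on the aluminum recycling figures quoted above.
saved_kwh = 36.7e9          # electricity saved by full recycling, kWh
tons = 2.5e6                # aluminum landfilled instead of recycled
households = 3.5e6          # households supplied for one year

kwh_per_ton = saved_kwh / tons
kwh_per_household_year = saved_kwh / households
months = 12 * kwh_per_ton / kwh_per_household_year
print(f"~{kwh_per_ton:,.0f} kWh/ton runs a household ~{months:.0f} months")
# ~14,700 kWh per ton, about 17 months of household electricity
```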
More Miles to the Gallon
Even greater savings follow when the fuel consumption of private vehicles is
reduced. Cars and light trucks account for 43 percent of petroleum consump-
tion in the United States, and 11 percent of global petroleum consumption—a
quantity of oil only slightly less than the annual production of Saudi Arabia.37
Automobiles also are responsible for 20 percent of CO2 production in the United
States, an amount greater than the total emissions of CO2 from all but three
other nations of the world.38 Improving automotive technology to produce
greater efficiency would thus be a major step in arresting both resource depletion
and global warming.
There are essentially three ways of enhancing the fuel economy of cars
and trucks: reducing weight, improving aerodynamics, and making engines and
accessories perform more efficiently. All of these have contributed to the impres-
sive gains in fuel economy scored by American automobiles after the first energy
crisis in 1973. In that year, American cars averaged about 12 miles per gallon
(mpg); by 2011, fuel mileage had nearly doubled to an average of 23.4 mpg.39
The largest share of this improvement was effected by reducing weight. Cars
became smaller and lighter, but the shift to front-wheel-drive designs for many
models maintained interior space. Better aerodynamics also helped, while under
the hood, fuel injection, turbochargers, computerized engine controls, variable
valve timing and lift, and detail improvements allowed engines to squeeze more miles out of each gallon with few, if any, performance losses.40 It is not certain that the
improvements to fuel mileage that occurred after the first round of energy shocks
can continue at the same rate. Still, there are many promising technologies that
could result in considerably more fuel-efficient vehicles. Greater use of alumi-
num and plastics can make cars lighter, as can the use of more space-efficient
designs. More aerodynamic improvements will be made, for they promise sub-
stantial rewards; a 10 percent reduction of a car’s air resistance can improve
fuel economy by 3.5 percent, while a 10 percent weight reduction yields only a
2.5 percent gain.41 Many of today’s cars are aerodynamically “cleaner” than auto-
mobiles of the past; the original Volkswagen “Beetle” had a coefficient of drag
(cd) of 0.48, while the cd for a 2004 Toyota Prius was only 0.26.42 Still, there is
considerable room for improvement; experimental vehicles have been built with
drag coefficients as low as 0.14, better than that of an F-15 fighter.43 Production
cars will not likely approach this figure in the near future, but considerable aero-
dynamic gains are possible.
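
The cited sensitivities can be combined into a rough estimator. The sketch below extrapolates them linearly, which is an assumption; in practice the returns diminish as the reductions grow large:

def mpg_gain(drag_cut, weight_cut):
    # Linear extrapolation of the sensitivities cited above:
    # a 10% drag cut yields a 3.5% gain, a 10% weight cut a 2.5% gain.
    return 0.35 * drag_cut + 0.25 * weight_cut

# Example: the move from the Beetle's cd of 0.48 to the 2004 Prius's 0.26.
drag_cut = (0.48 - 0.26) / 0.48
print(f"Drag reduction: {drag_cut:.0%}")                               # ~46%
print(f"Estimated gain from aerodynamics alone: {mpg_gain(drag_cut, 0):.0%}")
# ~16%, before any credit for weight savings or engine improvements.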
Although the internal combustion engine is not likely to be replaced any time
soon, other parts of the automobile’s power train could undergo significant changes,
such as the widespread use of continuously variable transmissions that allow en-
gines to run at the most efficient rotational speeds. Even more radical concepts
have already been tested. It is possible that in a decade or two, some vehicles will
be powered by fuel cells. Meanwhile, innovative power plants are already being
offered to customers. Several automobile manufacturers produce battery-powered
electric vehicles that up to now have generated only modest consumer interest.
This Volkswagen experimental car is able to go up to 235 miles on one gallon of gasoline.
Some of the technology it embodies will eventually be used for cars sold to the public. (Image
by © Car Culture/Corbis)
More successful have been hybrid vehicles that use both an internal combustion
engine and an electric motor to optimize fuel consumption. Hybrids are now a
common sight on the world’s highways; by 2010, Toyota had sold two million Prius
hybrids.44
Fuel-efficient cars will become more attractive as gas prices rise and as stricter corporate average fuel economy (CAFE) standards mandate more miles per gallon of gasoline. It is far from certain that
the days of the conventional automobile are numbered, and it is likely that new
automotive technologies will be able to at least partially compensate for dimin-
ished fuel supplies.
Economic Systems, Government Policies, and the Environment
Just as technological advances affect energy supplies and energy-conservation meas-
ures, so too are there a substantial number of innovative technologies that can
help to control toxic wastes, pollutants, and other undesirable by-products of mod-
ern technology. Many new technologies can be used to substitute for hazardous
materials and processes, make more efficient use of potentially toxic materials, and
counteract existing hazards. But as important as these technological advances are,
one would be overly optimistic to think that new and improved technologies are
the solution to all of our environmental ills. As Chapter 1 indicated, technological
fixes have their place, but when major problems are involved, they must be comple-
mented by social, political, and cultural changes. Some of these changes may occur
spontaneously, but many of them must be encouraged by laws, regulations, and
other government interventions. Even when a technological fix can be developed,
it is often necessary to stimulate its use through the formation and implementation
of effective policies.
During the energy crisis brought on by the OPEC oil embargo in 1973, some
scholarly observers came close to panic. As they saw things, the sudden reduction in oil supplies was the latest symptom of a deepening crisis. Combined with popula-
tion growth and environmental destruction, the contraction of energy supplies was
forcing major changes in industrial societies. Democratic procedures would have to
be replaced by powerful, even authoritarian, government mandates. One influential
economist looked toward a dangerous future and reluctantly concluded that “the
passage through the gantlet ahead may be possible only under governments capable
of rallying obedience far more effectively than would be possible in a democratic
setting.”45 His assumption, one shared by many others, was that in the face of a
massive crisis, the only salvation lay in a centralized government’s ability to force
people to sacrifice their individual interests in favor of the collective good. A highly
centralized government was inescapable, for only such an authority could bring suf-
ficient expertise to bear on the shaping of the necessary policies and at the same
time stifle individual actions contrary to these policies.
There may be a surface plausibility in these ideas, but a little reflection should
serve to demonstrate their falsity. The centralization of governance does not
necessarily improve the ability to forge and administer effective policies, especially
those relating to something as complex as the natural environment. Even with the
best intentions, a centralized decision-making body would be overwhelmed by the
amount of information necessary for the management of a fantastically complex
ecosystem. The second assumption, that governmental actions will be largely
directed toward the public good, is even less tenable. The self-serving tendencies
of governments when they are insulated from any oversight by the citizenry seem
evident enough.
The shortcomings of authoritarian governments have been made amply evi-
dent by the performance of centrally planned economies ruled by Communist
regimes. To take one example, all of them used energy far less efficiently than
market-oriented industrial economies. China, one of the worst in this regard, was
using three times as much energy per unit of gross national product as Japan in the
early 1990s.46 This can be partially excused on the grounds that a poor nation like
China could not afford to rapidly deploy new, energy-saving technologies, but the
wastage of energy was universal in the Communist world. Even more disturbing,
these countries also exhibited extraordinarily high levels of environmental degra-
dation:47
In the Soviet Union . . . energy production had, by 1989, increased concentrations
of toxic air pollutants to a level ten times the maximum permissible in 88 Soviet
cities with a combined population of 42 million people. . . . Twenty large
hydro-electric power stations erected on lowland rivers have transformed flowing
rivers into stagnant reservoirs and reduced the ability of these rivers to assimilate
wastes. The Chernobyl nuclear reactor accident killed several dozen people and
contaminated 10,000 square kilometers with levels of radioactivity that exceeded
15 curies per square kilometer, affecting more than 250,000 people.
The countries of Eastern Europe that endured more than four decades of
Soviet domination did no better. In Poland and Czechoslovakia, sulfur dioxide
emissions from the uncontrolled burning of coal were four to eight times higher
than in most Western European countries. Toxic elements accumulated in the soil
and contaminated food to such a degree that the bone growth of one-third of the
children in the worst-affected parts of Czechoslovakia was retarded by 10 months
or more. In Poland, the life expectancy for men aged 40 to 60 actually dropped, and
one-third of the population was expected to suffer from respiratory disease, cancer,
skin disease, or afflictions of the central nervous system as a result of environmental
pollution.48
While environmental degradation was endemic in centrally planned econo-
mies, one shouldn’t jump to the conclusion that a market economy is environmen-
tally benign. The history of capitalist economic development has also been marred
by numerous examples of pollution, contamination, and heedless exploitation of
natural resources. This is to be expected; a pure market system, engine of economic
growth though it may be, is by its nature a threat to the environment. Markets gen-
erally do a good job of coordinating production and consumption, but they are not
effective in dealing with effects that lie outside the transactions of individual buyers
and sellers. The price paid for a commodity sold by one party and bought by another
does not take into account the costs borne by other parties, such as the damages
suffered by downstream residents when a paper mill dumps its wastes into a river.
In similar fashion, a transaction may not take into account the loss of an
irreplaceable resource to future generations. The price of a commodity may reflect
some depletion costs, but time lags may prevent adequate corrective actions. In
many cases, the price of a dwindling resource will increase at too slow a rate to
signal its rapid depletion. The result is an overshoot-and-crash situation. This situ-
ation resembles a car driven by someone with slow reflexes; by the time the driver
perceives a problem ahead and decides to apply the brakes, it is too late to avoid a
collision.49
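
The slow-driver analogy can be made concrete with a toy simulation. In the sketch below, consumers cut back sharply once scarcity registers, but they perceive the remaining stock only after a delay; every number is an illustrative assumption, not a claim about any real resource:

def years_until_crash(lag, stock=1000.0, use=30.0, years=100):
    # Deplete a resource whose scarcity registers in prices only
    # 'lag' years after the fact.
    levels = [stock]
    for year in range(years):
        seen = levels[max(0, year - lag)]   # the stock as perceived today
        if seen < 300:                      # scarcity finally registers
            use = max(use * 0.5, 2.0)       # belated, drastic conservation
        current = max(levels[-1] - use, 0.0)
        levels.append(current)
        if current == 0.0:
            return year + 1                 # overshoot and crash
    return None                             # gradual, managed decline

for lag in (0, 10):
    crash = years_until_crash(lag)
    print(f"perception lag {lag:>2} years:",
          f"exhausted in year {crash}" if crash else "managed decline")

With no lag, conservation begins while an ample stock remains; with a ten-year lag, the price signal arrives only after the resource is already gone.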
As will be noted in greater detail in Chapter 18, the imperfections of a market
system can be alleviated by intelligent regulations and tax policies, government-
sponsored research programs, and educational efforts. The consequences of gov-
ernmental policies have been particularly evident in the development and use of
energy technologies, although the results have not always been benign. Critics of
the nuclear industry argue that atomic power would never have gained a foothold
without loan guarantees, limits on liability in the event of accidents, and financial
support for the disposal of wastes.50 Renewable energy has also depended to a sig-
nificant degree on government subsidies. Since 1992, wind power installations have
received a tax credit of 2.2 cents per kilowatt-hour of electricity generated for their
first 10 years of operation, while until recently the solar industry was supported by
cash grants that amounted to 30 percent of the construction costs of new projects.51
The oil and gas industry has not received direct subsidies, but according to critics,
tax breaks of a sort that only an accountant or lawyer could understand and love
have been worth $70 billion over a 10-year period.52
Less controversial is governmental support of scientific and technological
research relating to the environment. One striking example of the government’s
contribution to energy conservation is the development of new lighting technolo-
gies. About one-third of all electricity generated in the United States is used for
lighting. As was noted a few pages ago, some of this energy is now being used
more efficiently through the use of compact fluorescent lamps (CFLs). Some of
the research that led to the commercial marketing of CFLs was conducted by the
federal government at a cost of only $1.5 million, yet it ultimately resulted in saving
billions of dollars in energy costs.
Providing economical and environmentally friendly sources of energy will
require an increased pace of research and development efforts on a variety of
fronts. Although private industry has been an important source of energy-related
R&D funding, governments also have been essential, if erratic, providers. Federal
government-sponsored R&D expenditures on nonfossil sources of energy rose sharp-
ly in the wake of the energy crises in the 1970s, but only temporarily. Research and
development for nuclear energy continued at a high level in the 1980s, but R&D for
other sources of energy underwent a sharp decline and then more or less leveled off
from the mid-1980s onward.53 The federal government’s generally weak and waver-
ing support for energy R&D is one of the reasons that reliance on fossil fuels has
decreased only marginally since 1973, from 93 percent then to 85 percent today.54
The federal budget for the 2008 fiscal year allocated more than $1.44 billion for
R&D expenditures directly tied to energy (for comparison, the budget for weapons
research was $68.1 billion).55 Of this sum, 32.2 percent went to oil, natural gas, and
coal, while 30.3 percent was directed to nuclear energy. This left 37.4 percent, or $540.4 million, for renewable sources of energy (biomass, solar, wind, water power,
geothermal, and hydrogen fuel cells). There is no guarantee, of course, that higher
levels of R&D support would have resulted in an array of technological break-
throughs on the energy front or for the benefit of the environment as a whole. Still,
it does seem likely that we would be in better shape today if R&D on conservation
and renewable sources of energy had been supported at the level that conventional
sources of energy enjoyed for many years.
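
For readers who want to see the arithmetic, the fiscal 2008 allocation works out as follows. This is a minimal reconstruction from the percentages above; the small shortfall from the cited $540.4 million reflects the budget being "more than" $1.44 billion, plus rounding:

BUDGET = 1.44e9   # FY2008 energy R&D, "more than $1.44 billion"
shares = {
    "oil, natural gas, and coal": 0.322,
    "nuclear energy": 0.303,
    "renewable sources": 0.374,
}
for category, share in shares.items():
    print(f"{category:<28} ${share * BUDGET / 1e6:,.1f} million")
print(f"shares sum to {sum(shares.values()):.1%}")  # 99.9%, a rounding artifact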
At the same time, it should be recalled that the availability of a potentially
superior technology does not ensure that it will be used. The development of
technologies that could help us use energy more efficiently, reduce pollution,
and make fewer demands on a fragile environment is only part of the solution.
Individual choices, corporate and governmental policies, the general distribution
of income and power, and the willingness to forsake short-term advantages for
longer-term benefits are at least as important as any combination of technologi-
cal fixes. These will be the ultimate determinants of our ability to maintain and
even improve our material standard of living while at the same time preserving
the environment that has sustained us through centuries of carelessness, neglect,
and exploitation.
Questions for Discussion
1. What do you consider to be the greatest environmental threat facing the world
today? What sort of measures need to be taken to counter it? What will be the
major sources of resistance to these measures?
2. The long-term environmental consequences of CO2 emissions cannot be
known for sure. The earth’s atmosphere is an extremely complex system that
is driven by a great multitude of variables. Predictions of climate trends are,
and probably always will be, based on data that are not completely adequate.
Are we justified in enacting laws to enforce significant reductions in the use of
fossil fuels on the basis of imperfect scientific information? At what point can
we decide that the assessment of risk is exact enough to warrant taking firm
actions?
3. A distinction has to be drawn between conservation and curtailment. The
former implies doing the same sorts of things with fewer inputs of materials
and energy, while the latter implies an actual loss of output and consumption.
Are the energy-saving measures taken in the last 15 years primarily examples
of conservation or curtailment? Will future energy-saving strategies be based
on the former or the latter?
4. The continual extraction and use of natural resources does not result in their
being completely “used up,” but eventually the costs of extracting a diminish-
ing resource exceed the value of the resource; barring improvements in extrac-
tion technologies, the resource is as good as gone. This may not happen for
a long time, perhaps not until we as individuals are long departed from this
earth. Is there anything immoral about using large quantities of the earth’s
resources for our own benefit? Do we owe anything to future generations? If so,
by how much should we restrict our use of resources? How should these restric-
tions be mandated?
5. A considerable improvement in the fuel economy of automobiles has been the
result of “downsizing.” Yet all other things being equal, smaller cars are not as
safe as larger ones. Can a substantial savings in fuel justify the likelihood of
more traffic-related injuries and fatalities? At the same time, more fuel-efficient
automobiles also produce fewer pollutants, leading to fewer pollution-induced
deaths and illnesses. Is it possible to construct a balance sheet that takes into
account all of these factors in order to determine if smaller cars improve or
threaten our physical health?
6. A major issue in environmental analysis is sustainability. Is our present economy
and society sustainable over the long run? In what ways do present technolo-
gies undermine sustainability? Are there any that promote it?
Notes
1. United States Department of Energy, Energy Information Administration, "International Energy Outlook, 2011," accessed on February 20, 2012, at http://www.eia.gov/forecasts/ieo/pdf/0484(2011), p. 1.
2. Janet L. Sawin, "Carbon Emissions Continue Unrelenting Rise," in The Worldwatch Institute, Vital Signs, 2007–2008: The Trends That Are Shaping Our Future (New York and London: W. W. Norton, 2007), p. 43.
3. Newton H. Copp and Andrew W. Zanella, Discovery, Innovation and Risk: Case Studies in Science and Technology (Cambridge, MA, and London: MIT Press, 1993), p. 299.
4. United Nations, World Meteorological Organization, "Greenhouse Gas Bulletin: The State of Greenhouse Gases in the Atmosphere Based on Global Observations through 2009," accessed on February 20, 2012, at http://www.wmo.int/pages/prog/arep/gaw/ghg/documents/GHG_bull_6en.
5. United Nations Intergovernmental Panel on Climate Change, "Climate Change 2007: Synthesis Report," accessed on June 23, 2008, at http://www.ipcc.ch/pdf/assessment-report/ar4/syr/ar4_syr, p. 30.
6. Anuj Chopra, "How Global Warming Threatens Millions in Bangladesh," U.S. News and World Report (March 26, 2009), accessed on August 8, 2012, at http://www.usnews.com/news/energy/articles/2009/03/26/how-global-warming-threatens-millions-in-bangladesh.
7. Food and Agriculture Organization of the United Nations, "State of the World's Forests 2007" (Rome: FAO, 2007), accessed on October 20, 2008, at http://www.fao.org/docrep/009/a0773a/a0773e00.htm.
8. Rebecca Lindsey, "Tropical Deforestation," NASA Earth Observatory (March 30, 2007), accessed on June 24, 2008, at http://earthobservatory.nasa.gov/Library/Deforestation/printall.php.
9. United States Environmental Protection Agency, "Municipal Solid Waste Generation, Recycling, and Disposal in the United States: Facts and Figures for 2010," accessed on February 20, 2012, at http://www.epa.gov/osw/nonhaz/municipal/pubs/msw_2010_rev_factsheet.
10. John Heilprin, “EPA Projects Hazardous Waste Sites Growing in Number and Cleanup
Costs,” Environmental News Network (2004), accessed on June 24, 2008, at http://
www.enn.com/top_stories/article/520.
11. Kenneth S. Deffeyes, Hubbert's Peak: The Impending World Oil Shortage (Princeton, NJ: Princeton University Press, 2001).
12. Leonardo Maugeri, “Squeezing More Oil from the Ground,” Scientific American 301, 4
(October 2009).
13. David R. Montgomery, Dirt: The Erosion of Civilizations (Berkeley, Los Angeles, and London: University of California Press, 2007), pp. 4, 171–174.
14. Carlo M. Cipolla, Before the Industrial Revolution: European Society and Economy,
1000–1700, 2nd ed. (New York: W.W. Norton, 1980), p. 138.
15. Quoted in Ibid., p. 288.
16. W. Stanley Jevons, The Coal Question: An Inquiry Concerning the Progress of the Nation,
and the Probable Exhaustion of Our Coal-Mines, 3rd ed. (London: Macmillan and Co.,
1906), p. 9.
17. Ibid., p. 187.
18. Ibid., pp. 184–185.
19. Ibid., p. 185.
20. James R. Chiles, “Spindletop,” American Heritage of Invention and Technology 3, 1
(Summer 1987): 34.
21. For a discussion of the context for this statement, see “Too Cheap to Meter,” at http://
media.cns-snc.ca/media/toocheap/toocheap.html (accessed September 8, 2012).
22. Adam Piore, “Planning for the Black Swan,” Scientific American 304, 6 (June 2011): 50.
23. Nuclear Energy Institute, “World Statistics,” accessed on February 21, 2012, at http://
www.nei.org/resourcesandstats/nuclear_statistics/worldstatistics/.
24. Calculated from figures in John M. Deutch and Ernest Moniz, "The Nuclear Option," Scientific American 295, 3 (September 2006), p. 78.
25. Matthew L. Wald, “What Now for Nuclear Waste?” Scientific American 301, 2 (August
2009).
26. Deutch and Moniz, op. cit., pp. 82–83.
27. Matthew L. Wald, “The Power of Renewables,” Scientific American 300, 3 (March 2009).
28. Ibid., pp. 58–59.
29. Ken Zweibel, James Mason, and Vasilis Fthenakis, “A Solar Grand Plan,” Scientific
American 296, 12 (December 2007).
30. James J. Flink, The Car Culture (Cambridge, MA: MIT Press, 1975), p. 34.
31. "Power Slide," The Economist Online (January 11, 2011), accessed on February 21, 2012,
at http://www.economist.com/blogs/dailychart/2011/01/energy_use.
32. Arthur H. Rosenfeld and David Hafemeister, "Energy-Efficient Buildings," Scientific American 258, 4 (April 1988): 78.
33. Eberhard K. Jochem, “An Efficient Solution,” Scientific American 295, 3 (September
2006), p. 66.
34. Rosenfeld and Hafemeister, “Energy Efficient Buildings,” p. 81.
35. Rick Bevington and Arthur H. Rosenfeld, “Energy for Buildings and Homes,” Scientific
American 263, 3 (September 1990): 78.
36. Andrew Wilkins, “Aluminum Production Increases Steadily,” in The Worldwatch
Institute, Vital Signs, 2006–2007: The Trends That Are Shaping Our Future (New York
and London: W.W. Norton, 2006), p. 58.
37. Robert L. Hirsch, Roger Bezdek, and Robert Wendling, “Peaking of World Oil
Production: Impacts, Mitigation, and Risk Management” (United States Department
of Energy, 2005), accessed on June 23, 2008, at http://www.netl.doe.gov/publications/others/pdf/Oil_Peaking_NETL, p. 23.
38. John DeCicco, Rod Griffin, and Steve Eertel, "Putting the Brakes on U.S. Oil Demand" (Environmental Defense, 2003), accessed on June 24, 2008, at http://www.edf.org/documents/3115_OilDemand, p. 3.
39. Eric Loveday, "U.S. Government Fleet Jumps to 23.4 MPG Average Thanks to Fuel-Efficient Vehicles," Greencar News, accessed on August 26, 2012, at http://green.autoblog.com/2011/05/05/us-government-fleet-jumps-23-mpg-average/.
40. Ben Knight, “Better Mileage Now,” Scientific American 302, 2 (February 2010).
41. James J. Flink, “The Path of Least Resistance,” American Heritage of Invention and
Technology 5, 2 (Fall 1989): 42.
42. EcoModder Wiki, "Vehicle Coefficient of Drag List" (July 16, 2012), accessed on August 8, 2012, at http://ecomodder.com/wiki/index.php/Vehicle_Coefficient_of_Drag_List.
43. Deborah L. Bleviss, “Saving Fuel: Time to Get Back on Track,” Technology Review 91,
8 (November–December 1988): 50.
44. "Worldwide Toyota Prius Sales Crack 2-Million Mark, 10-Year Anniversary Celebration Planned," Auto News (October 8, 2010), accessed on February 21, 2012, at http://www.autoblog.com/2010/10/08/worldwide-toyota-prius-sales-crack-2-million-mark-10-year-annive/.
45. Robert L. Heilbroner, An Inquiry into the Human Prospect (New York: W.W. Norton,
1974), p. 110.
46. William U. Chandler, Alexei A. Makarov, and Zhou Dadi, “Energy for the Soviet
Union, Eastern Europe, and China,” Scientific American 263, 3 (September 1990): 125.
47. Ibid., p. 122.
48. Ibid.
49. Kimon Valaskakis et al., The Conserver Society: A Workable Alternative for the Future
(New York: Harper & Row, 1979), p. 166.
50. Doug Koplow, "Nuclear Power: Still Not Viable without Subsidies," Union of Concerned Scientists (February 2011), accessed on February 21, 2012, at http://www.ucsusa.org/assets/documents/nuclear_power/nuclear_subsidies_report.
51. Diane Cardwell, "Energy Tax Breaks Proposed Despite Waning Support for Subsidies," New York Times (January 26, 2012), accessed on February 21, 2012, at http://www.nytimes.com/2012/01/27/business/energy-environment/clean-energy-projects-face-waning-subsidies.html?_r=2&pagewanted=1&ref=windpower.
52. Seth Hanlon, "Big Oil's Misbegotten Tax Gusher," Center for American Progress (May 5, 2011), accessed on February 21, 2012, at http://www.americanprogress.org/issues/2011/05/big_oil_tax_breaks.html. It should be pointed out that many of these quasi-subsidies are also collected by firms in other sectors of the economy.
53. Government Accountability Office, "Budget Trends and Challenges for DOE's Energy R&D Program" (2008), accessed on June 28, 2008, at http://www.gao.gov/new.items/d08556t, p. 6.
54. Ibid., p. 1.
55. Ibid., p. 14.
Chapter Seven

Medical Technologies
Some may doubt the overall benefits of technological advance, but almost every-
one would agree that improvements in medical technologies have made our lives
better. Who would want to go back to the not-so-good old days, when vast num-
bers of children died in infancy, epidemic plagues wiped out millions, and pain
and suffering were an inevitable part of everyday life? Not too long ago, medicine’s
ability to cope with sickness was severely limited at best. In many—perhaps
most—cases, medical intervention only made bad situations even worse. The
knowledge base of medical technology was pathetically underdeveloped, render-
ing the majority of therapies ineffective or worse. As Lewis Thomas describes this
unhappy situation:1
Bleeding, purging, cupping, the administration of infusions of every known plant,
solutions of every known metal, every conceivable diet including total fasting,
most of these based on the weirdest imaginings about the cause of disease, con-
cocted out of nothing but thin air—this was the heritage of medicine up until a
little over a century ago.
More recently, technological advance supported by an expanding knowledge
base has extended the lives of many, while eliminating or at least reducing a great
deal of agony. How could anyone quibble with such positive results? And yet, it
should be apparent by now that, in aggregate, technological advance is never an
unmixed blessing. Problems are solved or at least alleviated, but at the cost of a new
set of difficulties, many of them unforeseen. Medical technology is no exception. As
we shall see, for all its unquestioned benefits, technological advance has generated
some vexing problems. Understanding the nature of these problems is important in
its own right, and at the same time it may help us come to a deeper understanding
of how technological advance can take with one hand as it gives with another.
In assessing the significance of modern medicine, it is important to bear in
mind that not all historical improvements in mortality and morbidity are the result
of advancing medical technologies. During the last 200 years, dramatic advances
have taken place in medical care: immunization for the prevention of infectious
diseases, antiseptic surgery, antibiotics, and in recent decades the transplantation of
organs. However, the positive impact of all these advances has been comparatively
small. The real gains in life span and improvements in the physical quality of life
have been due to better nutrition, sanitation, and personal hygiene, along with
higher standards of living in general.2
Recent decades have seen the development and application of many new medi-
cal treatments, but it is not at all certain that they have had their intended effect.
In fact, there may even be a negative correlation between the use of these treat-
ments and patient health. This is strongly suggested by a recent study of Medicare
patients conducted by researchers at Dartmouth Medical School. The
study found considerable regional variations in the treatment of patients with
serious illnesses during the last two years of their lives, with aggressive medical
treatments being much more common in some parts of the country than in oth-
ers. Paradoxically, however, patients in these regions were less satisfied with their
care, ended up no better in terms of day-to-day function, and died at a slightly
higher rate than patients in regions with less aggressive care. There are a number
of reasons for this anomaly, but chief among them is the higher concentration
of medical specialists in regions that have the most aggressive medical practices.
Medical care in regions with large proportions of specialists is characterized by an
excessive use of tests and procedures, extended periods of hospitalization, and a frag-
mented approach to patient care that sharply contrasts with the holistic approach
of primary-care physicians.3
This is not to say that technological advances in medicine have been of no
value. Many people are alive today because of kidney dialysis, computerized tomog-
raphy (CT), and antibiotics. Modern medical technologies have generated new pos-
sibilities and new hopes. Instead of accepting sickness and death as the inevitable
product of fate or God’s will, we increasingly expect that cures will be found, and
that they will be made available when we need them. But along with rising expec-
tations come rising costs. We do not have to suffer and die quietly (and inexpen-
sively); new medical technologies allow something to be done. As a result, medical
advances and their associated costs have the potential to expand without limit, for
as long as people are subject to sickness and death, there will be no lack of demand
for new medical interventions. Moreover, unlike other goods and services, expen-
ditures on medical care are not usually weighed against other possible expenditures.
You may decide to defer the purchase of a new car so that you can make a down pay-
ment on a house, but indefinitely deferring a quadruple bypass operation is not an
option when it is all that stands between you and a life of progressive debilitation.
While the costs of medical care have been escalating, they have become largely
irrelevant to most individuals because government and private insurance programs
have paid for the bulk of medical expenditures. In 1960 one-half of U.S. health-
care costs were directly borne by patients and their families. The years since then
have seen a major expansion of private insurance plans, as well as the creation and
expansion of government-supported Medicare and Medicaid programs. By 2004
private insurance plans paid for 35.4 percent of medical expenditures. Medicare
and Medicaid were responsible for 17.2 and 16 percent of expenditures, respectively,
while other private and government programs accounted for another 17.7 percent,
leaving only 13.7 percent of medical costs to be borne by individuals.4
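
A quick check confirms that the payment shares quoted above are exhaustive, and it highlights the striking fall in the out-of-pocket share since 1960:

payers = {   # shares of 2004 U.S. medical expenditures, in percent
    "private insurance": 35.4,
    "Medicare": 17.2,
    "Medicaid": 16.0,
    "other private and government programs": 17.7,
    "patients and families (out of pocket)": 13.7,
}
print(f"total: {sum(payers.values()):.1f}%")  # 100.0, so the list is complete
print(f"out-of-pocket share, 1960 vs. 2004: 50% -> "
      f"{payers['patients and families (out of pocket)']}%")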
While patients are insulated from much of the cost of medical care, far more
medical treatments are available than was the case 50 years ago. Under these cir-
cumstances, there has been a strong tendency to provide increasing amounts of med-
ical care and for a country’s medical costs to increase with no apparent endpoint. As
one analyst put it, “When benefits do not have to be weighted against costs—when
the only criterion is that there be some benefit—the number of good things that can
be done in medical care is, for all practical purposes, unlimited.”5 All of these “good
things” have created a situation in which about half the increase in medical costs
over recent decades can be attributed to advances in medical technology.6
Although it is difficult to do so, especially when life and well-being are at stake,
choices eventually have to be made. Increasingly sophisticated medical technologies
offer many benefits, but they can also generate costs that threaten to spiral out of
control. Items of high-technology medicine should not be applied indiscriminately;
like any other claim on our financial and other resources, they need to be evaluated
in terms of the benefits they provide and the costs they incur. A useful starting point
for evaluating the appropriateness of particular medical technologies has been pro-
vided by Bryan Jennett, a British neurosurgeon. According to his analysis, a medical
technology can be deemed inappropriate for a number of different reasons:7
Unnecessary: The patient has a condition too advanced to respond to treatment, as would be the case with attempting an extended treatment of metastatic cancer.

Unsafe: Complications outweigh the probable benefit, as with some invasive investigations and dangerous therapies.

Unkind: The quality of life after treatment is not good enough, or its duration long enough, to have justified the intervention.

Unwise: It diverts resources from activities that would yield greater benefits to other patients.
These are useful analytical categories, but in actual practice it may be very dif-
ficult to determine which of these categories apply to particular cases. Doctors still
need to have the ability to make accurate prognoses, and compassion always has to
be combined with informed judgment.
New Medical Technologies: Choices and Trade-offs
It is not always easy to scrutinize a particular medical technology in terms of costs,
benefits, and general appropriateness—especially when it applies to us or someone
close to us. Still, it is worth making the effort, if only to better comprehend the
dilemmas often posed by the advance of medical technologies. The brief studies that
follow should help to make the key issues more concrete.
The Case of Kidney Dialysis
Sophisticated technologies may be of limited importance to the population as a
whole, but for many individuals they are literally the difference between life and
death. The benefits of recent technological advances are nowhere more evident
than they are to people suffering from kidney disease. In a healthy individual, the
kidneys regulate the body’s acid–base concentration, maintain proper water bal-
ance, and concentrate metabolic wastes that are subsequently excreted as urine.
Complete or substantial kidney failure (known in medical jargon as “end-stage renal
disease”) usually results in incapacitation and premature death. Many are afflicted:
urinary diseases are the nation’s fourth-largest killer, right behind cardiovascular
diseases, cancer, and pneumonia.
In the early 1940s a Dutch physician used a bathtub and parts salvaged from a
foundry to construct the first device to successfully take the place of the kidneys.
The inventor, Dr. Willem Kolff, later immigrated to the United States, and his
device inspired the development of more refined versions at a number of American
hospitals. At that time, cleansing the blood by means of an artificial kidney (a
process known as dialysis) was confined to short periods of time. Use of dialysis as a
substitute for functioning kidneys became possible in the early 1960s through equip-
ment improvements and the invention of a connecting tube that obviated the need
to tap into a new artery and vein every time that a patient was hooked up to the
machine. This made long-term dialysis a practical proposition, but at the same time
it generated a host of nonmedical problems that have yet to be resolved.
Dr. Willem Kolff (center) with the prototype of a kidney dialysis machine intended for home
use. (Bettmann/Corbis)
When dialysis first became an accepted medical practice, the number of
patients who could potentially benefit from it far exceeded the number of avail-
able machines. It was therefore necessary to select some patients for dialysis and to
reject others. In order to make these choices, the nation’s pioneering dialysis insti-
tution, the Seattle Artificial Kidney Center, established an Admissions and Policy
Committee to screen applications and determine who would get dialyzed and who
would not. The committee was intended to reflect the community as a whole, being
initially comprised of a lawyer, a minister, a housewife, a labor leader, a government
official, a banker, and a surgeon, as well as two physician-advisors.8 The members
of the committee made their first selective cut by only accepting patients from
the state of Washington, and by eliminating children along with adults over the
age of 45. Having done this, the committee then applied a set of criteria that took
into account the personal characteristics of prospective patients. According to one
report, these included the “sex of patient, marital status, and number of dependents;
income; net worth; emotional stability, with regard to patient’s ability to accept the
treatment; educational background; nature of occupation; past performance and
future potential, and names of people who could serve as references.”9 As might
be expected, making life-or-death decisions on the basis of the presumed worth of
the patient generated a fair amount of indignation in some quarters. According
to two critics, a psychiatrist and a lawyer, the published accounts of the selection
criteria “paint a disturbing picture of the bourgeoisie sparing the bourgeoisie, of
the Seattle committee measuring persons in accordance with its own middle-class
suburban value system: scouts, Sunday school, Red Cross. This rules out creative
nonconformists, who rub the bourgeoisie the wrong way but who historically have
contributed so much to the making of America. The Pacific Northwest is no place
for a Henry David Thoreau with bad kidneys."10

Kidney dialysis has become a common procedure that has saved many lives while at the same time contributing to rising medical costs. (© Javier Larrea/age fotostock)
The problem of controlling access to a potentially life-saving technology was
mitigated as the number of machines increased and they were made easier to oper-
ate. It even became possible for a patient to routinely dialyze himself or herself at
home. But formidable cost problems remained. In 1970 a twice-weekly dialysis regimen cost
$3,000 to $5,000 and required start-up expenditures of $9,000 to $13,000. For most
patients these expenses produced grave financial pressures, and for some they were
impossible to meet. The problem was apparently solved in 1972 when the U.S.
Congress authorized payment for dialysis treatments through the federal govern-
ment’s Medicare insurance program.
The process whereby this policy was enacted illustrates the capricious way in
which technologies are sometimes supported. The government’s assumption of pay-
ments for dialysis treatments was in large measure the result of intense lobbying, an
effort that was made all the more effective by connecting an abstract ailment with
actual people. In one instance, the vice president of the National Association of
Patients on Hemodialysis and Transplantation not only testified but dialyzed him-
self in the presence of members of the House Ways and Means Committee.11 The
provision of Medicare payments for dialysis was only a small portion of a larger piece
of legislation; only 30 minutes of debate on the floor of the Senate took place, cul-
minating in a lopsided vote in support of the measure at a time when close to half of
the senators were absent. The conference committee of the House and the Senate
took only 10 minutes to discuss the measure, which was only slightly modified as a
result. Both houses subsequently accepted the conference committee’s report, and
the bill including the dialysis provision was signed into law by President Nixon two
weeks later.12
Payment for dialysis through the Medicare program addressed a real need, but
at considerable cost. Initial estimates of annual expenditures of $135 to $250 million
were far too low; by 2005 dialysis was costing the federal government more than
$17 billion a year.13 Of course, it is not the federal government that pays for this;
the costs are ultimately borne by individual taxpayers, the vast majority of whom
are not afflicted by end-stage renal disease. There is nothing inherently unrea-
sonable or unfair about this. A catastrophic organ failure could happen to any of
us, and we shouldn’t begrudge helping people who have been less fortunate than
ourselves. Still, questions remain about the equity of paying for dialysis and not for
other ailments. In 2005 dialysis patients comprised only 0.8 percent of Medicare
beneficiaries, yet their treatment accounted for almost 6 percent of total Medicare
payouts.14
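
The disproportion is easier to appreciate as a ratio, as this minimal calculation from the two percentages above shows:

# Dialysis patients within Medicare, 2005
share_of_beneficiaries = 0.008   # 0.8 percent of beneficiaries
share_of_payouts = 0.06          # almost 6 percent of payouts
ratio = share_of_payouts / share_of_beneficiaries
print(f"A dialysis patient costs about {ratio:.1f}x the average beneficiary")
# ~7.5x, which at more than $17 billion a year is real budgetary weight.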
Today, about a third of the patients receiving dialysis are over the age of 65.
As the population ages it can be expected that the demand for dialysis will increase
apace, putting further financial strains on the health-care system. This may not occur
in some countries where government insurance programs do not always cover dialysis
for patients with other serious medical problems. In Great Britain, the government-
run National Health Service always has operated under tight financial constraints.
Consequently, certain treatments have been restricted, dialysis included. During the
early years of dialysis treatment, patients over the age of 55 were rarely given the oppor-
tunity to receive dialysis because it was believed that their physical constitutions would
not allow them to survive and flourish under a regimen of daily dialysis. One British
physician candidly stated that people over the age of 55 were not suitable candidates, for
they were all “a bit crumbly.”15 This may be true in a statistical sense, but in fact there is
greater variation in overall levels of health among the elderly than there is in any other
age group. Using age as a means of limiting access to an expensive medical technology
is administratively convenient, but it dodges the issue of making choices on the basis of
more relevant criteria. In any event, in Great Britain, increasing numbers of people over
the age of 65 are now receiving dialysis, which now accounts for 1 percent of that country's health-care budget.16 Access to modern medical technologies is
not easily rationed, even when there is a strong commitment to cost containment.
Is the American policy of giving universal access to dialysis the correct one,
or is the former British policy more defensible when financial resources are lim-
ited? There are no easy answers. At the same time, the problems of distributive
justice posed by the use of artificial kidneys are only a preview of what will have
to be faced as new and even more expensive medical technologies become avail-
able and medicine gains the ability to successfully address hitherto untreatable
conditions.
Replacing Broken Hearts
The dilemmas engendered by the advance of medical technologies are further
illustrated by the prospect of using new technologies to counteract the number-
one killer in the United States, heart disease. In recent decades, many victims of
clogged arteries and weakened hearts have gained a new lease on life through bypass
surgery. This procedure is now routine in many hospitals, but once again, questions
of cost and benefits have to be confronted. Open-heart surgery became feasible in
the early 1960s with the development of devices that could take over the heart’s
functions during surgery. By 1970 the efficacy of these procedures was attested to
in several dozen papers appearing in medical journals. But only two of these papers
reported on controlled trials, studies that determined the efficacy of a procedure by
comparing outcomes for one group of patients who had received a treatment with
another group who had not. These two studies came to the opposite conclusion:
surgical procedures were ineffective for the treatment of heart disease. In the years
that followed, one particular procedure, coronary artery bypass grafting ("bypass surgery"), became the most common surgical intervention. In bypass surgery, a segment of a vein is removed from the leg or chest and spliced into one or more of the five coronary arteries that supply blood to the heart. In extreme
cases, all five arteries may receive this treatment. This procedure was effective in
reducing early mortality, but only for a limited amount of time and primarily for
patients with generally poor health conditions. Negative findings, however, did not
alter the incidence of bypass surgery; by the early 1980s, 100,000 of these procedures
were being performed every year at a cost of $2 billion. Meanwhile, alternative
approaches to heart disease based on medication, diet, and exercise received much
less attention.17
A similar story can be told of another treatment for cardiovascular problems,
percutaneous transluminal coronary angioplasty, commonly known as “balloon
angioplasty.” This procedure threads a tiny balloon into a coronary artery in order
to break up a blood clot that is obstructing the flow of blood to the heart. This
procedure has saved the lives of many men and women who would otherwise have
succumbed to a heart attack. But many of the two million angioplasties that are
performed each year in the United States are of dubious benefit. One study of
angioplasties performed on a sample of 828 Medicare patients found that only about
one-third of the patients benefited from the procedure. It was deemed completely
inappropriate for 14 percent, and it could not be determined whether or not the
remaining patients were helped by the procedure.18 Also, although angioplasty is
often used in lieu of a bypass operation and its attendant higher cost and greater
danger, the opposite effect has been observed; according to one study, greater avail-
ability of angioplasty has been associated with higher rates of bypass operations for
people over the age of 65.19
For many cardiac patients, even effective and appropriate angioplasties or
bypass operations are of no use because the heart has been so weakened as to be
barely functional and is in danger of failing completely. The transplantation of
another person’s heart offers one solution, but demand exceeds the supply. Back
in the mid-1960s, the implantation of artificial hearts seemed to offer a way out of
this impasse. At that time, a consulting group reporting to the federal government
assumed that the major technical impediments were well on the way toward solu-
tion, and that before too long many people would have a diseased heart replaced by
a mechanical one. According to this optimistic scenario, recipients of these hearts
would be able to return to the work force, adding $19 billion to the gross national
product over a 10-year period. It was even asserted that the taxes paid by artificial
heart recipients would more than offset the federal government’s expenses in sup-
porting the program.20
In the 1960s these hopes seemed close to realization. Artificial hearts began
to be used for brief periods of time while patients were awaiting a heart trans-
plant, and efforts to develop a permanent artificial heart accelerated. An apparent
breakthrough occurred in 1982 when the Jarvik-7 artificial heart was used for the
first time. Unfortunately, the first recipient of the device, a Seattle dentist named
Barney Clark, underwent great suffering as a result of the immaturity of this tech-
nology. Seizures, severe nosebleeds, pneumonia, kidney disease, gout, epididymitis,
and an intestinal ulcer followed the implantation.21 Clark’s postoperative existence
for 112 days could hardly justify the expense and pain of the operation. Subsequent
implants met with hardly any greater success. One patient lived for 620 days, but
during that span he suffered four strokes and a series of infections that severely
eroded his physical and mental capacities.22 Despite these setbacks and the bad
publicity that attended the use of the Jarvik-7, about 90 more patients received
the device in the years that followed. In 1990 the Food and Drug Administration
(FDA) banned the Jarvik-7 as a permanent replacement for a human heart because
most of the recipients lived for only a few months after its implantation. It is still
used, however, as a temporary “bridge” for critically ill patients awaiting the trans-
plant of a human heart.
Another type of implantable artificial heart was approved by the FDA in 2006.
Dubbed the AbioCor Implantable Replacement Heart, it is intended for cardiac
patients who are unlikely to live more than a month but are ineligible for a heart
transplant. Although an improvement over earlier efforts, the device has added only
a few months to the lives of its recipients, the current record being 17 months. Nor
does it confer anything approaching a normal life; in only one case has a patient
been discharged from the hospital and allowed to go home after receiving one.23
While artificial hearts cannot yet be considered a completely successful technology, another recent invention, the Left Ventricular Assist Device (LVAD), is now being used successfully to compensate for the deficiencies of damaged hearts. LVADs are not permanent fixes, but in some cases they can delay
the need for a heart transplant by many years. Several thousand of these devices
have been implanted in cardiac patients in several countries, and their potential use
is much greater. It has been estimated that every year there are 50,000 patients in
the United States who could benefit from the implantation of LVADs.
An AbioCor heart being implanted in a patient. (Jewish Hospital/Getty Images)
Artificial hearts and LVADs are the epitome of high-tech, high-cost medi-
cal technology. They can save lives, if only temporarily, but they come with
substantial price tags. It has been estimated that providing artificial hearts to the 12,000 patients willing to submit to the procedure would cost $1.3 billion by the patients' fifth year, assuming a per-patient cost of $100,000, and correspondingly more if, as is likely, initial costs proved higher. If all of the 32,000 patients between the
ages of 55 and 70 years with end-stage heart disease received an implant, annual
total expenditure would be $3.8 billion.24 Aggregate costs for the widespread use of
LVADs would be even higher, perhaps on the order of $10 billion every year, since
the number of patients eligible for LVADs would be higher than those eligible for
artificial hearts.25 Whatever the exact figures may be, extensive use of artificial
hearts and LVADs would add billions of dollars to a national medical bill that
already absorbs more than 17 percent of the national income of the United States.
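
The aggregate figures follow directly from the per-patient estimates, as the short reconstruction below shows; the suggestion that the higher implied per-patient figure reflects follow-on care is an inference, not a claim made by the source:

PER_PATIENT = 100_000   # estimated cost of an implant
print(f"12,000 recipients: ${12_000 * PER_PATIENT / 1e9:.1f} billion")  # $1.2B
# versus the cited $1.3 billion by year five, which presumably folds in
# follow-on care
print(f"implied cost per patient at 32,000 recipients: ${3.8e9 / 32_000:,.0f}")
# ~$118,750 each, consistent with the $3.8 billion annual total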
The widespread use of these technologies would engender some very dif-
ficult choices. The primary beneficiaries would be those who could afford them, either through insurance coverage or personal resources. Alternatively, some sort
of rationing system could be set up, using criteria such as the likelihood of the
patient’s long-term survival. This is clearly a distasteful prospect—only 21 percent
of Americans indicate a willingness to ration costly new medical technologies.26
And if this were to occur despite public resistance, what sort of criteria might be
used? Ominously, the criteria could resemble those invoked by the Seattle Admissions and Policy Committee five decades ago, including the "worth" of a prospective
recipient, that is, how much the person had contributed or was likely to contribute
to society. Whatever the criteria invoked, the decision to use an expensive medical
technology in a world of finite resources necessitates making decisions. In many
cases this can be akin to “playing God,” a role that few humans are eager to assume.
The selection or exclusion of recipients is not the only equity issue that the use
of expensive medical technologies pushes to the forefront. In a world of necessarily
limited resources, there is no escaping the fact that an expenditure in one area of
health and medicine means the foreclosing of an expenditure in another. The more
than $17 billion annual cost of dialysis or the several billion dollars that would
be required for the large-scale implantation of artificial hearts could be used to
produce significantly higher levels of overall well-being. To cite one example, some
of the funds that have been absorbed by an artificial heart program could instead
have been used for antitobacco education campaigns. Smoking is a major contribu-
tor to heart disease; consequently, even a 1 percent reduction in tobacco use would
produce benefits considerably in excess of those offered by artificial hearts.27
The use of high-tech medicine is understandable; physicians are reluctant to
take on the responsibility of deciding when to withhold treatment, and many elderly
patients do in fact benefit from advanced medical technologies. But whatever the out-
come there is no getting around the fact that their use contributes to the sizable costs
incurred during the last year of life, which account for 26 percent of Medicare expen-
ditures and 25 percent of Medicaid expenditures.28 For some critics, the treatment of
aged patients with advanced medical technologies is a misallocation of resources; as
they see it, much of the money spent on expensive care would be better applied to the
elimination of poverty among both the elderly and nonelderly, the improvement of nutritional awareness, or the promotion of after-school fitness programs.
Diagnostic Technologies
The escalation of medical costs is one of the most problematic consequences of
the advance of medical technologies, but hardly the only one. A variety of diag-
nostic technologies have substantially advanced physicians’ ability to determine
the source of medical problems. At the same time, however, they have altered the
nature of medical practice, sometimes with unfortunate results.
For centuries, medical diagnosis had been based on the observation of exter-
nal symptoms, patients’ narratives of their illness, and the application of dubious
theories that explained illness in terms of such things as imbalances in four bodily
“humors.” A major reorientation began in the 1820s when doctors began to use a
simple tube to hear heartbeat and other sounds within the patient’s chest. Dubbed
the “stethoscope,” the new instrument allowed more accurate diagnosis by ampli-
fying and focusing the sounds produced by the circulatory system. Within a few
years, other instruments for apprehending hitherto hidden parts of the body were
invented, such as the ophthalmoscope for observing the interior of the eye and the
laryngoscope for the throat. A series of sophisticated diagnostic technologies such as
X-ray machines and electrocardiographs followed. By the beginning of the twenti-
eth century, physicians were using instruments for measuring everything from blood
pressure to the electronic currents generated by the nervous system. The accuracy
of diagnosis was further advanced by the development of laboratory tests for every-
thing from cholesterol levels to the presence of the organism that causes syphilis.
Rene Laennec holds an example of his invention, the stethoscope, while using a more traditional method for conducting an auditory examination of a patient’s chest cavity. (Superstock)
The use of diagnostic tests exemplifies the tendency of technologies to create
their own needs. Many of the tests routinely performed today seem to be unneces-
sary; according to one study of a teaching hospital, 47 percent of tests performed
could have been eliminated with no evident loss in the quality of patient care.29
Fetal heart monitors have had few discernible benefits; their main consequence
seems to have been an increase in the number of deliveries using a cesarean
section.30 On the other hand, the increase in lab tests is not solely the result of
the availability of these tests. Some of the increase is directly attributable to the
practice of “defensive medicine.” Doctors are justifiably nervous about the threat
of malpractice suits, so the safest course seems to be the overprescription of tests in
order to avoid accusations of not having used every available diagnostic tool.
The overuse of tests can produce more than financial pain. According to a past
president of the Association of American Physicians, “As our interventions have
become more searching, they have also become more costly and more hazardous.
Thus, today it is not unusual to find a fragile elder who walked into the hospital,
[and became] slightly confused, dehydrated, and somewhat the worse for wear on
the third hospital day because his first forty-eight hours in the hospital were spent
undergoing a staggering series of exhausting diagnostic studies in various laborato-
ries or in the radiology suite.”31
It can be reasonably argued that the physical and financial discomfort produced
by diagnostic tests is outweighed by the greatly expanded ability to understand and
treat illnesses. But another, more subtle consequence of sophisticated diagnostic
technologies is that they have encouraged the treatment of specific disorders rather
than human patients. In many cases a physician, by examining an X-ray plate or
a microscope slide, can produce an accurate diagnosis even when the patient is
not physically present; indeed, the physician does not even have to be personally
acquainted with the patient.
In going about their diagnostic work, few physicians today trouble themselves
with obtaining lengthy narratives from their patients regarding their ail-
ments. The physician’s personal diagnostic skills are often slighted in favor of more
“objective,” scientifically based knowledge attained through the use of sophisticated
tests and instruments. This is a process that began with the stethoscope; although
a very simple device, it changed the nature of the interaction between doctors and
their patients. As Stanley Joel Reiser explains, the use of the stethoscope “helped
to create an objective physician, who could move away from involvement with
the patient’s experiences and sensations, to a more detached relation, less with the
patient but more with the sounds from within the body.”32
A similar story could be told of the sphygmomanometer, the ubiquitous instru-
ment used for the measurement of blood pressure. After its introduction in the
late nineteenth century, physiologists praised the instrument’s capacity to produce
objective, quantitative, and precise data. Many practicing physicians saw things dif-
ferently, however, fearing that the sphygmomanometer would “intervene between
patients and doctors, dehumanize the practice of medicine, and replace the delicate
art of medicine with mere technical proficiency.”33
The use of increasingly sophisticated diagnostic technologies illustrates the
mixed blessings of technological advances in medicine. On the positive side, far
more data can be gathered and analyzed, allowing physicians to identify illnesses
that might otherwise have been misdiagnosed or to catch a disease in its early
stage when it can be more easily treated. Standard tests also help to promote a
standardized medical practice, removing the biases and blind spots of individual
practitioners. Consequently, medicine is more “scientific” and less dependent on
variable personal skills. At the same time, however, other kinds of
medical skills are in danger of being lost. Not every aspect of a functioning organ-
ism can be reduced to “objective” data. Treating illnesses often requires more than
the ability to process information of this sort and use it as the basis of treatment.
Judgment, experience, and even “intuition” can be of vast importance for a suc-
cessful medical practice. Ideally, the use of sophisticated diagnostic technologies
frees doctors from tedious work, leaving them with more time for personalized
interactions with patients and more accurate diagnoses. Unfortunately, this is not
what always happens. A fixation on the use of modern diagnostic technologies can
easily result in the denigration of diagnostic skills based on more personal relation-
ships with patients. Consequently, a focus on “objective” data to the exclusion of
all else can lead a physician to miss some crucial clues regarding the source of a
patient’s illness.
This has a significance beyond diagnosis. Sick people usually have something
physically wrong with them, but organic disorders may be only part of the problem.
Illnesses often have psychological components as well, and the successful treatment
of these requires more than the application of medical techniques; communication,
the ability to build a bond of trust with the patient, and some measure of empathy
are also necessary. An overreliance on “objective” data can obscure and even drive
off these qualities, to the detriment of medical care as a whole. As Reiser notes,
a physician must resist the tendency to place an excessive reliance on diagnostic
technologies and the information they provide: “Accuracy, efficiency, and security
are purchased at a high price when that price is impersonal medical care and under-
mining the physician’s belief in his own medical powers.”34
Medical Technologies and Medical Ethics
While contemporary medical training has emphasized the objective, “scientific”
dimension of medical care, it also has instilled the belief that the highest duty
for members of the medical profession is to preserve life. On the face of it, it is
hard to take issue with this principle. At the same time, however, advances in
medical technology have rendered the creation and preservation of life increasingly
problematic. At all stages of life, medical practitioners along with their patients
and their patients’ families may have to grapple with major ethical dilemmas
that are closely related to the application of contemporary medical technologies.
These dilemmas have given rise to the field of medical ethics, a discipline that has
emerged in recent decades in an effort to systematically address the issues that have
attended advances in medical technology.35
New Ways of Making and Sustaining Babies
Technological advances have created ethical issues that arise long before birth. One
important threshold was crossed in 1978 with the birth, in England, of the first baby
to have been conceived through in-vitro fertilization. Since that time, hundreds of
thousands of babies have been born worldwide through the use of this procedure,
although the exact number is difficult to determine. In-vitro fertilization entails
bringing sperm and egg together in a petri dish (the term “in vitro” literally means
“in glass”). The sperm can be supplied by the husband or partner of the woman
whose eggs are to be fertilized, or it can be provided by a donor, anonymous or oth-
erwise. After the fertilized egg is cultured for three to four days and develops into
an embryo with at least four cells, the embryo is transplanted into the uterus of the
egg donor or a surrogate. Because each procedure entails the fertilization of more
than one egg, in-vitro fertilization results in the production of many surplus embryos
every year. These can be kept viable for long periods of time, but many are simply
discarded. Alternatively, the embryos can be a source of stem cells, the basis for an
emerging technology that will be discussed in the following chapter.
In-vitro fertilization is most commonly used to circumvent blockages of a
woman’s fallopian tubes or to compensate for a man’s insufficient sperm production. A different repro-
ductive issue occurs when a normal pregnancy is impossible or undesired. Under
these circumstances a surrogate mother can be implanted with an embryo that has
resulted from the fertilization of another woman’s egg. Alternatively, a surrogate
mother can be artificially inseminated with sperm that fertilizes her own egg. The
latter process led to a court battle in the 1980s when a surrogate mother refused to
give the baby she had carried through pregnancy to the couple who had contracted
for her services. Although the case was eventually resolved largely in the favor of
the couple who had hired the surrogate, the present legal status of surrogate moth-
erhood reflects the absence of an ethical consensus regarding this practice. It is
unregulated in some states, regulated in others (often by barring cash payments to
the surrogate), and banned outright in others.36
More generally, new reproductive technologies have muddled accepted notions
about a fundamental constituent of human society, the family. Whether through
artificial insemination, in-vitro fertilization, or surrogate motherhood, the ability to
bypass the usual methods of procreating children has called into question the very
definition of a family. It is now at least possible for a child to have several parents:
two fathers (the sperm donor and the man taking on the conventional social role
of father) and three mothers (an egg donor, a surrogate to carry the embryo through
pregnancy, and the woman who ends up occupying the social role of mother).
While controversy still surrounds nontraditional reproductive methods, a dif-
ferent set of issues comes to the fore with babies who are born
prematurely. On the one hand, the ability to preserve the lives of prematurely born
infants has been all to the good, as many “preemies” have gone on to lead healthy
and productive lives as a result of these interventions. Unfortunately, some prema-
turely born infants are afflicted with major medical problems that result in lack of
function, continual pain, and early death. As with the end-of-life issues about to
be discussed, the value of preserving the lives of infants destined to a lifetime of
pain and sickness can be debated at considerable length. And, as is the case with
many modern medical technologies, it is at least open to question if the hundreds
of thousands of dollars sometimes spent on preserving the life of one severely pre-
mature infant might better have been used for prenatal care, which continues to be
inadequate for many expectant mothers.
Some of the medical problems that potentially afflict all newborns, premature
or otherwise, can be forestalled through the use of prenatal and postnatal testing.
Many of these procedures do not entail ethical or financial issues. For example, the
genetically based illness phenylketonuria (PKU) causes seizures, mental retardation,
and hyperactivity by depriving the body of the enzyme needed to metabolize the
amino acid phenylalanine. It can be successfully treated through dietary restrictions, but only when
there is an awareness that the disorder is present. To this end, newborns are cus-
tomarily screened for possible PKU through an easily administered blood test, and
a potential individual and social problem is averted.
Other tests are more problematic and give rise to a number of ethical questions.
The targets of these tests are the genetic abnormalities associated with particular ill-
nesses. From the late 1960s onward, amniocentesis (testing of the amniotic fluid in
the uterus) has allowed the detection of genetically linked abnormalities such as
Down syndrome and spina bifida. Unlike PKU, however, treatment of these condi-
tions is difficult or nonexistent. Should a serious disability of this sort be discovered,
prospective parents have to decide if the fetus should be aborted or be allowed
to survive, a difficult decision for many. Amniocentesis also makes it possible to
determine the sex of a fetus, thereby bringing another ethical issue to the fore: Is
the desire for a boy or girl a sufficient basis for aborting a fetus of the “wrong” sex?
The decision to abort a medically abnormal fetus can be circumvented
through the use of in-vitro fertilization in conjunction with a process known as
pre-implantation genetic diagnosis (PGD). This is done by analyzing the DNA
of one cell that has been extracted from an embryo to determine if there are
genetic abnormalities associated with diseases such as cystic fibrosis and
Tay-Sachs disease. Only an embryo without such an abnormality would then be transferred to
the uterus. PGD obviates the need to abort an abnormal fetus, but the disposal of
embryos is still unacceptable to those who believe that life begins at the moment
of conception. Ethical issues of this sort will become more common in the near
future as the ability to detect potential problems increases. Further complicating
matters will be the emergence of in-utero tests that may not be definitive, provid-
ing only statistical probabilities of future disorders. What might expectant parents
decide if a test indicates that there is a 50 percent chance that their child will
be stricken with cancer before the age of 20? The rapidly developing science of
genetics will surely raise many issues of this sort, some of which will be explored
in the next chapter.
When Does Life End? When Should It?
While medical technologies have raised a host of ethical issues regarding how life is
to be created, they also have been the source of new ethical dilemmas that emerge at
the other end of the life span. One centers on the very conception of when life has
become unsustainable. In the not-too-distant past, the failure of one or more vital
organs was tantamount to death. Today, however, organ failure and even multiple
organ failures can be circumvented by devices such as feeding tubes, respirators, and
dialysis machines. Even a major and irreversible loss of brain function can no longer
be taken to be an unequivocal indication of death. The issue of determining when
death occurs received a great deal of national attention from 1998 onward when
family members took opposing sides over the removal of a feeding tube that was
sustaining a 34-year-old woman named Terri Schiavo. Although she had been in a
“persistent vegetative state” for more than eight years, her parents strongly resisted
her husband’s efforts to terminate their daughter’s life through the removal of the
tube. What had been a family matter was then thrust into the political arena when
it was taken over by politicians with opposing values about the prolongation of life
through artificial means. In 2005, after many court hearings, legislation that was
overturned by the Florida Supreme Court, political posturing, and a great deal of acrimony,
the tube was removed, and Terri Schiavo died at the age of 41.
A feeding tube is not a particularly sophisticated item of medical technology, but
in some circumstances it helps to maintain organic functions for an indefinite period
of time. Whether or not the maintenance of these bodily processes is the same thing
as preserving a life lies at the center of the ethical debate over the use of modern medi-
cal technologies for this purpose. This issue is not easily resolved, and it will become
more prominent in the years to come as emerging technologies make it possible to
preserve organic functions for longer periods of time and under increasingly difficult
circumstances. At the same time, end-of-life issues are becoming increasingly salient
as the elderly comprise a large and growing portion of the total population of the
world’s industrially developed nations. As with costly medical technologies in general,
measures to extend the lives of elderly patients are in competition with alternative
uses of scarce medical resources. Economic calculations cannot substitute for ethically
based judgments, but both will have to be taken into account as advanced medical
technologies solve one set of problems while giving rise to others.
Halfway Technologies
The dilemmas presented here demonstrate the decidedly mixed blessings bestowed
by medical technologies that are only palliative. A kidney dialysis machine does not
cure the disease that led to kidney failure in the first place, just as a heart bypass
operation comes into play only after the damage has been done. Many of the appar-
ent breakthroughs of recent years are nowhere near as efficacious and cost-effective
as the dramatic advances of the past. For example, relatively inexpensive vaccina-
tion campaigns completely eradicated smallpox as a human disease, significantly
contributing to longevity and a better quality of life. In contrast, despite decades of
intensive efforts, we have made only marginal progress in overcoming cancer, the
second-largest cause of death in the industrial world. Many of today’s technological
advances in medicine have extended life spans and eased pain but have done little
to address the causes of disorders themselves.
For a patient suffering from organ failure, the availability of dialysis or a heart
transplant can be the difference between life and death. But for the society as a
whole, the escalating costs of medical care driven by these “halfway technologies”
constitute a formidable problem. No one is comfortable with putting a dollar value
on a human life, but the years to come may see the emergence of more explicit efforts
to balance the costs of new technologies with their benefit to society as a whole. Far
from offering across-the-board advantages, the development of new medical tech-
nologies will force us to grapple with increasingly difficult choices concerning their
application. Financial considerations aside, new and emerging medical technologies
will also intensify existing ethical quandaries while creating some new ones as well.
As we shall see in the next chapter, the rapid development of genetic technologies
will allow the successful treatment of many illnesses and infirmities, and may allow
the complete eradication of some others. But to repeat a now familiar theme, these
advances will give rise to a new set of issues that will demand our attention.
Questions for Discussion
1. As noted in this chapter, about a quarter of Medicare and Medicaid expenditures
are incurred during the last year of life. Given the increasing financial burdens
of both programs, should anything be done about this situation? If so, what?
2. When the demand for a new medical technology exceeds the supply, what
should be used to determine who gets it? A lottery? The ability to pay? The
“merit” of the recipient? Might it be better to limit the development of new
technologies in order to forestall the need to make these choices?
3. Attempts at developing devices to completely replace human hearts have not
been successful, but many technologies require a long development period
before they are successfully employed. Should R&D for artificial heart tech-
nologies continue? If so, how should it be financed? Are there some ongoing
medical programs that are less deserving of financial support than the develop-
ment of an artificial heart?
4. Who should make the decision to terminate life support when a person is
in a permanent vegetative state and has left no instructions concerning the
indefinite prolongation of his or her life? Should government officials, elected
or otherwise, have a role in making this decision?
5. Some medical procedures can be characterized as “halfway technologies,”
while others are at least close to being completely effective. What accounts for
the difference? What “halfway technologies” of the past have become much
more effective today? Why has this advance occurred?
Notes
1. Lewis Thomas, The Medusa and the Snail: More Notes of a Biology Watcher (New York:
Bantam Books, 1980), p. 133.
2. Thomas McKeown, The Role of Medicine: Dream, Mirage, or Nemesis? (Princeton, NJ:
Princeton University Press, 1979).
3. John E. Wennberg et al., “Tracking the Care of Patients with Severe Chronic Illness” (April 2008), accessed on July 8, 2008 at http://www.dartmouthatlas.org/atlases/2008_Atlas_Exec_Summ.
4. Gregory L. Weiss and Lynne E. Lonnquist, The Sociology of Health, Healing, and Illness, 6th ed. (Upper Saddle River, NJ: Pearson Prentice Hall, 2009), p. 300.
5. Louise B. Russell, Technology in Hospitals: Medical Advances and Their Diffusion (Washington,
DC: The Brookings Institution, 1979), p. 3.
6. Congressional Budget Office, “Technological Change and the Growth of Health Care Spending” (January 2008), accessed on February 23, 2012 at http://www.cbo.gov/sites/default/files/cbofiles/ftpdocs/89xx/doc8947/01-31-techhealth.
7. Adapted from Bryan Jennett, High-Technology Medicine: Burdens and Benefits (Oxford:
Oxford University Press, 1986), p. 174.
8. Renee C. Fox and Judith P. Swazey, The Courage to Fail: A Social View of Organ
Transplants and Dialysis (Chicago: University of Chicago Press, 1978), p. 230.
9. Shana Alexander, “They Decide Who Lives, Who Dies,” Life 53 (9 November 1962),
quoted in Ibid., p. 231.
10. Fox and Swazey, The Courage to Fail, p. 233.
11. Ibid., p. 348.
12. Ibid., p. 349.
13. United States Renal Data System, “Total Medicare Expenditures by Modality,” accessed on July 9, 2008 at http://www.usrds.org/2007/slides/html/00a_precis_07.swf, p. 46.
14. W. Noel Keyes, Bioethical and Evolutionary Approaches to Medicine and the Law (Chicago: American Bar Association, 2007), p. 685.
15. Henry J. Aaron and William B. Schwartz, The Painful Prescription: Rationing Health Care
(Washington, DC: The Brookings Institution, 1984), p. 35.
16. “Costs of Dialysis for Elderly People in the UK” (May 14, 2002), accessed on February 23, 2012 at http://ndt.oxfordjournals.org/content/18/10/2122.full.
17. Margaret L. Eaton and Donald Kennedy, Innovation in Medical Technology: Ethical Issues
and Challenges (Baltimore: Johns Hopkins University Press, 2007), pp. 16–20.
18. Shannon Brownlee, “The Perils of Prevention,” New York Times Magazine (March 19, 2003), accessed on July 7, 2008 at http://www.newamerica.net/publications/articles/2003/the_perils_of_prevention.
19. Thomas Bodenheimer, “High and Rising Health Care Costs. Part 2: Technologic
Innovation,” Annals of Internal Medicine 142, 11 (June 2005).
20. Diana B. Dutton, Worse Than the Disease: Pitfalls of Medical Progress (Cambridge:
Cambridge University Press, 1988), p. 99.
21. Ibid., p. 118.
22. Gideon Gil, “The Artificial Heart Juggernaut,” Hastings Center Report 19, 2 (March/
April 1989): 24.
23. U.S. Food and Drug Administration, “FDA Approves First Totally Implanted Permanent
Artificial Heart for Humanitarian Uses” (September 5, 2006) accessed on July 8, 2008 at
http://www.fda.gov/bbs/topics/NEWS/2006/NEW01443.html.
24. National Center for Biotechnology Information, “The Artificial Heart: Costs, Risks, and Benefits—An Update,” accessed on July 8, 2008 at http://www.ncbi.nlm.nih.gov/pubmed/10311835.
25. Patrick M. McCarthy, Richard Lamm, and Robert M. Sade, “Medical Ethics Collide
with Public Policy: LVAD for a Patient with Leukemia,” Annals of Thoracic Surgery 80,
3 (September 2005): 793–98.
26. Howard D. Schwartz, “Rationing and the Ideology of Exclusion,” in Howard D. Schwartz (Ed.), Dominant Issues in Medical Sociology, 3rd ed. (New York: McGraw-Hill, 1994), p. 417.
27. Dutton, Worse Than the Disease, p. 300.
28. Donald R. Hoover, Stephen Crystal, Rizie Kumar, Usha Sambamoorthi, and Joel C. Cantor, “Medical Expenditures During the Last Year of Life: Findings from the 1992–1996 Medicare Current Beneficiary Survey,” Health Services Research (May 2003), accessed on February 23, 2012 at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1464043/.
29. U.S. Congress, Office of Technology Assessment, “The Changing Health Care System,” in Schwartz, Dominant Issues, op. cit., p. 286.
30. Alexander Friedman, “Why Do Doctors Cling to Continuous Fetal Heart Monitoring?” (2011), accessed on February 23, 2012 at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1464043/.
31. David E. Rogers, “On Technologic Restraint,” Archives of Internal Medicine 135 (1975),
p. 1395, cited in Stanley Joel Reiser, Medicine and the Reign of Technology (Cambridge:
Cambridge University Press, 1978), p. 161.
32. Ibid., p. 38.
33. Hughes Evans, “Losing Touch: The Controversy over the Introduction of Blood
Pressure Instruments into Medicine,” Technology and Culture 34, 4 (October 1993): 803.
34. Reiser, Medicine and the Reign of Technology, p. 231.
35. Richard Sherlock, “Medical Ethics,” in Carl Mitcham, Ed., The Encyclopedia of Science, Technology, and Ethics (Detroit: Thomson Gale, 2005), p. 1184.
36. Weiss and Lonnquist, The Sociology of Health, p. 369.
Chapter Eight
Genetic Technologies
As was noted in the previous chapter, many of the dilemmas that surround the use
of medical procedures such as dialysis and heart transplantation are a consequence
of their status as “halfway technologies.” They offer the prospect of survival and
some relief from a potentially fatal disease, but they do not cure it. The practice
of medicine is badly in need of technologies that are more than palliatives. This
is not a vain hope, for we may be on the threshold of medical advances equal to
vaccination or antiseptic surgery. In recent years, much has been learned about the
structure and mechanisms of cellular reproduction, giving rise to the prospect of
eventual cures for a host of ailments. Yet when we consider the possible impact of
technological breakthroughs in this area, we are confronted with a familiar theme:
every advance is both a solution and a problem.
The Genetic Fix
The fundamental scientific insight underlying many emerging medical technologies
is that the growth and reproductive mechanisms of each cell are governed by the
basic unit of heredity, the gene. Humans have about 20,000 genes arrayed along
the 46 chromosomes found in the nuclei of every cell, other than red blood cells
(which have no nuclei) and sperm and egg cells (whose nuclei contain only 23 chromosomes).
The human genome consists of long chains of deoxyribonucleic acid (DNA), which
collectively contain the more than 3 billion letters that make up the human genetic
code. These letters (also called “bases” or “base pairs”) are in effect instructions for
making proteins, the essential building blocks of life. Some of the DNA sequences
that make up a genetic code may be faulty and can give rise to a variety of disorders.
Genetic screening or actually intervening in an organism’s genetic code offers
revolutionary new ways of diagnosing, curing, and even preventing the emergence
of many disorders.
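To make the metaphor of genetic “letters” concrete, the short sketch below reads a DNA sequence three letters at a time and looks up the corresponding amino acids. It is an illustrative toy, not a tool used in genetics research: only four of the 64 possible codons are included, and it reads the DNA coding strand directly rather than the messenger RNA used in actual protein synthesis.

```python
# Toy illustration: triplets of DNA "letters" (codons) specify amino acids.
# Only a handful of the 64 codons are listed here, for brevity.
CODON_TABLE = {"ATG": "Met", "TTT": "Phe", "GGC": "Gly", "TAA": "STOP"}

def translate(dna: str) -> list[str]:
    """Read a DNA coding sequence three letters at a time into amino acids."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "STOP":  # a stop codon ends the protein chain
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTGGCTAA"))  # ['Met', 'Phe', 'Gly']
```

A change to a single letter in such a sequence can swap one amino acid for another, which is one way the faulty sequences mentioned above give rise to disorders.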
Gaining the ability to successfully treat genetically based disorders has been
the work of many decades. People had long been aware that parental traits often
reappeared in their offspring, but they could only speculate on the mechanism that
made this happen. An important step forward came with experiments conducted
during the latter half of the nineteenth century by Gregor Mendel. His systematic
study of the propagation of peas revealed rules of inheritance through which traits
were passed from generation to generation, although the biological processes that
drove them were still unclear.
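Mendel’s rules lend themselves to a simple enumeration. The sketch below, a hypothetical illustration rather than anything from the text, crosses two pea plants that each carry one dominant allele (A) and one recessive allele (a), reproducing the classic 1:2:1 genotype and 3:1 phenotype ratios he observed.

```python
from collections import Counter
from itertools import product

def monohybrid_cross(parent1: str, parent2: str) -> Counter:
    """Enumerate the equally likely offspring genotypes of a one-gene cross.

    Each parent passes on one of its two alleles; this is Mendel's
    law of segregation.
    """
    offspring = Counter()
    for allele1, allele2 in product(parent1, parent2):
        # Sort so that "Aa" and "aA" count as the same genotype.
        offspring["".join(sorted((allele1, allele2)))] += 1
    return offspring

genotypes = monohybrid_cross("Aa", "Aa")
print(genotypes)  # Counter({'Aa': 2, 'AA': 1, 'aa': 1}): a 1:2:1 ratio

# With A dominant, AA and Aa plants show the dominant trait: 3:1 phenotypes.
dominant = genotypes["AA"] + genotypes["Aa"]
print(f"{dominant}:{genotypes['aa']}")  # 3:1
```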
While Mendel was conducting his experiments, other scientists had been
taking advantage of improvements in microscopy to obtain a better understanding
of the constituents of plant and animal cells. First came the discovery of the cell
nucleus, followed by the use of improved staining techniques to reveal the presence
of chromosomes in cell nuclei. During the first decade of the twentieth century,
Thomas Hunt Morgan found that these chromosomes were the site of the key
agent of heredity, the gene. Morgan and his associates performed experiments with
fruit flies that helped them to unravel the process of trait transmission, but little
was known then about its molecular basis. The puzzle began to be solved in the
1940s when DNA was identified as the key constituent of
genes. This set the stage for one of the most important scientific discoveries of the
twentieth century. In 1953 James Watson and Francis Crick determined that the
molecular structure of DNA consists of bases attached to pairs of sugar-phosphate
backbones that form intertwined spirals, the now-famous double helix. This
discovery laid the foundation for rapidly expanding insights into the functioning
and reproduction of cells.
Another milestone was reached in the 1970s when researchers learned how to
separate and isolate portions of DNA. This allowed the development of procedures
for transferring genetic material from one organism to another. As has been the case
with a great deal of contemporary scientific research, new technologies have created
greatly expanded opportunities for advancing scientific knowledge. In the case of
genetics, the development of polymerase chain reaction machines gave researchers
the ability to obtain and analyze virtually unlimited amounts of DNA, while at the
same time identifying mutations associated with particular diseases.
Continued advances in laboratory equipment and techniques made possible the
discovery of the distinctive sequence of bases that characterized the DNA of a few
individual genes. This gave rise to the possibility of determining the sequences for
all of a particular organism’s genes. By the mid-1980s the 150,000 bases that made
up the genome of a virus had been sequenced, but the possibility of doing the same
thing for the more than 3 billion bases of the human genome seemed remote at best.
But by the early 1990s, new technologies in the form of automated sequencers and
mapping machines, along with faster and more powerful computers, were propelling
advances in basic science. The eventual determination of the human genetic map
had become a distinct possibility.
Discovering Genes and Patenting Them
Up to this point the sequencing effort had been conducted by researchers working
as part of the Human Genome Project, which had been launched in 1990 and
coordinated by the U.S. Department of Energy and the National Institutes of
Health. By the end of the decade a private company, Celera Genomics, also was
engaged in sequencing the human genome. In early 2001 both enterprises
separately published “rough drafts” of the human genome that were in general
agreement with each other. The complete sequence was published in 2003, exactly
50 years after the discovery of the molecular structure of DNA.1
Although Celera Genomics made a major contribution to human genetics,
its involvement in genetic research has highlighted some of the issues raised by
the coupling of scientific discoveries with commercial motivations. Even before
the mapping of the human genome, the potential monetary value of genetically
engineered organisms was underscored when in 1980 the U.S. Supreme Court in
a 5-4 decision ruled that a human-made organism (in this case a bacterium that
could digest oil) was entitled to full patent protection.2 Today, patent protection has
been extended to genes themselves. To be awarded a patent on a gene, an applicant
must satisfy the basic criteria mentioned in Chapter 5: novelty, usefulness, and
nonobviousness. A gene (or even a fragment of a gene) can be patented when the
applicant is the first to identify that gene, show what it does, and demonstrate that
no one else has made these discoveries. The U.S. Patent Office also requires that
patented genetic material exhibit “specific and substantial utility that is
credible.”3 For example, a patent may be awarded for the discovery of a gene that
is implicated in certain kinds of cancer. The holder of the patent then would have
an exclusive right to market laboratory tests, diagnostic devices, and therapeutic
products based on the knowledge of that particular gene.
The rationale for allowing genes to be patented is the same as for the patenting
of mechanical devices or new chemical compounds, that the possibility of obtaining
a patent serves as a powerful stimulus to research and invention. At the same
time, however, some critics have questioned the wisdom of allowing genes to be
patented. The general counterargument—that the costs of the monopolies engendered
by patents outweigh their benefits—is underscored by several other objections tied
specifically to the patenting of genes. At the most basic level, the very concept of
patenting a fundamental constituent of life remains problematic to some, the 1980
Supreme Court decision notwithstanding. It also may be argued that the patenting
of genes and their components can retard scientific advance by inhibiting the free
flow of information. Patent documents may provide essential information, but their
use usually requires the payment of permission fees to the holder of the patent.
Moreover, genetic research often requires numerous steps and stages, each of which
may be controlled by a different patent. Once again, hefty fees may be required for
the use of the information covered by a patent. Even the process of simply finding
out who owns what may entail substantial costs. Critics have also pointed to the
practice of “patent stacking,” whereby a single gene sequence is covered by separate
patents covering key elements of the gene, such as expressed sequence tags and
single nucleotide polymorphisms, along with the gene itself.
The number of genome-related patents now runs into the millions, and
genetically based technologies are a very big business. Commercial entities such
as pharmaceutical firms have profited handsomely from gene patents, but so have
public and not-for-profit institutions such as universities and research organizations.
The 3,933 U.S. patents held by 195 universities and other research institutions
brought in $1.3 billion in licensing income in 2003 alone.4 Thousands of gene-
related patent applications are filed every year, and a great amount of revenue may
hinge on the granting or refusal of a single application. As has happened throughout
the history of the patent system, the awarding of a single patent may give rise to
a lengthy and expensive courtroom battle over its validity. More significantly for
society as a whole, the practice of patenting genes and their components will likely
be challenged in the years to come. In the past, the U.S. Congress has prohibited
awarding of a patent when it was deemed contrary to the public interest. For much
the same reason, the American Medical Association has opposed the patenting
of medical and surgical procedures. The propriety of patenting genes is still up
for debate, and the issue may need to be reconsidered by all three branches of the
United States government at some point in the future.
Bioengineering on the Farm
A less dramatic but no less controversial application of genetics has been the
development and cultivation of genetically modified (GM) crops. Genetic
modification is nothing new, of course; humans have been using selective breeding
for millennia in order to improve on naturally occurring plants and animals. But
selective breeding can be a hit-and-miss affair that entails thousands of genes and
may take many generations to achieve an intended result. Genetic engineering
makes it possible to regularize and accelerate the process by transferring a single
gene or set of genes in order to obtain a desired characteristic.
In some quarters at least, genetic modification of food crops has become
especially attractive as a way of dealing with present and future food shortages.
Once confined to a few regions, food shortages may be a growing problem in the
years to come. The gains from the Green Revolution that began in the 1960s have
largely played out, and worldwide grain production has leveled off or risen only
slightly in recent years.5 Pressures on food production will likely become more
severe as the world’s population grows and as increasing affluence triggers a growing
appetite for meat, which demands considerably more grain and water per capita
than does a largely vegetarian diet.
Most genetic interventions do not increase crop yields by making individual
plants more prolific. Rather, the costs of cultivation may be lowered through new
means of controlling weeds, pests, and diseases. It is now a common practice to sow
fields with crop varieties that have been genetically modified to have increased
resistance to herbicides, so that the application of weed killers will not affect the food crop.
Use of these crops may have the added benefit of allowing the application of
safer herbicides that need to be used less frequently. It is also possible to eliminate
or reduce the use of pesticides through the cultivation of corn and cotton varieties
that have been modified to produce a pesticide derived from Bacillus thuringiensis, a
naturally occurring soil bacterium.6
Genetic modification can also be used to increase the nutritional value of foods.
One notable example has been the development of strains of rice that prevent
certain kinds of blindness by supplying increased quantities of vitamin A. Another
example of a useful genetic modification is the creation of crop varieties capable of
tolerating long periods of drought. Somewhere over the horizon is the production
of “edible vaccines”—food crops that prevent certain diseases when they are eaten.
Genetically engineered crops also may be used to promote “conservation tillage,”
a mode of energy-efficient cultivation that conserves the soil and the organisms
in it by eliminating or sharply reducing the need for regular plowing. On the far
horizon are possible technologies that promote more efficient photosynthesis,
drought resistance, nitrogen fixation, and other qualities that have the potential to
significantly increase food production.7
Genetically modified crops now constitute a substantial portion of the global
food economy; by 2006, 10.3 million farmers were cultivating genetically modified
food crops on 252 million acres of farmland.8 The United States is by far the
largest user of genetically modified crops, with 57.7 million hectares sown in 2007.9
Argentina and Brazil at 19.1 and 15 million hectares, respectively, came in a distant
second and third. At this point, only four types of genetically modified crops are
cultivated in significant numbers—soybeans, canola, cotton, and corn—but they
dominate the market in the United States. In 2005 GM crops accounted for
93 percent of the soybeans, 82 percent of the canola, 79 percent of the cotton, and
52 percent of the corn.10 With figures like these, it is apparent that the consumption
of food based on GM crops is hard to avoid in the United States. At least 60 percent
of processed foods found in American supermarkets contain one or more GM
ingredients, usually corn, soy, or canola.11
Although GM crops are confined to only a few countries, their widespread use
in the United States and a few other countries has generated substantial revenues.
A $280 million business in 1996, sales of genetically modified seeds increased
17-fold, to $4.7 billion in 2004, and now account for a quarter of the revenues of
the commercial seed market. Most of these revenues have gone to the few firms that
dominate a highly oligopolistic industry.
Enthusiasm for GM crops is hardly universal, which helps explain why more than half
of the world’s genetically modified crops are grown in the United States.13 Some European
countries have put a moratorium on their use, and opposition to such crops can be
found throughout the world. At the most fundamental level, critics of GM crops
share a concern that changing the genetic makeup of plants is a dangerous venture
into uncharted territories. Alterations to an organism’s genome, which in nature
may take millions of years, can now take place in a matter of days. And, according
to critics, the introduction of laboratory creations into the natural world may have
dangerous unintended consequences. Opponents of GM crops point to the possible
presence of new allergens or toxins that could afflict many consumers of these foods.
Concerns have also been voiced about the danger of cross-pollination, sometimes
referred to as “outcrossing,” of GM plants with naturally occurring plants, which
could result in the emergence of “superweeds” with a resistance to commonly used
herbicides. There also have been cases in which genetic material from corn used for
animal feed has shown up in corn intended for human consumption.14 Arguments
against the use of GM crops can also be based on the economics of agricultural
production. In the industrially developed world the “farm problem,” at least until
recently, has stemmed from too much production, not too little, resulting in low
food prices and depressed incomes for farmers. Increasing production through the
use of GM crops exacerbates the problem, and puts even more pressure on small
farmers, who are most vulnerable to falling prices. Overproduction is, of course,
not a problem in most poor countries, where simply keeping up with population
growth is a major challenge for agriculture. But even there, the use of GM crops
raises some important issues. Seeds for genetically modified crops are more expensive
than those for naturally occurring varieties. Extensive cultivation of GM crops would primarily
benefit wealthy, large-scale farmers and intensify rural economic inequalities.
The director of an agricultural R&D center (top) in Thailand inspects genetically modified crops, while others (bottom) point to their potential dangers. (Top photo: Michael Mathes/AFP/Getty Images; Bottom photo: Jim Watson/AFP/Getty Images)
Also problematic is the development and sale of genetically modified seeds
with “terminator technology” that prevents the resulting plants from producing viable seeds.
Consequently, each new crop has to be sown with seeds purchased from the firm
that markets them.
Widespread use of GM crops also would increase monoculture—the cultivation
of only a few varieties of plants over a wide area. This practice poses the danger
of massive crop failures because it eliminates the natural firewalls that prevent the
spread of a pest or disease. To be sure, massive crop failures are a rarity in modern
farming, but the main thing that prevents their occurrence is the extensive use of
insecticides and other pesticides, which create problems of their own.
Genetically modified crops exemplify the limitations of technological fixes. At
present and for the near future, existing agricultural technologies are sufficient for
feeding the world’s population. The tragic existence of millions of malnourished
people is not the result of inadequate food production, but of warfare, unfair trade
practices, misguided governmental policies, and the maldistribution of income and
productive resources. Genetically based agricultural technologies have the potential
to improve food production and nutritional standards, but they shouldn’t be viewed
as a substitute for needed economic and political reforms.
In regard to the real and imagined dangers of genetically modified crops,
extreme fears are probably not justified. Recent research has not demonstrated that
GM crops pose clear-cut dangers, but, as is the case with all novel and potentially
far-reaching technologies, a large dose of caution is certainly in order. At the very
least, continued research into possible negative consequences is essential. In the
words of one plant pathologist, “We need to devote the research to risks now, rather
than deal with repercussions later.”15
Genetic Mapping and Screening
While recently developed technologies are used to alter the genetic makeup of plants
and other organisms, some technologies are being used to get a better understanding
of natural genetic endowments and their consequences. Most notable is the use of
pre- and postnatal screening to discover potential abnormalities in infants. It is hard
to take issue with technologies that prevent early deaths and crippling diseases, but
as will be noted below, even these benefits can have a dark side. Also problematic
is the genetic screening of adults that does not have a curative intent. Perhaps
the least objectionable use of genetic mapping is in the realm of law enforcement.
The identification of the perpetrators of crimes, always a major concern for police forces, has
been significantly aided by DNA testing. As has now become familiar to
viewers of televised crime, both real and fictional, a DNA sample obtained from a
suspect can be matched with DNA evidence taken from a crime scene to determine
guilt or innocence. Sometimes it is not even essential to have a suspect’s DNA
sample; a close match with a relative’s DNA may provide an important insight into
the identity of an at-large murderer or rapist.
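The underlying comparison can be sketched in a few lines of code. The example below is a deliberately simplified toy with made-up numbers, not an actual forensic procedure: real DNA profiling compares repeat counts at a standard panel of short tandem repeat (STR) loci and must contend with partial and degraded samples, population statistics, and evidentiary requirements.

```python
# Toy sketch of DNA-profile comparison (hypothetical data, not a real
# forensic protocol). A profile maps each STR locus to the pair of repeat
# counts inherited from the two parents.
Profile = dict[str, tuple[int, int]]

crime_scene: Profile = {"D8S1179": (12, 14), "TH01": (7, 9), "FGA": (21, 24)}
suspect: Profile = {"D8S1179": (12, 14), "TH01": (7, 9), "FGA": (21, 24)}
relative: Profile = {"D8S1179": (12, 15), "TH01": (7, 9), "FGA": (21, 22)}

def matching_loci(a: Profile, b: Profile) -> int:
    """Count the loci at which two profiles share both allele values."""
    return sum(
        1 for locus in a if sorted(a[locus]) == sorted(b.get(locus, ()))
    )

print(matching_loci(crime_scene, suspect))   # 3 of 3: consistent with a match
print(matching_loci(crime_scene, relative))  # 1 of 3: partial familial overlap
```

A full match across every locus is strong but still probabilistic evidence, while a partial overlap of the kind shown for the relative is what allows investigators to work from a family member’s DNA toward a suspect.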
Tests of this sort require a prior record of an individual’s genome or at least
a portion of it. In recent years many police forces have been accumulating DNA
records for large numbers of people. In many jurisdictions anyone arrested is
required to submit a DNA sample, usually obtained through a simple mouth swab.
Even though a suspect may be exonerated, his or her genome will remain on file,
most likely in a centralized data bank maintained by the FBI. On the other side
of the coin, some individuals convicted of various crimes have been exonerated
when subsequent DNA tests indicated that they were in fact innocent; by 2008,
243 convictions had been overturned, and 17 residents of death row were shown
to be innocent.16
Whether or not the mandatory submission of a DNA sample can be successfully
challenged by invoking the U.S. Constitution’s prohibition of “unreasonable search
and seizure” is still being debated. Meanwhile, the collection of DNA samples by
firms in the private sector has become a source of concern to civil libertarians,
although it is not yet widespread. As might be expected, physicians and other
medical practitioners have taken the lead; having a patient’s genetic information
can make an invaluable contribution to diagnosis and treatment. At the same time,
however, the growing use of electronic medical records increases the danger that
these records, many of which contain sensitive information, could be accessed by
individuals with malicious intent.17 More benign but still problematic is the legal
right of employers and insurance companies to require individuals to authorize
the release of their medical records. Most states have laws that prevent insurers
from denying or restricting coverage or charging different rates based on genetic
information. It is also forbidden in most states to require a genetic test and to deny
employment due to unfavorable genetic information. At the same time, however,
an employer may make a conditional job offer to a prospective employee and
then require him or her to authorize the release of his or her health records as a
condition of employment.18 Clearly, the emergence of DNA testing has created
a number of issues and problems in balancing improved medical care with the
protection of individual privacy, a conundrum that will reappear when we consider
in a later chapter how advances in electronics also pose potential threats to
personal privacy.
Cloning, Present and Future
The patenting of genes, the growing of genetically modified food crops, and the
threats to privacy posed by digitized genetic records are prominent examples of the
legal and ethical issues raised by recently developed biological technologies, but
they are hardly the only ones. Another prominent source of controversy has been
the practice of cloning organisms. A clone is an organism that has been derived
from another organism through asexual reproduction. Because all of the clone’s
genetic material originated with the “parent” organism, it is a genetic duplicate.
During the 1950s biologists successfully cloned frogs, but for many years it was
thought that the cloning of higher organisms was impossible or at least unlikely.
Complacency over cloning was shattered in 1997 when a team of researchers at
the Roslin Institute in Scotland announced that they had successfully cloned a
female sheep during the previous year.19 The genetic material of the sheep known
as “Dolly” originated in the nucleus of a cell that had been part of the mammary
tissue of an adult sheep, one of many nuclei that had been transplanted into
the enucleated eggs of sheep of a different breed. Since these nuclei carried full
complements of chromosomes (unlike sperm and eggs, which each contain only
half the genetic material of ordinary cells), the eggs did not require fertilization.
Instead, an electric shock was applied, which allowed some of the eggs to begin
the process of cell division that results in an embryo. Thirteen out of 277 of these
eggs were implanted into the uteruses of sheep who served as surrogate mothers.
Of these, 12 miscarried, but Dolly was carried to term and came into the world on
July 5, 1996.
The ultimate goal of the Roslin team was to genetically modify the cells of
sheep udders so that they would produce commercially valuable drugs. The cells
would then be cloned, allowing the reproduction of sheep that produced the drug
along with their milk. It was anticipated that cloned sheep eventually could also
be used for conducting research into genetically borne diseases. These, of course,
are not the only possible applications of cloning. Sheep are not fundamentally
different from other mammals, humans included, and there is no scientific
reason that the procedures used to produce Dolly could not also be applied to the
cloning of people.
Dolly with Ian Wilmut, the leader of the team that cloned her. (Maurice McDonald/PA Wire/AP Photo)
The initiation of cloning: a needle inserts the nucleus of an adult cell into an enucleated mouse egg. (James King-Holmes/Photo Researchers, Inc.)
The prospect of human cloning has raised a host of ethical questions. Would
it be prudent to circumvent the normal process of reproduction, which ensures the
healthy genetic diversity of a population? Will members of a particular segment of
the population be especially inclined to clone themselves, and are they necessarily
the kind of people who should be perpetuated? Will people with particular abilities
allow themselves to be cloned for a fee so that prospective parents will be able to
have a child with inborn talents? Will people with damaged organs clone offspring
so that they are guaranteed future organ replacements that pose no danger of
rejection? Might it be possible to eventually clone individuals from the cells of dead
people? Right now this is a far-fetched prospect, but so was the cloning of mammals
a few years ago. Some might argue that such an outcome would be desirable, as it
would preserve the genetic endowments of geniuses in the realms of science, music,
the visual arts, and other exalted human endeavors. But at the same time, of course,
it could also allow a ruthless dictator to attain a kind of immortality.
Concerns over human cloning have led several governments to ban any efforts
in this direction, and the U.S. government has barred the use of federal funds for
human-cloning research. Even so, cloning is a technology that will be difficult or
even impossible to contain. Clandestine human cloning could take place anywhere,
and eventually it might even be encouraged by governments beguiled by the
prospect of creating a cohort of “supercitizens.” In any event, neither tacit nor direct
government support is essential for human cloning. There are many individuals
wealthy enough to underwrite such a venture, and their ranks include persons whose
outsized egos are likely to make their physical duplication an attractive proposition.
It is important to note, however, that governments or individuals will be
disappointed if they expect that cloning will create identical copies of the original
organism. Even “identical” twins, who are natural clones, are not really identical.
There are many cases, for example, of one twin being left-handed while the other
is right-handed. Animals cloned in laboratories often differ considerably in both
appearance and behavior from the animal from which they were cloned.20 These
differences stem from biological processes that are still not well understood, along
with environmental influences that begin in the womb and continue to shape an
organism throughout its life. Without getting into the age-old debate about the
relative importance of “nature vs. nurture,” it simply can be stated that genetic
endowments are not all-powerful, and that an organism’s environment is a major
influence on its appearance and behavior.
Stem Cells and Future Therapies
At this point, the cloning of humans is only a hypothetical issue. Of much
greater current salience is a promising but controversial medical technology:
treating serious diseases through the use of embryonic stem cells. These cells
appear during the early phase of embryo formation and have the quality of “plu-
ripotency,” which means that they have the capability to become any of the
220 kinds of cells that constitute the human body, everything from bone cells
to neurons. Stem cells in bone marrow are now being successfully used to treat
leukemia, but the real promise of stem-cell therapies lies in the future. With
further research it is hoped that stem cells can be used to regenerate organs
that have been damaged by Parkinson’s disease, Alzheimer’s disease, and many
other ailments.
In recent years some researchers have been able to convert ordinary cells into
pluripotent stem cells by using a virus to introduce four reprogramming genes into
a cell nucleus.21 Although promising, therapies using these induced stem cells lie
at least a decade in the future, and current stem cell research and application is
based on stem cells obtained from embryos that have been created through in-vitro
fertilization but have not been implanted into a uterus for further development.
Alternatively, it may be possible to extract stem cells from embryos that received
the genetic material of a patient to be treated with these cells. In either case, the
use of embryos has galvanized opposition to stem cell research. For individuals and
groups who hold that life begins at conception and that an embryo is a human
being, the destruction of an embryo through the extraction of stem cells is akin
to murder. Adhering to this point of view, President George W. Bush in 2001
prohibited the federal funding of stem cell research unless it used existing stem cell
lines derived from embryos that were the product of in-vitro fertilization and where
the sperm and egg donors had provided informed consent. Although these lines were
originally thought to number approximately 60, only about 15 proved suitable
for research purposes.22 Other restrictions on embryonic stem cell research created
difficulties for researchers. According to the government’s rules, research using
stem cells had to be kept entirely separate from other kinds of research receiving
federal funding; even the shared use of a refrigerator was forbidden. In 2009 these
restrictions were lifted by the Obama administration, a move upheld in a number of
court cases. Meanwhile, embryonic stem cell research continues to be controversial.
Polls indicate that a majority of Americans are in favor of it, but support or
opposition tends to break along political and cultural divisions; Republicans and
frequent church attenders are more likely to oppose embryonic stem cell research,
while Democrats and less frequent churchgoers tend to favor it.23
Although most supporters of this research do not believe that an embryo
constitutes a human life, its use as a source of stem cells still triggers a number of
issues and problems. Currently, the embryos used for stem cell research are the
surplus products of in-vitro fertilization that are destined to be discarded when
they are no longer needed. However, the supply of these embryos will likely not
be sufficient if stem cell research results in therapies that have larger numbers of
potential beneficiaries. This would give rise to two alternative means of dealing
with supply limitations, both of which may be distasteful to many. On the one
hand, the products of in-vitro fertilization would continue to be the sole source of
stem cells, but because the number of embryos is necessarily limited, their stem cells
would either go to the highest bidder or would be allocated in a manner reminiscent
of the early days of kidney dialysis. On the other hand, embryos could be cultivated
by commercial enterprises for the sole purpose of providing stem cells, making them
a kind of commodity rather than a life form, human or otherwise. As with most
revolutionary medical technologies, stem cell–based therapies bring with them a set
of ethical issues that need to be addressed, even though they may never be resolved
to the satisfaction of everyone.
The Ethics of Genetic Intervention
Stem-cell research, mammal cloning, genetically modified foods, and the sequencing
of the human genome have given rise to a host of ethical concerns that touch upon
some of the most basic issues of human existence. On a somewhat less abstract
level, governments are faced with the task of creating appropriate public policies
to encourage the development of beneficial technologies while preventing the
emergence of harmful ones, or at least mitigating their worst consequences. In
hammering out these policies, it will be important to guard against an excessive
enthusiasm for seeking in the human genome the answer to all of our medical,
psychological, and social problems.24
At the level of the individual, there is no doubt that many diseases have
a genetic component, but genetic defects are not always clear-cut; a gene that
malfunctions under one set of environmental, nutritional, or other external con-
ditions may be perfectly adequate in different circumstances.25 It cannot even be
said with complete confidence that the presence of a seemingly harmful gene is an
unmixed hazard; genetic “imperfections” provide the raw material of genetic vari-
ability upon which natural selection operates.26 Also, a search-and-destroy cam-
paign waged against “defective” genes is likely to be a quixotic affair, for it may not
always be possible to identify them. Each of us carries between 5 and 10 defective
genes that have no consequence for us individually but which could at some future
time become a problem for our progeny.27
Finally, and most importantly, an overemphasis on the role of genes in
determining physical and behavioral qualities could lead to a revival of the now
discredited eugenics movement. Initiated in the late nineteenth century by Charles
Darwin’s cousin, Francis Galton, the eugenics movement had many adherents,
including leading scientists, politicians, and intellectuals.28 Its basic postulate was
that biological inheritance was the major source of human differentiation; if some
people had superior strength, intelligence, and even morality, it was because their
ancestors exhibited these qualities and passed them down to their descendants.
Although its focus was on the qualities of individuals, eugenics easily lent itself to
racist notions of human differentiation. According to the majority of eugenicists,
the white race occupied the pinnacle of human development and all others were
arrayed at lower places of the human hierarchy. The sexes, too, could be divided
according to their general abilities, with women on average presumed to have lower
intellectual capabilities than men, although they were deemed to be superior in the
lesser qualities of empathy and nurturance.
Eugenics aimed at being far more than a scientifically based way to account
for human variation; it also was a call to action. According to eugenicists, human
progress depended upon the perpetuation of individuals with superior genetic
endowments and the limitation or even elimination of those with inferior
qualities. According to the program envisaged by “positive eugenics,” individuals
whose physical, mental, and moral qualities marked them as “superior” were
encouraged to have children who would inherit and perpetuate these qualities.
This, however, was a difficult task. Much easier was the promotion of “negative
eugenics,” which aimed at preventing the transmission of “undesirable” qualities
from one generation to another. In its mildest form, negative eugenics motivated
the promotion of birth control for low-income groups. It was also associated
with more draconian measures, most notably the sterilization of “defectives.”
In the United States more than 60,000 compulsory sterilizations of “defectives”
were performed between 1907 and 1963.29 The most horrific examples of
negative eugenics took place in Nazi Germany, where 350,000 sterilizations were
performed in the name of improving the German Volk and millions of men,
women, and children were killed because they were adjudged to be members of
“inferior races.”30

An early-twentieth-century demonstration of support for eugenics. The demonstrators did
not necessarily agree with the statements on the signs they were holding; they were hired to
participate in the demonstration. (Wisconsin Historical Society)
These ghastly actions undermined much of the support for negative eugenics,
but support for positive eugenics continued well into the 1960s. Although it was
shorn of its racist underpinnings, eugenics still was predicated on a concern that
humanity was threatened by the deterioration of its genetic foundation. Genetic
endowments were taken to be the major source of human variation, and as one
eminent geneticist put it in the early 1960s, “Human welfare, both with individuals
and with society, is predicated on the health of the genetic endowment of human
populations . . . [An] appalling amount of human misery is due to defective
heredity.”31
Eugenics has ceased to be a social movement aimed at human betterment, but
concerns about the genetic endowments of individuals are very much with us. The
elimination of genetically borne defects is no longer a government policy. Instead, it
is now an individual, personal matter, as physicians and parents-to-be make exten-
sive use of a variety of tests to determine the presence of genetically linked disorders
such as Tay-Sachs and Down syndrome. Through genetic counseling, prospective
parents are helped to decide if a fetus with inborn defects should be aborted or car-
ried to term.32 For prospective parents using in-vitro fertilization, this decision is
somewhat less painful because it is now possible to test a single extracted cell from
an embryo consisting of only eight cells. Even so, the decision to terminate life at
any stage of development is a difficult one. Yet for all of the potential anguish they
may cause, these choices will likely become more widespread as researchers find
genetic links to a growing number of disorders.
Although more medical problems may come to be associated with particular
genetic malfunctions in the years to come, their influence should not be exagger-
ated. For one thing, a decade of research conducted after the initial sequencing of
the human genome has shown that early expectations of tying certain diseases to
particular gene abnormalities turned out to be far too optimistic. There is no ques-
tion that genetic abnormalities can be implicated in many diseases, but how the two
are connected is turning out to be far more complicated than had been anticipated
in 1990.33 As we have seen earlier, making the transition from science to technol-
ogy is often a long and difficult process. It is also important to understand that
genes never determine physical and behavioral development by themselves. Genes
function within individual organisms, and these organisms are always affected by
their environments. As David S. Moore has noted, “It is a mistake to think that
genes are necessarily the first link in the causal chains of biological events. From the
moment of conception, environmental factors and genetic factors are in an ongoing
‘dialogue’ with one another about building a person. Each of these sets of factors
brings its own necessary information to this conversation.”34
Fastening upon genetic endowments to the exclusion of everything else
is a simplistic form of determinism that harks back to excesses of the eugenics
movement. At the same time, it is more than likely that advances in biological
and genetic research will serve as the basis for a plethora of potential medical
applications. Taken to the extreme, expanding knowledge about genetics holds out
the promise—or the threat—of what the eminent biologist Edward O. Wilson has
called “volitional evolution,” that is, directing human heredity to favor some traits
while extinguishing others. Should this come to pass it “will present the most pro-
found intellectual and ethical choices that humanity has ever faced.”35 We are still
a long way from achieving this capability, but continued progress in understanding
the structures and operation of the human genome will offer new ways of addressing
medical and psychological maladies. It will take a great deal of collective wisdom
and foresight embodied in new ethical principles to decide which should be pursued
and which should be rejected.
Questions for Discussion
1. What are the pros and cons of patenting genes? Is it legitimate to treat specific
genes or portions of genes like other inventions eligible for a patent?
2. Genetically modified foods are sometimes derided as “Frankenfoods.” Is this
a fair way to characterize them? Do you have any qualms about consuming
genetically modified foods? Why?
3. Most specialists are of the opinion that there are no technical barriers to
human cloning. Although no legitimate agency is likely to financially support
human reproductive cloning, it could be funded by an individual seeking to
perpetuate himself or herself. Should such a practice be forbidden, with stiff
penalties attached? If so, how would such a ban be enforced?
4. Gene-based therapies can now be used to successfully treat cystic fibrosis and a
few other diseases, and more applications of this technology may be expected in
the future. If they become available, should there be encouragement (financial
and otherwise) for all gene therapies? Should they be used to treat “deficien-
cies” like premature baldness, or short stature? Who should be empowered to
decide if these therapies should be developed and applied?
5. For many people, the longstanding “nature vs. nurture” debate has tilted in
the direction of the former. Why has this happened? Are you more inclined
to give primacy to one over the other? Is this dichotomy even a useful way
of evaluating the contributions of genetic endowments and environmental
influences?
6. What does Edward O. Wilson mean by “volitional evolution”? Is it just a syn-
onym for eugenics? Is it something that should be pursued?
Notes
1. Information on the project, both current and historical, can be found at Human Genome Project Information, “History of the Human Genome Project,” accessed on February 29, 2012, at http://www.ornl.gov/sci/techresources/Human_Genome/project/hgp.shtml.
2. The majority ruling and the minority’s dissent in the case, Diamond v. Chakrabarty, can be found at http://caselaw.lp.findlaw.com/cgi-bin/getcase.pl?court=us&vol=447&invol=303 (accessed February 29, 2012).
3. Human Genome Project Information, “Genetics and Patenting,” accessed on March 2, 2012, at http://www.ornl.gov/sci/techresources/Human_Genome/elsi/patents.shtml. This site has many useful links pertaining to the patenting of genes.
4. Bernard Wysocki, Jr., “Columbia’s Pursuit of Patent Riches Angers Companies,” Wall Street Journal (December 21, 2004), A1. Accessed on July 16, 2008, at http://www.cogsci.ucsd.edu/rik/courses/cogs200-w05/readings/7-Columbia’s%20Pursuit%20of%20Patent.doc.
5. Worldwatch Institute, “Grain Production Sets Record but Supplies Still Tight” (2011), accessed on February 29, 2012, at http://www.worldwatch.org/node/5539.
6. World Health Organization, “Twenty Questions on Genetically Modified Foods,” accessed on February 29, 2012, at http://www.who.int/foodsafety/publications/biotech/20questions/en/.
7. David Despain, “Farming the Future: GM Crops Recommended as Key Part of Obama’s Evergreen Revolution” (December 9, 2010), accessed on March 5, 2012, at http://www.scientificamerican.com/article.cfm?id=farming-the-future-gm-crops.
8. Human Genome Project Information, “Genetically Modified Foods and Organisms,” accessed on February 29, 2012, at http://www.ornl.gov/sci/techresources/Human_Genome/elsi/gmfood.shtml.
9. Ibid.
10. Matt Kallman, “Genetically Modified Crops and the Future of World Agriculture” (World Resources Institute, June 17, 2008), accessed on July 17, 2008, at http://earthtrends.wri.org/updates/node/313, p. 3.
11. Ibid., p. 2.
12. ETC Group, “Global Seed Industry Concentration” (September/October 2005), accessed on July 17, 2008, at http://agobservatory.com/agribusiness_records.cfm?dID=114, p. 2.
13. “Genetically Modified Foods and Organisms,” op. cit.
14. World Health Organization, “Twenty Questions,” op. cit.
15. David Suzuki and Peter Knudtson, Genethics: The Clash Between the New Genetics and Human Values (Cambridge, MA: Harvard University Press, 1989), p. 205.
16. Randy James, “A Brief History of DNA Testing,” Time (June 19, 2009), accessed on February 29, 2012, at http://www.time.com/time/nation/article/0,8599,1905706,00.html.
17. Mark A. Rothstein, “Keeping Your Genes Private,” Scientific American 299, 3 (September 2008).
18. Ibid., p. 68.
19. Gina Kolata, The Road to Dolly, and the Path Ahead (New York: Morrow, 1998).
20. Rachel Dowty, “Clones and Cloning,” in Sal Restivo (Ed.), Science, Technology, and Society: An Encyclopedia (Oxford and New York: Oxford University Press, 2005), p. 54.
21. Konrad Hochedlinger, “Your Inner Healers,” Scientific American 302, 5 (May 2010).
22. Andrea L. Bonnicksen, “Embryonic Stem Cells,” in Carl Mitcham (Ed.), Encyclopedia of Science, Technology, and Ethics (Detroit: Thomson Gale, 2005), p. 609.
23. Eryn Brown, “Stem Cells: Research Funding and the 2012 Elections,” Los Angeles Times (November 21, 2011), accessed on March 1, 2012, at http://articles.latimes.com/2011/nov/21/news/la-heb-stem-cell-funding-2012-elections-20111121.
24. Stephen S. Hall, “Revolution Postponed,” Scientific American 303, 4 (October 2010).
25. David Suzuki and Peter Knudtson, Genethics: The Clash Between the New Genetics and Human Values (Cambridge, MA: Harvard University Press, 1989), p. 205.
26. Ibid.
27. Ibid.
28. Daniel J. Kevles, In the Name of Eugenics: Genetics and the Uses of Human Heredity (Berkeley: University of California Press, 1985).
29. Philip Reilly, The Surgical Solution: A History of Involuntary Sterilization in the U.S. (Baltimore: Johns Hopkins University Press, 1991), pp. 94–95. Quoted in Joan Rothschild, The Dream of the Perfect Child (Bloomington and Indianapolis: Indiana University Press, 2005), p. 45.
30. Richard Weikart, “Eugenics,” in Encyclopedia of Science, Technology, and Ethics, op. cit., pp. 708–709.
31. Theodosius Dobzhansky, Heredity and the Nature of Man (New York: Harcourt, Brace, and World, 1964), pp. 14–15.
32. Rothschild, The Dream of the Perfect Child, op. cit., pp. 52–131.
33. Stephen S. Hall, “Revolution Postponed,” Scientific American 303, 4 (October 2010).
34. David S. Moore, The Dependent Gene: The Fallacy of “Nature” vs. “Nurture” (New York: Henry Holt, 2001), p. 140. Emphasis in original.
35. Edward O. Wilson, Consilience: The Unity of Knowledge (New York: Vintage Books, 1999), p. 299.
Part Four
Technology and the Transformation of Work
Few elements of human life have been as powerfully affected by technological
change as the ways in which we spend our working lives. As is noted in Chapter 9,
humans survived for most of their existence through gathering and hunting. This
mode of life was radically overturned by the adoption of agriculture, and then
much later by industrialization. The introduction of new technologies has been
a prominent feature in all of these transitions, but as argued in this chapter and
contrary to commonsense beliefs, technological advance did not always make
working life easier; in fact, sometimes the contrary has been true.
Although modern technologies have provided us with a plethora of goods
and services with less physical effort, they also have brought with them the fear of
widespread unemployment. Chapter 10 takes up this longstanding issue, indicating
when this fear is appropriate and when it is misplaced. Without downplaying the
real pain caused by job losses due to technological change, this chapter provides
some reasons to be optimistic about the future levels of employment.
Chapter 11 moves the discussion from the quantity of available work to the
quality of working life. Industrialization radically changed the nature of work. New
technologies altered many aspects of life on the job, as did the theories and practices
of management that were developed at this time. Today, it is often remarked that
we are living in a “post-industrial” economy and society, but the distribution of skill
and authority in the workplace is still a contentious issue, and will likely remain so,
even as the distribution of jobs continues to change.
Chapter Nine
Work in Nonindustrial Societies
For most people the greatest part of their waking hours is spent doing some sort
of work. Be it as an assembly line worker, a sales clerk, or a psychiatrist, we gain
our livelihood and a good measure of our individual identity from the work that
we do. Technological change has been a major influence on the way we go about
our work. Today, few of us are directly engaged in the actual production of things.
Nor do we find it necessary to work from sunup to sundown in order to secure
our livelihood. All in all, technological change appears to have lightened our
workload substantially. Yet by now it should be apparent that the consequences of
technological change are not always simple and straightforward. The development
of technology over time has not always resulted in diminished workloads; in fact,
the opposite has sometimes been the case. In this chapter we will examine the
relationship between work and technology before the onset of industrialization. The
facts and concepts developed here will provide perspectives that should be helpful
when we consider subsequent changes in work and technology.
Working with the Earliest Tools
The history of work is virtually synonymous with the history of the human species.
To trace the influence of technology on work we need to begin with our likely
ancestors, the Australopithecines, who were shaping and using stone tools at least
2.6 million years ago. Paleoanthropologists are not in agreement as to which group of
primates is entitled to be put in our genus, Homo, but there is no doubt that making
and using tools was one of their distinguishing characteristics.1 Although early stone
tools have a crude appearance, this does not mean that the productive technologies
early humans employed were primitive. From the outset, it is important to draw a
distinction between the apparent crudity of the product and the skill that went into
making it. Technologies in “primitive” societies are simple because only a few tasks
are involved in the productive process, and a single person usually performs all of
them.2 In these societies the division of labor is limited; a worker does everything
required to produce something, and the work process is not fragmented into separate
tasks allocated to different individuals.3 There may be some degree of specialization;
a few jobs may be the exclusive work of women or men, or a worker may specialize
in the production of a particular good, although again the task is his or hers from
start to finish.4 But most workers in “primitive” societies do not specialize in a single
task, so they are not likely to be virtuosos in a particular area. They are, as the old
adage has it, jacks of all trades and masters of none.
This lack of specialized skills does not mean, however, that workers in “primitive”
societies are lacking in technical ability. Although the overall work process is
simple, the individual workers may be highly dexterous and ingenious as they go
about their work. Take, for example, the production of stone tools. What could be
more primitive than “Stone Age” technology? But in point of fact, the production
of stone tools is a considerable achievement. In the first place, the idea of making
an implement by chipping a stone is hardly obvious. Although sea otters commonly
use a stone for breaking clam shells, modifying a stone so that it can serve as a better
tool is unknown in the animal world. Second, the fabrication of stone tools is a
task that calls for considerable skill and practice. The effective chipping of a stone
tool requires the preparation of a properly sited striking platform on the stone to be
chipped, and the delivery of the striking blows at the proper angle. In some cases
the stone is chipped by the use of a pressure chipping tool, such as a piece of antler,
which forces a flake off as pressure is applied to it. Whatever the method employed,
the production of stone tools is not a simple process; most of us would experience
some difficulty in duplicating the work of early humans and protohumans.5 At the
same time, new generations of toolmakers did not have to develop these techniques
by themselves. The regularity of the tools’ shape and design suggests that the
techniques of toolmaking were transmitted from person to person. An educational
process was an integral part of Stone Age technologies.
The use of stone implements by prehumans and early humans was a crucial
factor in the development of a species of animal that would forever be characterized
by its ability to make and use tools. This ability was not simply the result of
biological evolution; it was at the same time a cause of it. The process through
which early hominids evolved into humans began with the assumption of upright
posture and the anatomical changes that made the hand capable of performing an
increasing variety of tasks. One of the most important of these was the production
and use of tools. In turn, tool use stimulated brain development and gave an
advantage to those individuals whose mental capabilities allowed them to be the
most proficient makers and users of tools. In sum, manual dexterity, tool use, and
brain development evolved together in a mutually reinforcing manner.6 By the
Upper Paleolithic era, this combination of evolved hands and brains had given rise
to an impressive variety of equipment: pins, awls, needles with eyes, spoons, saws,
axes, pestles for grinding, scoops, and shovels.7 Some early hunters discovered that
they could double the distance that a spear could be thrown through the use of a
throwing stick (also known as an atlatl), long before Archimedes (c. 287–212 b.c.)
explained the principle of leverage.
Work and Leisure in Technologically Primitive Societies
Significant as the first tools were for human development, the control over the
environment that they offered was modest indeed. It is easy to visualize a life that,
in the classic formulation of the seventeenth-century philosopher Thomas Hobbes,
was “solitary, poor, nasty, brutish, and short.” A “primitive” technology seems to be asso-
ciated with a precarious and insecure existence, one in which the daily struggle for
mere subsistence occupied all of a person’s time and energy. But once again we fail
to give technologically unsophisticated people the respect they deserve. Research
on the actual lives of these people indicates that Hobbes may have been wrong,
and that a life of unremitting toil is not a necessary consequence of a low level of
technological development.
Societies that sustain themselves through hunting and gathering are rare
today. Still, there are a few parts of the world where agriculture and industry have
not taken over and the ways of earning a living likely resemble the ones employed
by our prehistoric ancestors. One such place is the Kalahari Desert of southern
Africa, the home of the !Kung Bushmen. It is a harsh, difficult environment,
one that would seem the natural locale for the kind of life described by Hobbes.
Since they possess only the simplest technologies, it might be expected that the
life of the !Kung is one of unceasing hardship, and that the bare necessities of life
are obtained only through continuous toil. Yet this is not the case. As the field
research of Richard Lee has demonstrated, the !Kung do not have to work especially
hard.8 By gathering protein-rich mongongo nuts and over 80 other species of food
plants, and supplementing these with the meat of game animals, the !Kung provide
themselves with a nutritionally adequate diet. The work of gathering and hunting
is not particularly arduous. On average, adults put in a six-hour workday about two
and a half days a week. Earning a living thus requires only a total of 12 to 19 hours
of labor each week. The remainder of the !Kung’s working hours are taken up with
such activities as visiting and entertaining friends and relatives, and engaging in
dances that put the participants into a trance.

A !Kung San woman with a harvest of mongongo nuts. (Richard Lee/Anthro-Photo)
The modest work required to meet the needs of the !Kung has led one
commentator to characterize the !Kung community as “the original affluent society.”9
Here, affluence is not the same thing as abundance, for the !Kung lack the material
possessions common to more technologically advanced societies. A nomadic
people, they have no permanent dwellings. Their clothing, as befits life in a warm
climate, is minimal. They lack all but the most basic domestic utensils, and they
have no way of conveying these except on their own backs.
Besides not producing many material goods, they have not produced many of
their own kind. They have controlled their rate of reproduction and have avoided
putting excessive pressure on their environment. The population density in their
territory is only 41 persons per square mile. In contrast, agricultural societies have
densities ranging from a few hundred to several thousand, while urban centers in
industrial societies may pack tens of thousands of people into a square mile of space.
What the !Kung do have in great supply is leisure time. This leads to the
seemingly paradoxical conclusion that technological progress may result not in the
saving of labor but in its increase, or, as Marshall Sahlins puts it, “the amount of
work per capita increases with the evolution of culture, and the amount of leisure
per capita decreases.”10 Why is this so? The answer lies in the equivocal connection
between technological change and the requirements of work. A technologically
dynamic economy generates labor-saving devices, but at the same time it produces
a steady stream of new goods that are eagerly sought after. This means that labor-
saving technologies are generally used to increase income, not to reduce the hours
of work. As workers in a technologically advanced society, we often find ourselves
on a treadmill, working long and hard to obtain the material goods that we scarcely
have the time to enjoy.
The !Kung live lives of marked deprivation when measured against the material
standards of most other societies. Their static economy and society have not
produced the continually expanding and changing outflow of goods characteristic
of technologically dynamic ones. Instead, they have maintained an equilibrium
between their wants and their capacity to achieve them. Modest requirements and
simple technologies have produced a way of living that lacks physical amenities,
but leaves much more leisure time than can be found in our more “advanced” way
of life.
Work and the Development of Agriculture
Although the !Kung way of life is a rarity today, it must be remembered that hunting
and gathering was the dominant mode of existence over most of the history of
humankind. This age-old pattern began to change about 12,000 years ago, when
people first began to cultivate their food instead of foraging for it. After its likely
beginning in the Tigris-Euphrates Valley of the Middle East, within a few thousand
years farming independently appeared in other parts of the world, most notably in
Egypt’s Nile Valley, the Yellow River Valley in China, the Indus Valley in India,
and parts of Central and South America. These were epochal developments, for in
many ways the development of agriculture marked the most fundamental change in
human life and the way it is sustained through work.
Why, after tens of thousands of years of successful foraging, did humans
take up farming, a transition that has been dubbed “The Neolithic Agricultural
Revolution”? Much debate still surrounds this question.11 The movement from
hunting and gathering to agriculture occurred over a long period of time in many
different parts of the world, making it hard to generalize about its causes. Further
complicating the matter is the lack of clearly defined boundaries separating the old
ways of life from the new. Some hunting and gathering groups engaged in what
might be termed proto-agriculture by encouraging the growth of certain plants
and semi-domesticating animals. It also seems to be the case that in some regions
hunting-and-gathering activities remained for hundreds if not thousands of years
after the establishment of sedentary agriculture.12 On a more positive note, despite
a lack of consensus on all of the causes of the Neolithic Agricultural Revolution,
most scholars agree that changes in population size, the climate, and the natural
environment strongly affected many hunting-and-gathering societies and, at least
in some cases, impelled a turn to deliberate plant cultivation.
Whatever its causes, the result was a new symbiosis between plants and people.
Domesticated grains could not reproduce themselves as they had done before humans
intervened. People had to assiduously cultivate these descendants of wild grasses,
and their lives increasingly depended on regular harvests of grain. Technological
progress thus resulted in a greater regularity of work patterns. Also, people had to
work harder and more consistently because there were more of them. Agriculture
allowed considerably higher population densities than those found in hunting-and-
gathering societies, but at the cost of a greater workload for each individual.
Farming Techniques and Patterns of Work
The interconnections between the development of agricultural technologies,
population expansion, and increases in the amount of work performed can be seen
through an examination of successive stages of agricultural technology. One of
the earliest forms of farming, known as slash-and-burn (or swidden) cultivation, is
based on cutting down indigenous vegetation and then burning it on the spot. This
not only clears the land, but it also puts nutrients into the soil. A variety of crops
are then planted on the cleared land. After harvesting, the plot is abandoned for
a number of years, which allows the land to replenish itself before the cultivators
return to begin the process once again. Slash-and-burn cultivation stands in an
intermediary position between foraging and more intensive forms of farming, and
it likely antedated the more settled forms of agriculture that were practiced in the
aforementioned river valleys of India, China, the Middle East, and Central and
South America.13
Although slash-and-burn cultivation alters the environment more than forag-
ing does, on the whole it makes fewer demands on the land than other forms of
farming. The soil is given ample time to recover, typically five years or more, and
every effort is made to nurture the tree seedlings that will regenerate the forest. The
natural variety of plant life is preserved, thus avoiding the radical simplification
of the environment characteristic of settled forms of farming.14 At the same time,
slash-and-burn agriculture supports population densities greater than those found in
hunting-and-gathering societies—up to 150 people per square mile—but with little
increase in work effort. Slash-and-burn agriculture requires that an annual average
of 500 to 1,000 person-hours of labor be expended on all phases of crop produc-
tion.15 This comes to a little more than nine and a half to 19 hours of work a week,
figures that compare quite favorably with the work schedules of the !Kung. In some
societies, such as the Bemba of Zambia, even less work is required. Only three
to four hours of work are performed each day even during the busiest agricultural
seasons, and averaged over a year, only one to two hours per day are required.16
Slash-and-burn agriculture requires less effort than more “advanced” forms of
farming, but its drawback is that it cannot support large populations. One study of
three Mexican farm communities that employed different mixes of slash-and-burn
and sedentary farming technologies showed that almost 14 times as many families
could be supported on the same acreage when irrigated farming was used instead
of slash-and-burn methods.17 This, in fact, may be the primary reason for the
development of settled, intensive agriculture. People began to practice this form
of cultivation (perhaps while continuing to employ slash-and-burn techniques on
other plots of land) in order to supply food to a growing population when no new
land was available for cultivation. Settled forms of agriculture allowed more people
to be fed because farm work was not dependent on the availability of vast tracts of
infrequently cultivated land. The same farm plot could feed much larger numbers
of people whose labor was in turn required by the much greater demands of seden-
tary farming. In extreme cases, such as that of early-twentieth-century China, with
labor-intensive agricultural technologies the land could support enormous numbers
of people—6,000 per square mile in some places.18
The paradox underlying the development of agricultural technologies until
recently is that as the land becomes more productive, people have to work much
harder, for the increased productivity of the land is the result of an increase in the
number of workers and the amount of work that they do. Settled forms of agriculture
require labor not just for the direct tasks of soil preparation, planting, and harvest-
ing. If the land cannot be left fallow for long periods of time, it is imperative that
the farmer take an active role in maintaining the fertility of the soil; this requires
gathering, storing, and applying organic wastes that range from animal droppings
to the mud scraped from the bottom of ponds. Stubborn weeds and grasses must be
removed at regular intervals. Unless rainfall is fully adequate, irrigation networks
have to be built and maintained. Animals that provide power and manure must be
cared for. Storage buildings have to be constructed. Farm implements need to be
built and kept in repair. Plants require protection from insects, even if this means
manually picking them off the growing plants.
The adoption of settled agriculture thus signifies a steep increase in the amount
of work that has to be performed. Irrigated agriculture in particular requires sharp
increases in the amount of work performed. Surveys of farms in India show that
with the introduction of irrigation much more labor is required while the crops are
growing; work is no longer largely confined to plowing, planting, and harvesting.19
Twice as much labor may be required per unit of irrigated cropland than in the case
of dry farming.20 All in all, as sedentary agriculture develops, the amount of work
required goes up sharply. In Southeast Asia, for example, an agricultural economy
based on the transplantation of rice seedlings into irrigated fields requires an annual
average of nearly 300 person-days of work. This situation changes only when
industrial products such as farm machines and chemical fertilizers are introduced
into the farm sector. This began to happen in Europe and North America a century
and a half ago, and in many parts of the world it is yet to occur. In these places,
the biblical injunction that “in the sweat of thy face shalt thou eat bread”21 is a
harsh reality.
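
The workload figures just cited can be put on a common footing with some simple arithmetic. The short Python sketch below does the conversions; the 52-week year and the eight-hour person-day used here are illustrative assumptions introduced for this comparison, not figures drawn from the sources the chapter cites.

WEEKS_PER_YEAR = 52
HOURS_PER_PERSON_DAY = 8  # assumed conversion factor, not from the cited sources

# !Kung foraging: 12 to 19 hours of labor per week, given directly in the text
kung_low, kung_high = 12, 19

# Slash-and-burn cultivation: 500 to 1,000 person-hours per year
swidden_low = 500 / WEEKS_PER_YEAR    # about 9.6 hours per week
swidden_high = 1000 / WEEKS_PER_YEAR  # about 19.2 hours per week

# Irrigated rice transplantation: nearly 300 person-days per year
rice_weekly = 300 * HOURS_PER_PERSON_DAY / WEEKS_PER_YEAR  # about 46 hours per week

print(f"Foraging (!Kung): {kung_low}-{kung_high} hours/week")
print(f"Slash-and-burn:   {swidden_low:.1f}-{swidden_high:.1f} hours/week")
print(f"Irrigated rice:   {rice_weekly:.1f} hours/week")

On these rough assumptions, foraging and slash-and-burn cultivation each demand well under 20 hours of subsistence labor per week, while irrigated rice farming demands more than twice that—the quantitative heart of the claim that more “advanced” agriculture increases rather than lightens the workload.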
The Ironies of Progress
At this point, one may question the benefits of an “advanced” technology. The
three productive systems just examined represent successive stages in economic
development, but it is by no means certain that they represent across-the-board
progress, especially when the amount of time that has to be devoted to labor is
considered. One cannot even say with certainty that they represent progressive
increases in one critical component of technology, the amount of skill exercised by
individuals, for the skills employed by foragers seem no less sophisticated than those
employed by sedentary farmers. The total stock of skills found within the society as a
whole may nonetheless be greater in the latter case, because the permanent communities
that accompany sedentary agriculture allow specialization and a more extensive
division of labor.
These comparisons of foraging, slash-and-burn, and sedentary agriculture give a
strong indication that technological advance does not necessarily result in the saving
of human labor. We are accustomed to thinking that new technologies result in the
diminution of human drudgery, but the example of the epochal shift from hunting and
gathering and slash-and-burn agriculture to settled crop cultivation shows that this
need not be the case. We are left with the sobering realization that the development
of agriculture, one of the greatest technological advances in human history, resulted
in dramatic increases in the duration, pace, and extent of human labor.
Artisan and Craft Work
Not all of the consequences of sedentary agriculture were so grim. Settled farming was
paralleled by the development of towns and cities, which were often characterized by
monumental architecture and other artistic advances. Dense permanent settlements
also gave rise to specialized religious practitioners, as well as more complex modes
of civil and military organization. Occupational specialization flourished, allowing
craftsmen to turn out higher-quality work and produce entirely new kinds of goods.
Agrarian societies also made possible the development of roads and other elements
of infrastructure that facilitated the work of merchants and artisans, giving a further
stimulus to occupational specialization.22
In some cases, division of labor extended beyond workers specializing in a
particular product; the work process was itself broken up. The Greek historian
Xenophon (c. 430–354 b.c.) noted how on occasion shoe manufacture was subdivided:
“One man earns a living by only stitching shoes, another by cutting them out,
another by sewing the uppers together, while there is another who performs none
of these operations but only assembles the parts.”23 A similar pattern can be
observed in Rome during the days of the Republic, when the metalworking industry
contained such occupational specialties as pattern makers, smelters, turners, metal-
chasers, and gilders.24
The change in economic relationships necessitated by specialization was
even more important than the products themselves. Specialized producers
need to exchange their goods and services with others. This is commonly done
through some kind of market. Production for a market instead of for the direct
use of family and community members implies a different kind of relationship
with others. For most of human existence work was performed as part of an
extensive set of mutual obligations shared by members of the same group. But
when work is done in order to make goods for sale in a market, it ceases to be
embedded in particular social relationships. Instead, it is done as a freestanding
activity, governed by self-interest rather than a sense of obligation to members of
one’s group. The spirit of this kind of work was captured by a famous passage in
Adam Smith’s eighteenth-century classic, The Wealth of Nations: “It is not from
the benevolence of the butcher, the brewer, or the baker, that we expect our
dinner, but from their regard to their own interest. We address ourselves, not to
their humanity, but to their self-love, and never talk to them of our necessities,
but of their advantages.”25
A market-based exchange is one in which both parties participate in a
transaction through which each expects to gain. When the transaction is completed,
that is the end of the relationship. A car salesman may engage in some friendly
conversation about your spouse and kids, but what he is really interested in is your
money, just as your interest is getting a good price on the car. The relationship ends
when he pockets your money and you drive off with the car.
An exchange system such as this, cold-blooded as it may be, has vast
consequences for the way work is done. When, as often was the case in the past,
work is firmly rooted in a larger set of social relationships, the way that work is
organized will not be governed solely by the technical requirements of the job.26
More will be at stake than efficiency, innovation, and the general development
of production. It may be far more important to see to it that all members of the
family or community are employed, or that jobs are parceled out not according to
the abilities of the workers, but rather in accordance with their social position.
For example, in India only members of the lowest castes engage in the tanning of
leather, for this is considered an “unclean” occupation and therefore improper for
a high-caste individual.
Although urbanization results in more occupational specialization and some
disentangling of economic and social spheres, this separation may not be fully
realized. Particular occupations may be the special province of specific social groups.
In the African city of Timbuktu, until recently, butchers, barbers, slipper-makers,
masons, tailors, and smiths were all hereditary tradesmen, and in the case of the
smiths, almost all marriages were between members of families involved in the same
occupation.27 These alignments may be firmly buttressed by religious beliefs and
practices that prevent the free movement of individuals into specific occupations.
In Timbuktu it was widely believed that if someone other than a member of the
Arna tribe tried to sew up a leather slipper, supernatural forces would cause the
needle to jab his hand. In similar fashion, it was believed that one who was not a
member of a mason family would suffer a fatal fall if he were so unwise as to attempt
to build his own house.28 In societies such as this, the division of labor and the
market system have not completely taken over. Although the economy is fairly
complex, economic relationships are still kept within the context of the established
social order.
The caste of this leather worker in India is a major determinant of his occupation. (David
Cumming; Eye Ubiquitous/Corbis)
Guild Organization and Technological Change
Families and tribes are not the only social groupings that regulate economic activity
in traditional societies. In some traditional societies, such as medieval Europe,
organizations known as guilds were of considerable importance. Guilds are groupings
of people engaged in the same kind of occupation, be it manufacturing, trade, the
provision of a service, or even begging and thievery.29 One of the primary purposes
of guilds is to restrict the practice of a particular craft to members of the guild, who
also control recruitment into the guild. Solidarity among guild members is further
reinforced by regular festivals and ceremonial functions, such as observance of the
feast day of the guild’s patron saint. These are not the only sources of solidarity;
guilds also have a hierarchy of political offices that set rules, adjudicate disputes
among members, and generally regulate the work activities of the members.
Guild organization had significant consequences for the pattern of technological
change and for the work activities of guild members. Entry into a guild began with
a long period of apprenticeship. Apprentices spent much of their time on menial
tasks, but they also had an extended opportunity to gain essential knowledge about
their craft. Much of that knowledge involved the acquisition of tacit skills that could
be acquired only through observing a master craftsman and by engaging in hands-on
practice.30 The relationship between master and apprentice was closely constrained;
the apprentice agreed to work for his master and not leave or get married without
the master’s permission. In return, the master imparted craft skills and provided the
apprentice with room, board, and clothing. Violation of these stipulations by either
side could result in the payment of a cash penalty to the aggrieved party.31
While guild organization and practice helped to preserve many technical
processes, they also were a source of technological retardation. Although long
apprenticeships facilitated the transfer of important skills, they also produced a
kind of “trained incapacity” that locked guild members into set ways of doing things
that inhibited innovation. Since the practice of a trade required membership in
the appropriate guild, recruitment regulations and stipulated training programs
set limits on who could practice a trade. These regulations often resulted in the
exclusion of individuals who were not already members of a guildsman’s extended
family.32
Many other guild regulations and procedures worked against technological
change. In order to maintain their monopoly positions and keep prices high, guilds
set exacting standards for workmanship and attempted to restrict output. Limits were
placed on the number of apprentices and other workers that could be employed in
a guild-member’s shop, and deviations from normal work practices, such as working
under artificial light, were prohibited. In general, technological innovations were
discouraged, for the prime concern of the guild was the maintenance of the existing
way of life.
The inhibitions created by guild organization meant that economic and
technological progress often required the establishment of enterprises in areas
remote from guild control. This occurred in the Roman Empire, where craft
workshops were established on great estates in the outlying provinces, far from the
cities and guild regulation. Here could be found such technological and economic
novelties as an establishment near Arles that boasted eight water mills, as well as
glass and textile factories in eastern Gaul where work was done by wage laborers
rather than independent craftsmen.33 It probably is no coincidence that the rise
of modern industry first occurred in England, where guild restrictions were weaker
than they were in the rest of Europe.

This fifteenth-century cabinetmaker practiced his trade as a member of a guild. His wife is
also at work as she spins thread with a distaff. (The Granger Collection)

Slavery and the Inhibition of Technological Development
One of the most extreme means of tying occupation to social status is through the
operation of an institution that is unfortunately all too common in human societies:
slavery. Slavery was a major source of labor in the empires of the ancient world; it
would not be an exaggeration to say that slave labor was the foundation for a large
portion of economic life. Slavery persisted in early medieval Europe, although its
spread was inhibited by the belief that Christians should not wage wars against fellow
Christians for the purpose of enslaving them. By the thirteenth century the church
forbade the buying and selling of Christian slaves, but no such restriction applied to
people of other faiths, especially Muslims. Scattered vestiges of slavery could be found
in eighteenth-century Europe, but by that time the major center of slave labor was the
New World, where enslaved Africans supplied much of the labor for the plantation
economies of the Caribbean, the American South, and parts of South America.
The institution of slavery had a number of unfortunate consequences for work,
worker motivation, and the development of workplace technologies. Slave labor
was inefficient when compared to most forms of free labor because work had to
be closely supervised, with coercion as the main source of motivation. Slaves were
charged with the performance of simple if arduous tasks, but little was expected of
them in the way of initiative, innovation, or attention to detail. Slaves produced
great amounts of wealth for their owners, but the institution of slavery powerfully
inhibited the development of a skilled labor force, an essential component of long-
term economic advance.
It is sometimes asserted that slavery also inhibited the development and
use of labor-saving technologies since slave owners paid very little for the labor
supplied by their slaves, usually not much more than the cost of their subsistence.
This is not altogether convincing; after all, slave owners could have increased
their incomes by adopting technologies that enhanced the productivity of their
slaves while still paying very little for their upkeep. More likely, slavery inhibited
technological advance indirectly by engendering negative attitudes about work
and its place in one’s life.
The Greeks and Romans of the classical age often manifested a strong respect
for work when it was part of a self-sufficient life style. But work was despised when
it was done for another—be he a master or simply a customer—because it signified
that the worker was locked into a relationship of servitude that prevented him from
being a free citizen, whose proper activities were directed toward political life and
the pursuit of pure knowledge.34 This attitude is exemplified by the assertion of the
Greek philosopher Aristotle that “no man can practice virtue when he is living the
life of a mechanic.”35 When slavery became widespread, as in the days of the Roman
Empire, this attitude toward work hardened. Since more and more of it was being
done by slaves, manual work was seen as an inherently degrading activity not fit for
a free man. This attitude also characterized the American South before the Civil
War. Although most white Southerners did not own slaves, they inhabited a culture
in which manual work could easily be associated with slave labor. Due in part to
the low status of manual work, the South had a weaker tradition of engineering
and other technological endeavors. This was reflected in a much slower pace of
industrialization than occurred in the North, and ultimately was one of the key
reasons for the South’s defeat in the Civil War.
The Measurement of Time and Changed Working Patterns
There are two ways to consider the relationship between work and time: in terms of
the amount of time that is expended on work, and in the way that it is scheduled.
Technological change altered both of these. As we have seen, in technologically
simple societies where subsistence comes through foraging or slash-and-burn
agriculture, relatively little time is expended on work. With the introduction of
settled farming and the beginning of urbanization, the hours of work lengthened.
Still, the sharp separation between work and other activities that we take for
granted today was not nearly as pronounced. Work was intertwined with social
events, religious observances, and other community activities. Moreover, the
scheduling of work activities was much more lax. The work week was punctuated
with religious feast days or other excuses for the avoidance of work.
For the farmer, certain times of the year, such as sowing and harvesting,
required long hours of work, but during other times of the year some leisure time
could be enjoyed. Although a craftsman might spend long hours at his place of
business, not all of his time was taken up with productive activities. Many hours
were spent socializing with neighbors and customers. A good deal of time was spent
on haggling over prices with prospective customers, an activity often done more for
the sheer pleasure of bargaining than for any real economic purpose. Concern about
time did not pervade workers’ consciousness. There was little of the frenetic desire
to “get ahead” that is so typical of many inhabitants of the modern world. Jacques
Le Goff’s description of the Middle Ages can be taken as typical of the general attitude:
“On the whole, labor time was still the time of an economy dominated by agrarian
rhythms, free of haste, careless of exactitude, unconcerned by productivity—and of
a society created in the image of the economy, sober and modest, without enormous
appetites, undemanding, and incapable of quantitative efforts.”36
Above all, the idea of keeping to a fixed work schedule rarely took hold. As
Gideon Sjoberg describes work patterns in a preindustrial city, “Merchants and
handicraft workers generally do not adhere to any fixed schedule. Shopkeepers
open and close their shops as they see fit. They may open one morning at nine,
the next at ten, and so on. The lunch hour is likely to be longer on some days
than others. Ambulatory merchants, likewise, are apt to keep rather irregular
schedules.”37
These attitudes toward time and work seem to indicate that the producer
was not concerned with the maximization of income. Work was not pursued with
the kind of single-mindedness we seem to take for granted in the modern world.
Nor was the time devoted to work considered a scarce commodity to be carefully
utilized in order to bring the maximum economic return. These conceptions of
work developed slowly and only in conjunction with major changes in the way
people viewed themselves and their surroundings. In the Western world, a major
impetus to these changes may have been religious. The rise of Protestantism,
especially in its Calvinist form, gave a new centrality to work, for it made work
into a quasi-religious “calling.” According to Calvinist doctrine, only those
predestined to salvation could hope to enter heaven, and material success was
taken as a sign of being one of the chosen few. Equally important, this “calling”
was manifested not just by a willingness to work hard in order to gain material
success but also by a systematic and methodical approach to work.38 This spirit
was clearly congenial to the regulation of work activities according to the dictates
of a precise schedule.
The Clock
Important as these changed attitudes might have been, “modern” attitudes toward
work, especially the way it was organized and scheduled, also were influenced by a
key invention that allowed the precise scheduling of work activities: the clock. To
be sure, methods of telling time had existed throughout the ancient world. Sundials,
candles with marked segments, and vessels that discharged water at a regular rate were
all employed in antiquity. But each had drawbacks: sundials were useless at night and
on cloudy days, candles could blow out, and water clocks froze in the wintertime.
Equally important, most work activities in the ancient world required little in the
way of precise timing and scheduling. These requirements arose only in conjunction
with a new kind of social organization: the medieval monastery.39 By the standards
of their day, these monasteries were very large enterprises, many of them containing
hundreds of monks and other workers. Their cooperative efforts required precise
scheduling, as did their religious life, which required regular times for prayers,
masses, and other observances. This was most pronounced in the
monasteries following the Rule of St. Benedict, which divided days and nights into
intervals of 12 hours each and required prayers to be said at sunrise, at the third hour
of daylight, at noon, at the ninth hour of daylight, and at sunset.40
The first clocks were built in the thirteenth century, their construction motivated
by a growing awareness of the value of accurate timekeeping.41 The scheduling of
activities through the use of special timepieces had brought economic benefits
to the monasteries and secular enterprises, and over time the idea of orderly routines
and schedules had become an integral part of city life in late medieval Europe.
In the fourteenth century, for example, Philip VI of France granted the city of
Amiens a charter that allowed the issuance of an ordinance “concerning
the time when the workers of the said city and its suburbs should go each morning
to work, when they should eat and when to return to work after eating; and also,
in the evening, when they should quit work for the day; and that by the issuance
of said ordinance, they might ring a bell which has been installed in the Belfry
of said city, which differs from the other bells.”42 It was in these urban centers
during the early fourteenth century that the mechanical clock began to exert its
influence. The ringing of the town bell could be specifically tied to the needs of
emerging occupations, as when the governor of the county of Artois granted to
the government of the city of the same name the right to construct a special belfry
because of the “cloth trade and other trades which require several workers each day
to go and come to work at certain hours.”43
In addition to laying the foundation for a more regularized pattern of work,
the clock embodied all of the key characteristics of a machine. It used an external
source of energy (a spring or a falling weight), unlike traditional tools and devices
that required human or animal muscle power. The even distribution of this energy
required some sort of regulator, as typified by the pendulum discovered by Galileo
and first applied to timepieces by Christiaan Huygens in Holland. The clock’s operation
was automatic, requiring little human intervention. Finally, the clock put out a
standard “product”—hours, minutes, and seconds.44
In producing this standard product, the clock had made time into a kind of
substance. Without getting into deep philosophical waters, we can assert that time
as we tend to think of it does not exist; when we think or speak about time what we
are actually dealing with are measures of time: seconds, hours, decades, or centuries.
These units of time are then taken to have real substance and are treated as valuable
commodities. Hence, we use phrases like “saving time,” “wasting time,” “spending
time,” and “buying time.” Note too the assertion that “time is money.”
The conversion of time into a commodity reinforces the rational spirit that was
briefly described in Chapter 1. It stimulates the conscious choice of specific means
for the attainment of given ends and provides a basis for judging the worth of a
particular activity relative to others. Individuals in a society obsessed with time find
it difficult to go about their work in an unplanned and irregular fashion.
The precise scheduling of work received further emphasis as new mechanical
technologies began to make their appearance. Spinning jennies, power looms, flour
mills, and refractory furnaces, as well as the water wheels and steam engines that
ran them, called for a mechanically paced way of working that was profoundly
different from the old ways of doing things. The process that began with
medieval monasteries and thirteenth-century clocks moved inexorably toward
the conversion of most work into routinized procedures governed by artificial
time schedules. In Chapter 11 we will look at these historical developments and
attempt to bring them up to date. But before analyzing the effects of technological
development on the quality of work today, we will examine an issue even more
fundamental: the prospect that technological advance might leave many people
with no work to do.

Based on a discovery by Galileo, this clock mechanism uses a pendulum to regulate its
motion. (The Granger Collection)
Questions for Discussion
1. After reading about the work habits of the !Kung, does trading places with one
of them begin to seem attractive? How would a member of !Kung society feel
about such a swap? All in all, who would be better off?
2. Which of the following two statements do you find more satisfactory: (1) “The
development of agricultural technologies was beneficial to mankind because it
allowed more people to be supported,” or (2) “The development of agricultural
technologies was harmful to mankind because it required people to work harder”?
3. The development of a market economy stimulated technological and economic
development, but at the expense of a rather cold-blooded approach to human
relationships. Is this a fair trade-off? Under what circumstances can marketplace
exchanges coexist with emotionally satisfying interpersonal relationships?
4. Are there any modern occupations that retain characteristics of guild organ-
ization? Why have these persisted?
5. In many ancient societies, most notably Greece and Rome, major advances in
philosophy, mathematics, engineering, and architecture were wrought by an
elite class who owed their leisured existence to the labors of slaves. Do their
accomplishments justify the institution of slavery in those times?
6. How often do you look at a clock or your watch (or cell phone) during the
day? Is there anything unnatural about being governed by clocks? What would
happen to modern society if all the timepieces stopped, never to run again?
Notes
1. Chris Stringer and Peter Andrews, The Complete World of Human Evolution (London:
Thames and Hudson, 2005), pp. 130–139.
2. Manning Nash, “The Organization of Economic Life,” in George Dalton (Ed.), Tribal
and Peasant Economies: Readings in Economic Anthropology (Garden City, NY: The
Natural History Press, 1967), pp. 4–6.
3. Ibid., p. 4.
4. Melville J. Herskovits, Economic Anthropology: The Economic Life of Primitive People
(New York: W.W. Norton, 1965), pp. 124–152.
5. For a series of videos demonstrating the making of stone tools, see “Techniques for
Flintknapping” at http://www.ehow.com/videos-on_4514_techniques-flintknapping
.html (accessed March 7, 2012).
6. Frank R. Wilson, The Hand: How Its Use Shapes the Brain, Language, and Human Culture
(New York: Pantheon, 1998), pp. 15–34.
7. Gerhard Lenski and Patrick Nolan, Human Societies: An Introduction to Macrosociology
(Boulder, CO: Paradigm, 2004), p. 83.
8. Richard B. Lee, “What Hunters Do for a Living, or How to Make Out on Scarce
Resources,” in Richard B. Lee and Irven DeVore (Eds.), Man the Hunter (Chicago:
Aldine-Atherton, 1968).
9. Marshall Sahlins, “Notes on the Original Affluent Society,” in Ibid., p. 85.
10. Ibid., p. 86.
11. For a comprehensive discussion of the theories, controversies, and empirical find-
ings bearing on the transition to agriculture, see Graeme Barker, The Agricultural
Revolution in Prehistory: Why Did Foragers Become Farmers? (Oxford: Oxford University
Press, 2006).
12. T. Douglas Price and Anne Birgitte Gebauer (Eds.), Last Hunters, First Farmers:
New Perspectives on the Prehistoric Transition to Agriculture (Santa Fe, NM: School of
American Research Press, 1995).
13. Ester Boserup, Population and Technological Change: A Study of Long-Term Trends
(Chicago: University of Chicago Press, 1981), p. 17.
14. Roy A. Rappaport, “The Flow of Energy in an Agricultural Society,” in Energy and
Power (San Francisco: W. H. Freeman, 1971).
15. Robert McC. Netting, Cultural Ecology (Menlo Park, CA: Cummings, 1977), p. 62.
16. Boserup, Population and Technological Change, p. 46.
17. Netting, Cultural Ecology, p. 68.
18. R. H. Tawney, Land and Labor in China (London: George Allen & Unwin, 1932), p. 24.
19. Boserup, Population and Technological Change, p. 52.
20. Ibid., p. 39.
21. Genesis 3:19.
22. Boserup, Population and Technological Change, pp. 71–72.
23. Quoted in Claude Mosse, The Ancient World at Work (New York: W.W. Norton, 1969),
p. 79.
24. Ibid., p. 104.
25. Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations
(New York: Random House, 1937), p. 14. (Originally published in 1776.)
26. Stanley Udy, Jr., Work in Traditional and Modern Society (Englewood Cliffs, NJ: Prentice-
Hall, 1970), p. 66.
27. Horace Miner, The Primitive City of Timbuctoo (Garden City, NY: Doubleday, 1965),
pp. 53–55.
28. Ibid., p. 57.
29. Gideon Sjoberg, The Preindustrial City: Past and Present (New York: The Free Press,
1960), p. 187.
30. Pamela O. Long, “Invention, Secrecy, Theft: Meaning and Context in Late Medieval
Technical Transmission,” History and Technology 16 (2000): 223–41.
31. Steven A. Epstein, Wage Labor and Guilds in Medieval Europe (Chapel Hill, NC:
University of North Carolina Press, 1991), pp. 66–67.
32. Mack Walker, “Hometowns and Guilds in Early Modern Germany,” in Edward Shorter
(Ed.), Work and Community in the West (New York: Harper & Row, 1973), pp. 40–41.
33. Claude Mosse, The Ancient World at Work, trans. Janet Lloyd (London: Chatto and
Windus, 1969), pp. 106–107.
34. Ibid., p. 45.
35. Politics 1278. Quoted in Alison Burford, Craftsmen in Greek and Roman Society (Ithaca,
NY: Cornell University Press, 1972), p. 34.
36. Jacques Le Goff, Time, Work, and Culture in the Middle Ages, trans. Arthur Goldhammer
(Chicago: University of Chicago Press, 1980), p. 44 (author’s emphasis).
37. Sjoberg, The Preindustrial City, p. 209.
38. Max Weber, The Protestant Ethic and the Spirit of Capitalism, trans. Stephen Kalberg (Los
Angeles: Roxbury, 2002). (Originally published in 1905.)
39. Lewis Mumford, Technics and Civilization (New York: Harcourt, Brace and World,
1934), pp. 12–18.
40. Arnold Pacey, The Maze of Ingenuity: Ideas and Idealism in the Development of Technology
(Cambridge, MA: The MIT Press, 1976), p. 63.
41. David S. Landes, Revolution in Time: Clocks and the Making of the Modern World
(Cambridge, MA: Harvard University Press, 1983), pp. 53–82.
42. Le Goff, Time, Work, and Culture in the Middle Ages, pp. 45–46.
43. Ibid., p. 46.
44. Mumford, Technics and Civilization, pp. 14–16.
Chapter Ten

Technology and Jobs: More of One and Less of the Other?
Mention technological change to some workers, and a cold chill is likely to go down
their backs. Lurking behind many an innovation is the threat of a job lost. This is
not an irrational fear. Throughout history, and especially in the modern era, many
production technologies have been explicitly motivated by the desire to increase
productivity. Since productivity is usually measured in terms of output per worker,
the consequences of productivity increases seem clear: the reduction or elimination
of labor. For example, if worker productivity increases at a rate of 2 percent a
year—less than the post-World War II average in the United States—the size of the
economy will double in about 35 years. On the other hand, productivity growth can
be cast in negative terms by noting that today’s output of goods and services would
require only half the number of workers 35 years from now.
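Holding the number of workers constant, the 35-year figure follows from simple compound-growth arithmetic: output doubles when

$(1.02)^t = 2, \qquad t = \frac{\ln 2}{\ln 1.02} \approx \frac{0.693}{0.0198} \approx 35 \text{ years.}$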
While productivity gains have made our material lives much richer, they
also have raised the specter of mass unemployment. If a relatively few workers
can perform all the necessary labor, large numbers of people will end up with no
jobs and no work to do. At least that is the fear. Fortunately, the situation is more
complicated. Technology’s effects on employment are substantial, but they also can
be contradictory. In this chapter we will trace the complex connections between
technological change and unemployment by looking at the historical record, and
by considering where the productivity improvements wrought by technological
change seem to be leading us. In addition, the chapter will briefly consider how
technological change has affected the distribution of income, and will note some
of the consequences of globalization, a process that has accompanied technological
advance, which also influences employment and worker remuneration.
The Technological Threat in Historical Perspective
A cursory examination of technological change reveals many examples of jobs
lost because of technological advance. In 1920, the American railroads employed
113,000 locomotive engineers and 91,000 firemen. In 2002, these railroads carried
a much greater volume of freight but needed only 40,000 engineers and no firemen
to do so.1 During the early 1980s, studio musicians saw recording jobs erode by more
than a third as a result of the growing use of musical synthesizers.2 From 1972 to 1977,
21 percent more telephone calls were made throughout the Bell System, yet the
number of operators fell by 32 percent.3 Over a five-year period beginning in 2002,
steel production increased by almost 5 percent, yet employment fell 10 percent.
During the same period, corn production went up by 30 percent while farm payrolls
dropped by about the same percentage.4
Striking as these developments have been, they are not unique to our times.
Beginning in the late eighteenth century, the power loom forced many handloom
weavers out of their traditional craft, and those who stayed on did so only because
they were willing to accept miserably low wages.5 Many similar examples could
be extracted from the history of the Industrial Revolution. The introduction of
machinery threatened the livelihood of many workers and produced a considerable
amount of social unrest. Attempts to introduce the spinning jenny into the English
woolen trade during the late eighteenth century resulted in numerous riots. By the
early nineteenth century, worker resistance to new machinery was widespread in
several regions of England, culminating, as we saw in Chapter 2, in the machine-
smashing Luddite outbreaks.
Opposition to new technologies because of their consequences for employment
has not been confined to the workers whose jobs were threatened. The fear of the
effects of new technology on employment has on occasion moved public officials to
take drastic actions. In 1638 the British government banned the use of “engines for
working of tape, lace, ribbon, and such, wherein one man doth more amongst them
than seven English men can doe.”6 A more extreme (although perhaps apocryphal)
example comes from the Polish city of Danzig, where in 1661 the municipal
authorities destroyed a mechanical ribbon loom and drowned its inventor, for fear
that the new device would put hand weavers out of work.7 Consignment to a
watery oblivion also was advocated for a mechanical cotton picker by the Jackson,
Mississippi, Daily News in the late 1930s: “It should be driven right out of the
cotton fields and sunk into the Mississippi River, together with its plans and
specifications. Nothing could be more devastating to labor conditions in the South
than a cotton-picking machine.”8
A Case for Optimism
Examples, however numerous, do not conclusively prove that technological advance
is incompatible with high levels of employment. Although there can be
no dispute that particular technological changes have resulted in job losses and
attendant personal tragedies, this does not tell the whole story. The effects of
technological change are complex, and it is easy to fix upon one particular result
and ignore others. Rather than dwell exclusively on the fate of particular industries
and occupations, it is essential to consider the larger picture.
When considering the effects of technological change on the overall level
of employment, one thing must always be kept in mind: even if all available
technologies were used to their utmost, there would still be plenty of work to do.
In many parts of the world today, including many “developed” countries, there are
still large numbers of people who lack the basic necessities and amenities of life.
More and better food needs to be produced and distributed, more houses built, more
clothing manufactured. People need to be educated, healed, and entertained. Even
with high levels of affluence, needs and wants do not decrease. When it comes to
the consumption of goods and services, it is likely that high levels of consumption
do not diminish the desire for still higher levels of consumption; appetite may
grow with eating. Although it may not be a particularly noble trait, the desire to
acquire more and more is present in most human beings. As a result, progressively
higher levels of economic development may generate the need for continued high
(or perhaps higher) levels of production. As economists have noted, scenarios of
widespread unemployment induced by technological change are predicated on what
has been called the “lump of labor fallacy,” which postulates that there is only so
much work on hand for a given workforce. But as we have just seen, there is no
intrinsic reason that work cannot expand as more people enter the workforce or
lose the jobs that they had held. At the same time, however, it has to be recognized
that the realization of potential job opportunities will depend on having a labor
force with the requisite skills (or the ability to develop them) and a political and
economic system able and willing to pay for the work that needs to be done.
The seemingly endless expansion of needs and desires can be seen clearly in
the field of medicine. As we have seen in Chapter 7, despite stupendous achieve-
ments in medical technology, we find that the monetary and human resources being
devoted to health care have increased rather than diminished. Indeed, the rising
cost of health care, which now takes more than 17 percent of our gross national
income, has become a major social and political issue. New medical technologies
have generated new possibilities and new hopes. Instead of accepting sickness and
death as the inevitable working of fate or God’s will, we expect that cures will be
found and that medical personnel will be available to administer them. It should
therefore come as no surprise that employment data and projections show that the
demand for health-care workers has increased year after year and in all likelihood
will continue to do so in the future.
As this example indicates, technological advances will not lead to job losses
if the demand for products and services increases at the same pace as increases in
productivity. It is also important to bear in mind that the level of demand is affected
by the price of a particular product or service. If productivity increases bring down
the cost, more people can afford it, thereby increasing the size of the market for that
product. This, in turn, may motivate an employer to take on more workers so that
the increased demand can be met. Much of the history of mass production conforms
to this pattern, as exemplified by the automobile industry and the interrelated
increases in productivity, employment, production, and purchases that followed when
Henry Ford began to produce large numbers of Model T cars during the second
decade of the twentieth century.
In other cases, lower prices may not result in significantly higher levels of
demand, because people will buy only so much of a particular product irrespective of
its price. (In the jargon of economists, such a product is “price inelastic.”) Even so,
since consumers pay a lower price for this product, they have more money left over
to spend on other things. This increases the effective demand for other products
and enhances employment prospects in the industries that make them. To be sure,
many economists believe that there may be times when aggregate demand will be
insufficient because consumers prefer to save the extra money rather than spend
it on new purchases. According to many economic historians, this phenomenon
of “underconsumption” may in fact have been the underlying cause of the Great
Depression of the 1930s. Under such circumstances, governments may have to
stimulate demand by increasing their own expenditures, even if this means running
budgetary deficits.9
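Economists make the notion of price inelasticity precise as the ratio of the percentage change in quantity demanded to the percentage change in price; a product is price inelastic when the magnitude of that ratio is less than one:

$\varepsilon = \frac{\Delta Q / Q}{\Delta P / P}, \qquad |\varepsilon| < 1 \text{ (price inelastic).}$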
As a final point, it is also important to remember that technological change
often generates problems that require that a great deal of work be done. Animal
and plant habitats need to be restored, bodies of water require purification, and
victims of accidents and pollution need medical care. Many nuclear power plants
are nearing the end of their useful lives, and safely decommissioning them will
entail years of labor by thousands of workers. In general, the cost of environmental
cleanups is hardly trivial; as was noted in Chapter 6, the United States will have to
spend billions of dollars each year for many years just to deal with hazardous wastes
and old refuse dumps. The funds spent on environmental cleanup and protection
can be the basis of many jobs.

Although environmental damage is always unfortunate, it can provide a substantial number of
jobs. (Photo by Joe Raedle/Getty Images)
How Technology Creates Jobs
Fixing upon particular examples of jobs lost to technological advance can lead to
a kind of tunnel vision. While some technologies destroy existing human jobs,
others produce jobs that had not existed earlier. No one worked as a locomotive
engineer before the coming of the railroad, and there was no need for X-ray
technicians before the twentieth century. In fact, most of the occupations held today
did not even exist a hundred years ago. This trend can be expected to continue.
Occupations such as genetic engineer, global positioning system technician, website
designer, and social media strategist could hardly be imagined a generation ago, but
they will become increasingly significant as the technologies that gave rise to them
are developed and deployed.

Although technological change can result in the destruction of some jobs, it may also
stimulate the creation of new jobs through the development of new products. Every
year the Consumer Electronics Show in Las Vegas serves as a massive showcase for new
consumer goods. (AP Photo/Lennox McLendon)
While the historical record shows that new technologies can give rise to
whole new industries, it is often difficult to determine in advance the occupational
consequences of a new invention. When the transistor was invented, it was at first
thought that its primary application would be simple substitution: the replacement
of vacuum tubes in amplifiers.10 Because of its small size and low power requirements,
the transistor’s first practical application was to make hearing aids more compact
and efficient. Today, transistors and other solid-state devices are the foundation of
industries unknown a few decades ago—personal computers, smartphones, e-book
readers, and MP3 players, to name but a few.
General technological advance has created a host of new products and the jobs
necessary to make them. And in this regard, too, actual events have run well ahead
of many predictions. In 1963 one economist warned that American households were
saturated with domestic appliances, resulting in a stagnant market for manufactured
goods; the only significant new product was the electric can opener.11 Today,
if he is still alive, the man who made this prediction is likely to live in a house
equipped with a big-screen HD television, food processor, microwave oven, Blu-ray
player, personal computer, and wi-fi router. These products, all of them virtually
nonexistent as consumer items five decades ago, have generated many new jobs in
manufacturing, marketing, and servicing.
The Indirect Effects of New Technologies on Employment
In the past, a single new technology could be responsible for a large proportion of
the new jobs that came into existence. Today’s economy is much more complex, so
no one technology is likely to have the same impact. It is unlikely that any recent
technological innovation will equal the greatest job generator of the twentieth
century: the automobile. Although direct employment in the automotive industry
has steadily declined in recent years, the approximately 250 million cars and light
trucks on America’s roads provide employment for legions of mechanics, sales
personnel, and insurance agents, as well as many workers whose jobs are indirectly
tied to the automotive culture and are employed in everything from oil refineries
to fast-food restaurants.
None of the new industries that have emerged as a consequence of recent
technological change is likely to match this record. The jet airliner, as emblematic
of our present transportation system as the automobile, has not led to the direct
creation of many jobs; relatively few people are employed by America’s airlines and
passenger aircraft manufacturers. The computer, often hailed as the most significant
invention of our age, has not added a great number of manufacturing and service
jobs, and many of the former have been relocated to low-wage countries. The same
can be said of the television industry, and just about any of the industries that owe
their origin to recent technological changes.
At first glance it looks as though the pessimists have got it right: technological
development in established industries destroys jobs, and the occupations created
by new technologies do little to offset the losses. But these conclusions follow
only if one looks at the specific industries without considering indirect effects on
employment.
Consider the airline industry again. It is true that despite the industry’s
dramatic growth in recent decades, employment has gone up only slightly. But
at the same time, air travel has stimulated a number of other sectors. Travel and
tourism, much of it driven by the growth of the airline industry, now constitute
one of the largest sectors of the world economy, accounting for 9.2 percent of
global GDP, 4.8 percent of world exports, and 9.2 percent of global investment.12
The growth of jet-age travel and tourism has brought with it a great many job
opportunities in hotels, restaurants, travel agencies, tourist attractions, and the
like. Television also has had a considerable, if unmeasurable, impact on economic
growth by stimulating a demand for new products and services that are presented
on programs and commercials. In similar fashion, the spread of computers into our
economy has been essential to the development of new industries, videogames,
and social media being the most obvious. All in all, some jobs have been lost as
computers increased productivity in clerical, manufacturing, and other occupations,
but at the same time new jobs have been created as computers have stimulated the
growth of new or existing industries.
Since the effects of jet aircraft, television, computers, and other advanced
technologies are often indirect, their role in job creation is often obscured. It is
much easier to take note of those jobs that have been directly eliminated by an
emergent technology, for they are dramatic and often well publicized. The indirect
effects of technological change on job creation are less easily noted. To take
another example, it has been noted that the spread of e-mail and other computer-
based communication systems will result in diminished use of the postal system
and the loss of jobs for hundreds of thousands of postal workers. This is true to
some extent, but at the same time, the rise of e-commerce has increased the use
of the U.S. Postal Service and other delivery services for the shipment of parcels.
Another consequence of computerization has been a great expansion of junk mail.
Not everyone enjoys receiving piles of advertisements and solicitations, but they
certainly have kept letter carriers busy, while creating a fair number of new jobs for
other people.
The Machines Aren’t Ready to Take Over
When people visualize technological change in the workplace, they often conjure
up an image of legions of robots doing what people used to do. There is no denying
that robots have been making an impact on factory production. In 2008, 1.3 million
industrial robots were working in the world’s factories.13 Moreover, robots are only
the final step in a more general process of automating production. Computer-aided
design (CAD) systems make it possible to “draw” a design onscreen, change its
size and shape, call up stored shapes, and rotate the design to see how it looks
from different perspectives. The design can then be tested through the use of
computerized procedures. When the design work is completed, the new product can
be built by a computer-aided manufacturing (CAM) system, in which computerized
instructions run the robots and other machinery that fabricate and assemble the
parts that have been designed with the help of computers. Computerized systems
can also be used to organize the manufacturing process by scheduling production,
directing the movement of parts, and generally keeping everything under control.
Industrial computers are also the basis of flexible manufacturing systems that
alter the basic rules of mass production. Instead of using special-purpose machines
that do only one thing, it is possible to use general-purpose machines that can
be quickly reprogrammed to do a variety of things. For example, automobile
manufacturers can now rapidly reprogram assembly-line robots to do different
welding operations, depending on what is coming down the line, all of which
happens in response to a bar code affixed to a rail running below the vehicle.14
Flexibility of this sort is especially important for the manufacture of products that
are made in small quantities or undergo rapid changes in specifications.
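To make this concrete, the dispatch logic can be sketched in a few lines of Python. This is only an illustration: the model codes, weld sequences, and function names below are hypothetical inventions, not the interface of any actual robot controller.

# A schematic sketch of bar-code-driven program selection in a flexible
# manufacturing cell; every identifier here is an illustrative invention.
WELD_PROGRAMS = {
    "SEDAN-4D": ["floor_pan", "roof_seam", "door_frame_x4"],
    "COUPE-2D": ["floor_pan", "roof_seam", "door_frame_x2"],
    "PICKUP": ["floor_pan", "bed_rail_left", "bed_rail_right"],
}

def weld_operations_for(bar_code):
    """Return the stored weld sequence for the model named by the bar code."""
    if bar_code not in WELD_PROGRAMS:
        raise ValueError(f"no weld program stored for model {bar_code!r}")
    return WELD_PROGRAMS[bar_code]

def run_station(line):
    """Weld whatever mix of models comes down the line, in order."""
    for bar_code in line:
        for operation in weld_operations_for(bar_code):
            print(f"{bar_code}: welding {operation}")

# A mixed-model line: one general-purpose station handles all three models.
run_station(["SEDAN-4D", "PICKUP", "SEDAN-4D", "COUPE-2D"])

The point of the sketch is that accommodating a new model is a data change, an added entry in the table, rather than a retooling of special-purpose hardware; that is the essence of flexible manufacturing.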
At present, these technologies have appeared piecemeal in most industrial
settings; they are “islands of automation” on the shop floor. Over time, it is hoped,
they will become part of an integrated system that governs the total manufacturing
process from initial design to the testing of the completed product. When this
occurs, a great deal of production will take place with very little direct human
intervention.

Robot welders at work on an automotive assembly line. (AP Photo/Carlos Osorio)
As exciting as these developments are, the advance of industrial automation
will be a slow process. Automated manufacturing systems are very expensive, with
each workstation costing up to a million dollars. More important, many technical
problems remain unsolved. There are still many difficulties surrounding the
transport of materials from one station to another without human intervention.
Robots are particularly difficult to adapt to assembly operations, and a lot of hand
labor, much of it done in low-wage countries, is still used for assembly operations.
Robots and other elements of automated production technology have been
eliminating some jobs, but there are limits to how far this process can go. There
even have been some cases where robots have been replaced by human workers in
industrial settings in which people learn faster, are more flexible, take up less space,
and work cheaper and faster.15
Manufacturing systems that rely on automated production technologies also
operate on the thin edge of disaster. If a robot or other computer-controlled
process fails, production comes to a complete halt and everything stacks up
behind the obstacle until human workers sort out the problem.16 Even worse, a
small programming error can cause thousands of dollars of damage to machines
and material in a few seconds. The challenges of programming, controlling, and
maintaining computer-controlled industrial processes have put a high premium
on skilled workers, who remain in short supply while other manufacturing workers
remain on the unemployment rolls month after month.17
Technology, Jobs, and the Changing Structure of the Economy
Although automated industrial processes have been only partially realized, there is
no denying that they have made substantial inroads into manufacturing employment.
Robots, computerized controls, and CAD-CAM systems have reduced the need for
human labor in many industries, a process that will continue in the years to come. But
alarming as this prospect may appear, it is nothing new. Over the last century, many
of the jobs performed by manufacturing workers have been lost to mechanization, yet
total employment has undergone a massive increase. This has occurred because the
bulk of job creation, especially in recent decades, has been not in the production of
goods but in the supply of services. In 1950, 18.1 million workers were employed in
manufacturing, construction, and raw materials extraction, while 26 million worked
in the service sector. By 2012, with 88.9 million additional job holders, these figures
were 18.74 million and 114.3 million, respectively.18 In the 1950s, manufacturing,
construction, and raw materials extraction were pillars of the economy; they are still
important, but they have steadily declined as sources of employment. As simple arith-
metic shows, the number of these jobs has not dropped in absolute terms, but they
have fallen sharply relative to service occupations. In 2012, manufacturing, construc-
tion, and resource extraction had only about 640,000 more jobs than in 1950; for the service
sector the number was more than 88.3 million.19 Even these figures may underes-
timate the extent to which services have been numerically dominant in modern
economies. Many jobs in the manufacturing sector are actually service occupations.
Independent consulting engineers provide a service, and are put in this category for
statistical purposes, but if they perform similar duties as members of the engineering
staff of a manufacturing enterprise, they appear as manufacturing employees. The
number of workers performing service-type functions in the manufacturing sector is
large and growing, another indication of the changing nature of work.
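The underlying arithmetic, using the employment figures cited above (in millions of jobs):

$(18.74 - 18.1) + (114.3 - 26.0) = 0.64 + 88.3 \approx 88.9,$

so virtually all of the roughly 88.9 million jobs added between 1950 and 2012 came in the service sector.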
Many occupations fall into the service category, and not all of the jobs in this
sector of the economy are equally desirable. Shoeshine boys perform a service, just
as neurosurgeons do. What many of these varied services have in common, however,
is that they incorporate a personal element. This in fact is what makes them desired
by consumers who have a surfeit of manufactured goods. For many people, the most
sophisticated machine is no substitute for human contact. This desire will likely
retard the replacement of human service providers by even the most sophisticated
technologies. Computer-based expert systems can do an excellent job of diagnosing
certain illnesses,20 but they fall short in providing a significant element in recovery
from many illnesses—a patient’s confidence in the services provided by a skilled
medical practitioner.

Electronic trading has reduced traffic considerably at the New York Stock Exchange, as can be
seen in these photographs, one taken in 1962, the other in 2012. (Photo by Express Newspapers/
Getty Images; Xinhua/eyevine/Redux)
The desire for services shows no signs of abating, and this sector will generate
many future jobs. Still, relying on services to drive the economy and provide jobs
seems to violate common sense. The basis of human survival is food, clothing,
shelter, and the implements necessary to sustain life. How can the majority of
the work force be employed in occupations that directly contribute nothing to
production? In fact, much economic progress can be interpreted as an ongoing
reduction of the number of workers employed in producing the necessities of life.
Two hundred years ago, more than half of the population of Europe earned their
livelihood by growing food, just as is the case in many poor countries today. With
the rise of industrialization and the provision of such things as irrigation pumps,
mechanized farm implements, and chemical pesticides and fertilizers, the labor
requirements of agriculture steadily dropped, until today less than 5 percent of the
work force in most industrial countries is engaged in farming. The same thing could
happen in manufacturing in the years to come.
The displacement of agriculture by industry was closely tied to the rapid
progress of technology during the nineteenth and twentieth centuries. In the case
of the service sector, however, modern technologies are often conspicuous by their
absence. This is a major reason for the slow growth in productivity that has been
characteristic of the American economy in recent years. While manufacturing
employees have made much use of machines that embody increasingly sophisticated
technologies, many service occupations have been only lightly touched by them.
As a result, the desire for more services can be met only by employing more
workers.
According to the U.S. Bureau of Labor Statistics, between 2006 and 2016 more
than half a million production jobs, most of them in the manufacturing sector, will
have been lost, while another 29,000 jobs are projected to disappear in farming,
fishing, and forestry. All of the net employment gains will come in the service
sector. Some of these will be high-paying jobs that require a college degree or
higher, but many others will be poorly remunerated, relatively low-skill jobs: home
care aides, medical and dental assistants, janitors and cleaners, and short-haul truck
drivers.21
Perhaps the clearest example of the labor-absorbing quality of services can be
found in education. Despite efforts to use computers, audio-visual aids, and machine-
scored exams, teaching is conducted much as it was in the time of Socrates: through
lectures and discussions, both of which require some degree of personal contact.
Since much of the appeal of a service has to do with the human contact it provides,
the more “productive” a service job is, the less satisfying it may be for the consumer.
A teacher could greatly increase his or her productivity by lecturing to a class of
500 rather than 25, but most students would find so “productive” a class much less
enjoyable, and the lack of personal attention would probably result in lowered levels
of learning for many of them. It is also significant that one of the most successful
online educational programs, the Khan Academy, is most effective when students
have the opportunity to interact on a face-to-face basis with human teachers who
go over students’ homework, monitor their progress, and help them with any
difficulties they may have encountered.22 To be sure, a good many services have
experienced sizable productivity gains through the introduction of labor-saving
technologies. The classic example of this can be found in the telephone industry. If
telephone calls were still manually switched as they were in the 1920s, today’s level
of telephone traffic would require no fewer than 50 million operators. Although
the impact may not be as profound, it is likely that many of today’s services will
also experience technologically induced productivity gains and corresponding
drops in employment. E-mail, word processing, automated banking, and the like
have undoubtedly caused job losses, and will continue to do so. But does this
mean that unemployment will necessarily increase? Here, again, the example of
the telephone industry may be relevant. Some actual, and millions of potential,
jobs were lost due to the introduction of automatic telephone switching, but total
employment in the telephone industry continued to rise because increased levels
of demand stimulated the industry’s expansion. Equally important, the expansion
of telephone communications has acted as a powerful stimulant to the economy as
a whole, leading to the creation of many jobs that would not otherwise exist. The
job-creating qualities of a new technology are usually hard to predict in advance,
but they are there all the same.
Technology and the Distribution of Income
At this point it can be fairly argued that the creation of new jobs through tech-
nological change is not the whole story; also in question is the nature of the jobs
created, and in particular, how well they pay. In this regard, the record of recent
years has been mixed. On the one hand, technological change has resulted in the
emergence of many new, high-paying occupations and business opportunities. On
the other hand, technological change has been implicated in diminished economic
opportunities and an overall reduction in wages for significant portions of the labor
force.
One group that has seen a reduction in wages relative to the rest of the labor
force is composed of workers with only a high school education or less. Technological
change is surely a culprit here, as mechanization has eliminated many jobs that a
generation ago paid decent wages while requiring few skills. For the economy as a
whole, the loss of these jobs has not resulted in widespread unemployment, because
the loss has been offset by large employment increases in the service sector. But this
is not all there is to it, because compensation (wages and benefits) in service-sector
jobs is on average lower than in manufacturing jobs, although there is
wide variation within the former.23
By contributing to the drift from manufacturing to service-sector employment,
technological change has been linked to one of the most troubling economic and
social trends in the United States today, the growing wealth and income gap that
separates the top stratum of the population from everybody else. This situation has
intensified in recent decades; in 1975 the top 20 percent of American households
accounted for 40.7 percent of the income received in the United States. By 2009
their share had climbed to 50.3 percent.24 The shift from manufacturing to services
has not been the only reason for the widening income gap; many other social
and economic changes have contributed: changes in family structure, economic
globalization, reductions in government benefits for the poor, increased immigration,
stagnant minimum wages, and the erosion of union power, to name a few.
Although it is likely that technological change has exacerbated wage inequality,
a precise reckoning is still a matter of debate.25 Assessing technology’s contribution
to the skewing of the distribution of income is difficult because much of it
operates indirectly. In some cases the consequences of technological change are
straightforward; a single computer programmer designs an automated system that
results in job losses for a dozen reasonably well-paid assembly-line workers. But
many other effects of technological change are more subtle. The installation and
use of technologically sophisticated operations is not simply a matter of replacing
manual workers with robots and other computer-controlled devices and processes.
Realizing the potential of digital technologies often requires a fundamental
restructuring of a firm’s organizational structure and operations. As Erik Brynjolfsson
and Andrew McAfee summarize, “The most productive firms reinvented and
reorganized decision rights, incentive systems, information flows, hiring systems,
and other aspects of organizational capital in order to get the most from the
technology.”26 As a result, the high earners are not just the technically proficient
members of the labor force but also include well-educated men and women with a
broad range of aptitudes and skills.
The effects of technological change on jobs and remuneration are particularly
evident when we focus on specific segments of the labor force. To take one
obvious example, technological change can make life very difficult for older
workers, especially those with limited skills. The fact that technological change
may have increased economic opportunities overall is no comfort to a veteran
auto worker who was displaced from a well-paying job by a robot welder.
Although qualified workers will be needed to program and repair the robot, such
jobs are not likely to go to the former auto worker. They may go to his son or
daughter, but for a mature worker with obsolete skills, the future is bleak unless
training programs and some form of income supplement can help him through a
difficult period of transition.
The difficulties encountered by many older workers with obsolescent job skills
are not confined to them alone. The job losses suffered by older workers will be a
growing problem as the population of the United States collectively ages; whereas
there were 63 million men and women aged 55 or older in 2002, by 2025 there will
be a projected 103 million of them.27 The situation is not altogether dire; although
older workers are sometimes stereotyped as being difficult to re-train, several studies
have shown that this is not the case.28 The biggest impediment to equipping older
workers with new skills may not inhere in the workers themselves, but rather stems
from the erroneous belief that old dogs can’t learn new tricks.
Technology, Globalization, and Jobs
The loss of jobs as a result of technological change is an old story. A more recent
source of job losses has been economic globalization—in particular, the offshoring
of manufacturing jobs to low-wage countries. Much of this has been the work of
multinational corporations (MNCs). The availability of low-wage labor in the poor
countries of the world is often cited as the primary reason for moving jobs abroad,
but it is not the only one; MNCs may have operations in other countries because
they want to be close to important markets, insulate themselves from currency
fluctuations, reduce transportation costs, and avoid tariffs and other trade barriers.
Access to cheap labor is undoubtedly an important motivation for transplanting
jobs to a poor country, but the advantages of a low-paid workforce may be offset by
other costs. A successful offshore operation requires adequate communication and
transportation infrastructures, trainable workers, and at least minimally effective
and honest governments. If cheap labor were all that mattered, all of the world’s
manufacturing would be done in sub-Saharan Africa.
Although technological change and the offshoring of jobs are separate
phenomena, they are closely related. Technological change in the form of shipping
containers and intermodal transport systems has dramatically lowered the cost of
transporting manufactured items from low-wage countries to consumers abroad.29 A
typical case of intermodal transportation begins with a manufacturing plant loading
its products into shipping containers, which then travel by truck to a port where
the containers are loaded onto a ship that may take the products halfway around
the world. The containers are then put on railroad flat cars bound for a distribution
point, where they are offloaded onto trucks that take them to their final destination.
Through this process, the costs of loading, unloading, and reloading crates and
boxes are eliminated, and losses due to the effects of weather, damage, and pilferage
are greatly reduced.
While improved technologies have lowered the time and expense of the
transportation of physical goods, the rapid advance of digital technologies has made
it possible to transfer information at close to the speed of light—company memos,
contracts, technical diagrams, legal briefs, market reports, and anything else that can
be digitized. This has made it possible to offshore some activities requiring expert
knowledge, such as interpreting lab tests, as well as some requiring minimal skills,
such as processing bill payments and staffing call centers. As of now, however, job
losses due to moving white-collar work to low-wage countries have been less evident
than the loss of manufacturing jobs to these countries. Many information-related
occupations will likely remain in the industrially developed world because they require
a fair amount of face-to-face interaction and relationships based on trust.30 And even
when some elements of the job can be reduced to digitized communications, as has
been noted above, it will still be necessary to employ creative, well-educated workers
to initiate, guide, oversee, and evaluate the kinds of changes that are essential to the
realization of the potential advantages of digital technologies.
In sum, economic globalization has brought benefits by lowering the price of
goods, facilitating the diffusion of modern technologies, and in general opening
up the world to new ideas and ways of doing things. But it has had its unfortunate
consequences as well. Throughout the developed world, empty steel mills,
abandoned factories, and depopulated communities bear witness to industries
and jobs lost to foreign competition. As with technological change, with which
it is closely connected, economic globalization has created winners and losers.
Manufacturing workers have been particularly hard hit as their jobs have been
outsourced to countries with cheaper labor or other inducements. As we have seen,
there is no inherent reason that technological advance should lead to long-term
unemployment, and the same can be said of economic globalization. Both create
new opportunities, but adjustments will not come automatically. As will be noted
in the next section, matching job skills to these opportunities will require the
implementation of appropriate policies in the private and public sectors.
Rebounding from Job Losses
A general optimism regarding the consequences of technological change for
employment should not obscure the fact that adjustments will not be smooth and
automatic. There will always be a need for purposive action if technological change is
not to result in a great number of individual tragedies alongside a general economic
advance.
In fact, there have been few effective programs to mitigate the inevitable
destruction of some jobs by technological advance. When the necessary adjustments
have been made, they have been done on an ad hoc basis. A few unionized workers
have been able to nullify the job-destroying effects of technological advance by
striking advantageous bargains with their employers. For example, locomotive
firemen were left with little to do when the diesel-electric replaced the steam
locomotive, but they were able to negotiate contracts that preserved their jobs
for many years. Make-work schemes, however, do not address the real problem of
technologically induced job loss. Most people seem to have a real need to work, and
to do work that has some intrinsic meaning. At the same time, as has been noted,
technological change has not eliminated the need for work. The problem, then, lies
in putting the two together by preparing workers whose jobs have been eliminated
by technological change to take on jobs for which a real need exists. In an era of
rapid technological change there is a fundamental need for retraining programs that
help displaced workers learn new skills and prepare them for new jobs.
Unfortunately, current training programs in the United States have not been
up to the task. Although federal, state, and local governments have a number of job
training programs in place, they are still restricted in scope and scale. In part this
is due to the chronic underfunding of such programs, resulting in their inability to
meet demand for them. But even with adequate funding, formal training programs
can do only so much. The most effective training programs focus on skills used in
a worker’s current job. In general, however, the private sector has failed to take a
leading role in improving workers’ skills because employers fear that their workers
will use their newly acquired skills to obtain better-paying positions elsewhere.
The reluctance of employers to pay for retraining also results from tax policies.
Some states offer a tax credit in return for incurring some training costs, and the
federal government awards tax credits for the training of targeted employees such
as veterans, but tax credits and deductions are more generous for investments in
physical capital than they are for human capital. At the same time, many workers
fail to take advantage of the retraining programs that do exist because these
programs do not take into account workers’ skills and competencies. Many of the
jobs eliminated by technological advance have been held by unskilled workers with
limited education. A basic remedial program may thus be an essential component
of a successful retraining program. And, sad to say, many young people are entering
the labor force with serious deficiencies in essential reading and mathematics
skills. Workers operating at these low levels are especially likely to be victimized
by technological change, and it will be a major challenge to help them develop the
skills that allow them to adjust to a technologically dynamic economy.
Benefits, but Disruption Too
All in all, the fear that technological advance will lead to widespread
unemployment seems unwarranted. Technological change does eliminate specific jobs, but it
does not eliminate work itself. While some occupations are rendered redundant by
new products and processes, others are being created. To be sure, this is cold comfort
to mature workers who have been replaced by new machines. They may find work
in America’s expanding service sector or even in businesses that have been created
by recent technological advances, but in all likelihood they will suffer a cut in pay.
Even those who land good jobs may do so at the cost of considerable dislocation.
Technological change often results in the rise of some geographical regions and the
decline of others. Eighty years ago the rise of the automobile industry stimulated
the rapid expansion of the Detroit area. In recent years the development of the
semiconductor industry has produced substantial growth in northern California’s
Silicon Valley. Taking advantage of new job opportunities may require pulling up
stakes and moving to an unfamiliar part of the country, leaving friends, family, and
community behind.
There is no getting around the fact that technological change can be a painful,
disruptive process. Occupations rise and fall, and with them the fortunes of many
individual people. There are spectacular winners and pathetic losers. Special efforts
must be made if the latter are not to be ruined by the advance of technology.
Technological change may destroy a career in which one has invested his or her
whole life. It would be irresponsible and cruel to turn a blind eye to the needs of
workers displaced by technological advance.
At the same time, it would be foolish to attempt to limit the development and
spread of new technologies for fear of massive job losses. Far from being the product
of technological advance, unemployment is much more likely to occur during
periods of technological stagnation. One British labor historian has estimated that
during the late seventeenth century, half the population of England were paupers
because there was not enough work for them to do.31 This situation began to
change only when a series of technological changes helped to produce the Industrial
Revolution. After two centuries of technological advance significant numbers of
people are still dogged by poverty and unemployment, but there is no denying
that things are much better than they once were. Certainly our situation stands in
stark contrast to the more technologically backward parts of the world, where vast
unemployment and the poverty that accompanies it are the order of the day.
This, however, is not the end of the matter. Although technological advance
has expanded employment opportunities for millions of people, it still can be argued
that it has led to the degradation of the work they do. People may have jobs, but
they are dreary, alienating, and unrewarding. In the next chapter we will try to
determine if this is really the case.
Questions for Discussion
1. List a few jobs that have been rendered obsolete by technological change. Are
some of these jobs so unpleasant that they are better done by machines? On the
whole, have more “good” jobs or “bad” jobs been eliminated by technological
advance?
2. What new products and services have been created by technological advance
during the past 20 years or so? To what extent are they “essential” parts of
contemporary life?
3. Is there anything problematic about an economy that centers on the production
of services instead of the production of goods? Is a country with such an
economy in an inherently dangerous position when it confronts international
economic and political competition?
4. In what ways is economic globalization similar to technological change in
regard to job losses? Does it make sense to try to block either technological
change or economic globalization in order to preserve jobs?
5. Should federal and local governments come to the aid of workers who have
lost their jobs because of globalization and technological change? Is this a
legitimate role of government? If so, what sort of policies and programs might
be formulated to meet workers' needs?
Notes
1. John H. White, Jr., “Oh, To Be a Locomotive Engineer,” Railroad History, no. 190
(Spring/Summer 2004), p. 74.
2. Allan Jalon, "Synthesizers: Sour Sound to Musicians," Los Angeles Times
(6 December 1985): 1.
3. Ian Reinecke, Electronic Illusions: A Skeptic’s View of Our High-Tech Future
(Harmondsworth, England: Penguin, 1984), p. 49.
4. Peter G. Gosselin, “What’s Old Is New Again in the U.S. Economy,” Los Angeles Times,
May 25, 2008, p. A29.
5. Malcolm I. Thomis, The Town Labourer and the Industrial Revolution (New York: Barnes
and Noble, 1974), pp. 88–105.
6. Malcolm I. Thomis, The Luddites: Machine-Breaking in Regency England (New York:
Schocken, 1972), pp. 14–15.
7. Witold Rybczynski, Taming the Tiger: The Struggle to Control Technology (New York:
Viking/Penguin, 1985), p. 103.
8. T. A. Heppenheimer, “The Machine That Killed King Cotton,” American Heritage of
Invention and Technology 20, 1 (Summer 2004): 39.
9. For a review of how economists have viewed the relationship between employment
and technological change, see Gregory R. Woirol, The Technological Unemployment and
Structural Unemployment Debates (Westport, CT: Greenwood Press, 1996).
10. Ernest Braun and Stuart McDonald, Revolution in Miniature: The History and Impact of
Semiconductor Electronics (Cambridge: Cambridge University Press, 1978), p. 57.
11. James Fallows, “America’s Changing Economic Landscape,” The Atlantic 255, 3 (March
1985): 54.
12. Robert Greenhill, "Preface," in Jennifer Blanke and Thea Chiesa (Eds.), The Travel
and Tourism Competitiveness Report 2011, Geneva, Switzerland, World Economic
Forum (2011), accessed on March 16, 2012, at http://www3.weforum.org/docs/WEF_TravelTourismCompetitiveness_Report_2011 .
13. Erico Guizzo, "World's Robot Population Reaches 8.6 Million," IEEE Spectrum (April
14, 2010), accessed on March 16, 2012, at http://spectrum.ieee.org/automaton/robotics
/industrial-robots/041410-world-robot-population. The remaining 7.3 million robots
were classified as service robots.
14. Micheline Maynard, “Yes, Assembly Lines Can Mix Apples and Oranges,” New York
Times (17 August 2004): 5.
15. Emily Thornton, “Japan Lays Off Its Robots,” World Press Review 43, 7 (July 1996):
31–32.
16. Harley Shaiken, "The Automated Factory: The View from the Shop Floor," Technology
Review 88, 1 (January 1985): 18.
17. Peter Whoriskey, “U.S. Manufacturing Sees a Shortage of Skilled Factory Workers,”
Washington Post (February 19, 2012), accessed on March 20, 2012, at http://www.
washingtonpost.com/business/economy/us-manufacturing-sees-shortage-of-skilled-
factory-workers/2012/02/17/gIQAo0MLOR_story.html.
18. The figures for 1950 can be found in United States Census Bureau,
Statistical Abstract of the United States 1951, Table 207 “Employees in Nonagricultural
Establishments by Industry Division, 1919-1950” accessed on September 25, 2012 at
http://www2.census.gov/prod2/statcomp/documents/1951-03 . The figures for 2012
appear in United States Bureau of Labor Statistics, Table B-1 “Employees on Nonfarm
Payrolls by Industry Sector and Selected Industry” (7 Sept. 2012) accessed on Sept. 25,
2012 at http://www.bls.gov/news.release/empsit.t17.htm.
19. Calculated from Ibid.
20. “Software as a Career Threat,” Forbes 155, 11 (22 May 1995).
21. Arlene Dohm and Lynn Shniper, "Occupational Employment Projections to 2016,"
Monthly Labor Review (November 2007), accessed on August 21, 2012, at http://www.bls.gov/opub/mlr/2007/11/art5full
22. Clive Thompson, “How Khan Academy Is Changing the Rules of Education,” Wired
(August, 2011).
23. David L. Kay and James E. Pratt, “Jobs, Good Paying Jobs, and Services,” Research
and Policy Brief Series, Cornell University (May 2009), accessed on March 20,
2012, at http://devsoc.cals.cornell.edu/cals/devsoc/outreach/cardi/publications/loader
.cfm?csModule=security/getfile&PageID=574410.
24. U.S. Census Bureau, Statistical Abstract of the United States: 2012, Table 694 “Share
of Aggregate Income Received by Each Fifth and Top 5 Percent of Households:
1970–2009,” accessed on March 13, 2012, at http://www.census.gov/compendia
/statab/2012/tables/12s0694 .
25. For contrasting views, see Clive Crook, “A Survey of Globalization,” The Economist
(September 29, 2001): 9, and Lawrence Mishel, Jared Bernstein, and Heidi Shierholz,
The State of Working America 2008/2009 (Ithaca: Cornell University Press, 2009):
214–220.
26. Erik Brynjolfsson and Andrew McAfee, Race Against the Machine: How the Digital
Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming
Employment and the Economy (Lexington, MA: Digital Frontier Press, 2011): 41–42.
27. Sara J. Czaja and Joseph Sharit, Aging and Work: Issues and Implications (Baltimore:
Johns Hopkins University Press, 2009): 259.
28. T. F. Rizzuto and S. Mohammad, Workplace Technology and the Myth About Older
Workers, paper presented at the Annual Conference of the Society for Industrial
and Organizational Psychology (2005), cited in Rod P. Githens, "Older Adults in
E-Learning: Opportunities and Barriers," http://rodgithens.com/papers/older_adults_elearning_2007 (accessed August 26, 2010); Leora Friedberg, "The Impact
of Technological Change on Older Workers: Evidence from Data on Computers,”
Department of Economics, University of California, San Diego, 1999, http://www.escholarship.org/uc/item/1s97n77x#page-2 (accessed August 26, 2010).
29. For a discussion of intermodal transport and its implications for labor, see Edna
Bonacich and Jake B. Wilson, Getting the Goods: Ports, Labor, and the Logistics Revolution
(Ithaca, NY: Cornell University Press, 2008).
30. Frank Levy and Richard J. Murnane, The New Division of Labor: How Computers Are
Creating the Next Job Market (Princeton and Oxford: Princeton University Press, 2004),
pp. 13–30.
31. Thomis, Town Labourer, p. 147.

Chapter Eleven

Technological Change and Life on the Job
The word “manufacture” literally means the production of something by hand, and
for most of human history, it was just that. Goods were produced in small numbers
by individual craftsmen who relied on little more than their own skills and a few
tools. This situation began to change markedly in the middle of the eighteenth
century as industrialization began to transform the way that things were made. To be
sure, this was a revolution that was a long time in the making; people did not pick
up their newspapers one day in 1750 and read “Industrial Revolution Breaks Out
in England!” Many of the changes that produced an industrial society were small
and slow to develop. Others were more dramatic, but even these required numerous
incremental changes before they were effective. But taken together, these changes
utterly transformed the way things were made, and with them the way people worked.
This process continues today, with technological advance constantly changing the
nature of work. In this chapter we will consider the historical relationships between
work and industrialization, as well as where they seem to be headed today.
Industrial Production
An economy based on industrial production has a number of special characteristics.
In the first place, it requires large amounts of energy. Before industrialization, almost
all of the energy used came from organic sources: human and animal muscle power
and the burning of wood and other plant products. Civilization rested on the physical
labor of peasant farmers and artisans (and in many places, slaves), whose efforts
produced the surplus that supported the activities of small numbers of artists, priests,
and government officials. The use of new sources of energy allowed vast increases in
production while at the same time lightening some of the physical burdens of work.
The first new energy source for the emerging industrial economy was water
power. Water wheels date back to antiquity, and were widely employed during
the Middle Ages; over 5,000 of them could be found in England according to the
eleventh-century Domesday Book. But these were simple devices that produced
little power. With the mechanization of key industries such as spinning, weaving,
and flour milling, larger, more elaborate wheels began to be used in great numbers.
To an increasing degree they were built according to systematic, empirically derived
principles, making them significantly more efficient.1
Water wheels continued to be important industrial power sources well into the
nineteenth century, at which time they slowly began to be supplanted by steam
engines. Originally built to pump water out of mines, during the late eighteenth
century the steam engine was brought to a higher level of efficiency by James
Watt’s invention of the separate condenser. Of equal importance, Watt devised new
linkages and gearing systems so that the up-and-down motion of a piston could be
translated into rotary motion, which was essential for the powering of many types
of industrial machinery.
During the Industrial Revolution steam engines and water wheels animated a
great number of new machines. Among the most important of these were machine
tools that could produce large numbers of identical parts. The possibilities of this
system were first indicated in the early nineteenth century when the Franco-
British engineer Marc Brunel teamed up with Henry Maudsley, a pioneer designer
and builder of machine tools, to produce large numbers of pulley blocks for the
British Navy. After they installed 44 machines at the Portsmouth (England)
Navy Yard, their staff of 10 workers was able to produce 160,000 blocks a year,
a larger output than 110 men had previously achieved using traditional
techniques.2
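
As a rough check on what these numbers imply (a back-of-the-envelope calculation, not a figure from the text), mechanization raised output per worker from at most about 1,455 blocks a year (160,000 divided among 110 men) to 16,000 (160,000 divided among 10), a gain of at least a factor of eleven:

\[
\frac{160{,}000 / 10}{160{,}000 / 110} = \frac{110}{10} = 11
\]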
The installation of power-driven machinery was complemented by the
establishment of a new setting for productive work: the factory. Instead of the
craftsman's shop with its half-dozen or so workers, the dominant manufacturing
enterprise was the large factory in which hundreds and even thousands of people
performed the tasks that large-scale production required. As industrialization
took hold, the size of manufacturing establishments grew rapidly. In 1870 one
of the largest industrial enterprises in the United States, the McCormick reaper
plant in Chicago, employed no more than 500 workers. Thirty years later, more
than a thousand factories had between 500 and 1,000 workers, and over 400 had
more than 1,000.3 Three steel plants and a locomotive works each had more than
8,000 workers.
With the spread of the factory system, most manufacturing workers no longer
worked as self-employed artisans. They now worked for a business firm, and were
dependent on it for the tools they used, the facilities they worked in, and of course
the money they took home. Independent workers ceased to be the foundation of
the economy. The typical worker was now a hired member of a firm, a wage-earner
rather than an independent producer. Industrialization had created a society of
employees.
The supervision and coordination of large numbers of employees, in turn,
required entirely new methods of organization. Not only were far more people
working in a single enterprise, they were also socially and psychologically separate
from their employers and detached from their other social roles and responsibilities.
As a result, the face-to-face contact and personal attachments found in the
craftsmen’s shop gave way to bureaucratized managerial methods. Hierarchical
command structures, written rules, strict job definitions, the precise scheduling of
work, and rigid procedures became typical ways of organizing production.
Machine-Paced Labor
The spread of industrial technologies helped to make work a more regular and
precisely scheduled process. New power technologies were of particular importance
in making work follow strict temporal rhythms. Industrial processes that made use of
steam engines and water wheels were obviously unsuited to irregularly paced methods
of working. Since they were expensive pieces of capital equipment, these power
sources and the machines they ran had to be operated constantly if they were to be
used efficiently; sitting idle they produced no income. This meant that not only were
the times of work more precisely scheduled, but work had to be performed at all hours.
Shift work and labor at unsociable hours made their appearance, a situation that stood
in marked contrast to the life of the craftsman working under guild regulations that
usually prohibited working at night or under any form of artificial illumination.
Many of the new industries that emerged in the nineteenth century gave
further impetus to precisely scheduled, clock-regulated work patterns. This was
especially evident in the operation of one of the era’s key industries, the railroad.
The size of this new enterprise, the multiplicity of tasks required, and above all
the need for adhering to precise schedules made the railroad a key example of the
temporal regularization of work. The emphasis on scheduling and the efficient
use of time also had an effect on the larger economy and society. As Henry David
Thoreau said of the railroads of his era:4

They come and go with such regularity and precision, and their whistles can be
heard so far, that farmers set their clocks by them, and thus one well-regulated
institution regulates a whole country. Have not men improved somewhat in
punctuality since the railroad was invented? Do they not talk and think faster in
the depot than they did in the stage office?

A locomotive engineer and a conductor make sure that their watches are in agreement. (Pacific County Historical Society)
During the railroad’s earliest years, railwaymen were required to carry “good
watches and correct them daily."5 From that time onward, the conductor's pocket
watch, accurate to a few seconds a day, became emblematic of his occupation. On a
larger scale, the demands of railroads led to the establishment in 1883 of Standard
Time and the division of the United States into four time zones. No longer would
each community observe a unique time that was based on a noontime determined by
the highest point of the sun’s passage. A new technological system had produced the
artificial method of marking time that we take so much for granted today.
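
A back-of-the-envelope calculation (not in the text) shows why locally observed sun time was unworkable for railroads. The Earth turns 360 degrees in 24 hours, so local solar noon shifts by four minutes of clock time for every degree of longitude:

\[
\frac{24 \times 60 \ \text{minutes}}{360^{\circ}} = 4 \ \text{minutes per degree}
\]

Two towns five degrees of longitude apart therefore kept clocks roughly twenty minutes apart, a trivial discrepancy for stagecoach travel but an intolerable one for scheduling trains that shared a single track.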
The railroad was not the only industry that required high standards of temporal
regularity in the workplace. In factories of every sort, workers had to accommodate
themselves to the demands of the machinery, and in many cases the machines were
harsh taskmasters. And not only was the pace more frenetic and unvarying, the work
itself was often highly unsatisfying. For many of the new industrial occupations, the
required work often called for only the most rudimentary skills. Many machines were
designed so that the skill required inhered in the machine and not in the worker
who tended it. Although the traditional labors of artisans and farmers also had their
periods of drudgery and monotony, the factories that sprang up in the nineteenth and
twentieth centuries seemed to usher in a new era of deadening labor—repetitious,
narrowly specialized, radically simplified, and paced by the machine.6
The employment of large numbers of workers, each performing specialized tasks
that had to be closely scheduled and coordinated in accordance with the needs
of machinery, necessarily resulted in the loss of worker autonomy. Even Friedrich
Engels, Karl Marx’s collaborator and a strong advocate of the rights of working
people, was convinced that industrial technologies resulted in a regime that was
“more despotic than the small capitalist who employs workers ever has been.”
Engels stressed that industrial technology required strict managerial authority:7
If man, by dint of his knowledge and inventive genius has subdued the forces of
nature, the latter avenge themselves upon him by subjecting him, in so far as he
employs them, to a veritable despotism independent of all social organization.
Wanting to abolish authority in large-scale industry is tantamount to wanting to
abolish industry itself, to destroy the power loom in order to return to the spinning
wheel.
Industrial technology thus stands indicted as the destroyer of long-established
ways of working that had allowed workers to enjoy at least some measure of
independence. Machinery and the accompanying factory system increased production, but
only at the cost of a cold, regimented on-the-job existence. Many would consider
it a poor bargain. But the story is not yet complete; before we blame the advance of
technology for the degradation of work, we must consider how these technologies
came into being and whose interests they served.
Is Technology to Blame?
Industrialization resulted in rapid and sustained economic growth and a massive
increase in consumer goods. But at the same time, for many people it meant a
thoroughly unpleasant work environment. Were industrial technologies in
themselves the cause of the long working hours, monotonous routines, and the general
degradation of labor often found in industrial economies? Here we again meet a
fundamental question about the nature of technology: Does technology determine
a basic human process such as work, or is technology itself shaped by economic and
social relationships?
The history of industrial technologies seems to indicate that similar technologies
can support a variety of working arrangements. The use of steam power did not
always require massive factories under centralized direction. In the Coventry ribbon
industry, weaving was done on power looms situated in blocks of independent
workshops that were arranged so that they could make use of a single steam engine.8
A rapid working tempo makes it possible for this poultry-processing plant to ship 200 tons
of dressed chickens every day. (KRT/Newscom)
By the mid-nineteenth century, these cottage factories contained over 1,000 power
looms, as many as could be found in the conventional factories throughout the city.9
The Coventry ribbon-weaving industry is not an isolated example. In Sheffield
and Birmingham, cities at the very heart of England’s Industrial Revolution, local
artisans could rent workspaces supplied with transmission belts connected to a
centralized steam engine; “Power to Let” was a sign commonly seen in Birmingham
during this period.10 Fundamental mechanical innovations of the early nineteenth
century, such as the Jacquard loom (which introduced the use of punch cards for the
guidance of an industrial operation), were used not in large factories but in small
workshops that were often part of the workman’s home.
By themselves, these examples do not prove much. As we have seen, a
fundamental feature of technological advance is that innovations begin to be used
on a small scale and are steadily expanded as experience is gained with them.
Still, the late-nineteenth- and early-twentieth-century deployment of machines in
factories of ever-increasing size was not simply the result of increasing technological
sophistication; putting workers into large factories may also have been motivated by
a need to control and discipline them.
Employers had long contended with strong-willed, independent workers
under a variety of working arrangements. Before the Industrial Revolution one
such arrangement was the domestic (or “putting out”) system of manufacture. An
employer supplied workers with tools and raw materials that were to be used in the
workers’ own homes. The employer then picked up the finished product several
weeks later. This system gave the workers a good deal of autonomy and flexibility—
too much, as far as many employers were concerned. The lack of direct supervision
often resulted in shoddy products and a slow pace of production. Even worse, some
workers bilked their employers by selling the raw materials, using an inferior
substitute, and pocketing the difference.
Given these managerial problems, it is easy to see the appeal that the factory
had for many employers. Within the walls of the factory, the workers were required
to put in regular hours while the pace of their work was controlled by foremen
who closely supervised their activities. Although perhaps overstated, one scholar’s
summation captures an essential reason for the development of the factory:11
It was purely for purposes of discipline, so that workers could be effectively
controlled under the supervision of foremen. Under one roof, or within a narrow
compass, they could be started to work at sunrise and kept going until sunset,
barring periods for rest and refreshment. They could be kept working six days a
week. And under the penalty of loss of all employment they could be kept going
almost throughout the year.
The development of the factory made close managerial control possible. Even
so, the closest management by itself cannot ensure regular work efforts. Unless
one supervisor can be assigned to each worker, the latter will always find ways to
escape scrutiny and evade work if so inclined. In extreme cases, disgruntled workers
can even resort to sabotage, a word that derives from the throwing of wooden
shoes—sabots—into the machinery. In a setting where workers are employees
and have no direct stake in the success of their enterprise, such inclinations are
likely to be widespread. In working environments like these, machinery takes
on an importance that goes beyond the improvement of productive processes.
Machine-based technologies can ensure that work is steadily performed and that it
is performed in accordance with the requirements of the management. As one critic
of management-inspired industrial technologies put it:12
Machinery offers to management the opportunity to do by wholly mechanical
means that which it had previously attempted to do by organizational and
disciplinary means. The fact that many machines may be paced and controlled
according to centralized decisions, and that these controls may thus be in the
hands of management, removed from the site of production to the office—these
technical possibilities are of just as great interest to management as the fact that
the machine multiplies the productivity of labor.
Machines can be complex and hard to manage, but people are even more
difficult. Many employers eagerly sought machine technologies as a way of
counteracting the vagaries of human motivation and performance. Machines
provided a model of reliable performance, as well as a way of coaxing it out of their
workers. The factory system that began to emerge in the late eighteenth century was
not exclusively the consequence of the technologies employed; at least as important
as the operational requirements of machine-based technologies were the desires and
intentions of management.
Industrial Technology and the Division of Labor
A major characteristic of an industrial society is an extensive division of labor.
Instead of a single craftsman performing all of the tasks involved in making a
product, production is broken into small segments, each one performed by a
different worker. This reduces production costs, as it is not necessary to pay top
wages to a highly skilled worker to do everything; some of the work can be done by
a person with lesser skills at lower wages.
Production based on division of labor requires managerial control, but equally
important, it makes such control easier to exert. As long as tasks are kept simple
so that workers can be readily hired and fired, there will be little need to depend
on workers with all-around skills. This can benefit management because the special
abilities of skilled workers generate an independent spirit and a fair amount of
economic leverage, as John DeLorean's description of the behavior of skilled
toolmakers in the automobile industry illustrates:13
These fine old tool makers worked hard and were very proud of their craft. They
kept their big tool boxes right underneath their work benches. If you looked at
them the wrong way or dealt with them in any manner other than a man-to-man,
professional fashion, they would simply reach under their work benches without
saying a word, throw their tools into the big box, lock it up and leave. That was
it. Each guy figured, “What the hell. I don’t have to take this from anybody. I’m a
pro. I know this business and I am not depending on you or anyone else.”
The ultimate marriage of machine technology and the division of labor is
the assembly line. Instead of using a variety of skills and working at their own
pace, workers perform specialized and repetitious tasks that are dictated by the
tempo of the moving line.

Machinery was not always a prerequisite for factory production. Note the similarity in the general layout of two clothing factories, one operating before the invention of the sewing machine, the other afterward. (Everett Collection/Superstock; The Granger Collection, New York)

The development of this system of production is
attributed to Henry Ford, and rightfully so.14 After taking his inspiration from the
meat-packing industry (which, of course, used a “disassembly” line), Ford and his
associates realized productive gains that had scarcely been imagined in the past.
Before Ford’s introduction of the moving line in 1913, automobiles were produced
in small batches by workers with a multiplicity of skills. The work was undoubtedly
more satisfying, but the product was high in price and necessarily limited to the rich.
The assembly line allowed dramatic price cuts for two reasons: it drastically reduced
the time needed for manufacture, and it allowed the substitution of low-wage,
unskilled labor for the high-priced labor of all-around craftsmen. Ford’s description
of work in his factory captures the essence of this new form of manufacture:15
One man is now able to do somewhat more than four did only a comparatively few
years ago. That line established the efficiency of the method and we now use it
everywhere. The assembling of the motor, formerly done by one man, is now divided
into eighty-four operations—those men do the work that three times their number
formerly did. . . . In the chassis assembling [room] are forty-five separate operations or
stations. The first men fasten four mudguard brackets to the chassis frame; the motor
arrives on the tenth operation and so on in detail. Some men do only one or two
small operations, others do more. The man who places a part does not fasten it—the
part may not be fully in place until after several operations later. The man who puts
in a bolt does not put on the nut; the man who puts on the nut does not tighten it.
Generator assembly at a Ford plant in the 1930s. (© Bettmann/CORBIS)
The extreme division of labor found on the assembly line was paralleled by an
unrelenting tempo of work and the absence of any control over it by the workers.
Until recent efforts at “job enlargement” began to change the nature of automobile
production, manufacturing methods followed the Ford pattern closely. With the typical
automobile assembly line churning out 50 to 60 vehicles per hour, each worker had but
a few seconds to complete an operation such as mounting a shock absorber or installing
a windshield. When problems arose, the worker had to make the best of a bad situation,
even if it meant installing something improperly. Under these circumstances work can
be little more than a rather unpleasant way of earning a living. Extracting any sense of
personal fulfillment from the work performed is just about impossible.16
Scientific Management Once Again
In addition to controlling the pace of work through the use of the assembly line,
managers also attempted to regulate work by using elaborate systems that removed
all vestiges of worker control over work processes. The most significant of these
was Frederick Taylor’s Scientific Management, which was briefly discussed in
Chapter 2. The important thing to reiterate here is that Scientific Management
stipulated the precise scheduling and organizing of work activities, and that these
procedures were never to be left to the workers’ discretion. As Taylor explained
his system, “Perhaps the most prominent single element in modern scientific
management is the task idea. The work of every workman is fully planned out
by the management at least one day in advance, and each man receives in most
cases complete written instructions, describing in detail the task which he is to
accomplish, as well as the means to be used in doing the work. . . . This task
specifies not only what is to be done, but how it is to be done and the exact time
allowed for doing it.”17
The development of these instructions was the work of a small group of
technical specialists who were not directly involved with the work being performed,
for, as Taylor stipulated, “All possible brain work should be removed from the shop
and centered in the planning or laying-out department.”18 Efficient work required
unfettered control by specially trained managers, who were to be the repository of
all knowledge regarding the work being done: “The managers assume . . . the burden
of gathering together all of the traditional knowledge which in the past has been
possessed by the workmen and then of classifying, tabulating, and reducing this
knowledge to rules, laws, and formulae.”19
Many would see Taylorism as a perversion of scientific means of inquiry and a
travesty of engineering practice. But repellent as it may seem to us today, the spirit
of Scientific Management still lives in technologically advanced societies. The
explosion of knowledge that is the driving force of our sophisticated economy can
fragment the population into groups of specialized experts. This may be a natural
outcome of the growth of knowledge, but it takes on a pernicious quality when
specialized knowledge is the exclusive possession of one group of people who attempt
to use it as a means of dominating others. Under these circumstances science and
technology do not simply supply detached knowledge that is used for universal
human betterment. They are used to strengthen the position of a dominant group by
removing the control over work processes from the hands of those doing the actual
work.
Industrial Work and Recent Technological Developments
Many social critics have decried the human consequences of technologies and
managerial strategies that take the skill out of human labor. Others have taken a
different tack, arguing that an industrial system that “de-skills” workers is doomed to
failure. A great deal of contemporary industrial work, according to this perspective,
requires substantial amounts of independence if it is to be done effectively. A
modern factory and its constituent departments cannot be expected to hum along
indefinitely like a well-oiled machine, nor can all of the jobs be precisely defined
and completely subject to control from above. Breakdowns, disruptions, and normal
deterioration are inevitable, and they all require prompt attention and often a fair
amount of innovative ability on the part of the workers.20
Even routine tasks often require knowledge of special quirks in the machines and
work processes. This knowledge can be gained only through intimate acquaintance
with the work itself; engineers cannot plan everything in advance.21 There are many
occasions when shop-floor workers understand things better than their bosses, and
their involvement in production processes may be essential to a firm’s success. This
point was strongly articulated by a General Electric executive when he noted, “All
of the good ideas—all of them—come from hourly workers."22 Workers who have
been reduced to mindless operatives may not even be able to keep the machinery
going when the inevitable problems emerge. Accordingly, effective managers realize
that the destruction of worker skills through mechanization and rigid centralized
controls is self-defeating, for an ignorant and hostile work force means production
losses as well as high costs for maintenance and direct supervision.23
A precisely programmed production process that makes use of specialized
machinery while sharply separating the tasks of those who plan and those who
work is possible only when the product is a standardized one. This system generated
a great outpouring of mass-produced goods, but it may not be appropriate to
today’s economy and society. With improvements in the overall standard of living,
consumers are less willing to buy exactly the same things that their neighbors do.
Instead, they have an appetite for goods and services with distinctive qualities. These
cannot be produced by traditional manufacturing processes. As former Secretary
of Labor Robert Reich has argued, rigid production technologies were appropriate
only when business enterprises were “engaged in high-volume, standardized
production, entailing a large number of repetitive tasks with clearly defined goals,
in which almost all contingencies could be identified and covered.”24 In contrast,
the production of nonstandardized items requires much more flexibility, making it
difficult to replace human skills with machines and machinelike processes. At the
same time, the continual modification and customization of products and procedures
require the ongoing efforts of a variety of workers—designers, toolmakers, computer
programmers, systems analysts, setup personnel, maintenance and repair staff,
construction workers, and machinists, as well as personnel involved in marketing,
distribution, sales, and service. All of these occupations require a fair degree of skill
and on-the-job autonomy.
The production of standard products for a mass market creates a fertile ground
for the extreme division of labor and the de-skilling of work that was typical of many
businesses during the nineteenth century and a good part of the twentieth. But when
consumers continually demand new, nonstandard products, existing ways of
producing things no longer make sense from an economic standpoint. Firms that cling
to the old routines not only oppress their workforce, they run the risk of eventual
bankruptcy.
Technological Change and White-Collar Work
At this point it can be fairly argued that a concentration on industrial work
is misplaced. As was noted in the previous chapter, the number of workers in
manufacturing enterprises has steadily declined in relative terms, and these
workers are now a small segment of the labor force. Work on the assembly line,
often considered the epitome of industrial work, in reality occupies only a tiny
fraction of workers in the manufacturing sector. Due to fundamental changes in
the economy, the typical worker in the United States no longer toils on some
factory floor.
As the number of manufacturing jobs relative to the total labor force has
declined, more and more people are now found in white-collar occupations. More
workers are engaged in what the U.S. Bureau of Labor Statistics classifies as “office
and administrative support occupations” than any other type of work.25 Today,
the typical workplace is not the factory but the office. Some analysts have seen
this as an indication that the general occupational level has improved. Moreover,
it has been argued that technological change has in many cases increased skill
requirements and allowed clerical workers to take on broader responsibilities, an
assertion that will be taken up in the last part of this chapter.
Although some technologies have the potential to empower workers, others
are used to monitor them continuously. These technologies are especially appealing
to managers like the one who stated, “I’m a great one for believing that people
will really do what’s inspected and not what’s expected.”26 They now have at their
disposal many ways to monitor their employees. As Ellen Alderman and Caroline
Kennedy have noted: “Keystroke monitoring (where the computer counts the
number of keystrokes per minute), telephone accounting monitoring (where the
number of phone calls per hour and the length of each call are recorded), as well
as service observation (where supervisors listen in on calls), and keeping track of
unplugged time (measuring time spent away from the computer) are all becoming
commonplace.”27 To these can be added the counting of mouse clicks and the
monitoring of e-mail messages, sometimes with programs that detect potentially
incriminating words like “boss” and “union.”28 Even executive and higher-level
employees are not immune; laptop computers, cell phones, and voice mail all
can keep employees tethered to an “electronic leash” that can make work a 24/7
obligation.
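
To make the arithmetic of such monitoring concrete, the sketch below shows how two of the metrics listed by Alderman and Kennedy might be computed. It is purely illustrative: the function names, the five-minute idle threshold, and the assumption that events arrive as a time-ordered list of timestamps are hypothetical, not a description of any actual monitoring product.

from datetime import timedelta

# Illustrative only: derive two monitoring metrics from a time-ordered
# list of timestamps recorded during a single worker's shift.

def keystrokes_per_minute(keystroke_times):
    # Average keystroke rate over the span from first to last keystroke.
    if len(keystroke_times) < 2:
        return 0.0
    minutes = (keystroke_times[-1] - keystroke_times[0]).total_seconds() / 60
    return len(keystroke_times) / minutes if minutes > 0 else 0.0

def unplugged_time(event_times, idle_threshold=timedelta(minutes=5)):
    # "Time spent away from the computer": the sum of gaps between
    # consecutive events that exceed the idle threshold.
    idle = timedelta(0)
    for earlier, later in zip(event_times, event_times[1:]):
        gap = later - earlier
        if gap > idle_threshold:
            idle += gap
    return idle

A supervisory dashboard built on functions like these would flag workers whose rates fall below some norm, which is precisely the "inspected, not expected" logic quoted above.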
Industries and organizations, as well as the workers who are employed by
them, vary immensely, making it impossible to come to a summary judgment of the
consequences of technological change for white-collar work. Twenty-first-century
technologies offer expanded opportunities for monitoring workers, but they also
have the potential to empower these workers by making available vast troves of
information that used to be available only to upper-echelon managers. It is also
important to keep in mind that ascribed characteristics of workers, such as their
gender and ethnicity, can affect the way in which technological changes have
altered their jobs. There is some evidence that early examples of office automation
were more beneficial to male rather than to female workers. In the insurance
industry, for example, high-level positions in the management of electronic
data processing systems became almost exclusively male preserves, while routine
clerical work became an increasingly female occupation. A similar pattern was
observed in the Internal Revenue Service, where computerization reduced the
number of middle-level jobs, thereby blocking advancement opportunities for a
largely female clerical staff.29 These examples indicate that when a labor market
is already segmented along gender or ethnic lines, new technologies can reflect or
even reinforce existing divisions. One group may benefit from new ways of doing
things, while another group may find themselves in the same, or possibly worse,
circumstances.30

Although farms and factories still produce most of our essential goods, the office is the typical workplace of twenty-first-century employees in the developed world. (© SuperStock)
Telework
The differential effects of new workplace technologies can also be seen in the way
new computer and communications technologies have been used for work performed
away from conventional workplaces. Known as “telecommuting” or “telework,” this
mode of labor has grown in recent years. In 2010, more than 34 million American
workers—24.4 percent of employed workers—at least occasionally did employment-
related work at home.31 Significantly, women, especially those with children, are
more likely to engage in telework than men.32
Although it is based on modern technologies such as e-mail, fax machines,
scanners, and teleconferencing, telework represents a return to a mode of work that
prevailed before industrialization, suburbanization, and new transportation
technologies combined to separate the workplace from the home. As we have seen,
domestic production prior to industrialization was riddled with problems of motivation
and discipline that were ultimately overcome by the development of the centralized
factory and its regimen of close supervision. Modern communication technologies
have allowed some reversal of this long-term trend. Through telework, workers can
avoid expensive, stressful commutes and have more on-the-job flexibility.
In the past, homes adjoined the workshops of skilled craftsmen, but a home
worksite also could be a place where miserably paid workers put in long hours
in cramped cottages and fetid tenements. This dismal history caused an initial
opposition to modern telework by organized labor. Labor unions also opposed work
done in the home because dispersed and easily replaced workers are difficult to
organize and recruit.
Union opposition to telework has softened in recent years with the realization
that telework can be beneficial for many workers. Abundant possibilities for
exploitation exist when workers are isolated in their individual homes, are dependent on
their employers for their livelihoods, and can be easily replaced, but not all work
in the “electronic cottage” conforms to this model. Electronic homework need not
be confined to such things as routine word-processing tasks. Jobs requiring a high
degree of skill and expertise can also be done in the home. It may even be possible
for many specialized services to be supplied by homeworkers who are not employees
of a firm but work on a contract basis, perhaps as a member of an autonomous
teleworkers’ association. At the same time, regular employees who work far from
their firm’s central office may not have to be electronically supervised as long as they
have a personal stake in the success of the firm. This is more likely to happen when
employees occupy upper-level positions within an organization. For these workers,
new technologies can make their work more autonomous, convenient, and fulfilling.
Although telework can bring a number of benefits, workers should understand
that a prolonged period of absence from a central workplace, even if it does not result
in diminished output, may put these workers at a disadvantage when it comes to
pay raises, promotions, and assignment to new projects. Teleworkers may be cut off
from vital information, have difficulties forming job-related relationships with other
workers, and may be excluded from essential job-related networks. As Joel Mokyr has
noted of conventional working relationships, “Proximity in a plant or office created
personal familiarity and thus conditions of trust and believability. There is always a role
of body language, intonation, and general demeanor in human communications.”33
In sum, electronic communications technologies can be used for many work-related
purposes, but they cannot completely replace “face time” at a workplace.
One of the advantages of telecommuting is being able to combine work and family responsibilities. (Courtesy Michael Miller)
Smart Technologies and Dumb Jobs?
Optimistic students of technological change are fond of pointing out that as
computerized processes continue to emerge, a growing segment of the labor force will be
found in jobs that require high degrees of skill. Instead of monotonously tightening
nuts and bolts or engaging in routine clerical operations, large numbers of workers
will be engaged in tasks that cannot be performed by computers. Frank Levy and
Richard Murnane have called attention to the many kinds of work activities that
cannot be reduced to computer-friendly routines, the abilities and processes that
they have dubbed “expert thinking” and “complex communication.”34 And, as we
saw in the previous chapter, even where computers have taken hold, a large number
of employees will be required to continually modify organizational structures and
operations in order to make optimal use of computer-based technologies. From
these perspectives, occupational skill levels are going up, and the unskilled factory
operative and office clerk are becoming an anachronism in twenty-first-century
enterprises.
It is undeniable that these trends can be found in many firms, but to less
optimistic observers, predictions of the radical upgrading of workers’ tasks are at
best premature. Plenty of jobs still require only the most minimal levels of skill
and training. The task of monitoring industrial processes can be just as dull as the
most routine assembly operation, and in any event, many of these functions can be
performed by sensors connected to computers. Even maintenance and repair jobs may
represent only a slight improvement in required skill levels. Technological advances
can lower maintenance requirements, and when repairs need to be done, they can
often be accomplished by simply disconnecting a faulty module and replacing it with
a new one. The task of diagnosis, which often requires the greatest amount of the
repairperson’s skills, has itself been simplified through the development of special
test circuits and computer-based “expert systems.” To be sure, the need for repair
skills will always be present, and it is unlikely that technological developments will
allow the elimination of repair personnel. Still, any dilution of the repairperson’s
skill will have significant consequences for the overall skill levels of the labor force,
for these workers comprise the largest single group of skilled workers.35
Moreover, even if the optimists are largely correct, their predictions only hold
for industries that make extensive use of advanced technologies. High-tech
factories and offices may require higher levels of employee skills, but these are not the
workplaces of large numbers of workers today or in the near future. Nor will new
high-technology industries generate great numbers of new jobs. To be sure, some
high-tech occupations have grown rapidly in recent years and are projected to do
so in the future. According to the U.S. Bureau of Labor Statistics, from 2010 to
2020 employment in “Computer and Mathematical Occupations” is expected to
grow by 22 percent, for a net addition of 778,300 jobs.36 This represents a healthy
rate of growth, but it is more than matched by the 1,443,000 projected new jobs
for “Healthcare Support Occupations” (about half of which are expected to be
home health care aides), the 1,861,900 additional jobs in the retail trade, and
1,029,500 more jobs in "Food Preparation and Serving Related Occupations"
during the same period.37 In fact, of the 10 occupations with the largest numerical
employment increases projected from 2010 to 2020, only two—registered nurses
and postsecondary teachers—require much in the way of advanced training. The
remainder—occupations like personal care aide, truck driver, and customer service
representative—hardly represent the high-tech future.38 Advanced technologies
have given rise to a number of occupations that have grown at a rapid rate from a
low base; however, in absolute terms, most of the job growth in the near future will
take place in occupations that have been around for a long time.
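
The phrase "a rapid rate from a low base" can be made precise with a derived figure not stated in the text: if a 22 percent increase adds 778,300 jobs, the 2010 base must have been roughly

\[
\frac{778{,}300}{0.22} \approx 3.5 \ \text{million jobs},
\]

so even this healthy growth rate operates on an employment base far smaller than those of retail trade or health care support, which is why slower-growing occupations add more jobs in absolute terms.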
Although technological advance does not seem to be the source of large, across-
the-board increases in workers’ skills, it doesn’t seem to be producing the opposite
effect either. While the skill requirements in some occupations have decreased, they
have increased in others, and it is likely that all of these changes have offset one
another, leaving no net change one way or the other.39 It also may be the case that
there has been a polarization of workers’ skills as skill requirements have increased
in some jobs and occupations and decreased for others.40 For example, automobile
manufacture has changed substantially as a result of the introduction of computer-
controlled equipment. This, in turn, has led to changes in the skills exercised
by the factory workforce; while skilled workers have experienced upgraded skill
levels, the opposite has been the case for production workers.41 This bifurcation
was not the inevitable outcome of technological change, however. The distinction
between skilled and unskilled workers has always been a prominent feature in the
automobile industry, and in the absence of changes in organizational structure and
culture, the introduction of new production technologies reinforced this division by
giving skilled workers most of the responsibilities for the operation, maintenance,
and repair of technologically sophisticated equipment while leaving ordinary
production workers with jobs that continued to require little in the way of skill.
This is hardly an atypical pattern. As several researchers have noted, organizational
patterns strongly influence the skills used by workers.42 One aspect of organizational
structure that may strongly affect workers’ skill levels is the distribution of power.
When there is a sharp division between those who manage and those who work, and
when managers view their workers as hostile and unreliable, workers will be treated as
replaceable parts. Their skill level will be low, and they will be controlled by centralized
supervision and machine-paced work processes.43 By contrast, when a more democratic
and egalitarian order prevails, technologies that require the initiative and commitment
of workers are more likely to be selected, and workers will be given a chance to develop
their skills and take their places as valued members of the organization.
New technologies can be developed to increase levels of skill, or they can
be used to diminish them. Like all matters regarding technology, choices have
to be made. These choices emerge in an environment where the intentions and
relative power of employees, managers, and the owners of enterprises are usually at
least as important as emerging technical capabilities in determining the course of
technological change and its consequences for a firm’s workers.
Questions for Discussion
1. Do you agree that many early industrial technologies were used as ways of
controlling labor? What sort of historical research could be done in order to
test this proposition?
2. Do employers have a legitimate right to electronically monitor their employees?
Would you object to monitoring by your employer? Should federal or state
governments pass laws that limit electronic employee monitoring? If so, what
would be reasonable limits to this practice?
3. Can you see any indications that consumers are showing a preference for
non-standardized products? How will changed consumer preferences affect
workplace technologies?
4. According to some predictions, a growing number of jobs will be held by
“telecommuters”—employees who work at home while using computers to
receive, process, and transmit information. What sort of jobs could be done in
this way? Would you like to work as a telecommuter? Why?
5. What are some of the key skills used by practitioners of an occupation you
are interested in? Which of these skills might be replaced by technological
advances in the near future? Would a diminished need for these skills make
this occupation more or less attractive to you?
6. As noted in this and the previous chapter, the optimal use of computers
in many work settings will require employees who are able to restructure
organizational structures and procedures. To be more specific, what sort of
things might need to be done? How would you go about preparing for a job that
requires skills of this sort?
Notes
1. See Terry Reynolds, Stronger Than a Hundred Men (Baltimore: Johns Hopkins
University Press, 1983).
2. Melvin Kranzberg and Joseph Gies, By the Sweat of Thy Brow: Work in the Western
World (New York: G. P. Putnam’s Sons, 1975), p. 112.
3. Daniel Nelson, Managers and Workers: Origins of the New Factory System in the
United States, 1880–1920 (Madison: University of Wisconsin Press, 1975), pp. 4–7.
4. Quoted in Stewart Holbrook, The Story of American Railroads (New York: Crown,
1947), p. 15.
5. Walter Licht, Working for the Railroad: The Organization of Work in the Nineteenth Century
(Princeton, NJ: Princeton University Press, 1983), p. 88.
6. Daniel T. Rodgers, The Work Ethic in Industrial America (Chicago: University of
Chicago Press, 1978), p. 67.
7. Frederick Engels, “On Authority,” in Karl Marx and Frederick Engels, Selected Works
(Moscow: Foreign Languages Publishing House, 1962), p. 637.
8. Malcolm I. Thomis, The Town Labourer (New York: Barnes and Noble, 1974),
pp. 108–109.
9. Witold Rybczynski, Taming the Tiger: The Struggle to Control Technology (New York:
Viking/Penguin, 1985), pp. 46–47.
10. Charles F. Sabel, Work and Politics: The Division of Labor in Industry (Cambridge:
Cambridge University Press, 1982), p. 41.
11. N. S. B. Gras, Industrial Evolution, quoted in Sidney Pollard, The Genesis of Modern
Management: A Study of the Industrial Revolution in Great Britain (Harmondsworth,
England: Penguin, 1968), pp. 22–23.
12. Harry Braverman, Labor and Monopoly Capital: The Degradation of Work in the Twentieth
Century (New York: Monthly Review Press, 1974), p. 195.
13. J. Patrick Wright, On a Clear Day You Can See General Motors (New York: Avon Books,
1979), p. 94.
14. David A. Hounshell, From the American System to Mass Production, 1800–1932: The
Development of Manufacturing Technology in the United States (Baltimore: Johns Hopkins
University Press, 1984), pp. 217–261.
15. Henry Ford, in collaboration with Samuel Crowther, My Life and Work (Garden City,
NY: Doubleday, Page & Company, 1922), pp. 81, 82–83.
16. See Charles Rumford Walker and Robert H. Guest, The Man on the Assembly Line
(Cambridge, MA: Harvard University Press, 1953).
17. Quoted in Braverman, Labor and Monopoly Capital, p. 118.
18. Ibid., p. 113.
19. Frederick Winslow Taylor, The Principles of Scientific Management (New York: Norton,
1967 [originally published in 1911]), p. 36.
20. Larry Hirschhorn, Beyond Mechanization: Work and Technology in a Post-Industrial Age
(Cambridge, MA: MIT Press, 1984), pp. 61–86.
21. Ibid., p. 157.
22. “The Technology Payoff,” Business Week (14 July 1993): 59.
23. Sabel, Work and Politics, op. cit., p. 74.
24. Robert B. Reich, The Next American Frontier (Harmondsworth, England: Penguin,
1984), p. 81.
25. U.S. Bureau of Labor Statistics, “Occupational Employment Projections to 2020,”
Monthly Labor Review (January 2012), p. 89 (accessed on March 26, 2012, at
http://www.bls.gov/opub/mlr/2012/01/art5full).
26. Michael W. Miller, “Computers Keep an Eye on Workers and See If They Perform
Well,” Wall Street Journal (June 3, 1985): 1.
27. Ellen Alderman and Caroline Kennedy, The Right to Privacy (New York: Random
House, 1995), p. 316.
28. Jill Andresky Fraser, White Collar Sweatshop: The Deterioration of Work and Its Rewards
in Corporate America (New York and London: W.W. Norton, 2001), pp. 87–90.
29. Roslyn L. Feldberg and Evelyn Nakano Glenn, “Technology and Women’s Degradation:
Effects of Office Automation on Women Clerical Workers,” in Joan Rothschild (Ed.),
Machina ex Dea: Feminist Perspectives on Technology (New York: Pergamon Press, 1983),
pp. 59–78.
30. Barbara A. Gutek and Tora K. Bikson, "Differential Experiences of Men and Women in
Computerized Offices," Sex Roles 13, 3–4 (1985).
31. Cindy Krischer Goodman, “Balancing Act: Telecommuting Is on the Rise,” Pittsburgh
Post-Gazette (June 7, 2010), accessed on March 23, 2012, at
http://www.post-gazette.com/pg/10158/1063247-407.stm.
32. Phyllis Moen and Patricia Roehling, The Career Mystique: Cracks in the American Dream
(Lanham, MD: Rowman and Littlefield, 2005), p. 179.
33. Joel Mokyr, The Gifts of Athena: Historical Origins of the Knowledge Economy (Princeton,
NJ: Princeton University Press, 2002) p. 160.
34. Frank Levy and Richard J. Murnane, The New Division of Labor: How Computers Are
Creating the Next Job Market (Princeton and Oxford: Princeton University Press, 2004),
pp. 47ff.
35. The changing skill requirements of auto mechanics are explored in Kevin Borg, Auto
Mechanics: Technology and Expertise in Twentieth Century America (Baltimore and
London: Johns Hopkins University Press, 2007).
36. U.S. Bureau of Labor Statistics, “Occupational Employment Projections to 2020,”
op. cit., p. 89.
37. U.S. Bureau of Labor Statistics, “Occupations with the Greatest Job Growth,” accessed
on March 26, 2012, at http://www.bls.gov/emp/ep_table_104.htm.
38. Ibid.
39. Arne Kalleberg, The Mismatched Worker (New York and London: W.W. Norton, 2007),
p. 50.
40. Ibid., p. 107.
41. Ruth Milkman and Cydney Pullman, "Technological Change in an Auto Assembly
Plant: The Impact on Workers' Tasks and Skills," Work and Occupations 18,
2 (May 1991).
42. William Form, Robert L. Kaufman, Toby L. Parcel, and Michael Wallace, “The Impact
of Technology on Work Organization and Work Outcomes: A Conceptual Framework
and Research Agenda,” in George Farkas and Paula England (Eds.), Industries, Firms,
and Jobs: Sociological and Economic Approaches (New York: Plenum Press, 1988).
43. Louis E. Davis and James C. Taylor, “Technology, Organization, and Job Structure,”
in Robert Dubin (Ed.), Handbook of Work, Organization, and Society (Chicago:
Rand McNally, 1976), p. 412.
Part Five
Communication
The ability to express and understand complex thoughts is one of the most funda-
mental characteristics of the human race. For most of human existence this was
done primarily through the spoken word. The invention of writing greatly expanded
the reach and permanency of communication, but for hundreds of years reading and
writing were confined to tiny segments of the population. The invention and
diffusion of printing marked another revolutionary stage in human communication, as
we see in Chapter 12, making written works accessible to much larger numbers of
people and at the same time contributing to profound social, political, cultural, and
psychological changes.
For the next four centuries communication technologies remained static, as
people waited weeks and even months for the arrival of a letter. But in the middle
of the nineteenth century, a dramatic transformation occurred. Chapter 13 presents
the transformative media technologies that began as the telegraph brought
virtually instantaneous communication to every part of the world where wires had been
strung. During the twentieth century, radio and then television vastly expanded the
range and content of human communication.
Still, the revolution in human communications was far from over. While the
consequences of radio and television are still being debated, recent years have seen
the emergence and rapid diffusion of a host of new media technologies, carrying
with them the potential for massive changes equal to those wrought by printing,
the telegraph, radio, and television. In Chapter 14 we will consider two of them,
the Internet and mobile communication devices. Today, many people view these
media as being virtually synonymous with technology in general, and as such they
occupy a special place in our everyday lives. As with the technologies that have
come before them, the Internet and mobile communication devices highlight the
complex issues presented by new ways of communicating.
Chapter Twelve
Printing
When I needed a permanent means of recording and subsequently disseminating
my ideas about how some major features of technology are connected to social
structures and processes, I naturally thought about writing a book. If I weren’t so
verbose, I could have written a magazine article. There are, of course, other ways of
retaining and transmitting information and ideas. They can be stored in a person’s
memory and then communicated through the spoken word; indeed, that was the
way things were done for most of human history, and are still done in nonliterate
societies. Perhaps sometime in the future, books, newspapers, and magazines will
be largely supplanted by websites and e-books. But until very recently, nothing has
equaled words printed on paper as a rapid, cheap, and portable means of storing and
transmitting information. Moreover, the importance of printing goes well beyond
the storage and communication of information. The social effects of printing
have been immense, making printing one of the most significant technological
innovations of all time.
The development of a written language is one of the hallmarks of civilization.
While a spoken language is a basic element of all human societies, not all of them
developed a written language. Of the approximately 5,000 to 8,000 languages that
are spoken today, only a very small percentage appear in written form, and in many
cases these written languages have been recent inventions, stimulated by
missionaries or other contacts with the outside world. One interesting example of an effort to
create a written language appeared in the 1820s, when a Cherokee Indian named
Sequoyah single-handedly constructed a written language so that his people could
more effectively resist the encroachments of white settlers. A written language was
an essential technology, for as Sequoyah observed, “Much that red men know, they
forget; they have no way to preserve it. White men make what they know fast on
paper like catching a wild animal and taming it.”1 After much laborious effort,
Sequoyah succeeded in constructing a system of writing that could be mastered in a
few weeks. Unfortunately for Sequoyah’s people, the new written language by itself
could not stem the advance of the white man into Cherokee territory.
The writing systems of the great languages of the world were also constructed to
serve practical purposes. In ancient Egypt, Babylonia, and China, writing was used
to keep financial accounts and to facilitate communication between government
officials. Writing also served a spiritual purpose in many cases, for it allowed the
recording and diffusion of concepts that were central to the local religion. Since
church and state were tightly interwoven in these ancient states, writing met the
needs of the political elite in their dual role as priests and secular officials. As
with many technologies, writing owed its development to the needs of one group
of people and was used to maintain and extend their influence over others.2 For
example, the priests of Egypt during the days of the pharaohs were able to refer to
written calendars that allowed them to predict the cyclical flooding of the Nile,
thereby demonstrating their supposed indispensability.
The Printing Revolution
Other than the maintenance of a social and political hierarchy, the effects of writing
were minimal when the written language was understood only by a small segment of
the population. Few of the common people had the time and inclination to become
literate, and even if they were able to read, the great cost of producing written
documents kept these works out of their reach. Written materials were rare and
costly because they had to be laboriously drafted and copied by hand. Documents
central to a civilization, such as the Bible of medieval Christianity, were scarce and
inaccessible, for monks and other scribes had to labor for many months to produce a
single volume. Books were rare and precious; two of the largest book repositories in
England, the abbey libraries of Canterbury and Bury, each held 2,000 books, while
the library of Cambridge University had only 300.3
In the absence of widespread literacy, the ability to memorize was developed to
a level scarcely imaginable today. People made use of elaborate mnemonic devices
in order to recall important facts and concepts. This facility made it possible to
preserve laws, ceremonies, and stories in the minds of individuals. It was not unusual to
find people who could recite sacred texts and heroic sagas that might today occupy
hundreds of pages of printed text.
All of this began to change dramatically with the introduction of printing with
movable type. As with many other technological innovations, printing originated
in East Asia. In the fourth century A.D., the Chinese were copying stone inscriptions
through the use of paper rubbings. By the seventh century, Chinese printers were
using carved wooden blocks to produce books on agriculture and medicine. Chinese
accomplishments using this method were prodigious; during a 12-year period that
began in 971, printers in the city of Chengdu published a 5,048-volume compilation
of Buddhist scriptures that required the carving of 130,000 blocks. A significant
advance came in the middle of the eleventh century when a man named Bi Sheng
invented a system of printing that used individual pieces of clay type set in a wax
matrix. Two hundred years later another Chinese inventor, Wang Zhen, created a
rotating type frame that allowed the typesetter to easily pick out the required pieces.
The Chinese also experimented with metal type, although it was their Korean
neighbors who achieved success in this endeavor during the fourteenth century.4
Printing developed more slowly in Europe. In the early fifteenth century, playing
cards and pictures of saints were printed from wooden blocks. It is likely that by
1440 woodblock printing was also used for the production of small books containing
a few lines of text.5 This method was not suitable for printing lengthy books, as
the type for each page had to be laboriously carved from a single piece of wood. At
about this time some printers also made prints from copper plates that had words
and images engraved into them.6 As with woodblock printing, engraving was a very
labor-intensive process that did little to lower the cost of the written word. What
made printing a much more efficient process was forming each page from individual
pieces of type that could be used over and over again. As we have just seen, this
had already been done in China, but the basic idea had to be reinvented in Europe,
stimulated perhaps by some knowledge of what the Chinese had been doing.7
Although there are other European claimants to this invention, Johann
Gutenberg (1400?–1468?) is usually given credit for devising a workable system
of printing with separate pieces of type. In order to put the idea into practice,
Gutenberg also had to develop a number of new techniques, such as a method to
uniformly cast pieces of type. This was done by punching the letter into a copper
matrix and then using the resulting mold for casting the actual pieces of type.
Setting these individual pieces of type in a straight line also presented a problem.
Gutenberg solved it by designing a mold that kept each piece of type within
acceptable tolerances, and by equipping each piece of type with a ridge on one side
and a corresponding groove on the other. In this way pieces of type could interlock
with one another when they were set in a frame.8
Important as the invention of movable type was, the printing of books in large
quantities required more than simply composing a page from individual letters. As
is always the case with major technological changes, a whole system of interrelated
elements had to be developed. Obviously, some sort of a press was required; here
the ancient wine-press provided a useful model. New kinds of ink, compounded
from lampblack and linseed oil, were also necessary. The production of durable type
required considerable experimentation with different alloys; a mixture of tin, lead,
and antimony proved to be the winning combination. And to realize the economies
offered by printing, costly parchment was replaced by paper, a Chinese invention
that had passed through the Islamic world and began to be produced in Europe early
in the fourteenth century.

[Figure: Mass-produced communication: a sixteenth-century print shop. (The Mansell Collection)]
In a world that produced its goods through individual craftsmanship, the first
books printed from movable type were harbingers of the age of mass production.
While the hand-copied and illuminated books of the past were works of art available
to only a few, printed books, tracts, and pamphlets were standardized products that
reached a large and growing audience. According to one estimate, more than
10 million copies of 40,000 different titles were produced during the 50 years
following Gutenberg’s invention. Had they been copied by hand, the task would
have taken a thousand years.9 During the century that followed, as many as
200 million books were printed in Europe.10
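
The scale of that estimate is easy to verify with simple arithmetic. In the sketch below, only the ten-million-copy and thousand-year figures come from the estimate cited above; the yearly rates are derived from them for illustration.

```python
# Rough arithmetic behind the estimate quoted above; only the two input
# figures come from the text, and the rates are derived for illustration.

copies_printed = 10_000_000   # copies produced in the 50 years after Gutenberg
years_by_press = 50
years_by_hand = 1_000         # estimated time to copy the same output by hand

print(f"Press output:  {copies_printed / years_by_press:,.0f} copies per year")
print(f"Hand copying:  {copies_printed / years_by_hand:,.0f} copies per year")
print(f"Implied speedup: {years_by_hand / years_by_press:.0f}x")
```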
Printing and the Expansion of Knowledge
Printing and the consequent expansion of the number of books in circulation did
much to transform European society. By systematizing knowledge and making it
readily available, printing helped to promote the rational modes of thought that
characterize a modern society. As happened five centuries later with the rise of the
Internet, information flowed much more readily, with everything from treatises
on accounting to printed musical scores receiving widespread distribution. With
so much information circulating, it was no longer necessary to simply accept the
judgment of a few authorities, and a more critical spirit emerged. Printing also
allowed the widespread distribution of maps and accounts of global exploration.
As a result, new geographical discoveries could be widely publicized, and errors
and inconsistencies could be made evident through the cross-checking of existing
maps and printed descriptions.11 The clearer definition of space made possible by
printed maps gave a great impetus to voyages of discovery that opened the world
to European civilization and vice versa. Better maps made journeys less hazardous,
and the ongoing definition of the world’s spatial contours motivated explorers and
colonists to fill in the remaining blank spaces on the maps.12
While explorers were making fundamental geographical discoveries, others
were gaining new insights into how the world worked. The great advances in
scientific knowledge that began to emerge in the fifteenth and sixteenth centuries
were pushed forward by printing. As with the progress of geographical exploration,
the advancement of science was to a substantial degree the result of the spread of
information through print media. Science requires accurate data, but before the
age of printing, information was scattered, quirky, and often highly inaccurate. To
be sure, much of the information disseminated by printing was bogus, but because
information reached a much greater audience, it was subject to informed criticism,
making for greater accuracy. In short, printing allowed the establishment of an
intellectual feedback system that helped to drive out errors.13
More was involved than the transmission of factual information. Human
knowledge was significantly advanced by a greater exposure to different theories,
perceptions, and interpretations. As Elizabeth Eisenstein has summarized the process:14
Much as maps from different regions and epochs were brought into contact in the
course of preparing editions of atlases, so too were technical texts brought together
in certain physicians’ and astronomers’ libraries. Contradictions became more
visible, divergent traditions more difficult to reconcile. . . . Not only was
confidence in old theories weakened, but an enriched reading matter also encouraged
the development of new intellectual combinations and permutations. . . . Once
old texts came together within the same study, diverse systems of ideas and special
disciplines could be combined. Increased output . . . in short, created conditions
that favored new combinations of old ideas at first and then, later on, the creation
of entirely new systems of thought.
The spread of information through printing allowed major advances in
scientific understanding. Everything from astronomical tables to accurate anatomical
illustrations became grist for the scientific mill. Theories could be readily checked
against a growing mass of information. Equally important, printing helped to
change the nature of learning. Scientific investigators were freed from the drudgery
of copying tables and texts. Also, as literacy spread and books were much more
readily available, students of all ages did not need to spend vast amounts of time on
rote memorization and the learning of elaborate mnemonic techniques. Freed from
these tasks, advanced students and natural philosophers could devote their energies
to “solving brain teasers, designing ingenious experiments and new instruments,
or even chasing butterflies and collecting bugs if they wished.”15 The pursuit of
knowledge could take on a playful quality, and play can be a key ingredient in the
development of scientific understanding.
Printing and the Rise of Protestantism
Just as printing helped to alter people’s views of the physical and natural world, the
new medium played a key role in transforming conceptions of the spiritual world. It is
certainly no coincidence that the Protestant Reformation began less than two
generations after Gutenberg's invention. Before then, unorthodox interpretations of
prevailing Christian religious beliefs had sprung up from time to time, but they remained
confined to particular locations. Although several instances of religious dissent may
have occurred simultaneously, they were not linked together into an integrated attack
on the authority of the centralized Church. The circulation of printed religious books
and tracts created a more unified challenge to the dominant Catholic order and
helped to turn a number of separate revolts into the Protestant Reformation.
Literacy and Protestantism reinforced each other; while Protestantism benefited
from the expansion of the printed word, the religious beliefs of Protestant reformers
did much to encourage printing. One of the central tenets of Protestantism was that
individuals should study the Bible and interpret its message unencumbered by the
guidance of priests, bishops, and popes. The production of printed Bibles carried the
word of God directly to the faithful, while at the same time a great outpouring of
religious tracts spread the words of religious reformers throughout Christendom. For
good reason, Martin Luther praised printing as “God’s highest and extremest act of
grace, whereby the business of the Gospel is driven forward.”16
Luther later lost his enthusiasm for popular Bible reading when it became
apparent that the unguided reading of the Bible could result in “dangerous” beliefs
that contradicted his own.17 But by then individual reading of the Bible had become
an integral part of Protestant life. Sixteenth-century English clergymen were told to
“comfort, exhort, and admonish every man to read the Bible in Latin or English, as
the very word of God and the spiritual food of man’s soul, whereby they may better
know their duties to God, to their sovereign lord the king, and their neighbor.”18
The Catholic Church also used printed catechisms, tracts, and edicts as
weapons in its battles against the Protestant "heresy." But the cultural order it attempted
to maintain was fundamentally antagonistic to the use of printed sources and the
spread of literacy. The Church was unwilling to allow laypeople to pursue Biblical
scholarship, and it was deeply suspicious of popular Bible reading and
interpretation. Catholicism was still rooted in the medieval past, and it was unprepared for
the challenge presented by print technology.
If the Bible was to be made accessible to the Protestant faithful, it had to be printed
in a language they understood, rather than in the Latin that had served as the universal
language of the medieval clergy. While the Catholic Church strongly resisted the
production of Bibles in modern European languages, Protestantism made it an imperative.19
In Protestant lands the translation and printing of Bibles in vernacular languages spread
the Gospel, but equally important, stimulated nationalism. A Europe that had been
united under the cultural domination of the Catholic Church gave way to a patchwork
of separate states, each with its own established church and a growing literature printed
in the local language. Printing helped to destroy the unity of the medieval world, while
within the borders of discrete national entities, a considerable amount of
homogenization took place. Printing was of great importance in standardizing the languages
of these nations. Dialects unsupported by a printed literature slowly faded away and were
supplanted by a common form of French, German, or English.
Printing, Literacy, and Social Change
Motivated by the religious requirement of Bible reading, Protestants became literate
to an extent not previously found in European civilization. Throughout the Middle
Ages the great bulk of the population could not read. Even rich and powerful
people were illiterate. Charlemagne, who reigned as King of the Franks from 768
to 814, and after 800 was the emperor of much of present-day Western Europe, was
an intelligent and learned man, but he needed to have books and documents read
aloud to him, and he relied on scribes to write down his words.
The only people who found literacy to be a necessity were members of the
Christian clergy. (The connection between literacy and a religious vocation can be
seen in the etymological relationship between the words “clerk” and “clergy.”) The
ability to read was often taken as sufficient proof that a person was a member of
the clergy, and therefore not subject to civil law. This could be a great advantage,
for a person claiming “benefit of clergy” could escape corporal punishment or even
execution if convicted of a crime. The fifty-first psalm of the Old Testament was
known as the “neck verse,” because a person able to read it would be spared the
gallows.20 As might be expected, many illiterates tried to escape punishment by
memorizing the psalm and then pretending to read it at the appropriate moment.
This resulted in some farcical scenes in which a person attempted to demonstrate
his literacy by “reading” from an upside-down Bible!
Protestantism, with its emphasis on Bible reading, created strong incentives for
laypeople to learn to read. This skill could, of course, be useful in other endeavors.
A person who could read and write was able to keep accurate records and accounts,
learn about new ways of doing things, understand the law, enter into contracts,
communicate with large numbers of people, and make use of banks. Thus, the often
noted connection between Protestantism and rapid economic growth in early
modern Europe was in part due to the superior levels of literacy in Protestant lands.21
As we have seen, the rise of Protestantism created a fertile environment for
the development of the printing industry. At the same time, urbanization and the
spread of capitalism also made for a society that was more open to—actually hungry
for—the new ideas that books could disseminate. On a more utilitarian level,
the development of commerce and industry generated a need for written records
and accounts. In later centuries, the desire for literacy was further stimulated by
the growing concerns with social equality. In particular, the greater accessibility
of education that culminated with the introduction of mass schooling in the
nineteenth century produced a literate public and expanded the market for the
printed word.22
All of these historical forces produced an environment that was ripe for
the large-scale production of printed works. Had they not been present, the
invention of movable type would have been of much more limited significance.
After all, printing had first emerged in East Asia, yet social conditions limited its
influence in that region. Korean printers had independently invented movable
type, but few books were printed by this means in Korea until the end of the
nineteenth century. Korean scholars even invented a phonetic alphabet known
as Hangul in the fifteenth century, which greatly facilitated the use of movable
type. But Korea’s traditional aristocracy clung to the use of Chinese characters
for the written rendition of Korean words because their ability to understand
the difficult-to-learn characters set them apart from the common folk. Although
a mass readership literate in Hangul could have provided a large audience for
printed works, no self-respecting writer was willing to use the “inferior” script.
Consequently, printing with movable type had minimal effects on Korean society.23
In Europe, social and cultural conditions created a strong demand for the
products of print technology. At the same time, the technological advances that
made the mass production of printed works possible did much to create these
conditions. As with all major technological advances, the influence of printing is
not a simple matter of technological determinism. Instead, we find a reciprocal,
reinforcing relationship between technological change on the one hand and
social change on the other. Moreover, printing with movable type did not effect a
sudden revolution in Western society; even as late as the middle of the eighteenth
century, 40 percent of English men and more than 60 percent of English women
were illiterate.24 Still, working in conjunction with the great changes that were
sweeping over Western civilization, the development of print technology produced
a transformation that has only begun in many parts of the world today.
Psychological Effects of Printing
The effects of printing on religious, economic, and political life seem reasonably
evident. A consideration of the consequences of reading for individual attitudes
and ways of thinking, however, is a more speculative venture. The Protestant
Reformation is an identifiable historical event, and its connections with printing
seem clear. It is much more difficult to demonstrate that printing fundamentally
changed basic ways of thinking, or that it altered how we understand our place in
the world.
Still, we might reasonably speculate that printing resulted in a greater sense
of one’s separateness from the rest of society. Reading is usually a solitary activity.
Novels, with their typical emphasis on one key figure, have been the central literary
product of the last two centuries. It is therefore likely that a culture based on print
is more individualistic than one that relies on collective means of communication,
such as Sunday sermons or old-fashioned political oratory. It has even been
suggested that the very concept of the "self" is foreign to people and cultures that
have not been touched by literacy.25
Some fascinating possibilities of this sort have been suggested by Marshall
McLuhan, for whom media such as print or television had consequences that far
outweigh the overt messages that they carry.26 According to McLuhan, printed
books fundamentally changed civilization not because of the information they
transmitted; the greatest consequences of printing lay in the different modes of
thought and perception that it fostered. In McLuhan’s analysis, each medium of
communication engenders a distinctive way of looking at the world; as his famous
aphorism has it, “the medium is the message.” The reading of the printed word
makes readers think in sequence, just as a sentence is sequentially read from left
to right. Reading also produces an egocentric view of the world, for the reader’s
involvement with the printed word is solitary and private. For McLuhan, the
intellectual and social consequences of print were immense, encompassing such
diverse things as the Newtonian view of the universe, the employment of fixed
prices for business transactions, and the “detribalization” of society.
It is impossible to summarize McLuhan’s ideas in a few sentences. The intrepid
reader is advised to turn to McLuhan’s writings after first being warned that their
author can be cryptic, elliptical, hyperbolic, and at times simply maddening.
Nonetheless, some taste of his thinking can be obtained by looking at how
McLuhan has interpreted changed painting styles as a reflection of printing, or what
he has termed “The Gutenberg Galaxy.” Reading, as McLuhan points out, inculcates
a single “point of view,” the fixed position of the reader.27 In similar fashion, the
visual arts that developed in the post-printing era began to depict scenes as they
might be viewed from a single point in space. Given this central vantage point, the
scene appears to have a three-dimensional quality due to the artist’s employment
of the principles of perspective. The employment of perspective is not a natural or
necessary part of pictorial art. As any art student knows, to paint or draw something
in perspective requires the application of special procedures and a good deal of
practice. These artistic techniques were not employed in medieval painting, which
was characterized by a visual flatness. Figures were not presented according to the
rules of perspective, but rather in accordance with the importance of the subjects;
in a typical medieval painting, Jesus, an archbishop, and the faithful appear in the
same plane, but they are not all of the same size; their size is commensurate with
their importance in the spiritual hierarchy. Until the advent of printing there was
no reason to depict subjects according to rules of perspective because printing and
widespread reading had not conditioned people to see the world as it might appear
from a single vantage point. As McLuhan has argued, the rules of perspective were
developed and used only after printing produced an individualistic perception of
the world.
Newspapers
For several centuries after the invention of movable type, the most important
product of the printing press was the book. Beginning in the eighteenth century, a new
kind of publication appeared that was to have equally revolutionary consequences
for society. That publication was the newspaper. The transmission of written news
has a long history, originating with the acta diurnia that was posted in the Roman
Forum, and continuing with the printed broadsheets that circulated from late
medieval times onward. The first regularly appearing newspapers emerged in the
seventeenth century, and by the beginning of the next century, newspapers were
established institutions in a number of European cities. Colonial America followed
suit, with the appearance of The Boston News-Letter in 1704 marking the beginning
of newspaper publishing in what was to be the United States.
These early newspapers were unimpressive, usually consisting of no more than
four pages of hard-to-read type. Their circulation was also limited, being confined
to the small number of people who were both literate and able to pay for them. This
situation changed dramatically during the nineteenth century, when the newspaper
became a mass-produced item, a product of Industrial Revolution technology.
One of the distinguishing characteristics of the Industrial Revolution was the
use of new sources of energy, with steam power taking on a growing importance. The
first steam-powered printing press was invented by Friedrich Koenig and Andreas
Bauer in 1812, and was put to use by the Times of London in 1814. The use of steam
power for this purpose necessitated a fundamental redesign of the printing process.
Instead of manually pressing a sheet of paper against a flat bed containing the type,
the job was done by a steam-powered rotating cylinder, while another cylinder did
the inking. This process ran off 1,000 copies of the newspaper each hour, in
one-third of the time that the hand press required.28
Other refinements ensued. Printing on a continuously running roll of paper
(instead of feeding single sheets of paper) greatly speeded up the process, as did the
development of presses that printed on both sides of a sheet and automatically cut
individual pages. Improved inking processes contributed to the speed of production,
while the use of woodpulp for paper also lowered production costs.
Improvements to the rotary press concept followed, most notably Richard Hoe’s
use of another rotating cylinder instead of a flat horizontal bed to hold the type.
First employed in 1846, improved variants of the rotary press could produce as many
as 20,000 impressions per hour by the beginning of the Civil War.29 Printing was
further accelerated by the invention of the stereotype plate, which allowed the easy
production of duplicate plates for use on multiple presses. The stereotype plate was
made by pressing a paper mat onto a frame of set type. After the mat was dried and
shaped into a half of a cylinder, molten metal was forced into it, forming the actual
printing plate. The other side of the rotating cylinder held type for another page. The
first of these presses contained two cylinders, each with four plates, so an eight-page
newspaper could be printed with each turn of the cylinders.30 By 1890 improved
rotary presses were capable of printing 48,000 twelve-page papers each hour.31
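
The throughput figures in this paragraph can be combined into a simple illustration. The sketch below is derived entirely from the numbers quoted in the text, not from press specifications; it shows the pages-per-revolution arithmetic and the implied hourly output of the 1890 presses.

```python
# Illustrative throughput arithmetic for the presses described above,
# using only the figures quoted in the text.

# Early stereotype-plate press: two cylinders, four plates each.
cylinders = 2
plates_per_cylinder = 4
pages_per_revolution = cylinders * plates_per_cylinder
print(f"Pages printed per turn of the cylinders: {pages_per_revolution}")  # 8

# By 1890: 48,000 twelve-page papers per hour.
papers_per_hour = 48_000
pages_per_paper = 12
impressions_per_hour = papers_per_hour * pages_per_paper
print(f"Page impressions per hour, 1890: {impressions_per_hour:,}")  # 576,000
```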
While the presses turned out newspapers with increasing rapidity, the need to
set type by hand restricted the rate of production. A great deal of inventive energy
therefore was expended on the development of automatic typesetting. Even Mark
Twain was involved as a financial backer for such a venture; it unfortunately failed,
dashing the great author’s dreams of making a vast fortune.32 Success instead came
to a German immigrant, Ottmar Mergenthaler, whose Linotype machine was the
first effective typesetting device. First used in 1886, by the end of the century 3,000
of these machines were in use around the world.33

[Figure: A Hoe rotary printing press from the middle of the nineteenth century. (The Granger Collection, New York)]
The development of the mass-circulation newspaper was further stimulated
by a host of nineteenth-century inventions that allowed the rapid gathering and
dissemination of news. Railroads and steamships conveyed reporters to scenes
of newsworthy events. The telegraph quickly relayed their stories back to the
newsroom, and had the incidental effect of making stories less verbose because the
senders of telegraphic messages were charged by the word.34 The news-gathering
power of the telegraph was enhanced by the laying of trans-Atlantic cables that
greatly speeded up communications between Europe and the United States. The
telephone and, by the end of the century, the radio further assisted reporters and
the newspapers for which they worked.
Circulation Wars and the Shaping of Public Opinion
As with other mass-production technologies, the improvements in newspaper
printing dramatically lowered production costs. Before the 1830s, newspapers sold
for 6 cents a copy, which in those days could buy a quarter pound of bacon or a
pint of whiskey.35 As production costs fell, the first American penny paper, the
New York Sun, made its appearance in 1837. Aiming for a mass audience, the Sun
offered sensational stories of dubious veracity, such as a report on the observance of
life on the moon through the use of a powerful telescope located in South America.
Other newspapers followed suit, and by the second half of the nineteenth century
the cheap mass-circulation daily newspaper was a fixture in every city and town of
Europe and North America. The expansion of newspaper readership occurred at a
rapid rate. During the early decades of the nineteenth century the Times had led all
English newspapers with 10,000 readers,36 while the largest New York newspapers
usually printed editions of no more than 2,500.37 By 1870, 2.6 million newspapers
were printed each day in the United States, and by the end of the century that
figure had risen to 15 million.38
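
Expressed as an average annual rate, that expansion is striking. In the minimal computation below, the two endpoint figures come from the text and the compound rate is derived from them.

```python
# Implied average annual growth in U.S. daily newspaper output, 1870-1900,
# computed from the two endpoint figures quoted above.

papers_1870 = 2_600_000
papers_1900 = 15_000_000
years = 1900 - 1870

annual_growth = (papers_1900 / papers_1870) ** (1 / years) - 1
print(f"Implied average annual growth: {annual_growth:.1%}")  # ~6.0% per year
```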
Unfortunately, the spread of these newspapers was often stimulated by
journalism of the worst sort. Newspapers built up circulation by running stories that
stressed scandal, crime, and bizarre occurrences. The debasement of news reporting
accompanied the increase in newspaper readership; the “yellow journalism” of that
era, in the words of Edwin and Michael Emery, “turned the high drama of life into
a cheap melodrama and led to stories being twisted into the form best suited for
sales by the howling newsboy.”39 We can still see this sort of thing in the headlines
of tabloids conveniently situated next to supermarket checkout lines: “My Steamy
Nights with Brittany in UFO Love Nest.”
Both in the past and today, reportage of this sort demonstrated many
newspapers' low regard for the taste and intelligence of their readers. Beyond this, the
mass-produced newspaper became a force in its own right, and its editorial policies
did much to shape public opinion. This can be seen in the way the popular press
helped to push America into a war with Spain in 1898. To be sure, other forces were
at work; having conquered the frontier of its own territory, the United States was
seeking new triumphs of Manifest Destiny. Still, without the galvanization of public
opinion by newspapers such as William Randolph Hearst’s New York Journal, it is
possible that the war could have been averted.
Hearst was engaged in a fierce struggle with Joseph Pulitzer’s New York World,
and inflamed coverage of a rebellion against Spanish rule in Cuba was a surefire
way to sell papers. Through the use of sensational reporting, both papers enlarged
their circulations to over a million each, a record at that time.40 Reports of Spanish
atrocities in putting down the rebellion, while having some foundation, were wildly
exaggerated and given extensive coverage. Stories in the World left little to the
imagination: “Blood on the roadsides, blood in the fields, blood on the doorsteps,
blood, blood, blood!”41 Not to be outdone, Hearst’s New York Journal played a
prominent role in whipping up public enthusiasm for a war with Spain to “liberate”
Cuba. When Hearst’s artist-reporter in Cuba notified the boss that war was unlikely,
Hearst reportedly sent him a telegram with the command “You provide the pictures,
and I’ll provide the war.”
From the beginning of the revolt to America’s going to war with Spain, hardly a
day went by without a story about Cuba in one of the New York newspapers.42 And
when the American battleship Maine blew up in Havana harbor, many newspapers
were quick to attribute it to a Spanish mine. (To this day the cause of the ship’s
sinking remains unknown, although it was likely due to internal causes.) Even
though disputes with Spain over a variety of issues were almost fully resolved by this
time, a significant portion of the popular press had whipped up a war hysteria that
could not be extinguished. The brief war that ensued concluded with the United
States assuming the role of an imperial power, with colonial holdings that stretched
from the Philippines to Puerto Rico.
It would be an exaggeration to say that the Spanish-American War, or any
other major historical event, was directly caused by the reportage of circulation-
hungry mass newspapers. The remnants of Spain’s American empire were an
inviting target for turn-of-the-century America’s expansionist ambitions. Under
these circumstances, overheated news reporting reinforced opinions and ambitions
that were already evident. Still, there can be no denying that the journalistic
policies of Hearst and Pulitzer made a peaceful resolution of disputes between the
United States and Spain far more difficult.
It would be even farther from the truth to attribute such events as the
Spanish-American War to technological advances in printing. To be sure, the mass-
circulation dailies depended on improved methods of newspaper production, but
much more was involved than such innovations as the steam-powered rotary press.
The spread of education and literacy, spurred by the expansion of public schooling,
created a large potential readership for these publications. The expansion of industry
and commerce was accompanied by the emergence of a growing advertising industry
that provided vast revenues for the newspapers. Urbanization and immigration
produced large concentrations of population from which a mass readership could be
drawn. Finally, a more democratic social order generated an environment in which
the “common man” gained in political and economic importance; as the first of the
mass media, newspapers were a natural outgrowth of mass society.
All of these trends reinforced one another. New printing technologies
contributed to literacy and the rise of a mass readership, commercial expansion, and
perhaps even the expansion of democracy. At the same time, the emergence of new
printing technologies was stimulated by all of these economic, social, political, and
cultural factors. As we have already seen, Gutenberg’s invention of movable type
was accompanied by changes on many fronts. The same thing happened during
the second half of the nineteenth century as a series of technological innovations
made possible the publication of mass-circulation newspapers. In neither case did
technological developments by themselves produce the epochal changes with
which they are associated. The development of these technologies was part of
a larger process of change. Yet at the same time, these technologies were major
contributors to this process. From movable type to the steam-powered rotary
press, the technology of printing exemplifies the complex, mutually reinforcing
interactions between technological and social change.
A similar process can be seen today, as digital technologies now allow the
transmission and reception of the printed word without the need for ink and paper.
With the advance of digital technologies, readers can access vast amounts of material
through the use of computers, smartphones, tablets, and electronic books. In recent
years, e-books and tablets have become especially important sources of reading
material. By early 2012, 21 percent of adults had read a book using one of these
devices, and of those who had read at least one book during the year, 28 percent
had used an e-reader or a tablet for this purpose.43 But impressive though these
statistics are, they capture only one component, and a small one at that, of the host
of revolutionary changes that have been transforming media from the middle of the
nineteenth century onward. These epochal changes are the topic of the next chapter.
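
The two e-reading percentages can also be cross-checked against each other. The sketch below is a rough consistency check; it assumes both figures refer to the same adult population over the same period, which the underlying survey may define somewhat differently.

```python
# Consistency check of the two survey percentages quoted above.
# Assumes both figures describe the same adult population and time frame.

device_readers_among_adults = 0.21  # adults who read a book on an e-reader/tablet
device_share_among_readers = 0.28   # device users among those who read >= 1 book

implied_reader_share = device_readers_among_adults / device_share_among_readers
print(f"Implied share of adults reading at least one book: "
      f"{implied_reader_share:.0%}")  # ~75%
```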
Questions for Discussion
1. Do statements that appear in print seem more true than those that are
presented through some other medium? Why?
2. One of the major trends in the world today is the spread of literacy. In many
parts of the world, only a small percentage of the population could read a
generation ago; today, the majority of the population is literate. What sort of social,
political, and economic effects will this produce? Will all of them be good?
3. Some social critics are of the opinion that the spread of the electronic media is
destroying literacy. Standardized test scores of reading and writing ability have
in fact gone down in recent years. Are the new media the main cause? If so, is
the slow destruction of the printed media by the electronic media necessarily
a bad thing?
4. Lurid newspaper stories helped to fan a conflict between Spain and the United
States that ultimately led to war. The Vietnam War, another overseas conflict
involving the United States, met with a great deal of opposition, due in part
to the continual presentation of that war on television. Marshall McLuhan
would have thought this significant. Can you imagine why? Can you think of
any fundamental differences between newspaper accounts of a war and those
presented on television?
5. It is sometimes asserted that books, newspapers, and magazines are doomed,
and that digital technologies in the form of e-books, tablets, and smartphones
will soon displace them. Do you agree? Do conventional printed materials have
a future? What, if any, advantages do they have over digital media?
Notes
1. Quoted in Maurice N. Richter, Jr., Technology and Social Complexity (Albany: State
University of New York Press, 1982), pp. 16–17.
2. Jared Diamond, Guns, Germs, and Steel: The Fates of Human Societies (New York and
London: W.W. Norton, 1997), pp. 215–238.
3. James Thorpe, The Gutenberg Bible: Landmark in Learning (San Marino, CA: Huntington
Library, 1975), p. 3.
4. Xing Runchuan, “The Invention and Development of Printing and Its Dissemination
Abroad,” in Ancient China’s Technology and Science (Beijing: Foreign Languages Press,
1983), pp. 383–391.
5. Albert Kapr, Johann Gutenberg: The Man and His Invention, translated by Douglas Martin
(Aldershot, England: Scolar Press, 1996).
6. Frances and Joseph Gies, Cathedral, Forge, and Waterwheel: Technology and Invention in
the Middle Ages (New York: HarperCollins, 1994).
7. Kapr, Johann Gutenberg, pp. 109–122.
8. D. S. L. Cardwell, Turning Points in Western Technology: A Study of Technology, Science
and History (New York: Science History Publications, 1972), pp. 20–24.
9. Thorpe, The Gutenberg Bible, p. 4.
10. Fernand Braudel, Capitalism and Material Life, 1400–1800 (New York: Harper & Row,
1975), p. 298.
11. Elizabeth Eisenstein, The Printing Revolution in Early Modern Europe (New York:
Cambridge University Press, 1984), pp. 195–199.
12. Lewis Mumford, Technics and Civilization (New York: Harcourt, Brace and World,
1934), pp. 21–22.
13. Eisenstein, The Printing Revolution, p. 75.
14. Ibid., pp. 43–44.
15. Ibid., p. 239.
16. Ibid., p. 147.
17. Richard Gawthorp and Gerald Strauss, “Protestantism and Literacy in Early Modern
Germany,” Past and Present 104 (August 1984): 6.
18. Quoted in David Cressy, Literacy and the Social Order: Reading and Writing in Tudor and
Stuart England (New York: Cambridge University Press, 1980), p. 3.
19. Eisenstein, The Printing Revolution, p. 170.
20. Cressy, Literacy and the Social Order, p. 16.
21. A. L. Stinchcombe, “Social Structure and the Invention of Organizational Forms,”
in Tom Burns (Ed.), Industrial Man (Harmondsworth, England: Penguin, 1969),
pp. 163–165.
22. Scott Cook, “Technology and Social Change: The Myth of Movable Type,” The Weaver 4,
1 (Fall 1985): 5.
23. Kichung Kim, “Hyol-ui Nu: Korea’s First ‘New’ Novel,” Korean Culture 6, 4 (December 1985):
41–45.
24. Cressy, Literacy and the Social Order, p. 176.
25. Barry Sanders, A Is for Ox: The Collapse of Literacy and the Rise of Violence in an Electronic
Age (New York: Random House, 1995), pp. 29–32.
26. See Herbert Marshall McLuhan, The Gutenberg Galaxy: The Making of Typographic Man
(Toronto: University of Toronto Press, 1962), and Understanding Media: The Extensions
of Man (New York: New American Library, 1964).
27. McLuhan, The Gutenberg Galaxy, p. 111.
28. Robert W. Desmond, The Information Process: World News Reporting to the Twentieth Century
(Iowa City: University of Iowa Press, 1978), pp. 62–63.
29. Edwin Emery and Michael Emery, The Press and America: An Interpretive History of the
Mass Media, 7th ed. (Englewood Cliffs, NJ: Prentice-Hall, 1992), p. 115.
30. Desmond, The Information Process, p. 295.
31. Emery and Emery, The Press and America, p. 188.
32. John F. Kasson, Civilizing the Machine: Technology and Republican Values in America,
1776–1900 (New York: Penguin Books, 1977), pp. 203–204.
33. Desmond, The Information Process, p. 298.
34. John Tebbel, The Compact History of the American Newspaper (New York: Hawthorne
Books, 1969), p. 121.
35. Emery and Emery, The Press and America, p. 93.
36. Desmond, The Information Process, p. 77.
37. Emery and Emery, The Press and America, p. 85.
38. Edwin Emery, The Press and America, 3rd ed. (Englewood Cliffs, NJ: Prentice-Hall,
1972), p. 285.
39. Emery and Emery, The Press and America, p. 191.
40. Desmond, The Information Process, p. 388.
41. Edwin Emery, The Press and America, 3rd ed. (Englewood Cliffs, NJ: Prentice-Hall,
1972), p. 365.
42. George H. Douglas, The Golden Age of the Newspaper (Westport, CT, and London:
Greenwood, 1999), p. 111.
43. Lee Rainie, Kathryn Zickuhr, Kristen Purcell, Mary Madden, and Joanna Brenner, “The
Rise of E-Reading,” Pew Internet & American Life Project (April 12, 2012), accessed on
June 15, 2012, at http://libraries.pewinternet.org/2012/04/04/the-rise-of-e-reading/.
Chapter Thirteen
The Electronic Media: From the Telegraph to Television
During the 450 years that followed the invention of movable type in the West, the
printed word was the dominant form of mass communication. By the middle of
the nineteenth century, words could be sent and received at unprecedented speeds
through the newly invented telegraph. Although it wasn’t identified as such when
it was invented, the telegraph is a digital device, one that uses electrical pulses
to transmit combinations of dots and dashes representing individual letters—the
Morse Code. Few people send or receive telegrams today, but in its time the
telegraph was as revolutionary as the Internet is in ours. It lifted the isolation of
remote communities, helped members of far-flung families to keep in touch with
one another, greatly facilitated the operation of vast railroad networks, and brought
news from all around the world.1 Still, the telegraph had its limitations. Not only
were trained operators required to encode and decode messages, large amounts of
capital were needed to string up telegraph wires between communication points.
Moreover, telegraphic communication was thwarted when the installation of wires
was physically impossible or costs were prohibitive, as in the case of ships at sea or
geographically remote places.
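The digital character of Morse telegraphy is easy to demonstrate. The short Python sketch below encodes a word as dot-dash groups; it is a modern illustration rather than period telegraph practice, and the MORSE table and to_morse helper are names invented here, although the dot-dash assignments themselves follow standard International Morse code.

# Illustrative only: encode letters as International Morse dot-dash groups.
MORSE = {
    "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
    "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
    "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
    "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def to_morse(text):
    # Characters not in the table (spaces, punctuation) are simply skipped.
    return " ".join(MORSE[ch] for ch in text.upper() if ch in MORSE)

print(to_morse("news"))  # prints: -. . .-- ...

Each letter becomes a short pattern of electrical pulses, which is why the chapter can fairly describe the telegraph as a digital device.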
These limitations began to be overcome toward the end of the century as sci-
entists and engineers in several countries learned how to send and receive signals
through empty space. The first product of their efforts was radio. The experience
gained with radio subsequently supplied the foundation for an even more far-
reaching medium: television. The social, cultural, political, and economic effects of
these media have been at least as great as those of printing. We are still attempting
to comprehend how radio and television have affected our lives, even as we now
find ourselves facing the opportunities and challenges of digital media. The latter
will be explored in the next chapter, but first we will survey the causes and conse-
quences of advances in radio and television technology.
The Invention of Radio
It might be thought that the limitations of telegraphic communication created the
motivation to develop a method of electrical communication that did not require
wires. And it is certainly true that much of radio’s development was “pulled”
along by the demand for better methods of communication. But that came later;
the first successful attempt at sending and receiving wireless communications had
nothing to do with practical concerns. As we saw in Chapter 5, radio transmis-
sion and reception were first used by Heinrich Hertz as a way of empirically testing
the theoretical formulations of James Clerk Maxwell. Hertz produced radio waves
with an oscillator, a device that rapidly generated electrical pulses. To receive or
“detect” the radio waves produced by these pulses, Hertz employed a piece of wire
with small balls at each end, bent into a near-circle. When this was placed in the
vicinity of the oscillator, a spark jumped between the two balls in conjunction with
the sparking of the oscillator. Through a series of experiments using these devices,
he was able to confirm Maxwell’s calculations that electrical waves traveled at the
speed of light. Additionally, he showed that these waves propagated themselves in
accordance with the laws of optics that had first been formulated for visible light.
Hertz’s concerns were entirely in the realm of scientific inquiry, but others saw
in his apparatus an embryonic technology that might realize the dream of wireless
communication. Many experimenters tackled the numerous technical problems
that kept radio from being a practical possibility. Although Guglielmo Marconi is
often hailed as the inventor of radio, as with most significant inventions, there are
many others whose contributions were equally important, such as Oliver Lodge, who
developed the method of tuning a receiver so it could receive signals of a specific
wavelength. At this point radio was simply wireless telegraphy, useful only for send-
ing and receiving dots and dashes. The transmission and reception of actual sounds
was made possible through the use of continuous waves to carry a signal. This, in
turn, had been made possible by Reginald Fessenden’s incorporation of the heterodyne
principle into radio technology. Heterodyne transmission and reception used two
separate alternating currents, one generated by the transmitter and the other by
the receiver. Oscillating waves transmitted at a certain frequency mixed with the
receiver’s lower frequency waves to produce a wave that could be easily received.
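The arithmetic behind heterodyning can be stated compactly. By a standard trigonometric identity, multiplying two sinusoidal waves produces components at the sum and the difference of their frequencies:

\sin(2\pi f_1 t)\,\sin(2\pi f_2 t) = \tfrac{1}{2}\left[\cos\bigl(2\pi(f_1 - f_2)t\bigr) - \cos\bigl(2\pi(f_1 + f_2)t\bigr)\right]

The numbers that follow are illustrative rather than drawn from Fessenden’s equipment: mixing an incoming 500 kHz wave with a locally generated 499 kHz wave yields a 1 kHz difference component, a far more tractable frequency for the receiving apparatus than the original radio frequency.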
Important as the heterodyne principle was, an even greater breakthrough came
with the invention of better equipment for the transmission and reception of radio
waves. These devices were based on the principle of using a small flow of current
to govern a larger current. In this application the small current produced by a radio
wave could be used to regulate the stronger current that drove the speaker of a
radio. The first of these devices was the diode, invented in 1904 by John Ambrose
Fleming, whom we met briefly in Chapter 4. It established the principle of con-
trolling current electronically, but it had its limitations. A much more workable
device was the triode, an invention of Lee de Forest. By using a separate electrode
to control the current, the triode served as an excellent basis for a radio receiver, as
well as for its amplifier.2

Although Guglielmo Marconi did not invent radio, he played an important role in its commercial application. Here he is at age 27 with some early devices for transmitting and receiving wireless telegraphic messages. (Hulton-Deutsch Collection/Corbis)
The Origins of Commercial Radio
While technical development proceeded at a good clip during these early years, few
thought of radio as a means of providing entertainment and information for a large
audience. Radio was seen simply as wireless telegraphy, and for many years its main
use was for ship-to-shore communications. Radio was also used to transmit messages
across oceans, thereby obviating the need for undersea cables, but high cost and
unreliability plagued these efforts. Military forces also made extensive use of radio
communications, and World War I stimulated a good deal of technical progress,
especially in the development of mass-production techniques for the manufacture
of radio components.
During the 1920s interest in radio mounted as large numbers of amateur radio
operators constructed their own equipment, sent out messages, and attempted to
pick up the signals of others. Their chatter cluttered up the airwaves, and with a
mischievous intent similar to that of today’s hackers, a few even sent out bogus
orders to naval vessels. But others had more serious aims. Throughout the United
States, amateur radio operators, many of whom had developed their skills during
World War I, began to broadcast news, weather bulletins, musical recordings, and
even live musical performances. In a manner resembling the early days of the per-
sonal computer, enthusiastic amateurs made significant contributions to the new
technology by constructing their own apparatus, finding new applications for radio
communication, and in general exploring the potentials of the new medium.
At this time business enterprises began to take notice of radio’s commercial
possibilities. Realizing that regular programming could stimulate a sizable market
for its radio sets, in 1920 the Westinghouse Electric and Manufacturing Company
engaged one of its employees, himself a dedicated radio amateur, to construct a
broadcasting station atop one of the buildings of its Pittsburgh headquarters. On
November 2, 1920, radio station KDKA went “on the air,” giving up-to-the-minute
results of the 1920 presidential election to an audience of several hundred. Within
a matter of months Westinghouse and General Electric had set up radio stations in
a number of cities. Other entrepreneurs followed suit, and so by 1922 more than
500 stations were transmitting music, sporting events, speeches, and news programs.
Commercial broadcasting took on a growing importance as radio began to reach the
general public and not just a relatively small group of technically minded amateurs.
At first, many of these programs could be low-budget endeavors; phonograph
records required only small initial costs, and most live performers were willing to
appear free of charge. This did not last for long; the American Society of Composers,
Authors, and Publishers (ASCAP) began to demand payment for the broadcast of
recorded music, and in 1923 it backed this demand with a successful lawsuit. In
similar fashion, live performers started to expect payment for their services. Other
countries, most notably Great Britain, had by this time established a national radio
system through which the expenses of broadcasting were defrayed by licensing fees
paid by owners of radio sets. But in the United States free enterprise was the order
of the day, and radio broadcasting was sustained by companies willing to sponsor
programs in return for the airing of their commercials. A host of new programs made
their appearance over the airwaves, and in the years that followed, listening to radio
programs became an integral part of American life. At the same time, however,
listeners were subject to a steady barrage of commercial “messages,” exhorting them
to buy the sponsors’ products and promising them that many of their problems could
be solved by using the proper mouthwash or driving the right car. As had happened
before with other advances in the ability to communicate, impressive technological
achievements were put into the service of the mindless and the mercenary.

Radcliffe students operating an early radio station. (Bettmann/Corbis)
The Rise of Television
For all of radio’s accomplishments, only a portion of the electronic media’s poten-
tial had been demonstrated. Radio could only reach one of the senses; if sight
could be added to sound, the appeal of electronic communications could be greatly
enhanced. The dream had long been there; the idea of electrically transmitting and
receiving visual images existed before even radio was a practical reality. In 1879,
readers of the English magazine Punch were presented with an illustration of a
couple watching a tennis match being shown on a large screen over their fireplace.
Three years later, a French artist drew equally prescient illustrations that depicted
people viewing television screens filled with pictures of distant lectures, wars, and
even girlie shows.3 And in 1907 in the pages of Scientific American, the term “televi-
sion” appeared for the first time in print.
A substantial amount of inventive activity took place in the 1920s on both
sides of the Atlantic as inventors, government agencies, and private firms sought to
make television a practical reality. The British Broadcasting Corporation took an
early lead in 1929 when it began to transmit half-hour television programs five days
a week. The technology employed by the BBC used an electromechanical system
in which a rotating disc was used to convert electrical signals into visible images.
This technology had many drawbacks, not the least of which was that it required
extremely high levels of illumination. Attention therefore turned to the develop-
ment of all-electronic methods of transmission and reception. These efforts were
boosted by a series of fundamental discoveries and inventions that began to occur
at the end of the nineteenth century. Most important of these was the cathode ray
tube. This device was based on a principle, discovered by Ferdinand Braun in 1897,
that a beam of electrons moving along a closed tube could be deflected by a magnet.
In Russia, Boris Rosing used this tube for television reception by placing a screen of
photoelectric cells in the tube and activating them with moving electrons.

In pre-television days, radio was a chief source of family entertainment. (Photo by PhotoQuest/Getty Images)
This was only a laboratory demonstration and not a complete system of sending
and receiving television images. A crucial step was taken by Vladimir Zworykin,
a Russian émigré who had served as an assistant to Rosing while a student at the
Institute of Technology in St. Petersburg. After coming to the United States, he
was employed by Westinghouse and then by the Radio Corporation of America
(RCA), where he worked on television in addition to a number of other projects.
In 1928 he produced the first workable electronic television camera, which he
called the “iconoscope.” It employed a screen made up of thousands of individual
elements that took on an electrical charge when they were struck by light reflected
from the object whose image was to be transmitted. The screen was rapidly scanned
by a beam from an electron gun, which resulted in an electrical discharge that was
amplified and transmitted to the receiver. The receiver then reconstructed these
electrical discharges into points of light on the television screen.4
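The essence of this scanning process, turning a two-dimensional image into a one-dimensional electrical signal and back again, can be shown with a toy model. The Python sketch below is a hypothetical illustration, not a model of Zworykin’s circuitry: it reads a small grid of brightness values row by row into a serial “signal” and then reconstructs the grid.

# Toy raster scan: a 3x3 grid of brightness values (0-255).
image = [
    [0, 255, 0],
    [255, 0, 255],
    [0, 255, 0],
]

# "Scan" the screen: read the rows out, left to right, top to bottom.
signal = [pixel for row in image for pixel in row]

# The receiver rebuilds the picture from the serial signal.
width = 3
rebuilt = [signal[i:i + width] for i in range(0, len(signal), width)]
assert rebuilt == image  # the picture survives the round trip

Real television adds timing, synchronization, and amplification on top of this idea, but the row-by-row readout is the heart of it.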
Zworykin’s iconoscope was a major accomplishment, but as often happens in
the history of technology, parallel inventions were being made elsewhere. While
Zworykin was engaged in the early phases of his research, an Idaho farm boy named
Philo Farnsworth astonished his high school science teacher by presenting him with
plans for an apparently workable television system. A few years later Farnsworth suc-
ceeded in getting financial backing for his research, and by the late 1920s he had pro-
duced a workable, if crude, system of electronic television. But after an expenditure of
a million dollars by 1938, Farnsworth had not produced a commercially viable method
of transmitting and receiving television. Even so, Farnsworth held a large number of
key patents on devices and processes of considerable importance, patents which had
been upheld only after a good deal of litigation with Zworykin and his backers.5
In 1939 RCA began the regular broadcast of television programs to a few
thousand receivers in New York City. A number of drawbacks were still evident;
most of the early receiving sets had five-inch screens, and their dim pictures meant
that they had to be viewed in darkness. Many technical problems remained to be
solved in such areas as the design of antennas and the relaying of signals over large
distances. Still, television had become a practical possibility.
The Federal Government Steps In
The development of radio and television in the United States was largely the
work of private enterprise. Even so, the activities of the federal government gave
a substantial boost to the electronic media. Military contracts stimulated a great
deal of useful research. National security was also invoked when the government
played the leading role in the creation of a unified radio industry. In the years after
World War I, government officials, especially those in the Navy, were increasingly
concerned about the monopoly that the British Marconi Company had over inter-
national radio communication. After some prodding by the Navy, the General
Electric Company bought a controlling interest in Marconi’s American subsidiary
and transferred it to the newly formed RCA. There still remained the problem of
RCA’s gaining access to the numerous patents that covered various aspects of radio
production and broadcasting. This was ultimately resolved by giving the two other
major patent holders, Westinghouse and American Telephone and Telegraph, a
financial stake in RCA in return for the use of their patents. The creation of a
radio monopoly was thus the result of an action undertaken at the behest of the
U.S. government.
In addition to these activities, the federal government used its regulatory power
to ensure an orderly environment for broadcasting. For clear reception it is neces-
sary for each broadcaster to remain on a single frequency. (When you “tune in” to a
radio station you are selecting a specific frequency, such as 790 kHz or 89.3 MHz.)
Should a number of radio stations broadcast their programs on the same frequency,
the result is a chaos of conflicting sounds. In order to address this problem, in 1912
the Department of Commerce and Labor began to require the licensing of all broad-
casters. When a license was awarded, it stipulated the exact frequency on which
the station could broadcast, as well as its times of operation. But this authority was
successfully challenged in court, and by the mid-1920s a large number of broadcast-
ers were transmitting all over the frequency spectrum. The airwaves had become a
cacophony of interfering signals.
To remedy this situation, Congress created the Federal Radio Commission
in 1927, giving it broad powers to issue licenses and assign frequencies. In 1934
government oversight of communications media was consolidated through the
creation of the Federal Communications Commission (FCC). The licensing and
frequency allocation powers of the FCC worked to the detriment of the very few
stations whose licenses were revoked because their broadcasts were deemed not
to be in the public interest. The strict assignation of frequencies also frustrated
would-be broadcasters who were unable to gain licenses. But for the industry as a
whole, the Federal Radio Commission and its successor, the FCC, were essential
elements in the system of radio broadcasting. Without federal regulation, broad-
casters could not be assured of their own special frequencies, and the air would be
filled with signals that constantly intruded on one another. Most radio entrepre-
neurs probably believed in free enterprise, but they realized that in the absence of
regulation their pursuit of individual self-interest would result in the destruction
of their industry.
The issue of federal regulation once again came into prominence when televi-
sion became ripe for commercial development in the late 1930s. One of the major
issues confronting the industry was the setting of technical standards. Each one of
the major companies involved in television research employed a different system
for transmitting and receiving televised images. At the same time, one of these
companies, RCA, had made the most progress in putting together a total television
system. This meant that the standards employed by RCA would have to be used if
the FCC decided to permit the immediate broadcast of commercial television pro-
grams. The other manufacturers knew that they lagged behind RCA, but they felt
that their own technologies had a great deal of promise. They certainly were not
eager to adopt RCA’s standards and pay royalties for its patents.
Faced with these contentious issues, the Commission gave the go-ahead for
commercial television broadcasting in 1940, only to rescind its decision less than
a year later. A special engineering committee was then set up to consider the issue
of standards. By the middle of 1941 these had been successfully resolved, although
America’s entry into World War II put a halt to commercial television. When TV
went into its period of rapid growth in the late 1940s and early 1950s, some of its
success could be attributed to the uniform technical standards that were developed
under FCC sponsorship.6 Unlike the early years of the personal computer, the tele-
vision market was not fragmented by incompatible standards and equipment.
Problems of Regulation
The efforts of the FCC to promote uniform technical standards were essential to
the growth of the electronic media industry. The consequences of the FCC’s other
activities, however, are more controversial. According to the 1934 Communications
Act, the FCC is supposed to grant and renew broadcasting licenses on the basis of
“the public interest, convenience, and necessity.” In fact, the FCC has been quite
reluctant to influence the radio and television industries through the exercise of its
licensing powers. Very few licenses have not been renewed; as long as a station has
met some minimal level of community service it is deemed to have met the letter, if
not the spirit, of the regulations. An FCC chairman might indict American televi-
sion as “a vast wasteland,” as Newton Minow did in 1961, but other than sponsoring
the development of noncommercial television, his agency had scant influence over
the quality of TV programming.
It can be fairly argued that the FCC’s active involvement in the shaping of
television programming would amount to unwarranted government interference
with the private sector. There have been times when FCC actions have had signifi-
cant consequences for the broadcasting industry, and its decisions have on occasion
benefited particular firms at the expense of other firms. One particularly notori-
ous example was the FCC’s role in reallocating established FM frequencies in the
1940s, a move that was advantageous to the dominant radio corporation, RCA, but
detrimental to the interests of FM’s inventor, Edwin Armstrong.7 RCA also reaped
considerable benefits from the FCC’s decision in 1953 to support its color television
technology rather than the one developed by CBS. Seven months after this deci-
sion, the chairman of the FCC resigned his post and took a high executive position
with NBC, at that time a subsidiary of RCA.8 Much the same had happened during
the controversy over the shifting of FM frequencies, when the FCC’s chief engineer
left his post to become the head of RCA’s engineering department.
These may have been extreme cases; even critics of the FCC do not generally
believe that the agency has been riddled with corruption and conflicts of interest.
But many critics do believe that it has failed to serve the public interest because
it has been “captured” by the broadcast industry. This is a common feature of gov-
ernment regulation. Many regulatory agencies owe their creation to the demands
of the industry that they regulate, as when in the 1930s the airlines promoted the
idea of a new regulatory agency with the intent of using it for their own purposes.9
This is precisely what happened in the case of the regulation of the radio industry,
for it was the demands of broadcasters that led to the establishment of the FCC
in 1934.10 Regulatory agencies such as the FCC are supposed to be independent
of other branches of government and therefore insulated from political pressure.
At the same time, however, their insulation often results in a lack of political
support, so the agencies often find that support in the very industries that they are regulating.11
Moreover, regulatory agencies often lack the expertise and information necessary
to act independently. Compounding a potential conflict of interest, there is often
close personal contact between the personnel of these agencies and representatives
of the industry that they are supposed to be regulating. And, as in the case of the
FCC officials who took positions with NBC and RCA, they are often alert to the
possibility of going over to the other side.
Occasionally, a regulatory issue will leap into political prominence and disrupt
the comfortable relationship between a regulatory agency and the industry that it is
regulating. This is what happened when a major accident at the Three Mile Island
nuclear power plant in Pennsylvania put the Nuclear Regulatory Commission’s role in
regulating nuclear power into the political spotlight. But this is rare; for the most
part, regulatory agencies view the industry that they regulate as a client to be served,
sometimes to the detriment of the public interest.
The FCC seems to conform to this general process, for it has failed to take strong
and independent positions vis-à-vis the radio and television industry.12 Hampered
by lack of sufficient information and expertise, the FCC usually has agreed to the
initiatives of industry representatives. And tempted by the opportunity to take
lucrative positions in the industry, FCC commissioners have generally opted for
the status quo, thereby preserving the primary purpose of broadcasting as being little
more than a way of making large sums of money. This proclivity for maintaining
existing arrangements has also manifested itself in technological matters. The FCC
did not provide much support for innovations such as FM broadcasting, UHF, and
pay and cable TV. This conservatism was financially advantageous for established
broadcasters, but it may have slowed the pace of technological advance.13
The Television-Viewing Public
Television ownership has been virtually universal for decades, to the point where
the inhabitants of a home without a television are de facto members of some sort
of counterculture. The time spent watching television in the United States has
not declined in recent years, despite the availability of new sources of entertain-
ment like video games and Internet sites; indeed, the time spent watching
television has held steady. Only work and sleep take up more time than watching
television, and children spend more hours in front of a TV set than they do in
school. According to the Nielsen media rating organization, during the 2010–2011
television season the statistically average individual spent 34 hours and 39 minutes
watching TV each week.14 This is the average for the entire American population;
the extent of television viewing differs when membership in broad age groups is
taken into account. Children and teenagers watch less television than adults, while
individuals over the age of 65 spend about 13 more hours watching TV than the
average viewer.15
Given the many complaints about the vapidness of most television program-
ming, it might be thought that better-educated people spend less time watching
television than people with less education. This is true up to a point; the number
of hours spent watching television declines as the educational levels of viewers rise.
But education-related differences in the amount of TV use are not large, and they
have been narrowing in recent years.16 The extent of TV viewing also declines as
incomes rise, but again the divergence is not striking. Although the extent of televi-
sion viewing is affected by socioeconomic variables like education, income, gender,
age, and race, watching TV is a significant part of the daily routines of people in
every station of life.
Violence on Television and Its Consequences
One of the most pervasive concerns about television’s influence centers on the vast
amount of violent and aggressive behavior depicted on the screen. Violence is a pro-
gramming staple; 61 percent of TV programs contain some violence, and children’s
programming is more violent than prime-time shows aimed at general audiences.17
With all of this mayhem appearing on TV, it is understandable that critics have indicted
television for contributing to the rampant violence that plagues American society.
Particular attention has been given to the connection between TV viewing and
aggressive behavior in children, and a large number of studies have been devoted
to this topic. Some of these studies have been based on laboratory experiments
that allow the researcher to eliminate extraneous factors in order to determine the
relationship between exposure to a filmed act of aggression and subsequent behav-
ior. In a typical experiment, one group of children is exposed to a film depicting
violence while a control group is not. The subsequent behavior of the two groups
is then compared. A classic experiment of this sort was conducted in the 1960s.
One group of preschool children witnessed an actor physically and verbally assault-
ing Bobo the Clown, a large, inflated plastic doll. The control group saw nothing.
The children were then allowed to play in a room containing a Bobo doll and
other toys. The results were unequivocal; the children who had seen the simulated
aggression displayed higher levels of aggressive behavior toward the hapless toy, in
some instances even directly imitating the acts they had viewed, such as yelling out
“Lickit! Stickit!” while pounding on the toy with the handle of a mallet.18
Many studies of this sort have been conducted. Some have found indirect as
well as direct consequences of exposure to filmed violence. For example, children
are more likely to play with toy guns after seeing filmed aggressive acts, even if
gunplay did not appear in the film. All in all, these experimental studies are nearly
unanimous in finding a connection between seeing acts of violence and subse-
quently engaging in aggressive acts.19
Laboratory experiments such as these can always be criticized because of their
artificiality. In their attempt to reduce or eliminate extraneous influences, experi-
menters may create an environment that bears little resemblance to reality. A film
clip presenting violent behavior is not a television program, in which violent acts
are presented in conjunction with a variety of other messages. Experiments also
deal only with immediate cause-and-effect relationships. They therefore do not
take long-term, cumulative effects into account. Finally, in these experiments the
subjects may think that aggressive acts are permitted, and even expected. For all
these reasons, the real-world applicability of these experiments can be questioned.
Other researchers have attempted to circumvent the inherent limitations of
laboratory experiments by conducting more “naturalistic” studies. In these, one
group of children views “normal” violent programming, while another group does
not. Their subsequent behavior in everyday settings is then directly observed or is
reported by parents or teachers. Although they have a somewhat contrived quality,
these studies can be taken as a reasonable reflection of real-world situations. And
as with the experiments previously noted, viewing violent programming was associ-
ated with subsequent aggressive behavior, while neutral programming had no such
effect.20
A third type of research examines actual viewing habits to see if there is an
association (or “correlation”) between viewing violence on television and aggres-
sive or violent behavior. These studies generally use interviews and self-reports as
their primary source of information about viewing habits and behavior. For the most
part, research of this sort does show a positive correlation between watching violent
TV programs and aggressive behavior. Children and adolescents who prefer violent
TV programs tend to be more aggressive in their behavior. There is a basic problem
with correlational studies, however, for correlation is not the same thing as causa-
tion. The correlation of A with B does not necessarily mean that A causes B. It is
also possible that B causes A, or that a third factor or factors could be the cause of
both of them. Accordingly, a child who was already violence-prone might seek out
violent programs. It is also possible that an impoverished, culturally deprived back-
ground may be the cause of both the viewing of violent programming and aggressive
behavior. These are reasonable objections, but several research programs have been
able to statistically isolate the viewing of violent TV programming from other fac-
tors that may cause violent behavior. When this is done, the viewing of violent TV
shows is shown to be an independent source of violent and aggressive behavior. It
may most strongly affect those who are already predisposed to this kind of behavior,
but it is not confined to them.21
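The statistical pitfall described above, and the value of isolating the variable of interest, can be made concrete with a small simulation. The Python sketch below is purely hypothetical and models none of the studies cited in this chapter: a single “background” factor drives both viewing and aggression, and the two measures come out strongly correlated even though neither causes the other.

import random

random.seed(0)
n = 10_000

# One confounding factor influences both measures; there is no direct
# causal link between "viewing" and "aggression" in this simulation.
background = [random.gauss(0, 1) for _ in range(n)]
viewing = [b + random.gauss(0, 1) for b in background]
aggression = [b + random.gauss(0, 1) for b in background]

def corr(x, y):
    # Pearson correlation coefficient, computed from scratch.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(round(corr(viewing, aggression), 2))  # about 0.5, with no causal arrow

Researchers escape this trap by measuring candidate confounders and statistically controlling for them, which is what the research programs mentioned above did.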
It must be stressed that the conclusions drawn from all of these different stud-
ies cannot be applied to the behavior of specific individuals. For most children,
watching violent acts on television probably will not lead to aggressive or violent
behavior, but for some children it may. TV affects different people in different ways.
Hard-and-fast generalizations are difficult to come by, but what Wilbur Schramm
noted many years ago still holds today: “For some children under some conditions,
some television is harmful. For other children under the same conditions, or for the
same child under other conditions, it may be beneficial. For most children, under
most conditions, most television is neither particularly harmful nor particularly
beneficial.”22
Like any voluntary act, violent behavior is a product of a complex set of
motivations and inhibitions. All of us probably have had moments when we
wanted to commit a violent act. We usually haven’t done so for three basic reasons:
(1) we have learned that such actions are likely to result in retaliation, (2) we know
that they usually do not solve the problem, and (3) we have internalized a code of
behavior that discourages such acts. Most likely, the operation of all of these pre-
vents us from giving vent to our violent urges.
Television has the capacity to alter all of these inhibiting factors. In some tele-
vised depictions of violence, the retaliation for the violent act is delayed and muted,
if it comes at all. For nearly 75 percent of televised violent scenes, there is no punishment or
even condemnation of the violence that was depicted.23
To make things worse, the second inhibiting factor, a belief that violence does
not solve most problems, is not always supported by television programming. Illegal
and violent acts are sometimes used to attain socially approved goals.24 Finally,
there is the problem of television’s contribution toward desensitizing people, espe-
cially children, to violence, thereby attenuating the development of a personal code
of ethics that discourages violent acts. Such long-term effects of televised violence
are difficult to prove, but there is a fair amount of evidence that exposure to tele-
vised violence dampens a child’s emotional response to the witnessing of violent
acts and diminishes his or her concern about such acts.25
Television can be a powerful influence on thought and behavior, although it
does not necessarily override the influence of family, school, church, and peers.
Research into the consequences of television viewing faces formidable challenges
in separating TV’s effects from other influences. Even so, the majority of the many
studies that have been done have found a connection between on-screen vio-
lence and real-world violence. As a result of these studies, in 1985 the American
Psychological Association passed a resolution expressing concern that televised
violence could make children (1) less sensitive to the pain and suffering of others,
(2) more fearful of the world around them, and (3) more likely to behave aggres-
sively toward others. The organization followed this up with a publication that
presented a great deal of evidence linking the viewing of television violence with
subsequent aggressive and violent behavior.26
Television violence has been identified as a significant social problem, but it is
not one with an easy solution. The depiction of violence may be essential to a story,
even one with great artistic merit; many of Shakespeare’s works include scenes
of appalling brutality. Occasional violence also accounts for some of the appeal
of sports like football and hockey. In the absence of widespread popular support
for government censorship, which would likely be overturned on constitutional
grounds, about all that can be hoped for is a greater degree of responsibility on the
part of the television industry. A technological fix of sorts is available in the form
of the V-chip, which can be used to block programming according to an age-based
rating system.27 But the most effective way of avoiding violent programming is
for parents to take an active role in monitoring their children’s consumption of
television programs and to use of the most effective way of regulating viewing: the
on/off control.
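The V-chip’s blocking rule can be suggested in a few lines. The Python sketch below is a hypothetical illustration rather than the chip’s actual protocol; the rating labels are those of the U.S. TV Parental Guidelines, but the comparison logic is simply invented here for clarity.

# Ratings ordered from most restrictive audience to least.
RATING_ORDER = ["TV-Y", "TV-Y7", "TV-G", "TV-PG", "TV-14", "TV-MA"]

def blocked(program_rating, parental_limit):
    # Block any program rated above the limit a parent has set.
    return RATING_ORDER.index(program_rating) > RATING_ORDER.index(parental_limit)

print(blocked("TV-MA", "TV-PG"))  # True: the program exceeds the allowed rating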
Delivering the News
Until recent times most people were blissfully ignorant of the world around them.
Travelers might bring stories of distant places, and the literate few could read of
them. Information traveled very slowly; the Battle of New Orleans was fought
two weeks after a treaty formally ended the War of 1812, for the combatants were
unaware of its signing. During the nineteenth century the railroad, the telegraph,
and the appearance of popular newspapers brought the world closer, but coverage
was still slow, spotty, and often inaccurate. Past the borders of one’s own com-
munity was a world dimly perceived by most people. All that has changed today.
Instantaneous communications tell us of uprisings in the Middle East, wars in
Africa, and financial crises in Europe. Although we do not quite live in a “global vil-
lage,” modern communications technologies have extended our vision to a degree
scarcely imaginable not long ago.
In 1960, survey respondents indicated for the first time that television had
replaced newspapers as their prime source of news.28 Television continues to be
the most important source of news, especially news regarding one’s community or
region; according to a Harris Poll conducted in 2010, 76 percent of the respondents
indicated that when looking for local news they went to local television news “all
of the time” or “occasionally.” Local newspapers also did reasonably well; 69 percent
of those polled went to them for local news “all of the time” or “occasionally.”29

A family in Jordan gathers around the “electronic hearth.” (Glenn Campbell UPI Photo Service/Newscom)
For decades television has been a prime source of news about the nation and
the world; in 2010 the evening news programs on ABC, CBS, and NBC had a total
viewership of 21.6 million.30 But lying behind these figures is a steady erosion of
viewership as the Internet has become a major source of news. This trend is espe-
cially notable when the viewing audience is broken down into age categories, as
younger cohorts are much more likely to use the Internet and other computer-based
news sources. Only 57 percent of those aged 18–34 said that they preferred newspa-
pers, magazines, and network television as sources of news, a clear contrast with the
81 percent of those over the age of 55 who expressed this preference.31
Although television has a great potential to bring the outside world into our
homes, the effects of watching television news have been negligible for large seg-
ments of the viewing public. According to a survey conducted by the Pew Research
Center in 2007, only 38 percent of the respondents who got most of their informa-
tion from evening TV news programs correctly answered at least 15 out of 23 basic
questions about current events. In contrast, 51 percent of respondents who designated
National Public Radio as their prime news source were members of the “High
Knowledge Group.” But NPR enthusiasts shouldn’t feel too smug; 54 percent of
those who named The Daily Show and The Colbert Report as their prime sources of
news also placed in this group.32
News presented on television is fundamentally different from the news that
appears in newspapers. In part, it is a matter of coverage; all of the verbal informa-
tion presented in a 30-minute news program would fit on one page of a newspaper.
The way the news is covered in the two media is also strikingly different. Newspaper
reporting tends to be detached and impersonal, whereas TV reporting is more like
storytelling. A news report is presented in narrative form, with a theme being
developed through the use of sound and pictures as well as verbiage. A great deal
of television news is presented as a kind of visually oriented entertainment, nar-
rated by newscasters who are selected, at least in part, on the basis of their physical
attractiveness. The stories are short and fragmentary, with very little contextual
information. This is inevitable, since the average network news story runs for only
1 minute and 20 seconds.33 News of this sort is quickly forgotten; one study found
that more than 20 percent of the viewers of TV news could not recall a single item
of news an hour after having seen a news broadcast. Another study found that the
average viewer retained only 20 percent of the information presented in a simulated
news story.34
By contrast, newspaper stories are written so that the key item can be quickly
grasped. Background and analysis then follow for the reader who wants to continue
with the story.35 Viewers of TV news and casual newspaper readers may have equally
superficial understandings of the events in the news, but at least the latter can
extend their knowledge. Television news gives few such opportunities.
Television and Politics
Numerous analysts have charged that television has fundamentally altered the
political process. Television, they assert, has reduced political campaigns to trivial
“media events,” selected the sensational over the substantive, undermined politi-
cal parties, greatly inflated the cost of political campaigns, and made “image” the
primary criterion of a candidate’s appeal. Are these accusations true?
There can be little question that the politics of the television age is different
from the politics of earlier times. In 1948 Harry Truman could wage a successful
presidential campaign from the back of a railroad car. Today, televised news and
political commercials have taken the place of the whistle-stop tour. On occasion,
candidates have successfully contended for office without the aid of television adver-
tising; in 1974 a Democrat won Gerald Ford’s vacant congressional seat in a special
election despite being too impecunious to pay for television commercials. But inci-
dents such as this are highly atypical. Televised political advertising has become an
inescapable part of electioneering, as the typical congressional campaign budgets at
least one-third of its funds for television advertising. Presidential campaigns devour
enormous amounts of money, much of it used for televised appeals. From April 3 to
November 5 of 2008, a period encompassing the final part of the primary season and
the general election, $125,530,148 was spent on television commercials in support
of John McCain, while the Obama campaign countered with $235,974,838 worth
of television ads.36
For a presidential candidate, intense scrutiny by the media is an inescapable part of campaigning.
(AP Photo/Alex Brandon)
As television has greatly increased the costs of conducting campaigns, the
need for large campaign contributions has mounted. This, in turn, has increased
the danger of successful candidates being beholden to the special interests that
supply a significant portion of these contributions. As cynics (or maybe realists)
have suggested, we may end up with “the best elected officials that money can buy.”
Corruption has always been a part of political life, but the demands of television
campaigning have created something new, a scramble for campaign funds that is
perfectly legal but nonetheless a threat to democratic political institutions.
Although television has become a major force in American political life, there
are limits to its influence. Televised political advertising isn’t always decisive, and
money doesn’t always win elections. In most elections the majority of voters have
made up their minds about who will get their vote well before the campaign gets
under way. For these voters, televised advertisements help to crystallize their choices
and reinforce their existing beliefs but do not produce any fundamental alterations.
At the same time, however, there are many elections where the outcome hinges on
the choices made by voters who make their decisions late in the political campaign.
Sometimes they are a significant portion of the electorate; in the 1980 presiden-
tial campaign, which offered an unusually clear choice of ideologies and policies,
35 percent of the voters did not decide who would get their vote until the last week
of the campaign, and 10 percent were undecided right up to the day of the elec-
tion.37 Many of these voters are not greatly concerned with political affairs. They
have not closely followed the campaign, and their interest emerges only during the
last few days of the campaign. For this reason a flurry of televised political adver-
tisements appears during the last week of a campaign.38 Television can therefore
be decisive when an election hinges on the choices made by this group of voters.
In most cases, the voters who have made up their minds long before the elec-
tion takes place are the ones most strongly influenced by party loyalties. This makes
them fairly impervious to televised appeals to vote for another party’s candidate. Yet
identification with a particular party has been eroding in recent years, and many
people who call themselves Democrats or Republicans do not have a strong sense
of commitment to their party. Television has been responsible for a good deal of the
erosion of party loyalties because a candidate can use the medium to appeal directly
to the electorate.39 Also, presidential and other candidates for national office are no
longer selected by party bosses in smoke-filled rooms. They usually are the products
of state primaries, in which televised appearances and political commercials are
usually the most important part of the campaign.
At first glance, the ability of candidates to use television in order to directly
appeal to the electorate seems like a good thing, an improvement over the days of
party bosses and political machines. In reality, electronically transmitted communi-
cations from candidates to voters are usually devoid of substance. Numerous critics
of television have pointed out that the effectiveness of a TV presentation hinges
on visual appeal and a memorable sound bite, with content a secondary consider-
ation. Recent trends have exacerbated this tendency. One study found that from
1968 to 1992, network news programs reduced the average length of presidential
candidates’ spoken comments from one minute to 10 seconds. Since then, coverage
has dropped to an average of 8 seconds, and only a third of these clips addressed
substantive policy issues.40 Televised political appeals make little room for ideas,
policies, and well-articulated political positions; what really matters is the kind of
personality and presence that a candidate projects on the screen. At the same time,
television often conveys the idea that problems can be quickly resolved—requiring
less than 30 minutes in the case of a particular show, and 30 seconds for individual
commercials. It is no wonder, then, that the electorate is often attracted to candi-
dates who offer simple and quick solutions. And these “solutions” are, of course,
neatly packaged for television; instead of lengthy discussions of a candidate’s stand
on particular issues, the typical commercial consists of 30 seconds of visual images
and a few empty slogans.
The pervasive influence of television on political life should worry anyone
concerned about the fate of our political institutions. As some critics have argued,
the greatest threat to democracy may come not from the assaults of hostile nations
but from the trivialization and even corruption of the political process that occurs
when television dictates the basic mode of discourse and comprehension.41
Television and Thought
For some students of the media, the effects of television go well beyond the ones just
described. The most notable of these has been Marshall McLuhan, whom we met
briefly in the preceding chapter. In McLuhan’s schema, television is a “cool” medium:
The televised image is indistinct and requires that viewers “fill in the blanks” in a
way that involves all of the senses. Viewing television is not a single-minded, linear
process, as reading is. Exposure to television in turn leads to a mode of perception
that affects many other aspects of life. For McLuhan, a television-based culture is less
concerned with sequence than with complete, all-at-once involvement. In his view,
these changes in perception are transforming every aspect of our lives; McLuhan
attributed everything from a desire for small cars to changes in church liturgy to the
influence of television. Examples such as these seem farfetched, and some of his pre-
dictions have not been borne out, such as his prophecy that baseball would necessarily
decline because it is a “linear” game. McLuhan’s focus on the influence of different
types of media opens up all kinds of fascinating possibilities, but his conclusions lack
clear proof, and in fact the basic thesis is virtually unprovable.
More concretely, it has been claimed that television helps young children to dev-
elop visual perception, such as learning that an object viewed on a two-dimensional
TV screen actually has three dimensions, and that an object viewed from different
perspectives is still the same object.42 This may be true for programming that has
been consciously designed to develop these skills, but most television shows have
little to offer in this regard. Moreover, when they occur, advances in perceptual
skills are limited and temporary. Even these gains are largely restricted to children
with low language ability and visual skills; television viewing may actually be det-
rimental to the perceptual development of children not deficient in these areas.43
Other deleterious effects of television on children can be noted. Experimen-
tal studies have shown that children supply more imaginative continuations to
interrupted stories that they read than they do to televised stories. In similar
fashion, young children who are heavy watchers of television engage in less
imaginative play than other children.44 There is also a fair amount of evidence that
television watching may contribute to underdeveloped reading skills in children.
The relationship is not a perfectly direct one; it may emerge only when a certain
threshold of TV watching is reached, and it seems to be affected by the kind of
programming selected.45 Still, it seems evident that for many children television is
a negative influence in this regard. Finally, one might also speculate that the lack
of concentration and the short attention spans that seem so common today are at
least partially the result of growing up with television programs that are constantly
punctuated by commercial messages and other distractions.
Television was the most pervasive and significant medium for most of the
second half of the twentieth century. But just as it brought about a revolution in
how we are informed and entertained, it too is being reshaped by technological
change. A television set is no longer the only way of viewing televised program-
ming. In recent years growing numbers of viewers have been able to watch televised
programming—everything from network shows to the latest YouTube clip that has
gone viral—on laptops, tablets, game consoles, and smartphones. Streaming video
also has become a significant source of viewing material. In 2011, 12 million view-
ers watched movies and TV shows through Netflix, the largest provider of streamed
video.46 Although long-established television networks are providing much of the
content being viewed today, many media analysts foresee a time when most pro-
gramming will be supplied by streaming services operating independently of these
networks. Conventional television viewing has also been affected by digital video
recorders (DVRs) that allow prerecorded programs to be accessed at a viewer’s con-
venience. This often brings the added benefit of eliminating commercial interrup-
tions, a boon to viewers but a source of considerable consternation for advertisers.
Meanwhile, the integration of television and the Internet is proceeding apace, as
a new generation of sets allows viewers to simultaneously watch television, browse
the Internet, use various apps, and communicate through social media
sites.
While television is being reshaped by new technologies, other forms of elec-
tronic media have been taking on growing importance for communication and
entertainment. In the next chapter we will take a look at video games, social net-
working, and the many components of the Internet. In so doing, we will consider
the economic, political, cultural, and social contexts that have shaped their devel-
opment. We will then bring radio and television back into the picture for an overall
assessment of the electronic media’s place in shaping our world and being shaped
by it.
Questions for Discussion
1. In many countries, a considerable amount of television and radio broadcast-
ing is done by government organizations such as the British Broadcasting
Corporation. In the United States, most broadcasting is done by commercial
stations. What are the advantages and disadvantages of the two systems?
2. Political scientists have noted that government regulatory agencies tend to be
“captured” by the industry that they are regulating. Is this inevitable? Is it pos-
sible for regulatory agencies to prevent this from happening?
3. Do you agree with this chapter’s general conclusions about television’s influ-
ence on violent behavior? Do you think that there is too much violence on
TV? Should television be more closely regulated so less violence appears on the
screen?
4. Television commercials are a major part of campaigns for the presidency and
congressional offices. In what ways has television affected the way that cam-
paigns are run? On balance, has television been a positive or negative influ-
ence on political campaigns and campaigners?
5. Should parents try to control the television-viewing habits of their children?
To what extent is it even possible to do so?
6. How do you watch televised material? Do you use a conventional TV set or a
new media platform like a smartphone? What are the advantages or disadvan-
tages of each?
7. Although many thought that radio would be completely replaced by television,
this never happened, and, in the aggregate, radio stations are quite profitable
today. How is radio programming today different from the “golden age of radio”
in the 1930s and 1940s? Might network television also have to undergo major
changes if it is to survive?
Notes
1. Tom Standage, The Victorian Internet: The Remarkable Story of the Telegraph and the
Nineteenth Century’s On-Line Pioneers (New York: Berkley Publishing Group, 1999).
2. For extensive coverage of the early history of radio, see Hugh G. J. Aitken, Syntony and
Spark: The Origins of Radio (Princeton, NJ: Princeton University Press, 1985), and The
Continuous Wave: Technology and American Radio, 1900–1932 (Princeton, NJ: Princeton
University Press, 1985).
3. Eric Barnouw, Tube of Plenty: The Evolution of American Television (New York: Oxford
University Press, 1975), pp. 4–5.
4. Albert Abramson, Zworykin, Pioneer of Television (Urbana and Chicago: University of
Illinois Press, 1995), pp. 87–113.
5. David E. Fisher and Marshall Jon Fisher, Tube: The Invention of Television (Washington,
DC: Counterpoint, 1996).
6. W. Rupert Maclaurin, Invention and Innovation in the Radio Industry (New York: Macmillan,
1949), pp. 225–240.
7. Tom Lewis, Empire of the Air: The Men Who Made Radio (New York: HarperCollins,
1991), pp. 268–269, 300–307.
8. Barnouw, Tube of Plenty, p. 100.
9. Ibid., pp. 200–201.
10. David Nachmias and David H. Rosenbloom, Bureaucratic Government USA (New
York: St. Martin’s Press, 1980), p. 23.
11. Kenneth J. Meier, Politics and the Bureaucracy: Policymaking in the Fourth Branch of
Government (North Scituate, MA: Duxbury, 1979), p. 72.
12. B. Guy Peters, The Politics of Bureaucracy: A Comparative Perspective (New York:
Longman, 1978), p. 120.
13. Barry Cole and Mal Oettinger, Reluctant Regulators: The FCC and the Broadcast Au-
dience (Reading, MA: Addison-Wesley, 1978).
14. The Nielsen Company, “State of the Media: Trends in Television Viewing—2011 TV
Upfronts” (2011), accessed on June 19, 2012, at http://blog.nielsen.com/nielsenwire
/wp-content/uploads/2011/04/State-of-the-Media-2011-TV-Upfronts.
15. Ibid., and “Time Spent Watching TV,” AdAge Mediaworks (April 18, 2011), accessed
on March 29, 2012, at http://adage.com/article/mediaworks/time-spent-watching
-tv/227022/.
16. George Comstock and Erica Scharrer, Television: What’s On, Who’s Watching, and What
It Means (San Diego and London: Academic Press, 1999), p. 94.
17. Ibid., pp. 65, 70.
18. Albert Bandura, Dorothea Ross, and Sheila A. Ross, “Transmission of Aggression through
Imitation of Aggressive Models,” Journal of Abnormal and Social Psychology 63, 3 (1961):
575–582.
19. Comstock and Scharrer, Television: What’s On, p. 278.
20. George Comstock and Erica Scharrer, Media and the American Child (Amsterdam:
Elsevier, 2007), pp. 204–207.
21. Ibid., pp. 210–219.
22. Wilbur Schramm, Jack Lyle, and Edwin B. Parker, Television in the Lives of Our Children
(Stanford, CA: Stanford University Press, 1961), p. 1.
23. “Youth Violence: A Report of the Surgeon General” (November and December 2001),
accessed on April 2, 2012, at http://www.surgeongeneral.gov/library/youthviolence/default
.html.
24. George Comstock, Television in America (Beverly Hills, CA: Sage, 1980), p. 83.
25. Comstock and Scharrer, Media and the American Child, pp. 239–241.
26. Aletha C. Huston, Big World, Small Screen: The Role of Television in American Society
(Lincoln: University of Nebraska Press, 1992).
27. Lynne Shafer Gross and Edward John Fink, Telecommunications: An Introduction to Electronic
Media, 9th ed. (New York: McGraw-Hill, 2006), p. 70.
28. Barnouw, Tube of Plenty, p. 314.
29. Harris Interactive, “Troubles for Traditional Media—Both Print and Media” (October
28, 2010), accessed on June 15, 2012, at http://www.harrisinteractive.com/NewsRoom
/HarrisPolls/tabid/447/mid/1508/articleId/604/ctl/ReadCustom%20Default
/Default.aspx.
30. Emily Guskin, “Network News: Durability and Decline,” accessed on June 18, 2012, at
http://stateofthemedia.org/2011/network-essay/.
31. “Troubles for Traditional Media,” Table 2.
32. Pew Research Center for People and the Press, “Public Knowledge of Current Affairs
Little Changed by News and Information Revolutions” (April 15, 2007), accessed on
April 3, 2012, at http://www.people-press.org/2007/04/15/public-knowledge-of-current
-affairs-little-changed-by-news-and-information-revolutions/.
33. Comstock and Scharrer, Television: What’s On, p. 123.
34. Neil Postman, Amusing Ourselves to Death: Public Discourse in the Age of Show Business
(New York: Viking/Penguin, 1985), p. 152.
35. Melvin L. DeFleur and Everette E. Dennis, Understanding Mass Communication, 2nd ed.
(Boston: Houghton Mifflin, 1985), p. 450.
36. Andre Scheinkman et al., “Ad Wars,” New York Times (April 14, 2011), accessed on
April 3, 2012, at http://elections.nytimes.com/2008/president/advertising/index.html.
37. Thomas E. Patterson, “Voter’s Control of Information,” Society 22, 4 (May–June 1985): 56.
38. L. Patrick Devlin, “Campaign Commercials,” Society 22, 4 (May–June 1985): 45.
39. Don Lacy, From Grunts to Gigabytes: Communications and Society (Urbana and Chicago:
University of Illinois Press, 1996), pp. 116–118.
40. Ezra Klein, “Your Slip is Showing and Reshowing,” Los Angeles Times (18 May 2008),
p. M5.
41. Postman, Amusing Ourselves, pp. 125–141; Jerry Mander, Four Arguments for the
Elimination of Television (New York: Quill, 1978).
42. Patricia Marks Greenfield, Mind and Media: The Effects of Television, Video Games, and
Computers (Cambridge, MA: Harvard University Press, 1984), pp. 17, 33.
43. Comstock and Scharrer, Television: What’s On, pp. 231–232.
44. Greenfield, Mind and Media, pp. 88–89.
45. Johannes W. J. Beentjes and Tom H. A. Van der Voort, “Television’s Impact on
Children’s Reading Skills: A Review of Research,” Reading Research Quarterly 23, 4 (Fall
1988).
46. Nielsen Wire, “Detailing the Digital Revolution: Social, Streaming and More” (February 24,
2012), accessed on June 20, 2012, at http://blog.nielsen.com/nielsenwire/media
_entertainment/detailing-the-digital-revolution-social-streaming-and-more/.
chapter fourteen
The Internet Age
As we saw in the last two chapters, a succession of new technologies has vastly
enhanced the ability of people to send and receive messages. Today, we are experi-
encing the latest phase of this ongoing revolution as the Internet, mobile phones,
and social media are reshaping the ways in which we communicate with one
another. But to repeat what by now should be a familiar theme, these advances have
been accompanied by a number of new issues and problems. The same free flow of
information that enhances our ability to communicate also gives rise to concerns
about the protection of information that we do not want to move so freely, such as
intellectual property and personal data, about unequal access to information exac-
erbating the gap between the haves and have-nots, as in the “digital divide,” and
about the difficulty and expense of filtering, managing, and using so much informa-
tion. This chapter traces the development of digital communication technologies
and the challenges they pose, and shows how technological advances can move
along unanticipated paths.
The Birth and Growth of the Internet
The Internet can be described as a giant network of smaller computer networks that
allows users to access files located anywhere within these individual networks. These
files can be e-mail messages, Web pages, YouTube videos, and social networking
updates, to name the most prominent. As with many technologies, the invention
of the digital network that eventually evolved into today’s Internet began with the
needs of the military. The sponsor of the first computer network was the U.S.
Department of Defense’s Advanced Research Projects Agency (ARPA, later renamed
DARPA), hence the name of that network, ARPANET. Its
initial purpose was to tie together a small number of mainframe computers so that
data could be uploaded and downloaded to and from any computer in the network.1
An important aspect of ARPANET was the division of each block of data into a
number of smaller “packets.” In this way the capacity of the network could be maxi-
mized by sending each packet over the route that was least congested at the time.2
This technique had the additional virtue of ensuring that messages would go through
even if some routes were destroyed by an enemy attack, although this was not its
primary purpose. The use of ARPANET for computer-to-computer communication
began in 1969 when a team at UCLA attempted to transmit data by connecting
their computer to one at Stanford University. The first transmission, the command
to “LOG IN,” failed when the computer crashed at “G.” The network was intended
to be used for the transfer of data, but before long researchers were using it to com-
municate with researchers at other sites. Not all of these communications were
serious; jokes and idle chatter were a significant portion of ARPANET communica-
tion. By the early 1970s the sending of digitized messages, which became known as
electronic mail or simply e-mail, constituted three-quarters of ARPANET’s traffic,
a function that had not been considered when ARPANET was first established. In
this sense, the network was a social construction, one that resulted from its users
taking it well beyond its intended purpose.3 This was not a unique occurrence; as
we have seen before, technologies created for one purpose can end up being used
for something quite different, as when radio mutated from wireless telegraphy into
commercial broadcasting.
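The packet-switching scheme described above can be made concrete with a minimal sketch, here in Python (a language that long postdates ARPANET); the message, packet size, and function names are invented for illustration:

import random

# Split a message into numbered packets; the sequence numbers let the
# receiver restore the original order no matter which routes the
# packets took or in what order they arrived.
def to_packets(message, size=8):
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return list(enumerate(chunks))

def reassemble(packets):
    # Sort by sequence number, then concatenate the payloads.
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("Packets may arrive out of order.")
random.shuffle(packets)  # simulate packets traveling over different routes
assert reassemble(packets) == "Packets may arrive out of order."

The sketch omits everything a real network must handle (addressing, lost or duplicated packets, congestion), but it captures why the technique maximizes capacity: each packet can be routed independently over whichever link is least busy.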
Computer networking remained largely in the military realm in the 1970s; by
the end of the decade only 16 nodes were on university campuses, while the remain-
ing 46 belonged to members of the defense establishment.4 But the appeal of net-
worked computing stimulated the development of alternative systems. Responding
to growing demand, the National Science Foundation (NSF) provided financial
sponsorship for the Computer Science Network (CSNET), which went into opera-
tion in 1982, offering networking at considerably lower cost than the $100,000
annual expense of an ARPANET hookup. The NSF also created a “backbone” in
1985 to link its five supercomputing centers and to connect with CSNET and some
elements of ARPANET, which had been supplanted by an exclusively military
network in 1983. Computer networks expanded throughout the 1980s, facilitated
by the development of programs for the efficient routing of digitized information,
notably TCP (Transmission Control Protocol). It was during this period that the
power and utility of the Internet were given a massive boost, when programmers
began to build systems that allowed users to access other network sites by clicking
on highlighted text or an icon that appeared on the on-screen document. By click-
ing on these links users could easily move all over the Internet as they accessed
related text, image, and sound files.
The first of these networks were proprietary, and joining them required
the payment of a fee. A different approach was taken by Tim Berners-Lee at the
European Organization for Nuclear Research (known by its French acronym,
CERN), who was the primary developer of another software system for accessing
files within computer networks. Known as hypertext transfer protocol (http), it
was released in 1991 and formed the basis of a network that Berners-Lee called
the World Wide Web (WWW). Two years later, the WWW team announced that
they had made a crucial decision: CERN would not attempt to patent or claim any
copyright fees for the technology underlying the Web. This meant that there would
be no restrictions on becoming part of the Web, and as a result it became a virtually
universal repository of computer-based information and entertainment.
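At bottom, http is a simple text-based request-and-response exchange: a client asks for a named resource, and the server answers with a status line, headers, and the document itself. A minimal sketch using Python’s standard http.client module, with example.com standing in for any Web host (the call requires network access):

import http.client

# Open a connection to a Web server and request the root document.
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)  # e.g., 200 OK
print(response.read()[:80])              # the first bytes of the page
conn.close()

Every link clicked in a browser sets off an exchange of roughly this form, which is one reason the decision to leave the underlying technology unencumbered by patents and fees mattered so much.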
The Web underwent explosive growth in the years that followed. In December
1993 there were 623 websites; by the end of 2011 the Web contained about
555 million sites, with hundreds more being added every day.5 The growth of the
Web was paralleled by the growth of the Internet as a whole. By 2012 about one-
third of the world’s population were “netizens.” But as we shall see, worldwide
Internet usage is very unevenly distributed, with high rates of usage in some places
offset by very low rates in others.
The revolutionary changes wrought by the Internet reflect rapid advances in
the development of computer hardware and software. But as has been noted earlier,
large-scale changes are not just the product of solitary inventions. Rather, they
entail the prior existence or creation of complementary elements. This is particu-
larly evident when we consider the use of the Internet for the buying and selling of
goods—e-commerce, as it has come to be called. Computers and computer networks
have stimulated the emergence of a new kind of shopping by providing access to
digital catalogs and allowing customers to place orders by clicking a few onscreen
buttons. But the Internet and computers to access it are not enough; online buying
also requires credit or debit cards and the security measures employed to protect
the customers using them. For sellers, successful e-commerce depends on the use of
barcode readers for selecting and keeping track of items, forklifts for retrieving them
from high shelves, and conveyor belts to move them to the packing room.6 The
transport of these items to the customers who ordered them is usually performed
by nothing more avant-garde than a delivery truck, often in the service of the
venerable U.S. Postal Service. In sum, the Internet has brought substantial changes
to retailing, but much more has been required than sophisticated computers and
software.
As with all significant technological innovations, the Internet has been beset
by challenges and conflicts. One major issue is “net neutrality,” the expectation
that every Internet site will be accessible, irrespective of which Internet service
provider (ISP) is providing the access. Without net neutrality, an ISP could enable
access to its chosen sites while restricting access to those of its competitors. This is
particularly blatant when it is practiced by countries that attempt to block access
to sites that their governments view as threats to their power and authority. It
also has emerged as an issue in the United States where private-sector ISPs have
been accused of giving preferential treatment to allied TV services over other
providers.7
Another restriction on complete access to the Internet may be self-imposed.
This occurs when users confine most of their Internet usage to what have been
called “walled gardens.” Selective Internet access began when providers like
America Online offered an easy route to Internet sites. Today, Apple, Facebook,
and Google serve as portals to the Internet for many users, while at the same time
retaining some degree of control over access.8 This is particularly evident with
Google searches that provide links in accordance with the user’s search history.
Although perhaps convenient, this practice may result in someone with a perceived
political orientation being initially presented with links to sites that are in accord
with this orientation. This is analogous to what has been termed “narrowcasting,”
the tailoring of radio and television programming to fit the interests and beliefs of
specific audience segments.
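Google’s actual ranking methods are proprietary and vastly more elaborate, but a hypothetical sketch can convey how re-ranking results according to search history could steer users with different histories toward different links; the sites, tags, and scoring rule below are all invented:

# Hypothetical personalization: results whose tags overlap a user's
# history are promoted, so two users issuing the same query see
# different orderings, narrowcasting in miniature.
RESULTS = [
    ("site-a", {"politics", "left"}),
    ("site-b", {"politics", "right"}),
    ("site-c", {"politics", "neutral"}),
]

def personalized(results, history):
    # Larger overlap with past interests means earlier placement.
    return sorted(results, key=lambda r: -len(r[1] & history))

print(personalized(RESULTS, {"left"}))   # site-a rises to the top
print(personalized(RESULTS, {"right"}))  # site-b rises to the top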
E-Mail and the Network Effect
The use of computer-to-computer communication has spread at warp speed since
the first message was sent in 1969; by 2011 there were 3.146 billion e-mail accounts
worldwide, and every day three billion messages traveled through cyberspace.9
The rapid expansion of e-mail communication is a prime example of the “network
effect.” This simply means that an interconnected system becomes increasingly
valuable as more members are connected to it. A telephone, computer, or any kind
of communication device would be of limited use if it were connected to only a
few dozen similar devices. But anyone with an e-mail account has the potential to
connect with more than three billion other e-mail users. A network also illustrates
the power of positive feedback; as a network expands it becomes of greater potential
value to people and organizations still outside the network. This motivates them
to join the network, which makes the network even more valuable, which induces
more people and organizations to join it, and so it goes until a saturation point is
eventually reached.
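The network effect can be given a rough quantitative form. One common heuristic, often called Metcalfe’s law (a label the text does not use), takes a network’s potential value to be proportional to the number of possible pairwise connections among its n members, n(n - 1)/2, so that value grows roughly with the square of membership. A minimal sketch:

# Possible pairwise connections grow much faster than membership,
# one common way of formalizing the network effect described above.
def possible_connections(members):
    return members * (members - 1) // 2

for n in (10, 100, 1000, 1000000):
    print(n, "members ->", possible_connections(n), "possible links")

With 10 members there are 45 possible links; with a million members there are nearly half a trillion, which is why each new member makes joining more attractive for everyone still outside the network.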
In the case of e-mail, the network effect is complemented by the convenience,
rapidity, and accuracy of e-mail messaging. But as with all useful technologies, the
consequences of widespread e-mail use are not entirely benign. Regular users are all
too aware of the annoyances engendered by e-mail. One of the worst of these is spam,
the unwanted appearance of messages sent by sites featuring pornography, bogus lot-
tery results, prescription medicines of dubious origin, and fake Rolex watches. Spam
filters can partly eliminate these unwanted messages, although occasionally at the
cost of a valid message being blocked.
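Real spam filters are statistical, but even a toy version shows why any threshold trades missed spam against blocked legitimate mail; the word weights and threshold below are invented for illustration:

# A toy spam filter: score a message by the suspect words it contains
# and block it above a threshold.
SUSPECT = {"lottery": 2, "winner": 1, "rolex": 2, "prescription": 1}
THRESHOLD = 2

def is_spam(message):
    return sum(SUSPECT.get(w, 0) for w in message.lower().split()) >= THRESHOLD

print(is_spam("Claim your lottery winner prize now"))  # True: spam caught
print(is_spam("Our office lottery pool won"))          # True: a false positive

Lowering the threshold catches more spam but blocks more valid messages; raising it does the reverse. That tradeoff, not any quirk of implementation, is why the occasional legitimate message disappears into the spam folder.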
Spam is one aspect of what might be termed a “reverse network effect.” Alth-
ough many e-mail messages are useless, annoying, or both, many other messages
have some value, but only of a very minor sort—lame jokes, the minutes of a
committee that has scant relevance to one’s own job, pictures of a colleague’s dog
wearing a Spiderman costume, and so on. The larger one’s network, the more mes-
sages of this sort will show up in the e-mail inbox. As a result, more and more hours
are devoted to e-mail, but with little to show for much of it. The torrent of e-mail
received by many users has in some cases resulted in what has been called “e-mail
bankruptcy,” whereby victims of message overload have shut down their accounts
and used a separate site to politely inform past senders that they will receive no
response to their messages.10 In today’s world, the abandonment of e-mail requires
an alternative means of communication, and some individuals and organizations
have turned to instant messaging, videoconferencing, and social media to meet
this need.
For those still connected, e-mail represents a significant degree of engagement
with the Internet. But even the hundreds of millions of hours spent on e-mail every
month pale in comparison with the monthly tally of 906 million hours (22.7 percent
of time spent online) devoted to social networking and 407 million hours (10.2 per-
cent of online time) playing games.11 These will be taken up in later sections of this
chapter, but first we will take a look at one of the main platforms for these activities,
smartphones and other mobile communication devices.
Mobile Communications
The growth of the many components of the Internet occurred in conjunction with
the widespread acquisition of personal computers that began in the early 1980s.
Access expanded significantly in the early years of the twenty-first century with the
appearance of the first mobile telephones capable of logging onto the Internet. In
the years that followed, the capabilities of these so-called smartphones have
expanded to the point that their ability to send and receive telephone
calls seems almost incidental to their other functions. More recently, the introduc-
tion of tablet computers such as Apple’s iPad has given their owners a larger screen
and new functions. The expanded capabilities of smartphones and tablets have
made them the favorite platforms for social networking, game playing, e-mail, and
other Internet-based activities. In addition, the portability of these devices allows
their users to do a variety of things that would be impossible or impractical with a
personal computer, such as GPS navigation and paying for purchases. Their specific
features aside, smartphones and tablets have ushered in a new phase of human exis-
tence, what has been called “The Age of Connection,” an era in which anyone with
a telephone or an Internet account can potentially reach anyone similarly equipped
almost instantaneously.
As with the other elements of twenty-first-century information and communication
technology, the number of smartphones has grown dramatically: 487.7 million smart-
phones were sold worldwide in 2011—more than the total sales of PCs and tablets12—
and it is likely that this figure will more than double by 2015.13 The United States has
been a receptive market for these devices; by early 2012 nearly 50 percent of mobile
phone users over the age of 18 were equipped with smartphones.14
Much of the attractiveness of smartphones and tablets lies in the applications
(“apps” for short) that have been developed for them, allowing their users to track
their investments, receive sports scores, receive turn-by-turn driving instructions,
find their cars in a parking lot, and much more. According to one survey, by the
fall of 2011, 18 billion apps for the Apple iPhone had been downloaded since the
firm first opened its app store in 2008.15 The development of these apps has become
an industry in itself that collectively produces hundreds of thousands of different
apps. The value of most of these apps is, however, open to question. Although vast
numbers of them are available for downloading, only a few account for the majority
of actual downloads. In 2011 a mere 10 apps accounted for 43 percent of the apps
actually used on the Android platform, and 50 of them accounted for 61 percent.16
Many apps are abandoned or rarely used; a quarter of downloaded apps are tried
once and then never used again,17 while 68 percent of owners of smartphones with
apps use only five or fewer of them at least once a week.18
Although we tend to associate twenty-first-century technologies like mobile
phones with the industrially developed countries, these phones have become wide-
spread in the poorer parts of the world. Most of them lack the features of smartphones,
but they have facilitated communications while circumventing the need for a wired
infrastructure. Mobile phones are used to maintain family ties for migratory work-
ers, to apprise farmers of crop prices, and even to provide basic medical diagnoses.19
Some mobile phone users in Africa are even able to use their phones as mobile banks
that allow them to electronically store money, transfer funds, and pay bills.20 Mobile
phones, and smartphones especially, have been one of the most transformative tech-
nologies of the early twenty-first century. The changes they have wrought are appar-
ent everywhere, but nowhere more so than in the poor, rural areas of the world, where
half of the world’s population resides. In the space of a few years, the spread of mobile
phones has made it possible to communicate with others instantaneously instead of
having to walk long distances in order to do so.
More Digital Connections: Social Networks
As was noted above, more than 900 million hours each month are spent on social
media in the United States. Smartphones are of particular importance for this
activity; their users spend more time on social media than those who confine their
digital networking to desktop computers.21 In some ways the dramatic increase in
the number of social media sites and their members is merely the continuation of
age-old social patterns. People have always sought to connect with one another—it
is one of the defining characteristics of our species—but for most of human exis-
tence the ability to interact with one another and to create, join, and participate
in social networks did not go beyond face-to-face interactions. As we have seen in
Chapter 12, written language and printing greatly expanded the boundaries of com-
munication, and during the last two centuries the telegraph, telephone, radio, and
television stretched these boundaries even further.
Mobile phones have had an impact everywhere, but perhaps their greatest influence has been
in poor, developing countries. (© Jon Bower/LOOP IMAGES/Loop Images/Corbis)
But what has differed from old patterns is the speed at which digital commu-
nications media gained a large audience. According to a United Nations study, it
took radio broadcasters 38 years to reach an audience of 50 million, while television
took 13 years. For the Internet, only four years were required to assemble 50 million
users.22 A similar trajectory can be seen in the growth of social network sites.
Facebook, the most commonly used site, began with a few dozen users at Harvard
University in 2004. By the end of 2011, it counted 845 million active users and
$3.7 billion in revenues.23 At that time, 57 percent of Facebook’s active users inter-
acted with the service on any given day. The average Facebook user spent seven
hours per month on the site.24
Other social networking sites have also exhibited impressive rates of growth.
LinkedIn, a site oriented toward work and careers, reached 150 million members
by 2012, nine years after its initial launch.25 Twitter, despite (or perhaps because of)
the fact that its messages (“tweets”) are limited to no more than 140 characters, has also grown
at an impressive rate; founded in 2006, six years later it had 140 million users who
collectively accounted for an average of 340 million tweets every day.26
The numerical success of these and other social networking sites is impressive.
Of much more potential interest, however, are their effects on society and culture in
general, and on communication in particular. In contrast to firm statistics on the
number of users, an accounting of social media’s influence on society and culture
is necessarily speculative. The relative newness of Internet-based social networking
means that definitive statements are premature and are subject to later review, but
some interesting ideas and findings have emerged. On the one hand, social media
and other Internet-related media such as e-mail have moved us closer to Marshall
McLuhan’s “Global Village” by connecting people on the basis of interests, work
relationships, and political causes rather than geographical proximity. At the same
time, however, digitally mediated connections may be undermining person-to-
person relationships by making it all too easy to substitute the former for the latter.
As Sherry Turkle has observed, “We expect more from technology and less from one
another and seem increasingly drawn to technologies that provide the illusion of
companionship without the demands of relationship. Always-on/always-on-you devices
provide three powerful fantasies: that we will always be heard; that we can put our
attention wherever we want it to be; and that we never have to be alone. Indeed our
new devices have turned being alone into a problem that can be solved.”27
This is a powerful indictment of digitally based relationships, but other studies
have found that an expansion of online relationships has neither diminished nor
displaced traditional, face-to-face social
relationships.28 Moreover, while involvement with social media sites absorbs a fair
amount of time, it does not come at the expense of traditional face-to-face contacts.
If anything is displaced, it is the time spent eating, sleeping, and watching televi-
sion.29 All in all, the effects of digitally based social media seem remarkably similar
to those in the early days of the telephone; despite widespread concerns at the time,
chatting on the phone did not displace traditional modes of communication and
socialization. If anything, telephone communications appear to have extended and
intensified existing social relationships.30
Social Media and Social Movements
Social networks can promote stability within a society by forging and strengthen-
ing linkages between individuals. At the same time, however, they can be power-
ful tools for undermining an existing social order. Turning individual grievances
into a mass movement requires communication links to create a sense of common
purpose, to forge a group identity, to inform participants of intended actions, and
to schedule these actions. The power of social networks to do all of these was dem-
onstrated in August 2011 after a man was killed by London
policemen in the course of a bungled arrest. After a day of rioting, some individuals
began to use a smartphone app that made it possible to send the same message to
everyone in their phone’s directory. The message was simple: it told them to con-
gregate at a particular time and place so they could “linkup and cause havic (sic),
just rob everything. Police can’t stop it.”31
The looting conducted by the resultant flash mob was only one of many
recent examples of collective action by individuals who coalesced and took action
in accordance with mobile phone messages. Other mass movements energized by
social media have been oriented toward political transformation rather than illegal
personal enrichment. Up to now, the most far-reaching of these has been the use of
social media to bring down the dictatorial regime of Hosni Mubarak in Egypt dur-
ing the “Arab Spring” of 2011. At that time, hundreds of thousands of disaffected
Egyptians had been regular readers of and contributors to Facebook pages that aired
their grievances and motivated them to demonstrate against the government at par-
ticular times and places. Twitter feeds provided up-to-the minute information, and
mobile phones served as crucial communication links during this turbulent period.
Some of the insurgents viewed these new communication technologies as
powerful but nonviolent weapons for successfully challenging oppressive
regimes and effecting political transformation. As the creator and administrator of
the most influential Egyptian Facebook site put it:32
Now that so many people can easily connect with one another, the world is
less hospitable to authoritarian regimes. Humanity will always be cursed with
power-hungry people, and the rule of law and justice will not automatically
flourish in all places at all times. But thanks to modern technology, participatory
democracy is becoming a reality. Governments are finding it harder and harder
to keep the people isolated from one another, to censor information, and to hide
corruption and issue propaganda that goes unchallenged. Slowly but surely, the
weapons of mass oppression are becoming extinct.
Might this evaluation of new communications technologies be excessively
optimistic? Social networking sites and mobile phones helped to create the collec-
tive identity and actions that rocked Egypt and other parts of the world, but it is
important to note that they were an enabling factor, not an independent force for
change. As ever, we have to avoid invoking implicit technological determinism
that would make digital media autonomous sources of political change. In Egypt
and other places where users of social media challenged the existing order, griev-
ances had long festered within a significant portion of the population. The Mubarak
regime was associated with decades of corruption, political repression, a stagnant
economy, and severely limited opportunities for young people. It was a tinderbox
waiting for a spark, which was supplied by the self-immolation of a young vegetable
seller in Tunisia. New communication technologies helped to organize and energize
the movement, but they did not start or sustain it.
No less important, digital communication is a two-edged sword that can be a
force for oppression as well as liberation. Technologically sophisticated authori-
tarian regimes can tap into e-mail accounts and mine social networking sites in
order to locate associates of known dissidents and to track their activities. They can
also plant bogus Web pages, blogs, and videos to discredit their opponents. Also, it
cannot be assumed that widespread connection to digital media will impart unity.
Tweets, blogs, and instant messages are inherently decentralized communication
modes, and their fragmented nature may prevent disaffected citizens from coalesc-
ing into a unified opposition movement.33
Even when the effective use of social media helps to foment a large-scale upris-
ing, it is less useful for generating long-term purposive action, especially when this
requires discipline, organization, and sacrifice. As Malcolm Gladwell has argued,
Facebook and similar sites excel at constructing networks, but these are not perma-
nent structures with well-defined individual responsibilities and unambiguous cen-
ters of authority. As seen by Gladwell, social networks are deficient when it comes
to setting long-term goals and creating effective strategies for achieving them.34
Facebook and other social media sites were important catalysts for overthrowing dictator-
ships in the Middle East and North Africa, but establishing viable democracies will be a long,
difficult process. (KHALED DESOUKI/AFP/Getty Images)
The case of Egypt seems to bear out Gladwell’s critique. Mubarak and his henchmen
were swept aside by a movement that involved millions of Egyptians, but after the
dust had settled, the army was in control and democracy faced an uncertain future.
In the broadest terms, the role of digital media in the Egyptian revolution is
reminiscent of the connection between printing and the Protestant Reformation
nearly five centuries earlier. Neither movement would have unfolded the way it
did in the absence of new media, but much more has been involved than enhanced
communications capabilities. Without denying their unquestioned importance in
promoting social and political change, printing, social media, and mobile phones
by themselves do not a revolution make.
Video Games
Social media sites are a major component of the Internet. Also significant are
online games, which, as we have seen, constitute a major use of the Internet. To
these can be added the games downloaded on smartphones, tablets, and computers,
along with the consoles and hand-held gaming devices found in more than half of
American households.35 Originally limited to arcades or early adopters of personal
computers, on-screen games now constitute a significant portion of leisure activity
for many individuals. The technical development of video games has proceeded
rapidly in recent years, providing amazingly realistic depictions of everything from
bowling alleys to combat zones to rock concerts.
Once comprising little more than a cluster of cottage industries, the video game
industry has become a very big business. The Entertainment Software Association,
the industry’s trade group, reported that 257.2 million video and computer games
were sold in 2011, racking up revenues of $16.6 billion.36 Purchases of consoles
and accessories added $9.23 billion. Globally, revenues generated by electronic
games are now double those of the entire music industry and are one-quarter greater
than those of the magazine business.37
Although the typical video gamer is often envisaged as an adolescent boy, the
audience for video games has gone well beyond this stereotype. The average age of
gamers is 37, with 53 percent between the ages of 18 and 50. Women now comprise
42 percent of gamers, and it is worth noting that women over the age of 18 now
comprise a larger percentage of gamers (37 percent) than do boys 17 and younger
(13 percent).38
Of all the games sold in 2011, 26.5 percent fell into the “mature” category, which
also included 6 of the 10 best-selling video games.39 There is no escaping
the fact that games featuring violent action are an important source of revenue and
profit. The most successful game of this genre, “Call of Duty: Modern Warfare 3,”
brought in $775 million in its first five days of worldwide sales.40 Concerns about
a possible connection between violent video games and real-life violence bring us
back to some of the issues voiced by critics of televised violence. As is the case with
televised violence, the influence of violent video games on behavior varies accord-
ing to age, gender, and other individual and social variables. Gaming preferences
also differ. Not only are males more likely to be video gamers, their choice of games
tends to differ from those of female players. Males are drawn to sports and action
games (not all of which are violent), while their female counterparts generally prefer
strategic and puzzle games, although one study found no gender differences in regard
to interest in adventure games.41 Other studies have found that girls and women
tend to play for social interaction and to form relationships. In contrast, boys and
men prefer games where winning is the most important outcome.42
Although most players of violent video games are no more likely to commit
acts of violence or aggression than individuals with milder gaming tastes, a number
of studies have found positive correlations between playing violent video games
and aggressive behavior, just as has been the case for television.43 For the majority
of players, participation in violent on-screen activities may be “just a game,” but
these games may catalyze aggressive and even violent acts for those who are already
disposed toward this kind of behavior.
Today’s video games have reached impressive levels of realism, so much so that
some are now being used to prepare soldiers and the police to operate in dangerous
and frightening situations. It is certainly possible that graphic on-screen violence
can have a stronger and more lasting effect on the players of these games than televised violence has on viewers.44 Along
with the realism that they bring, video games also require a level of attention and
involvement that television viewing does not require. This greater degree of par-
ticipation was highlighted in a report issued in 2001 by the U.S. Surgeon General,
which speculated that video games might stimulate more aggressive and violent
behavior than television viewing because the player is a participant in onscreen
violence and not simply a vicarious consumer of it.45
Video game “LAN parties,” such as this one in Malaysia, show the social side of gaming. (© Bazuki
Muhammad/Corbis)
“Speculated” is the proper term here. Despite literally thousands of studies
conducted over several decades, the link between television viewing and violent
behavior remains a matter of some controversy; given the much shorter history of
video games, a considerable amount of methodologically sophisticated research will
be required before the consequences of violent gaming are fully comprehended.
These reservations aside, an interesting negative correlation between the spread of
video games and society-wide violence merits our consideration. The hypothesized
connection between violent behavior and violent video games is belied by a sub-
stantial decrease in the rate of violent crime in the United States and most other
countries in recent decades. The reasons for the drop in violent crime have been
the subject of much debate, but this trend makes it hard to argue that violent video
games have triggered a wave of violence across the land.
It should be noted at this point that the primary victims of video games are
some of the players themselves. Although the American Psychiatric Association
decided that existing research did not justify including video game addiction in the
most recent edition of its Diagnostic and Statistical Manual of Mental Disorders, it is a prob-
lem for some individuals.46 Psychologists who have treated video game addiction
note that the majority of their clients are young men under 30 with poor self-esteem
and limited real-life social networks. They are especially drawn to role-playing
games that allow them to assume a new identity as they inhabit a virtual world
that is more appealing than the one in which they live. As with other addictions,
video game addiction is characterized by the need to devote large amounts of time
to gaming, withdrawal from other activities, and feelings of anger and depression
when one is unable to play or prevented from doing so. Addiction to technology in
general may be an inescapable part of modern life, but with excessive video game
play, it may become self-destructive.
On a more positive note, some organizations have begun to experiment with
applying game mechanics online as motivational devices for behavior change—for
example, providing real or virtual rewards as a user completes a sequence of stages
on the way to achieving a goal, often within a framework that encourages competi-
tion and interaction. The term used for this is “gamification,” which, according to
its exponents, makes work and other activities more engaging and enjoyable.47
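A bare-bones version of these mechanics might look like the following sketch; the point values, thresholds, and badge names are invented for illustration:

# Minimal gamification: award points for completed stages and a badge
# whenever a cumulative threshold is crossed, supplying the frequent
# feedback discussed below.
BADGES = [(100, "Bronze"), (250, "Silver"), (500, "Gold")]

class Participant:
    def __init__(self, name):
        self.name, self.points, self.badges = name, 0, []

    def complete_stage(self, points):
        self.points += points
        for threshold, badge in BADGES:
            if self.points >= threshold and badge not in self.badges:
                self.badges.append(badge)

p = Participant("worker_1")
for stage_points in (60, 60, 150):
    p.complete_stage(stage_points)
print(p.points, p.badges)  # 270 ['Bronze', 'Silver']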
The awarding of points, badges, or other rewards for doing certain things and
completing particular tasks is not a novel technique, of course; psychologists have
long known that frequent feedback generally improves performance, whether it be
of workers, students, or other participants. What is new about gamification is that
it depends on the existence of social media and the drive for constant connected-
ness. Gamification also taps into a universal attribute of human culture; in virtually
every kind of society, men and women and boys and girls play games of some sort.
However, the extent to which human activities can be made to resemble games is
open to question. The history of business strategies is replete with fads that
achieve considerable popularity and influence, only to eventually
fade away as their impracticality, triviality, and ineffectiveness become evident.
Gamification may fall into this category, or it may flourish because its intended
group of users has been involved with electronic games for much of their lives.
The Digital Divide
A computer or smartphone and an Internet connection are all that is needed for
buying things online, staying in touch with friends through e-mail, or using social
media to foment revolution. But for many people, access to cyberspace remains
limited at best. The Internet has been characterized as a highly democratic system
of communication because it is not under centralized control, yet it continues to
reflect inequalities within and between nations. According to statistics compiled
at the end of 2011, although nearly one-third of the world’s population used the
Internet to some extent, vast disparities existed within different regions of the
world. To take one of the starkest contrasts, 78.6 percent of North America’s
residents were Internet users, but only 13.5 percent of Africa’s population had this
capability.48 To take another example of a great disparity in Internet connectivity,
95.6 percent of the population of Iceland, the country with the greatest degree of
Internet penetration, used the Internet, while only 1.3 percent of Cambodians
could connect to the Internet in 2010.49
As might be expected, all of the countries with high rates of Internet usage
lie within the developed world, but substantial disparities in Internet usage can be
found within these countries. Numerous critics have pointed to the “digital divide”
that separates people with easy access to the Internet from those with no access. In
the United States, Internet access in schools is virtually universal, but the ability to
access it at home has differed according to race, ethnicity, and especially social class.
According to the U.S. Department of Commerce, 71 percent of American house-
holds had an Internet connection in late 2010. Differences of race and ethnicity are
evident but not massive; 72 percent of white households had an Internet connec-
tion, while 65 percent of African American and 67 percent of Hispanic households
had one.50 Much more significant is financial status; 93 percent of households with
incomes over $100,000 were connected, far more than the 43 percent of households
with incomes under $25,000.51 Geographical location is also significant; 70 percent
of urban households have an Internet connection, whereas only 57 percent of rural
households are connected.52
The way that different groups log onto the Internet also shows some interest-
ing differences. The growth of smartphone ownership has created a new way to
access the Internet. African Americans and Latinos are slightly more likely than
whites to own a smartphone (although much of this disparity is likely due to the
higher average age of the white population), and they are more likely to use these
devices to access the Internet.53 This, however, may give rise to the “walled garden”
phenomenon: mobile carriers, which are not governed by net neutrality rules, can
limit access to the Internet, as when content
deemed “controversial or unsavory” has on occasion been blocked by mobile carri-
ers that serve as ISPs.54
In a world where the ability to acquire and make use of information has taken on
increasing importance, the Internet has the potential to empower people by making
information more accessible than it ever has been. But the Internet is not a technologi-
cal fix. As we have just seen, while the Internet has been a revolutionary technology,
access to it also reflects prevailing social and economic conditions. Perhaps of equal
importance, even with widespread access to the Internet there remain significant dis-
parities in the ability to make effective use of it. In the absence of parallel social changes,
the Internet may intensify rather than diminish global and national social inequalities.
Intellectual Property
One of the most contentious issues surrounding the use of computer networks has
been the innocuous-sounding practice of file sharing. In this context, a file can be
anything in a digital format—text, sound, or pictures. No one objects when people use
the Internet to send a Christmas letter or a photograph of a new grandchild to friends
and family, but a host of legal and commercial issues arise when a DVD movie or the
contents of a music CD are sent to members of a file-sharing service. At the heart
of the matter is intellectual property, the ownership of a product of one’s creativity,
such as a novel or a musical performance. Printed works, music, and movies intended
for commercial use are protected by copyrights, which prevent the unauthorized use
of the material and usually require the payment of fees when they are legally used.
Governments grant copyrights for the same reason that they issue patents: both are
thought to stimulate creative efforts by giving individuals and firms monopoly control
over their creations, and with it the chance to reap financial rewards.
In previous decades, the development of radio, television, movies, and recorded
music greatly expanded the commercial value of creative efforts. Generations of
actors, artists, musicians, and others have owed their livelihoods to the media in its
various forms, but more recent technologies have been more problematic, and their
use has exemplified the ways in which new technologies may create winners and los-
ers, as noted in Chapter 1. One early example is the videocassette recorder (VCR).
When the VCR first came out, it was thought that its primary function would be
the playing of prerecorded tapes of movies rented from video stores. But many VCR
owners also used their machines to record programs that had been broadcast on
commercial television. The movie industry sued the manufacturers of VCRs and
videotapes, claiming copyright infringement. In a case that eventually went all the
way up to the U.S. Supreme Court, the justices ruled in 1984 by a 5-to-4 margin that
home recording constituted “fair use” under existing copyright laws, allowing indi-
viduals to videotape with impunity.55
In the late 1990s the rapid spread of computer networks presented a new
challenge to existing conceptions of intellectual property, because after being com-
pressed and digitally encoded, a musical selection could be sent to anyone with a
networked computer and the requisite software. By the start of the new century,
60 million persons were sharing music that was being routed through Napster, the
most prominent file-sharing service. This was done by connecting one’s computer
to Napster’s network servers, which kept a list of all the music that was available on
clients’ computers in the MP3 format. Napster and its clients claimed that all that
was going on was the sharing of files by private parties, which constituted “fair use”
under existing copyright laws. But the courts did not agree, ruling that Napster’s
central database of music titles gave it primary responsibility for what was deemed
illegal downloading.56 Napster tried to reconstitute itself as a commercial distributor
of digitized music, but it failed to survive in this guise. Napster’s demise did not put
an end to file sharing. New services allowed individuals to access music residing in
millions of hard disks but avoided the legal problems caused by a central database
through the use of the network itself to channel requests and to inform users of
available music, movies, and other digitized files.
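Napster’s division of labor, a central index of titles with the files themselves residing on members’ computers, can be caricatured in a short sketch; the peer addresses and file names below are invented, and the peer-to-peer transfer step is omitted:

from collections import defaultdict

# A caricature of Napster's architecture: the central server only
# answers "who has this file?"; it never stores the files themselves.
class CentralIndex:
    def __init__(self):
        self._holders = defaultdict(set)

    def register(self, peer, titles):
        for title in titles:
            self._holders[title].add(peer)

    def lookup(self, title):
        return self._holders.get(title, set())

index = CentralIndex()
index.register("10.0.0.5", ["song_a.mp3", "song_b.mp3"])
index.register("10.0.0.9", ["song_b.mp3"])
print(index.lookup("song_b.mp3"))  # both peers hold a copy

It was precisely this central list, rather than the transfers themselves, that the courts held against Napster; as the text notes, its successors dispensed with the index and routed requests through the network itself.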
The rise of file sharing coincided with a significant drop in CD sales, but there is
no agreement that this reflected a cause-and-effect relationship. As might be expected,
the music industry attributed falling sales to widespread “piracy” and claimed that the
very survival of the music industry depended on the suppression of unauthorized file
sharing. Although the person-to-person transfer of files was a common occurrence,
the industry may have overstated its impact; one methodologically sophisticated study
concluded that unauthorized file sharing had only a minimal effect on CD sales.57
Falling CD sales, it was argued, were the result of high prices, the lack of new and
interesting music, and the appeal of alternative forms of entertainment such as video
games, smartphones, tablet computers, and movies on demand. The claim was even
made that file sharing actually promotes CD sales by exposing potential buyers to new
music and musicians, much as radio airplay has stimulated record sales for decades.
In 2011 the entertainment industry attempted to regain control over its prod-
ucts through the introduction of the Stop Online Piracy Act (SOPA) and the
Protect Intellectual Property Act (PIPA) in the U.S. House of Representatives and
the U.S. Senate, respectively. If enacted as laws, these bills would have penalized
third-party Internet firms such as Google and Facebook for providing access to sites
offering pirated material.58
Congressional efforts to deter illicit downloading through the Stop Online Piracy Act (SOPA)
and the Protect Intellectual Property Act (PIPA) met with large-scale resistance and were
withdrawn. (© James Leynse/Corbis)
Internet firms fought back in a number of ways, includ-
ing a one-day shutdown by Wikipedia, and the bills were withdrawn. This was not
the end of the story, however, and the coming years surely will see more legislative
efforts to curtail online piracy.
Meanwhile, growing numbers of individuals are subscribing to fee-based ser-
vices that allow them to stream or download music and movies without fear of legal
reprisals. The services provided by iTunes, Rhapsody, Amazon, and others have
been stunning commercial successes. Apple, which had pioneered the sale of down-
loaded music for its line of iPods, tallied its 10 billionth iTunes download in early
2010.59 The growth of these enterprises has cut into the practice of unauthorized
file sharing.60 After all, it doesn’t require much of a financial commitment, usually
less than a dollar, to obtain a hit song. Legal streaming (which entails the one-time
playing of a movie or musical selection) and downloading also retain a key advantage of illegal
downloading—the convenience of instantly acquiring a song without having to go
to a store or wait for a mail delivery. But piracy, although diminished, has not gone
away, and the fate of file sharing will ultimately depend on legal decisions, corporate
strategies, and consumer tastes. In the meantime, the controversy over file sharing
provides a good illustration of how technological change can be intimately con-
nected to some fundamental human concerns: freedom, money, and power.
Privacy in the Digital Age
As has been noted several times in this book, few technologies are unalloyed bless-
ings; problems usually accompany advances. For example, we saw in Chapter 9 that
modern genetics, in addition to being the basis of new medical treatments, also can
provide intimate information about individuals, everything from hitherto unknown
family relationships to propensities to contract certain diseases. Such is also the
case with the wired world. All of the communications technologies covered in this
chapter have the potential to undermine personal privacy, sometimes seriously. As
users of e-mail know (or should know), an e-mail message is more like a
postcard than a letter. Its contents can be easily accessed and used against the sender
or receiver. The use of social media and Internet search engines leaves a record
that contains quite a lot of information about the user. Such records create new mar-
keting possibilities, but sometimes at the expense of individual privacy. Facebook
users were outraged in 2007 when they discovered that the social networking site
was tracking their activities on participating websites and reporting them to their
Facebook friends.61
An online petition involving tens of thousands of users ended these practices,
but the threat to privacy for users of social network sites remains. One of a site’s
chief commercial attractions is the wealth of information that its pages offer to
marketers, but the use of this information by marketers could constitute an invasion
of personal privacy. Facebook was cited by the Federal Trade Commission in 2011
over the violation of promises to users that their privacy would be respected. This
resulted in a settlement whereby Facebook agreed to a number of stipulations aimed
at protecting the privacy of its users and to monitoring by the FTC of its practices
for 20 years.62 But the fact remains that Facebook pages contain massive amounts
of personal data that marketers of every description intensely covet. Reconciling
Facebook’s commercial appeal with user privacy will be a difficult balancing act.
Social media are hardly the only source of assaults on privacy. Keeping pri-
vate information out of the hands of marketers and others will pose continual
challenges for everyone who lives in the digital era. For example, certain recently
manufactured cell phones are capable of secretly recording keystrokes made when
placing telephone calls and sending text messages.63 The recording of keystrokes,
or “keylogging” as it is known, has been used by the police in a few organized crime
cases, but it can also be used for clandestine eavesdropping by private parties. The
emerging technology of Radio Frequency Identification (RFID) gives governments,
businesses, and individuals the ability to track the movements of people who have
RFID tags incorporated into their passports, drivers’ licenses, credit cards, employee
access badges, student ID cards, and even library cards.64 Word processing docu-
ments may contain hidden “metadata” such as the name of the writer and the type
of computer that was used.65 The installation of a GPS device on a person’s car also
offers new opportunities to know the whereabouts of an individual, although the
U.S. Supreme Court, in its first decision regarding digital searches, ruled that such
an action requires a court order.66 Finally, drones, also known as unmanned aerial
vehicles (UAVs), offer many opportunities for spying on people and places. Border
control agents and some police forces are beginning to use them as inexpensive
substitutes for manned helicopters.67 It is also possible for members of the public to
buy or build their own drones for a few hundred dollars.68
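To make the point about hidden document metadata concrete, the following minimal Python sketch reads the authorship fields that word processors quietly embed in a document. (This example is illustrative rather than drawn from the text: the file name report.docx is a placeholder, and the sketch assumes the standard layout of .docx files, which are ordinary ZIP archives containing a docProps/core.xml properties file.)

import sys
import zipfile
import xml.etree.ElementTree as ET

# XML namespaces used by the core-properties part of a .docx package.
NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
    "dcterms": "http://purl.org/dc/terms/",
}

def show_metadata(path):
    """Print the hidden authorship metadata embedded in a .docx file."""
    with zipfile.ZipFile(path) as zf:
        root = ET.fromstring(zf.read("docProps/core.xml"))
    for label, tag in [("Author", "dc:creator"),
                       ("Last modified by", "cp:lastModifiedBy"),
                       ("Created", "dcterms:created")]:
        node = root.find(tag, NS)
        if node is not None and node.text:
            print(label + ": " + node.text)

if __name__ == "__main__":
    show_metadata(sys.argv[1] if len(sys.argv) > 1 else "report.docx")

No forensic tools are required: anyone who receives a copy of the file can read these fields, which is precisely why such metadata can become a privacy liability.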
One computer entrepreneur has famously claimed, “You already have zero pri-
vacy. Get over it.” It can be safely countered that this statement is not entirely true
today, for there are many defenses against privacy-invading technologies. But those
of a pessimistic disposition may argue that this obituary for personal privacy, while
not completely applicable today, is not wrong, only premature.
The Electronic Media in Modern Society
In considering the consequences of electronic media, beginning with radio and tele-
vision, and continuing with the numerous forms of digital media available today, we
are left with many unresolved issues. As with other major technological changes,
the electronic media have advanced in conjunction with other historical changes,
making it very difficult to come to an exact reckoning of their unique contributions
to culture and society. To put it slightly differently, to understand the influence of
the electronic media, it is necessary to consider the larger contexts within which
they emerged and diffused.
The electronic media are only the latest in a series of technologies that have
expanded our ability to communicate with one another. As human societies have
expanded in size, complexity, and mobility, the need to tie them together through
improved communications media has increased apace. As we have seen, visionaries
described technologies that would transmit sounds and pictures over great distances
long before the realization of these dreams was technically possible. Even so, as we
have seen in earlier chapters, the need for something does not ensure that it eventually
will be produced. The technical capacity has to be created, and individuals and
organizations have to come forward with money and other resources to support it.
One of the major reasons for the success of radio, television, and the Internet can be
found in the solid financial support that they attracted. During its early years, radio
advanced because military, governmental, and business organizations perceived its
relevance to their needs. Later, the profit-making potential of radio and then tele-
vision was eagerly exploited, and the market created by commercial broadcasting
further stimulated technological development. More recently, the Internet began as
a creation of the military, was further developed as a means of business and personal
communication, and toward the end of the twentieth century emerged as a popular
medium with seemingly endless commercial possibilities. In the opening years of the
twenty-first century there has been a massive expansion in the products and services
tied to the Internet, and we surely have not seen the last of it.
The electronic media have become essential features of modern life because they
have helped to mitigate the disruption of stable community ties that has been a promi-
nent feature of the modern era. Economic and social changes have produced a great
deal of social and geographic mobility, as well as corresponding feelings of rootlessness.
In response, first radio and then television contributed to the construction of a com-
mon culture that served as at least a partial substitute for attenuated local ties. The
effects of the Internet, social media, and other digital media are less easily characterized.
Whereas television and radio formerly attracted large audiences through the broadcast-
ing of a small range of programs, cable television and some of the key products of the
Internet—websites, blogs, tweets, and social networks—are engaged in “narrowcast-
ing” that plays to a much more limited audience. The consequences of audience seg-
mentation are still somewhat speculative, but there can be little doubt that they have
contributed to the political and cultural polarization that is characteristic of our times.
Although users of social media sites may list hundreds of “friends,” a virtu-
al community is not the same as one based on long-lasting face-to-face relationships.
As we have seen, person-to-person media such as the telephone and Facebook can
strengthen existing social connections, but they should not be seen as substitutes for
them. The loosening of local ties has not been reversed by the spread of new media,
and according to some social scientists we have become an increasingly individual-
ized society.69 For many people the electronic media have filled a void by bringing
information, entertainment, and new means of communication that have helped to
mitigate weakened connections to family, friends, and community.70 These media
can be characterized as “technological fixes,” and as we saw in Chapter 2, technology
cannot always fix problems that are social rather than technical in nature.
Questions for Discussion
1. For long-range communication, do you prefer e-mail or the telephone? Are
some communications more suitable for e-mail and others for the telephone?
Why? Are there times when a barrage of e-mail messages leaves you feeling
overwhelmed and frustrated?
2. As noted in this chapter, most smartphone owners use only a few apps for their
phones. Are you one of them? Have you downloaded some apps only to later
delete them? Why?
3. Some of the most devoted designers and players of games have argued that
game playing can impart many useful virtues such as cooperation with others,
optimism, and feelings of leading a purposeful life. Do you agree? Do the ben-
efits of gaming outweigh the drawbacks?
4. The rapid expansion of the World Wide Web has put vast amounts of informa-
tion and entertainment within easy access. The Web also contains many sites
that dispense pornography, gambling opportunities, and the ravings of hate
groups. Although these sites can be blocked through the installation of filters,
these are not completely effective and may limit access to important information
such as AIDS awareness. Should efforts be made to limit the spread of “objec-
tionable” material? How might this be done? What could be lost as a result?
5. Have you or any of your friends ever shared a file that was protected by a
copyright? Did you have any qualms about doing so? Are the big media firms
justified in treating file sharing as a criminal activity to be prosecuted to the
fullest extent of the law?
6. How much do you worry about the potential and real loss of your privacy due to
increasingly sophisticated technologies for eavesdropping, tracking, recording,
and general snooping? Is the loss of some privacy a fair price for the increased
communications capabilities made possible by today’s electronic technologies?
7. Do you have a social network account? How many friends and associates do
you have in your network? What is their relationship to friends with whom you
interact offline?
Notes
1. Janet Abbate, Inventing the Internet (Cambridge, MA: MIT Press, 1999).
2. Brian Winston, Media Technology and Society: A History: From the Telegraph to the
Internet (London and New York: Routledge, 1998), p. 324.
3. Alex Roland, “Will Nuclear War Interrupt My E-mail?” Address at the annual meeting
of the Society for the History of Technology, Henry Ford Museum, Dearborn, Michigan,
October 14, 1999.
4. Winston, Media Technology, pp. 331–332.
5. “Internet 2011 in Numbers” (January 17, 2012), accessed on April 5, 2012, at http://royal
.pingdom.com/2012/01/17/internet-2011-in-numbers/.
6. Malcolm Gladwell, “Clicks and Mortar,” The New Yorker (December 6, 1999): 106–115.
7. Eduardo Porter, “Keeping the Internet Neutral,” New York Times (May 8, 2012),
accessed on August 18, 2012, at http://www.newyorktimes.com/2012/05/09/business
/economy/net-neutrality.
8. Matthew Ingram, “Open vs. Closed: What Kind of Internet Do We Want?” GigaOM
(March 23, 2012), accessed on June 29, 2012, at http://gigaom.com/2012/03/23/open
-vs-closed-what-kind-of-internet-do-we-want/.
9. “Internet 2011 in Numbers,” op. cit.
10. Michael Fitzgerald, “Call It the Dead E-Mail Office,” Wired (June 7, 2004), accessed on
April 20, 2012, at http://www.wired.com/culture/lifestyle/news/2004/06/63733.
11. Nielsen Wire, “What Americans Do Online: Social Media and Games Dominate
Activity” (August 2, 2010), accessed on September 26, 2012, at http://blog.nielsen
.com/nielsenwire/online_mobile/what-americans-do-online-social-media-and-games
-dominate-activity/.
12. “Smartphones Overtake Client PCs in 2011,” Canalys (February 2, 2012), accessed on
June 29, 2012, at http://www.canalys.com/newsroom/smart-phones-overtake-client
-pcs-2011.
13. Martin Giles, “Beyond the PC,” The Economist (October 8, 2011): 5.
14. Nielsen Wire, “Smartphones Account for Half of all Mobile Phones, Dominate
New Phone Purchases in the US” (March 29, 2012), accessed on April 14, 2012, at
http://blog.nielsen.com/nielsenwire/online_mobile/smartphones-account-for-half-of-all
-mobile-phones-dominate-new-phone-purchases-in-the-us.
15. Roger Cheng, “Apple: 18 Billion Apps Have Been Downloaded,” CNET News
(October 4, 2011), accessed on April 20, 2012, at http://news.cnet.com/8301-13579_3-
20115435-37/apple-18-billions-apps-have-been-downloaded/.
16. Giles, “Beyond the PC,” op. cit.
17. mobiThinking, “Global Mobile Statistics Part E: Mobile Apps, Apps Stores, Pricings,
and Failure Rates,” accessed on June 27, 2012, at http://mobithinking.com/mobile
-marketing-tools/latest-mobile-stats/e#lotsofapps.
18. Daniel Rubino, “Study: Apps Not That Important to Smartphone Users,” WP Central
(January 20, 2012), accessed on April 14, 2012, at http://www.wpcentral.com/study
-end-apps-not-important-smartphone-users.
19. Antonio Regalado, “How Mobile Phones Jump-Start Developing Economies,”
Technology Review (November 22, 2010), accessed on June 29, 2012, at http://www
.technologyreview.com/news/421769/how-mobile-phones-jump-start-developing
-economies/.
20. Killian Fox, “Africa’s Mobile Economic Revolution,” The Observer (July 23, 2011),
accessed on June 29, 2012, at http://www.guardian.co.uk/technology/2011/jul/24
/mobile-phones-africa-microfinance-farming.
21. Deanna Zandt, Share This! How You Will Change the World with Social Networking (San
Francisco: Berrett-Koehler, 2010), p. 27.
22. United Nations, “Information and Communications Technology (ICT),” accessed on
April 9, 2012, at http://www.un.org/cyberschoolbus/briefing/technology/tech .
23. Reuters, “Facebook IPO Tests Easy Growth Assumptions” (February 12, 2012), accessed
on April 9, 2012, at http://www.reuters.com/article/2012/02/02/us-facebook-growth
-idUSBRE8110EG20120202.
24. Ibid.
25. “LinkedIn’s Astonishing Growth: By the Numbers,” The Week (March 24, 2011),
accessed September 26, 2012, at http://theweek.com/article/index/213445/linkedins
-astonishing-growth-by-the-numbers.
26. “Twitter: Six Years Old and 340 Million Tweets Every Day,” EContent (March 22,
2012), accessed on April 20, 2012, at http://www.econtentmag.com/Articles/Article
Reader.aspx?ArticleID=81531.
27. Sherry Turkle, “The Flight from Conversation,” New York Times (April 21, 2012),
p. 2, accessed on September 7, 2012, at http://www.nytimes.com/2012/04/22/opinion
/sunday/the-flight-from-conversation.html?pagewanted=1&_r=2.
28. Vincent Chua, Julia Madej, and Barry Wellman, “Personal Communities: The World
According to Me,” in Peter Carrington and John Scott (eds.), Handbook of Social
Network Analysis (Thousand Oaks, CA: Sage, 2011), p. 106.
29. Ibid., p. 105.
30. Claude S. Fischer, America Calling: A Social History of the Telephone to 1940 (Berkeley,
CA: University of California Press, 1992).
31. Bill Wasik, “Crowd Control,” Wired (January 2012): 78.
32. Wael Ghonim, Revolution 2.0: The Power of the People Is Greater Than the People in
Power (New York and Boston: Houghton Mifflin Harcourt, 2012), pp. 292–293.
33. Evgeny Morozov, “The Digital Dictatorship,” The Wall Street Journal (February 20, 2010),
accessed on September 6, 2012, at http://online.wsj.com/article/SB1000142405274870
3983004575073911147404540.html.
34. Malcolm Gladwell, “Small Change: Why the Revolution Will Not Be Tweeted,” The
New Yorker (October 4, 2010), accessed on April 11, 2012, at http://www.newyorker
.com/reporting/2010/10/04/101004fa_fact_gladwell?currentPage=4.
35. The Nielsen Company, “2010 Media Industry Fact Sheet,” accessed on April 20, 2012,
at http://blog.nielsen.com/nielsenwire/press/nielsen-fact-sheet-2010 .
36. Entertainment Software Association, “Essential Facts about the Computer and Video
Game Industry,” p. 10, accessed on July 3, 2012, at http://www.theesa.com/facts/pdfs
/ESA_EF_2012 .
37. Tim Cross, “All the World’s a Game,” The Economist (December 10, 2011), accessed on
July 3, 2012, at http://www.economist.com/node/21541164.
38. “Essential Facts,” p. 3.
39. “Essential Facts,” p. 9.
40. Keith Stuart, “Modern Warfare 3 Smashes Records: $775 Million in Five Days,” The
Guardian (November 18, 2011), accessed on April 20, 2012, at http://www.guardian
.co.uk/technology/2011/nov/18/modern-warfare-2-records-775m39.
41. Melissa Terlecki et al., “Sex Differences and Similarities in Video Game Experience:
Implications for the Gaming Industry,” Current Psychology, 30, 1 (December 2010): 22–33.
42. Heeter, C., et al., “Alien Games: Do Girls Prefer Games Designed by Girls?” Games and
Culture, 4, 1 (2009): 74–100. Cited in ibid.
43. George Comstock and Erica Scharrer, Media and the American Child (Amsterdam:
Elsevier, 2007), pp. 234–238.
44. Ibid., pp. 236–237.
45. Ibid., p. 236.
46. Science Daily, “American Psychiatric Association Considers ‘Video Game Addiction’”
(June 25, 2007), accessed on June 29, 2012, at http://www.sciencedaily.com/releases
/2007/06/070625133354.htm.
47. J.P. Mangalindan, “Play to Win: The Game-based Economy,” Fortune and CNN
Money (September 3, 2010), accessed on September 26, 2012, at http://tech.fortune
.cnn.com/2010/09/03/the-game-based-economy/; Gamification Wiki, “Gamification”
(June 11, 2012), accessed on September 26, 2012, at http://gamification.org/wiki
/Gamification.
48. Internet World Stats, “Internet Users in the World by Geographic Region—2011,”
accessed on April 19, 2012, at http://www.internetworldstats.com/stats.htm.
49. The World Bank, “Internet Users (per 100 People),” accessed on April 18, 2012,
at http://data.worldbank.org/indicator/IT.NET.USER.P2?order=wbapi_data_
value_2008+wbapi_data_value+wbapi_data_value-last&sort=asc&cid=GPD_44.
50. U.S. Department of Commerce, “Exploring the Digital Nation: Computer and Internet
Use at Home,” (November 2011), accessed on April 29, 2012, at http://www.esa
.gov/sites/default/files/reports/documents/exploringthedigitalnation-computerandinter-
netuseathome .
51. Ibid.
52. Ibid.
53. Will Oremus, “New Digital Divide: Whites Less Likely to Own Smartphones,” Slate
(August 7, 2012), accessed on September 7, 2012, at http://www.slate.com/blogs/future
_tense/2012/08/07/digital_divide_minorities_more_likely_than_whites_to_own_
smartphones.html.
54. Jamilah King, “How Big Telecom Used Smartphones to Create a New Digital Divide,”
Colorlines (December 6, 2011), accessed on September 7, 2012, at http://colorlines.com
/archives/2011/12/the_new_digital_divide_two_separate_but_unequal_internets
.html.
55. “Videotaping without Guilt,” New York Times (January 22, 1984), section 4:1.
56. Sam Costello, “Court Orders Napster to Stay Shut” (March 25, 2002), accessed on
April 23, 2012, at http://www.pcworld.com/article/91144/court_orders_napster_to_stay
_shut.html.
57. Felix Oberholzer-Gee and Koleman Strumpf, “The Effect of File Sharing on Record
Sales: An Empirical Analysis,” accessed on August 14, 2008, at http://www.utdallas
.edu/~liebowit/intprop/OS%202006-12-12 .
58. “Copyrights and Internet Piracy (SOPA and PIPA Legislation),” New York Times
(February 8, 2012), accessed on June 29, 2012, at http://topics.nytimes.com/top
/reference/timestopics/subjects/c/copyrights/index.html.
59. Apple Press Info, “Apple’s App Store Downloads Top Ten Billion” (January 22, 2011),
accessed on April 22, 2012, at http://www.apple.com/pr/library/2011/01/22Apples
-App-Store-Downloads-Top-10-Billion.html.
60. The American Assembly, “The Copy Culture Survey: Infringement and Enforcement
in the US” (November 15, 2011), accessed on September 8, 2012, at http://piracy.ssrc
.org/the-copy-culture-survey-infringement-and-enforcement-in-the-us/.
61. Daniel Solove, “The End of Privacy?” Scientific American 299, 3 (September 2008): 104.
62. Robert Hof, “What Facebook’s FTC Privacy Settlement Means to Marketers,” Forbes
(November 29, 2011), accessed on April 10, 2012, at http://www.forbes.com/sites
/roberthof/2011/11/29/what-facebooks-ftc-privacy-settlement-means-to-marketers/.
63. Andy Greenberg, “Phone ‘Rootkit’ Maker Carrier IQ May Have Violated Wiretap
Law in Millions Of Cases,” Forbes (November 30, 2011), accessed on April 30, 2012,
at http://www.forbes.com/sites/andygreenberg/2011/11/30/phone-rootkit-carrier-iq-may
-have-violated-wiretap-law-in-millions-of-cases/.
64. Katherine Albrecht, “RFID Tag—You’re It.” Scientific American 299, 3 (September
2008): 84–85.
65. Simson L. Garfinkel, “Information of the World Unite!” Scientific American 299,
3 (September 2008): 84–85.
66. Joan Biskupic, “Supreme Court Rules GPS Tracking Requires Warrant,” USA Today
(January 24, 2012), p. 2A.
67. Greg McNeal, “A Primer on Domestic Drones: Legal, Policy, and Privacy Issues,”
Forbes (April 10, 2012), accessed on June 28, 2012, at http://www.forbes.com/sites
/gregorymcneal/2012/04/10/a-primer-on-domestic-drones-and-privacy-implications/.
68. Chris Anderson, “Here Come the Drones,” Wired (July 2012): 100–111.
69. The most influential book on this issue is Robert Putnam, Bowling Alone: The Collapse
and Revival of American Community (New York: Simon and Schuster, 2000).
70. Raymond Williams, The Long Revolution (New York: Penguin Books, 1965), pp. 26–27.
Part Six
The Tools of Destruction
War presents the dark side of technological advance. It is a sad fact of human history
that a large portion of human ingenuity has been directed toward improving the
ability to kill and destroy. Chapter 15 traces the evolution of weapons from ancient
times to the present, paying particular attention to the reciprocal relationships
between weaponry on the one hand, and military organization, cultural orienta-
tions, and political structures on the other.
Chapter 16 picks up where the previous chapter ended with a description of
today’s arsenal of cruise missiles, smart bombs, spy satellites, and drones. The devel-
opment and deployment of these devices have come at considerable expense, yet for
all of their improvements in accuracy and destructive power, they have not
conferred unchallenged military supremacy. Relatively simple weapons continue to
be used to deadly effect by forces with limited financial resources. Moreover, a high
level of technological sophistication increases vulnerability to cyber attacks that
can cripple a modern economy.
Chapter 17 reprises a now-familiar theme: the influence of culture, social arrange-
ments, economics, and politics on the development and application of technology.
It describes how the advance or the retardation of particular weapon systems has
been associated with particular social arrangements and organizational interests.
The chapter ends with a consideration of one of the greatest imperatives of our time:
the reduction and eventual elimination of nuclear weapons. The history of earlier
efforts to eliminate or control new weapons shows that efforts of this sort have had
mixed results at best.
Chapter Fifteen
Weapons and Their Consequences
For most of human existence, the baser instincts of humanity were checked by the
limitations of the weapons that could be used. Clubs and stone knives could be used
at close range, and rocks and other missiles could be hurled at a target a few dozen
yards away. There are still places in the world where warfare is conducted as it was
thousands of years ago. Unfortunately, however, human ingenuity has not always
been confined to activities that make our lives better. On many occasions our most
inventive minds have used their talents to find new ways of wreaking havoc on other
human beings. Warfare presents the dark side of technological progress; although
technological advances have bettered many aspects of human life, they have also
led to a terrifying expansion in the ability to kill and destroy.
Military Technology in the Ancient World
The technologies that marked the beginning of civilization’s development
were from the start put to martial purposes. The production of metals, first bronze
and then iron, resulted in more lethal weapons, as well as armor to protect against
them. Around 4,000 years ago, armies in the Middle East began to deploy archers
in horse-drawn chariots, greatly increasing the mobility and striking power of their
attacking forces. So equipped, the armies of Assyria and Persia successfully invaded
their neighbors’ territories and established extensive empires.
Effective as they were, military forces based on the chariot and bow did not con-
quer everything in their path. Horses and chariots operated with great effectiveness in
the open expanses of Central Asia and the Middle East, but they were far less useful
in mountainous areas. Equally important, horses had to be fed, and in the absence of
adequate forage an army using large numbers of horses quickly outran its supply lines.
In contrast, the armies of ancient Greece relied on the hoplite, a foot soldier armed
with a spear and a short sword and protected by a shield, helmet, breastplate, and shin
guards. The effectiveness of a Greek fighting force derived from its being organized
into a phalanx, a mass formation of infantrymen at least eight rows deep. Deployed
in close ranks, the warriors’ shields offered a modicum of protection, while the spears
projecting out from the first three rows presented a fearsome offensive threat. A col-
lision between two phalanxes was the essence of Greek warfare, with victory usually
going to the side that demonstrated the greater degree of discipline and cohesion.1
The Romans brought a greater degree of mobility to infantry combat by deploy-
ing their soldiers in smaller units of 60 to 80 men each, who in turn were aggre-
gated into larger units that together formed a legion of 3,600 to 4,600 infantry and
300 cavalry.2 Although armaments were little changed from the time of the Greeks,
the Romans carved out a vast empire by employing superior tactics and, above
all, because they could rely on the discipline and coordination of their legions.
Mobility, after all, is no advantage if it leads to the dispersal of troops and the lack
of unified action. The cohesion of the Greek phalanx depended on its being com-
posed of groups of warriors belonging to social networks of friends and relatives. The
cohesion of the Roman legions derived from two elements unknown to the Greeks:
extensive drilling and the leadership of a permanent officer class, the centurions.3
The genius of Rome was at least as much organizational as it was technological, and
it gave the Romans a military superiority that lasted for centuries.
While the weapons of the infantry soldier changed little in the ancient world,
significant strides were made in the development of siege machinery. The most fearsome
weapons of antiquity were catapults, first used in Sicily during the fourth century b.c.
They were then employed with considerable success by Philip of Macedon, the father of
Alexander the Great, and played an important role in many of the battles waged by
the Greeks and Romans, as well as by their enemies. It is also significant that siege
machinery was the creation of some of the first specialists in technology; indeed,
the word “engineer” is derived from the Latin ingenium, an ingenious device used for
siege warfare.4
[Figure: The wall of spears of the Greek phalanx was a fearsome sight for enemy combatants. (© Look and Learn/The Bridgeman Art Library)]
Catapults hurled rocks or large arrows by releasing the stored-up energy of a tor-
sion spring, made of twisted cords of animal sinew and human hair. Hair was much
prized as a military supply; there is an ancient report of thousands of pounds of hair
being sent as a present from one king to another.5 The most powerful catapults were
capable of hurling a bolt or a small stone a distance of 800 yards (although 400 yards
was a more normal range), while a 60-pound rock thrown a distance of 150 yards
produced the best effect when fortifications were the target. A smaller catapult
(known as an onager) used by the Romans for battlefield service was capable of fling-
ing an eight-pound projectile up to 500 yards.6 The main limitation of all catapults
was that hair and sinew stretch when wet, so in damp climates the springs lost their
resiliency and therefore much of their power. Despite this limitation, these devices
played an important role in siege warfare. Each Roman legion was supported by
55 catapults, and 300 catapults were deployed by Rome during the siege of Jerusalem
in 70 a.d. Even after the invention of gunpowder and cannon, many decades were
to pass before the destructive power of these weapons was surpassed.
The use of such siege machinery conferred great strength on offensively minded
empire builders like the Romans. The crude fortifications of the “barbarians” usually
provided only scant defenses against invading armies armed with these early mis-
sile launchers. Conversely, Roman military superiority was less evident when Rome
itself was put on the defensive during the barbarian invasions. Roman defensive
technology was inferior to the offensive weaponry of the time; this incapacity, when
coupled with the many internal problems besetting Rome, resulted in the eventual
disintegration of one of the world’s greatest empires.
Military Technology and the Feudal Order
Military technologies made few advances during the centuries following the fall
of Rome. But in the sixth century a technology emerged that added a new
dimension to warfare. This innovation was not itself a weapon, but it made existing
weapons far more effective. The device was the stirrup, an invention of Chinese
origin that had diffused to Europe by the ninth century. Horses had long been used
in battle; as we have seen, many battles were won through the use of horse-drawn
chariots from which arrows could be shot and spears thrown. But a warrior mounted
directly on a horse was precariously perched. Although cavalry charges were some-
times employed in battle, in most cases when it came time to fight, the warriors
dismounted and fought on foot.
The stirrup greatly amplified the destructive potential of a warrior by fixing
him firmly to his horse. The horse was no longer merely a means of conveyance; its
power could now be used to augment the striking power of a spear or lance. As we
shall see, the horse-mounted knight was by no means invincible, but his determined
charge could be devastating. Although mounted combat was not the dominant
mode of medieval warfare, in many places the local populace suffered considerable
insecurity if they lacked the protection of a mounted warrior.
The central importance of the mounted knight produced some significant eco-
nomic and political changes. A horse was an expensive item, and its maintenance
required considerable outlay. At the same time, the mounted knight was the prod-
uct of years of specialized training, during which time he could not be engaged in
productive activities. The maintenance of a horse, knight, his assistants, and their
training required 300 to 400 acres of land.7 Even kings lacked the financial means
and administrative capacity to raise and support an army of knights on their own.
Instead, they secured the services of mounted warriors by granting them
tracts of land in return for their military support. This was the basis of the feudal
order, the dominant form of political organization throughout the Middle Ages.
Feudalism was an inherently decentralized system, and armies often were temp-
orary and disorderly assemblies. Knights relied on their individual skill and courage,
and were disinclined to submit to centralized discipline and strategic planning.
Battles were often chaotic affairs involving a great deal of individual combat
and little in the way of careful tactical planning. On occasion, opposing forces
might spend several days just trying to find each other so that a battle could take
place. The military technologies of the Middle Ages thus led to a retreat from
the principles of discipline and precise organization that had made Roman armies
so formidable.
Horse-mounted combat also generated the culture of chivalry. The word itself
derives from the French cheval, or horse. The horse-mounted knight was different
from ordinary soldiers by virtue of his noble birth and the long period of training
that was required to develop his proficiency in combat. The chivalrous knight lived
by a code that regulated certain aspects of warfare. Noncombatants—provided that
they were Christians—were to be left in peace, and combatants were supposed to
adhere to specific rules of battle. Above all, war was seen primarily as a contest
between members of a distinctive warrior class, in which individual glory could be
as important as the attainment of military objectives. War still remained a grisly
business, but as long as it was considered to be a contest between participants in a
common culture, some limits were set on the manner in which it was waged, and
the land was usually spared the total devastation that has been a hallmark of more
modern conflicts.
New Weapons and the Decline of Feudalism
Knightly combat was only one aspect of medieval warfare, and usually not the
most important one. Sieges were the most common military engagements, while in
pitched battles foot soldiers outnumbered mounted knights by five or six to one.8
Cavalry charges against numerically superior infantrymen were rarely successful,
and beginning in the fourteenth century the utility of the mounted warrior was
further undermined by the widespread deployment of the pike. Used to especially
good effect by the Swiss, the 19-foot-long pike was wielded by a solid formation of
soldiers with four rows of pikemen extending their weapons to the front line—a
formation that resembled the ancient Greek phalanx in many respects. United by
an iron discipline and a high morale that came from being free men in the service of
their canton (a mini-state that often had its own army), the Swiss pikemen became
the most feared soldiers in late medieval Europe. Hard-charging knights were
dispersed by the pikemen and then assaulted by other soldiers wielding halberds
(long battle axes).9
The vulnerability of mounted knights was further increased by the use of
another infantry weapon, the longbow. Archers had been part of combat since
Neolithic times.10 But they were limited by the design of the weapons they
used. The bow was transformed into a decisive weapon simply by making it
longer. At the same time, however, the longbow was a difficult weapon to use
properly, for it required considerable strength and skill, and that came only with
extensive practice.11
First used by the Welsh during their resistance to English conquest, the
longbow was adopted by King Edward I during the early fourteenth century. In
skilled hands, this bow, which was six feet or more in length, could be a devastat-
ing weapon, capable of rapidly firing arrows that could do damage up to 650 feet
away.12 Drawn back to the ear instead of the chest, as was the case with smaller
bows, the longbow was capable of driving an arrow through several inches of oak,
and although it could not penetrate the best plate armor, it could pierce chain
mail or the joints in plate armor.13 Horses, having less armor protection, were
especially vulnerable, and when a horse went down its rider was in serious trouble.
The prowess of English archers was clearly demonstrated at the Battle of Crecy in
1346. Despite a two-to-one numerical inferiority, the English forces won a decisive
victory. A rain of arrows penetrated the chain-mail armor of the French knights and
wrought even greater destruction on their horses.
The mounted knight was also threatened by an eleventh-century invention,
the crossbow. A mechanically sophisticated device, the crossbow used a lever or a
crank-and-ratchet assembly to draw the string. The crossbow had considerable stopping
power, as it fired a bolt that weighed a half pound or more. Its range and accuracy
were superior to those of the longbow, but the cumbersome process of winding it
restricted its rate of fire to only two bolts per minute. It was thus most effective
when the archer could find cover where he could safely reload.14 Despite this defi-
ciency, the crossbow was a formidable weapon that made the life of the knight all
the more precarious.
One answer to these new offensive weapons was the replacement of chain mail
by plate armor. This armor became increasingly thick, until by the late sixteenth
century a mounted knight was weighted down with as much as a hundred pounds
of armor. His horse, too, required protection, for a felled horse often meant death
for its dismounted rider. The result was a crippling loss of mobility for the knight
and his steed. Maneuverability and speed, hitherto the great virtues of the mounted
warrior, were lost, and the knight ceased to be the mainstay of the European battle-
field. Mounted soldiers continued to play an important part in combat, not as
lance-wielding knights but as cavalry armed with pistols and sabres. If carefully
coordinated with artillery salvos, a cavalry unit could force the enemy to maintain
solid ranks, thus presenting a convenient target for artillery.15 But the traditional
knight was no longer a major player in the game. At best he was an auxiliary, at
worst a vulnerable target.
The castle, the other great symbol of the Middle Ages, was also challenged
by new military technologies. Early medieval fortifications were little more than
mounds of earth surrounded by wooden walls, a far cry from the elaborate defenses
constructed by the Romans. During the Crusades, Europeans learned a great deal by
observing the castles constructed by their Muslim opponents, and this knowledge
diffused through Europe after the crusaders returned home. Throughout the early Middle
Ages, castles stood as virtually impregnable redoubts that conferred as much secu-
rity as could be hoped for during those turbulent times; about the only way to con-
quer a fortified position was to starve it into submission.
Attackers attempted to breach the walls of castles and other fortifications with
weapons little changed from Roman times, until in the twelfth century besieging
armies began to employ a device known as a trebuchet. This weapon consisted of a
long arm that was unequally balanced on a fulcrum. The long end, which carried the
projectile, was held in place by a catch, while the short end was loaded with heavy
weights. When the catch was released, the long arm flew up and hurled the pro-
jectile in the direction of the enemy. These could be formidable weapons, capable
of delivering a 300-pound projectile a distance of up to 300 yards and weights of
more than a ton for shorter distances.16 Although some trebuchets had less range
than the catapults of antiquity, the greater weight of the projectile made it a more
effective siege weapon.17 Also, since it did not depend on torsion springs made from
hair as the older missile launchers did, it was effective in all kinds of weather. While
improved siege engines were not always up to the task of battering down sections of
walls, they could be used to demoralize a castle’s inhabitants by lobbing incendiar-
ies, live snakes, and dead horses over the walls. And on occasion they were used to
speedily return prisoners to their place of origin.18
[Figure: The medieval trebuchet used a counterweight and some human assistance to hurl a projectile placed at the end of the long lever arm. (The Granger Collection, New York)]
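The performance figures just cited follow from elementary mechanics. As an illustrative sketch (the symbols and the efficiency factor are assumptions for illustration, not from the text): a counterweight of mass $M$ falling through a height $h$ releases the energy $Mgh$, and if the machine converts a fraction $\eta < 1$ of it into projectile motion, a projectile of mass $m$ leaves the arm at speed $v$ given by

$\tfrac{1}{2}mv^{2} = \eta Mgh$, so $v = \sqrt{2\eta Mgh/m}$,

with a flat-ground range of roughly $v^{2}/g = 2\eta Mh/m$ at the optimal 45-degree release. The trade-off recorded above—a 300-pound projectile thrown 300 yards, or a far heavier one thrown a shorter distance—is this relation at work: with the counterweight fixed, doubling the projectile’s mass roughly halves the range.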
The Gunpowder Revolution
The crossbow, longbow, pike, and trebuchet posed a severe challenge to the knight
and the feudal system that maintained him. New weapons based on gunpowder
sealed his fate. Incendiary weapons had long been used in combat. Flaming arrows,
vases filled with a mixture of pitch, sulphur, and boiling oil, and primitive grenades
filled with naphtha were all used in battle. One of the most terrifying weapons
used in antiquity was Greek fire. Although its ingredients are a matter of debate
even today, there can be little doubt of its effectiveness. Like modern-day napalm,
it clung to whatever it came into contact with and burned fiercely, even on water.
First used in seventh-century Byzantium, it served as a decisive weapon in naval
engagements and as an antisiege weapon. Despite the tactical advantage that it
offered, Greek fire was used only sparingly, in part because it was more effective
as a defensive weapon than as an offensive one. No less important, the rulers of
Byzantium were reluctant to make widespread use of it for fear that it would fall
into enemy hands, a concern that has limited the diffusion of many other “secret
weapons” throughout history.19
Early incendiary devices were generally employed as auxiliaries to armies whose
main weapons continued to be swords, spears, and bows. With the invention of
gunpowder, the nature of warfare underwent a profound transformation. As with
many of the inventions that transformed medieval European society, gunpowder
originated in the East. First used for medicinal purposes by its Chinese inventors,
gunpowder became the propellant for simple rockets during the Tang Dynasty
in the eighth century.20 When it first appeared in Europe during the early fourteenth
century, gunpowder was used to fire large arrows and stone cannonballs. At first,
these artillery pieces were little more than a nuisance. They had less destructive
power than a good trebuchet, and their chief advantage was that they could be built
more cheaply and be more easily transported than mechanical missile launchers.
Their barrels were often made of wood; when iron was used the barrel was made
from parallel strips that were welded together. In neither case could the cannon
withstand repeated firing, and their projectiles were stone balls that often shattered
when fired.
Artillery began to be used against fortifications by about 1420, and by 1453
Turkish attackers successfully used cannon to batter down the walls surround-
ing Constantinople prior to taking the city. In the succeeding years, a number of
improvements resulted in larger, more accurate and reliable weapons. Iron and
bronze gun barrels were cast through the use of techniques that had been used for
the production of church bells—an ironic application of a peaceful technology to a
martial one. Iron cannonballs replaced stone ones. “Corned” gunpowder, which was
made into coarse grains, produced a more consistent and rapidly burning charge. As
a result of these innovations, by the middle of the sixteenth century the fortified
castles that had long been immune to direct attack were now vulnerable.
Still, the age of the castle was by no means at an end. Military engineers strength-
ened castles by backing their walls with loose dirt, which absorbed much of a can-
nonball’s force, and by constructing bastions protruding from castle walls, from
which their own cannon could be fired in multiple directions. Such fortifications
provided a good deal of security until mobile artillery, developed during the late eigh-
teenth century, concentrated the firepower of several cannon, once again tipping the
balance in favor of the besiegers. Until then, castles conferred a great deal of security
against attack and did much to preserve the independence of smaller states, thereby
preventing the political consolidation of Europe.21
Smaller, more mobile artillery pieces were also used to good effect on the
battlefield. Of particular importance were the innovations of Sweden’s Gustavus
Adolphus during the Thirty Years’ War (1618–1648). Instead of deploying the
large field pieces commonly used, Gustavus relied on cannon that fired nine- or
four-pound balls. Light in weight, these could be handled by a small crew and rap-
idly deployed during the course of battle. Artillery took on its classic role in battle:
softening up the enemy’s lines in preparation for a charge by the cavalry and infantry.
As we have seen, technological advances often occur through the expansion
of a small-scale device or process. In the case of firearms, however, the process was
reversed. Handguns were at first nothing more than small cannon, lacking even a
stock that would allow them to be tightly held and accurately aimed, and it was
not until a century after the introduction of cannon that useful handheld firearms
began to appear in battle. Still, they had many shortcomings. Large artillery pieces
were aimed at a fixed target and then had their charge lighted through a touchhole.
In contrast, small guns had to be used against rapidly moving targets. They could
not be aimed with any degree of precision if a soldier had to shift his gaze to the
weapon’s touchhole. The answer to this problem was the matchlock, which used
a smoldering wick held by a trigger-operated clamp to ignite the powder. Even so,
firing this gun was a cumbersome process, requiring no fewer than 28 separate pro-
cedures to load, fire, and clean the gun.22
Despite these shortcomings, by the first quarter of the sixteenth century small
arms had graduated from being battlefield auxiliaries to indispensable articles of
combat.23 Subsequent improvements made firearms all the more significant. The
cumbersome matchlock eventually gave way to the wheel lock and then the flint-
lock musket, which was capable of firing three rounds per minute.
The firepower and accuracy of these weapons greatly augmented the offensive
power of armies, provided that they were used in conjunction with proper battle-
field procedures. Military commanders learned to deploy their soldiers in rows, so
that a soldier could fire his weapon and then retreat to the back rows where he
could reload in relative safety. Still, troops could not always be depended on to
use their weapons effectively. Although a musket could fire a ball several hundred
yards, it was accurate only to a distance of about 80 yards. The familiar “don’t fire
until you see the whites of their eyes” was a wise admonition, but one difficult to
adhere to under combat conditions. In the heat of battle, soldiers were inclined to
fire prematurely, and many were so distracted that they completely botched the
necessary routines used for loading these single-shot weapons, as one report from
the American Civil War indicates:24
The official report of the examination of the arms collected upon the battle-field
of Gettysburg, states that “Of the whole number received, 27,574, we found at
least 24,000 of these loaded; about one-half of these contained two loads each,
one-fourth from three to ten loads each, and the balance one load each. In many
of these guns from two to six balls have been found, with only one charge of
powder. In some, the balls have been found at the bottom of the bore with the
charge of powder on top of the ball. In some cases as many as six paper regulation
caliber .58 cartridges have been found, the cartridges having been put in the
gun without being torn or broken (preventing them from being exploded by
the percussion cap). Twenty-three loads were found in one Springfield rifle-
musket, each loaded in regular order. Twenty-two balls and 62 buckshot with
a corresponding quantity of powder, all mixed up together, were found in one
percussion smooth-bore musket.”
This sort of bungling could be expected, given the panic and paralysis often
experienced by men in combat. In order to counter it, successful military leaders
had to institute rigid battlefield procedures that had been deeply instilled through
countless parade-ground exercises. The numerous steps required for the effective use
of firearms were incessantly driven into the troops until they became virtually auto-
matic, even in the face of withering attacks from the enemy. In complete contrast
to the unruly egocentricity of knightly combat, warfare had become a routinized
procedure, and soldiers “became replaceable parts of a great military machine just
as much as their weaponry.”25
Earlier chapters have stressed the complementarity of organizational and tech-
nological changes, and nowhere is this better illustrated than in the military realm.
A well-drilled army always has an advantage over one that is lax in its ways, and the
technological development of warfare made this even more the case. The handling
of firearms required a precise series of operations; incessant drill was required if
soldiers were not to be totally unhinged during the course of battle. Adhering to an
iron discipline created by constant drilling and blindly obedient to a rigid chain of
command, European armies of the eighteenth century had become some of the most
grimly efficient organizations the world had ever seen.
The creation of these armies had a significance that transcended their impor-
tance in the military sphere, for they served as the prototype of new forms of civilian
organization: routinized, regimented, and hierarchical. Obedience to authority is
nothing new in human societies; it is, after all, one of the most important lessons
learned by young children. But this sort of authority is embodied in a particular
person—first a parent, and then another authority figure. Soldiers of this era obeyed
the authority of a king and his delegates, but, equally important, they had to submit
to standard procedures drummed in through constant repetition. In time, churches,
schools, and factories made abundant use of these principles, and even in our own
times this mode of organization is predominant in many areas of life.
War and the Centralized State
Artillery and firearms expanded the scope of warfare. Battles were more complex,
organized affairs, while at the same time sieges became more elaborate. Both forms
of warfare intensified logistical problems; according to one calculation, maintaining
an army of 50,000 required the daily provision of 475 tons of food for soldiers and
horses.26 Bullets and powder also had to be readily available. This meant that an
army required long supply lines, giving rise to continual strategic problems of main-
taining these lines, as well as considerably enlarging the zone of military operations.
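A quick back-of-the-envelope calculation (assuming short tons of 2,000 pounds; the per-head figure is illustrative, not from the text) shows why provisioning dominated strategy:

$\frac{475 \times 2{,}000 \text{ lb}}{50{,}000 \text{ men}} = 19 \text{ lb per man per day},$

much of it fodder for horses, and every pound of it had to be carried by the army, hauled along its supply lines, or foraged from the countryside.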
New forms of warfare greatly increased the cost of military campaigns. The feu-
dal nobility had the means to pay for their own arms and armor, and a longbow did
not entail large expenditures. In contrast, large contingents of musketeers and artil-
lerymen required heavy financial outlays if they were to be adequately trained and
equipped. In most cases only the monarch of a country could meet these demands.27
At the same time, firearms gave centralized states a decisive advantage in expand-
ing their territory and increasing control throughout their domain.28 New weapons
technologies thus reinforced the rising tide of nationalism, centralization, and royal
absolutism that characterized European history from the sixteenth century onward.
The converse was also true; the growth of the nation-state generated a large demand
for improved weaponry. As Charles Tilly has summed up this positive feedback loop,
“War made the state and the state made war.”29 The active involvement of central-
ized governments was also evident in the standardization of weaponry. Prior to the
seventeenth century, most soldiers did not wear uniforms and the weapons they
used were of great variety. Successful military leaders such as Gustavus Adolphus
and Oliver Cromwell strove to standardize weaponry and the other accoutrements
of war. Standardization, however, often acts as a brake on technological progress.
Once a gun or cannon becomes an army’s standard weapon, its replacement or
substantial modification requires a multitude of parallel changes. Not only will the
weapon’s auxiliaries, such as bullets, have to be changed, but so will a host of pro-
cedures, as well as the culture that has grown up around it. Soldiers, like the rest of
us, can become quite attached to doing things in a particular way, and they may be
especially prone to adhere to established routines when engaged in such an inher-
ently risky business as warfare.
Accordingly, the revolutionary changes brought on by the use of gunpowder
lost momentum. During the eighteenth and early nineteenth centuries, improve-
ments in weapons were few and far between. The invention of rifling—cutting a
spiral groove inside the barrel to impart a spin to the bullet—greatly improved the
accuracy and range of firearms, but rifled weapons did not immediately supplant the traditional musket.
The “Brown Bess” flintlock musket remained the standard English infantry weapon
for 160 years before it was replaced by breechloading rifles (in which the bullet is
loaded from behind the barrel, rather than through the muzzle) during the mid-
1800s. Artillery fared no better; according to A. Rupert Hall, “The guns of Queen
Victoria’s wooden ships were capable of little more accurate practice than those of
Drake’s fleet which defeated the Armada.”30
Yet after decades of stagnation, military technologies changed dramatically
during the long reign of Queen Victoria. The invention of bullets that expanded
as they were fired made rifled firearms much easier to load because bullets did not
have to be laboriously rammed into the bore. The development of breechloading
weapons had the added benefit of allowing soldiers to remain prone while reloading
instead of standing upright, where they presented a conspicuous target. Improved
powders produced higher muzzle velocities, allowing the use of smaller bullets and
lighter firearms. These new powders also had the advantage of being smokeless, so
a soldier did not give away his position when he fired. But none of these changes
can be attributed to developments exclusively within the realm of weapons produc-
tion; they were products of an emerging industrial order, and as such, they will be
considered in greater depth in Chapter 17.
Technological Change and Naval Culture
in the Era of the Battleship
While land warfare in the nineteenth century was being transformed by the use of
new weapons, a similar revolution was taking place on the sea. One key element
of this revolution was the use of steam engines to propel ocean-going vessels so
they were no longer affected by the vagaries of the winds. Moreover, steam pro-
pulsion was the perfect complement to steel construction, since the new energy
source allowed the construction of much larger and heavier ships. The ships used
by England’s Admiral Nelson at the beginning of the nineteenth century displaced
2,000 tons at most; by the 1860s, such ships were up to 9,000 tons, and by the end
of the century they displaced 20,000 tons.31
Ships of this size were able to carry a formidable array of guns, made more
accurate and rapid firing by rifling, breechloading, and improved powders and
projectiles. All of these developments reinforced a trend that had begun when can-
non first began to be mounted on ships. Before this time, the traditional pattern
of naval combat had been based on closing with the enemy vessel (ramming it if
possible), boarding it, and then subduing its crew through the naval equivalent of
battlefield combat. Ship-mounted cannon made these tactics obsolete; opposing
ships now attempted to sink or disable their foe through long-distance cannon fire.
Countries such as Spain and the city-states of Italy that clung to the old methods
of combat put themselves at a great disadvantage, and over time the naval balance
of power began to shift to England and the other nations that embraced the use of
shipboard artillery.32 By the beginning of the twentieth century, the steam-powered
battleship with its collection of heavy ordnance had become the very embodiment
of modern weaponry, the most fearsome military artifact the world had ever seen.
More than simply a weapons system, it came to symbolize a nation’s armed might,
both for traditional naval powers like England and aspiring ones like the United
States and Japan.
As with any profound technological change, the development of the battleship
presented numerous problems of adjustment. Modern navies became dependent on
the availability of coaling stations located along their areas of operation, and this
gave a strong impetus to imperial expansion, an expansion that was facilitated by
growing naval power. Navy officers basked in the reflected glory of modern war-
ships, but at the same time, the modern naval vessel forced changes in military
routines that were accepted with great reluctance. In the days of sail, a seaman’s
life revolved around a unique set of routines that had imbued navies with a distinct
culture. Steam changed many of these; many sailors were now little more than float-
ing factory workers—stoking coal, watching gauges, and keeping the machinery in
good repair.33 The result was a serious erosion of traditional maritime virtues, and
the eventual replacement of old-guard officers with more technically minded ones.
Just as the longbow and the gun destroyed the medieval knight and the culture of
chivalry, advances in naval technology put an end to a way of life that had been
common to generations of sailors.
Weapons and the Making of the Modern World
All these military innovations changed the conduct of war markedly. Although
the full implications of new military technologies were not always quickly grasped
by military planners in Europe, they were put to devastatingly good use abroad by
the European armies and navies that brought large portions of Asia and Africa
under imperialist domination. The technological superiority of Western military
forces was clearly demonstrated during the first Opium War (1839–1842), when the
cannon of British gunboats battered down Chinese fortifications and the superior
firepower of British soldiers routed a numerically superior Chinese army. A single
British ship armed with two 32-pound cannon destroyed nine war junks, five forts,
a shore battery, and two military stations—all in one day.34 China, the originator
of gunpowder and many other advanced martial technologies, had fallen woefully
behind the Western world—and paid the price in its subjugation by the West.
By the second half of the nineteenth century, the technological gap between
the Western powers and the rest of the world had grown even larger. Improved
weaponry and military auxiliaries like the steamboat, railroad, telegraph, and medi-
cines for the treatment of tropical diseases sharply divided the world into winners
and losers. Nations with industrial economies were able to expand their domains
to the point that, by 1914, European nations directly or indirectly controlled most
of the world’s landmass. The disparity in military capability was most marked in
Africa, where breechloading repeating rifles allowed small military contingents to
prevail over native troops that at best were armed with flintlock muskets, and often
with only spears and shields. Numerous instances could be cited of a few hundred
European soldiers utterly routing native armies that numbered in the thousands.35
To take the most prominent example, in 1898 at the Battle of Omdurman in
the Sudan, a British force, assisted by a gunboat on a nearby river, confronted
40,000 Dervishes. After five hours of combat, 48 British soldiers and their Egyptian
allies had been killed. For the Dervishes, the battle was more costly; more than
10,000 of them lay dead.36
The most effective weapon of European imperial advance was the machine
gun. Used sparingly during the American Civil War, the machine gun came into
its own during the colonial wars of the late nineteenth and early twentieth centu-
ries, when a single gun crew could cut to shreds any native force foolish enough to
attack them. But little did the bearers of “the white man’s burden” realize that this
product of Western ingenuity would soon be used against their own kind. Smug
in the belief that the machine gun took its lethal toll only when directed against
“lesser breeds of men,” Europeans were slow to realize that the machine gun had
transformed the conditions of warfare.37 That realization came with frightening
force during the first years of World War I, as generals on both sides continued to
hurl their troops against enemy lines fortified by machine gun nests. The result was
an appalling slaughter, as attested by 60,000 British casualties during the first day of
the Battle of the Somme.

A World War I machine gun crew wearing gas masks deployed during the catastrophic Battle
of the Somme. (© Hulton-Deutsch Collection/CORBIS)

While the machine gun was rewriting the rules of land warfare, the submarine
was doing the same thing to war on the sea. Slipping beneath the ocean's surface
and firing its torpedoes at enemy ships blind to its presence, the submarine defied all
of the conventions of naval combat, as well as the international laws that covered
the attack of commercial vessels. Instead of adhering to established prize rules that
stipulated that passengers and crew be allowed to board lifeboats before the sinking
of their ship, submarines remained beneath the surface and summarily sank their
unsuspecting prey. The submarine was the cornerstone of Germany’s naval strategy,
and during World War I their U-boats wreaked havoc on Allied shipping. But
Germany’s success was fleeting; unrestricted submarine warfare ultimately brought
the United States over to the side of Britain and France in 1917 and decisively
tipped the military balance in their favor.
At the same time that improved artillery and the machine gun were bringing
new horrors to the battlefield, the airplane was beginning to demonstrate what
modern technologies could do to civilian populations. First confined to observation
and reconnaissance, by the third year of World War I aircraft were being employed
as strategic weapons. Bombs dropped on England from German airplanes and
Zeppelins killed about 1,400 people and injured another 3,400, as well as doing a
fair amount of material damage.38
Later wars were to eclipse these figures by huge margins. Even before the first
A-bomb was dropped, American bombing attacks on Japanese cities during World
War II killed 260,000 and injured 412,000, and destroyed 40 percent of 66 cities’
built-up areas.39 A single incendiary attack on Tokyo destroyed more than a quarter-
million buildings, killed 84,000, injured more than 40,000, and left more than a
million homeless.40 On August 6, 1945, the first atomic bomb was dropped on
Hiroshima, killing more than 50,000 people, injuring as many more, and destroying
half of the city. A second A-bomb had similar results when it was detonated over
Nagasaki three days later.
Aerial bombardment also devastated much of Germany. Tens of thousands
of bombs were dropped on urban industrial centers, leaving thousands dead and
many more homeless. A single raid on Berlin that took place toward the end of
the war may have killed as many as 25,000 civilians.41 Yet it is evident that for all
the destruction it wrought, the bombing of industrial centers with large civilian
populations during World War II was not the decisive factor in winning the war
in Europe. In part, this was due to the inaccuracy of strategic bombing; only about
30 percent of the bombs dropped from American bombers landed within 1,000 feet
of their intended targets.42 Although large portions of cities were leveled and many
industrial plants were damaged or destroyed, German industry was never brought
to its knees. Many factories were put quickly back into operation after apparently
having been destroyed, and substitutes were found for components that could no
longer be produced. German industrial production increased each year until 1944,
reaching its highest level in the year when Allied bombardment was at its greatest.
Despite massive bombardment in 1944, German industry produced three times as
many military aircraft, five times as many armored vehicles, and eight times as many
artillery weapons as it had in 1941.43
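The 30 percent figure cited above can be restated in the military's standard accuracy measure, the circular error probable (CEP), the radius within which half of all bombs land. The short calculation below is an illustrative sketch rather than a figure from the survey itself; it assumes, as analysts commonly do, that radial miss distances follow a Rayleigh distribution.

    import math

    # Fraction of bombs landing within 1,000 feet of the aim point (from the text)
    p_hit, radius_ft = 0.30, 1000.0

    # Under a Rayleigh model, P(r <= R) = 1 - exp(-R^2 / (2 * sigma^2))
    sigma = radius_ft / math.sqrt(-2.0 * math.log(1.0 - p_hit))

    # Circular error probable: the radius containing half of all impacts
    cep_ft = sigma * math.sqrt(2.0 * math.log(2.0))
    print(f"Implied CEP: about {cep_ft:,.0f} feet")   # roughly 1,400 feet

On this simplified assumption, half of all bombs fell more than a quarter of a mile from their targets.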
Equally important, civilian morale never broke down in the face of continual
air raids. It may even have been the case that bombardment resulted in a stiffening
of the German and Japanese resolve to continue the war. Although there is ample
room for debate concerning its contributions to winning World War II, there is
little doubt that strategic bombing failed to live up to the extravagant claims made
by its proponents prior to the war. The official U.S. Strategic Bombing Survey that
was conducted after the war noted the accomplishments of strategic bombing, but
it also called attention to its costs and shortcomings. In Walter Millis’ words, “It
would probably be fair to summarize the net conclusion as showing that strategic
bombing, when armed only with TNT and incendiary weapons, had involved much
greater casualties and had produced much smaller military results than had been
expected.”44
Part of the explanation for this ineffectiveness lies in the nature of long-term
bombing attacks. According to B. H. Liddell Hart’s analysis, strategic bombing with
conventional weapons puts pressure on an enemy but does not produce the deci-
sive result that comes with a sudden shock. Instead of producing quick, widespread
results, strategic bombing, even if it results in considerable devastation, only increases
pressure slowly. But, as Liddell Hart noted, “Human beings have an almost infinite
power of accommodation to degradation of living conditions, so long as the process
is gradual.”45
Yet once again technology has altered the rules of war. Missiles with nuclear
warheads are shock weapons of the greatest magnitude. A nuclear war would be
like no other war of the past, and would result in death and destruction on a
scale that can scarcely be imagined. A single one-megaton bomb would dig a
thousand-foot-wide crater to a depth of 200 feet. No structures would be standing
within a radius of nearly two miles, and damage would be heavy for a considerable
distance beyond this. If the area had a normal urban population density, at least
200,000 people would be killed immediately, and half a million would be injured.
Fires would spread through the city, killing thousands more. After this initial
shock, radiation would spread through the area, killing many more in a few weeks
or months. And, of course, this would not be an isolated attack. The arms race
between the United States and the Soviet Union culminated in the two countries
collectively amassing an arsenal of over 4,000 nuclear-tipped ballistic missiles, along
with hundreds of manned bombers and cruise missiles with nuclear capabilities. If
even a small number of them had been used in anger, devastation would have been
immense.46 The basic structures of society would undoubtedly have snapped under
the strain, leaving civilization perched on the threshold of barbarism. Had this hap-
pened, it might have been better if mankind’s long history of technological advance
had never occurred.
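A back-of-the-envelope check shows that the casualty figures cited above presuppose a densely populated urban core. The sketch below uses only the two-mile radius and the 200,000 immediate deaths given in the text; the population density it implies is an inference offered for illustration, not a figure from the source.

    import math

    radius_mi = 2.0        # radius of near-total destruction (from the text)
    deaths = 200_000       # immediate deaths cited in the text

    area_sq_mi = math.pi * radius_mi ** 2    # about 12.6 square miles
    implied_density = deaths / area_sq_mi    # people per square mile
    print(f"Implied density: about {implied_density:,.0f} people per square mile")

A density on the order of 16,000 people per square mile is characteristic of the core of a large city, which is what "normal urban population density" implies here.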
Wars of the past, whatever their horrors, at least left some room for individual
valor; a nuclear war would be depersonalized, push-button destruction on a massive
scale. In this way, nuclear warfare would be a culmination of an important trend of
military technology. Increasingly sophisticated weapons have opened the psycho-
logical distance between the warrior and his victims. To some extent at least, mod-
ern warfare can be seen as an extreme example of the rational mindset discussed
in Chapter 1, the psychological distancing of oneself and one’s actions from the
people and things that suffer the consequences of these actions. In the past, the
inhabitants of a village might be slaughtered by troops wielding swords and axes;
today the same thing can be accomplished by dropping an incendiary bomb from
an altitude of 40,000 feet. The result is the same (or worse), but soldiers, sailors,
and airmen, as well as the engineers who have designed their weapons and the
political leaders who sent them into battle, are removed from the fray. As Charles
Lindbergh reflected after flying a World War II bombing sortie, "You press a button
and death flies down. . . . How can there be writhing, mangled bodies? How can
this air around you be filled with unseen projectiles? It is like listening to a radio
account of a battle on the other side of the earth. It is too far away, too separated
to hold reality. . . . In modern war one kills at a distance, and in so doing he does
not realize that he is killing."47

Hiroshima in the aftermath of the first use of an atomic bomb. (UPI/Corbis-Bettmann)
Questions for Discussion
1. Can any equivalents of medieval knights be found today? In what ways do they
resemble medieval warriors? How do the technologies they use shape their
activities and attitudes?
2. The use of gunpowder changed the nature of warfare and played a significant
role in the transformation of European society. Yet in China, its land of origin,
gunpowder did not have this effect. Why do you think this was so? Can you
think of any possible differences between medieval Europe and traditional
China that might have been responsible for their different experiences?
3. To make effective use of firearms, soldiers had to be well-drilled and subjected
to rigid discipline. How have military models of organization influenced non-
military social institutions? What are the advantages and disadvantages of
military styles of organization when they are applied elsewhere?
4. For all of its horrors, war has historically provided opportunities for people to
demonstrate some real virtues, such as resourcefulness, initiative, and courage.
Have modern military technologies made these virtues irrelevant? If so, does
the loss of these virtues make war more absurd than ever before?
5. To what extent, if any, should an engineer feel a sense of individual responsibil-
ity for designing a weapon that is used to kill large numbers of people? What
about the combatant who is ordered to use the weapon? Do the citizens whose
taxes were used to design and build the weapon also bear some responsibility?
Notes
1. Victor Davis Hanson, The Western Way of War: Infantry Battle in Ancient Greece (New
York: Knopf, 1989).
2. Stephen Morillo, Jeremy Black, and Paul Lococo, War in World History: Society, Tech-
nology, and War from Ancient Times to the Present (New York: McGraw-Hill, 2009),
p. 59.
3. James K. Finch, Engineering and Western Civilization (New York: McGraw-Hill, 1951),
p. 22.
4. Doyne Dawson, The Origins of Western Warfare: Militarism and Morality in the Ancient
World (Boulder, CO: Westview, 1996), p. 112.
5. William Reid, The Lore of Arms: A Concise History of Weaponry (New York: Facts on
File, 1984), p. 15.
6. E. W. Marsden, Greek and Roman Artillery: Historical Development (Oxford: Clarendon
Press, 1969), pp. 86–91.
7. Robert Laffont, The Ancient Art of Warfare, vol. 1 (Greenwich, CT: New York Graphic
Society, 1968), pp. 121, 128–129.
8. Geoffrey Parker (Ed.), The Cambridge Illustrated History of Warfare: The Triumph of the
West (Cambridge: Cambridge University Press, 1995), p. 84.
9. See Charles William Chadwick Oman, A History of the Art of War: The Middle Ages
from the Fourth to the Fourteenth Century (London: Methuen, 1898), pp. 73–115.
10. Hugh D. H. Soar, The Crooked Stick: A History of the Longbow (Yardley, PA: Westholme,
2004), pp. 1–17.
11. Archer Jones, The Art of War in the Western World (New York and Oxford: Oxford
University Press, 1987), pp. 156–157.
12. Gervase Phillips, “Longbow and Hackbutt: Weapons Technology and Technology
Transfer in Early Modern England,” Technology and Culture 40, 3 (July 1999): 579.
13. Richard A. Preston, Sydney F. Wise, and Herman O. Werner, A History of Warfare and
Its Interrelationships with Modern Society (New York: Frederick A. Praeger, 1956), p. 85.
14. Vernard Foley, George Palmer, and Werner Soedel, “The Crossbow,” Scientific American
252, 1 (January 1985): 104–110.
15. Laffont, The Ancient Art of Warfare, p. 444.
16. Lynn White, Jr., Medieval Technology and Social Change (New York: Oxford University
Press, 1966), p. 102.
17. Paul E. Chevedden, Les Eigenbrod, Vernard Foley, and Werner Soedel, “The Trebuchet,”
Scientific American 273, 1 (July 1995).
18. Martin van Creveld, Technology and War: From 2000 B.C. to the Present (New York: The
Free Press, 1989).
19. Alex Roland, “Secrecy, Technology, and War: Greek Fire and the Defense of Byzantium,
678–1204,” Technology and Culture 33, 4 (October 1992).
20. China Science and Technology Museum, China’s Ancient Technology (Beijing: China
Reconstructs Magazine, 1983), pp. 18–20.
21. William H. McNeill, The Pursuit of Power: Technology, Armed Force, and Society since
A.D. 1000 (Chicago: University of Chicago Press, 1982), p. 91.
22. For a listing of all of these procedures, see Kenneth Chase, Firearms: A Global History
to 1700 (Cambridge: Cambridge University Press, 2003), p. 25.
23. Michael Howard, War in European History (London: Oxford University Press, 1976),
p. 33.
24. Quoted in Walter Buehr, Firearms (New York: Thomas Y. Crowell, 1967), pp. 134–135.
25. McNeill, The Pursuit of Power, p. 141.
26. van Creveld, Technology and War, p. 107.
27. H. W. Koch, The Rise of Modern Warfare: From the Age of Mercenaries through Napoleon
(New York: Crescent, 1982), p. 18.
28. Richard Bean, “War and the Birth of the National-State,” Journal of Economic History
33, 1 (March 1973): 203–221.
29. Charles Tilly, “Reflections on the History of European State-Making,” in Charles Tilly
(Ed.), The Formation of National States in Western Europe (Princeton, NJ: Princeton
University Press, 1975), p. 42.
30. Quoted in Bernard Brodie and Fawn Brodie, From Crossbow to H-Bomb (Bloomington:
Indiana University Press, 1973), p. 61.
31. Howard, War in European History, p. 123.
32. Carlo M. Cipolla, European Culture and Overseas Expansion (Harmondsworth, England:
Penguin, 1970), pp. 70–72.
33. Elting E. Morison, From Know-How to Nowhere: The Development of American Technology
(New York: New American Library, 1977), pp. 139–152.
34. Geoffrey Parker, The Military Revolution: Military Innovation and the Rise of the West,
1500–1800 (Cambridge: Cambridge University Press, 1988), p. 154.
35. See Daniel R. Headrick, The Tools of Empire: Technology and European Imperialism in the
Nineteenth Century (New York: Oxford University Press, 1981), pp. 117–119.
36. Morillo, Black, and Lococo, War in World History, pp. 468–469.
37. John Ellis, The Social History of the Machine Gun (Baltimore: The Johns Hopkins
University Press, 1975), pp. 111–147.
38. James L. Stokesbury, A Short History of Air Power (New York: Morrow, 1986), p. 78.
39. Brodie and Brodie, From Crossbow to H-Bomb, p. 224.
40. Theodore Ropp, War in the Modern World (New York: Collier Books, 1962), p. 379.
41. Noble Frankland, Bomber Offensive: The Devastation of Europe (New York: Ballantine,
1970), p. 149.
42. Stephen L. McFarland, America’s Pursuit of Precision Bombing, 1910–1945 (Washington
and London: Smithsonian Institution Press, 1995), p. 203.
43. Ian Hogg, The Weapons That Changed the World (New York: Arbor House, 1986), p. 144.
44. Walter Millis, Arms and Men: A Study in American Military History (New York: New
American Library, 1956), p. 277.
45. B. H. Liddell Hart, The Revolution in Warfare (London: Faber and Faber, 1946), p. 25.
46. Randall Forsberg, "A Bilateral Nuclear-Weapon Freeze," Scientific American 247, 5
(November 1982): 61.
47. Quoted in Michael S. Sherry, The Rise of American Air Power: The Creation of Armageddon
(New Haven, CT: Yale University Press, 1987), pp. 209–210.
Chapter Sixteen
The Era of Smart Weapons
The last chapter noted that advances in military technologies have created a spatial
and often a psychological gulf that has separated combatants from the consequences
of their actions. This trend has intensified in the opening years of the twenty-first
century. To be sure, much of the burden of fighting still falls upon infantry soldiers
close to the action, but a great amount of killing and destruction is now done by
remote control. Moreover, although many combatants were detached from the
consequences of their actions in past wars, their own lives were at risk. In World
War II, aerial combat was a highly dangerous enterprise in which 19,876 American
airmen lost their lives in the European Theater of Operations alone, a toll greater
than the number of U.S. Marines killed in the Pacific Theater.1 Today’s weapons
have removed much of that risk, but their use has been the source of many strategic,
political, legal, and ethical issues.
Cruise Missiles
Today’s weapon of choice for many offensive missions is the cruise missile.These
are relatively small pilotless craft that can be launched from land, submarines,
surface vessels, and aircraft. Through the use of sophisticated navigation devices,
cruise missiles can home in on their target while eluding radar detection by flying
close to the ground. Some cruise missiles are capable of supersonic speeds, and
many are capable of delivering nuclear weapons. Although a single cruise missile
can cost more than a million dollars, their use has been ordered on several occa-
sions by American presidents who wanted to take action against a country or an
insurgent group but were reluctant to put American soldiers, sailors, and airmen
in harm’s way.
As with many other technologies, the concept underlying cruise missiles has
been around for a long time, but its practical realization required the development
of many complementary technologies. Back in 1917, Charles Kettering—who later
headed General Motors research laboratories—demonstrated an “Aerial Torpedo”
that came to be known as the "Kettering Bug." Built largely out of papier-mâché with
cardboard wings, the Bug was powered by a 40 h.p. engine and carried 300 pounds
of explosives. Although several dozen were manufactured, none saw action during
World War I. The first operational cruise missile appeared during World War II in the
form of Germany’s V-1. Powered by a simple pulsejet engine and stabilized by a
gyroscope, the V-1 had a top speed of about 400 mph and a range of 200 miles. From
1944 to the end of the war, about 10,000 were launched at England, of which 2,410
reached London, where they killed more than 6,000 and injured nearly 8,000.2
Unlike the rocket-powered V-2, the V-1 flew at subsonic speeds, which made it vul-
nerable to attacks by fighter pilots, who either shot it down or sent it out of control
by putting their plane’s wing tip under its wing and flipping it.
Decades later, the United States began to make extensive use of Tomahawk
cruise missiles; 288 were launched during the 1991 Gulf War, which was waged to
repel Iraq’s occupation of Kuwait. Another 725 were used against Iraq when the
United States invaded that country in 2003.3 Cruise missiles were also fired from
Navy vessels in the Arabian Sea in 1998 to attack suspected Al Qaeda sites in
Sudan and Afghanistan in retaliation for the bombing of U.S. embassies in Kenya
and Tanzania. On several occasions, cruise missile strikes were directed against
Serbian forces during the conflicts that followed the breakup of Yugoslavia.

A Tomahawk cruise missile being fired from a truck-mounted launcher. Each missile can be
armed with a conventional or nuclear warhead. (DOD/DOD/Time & Life Pictures/Getty Images)
Smart Bombs
As we saw in the previous chapter, the expectation that the strategic bomber would
be the decisive weapon of war was thwarted by the inaccuracy of bombing attacks.
What was needed was an unpowered explosive device that could be accurately
guided to its target. Germany developed such a weapon during World War II, the
FX-1400 “Fritz,” a four-winged glide bomb that targeted armored ships. With a
warhead containing 660 pounds of explosives, it was dropped from an airplane
and steered by radio. In 1943 it succeeded in sinking an Italian battleship that was
attempting to defect to Allied forces after Italy’s surrender. It went on to sink or
damage several more ships, but the vulnerability of the airplane carrying the flying
bomb limited its deployment.4
During the 1950s advances in solid-state electronics created new possibilities
for remote-controlled ordnance. The first “smart bombs,” or “Precision Guided
Munitions” (PGMs), to use the military’s preferred term, employed a television
camera to provide a bomb’s-eye view of the terrain below. The bomb could then
be guided through the use of steerable fins as it zeroed in on its target. In the
1960s the emerging technologies of lasers and integrated circuits were employed
in a new generation of PGMs. These achieved a fair measure of success during the
Vietnam War, when strategic targets such as bridges that had withstood dozens of
conventional bombing attacks were brought down by a few laser-guided bombs. The
capabilities of PGMs were further demonstrated during the Gulf War (1990–1991)
when television viewers got a real-time view of targets being destroyed by a smart
bomb. These attacks made for good television, but in reality most bombing attacks
used conventional “iron bombs.”
Whether guided by television or a laser, PGMs were ineffective when adverse
weather conditions obscured an intended target. This deficiency began to be over-
come through the use of another emerging technology, the Global Positioning
System (GPS), to guide a bomb's trajectory. Yet, as ever, defensive countermea-
sures were adopted; the intended victims of a PGM were often able to jam the radio
signals guiding the bomb, forcing it to depend on its less accurate internal naviga-
tion system to guide it to its target.
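The fallback behavior described above, in which a jammed GPS receiver hands guidance over to a less accurate inertial system, reduces to a simple decision rule. The sketch below is purely illustrative; all names and structures are invented for exposition and do not describe any actual weapons system.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class GpsFix:
        position: Tuple[float, float]   # latitude, longitude
        signal_ok: bool                 # False when the signal is jammed or lost

    def next_position(gps_fix: Optional[GpsFix],
                      ins_estimate: Tuple[float, float]) -> Tuple[float, float]:
        """Prefer a GPS fix; fall back to the inertial estimate when jammed."""
        if gps_fix is not None and gps_fix.signal_ok:
            return gps_fix.position      # accurate, externally referenced fix
        return ins_estimate              # drifts over time, but cannot be jammed

    # A jammed signal forces reliance on the less accurate inertial estimate.
    print(next_position(GpsFix((33.30, 44.40), signal_ok=False), (33.28, 44.41)))

The tradeoff is exactly the one the text describes: the inertial estimate cannot be jammed, but its error grows the longer the weapon flies without a satellite update.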
Cruise missiles and smart bombs seem to have realized the prophecies made
back in the 1920s and 1930s by the early apostles of air power. With the emergence
of solid-state electronics, onboard computers, and GPS navigation, the dream
of precision bombing has become a reality. But pinpoint accuracy is of no value
if military intelligence fails to accurately identify a target’s location. During the
invasion of Iraq led by the United States in 2003, at least 50 “decapitation strikes”
using precision-guided weaponry were launched with the intent of killing Saddam
Hussein and members of his leadership cadre. None of the targeted individuals
was hit, but dozens of civilians died or were injured as a result of these attacks.5
Although his country’s military was decisively routed after a few days of fighting,
Saddam Hussein was able to evade capture for many weeks, until he was finally
caught by ground troops acting on information derived from the interrogation of
his former bodyguards and members of his family.
A tragic example of smart weaponry and defective intelligence came in 1988
when crewmen aboard the USS Vincennes in the Persian Gulf identified what they
thought was an Iranian Air Force F-14A. Attempts at radio contact were unsuccess-
ful, and under the impression that they were about to be attacked, crewmen aboard
the Vincennes fired two SM-2MR surface-to-air missiles at the presumed attacker.
The missiles accurately homed in on their target and destroyed what turned out to
be a civilian Airbus A300 bound for Dubai. All 290 passengers and crew, includ-
ing 66 children, were killed. Although there had been some ambiguity about the
identity and intentions of the aircraft, it is evident that the crew of the Vincennes
made critical mistakes when assessing the situation, errors that were compounded
by the attitude of the ship’s captain, who had a known penchant for aggressive and
even reckless actions. The United States, while not apologizing for what had hap-
pened, eventually paid reparations amounting to $131.8 million. Two years after the
incident, the Navy awarded the retiring captain the Legion of Merit for his
service in the Persian Gulf.
High-Tech Surveillance
The tragic fate of the crew and passengers on the airliner brought down by a
“smart” weapon is one of untold numbers of breakdowns in military intelligence.
Throughout history, military operations have been hampered by an inability to
effectively identify and track an enemy. One technological response to this age-
old problem has been the development of satellites capable of providing highly
detailed views of the earth’s surface. Although much of the information about the
performance of these satellites is cloaked in secrecy, a general understanding of
their capabilities can be derived from non-classified sources. The first “reconnais-
sance satellites,” to use their official name, used film cameras to record features
on the earth’s surface. The exposed film was ejected and captured in mid-air by
specially equipped aircraft. Subsequent generations of satellites have used digital
imagery that can be directly relayed to stations on earth. Satellite-based spying by
the United States is the responsibility of the National Reconnaissance Office, an
agency so secretive that its existence was not publicly acknowledged until 1992, and
much of its budget is “black,” known to only a small number of elected representa-
tives and government officials.
At the outset, America’s satellite program was aimed at the surveillance of
the Soviet Union, although it was originally presented as a means of collecting
scientific data. Officially known as the Discoverer program, this supposedly scientific
endeavor was in reality a façade for the CORONA spy satellite program, which over
the years sent many highly detailed pictures of militarily significant sites.6 In recent
years, satellites and the ability to launch them have developed to the point where
commercial firms are able to deploy satellites that can send back images nearly as
good (and often at lower cost) as those produced by the National Reconnaissance
Office. This has led to considerable disagreement within the government regarding
how much satellite surveillance can be handed over to private firms.7
Drones
Although it is possible that the most current spy satellites are capable of retrieving
images even when clouds obscure the earth’s surface, their orbits limit the amount
of time they can be within range of an area of interest. Consequently, they cannot
be expected to provide up-to-the-minute images of particular locations. For these,
the military has made increasing use of low-flying unmanned aircraft that are collec-
tively known as drones, or Unmanned Aerial Vehicles (UAVs) in military parlance.
Some of these craft are the size of a conventional airplane, while others are no big-
ger than a radio-controlled model airplane. Classified research is also being done
on drones that are no bigger than a large insect but are equipped with sensors and
miniaturized cameras that offer the prospect of getting inside a building and obtain-
ing information about its interior.8 In addition to being used for aerial surveillance,
some drones are able to fire missiles, and others are intended for “suicide missions”
in which they crash into a target.
The absence of a pilot and a support system means that UAVs can be lighter
than a manned airplane, allowing them to fly higher and at greater distances while
staying aloft for long periods of time. The use of reconnaissance drones goes back
to the Vietnam War, where the Fire Fly drone flew 3,435 missions, but insufficient
research was done to accurately adjudge their effectiveness.9 Widespread awareness
of drones and their capabilities came with the American invasion and occupation
of Iraq. There, Predator drones logged hundreds of thousands of air miles annually.
So extensive has been the use of UAVs in recent years that in 2009 Secretary of
Defense Robert Gates stated that the U.S. Air Force was training more drone pilots
than pilots of conventional aircraft.10
As noted above, in addition to their ability to gather accurate and up-to-date
information, drones have the added attraction of serving as offensive weapons.
Hellfire missiles carried by Reaper UAVs have been effectively used as antitank
weapons. More significantly, drone strikes also have been used in Somalia, Yemen,
Afghanistan, and the tribal areas of Pakistan to kill suspected Al-Qaeda and Taliban
insurgents. These attacks have also resulted in the deaths of civilians, as admitted by
John Brennan, the Obama administration's counterterrorism advisor.11 The extent
of these deaths has been a matter of contention. According to Brennan, civilian
deaths have been "exceedingly rare," an assessment that has been challenged by the
London-based Bureau of Investigative Journalism, which counted 260 strikes by
Predator and Reaper drones since President Obama took office. The Bureau noted
that 282 to 535 civilians, 60 of them children, had been "credibly reported" as having
been killed in those attacks. American officials countered that this number was
much too high, though they did acknowledge that at least several dozen civilians
had been killed inadvertently in strikes aimed at militant suspects.12

An MQ-9 Reaper landing after a mission in Iraq. A widely used drone, the Reaper is 36 ft.
long with a wingspan of 66 ft. It can carry 3,800 lbs. of reconnaissance gear or 14 Hellfire
air-to-ground missiles. (U.S. Air Force photo/Tech. Sgt. Erik Gudmundson)
One of the striking things about the use of drones as offensive weapons is that
they are “flown” by “pilots” who are situated thousands of miles from their targets.
Drone attacks on Afghanistan and Pakistan are controlled from Nellis Air Force
Base in Nevada, where pilots guide their craft and fire their onboard missiles while
sitting in front of television monitors. At a superficial level at least, this ability to
engage in remote-control warfare has made combat seem like just another high-tech
job, but as we shall see, it does give rise to some difficult issues.
The Cost of Technological Sophistication
The ethical and legal issues engendered by the use of UAVs and other advanced
military technologies will be taken up later in this chapter, but first we will consider
a more mundane yet still significant aspect of high-tech weaponry, the financial
burden that they place on governmental budgets.
In the opening decades of the twenty-first century, the United States has at
its command the most formidable collection of weaponry in human history. But
do cruise missiles, smart bombs, imaging satellites, and UAVs guarantee security in
an unstable world? Military strength is undeniably important, but a healthy econ-
omy is also essential for national well-being. The decision to orient government
policies toward either military strength or a thriving civilian economy has been
(erroneously) attributed to the nineteenth-century German Chancellor, Otto von
Bismarck, as the choice of “guns or butter.” From the 1940s onward, the United
States has not stinted when it has come to the acquisition of guns and other weap-
ons of war. In absolute terms, the defense budget of the United States is enormous,
coming in at $646 billion for fiscal year 2012.13 This amounts to about 4.8 percent
of the country’s Gross Domestic Product of $13.35 trillion projected for 2012. This
is less in percentage terms than defense expenditures at the height of the Cold War,
but it is still a lot of money, about equal to the rest of the world’s collective military
expenditures.
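The percentage cited above follows directly from the two figures in the text, as a quick check confirms:

    defense_budget = 646e9    # FY 2012 U.S. defense budget (from the text)
    gdp = 13.35e12            # projected 2012 GDP (from the text)

    print(f"Defense share of GDP: {defense_budget / gdp:.1%}")   # about 4.8%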
There are many reasons for such a large budget, but there is no question that
technology has been a major driver. As we saw in Chapter 3, a single Air Force F-22
fighter costs U.S. taxpayers $412 million when R&D and testing costs are taken
into account. This is a far cry from the World War II era, when a P-51 Mustang cost
about $650,000 in today’s dollars. Much of the difference between the two airplanes
can be attributed to vast improvements in engine power, materials, armaments,
avionics, ability to evade radar, and all of the other technological advances that
separate the World War II era from the present day. But can such cost increases be
sustained indefinitely? This question was answered by Norman Augustine, an aero-
space executive and former Under Secretary of the Army, who plotted the increas-
ing cost of fighter planes over time. Extrapolating into the future, he noted: “In the
year 2054, the entire defense budget will purchase just one aircraft. This aircraft will
have to be shared by the Air Force and Navy 3½ days each per week except for leap
year, when it will be made available to the Marines for the extra day.”14
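Augustine's quip rests on a straightforward extrapolation of exponential cost growth. The sketch below reconstructs the flavor of that argument from the two unit costs cited in this chapter; the service-entry dates are approximations added here, and the resulting growth rate and projection are illustrative, not Augustine's own figures.

    # Reconstructing the flavor of Augustine's extrapolation from the two
    # unit costs cited in the text; dates are approximate service-entry years.
    p51_cost, p51_year = 650_000, 1944          # P-51 Mustang, in today's dollars
    f22_cost, f22_year = 412_000_000, 2005      # F-22, including R&D and testing

    # Implied compound annual growth rate
    rate = (f22_cost / p51_cost) ** (1 / (f22_year - p51_year)) - 1
    print(f"Implied cost growth: {rate:.1%} per year")          # about 11 percent

    # Extrapolate forward to 2054, as Augustine did
    cost_2054 = f22_cost * (1 + rate) ** (2054 - f22_year)
    print(f"Extrapolated 2054 fighter cost: ${cost_2054 / 1e9:.0f} billion")

Even this rough reconstruction makes the point: costs compounding at roughly 11 percent a year soon collide with any plausible defense budget.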
Other examples of high-priced military hardware are not hard to find. The
development and production costs for the 74,000 bombs that comprise the current
generation of Precision Guided Munitions used by the U.S. Air Force and Navy
come to more than $4.65 billion.15 A single M1A2 Abrams battle tank carries a
price tag of $6.2 million,16 and one nuclear submarine costs the taxpayers (or adds
to the budget deficit) $8.3 billion.17
A less obvious cost of a technologically advanced military force is the large
number of scientists and engineers who are engaged in defense R&D. In a world in
which scientific discoveries and technological innovations are crucial for economic
advance, a lot of talent is expended on projects and activities with little or no rel-
evance to the civilian economy. There have always been spinoffs from the defense
to the civilian sector—DARPA’s early sponsorship of what became the Internet is a
recent example—but most defense R&D remains confined to that sector.
It is an irony of modern technology that the military sector, which is tasked
with death and destruction, bears a number of similarities with medical care, which
is supposed to be about health and healing. In both cases there is a strong tendency
to be satisfied with only the most sophisticated technologies, which usually means
the most expensive technologies. A fighter pilot doesn’t want to engage an enemy
who is flying the world’s best combat aircraft when he is in the seat of the world’s
second-best airplane, just as a patient suffering from a life-threatening disease won’t
be happy with the second-best medical treatment. The two sectors also receive
much of their funding from third parties—insurance companies and local, state,
and federal governments in the case of medical care, and the federal government for
military expenditures. It also has to be noted that both the medical and the defense
sectors are prolific spenders on lobbying efforts when they interact with Congress
and other agencies. Defense firms also have been able to gain favor with members
of Congress by spreading their operations around the country so that the majority
of Congressional districts have at least one defense contractor within their borders.
Asymmetrical Warfare
High defense expenditures may be justified as the cost of protecting a country and
its inhabitants, but does modern weaponry guarantee security? Military engage-
ments in recent decades, most notably the Vietnam War, showed that wars are not
won by advanced technologies alone. Since the Korean War, the United States has
been engaged in a number of conflicts that can be characterized as “asymmetrical
warfare,” engagements where the United States was vastly superior in the quantity
and quality of its materiel. In terms of quantity, the United States dropped more
bomb tonnage on North Vietnam and Cambodia than it did during the entire
duration of World War II. The United States also was vastly superior to North
Vietnam in military technology. Infrared goggles gave American soldiers the ability
to see in the dark, smart bombs destroyed their targets with pinpoint accuracy, and
surveillance satellites and first-generation drones provided detailed views of the
terrain below. More recently, during the invasion of Iraq in 2003, Saddam Hussein
and his military forces were quickly routed by a “shock and awe” campaign waged
with cruise missiles, smart bombs, depleted uranium ordnance, and many other
examples of advanced military technology. But these weapons did little to quell
the subsequent insurgency that resulted in the deaths of 4,486 Americans and
318 British and other coalition soldiers.18
In Iraq and Afghanistan, the insurgents’ principal weapon has been the impro-
vised explosive device, or IED, planted in a roadway. Early versions consisted of
nothing more than explosives connected by wire to a detonator. Later examples
were more sophisticated; some used timers or wireless triggers such as modified car
door lock openers and cordless telephones so a wire could not be followed to where
the insurgents were hiding. Over time, Iraqi insurgents devised more than 90 ways
of setting off an IED. Coalition forces were eventually able to deploy robots to
disarm IEDs, but even these could be thwarted by placing roadside bombs so they were
out of the robots' reach.19

Improvised explosive devices (IEDs) may be triggered remotely with this simple cell phone–
based detonator. (Brian Palmer/Sipa Press/PalMarineIrq)
Technology and Terrorism
In some quarters, justification for the invasion of Iraq rested on the assertion that
Saddam Hussein had conspired with Al Qaeda to attack the World Trade Center in
New York City and the Pentagon on September 11, 2001. This supposed conspiracy
was unlikely from the start, but the belief persisted long after it was proven to be
a fabrication. It did, however, focus the world’s attention on a new set of secu-
rity issues. While the spread of nuclear weapons among established nation-states
remains a grave potential threat, the world is now focused on a different source of
danger—global terrorism. Terrorism is closely related to asymmetrical warfare in
that a militarily weaker side is able to challenge a stronger one through the use of
simple but effective weapons such as roadside bombs and car bombs. Terrorism is
not a new phenomenon; a depressing litany of examples can be found throughout
human history. What has changed in recent years is the increasing technological
sophistication of terrorists and their use, or potential use, of weapons of vastly
greater destructive power.
Nuclear weapons are unparalleled in their destructive power, and the acquisi-
tion of a multimegaton atomic bomb would provide terrorists with the ultimate
weapon. At the same time, however, the likelihood of this happening is relatively
low. The construction of a nuclear bomb requires a high degree of expertise and
access to specialized equipment, along with the production of sufficient quanti-
ties of fissionable materials like plutonium or uranium-235. The production of a
workable bomb is likely to be beyond the capabilities of terrorist groups lacking
the assistance of a nation that already has nuclear weapons, but fissionable mate-
rial for a crude bomb could be bought or stolen. A terrorist group does not need a
military-grade nuclear bomb to induce widespread terror and panic; the detonation
of a “dirty bomb,” a conventional explosive device that would spread radioactive
materials like cesium-137, iridium-192, or cobalt-60, could induce radiation sick-
ness on a significant scale.20 More importantly, it would engender panic over a wide
area even if actual loss of life were not great. Since the primary goal of terrorist
organizations is to spread fear and disruption, a bomb of this sort would be a highly
effective weapon.
Much easier than the development of nuclear weapons are the manufacture
and use of chemical weapons. As we have seen, poison gas was not deployed as a
combat weapon during World War II, although it was used in the 1920s and 1930s
by Spain and Italy to subdue indigenous populations while securing and expanding
their colonial possessions.21 Poison gas reemerged as an offensive weapon when
Saddam Hussein used it during the Iran–Iraq war (1980–1988) and then against the
Kurdish population of his own country in 1988. Saddam’s weapon of choice was a
combination of mustard gas and one or more nerve agents: sarin, tabun, and VX.22
In general, chemical weapons fall into four main categories: nerve agents;
blood agents, notably hydrogen cyanide and cyanogen chloride; blister agents, like
the mustard gas that was first used in World War I; and choking and incapacitat-
ing agents, such as chlorine, another gas used in that war.23 Up to now, the most
destructive use of a chemical weapon by a terrorist organization occurred in March
1995 when a Japanese religious cult known as Aum Shinrikyo (Aum Supreme
Truth) released sarin gas into the Tokyo subway system, killing 12 and injuring more
than 5,000. In the years leading up to the attack, the sect had accumulated consider-
able expertise in the manufacture of sarin and other chemical weapons, an endeavor
that was facilitated by having a number of scientists and engineers in its ranks. At
the same time, however, the mode of delivery was distinctly low-tech, entailing
nothing more than placing plastic bags containing sarin in five subway cars and
then puncturing them, allowing the volatile liquid to disperse as a lethal vapor.
The third category in the unholy trinity of terror weapons consists of biologi-
cal weapons. The list of these weapons is extensive, including bacteria that are the
sources of anthrax, tularemia, and plague, and viruses that cause smallpox, yellow
fever, dengue fever, and Ebola. All of these are capable of producing death and
grave injury, but the most effective weapon employed so far has been the bacteria
that cause salmonellosis, a much less lethal disease. This was used by a group of reli-
gious extremists in Oregon in 1984 to contaminate salad bars in local restaurants in
the hope of incapacitating a large number of people prior to a local election.
Workers decontaminate a Tokyo subway car after the 1995 sarin gas attack perpetrated by
Aum Shinrikyo. (Noboru Hashimoto/CORBIS SYGMA)
As with chemical weapons, biological weapons are easy to manufacture, and
they certainly do not require the assistance of a “rogue state.” Biological weapons
can be made in small facilities using little in the way of sophisticated equipment,
and the basic materials can be obtained from commercial sources or cultivated from
naturally occurring sources.24 Effective distribution is more problematic, however.
Even if there were effective means of spreading the pathogens, contaminating a
large area would be difficult because most of the released organisms would not sur-
vive for long (anthrax is an important exception, because its spores can survive for
an indefinite period). Most biological agents are not highly infectious and generally
require close contact or exposure to bodily fluids for transmission. But biological
weapons do not have to operate at a high level of effectiveness and efficiency; as
with terror weapons in general, their significance lies in their ability to spread fear
on a massive scale. The novelty of biological weapons and the fear engendered
by an invisible attacker, coupled with the difficulty of administering antidotes or
immunization on a large scale, make them particularly attractive terror weapons.
Cyberterrorism and Cyberattacks
While modern technologies have enhanced the capabilities of terrorists, they also
have created new vulnerabilities for these terrorists to exploit. Technologically
sophisticated societies are highly dependent on a variety of infrastructures, such as
electrical grids run by computers and other digital devices. In the defense sector, a
nation’s ability to protect itself rests heavily on computer networks and equipment
to detect an imminent enemy attack and take measures against it. The large-scale
crippling of a country’s computers would undercut the use of radar and other detec-
tion technologies, cut off military communications, and prevent the use of most
weapons. In the civilian realm, the disabling of computers and their associated
networks would bring chaos to transportation systems, communications of all sorts,
banking and finance, manufacture, retail trade, and virtually every other aspect of
modern society.
Dependency on computers and their networks can be exploited by terrorists
through the use of tactics that fall into two broad categories: information attacks
and infrastructure attacks.25 Information attacks are attempts to access, destroy, or
alter the information retained, processed, and distributed by computers and their
networks. Some of these attacks are fairly trivial—such as the defacement of a Web
page—and can be easily remedied. Others could pose a serious threat to national
security if, for example, a terrorist gained access to vital information for espionage,
or simply to destroy or alter it.
Infrastructure attacks are potentially more dangerous than information attacks
because they can damage or disable critical services. Obvious targets are the com-
puter networks essential to the distribution of electricity, gas, and water; air traffic
control; and critical government services like policing and firefighting. Although
much of this infrastructure is protected by the use of redundant networks, it would
be overly optimistic to believe that these systems are invulnerable. There have been
numerous occasions when individuals have done millions of dollars of damage by
hacking into computer networks. As with the dangerous and expensive mischief
caused by these hackers, cyberterrorists also have the ability to infect computer pro-
grams and files with disabling and self-replicating viruses, worms, and Trojan horses.
Up to now, most information attacks have been the work of hackers with no
political agenda. There have been, however, some instances of attacks on computer
networks that may have been politically inspired. The most notable of these was a
cyberattack that disabled Estonia's computer network for several days in 2007. The
source of the attack is in dispute, although there is a strong suspicion that it origi-
nated in Russia, possibly with the connivance of the government.26 If this had been
the case, it would not have been a terrorist attack per se, since by definition ter-
rorism is the work of actors not directly connected to a legitimate state. An actual
cyberattack planned and implemented by a sovereign country began in 2008 when
the United States, working in collaboration with Israel, was able to introduce a
computer worm (called Stuxnet) into the computers running centrifuges that were
being used by Iran for the enrichment of uranium. As a result, at least a thousand
centrifuges were put out of commission. The extent to which these attacks put a
serious crimp in Iran’s nuclear program is still a matter for debate, but they certainly
did not put an end to it. Critics of the attack have pointed out that the unleash-
ing of Stuxnet will lead to a new kind of arms race based on expanded abilities to
launch and repel cyberattacks.27 As we saw in the previous chapter, during World
War II there was a tacit understanding that the use of poison gas would be met with
retaliation in kind, leaving the warring countries with no military advantage and
more casualties. By launching a cyberattack, the United States and Israel gained a
temporary advantage over a potential nuclear adversary, but it may have come at
the cost of long-term security.
A modern nation-state’s ability to launch a cyberattack can be seen as a
continuation of a high-tech arms race that has been going on since the invention of
gunpowder. In contrast, the development of this capability by terrorist groups poses
a certain irony. A cyberattack requires a high level of technological sophistication
for its planning and execution. In this sense, terrorists are effective participants
in the modern world. At the same time, however, in a number of countries and
regions, the rapid development of a globalized and networked world has been
viewed as a threat to established cultures and religious beliefs.28 The response
of some disaffected groups and individuals has been to attack those nations that
have been the source of the ideas and images diffused throughout the world via
the Internet and other electronic media. The irony here lies in the use of modern
technologies to attack the modern world.
In considering the novel threats posed by cyberterrorism, along with nuclear,
chemical, and biological weapons, it is important to keep in mind that effective
terrorist attacks do not require the use of high-tech weapons. The destruction
of the Alfred P. Murrah Federal Building in Oklahoma City in 1995 was accom-
plished by a bomb made from nothing more sophisticated than a mixture of
ammonium nitrate fertilizer and fuel oil. The attacks on three railway stations in
Madrid that killed nearly 200 people and wounded 1,450 on March 11, 2004, were
done with conventional explosives. And although the horror of 9/11 entailed the
destructive use of the embodiment of modern technology, a jet airliner, the hijack-
ers were able to accomplish this by incapacitating the crew through the use of a few
small edged weapons.
Although chemical, biological, and even nuclear weapons do pose potential
threats, it can be argued that modern technology’s greatest contribution to terrorism
has been the electronic media. Just as the Internet, social media, and mobile phones
have helped to peacefully unseat repressive dictators, they also can be employed to
promote the goals of violent terrorists. Mobile phones have facilitated coordination
of terrorist operations and have been used to detonate bombs, and the Internet has
been an invaluable means of recruiting terrorists, presenting detailed information
on how to make weapons, and publicizing grievances and demands. Above all, the
global reach of television has brought terrorist acts into the consciousness of the
whole world. The hundreds of millions of people who watched broadcasts of the col-
lapse of New York’s Twin Towers after the 9/11 attacks were suddenly aware of the
intent of Osama Bin Laden and Al Qaeda to remake the world by waging holy war.
Military Technologies in a Changing World
Although the use of today’s weaponry has extended the ability to kill and destroy
to a level scarcely conceivable to earlier generations of warriors, many of the issues
presented by advanced military technologies have been around for centuries, if not
millennia. New kinds of weapons promise impressive tactical and strategic gains,
but they also have called into question traditional martial virtues as they redefined
what it means to be a warrior. As one Spartan king lamented when he witnessed the
demonstration of an early catapult, “O Hercules, the valor of man is at an end!”29
A similar attitude was expressed during the Middle Ages, when military leaders
were infuriated by the insidious way in which crossbowmen and musketeers could
pick off their soldiers from afar. Their anger was reflected in their harsh treatment
of captured soldiers using these weapons, which often extended to cutting off their
hands and having their eyes put out.
Needless to say, punitive measures and an occasional disdain for new weapons
did not stem the advance of military technology. We continue to struggle today to
find ways to limit the consequences of advances in military technology for soldiers
and civilians alike. The development and spread of smart bombs, drones, cruise
missiles, and the other components of high-tech warfare have raised many ques-
tions about the conduct of wars and individual warriors. One of the most striking
changes centers on the ability to wreak havoc from halfway around the globe. The
use of remotely controlled drones has put their pilots out of harm’s way, but it has
also engendered a psychological as well as spatial remoteness from the realities
of warfare; a drone pilot may launch an attack on a suspected terrorist base, wit-
ness the ensuing destruction, and then go to a PTA meeting a few hours later.30
The use of drones and other high-tech weapons also presents some thorny legal
issues. Are drone pilots bona fide combatants, and if so, would that justify an enemy
killing them in cold blood as they walked out the front door of their homes? Who
is at fault when a drone or smart bomb is guided to the wrong target; is it the officer
in the field who ordered the attack or the pilot in Nevada who flew an incorrect
course? And who is to blame if innocent civilians are killed because the program
used to control a weapon had a glitch of some sort? Should the programmer who
made the error be blamed for the loss of lives?31 The use of drones and other robotic
weapons may be counterproductive if it convinces an opponent that its adversar-
ies are cowards who are unwilling to put their lives on the line in combat situations,
thereby increasing the enemy’s will to fight on.32 Also, the effects of these kinds of
weapons reprise a familiar theme of winners and losers amid technological change;
younger soldiers who grew up with video games may take over the jobs and respon-
sibilities of older soldiers whose skills have become less relevant.33 Finally, and most
disturbingly, by promising lower casualty rates, remote-control warfare may lower
the inhibitions against going to war.
While high-tech weapons have presented a number of new issues and dilem-
mas, changing configurations of power have called into question their relevancy.
For more than four decades after the conclusion of World War II, an immense
amount of money and expertise was expended on defending the United States and
its allies from a nuclear attack carried out by missiles and bombers. The dissolution
of the Soviet Union in 1991 lowered global tensions and reduced the threat of a
nuclear war, but in some respects the world is a more uncertain and dangerous place
than it was during the Cold War. Instead of facing one powerful foe, the United
States and its allies have to contend with a multitude of potential adversaries.
Some of them are nation-states, while others are terrorist organizations. None of
them has anything like the military might once possessed by the Soviet Union, but
many have the capacity to produce significant damage through the use of biological,
chemical, and perhaps nuclear weapons.

[Figure: A drone pilot trains at Creech Air Force Base in Nevada. Drones used for missions in Afghanistan, Iraq, and Pakistan are also piloted at this base. (Ethan Miller/Getty Images)]
Despite ongoing efforts to settle disputes without recourse to war, the world
remains an unsettled place, torn by a multitude of grievances and disputes between
and within nations. Many of these could erupt into wars made more destructive
than ever by the use of increasingly lethal weaponry. While the possession of
modern weapons may have a deterrent effect in cases of state-to-state conflict, the
threats posed by terrorists are more problematic. Drones and reconnaissance
satellites can identify areas that harbor terrorists, and missiles launched from drones
can kill and injure suspected terrorists. At the same time, however, the dispersal of
terrorist cells can make the whole enterprise feel like playing Whack-a-Mole, the
arcade game in which a mechanical mole is knocked back into its hole, only to be
replaced by another popping up from a different hole.
In addition to the tactical difficulties of dealing with terrorists, there is the
larger problem of living on a planet that includes people whose beliefs, anger, and
grievances are the underlying source of terrorist attacks. There is no technological
fix, military or otherwise, for this situation. As one former Army officer mused,
“The great paradox of this high-tech age is that its security problems arise from the
human heart and soul, domains which remain opaque to technology (and to those
who worship it).”34
For all of the present dangers engendered by terrorism, it is still the case that
nuclear proliferation poses the greatest long-term threat to humanity. Despite, or
perhaps because of, mutual hostility, the United States and the Soviet Union were
able to forge a number of arms control agreements from the 1960s onward. At the
same time, the prospect of mutually assured destruction further restrained whatever
aggressive tendencies the two nations may have had. But while the Soviet Union is
now a receding memory, other nations have advanced their ability to manufacture
and deliver nuclear weapons. Preventing or at least limiting the further spread of
these weapons will be one of the major tasks in the twenty-first century. Some of
these efforts are explored in the next chapter, but before taking up this subject, we
should know something about the causes of technological advance in weaponry.
This will be the theme of the first part of the next chapter.
Questions for Discussion
1. “Collateral damage,” in which innocent people are killed or injured and their
homes destroyed, has been an inevitable feature of warfare. World War II was
particularly lethal, as millions of civilians died as a result of enemy actions. Do
modern weapons have the potential to reduce civilian casualties? If so, what
prevents them from doing so?
2. Drones are now used extensively for reconnaissance in combat areas, and they
can also be used by the police and private parties to gain information about
activities below them. Do they pose a serious threat to privacy? Should any-
thing be done to prevent or limit the use of drones?
3. A major goal of antiterrorist operations is to deny safe havens in countries or
parts of countries that support or tolerate terrorists. Does harboring terrorists
justify military actions that amount to waging an undeclared war on a sover-
eign power?
4. Could the United States survive an extensive cyberattack? Should the federal
government take steps to limit the damage done by such attacks? What sort
of things could be done to blunt the force and mitigate the consequences of a
cyberattack?
5. In recent wars, uniformed men and women engaged in support activities have far
outnumbered those involved in combat. Are they really
soldiers? What about drone aircraft operators who are located far from the field
of combat?
Notes
1. Martin W. Bowman, The USAAF Handbook 1939–1945 (Gloucestershire, England: Sutton,
1997), p. 232.
2. Luftwaffe Resource Center, “Fi-103/V-1 ‘Buzz Bomb,’” accessed on May 1, 2012, at
http://www.warbirdsresourcegroup.org/LRG/v1.html.
3. Sharon Weinberger, “Cruise Missiles: The Million Dollar Weapon,” The Huffington
Post (May 25, 2011), accessed on May 1, 2012, at http://www.huffingtonpost.com
/2011/03/25/cruise-missiles-missile_n_840365.html.
4. “Ruhrstahl X-1,” Wehrmacht History 1939 to 1945, accessed on May 2, 2012, at http://
www.wehrmacht-history.com/luftwaffe/missiles/ruhrstahl-x-1-anti-ship-missile.htm.
5. Fred Kaplan, “Smart Bombs, Dumb Targets,” Slate (December 16, 2003), accessed on
May 3, 2012, at http://www.slate.com/articles/news_and_politics/war_stories/2003/12
/smart_bombs_dumb_targets.2.html.
6. Robert A. Gurriero, “Space-Based Reconnaissance,” accessed on May 2, 2012, at http://
www.armyspace.army.mil/spacejournal/Article.asp?AID=13.
7. James Risen, “A Military and Intelligence Clash over Spy Satellites,” New York Times
(April 19, 2012), accessed on May 2, 2012, at http://www.nytimes.com/2012/04/20/us
/politics/spy-satellite-clash-for-military-and-intelligence-officials.html?pagewanted=all.
8. P. W. Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-First Cen-
tury (New York: Penguin Press, 2009), pp. 117–118.
9. Ibid., pp. 54–55.
10. Robert M. Gates, “Remarks by Secretary Gates at the United States Air Force Academy,”
U.S. Department of Defense, News Transcript, accessed on May 4, 2012, at http://www
.defense.gov/transcripts/transcript.aspx?transcriptid=4779.
11. Brian Bennett and David S. Cloud, “Obama’s Counter-Terror Advisor Defends Drone
Strikes,” Los Angeles Times (April 30, 2012), accessed on May 4, 2012, at http://articles
.latimes.com/2012/apr/30/world/la-fg-brennan-drones-20120501.
12. “Predator Drones and Unmanned Aerial Vehicles (UAVs),” New York Times (March 20,
2012), accessed on May 4, 2012, at http://topics.nytimes.com/top/reference/timestop-
ics/subjects/u/unmanned_aerial_vehicles/index.
13. Jim Garamone, “American Forces Press Service” (January 26, 2012), accessed on
May 7, 2012, at http://www.defense.gov/news/newsarticle.aspx?id=66940.
14. “Defense Spending in a Time of Austerity,” The Economist (August 26, 2010), accessed
on May 4, 2012, at http://www.economist.com/node/16886851.
15. Military Analysis Network, “Joint Direct Attack Munition (JDAM)” (May 2, 2012), accessed on
May 2, 2012, at http://www.fas.org/man/dod-101/sys/smart/jdam.htm.
16. “How Much Does an M1A2 Abrams Battle Tank Cost?” Quora (July 5, 2011), accessed
on May 6, 2012, at http://www.quora.com/How-much-does-an-M1A2-Abrams-battle
-tank-cost.
17. Tony Capaccio, “Nuclear Submarine May Cost $1 Billion More Than Navy’s Estimate,
CBO Says,” Bloomberg (May 25, 2010), accessed on May 6, 2012, at http://www
.bloomberg.com/news/2010-05-25/nuclear-submarine-may-cost-1-billion-more-than
-navy-s-estimate-cbo-says.html.
18. Iraq Coalition Casualty Count, “Coalition Military Fatalities by Year,” accessed on
May 4, 2012, at http://icasualties.org/iraq/index.aspx.
19. Singer, Wired for War, pp. 218–219
20. Gilmore Commission First Annual Report, “Reasons and Rationales Behind Potential
CBRN Terrorism,” in Yonah Alexander and Milton Hoenig (Eds.), Super Terrorism:
Biological, Chemical, and Nuclear (Ardsley, NY: Transnational Publishers, 2001), p. 16.
21. Daniel R. Headrick, Power over Peoples: Technology, Environments, and Western Impe-
rialism, 1450 to the Present (Princeton and Oxford: Princeton University Press, 2010),
pp. 322–327.
22. Christine M. Gosden, “The 1988 Chemical Weapons Attack on Halabja, Iraq,” in
Alexander and Hoenig, Super Terrorism, p. 8.
23. Jessica Stern, The Ultimate Terrorists (Cambridge, MA and London: Harvard University
Press, 1999), p. 24.
24. Joseph F. Pilat, “The Bioterrorism Threat: Technological and Political Considerations,”
in Alexander and Hoenig, Super Terrorism, p. 64.
25. Robert W. Taylor, Tory J. Caeti, D. Kall Loper, Eric J. Fritsch, and John Liederbach,
Digital Crime and Digital Terrorism (Upper Saddle River, NJ: Pearson Prentice-Hall,
2006), pp. 23–28.
26. Joshua Davis, “Hackers Take Down the Most Wired Country in Europe,” Wired 15,
9 (September 2007).
27. Misha Glenny, “A Weapon We Can’t Control,” New York Times (June 24, 2012),
accessed on July 13, 2012, at http://www.nytimes.com/2012/06/25/opinion/stuxnet
-will-come-back-to-haunt-us.html?_r=1&ref=stuxnet.
28. Manuel Castells, The Rise of the Network Society (Malden, MA: Blackwell, 2000).
29. B. H. Liddell Hart, The Revolution in Warfare (London: Faber and Faber, 1946), p. 30.
30. Singer, Wired for War, p. 347.
31. Ibid., pp. 382–387.
32. Ibid., p. 312.
33. Ibid., p. 364.
34. Quoted in ibid., p. 296.
chapter seventeen

How New Weapons Emerge—And How They May Be Contained
Chapter 15 outlined some of the consequences of technological change for the con-
duct of wars, as well as the effects of advances in military technology on the world as
a whole. But what is the source of technological change in the military realm? Why
has the development of weapons accelerated at such a frightening rate within our
own lifetimes? Why have some weapons been adopted while others have languished?
How have social and cultural conditions stimulated the development and use of
some weapons and not others? In the first part of this chapter we will try to address
some of these questions, and at the same time provide some background for the last
sections, which deal with past and present efforts to limit the use of new weapons.
Action and Reaction
The most obvious reason for the development of new weapons is that combat-
ants or potential combatants continually strive to gain the upper hand over their
enemies through superior weaponry. When one succeeds in doing so, its enemies
are strongly motivated to develop new weapons that can neutralize this new threat.
New offensive weapons are countered by new defensive weapons, which in turn
stimulate the development of better offensive weapons, and so on. The process is
one of constant action and reaction, as rival nations thrust and parry, each seeking
to neutralize their opponent’s capabilities, while at the same time augmenting their
own. The development of military technologies thus becomes an endless cycle in
which each new weapon stimulates the invention of another, and the capacity to
kill and destroy proceeds without limit.
History supplies us with many examples of this process. We have seen how
the offensive power of the mounted knight was checked by the pike and longbow.
Cannon threatened the security of medieval castles, until their destructive force
was temporarily diminished by improved fortifications. The machine gun produced
a stalemate during much of World War I, but the armored tank made the machine
gun a much less potent weapon and tipped the balance back in favor of the offense.
Since then, however, a number of military actions have powerfully demonstrated
the vulnerability of tanks to precision-guided weapons.1
Useful as this action–reaction schema is, it does not tell the whole story. In
the first place, it isn’t always a simple matter to categorize military tactics as either
offensive or defensive. When a war is being fought, the contestants do not neatly
think in terms of offense and defense. Wars are won by defeating an enemy, which
means offensive action, but at the same time defensive measures are necessary to
protect the capacity to wage an offensive campaign. And even if a country seeks
only to protect itself from an aggressor, it may still find it necessary to go on the
attack, for as the old maxim has it, “The best defense is a good offense.”
In similar fashion, many weapons can be used for either offensive or defensive
purposes. For example, the possession of a more accurate, faster-firing gun may allow
an army either to rapidly advance against the enemy or to more easily defend their
positions in the face of enemy attack. Also, a strong defensive capability enhances
the capacity to engage in successful offensive actions. Nineteenth-century imperial-
ist powers were able to conquer less technologically advanced lands—an obviously
offensive purpose—because their superior firepower allowed them to easily hold off
the attacks of the natives.
The close relationship between offensive and defensive capabilities is a par-
ticularly important matter today as billions of dollars are being spent to develop an
antiballistic missile defense system. Should a workable system be implemented—
which is unlikely, given the immense technical obstacles to be overcome—this
defense would allow the United States to launch its own missiles with impunity, for
an enemy would be incapable of retaliation. A defensive system thus becomes the
basis of an overwhelming offensive superiority.
Social Structure and the Development of Military Technologies
Be they offensive or defensive, weapons are not developed and put into service
simply because of their intrinsic superiority. As has been noted throughout this
book, the creation and use of technologies are social constructions to a significant
degree, shaped by larger processes that involve existing social patterns, cultural
orientations, and the motivations of individual people. The development of mili-
tary technologies is no different. A weapon is part of a larger system, and its use
reflects the basic features of that system along with its strengths and deficiencies.
This is well illustrated by the adoption of firearms and the eclipse of the longbow
as the basic infantry weapon.2 When viewed solely in terms of effectiveness, the
longbow was by no means inferior to early muskets. Fired by a competent bowman,
an arrow could penetrate the armor of that era just as easily as a bullet propelled by
a firearm. Its range was greater, and it could be shot far more rapidly. In addition,
it was less expensive, it was not subject to the continual misfires that plagued early
firearms, and its effectiveness was not diminished by wet weather, as was the case
with early matchlocks. Why then was the bow replaced by firearms despite all their
imperfections?
In this description of the virtues of the bow, it was necessary to preface the
list with “fired by a competent bowman.” The bowman was, of course, an essen-
tial element in the technological system of medieval archery, and the level of his
skills determined the bow’s effectiveness. This skill could not be taken for granted;
proficiency with the bow required some natural ability and a great deal of practice.
In medieval England there was a strong expectation that men and boys would
devote a considerable amount of time to archery practice; indeed, it amounted to
the national sport of England.
Even so, not everyone was willing to submit to the rigors of archery practice.
Many preferred to spend their spare time engaged in card-playing, bowling, shoot-
ing dice, and the medieval version of football. On several occasions royal edicts
banned these activities, in the hope, as one Elizabethan act put it, that “archery
may be revived and practiced and that kind of ancient weapon whereby our nation
in times past has gotten so great honor may be kept in use.”3 As might be expected,
these laws were almost impossible to enforce, and the quality of English archery
underwent a steady decline.
In other nations matters took a different course. During the late fourteenth century,
the king of France also promoted archery by banning all other diversions. The qual-
ity of French archery surpassed that of England, to such a degree that the ruling elite
began to fear that their bowmen could pose a challenge to their rule. Consequently,
mass archery was superseded by requirements that a limited number of bowmen be
cultivated in each town and district, and the masses went back to their traditional
pastimes.4 In general, rulers were not inclined to support the military prowess of
their citizens, which could just as easily be used in the service of rebellion. Rather,
autocratic states were predisposed to build up permanent professional armies com-
posed of soldiers lacking high levels of individual skill.
Firearms fitted perfectly into this system, for they allowed the use of poorly
paid mercenary soldiers drawn from the dregs of society. Unlike archery, the
use of firearms did not require high levels of skill; as Bert Hall has noted, “It was
far easier to learn to use guns . . . than it had been to use most of the weapons that
had preceded them, especially longbows, and this reduction in the fundamental skill
component meant that masses of untrained men could be made into competent
soldiers in six months or less.”5 To be sure, the use of early firearms was by no means
a simple matter, requiring numerous operations in precise sequence. But none of
these, not even aiming and firing the weapon, required a great deal of skill. Nor was
there much opportunity to instill higher levels of proficiency even if it were desired;
gunpowder and bullets were expensive, and as a result practice time was sharply
limited, amounting to the firing of only a few dozen rounds per year. In any event,
individual skill in handling weapons was far less important than maintaining a con-
centrated rate of fire. Battles were won by the discharge of withering volleys, not by
individual sharpshooting. What counted, therefore, was iron discipline. The main
requirement for effective use of arquebuses and muskets was that soldiers have the
proper procedures drummed into them through rigorous drill, and that they adhere
to these procedures in the heat of battle. Ranks of soldiers were expected to advance
as a single line toward the enemy and to maintain formation despite the prospect
that as many as a third of them would fall victim to enemy gunfire.
Mercenary soldiers of this sort were effective in battle because they had been
transformed into virtual robots, who, as Frederick the Great of Prussia wanted, feared
their own officers more than they did the enemy.6 Firearms were thus particularly
well suited to armies raised by centralized, bureaucratically organized states where
discipline was much more highly valued than freedom and individual ability. Troops
of archers made up of independent yeomen were appropriate to the political and
social setting of late medieval England, while gun-wielding mercenary armies were
well suited to the social structures found in most of Europe from the sixteenth
century onward.
It should also be noted that the advantage of firearms was not confined to
their appropriateness to a particular mode of political organization. Technologies
are sometimes embraced because of the psychological needs they meet. Although
early firearms had a number of deficiencies when compared with the bow, they did
have the advantage of producing a great deal of noise. Despite being inaccurate and
cumbersome to use, early firearms certainly conveyed the impression that they were
dangerous and terrifying weapons. And one could also speculate on the sexual con-
notations that have long been attached to guns; one needn’t be a close follower of
Sigmund Freud to realize that handguns and cannon can easily serve as symbols of
male virility and potency.7
As the adoption of firearms shows, the motives for using one military technol-
ogy instead of another do not necessarily stem from the objective capabilities of
a weapon. The fit of a military technology with established interests and ways of
doing things may be of paramount importance for its adoption. At the same time,
stupidity and short-sightedness can delay the adoption of a weapon and retard its
effective use when this finally occurs. Although political and military leaders might
be expected to employ new military technologies that confer specific advantages,
some weapons have languished due to misperceptions of their potential.
A failure of this sort was evident in the slow adoption of the machine gun
by late nineteenth-century armies. This was a particularly ironic situation in the
United States, for the first practical machine guns, the Gatling gun and the Maxim
gun, were American inventions. But France and England were equally slow in
recognizing the machine gun’s potential. The problem with these weapons was
that they were heavy, cumbersome devices that had to be supported by large gun
carriages. Since they looked like conventional artillery pieces, they were treated as
such. The leaders of the armies of late nineteenth-century England, France, and the
United States did not grasp the unique capabilities of the machine gun, and they
made no effort to develop new tactics that could take advantage of the machine
gun’s rapid rate of fire.8
In contrast, the German General Staff had a strong interest in the machine gun
because a central element of their military plans was the massive use of reservists
whose marksmanship was necessarily poor. In this way