stat questions

Chapter 2, Section 2.5

An SRSWOR of size 300 is drawn from a population to estimate the population mean of a characteristic of interest. A 95% confidence interval for the population mean, based on the sample mean 297897, is (260706, 335087).


(a) What is the chance that the confidence interval includes the unknown population mean?

(b) Find the numerical value of SE(sample mean).

(c) Find the 95% confidence interval for (population mean - sample mean). Give the reasons for your answer by showing the steps of your calculation.

(d) Using your findings in (b), determine how far the upper and lower confidence limits in (c) are from zero, in units of SE(sample mean).

(e) Present the numerical value of the MOE (margin of error).

(f) Determine the sample size for which (MOE / sample standard deviation) equals 1.
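The numerical parts (b) through (f) all follow from the identity MOE = z * SE(sample mean). A minimal sketch in Python, assuming a normal-approximation interval with z = 1.96 and ignoring the finite population correction (neither assumption is stated in the problem):

```python
# Worked sketch for the SRSWOR confidence-interval exercise.
# Assumptions: normal-approximation CI of the form mean +/- z * SE,
# z = 1.96, no finite population correction in part (f).

z = 1.96                      # critical value for a 95% CI
lower, upper = 260706.0, 335087.0
ybar = 297897.0               # sample mean

# (e) MOE is half the width of the interval
moe = (upper - lower) / 2     # 37190.5

# (b) SE(sample mean) is the MOE divided by z
se = moe / z                  # about 18974.7

# (c) CI for (population mean - sample mean): shift the CI by -ybar,
# which gives an interval of roughly (-MOE, +MOE)
ci_diff = (lower - ybar, upper - ybar)

# (d) distance of the limits in (c) from zero, in units of SE
dist_in_se = moe / se         # equals z = 1.96

# (f) sample size n with MOE / s = 1: since MOE = z * s / sqrt(n),
# MOE / s = z / sqrt(n) = 1  =>  n = z**2, about 3.84, so round up to 4
n_required = z ** 2

print(se, moe, ci_diff, dist_in_se, n_required)
```

The reported interval (260706, 335087) is centered at 297896.5, which matches the stated sample mean up to rounding, so the half-width interpretation of the MOE is consistent with the problem.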

Sampling
Design and Analysis
Third Edition
CHAPMAN & HALL/CRC
Texts in Statistical Science Series
Joseph K. Blitzstein, Harvard University, USA
Julian J. Faraway, University of Bath, UK
Martin Tanner, Northwestern University, USA
Jim Zidek, University of British Columbia, Canada
Recently Published Titles
Beyond Multiple Linear Regression
Applied Generalized Linear Models and Multilevel Models in R
Paul Roback, Julie Legler
Bayesian Thinking in Biostatistics
Gary L. Rosner, Purushottam W. Laud, and Wesley O. Johnson
Linear Models with Python
Julian J. Faraway
Modern Data Science with R, Second Edition
Benjamin S. Baumer, Daniel T. Kaplan, and Nicholas J. Horton
Probability and Statistical Inference
From Basic Principles to Advanced Models
Bayesian Networks
With Examples in R, Second Edition
Marco Scutari and Jean-Baptiste Denis
Time Series
Modeling, Computation, and Inference, Second Edition
Raquel Prado, Marco A. R. Ferreira and Mike West
A First Course in Linear Model Theory, Second Edition
Nalini Ravishanker, Zhiyi Chi, Dipak K. Dey
Foundations of Statistics for Data Scientists
With R and Python
Alan Agresti and Maria Kateri
Fundamentals of Causal Inference
With R
Babette A. Brumback
Sampling: Design and Analysis, Third Edition
Sharon L. Lohr
Chapman–Hall/CRC-Texts-in-Statistical-Science/book-series/CHTEXSTASCI
Sampling
Design and Analysis
Third Edition
Sharon L. Lohr
Data analyses and output in this book were generated using SAS/STAT® software, Version 14.3 of the SAS System for Windows.
Copyright © 2019 SAS Institute Inc. SAS® and all other SAS Institute Inc. product or service names are registered trademarks or
trademarks of SAS Institute Inc., Cary, NC, USA.
Third edition published 2022
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
and by CRC Press
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
CRC Press is an imprint of Taylor & Francis Group, LLC
Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright
holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has
not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future
reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form
by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright
Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for identification and
explanation without intent to infringe.
Names: Lohr, Sharon L., author.
Title: Sampling : design and analysis / Sharon L. Lohr.
Description: Third edition. | Boca Raton : CRC Press, 2022. | Series: Chapman & Hall CRC texts in statistical science | Includes
index. | Summary: “The level is appropriate for an upper-level undergraduate or graduate-level statistics major. Sampling: Design and
Analysis (SDA) will also benefit a non-statistics major with a desire to understand the concepts of sampling from a finite population.
A student with patience to delve into the rigor of survey statistics will gain even more from the content that SDA offers. The updates to
SDA have potential to enrich traditional survey sampling classes at both the undergraduate and graduate levels. The new discussions
of low response rates, non-probability surveys, and internet as a data collection mode hold particular value, as these statistical issues
have become increasingly important in survey practice in recent years… I would eagerly adopt the new edition of SDA as the required
textbook.” (Emily Berg, Iowa State University) What is the unemployment rate? What is the total area of land planted with soybeans?
How many persons have antibodies to the virus causing COVID-19? Sampling: Design and Analysis, Third Edition shows you how to
design and analyze surveys to answer these and other questions. This authoritative text, used as a standard reference by numerous
survey organizations, teaches the principles of sampling with examples from social sciences, public opinion research, public health,
business, agriculture, and ecology. Readers should be familiar with concepts from an introductory statistics class including probability and linear regression; optional sections contain statistical theory for readers familiar with mathematical statistics. The third
edition, thoroughly revised to incorporate recent research and applications, includes a new chapter on nonprobability samples: when
to use them and how to evaluate their quality. More than 200 new examples and exercises have been added to the already extensive
sets in the second edition. SDA’s companion website contains data sets, computer code, and links to two free downloadable supplementary books (also available in paperback) that provide step-by-step guides (with code, annotated output, and helpful tips) for working through the SDA examples. Instructors can use either R or SAS® software. SAS® Software Companion for Sampling: Design and
Analysis, Third Edition by Sharon L. Lohr (2022, CRC Press) R Companion for Sampling: Design and Analysis, Third Edition by Yan
Lu and Sharon L. Lohr (2022, CRC Press)”– Provided by publisher.
Identifiers: LCCN 2021025531 (print) | LCCN 2021025532 (ebook) | ISBN
9780367279509 (hardback) | ISBN 9781032130590 (paperback) | ISBN
9780429298899 (ebook)
Subjects: LCSH: Sampling (Statistics)
Classification: LCC HA31.2 .L64 2022 (print) | LCC HA31.2 (ebook) | DDC
001.4/33–dc23
LC record available at https://lccn.loc.gov/2021025531
LC ebook record available at https://lccn.loc.gov/2021025532
ISBN: 978-0-367-27950-9 (hbk)
ISBN: 978-1-032-13059-0 (pbk)
ISBN: 978-0-429-29889-9 (ebk)
DOI: 10.1201/9780429298899
Typeset in LM Roman
by KnowledgeWorks Global Ltd.
Access the Support Material: http://routledge.com/9780367279509
To Doug
Contents
Preface xiii
Symbols and Acronyms xxi
1 Introduction 1
1.1 Guidance from Samples 1
1.2 Populations and Representative Samples 3
1.3 Selection Bias 6
1.3.1 Convenience Samples 6
1.3.2 Purposive or Judgment Samples 6
1.3.3 Self-Selected Samples 6
1.3.4 Undercoverage 8
1.3.5 Overcoverage 8
1.3.6 Nonresponse 9
1.3.7 What Good Are Samples with Selection Bias? 9
1.4 Measurement Error 10
1.5 Questionnaire Design 13
1.6 Sampling and Nonsampling Errors 17
1.7 Why Use Sampling? 18
1.7.1 Advantages of Taking a Census 19
1.7.2 Advantages of Taking a Sample Instead of a Census 19
1.8 Chapter Summary 20
1.9 Exercises 22
2 Simple Probability Samples 31
2.1 Types of Probability Samples 32
2.2 Framework for Probability Sampling 34
2.3 Simple Random Sampling 39
2.4 Sampling Weights 44
2.5 Confidence Intervals 46
2.6 Using Statistical Software to Analyze Survey Data 50
2.7 Determining the Sample Size 50
2.8 Systematic Sampling 55
2.9 Randomization Theory for Simple Random Sampling* 56
2.10 Model-Based Theory for Simple Random Sampling* 58
2.11 When Should a Simple Random Sample Be Used? 62
2.12 Chapter Summary 63
2.13 Exercises 66
3 Stratified Sampling 79
3.1 What Is Stratified Sampling? 79
3.2 Theory of Stratified Sampling 83
3.3 Sampling Weights in Stratified Random Sampling 87
3.4 Allocating Observations to Strata 89
3.4.1 Proportional Allocation 89
3.4.2 Optimal Allocation 91
3.4.3 Allocation for Specified Precision within Strata 93
3.4.4 Which Allocation to Use? 94
3.4.5 Determining the Total Sample Size 96
3.5 Defining Strata 96
3.6 Model-Based Theory for Stratified Sampling* 99
3.7 Chapter Summary 100
3.8 Exercises 101
4 Ratio and Regression Estimation 121
4.1 Ratio Estimation in Simple Random Sampling . . . . . . . . . . . . . . . . 121
4.1.1 Why Use Ratio Estimation? . . . . . . . . . . . . . . . . . . . . . . . 122
4.1.2 Bias and Mean Squared Error of Ratio Estimators . . . . . . . . . . 125
4.1.3 Ratio Estimation with Proportions . . . . . . . . . . . . . . . . . . . 132
4.1.4 Ratio Estimation Using Weight Adjustments . . . . . . . . . . . . . 134
4.1.5 Advantages of Ratio Estimation . . . . . . . . . . . . . . . . . . . . 135
4.2 Regression Estimation in Simple Random Sampling . . . . . . . . . . . . . 135
4.3 Estimation in Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.4 Poststratification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.5 Ratio Estimation with Stratified Sampling . . . . . . . . . . . . . . . . . . 145
4.6 Model-Based Theory for Ratio and Regression Estimation* . . . . . . . . . 147
4.6.1 A Model for Ratio Estimation . . . . . . . . . . . . . . . . . . . . . . 148
4.6.2 A Model for Regression Estimation . . . . . . . . . . . . . . . . . . . 151
4.6.3 Differences between Model-Based and Design-Based Estimators . . . 152
4.7 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
5 Cluster Sampling with Equal Probabilities 167
5.1 Notation for Cluster Sampling . . . . . . . . . . . . . . . . . . . . . . . . . 171
5.2 One-Stage Cluster Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5.2.1 Clusters of Equal Sizes: Estimation . . . . . . . . . . . . . . . . . . . 172
5.2.2 Clusters of Equal Sizes: Theory . . . . . . . . . . . . . . . . . . . . . 174
5.2.3 Clusters of Unequal Sizes . . . . . . . . . . . . . . . . . . . . . . . . 179
5.3 Two-Stage Cluster Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.4 Designing a Cluster Sample . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.4.1 Choosing the psu Size . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.4.2 Choosing Subsampling Sizes . . . . . . . . . . . . . . . . . . . . . . . 194
5.4.3 Choosing the Sample Size (Number of psus) . . . . . . . . . . . . . . 196
5.5 Systematic Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
5.6 Model-Based Theory for Cluster Sampling* . . . . . . . . . . . . . . . . . . 200
5.6.1 Estimation Using Models . . . . . . . . . . . . . . . . . . . . . . . . 202
5.6.2 Design Using Models . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.7 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
6 Sampling with Unequal Probabilities 219
6.1 Sampling One Primary Sampling Unit . . . . . . . . . . . . . . . . . . . . . 221
6.2 One-Stage Sampling with Replacement . . . . . . . . . . . . . . . . . . . . 224
6.2.1 Selecting Primary Sampling Units . . . . . . . . . . . . . . . . . . . 224
6.2.2 Theory of Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . 226
6.2.3 Designing the Selection Probabilities . . . . . . . . . . . . . . . . . . 229
6.2.4 Weights in Unequal-Probability Sampling with Replacement . . . . . 230
6.3 Two-Stage Sampling with Replacement . . . . . . . . . . . . . . . . . . . . 230
6.4 Unequal-Probability Sampling without Replacement . . . . . . . . . . . . . 233
6.4.1 The Horvitz–Thompson Estimator for One-Stage Sampling . . . . . 235
6.4.2 Selecting the psus . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
6.4.3 The Horvitz–Thompson Estimator for Two-Stage Sampling . . . . . 239
6.4.4 Weights in Unequal-Probability Samples . . . . . . . . . . . . . . . . 240
6.5 Examples of Unequal-Probability Samples . . . . . . . . . . . . . . . . . . 243
6.6 Randomization Theory Results and Proofs* . . . . . . . . . . . . . . . . . 247
6.7 Model-Based Inference with Unequal-Probability Samples* . . . . . . . . . 254
6.8 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
6.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
7 Complex Surveys 273
7.1 Assembling Design Components . . . . . . . . . . . . . . . . . . . . . . . . 273
7.1.1 Building Blocks for Surveys . . . . . . . . . . . . . . . . . . . . . . . 273
7.1.2 Ratio Estimation in Complex Surveys . . . . . . . . . . . . . . . . . 275
7.1.3 Simplicity in Survey Design . . . . . . . . . . . . . . . . . . . . . . . 276
7.2 Sampling Weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
7.2.1 Constructing Sampling Weights . . . . . . . . . . . . . . . . . . . . . 276
7.2.2 Self-Weighting and Non-Self-Weighting Samples . . . . . . . . . . . . 279
7.3 Estimating Distribution Functions and Quantiles . . . . . . . . . . . . . . . 280
7.4 Design Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
7.5 The National Health and Nutrition Examination Survey . . . . . . . . . . 288
7.6 Graphing Data from a Complex Survey . . . . . . . . . . . . . . . . . . . . 291
7.6.1 Univariate Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
7.6.2 Bivariate Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
7.7 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
7.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
8 Nonresponse 311
8.1 Effects of Ignoring Nonresponse . . . . . . . . . . . . . . . . . . . . . . . . 312
8.2 Designing Surveys to Reduce Nonresponse . . . . . . . . . . . . . . . . . . 314
8.3 Two-Phase Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
8.4 Response Propensities and Mechanisms for Nonresponse . . . . . . . . . . 320
8.4.1 Auxiliary Information for Treating Nonresponse . . . . . . . . . . . . 320
8.4.2 Methods to Adjust for Nonresponse . . . . . . . . . . . . . . . . . . 320
8.4.3 Response Propensities . . . . . . . . . . . . . . . . . . . . . . . . . . 321
8.4.4 Types of Missing Data . . . . . . . . . . . . . . . . . . . . . . . . . . 321
8.5 Adjusting Weights for Nonresponse . . . . . . . . . . . . . . . . . . . . . . 323
8.5.1 Weighting Class Adjustments . . . . . . . . . . . . . . . . . . . . . . 324
8.5.2 Regression Models for Response Propensities . . . . . . . . . . . . . 328
8.6 Poststratification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
8.6.1 Poststratification Using Weights . . . . . . . . . . . . . . . . . . . . 330
8.6.2 Raking Adjustments . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
8.6.3 Steps for Constructing Final Survey Weights 333
8.7 Imputation 334
8.7.1 Deductive Imputation 335
8.7.2 Cell Mean Imputation 335
8.7.3 Hot-Deck Imputation 336
8.7.4 Regression Imputation and Chained Equations 337
8.7.5 Imputation from Another Data Source 338
8.7.6 Multiple Imputation 338
8.7.7 Advantages and Disadvantages of Imputation 339
8.8 Response Rates and Nonresponse Bias Assessments 340
8.8.1 Calculating and Reporting Response Rates 340
8.8.2 What Is an Acceptable Response Rate? 342
8.8.3 Nonresponse Bias Assessments 343
8.9 Chapter Summary 346
8.10 Exercises 348
9 Variance Estimation in Complex Surveys 359
9.1 Linearization (Taylor Series) Methods . . . . . . . . . . . . . . . . . . . . . 359
9.2 Random Group Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
9.2.1 Replicating the Survey Design . . . . . . . . . . . . . . . . . . . . . 363
9.2.2 Dividing the Sample into Random Groups . . . . . . . . . . . . . . . 365
9.3 Resampling and Replication Methods . . . . . . . . . . . . . . . . . . . . . 367
9.3.1 Balanced Repeated Replication (BRR) . . . . . . . . . . . . . . . . . 367
9.3.2 Jackknife . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
9.3.3 Bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
9.3.4 Creating and Using Replicate Weights . . . . . . . . . . . . . . . . . 377
9.4 Generalized Variance Functions . . . . . . . . . . . . . . . . . . . . . . . . 379
9.5 Confidence Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
9.5.1 Confidence Intervals for Smooth Functions of Population Totals . . . 381
9.5.2 Confidence Intervals for Population Quantiles . . . . . . . . . . . . . 382
9.6 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
9.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
10 Categorical Data Analysis in Complex Surveys 395
10.1 Chi-Square Tests with Multinomial Sampling . . . . . . . . . . . . . . . . . 395
10.1.1 Testing Independence of Factors . . . . . . . . . . . . . . . . . . . . 397
10.1.2 Testing Homogeneity of Proportions . . . . . . . . . . . . . . . . . . 398
10.1.3 Testing Goodness of Fit . . . . . . . . . . . . . . . . . . . . . . . . . 398
10.2 Effects of Survey Design on Chi-Square Tests . . . . . . . . . . . . . . . . . 399
10.2.1 Contingency Tables for Data from Complex Surveys . . . . . . . . . 400
10.2.2 Effects on Hypothesis Tests and Confidence Intervals . . . . . . . . . 401
10.3 Corrections to Chi-Square Tests . . . . . . . . . . . . . . . . . . . . . . . . 403
10.3.1 Wald Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
10.3.2 Rao–Scott Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
10.3.3 Model-Based Methods for Chi-Square Tests . . . . . . . . . . . . . . 407
10.4 Loglinear Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
10.4.1 Loglinear Models with Multinomial Sampling . . . . . . . . . . . . . 409
10.4.2 Loglinear Models in a Complex Survey . . . . . . . . . . . . . . . . . 410
10.5 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
10.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
11 Regression with Complex Survey Data 419
11.1 Model-Based Regression in Simple Random Samples . . . . . . . . . . . . . 420
11.2 Regression with Complex Survey Data . . . . . . . . . . . . . . . . . . . . 423
11.2.1 Point Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
11.2.2 Standard Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
11.2.3 Multiple Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
11.2.4 Regression Using Weights versus Weighted Least Squares . . . . . . 432
11.3 Using Regression to Compare Domain Means . . . . . . . . . . . . . . . . . 433
11.4 Interpreting Regression Coefficients from Survey Data . . . . . . . . . . . . 435
11.4.1 Purposes of Regression Analyses . . . . . . . . . . . . . . . . . . . . 435
11.4.2 Model-Based and Design-Based Inference . . . . . . . . . . . . . . . 436
11.4.3 Survey Weights and Regression . . . . . . . . . . . . . . . . . . . . . 437
11.4.4 Survey Design and Standard Errors . . . . . . . . . . . . . . . . . . 438
11.4.5 Mixed Models for Cluster Samples . . . . . . . . . . . . . . . . . . . 439
11.5 Logistic Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
11.6 Calibration to Population Totals . . . . . . . . . . . . . . . . . . . . . . . . 442
11.7 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
11.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
12 Two-Phase Sampling 457
12.1 Theory for Two-Phase Sampling . . . . . . . . . . . . . . . . . . . . . . . . 459
12.2 Two-Phase Sampling with Stratification . . . . . . . . . . . . . . . . . . . . 461
12.3 Ratio and Regression Estimation in Two-Phase Samples . . . . . . . . . . . 464
12.3.1 Two-Phase Sampling with Ratio Estimation . . . . . . . . . . . . . . 464
12.3.2 Generalized Regression Estimation in Two-Phase Sampling . . . . . 466
12.4 Jackknife Variance Estimation for Two-Phase Sampling . . . . . . . . . . . 467
12.5 Designing a Two-Phase Sample . . . . . . . . . . . . . . . . . . . . . . . . . 469
12.5.1 Two-Phase Sampling with Stratification . . . . . . . . . . . . . . . . 469
12.5.2 Optimal Allocation for Ratio Estimation . . . . . . . . . . . . . . . . 471
12.6 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
12.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
13 Estimating the Size of a Population 483
13.1 Capture–Recapture Estimation . . . . . . . . . . . . . . . . . . . . . . . . . 483
13.1.1 Contingency Tables for Capture–Recapture Experiments . . . . . . . 484
13.1.2 Confidence Intervals for N . . . . . . . . . . . . . . . . . . . . . . . . 485
13.1.3 Using Capture–Recapture on Lists . . . . . . . . . . . . . . . . . . . 486
13.2 Multiple Recapture Estimation . . . . . . . . . . . . . . . . . . . . . . . . . 488
13.3 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
13.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
14 Rare Populations and Small Area Estimation 499
14.1 Sampling Rare Populations . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
14.1.1 Stratified Sampling with Disproportional Allocation . . . . . . . . . 500
14.1.2 Two-Phase Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
14.1.3 Unequal-Probability Sampling . . . . . . . . . . . . . . . . . . . . . 501
14.1.4 Multiple Frame Surveys . . . . . . . . . . . . . . . . . . . . . . . . . 502
14.1.5 Network or Multiplicity Sampling . . . . . . . . . . . . . . . . . . . . 504
14.1.6 Snowball Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
14.1.7 Sequential Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
14.2 Small Area Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
14.2.1 Direct Estimators 507
14.2.2 Synthetic and Composite Estimators 508
14.2.3 Model-Based Estimators 509
14.3 Chapter Summary 510
14.4 Exercises 512
15 Nonprobability Samples 517
15.1 Types of Nonprobability Samples . . . . . . . . . . . . . . . . . . . . . . . 518
15.1.1 Administrative Records . . . . . . . . . . . . . . . . . . . . . . . . . 518
15.1.2 Quota Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
15.1.3 Judgment Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
15.1.4 Convenience Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
15.2 Selection Bias and Mean Squared Error . . . . . . . . . . . . . . . . . . . . 524
15.2.1 Random Variables Describing Participation in a Sample . . . . . . . 525
15.2.2 Bias and Mean Squared Error of a Sample Mean . . . . . . . . . . . 528
15.3 Reducing Bias of Estimates from Nonprobability Samples . . . . . . . . . . 531
15.3.1 Weighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
15.3.2 Estimate the Values of the Missing Units . . . . . . . . . . . . . . . 536
15.3.3 Measures of Uncertainty for Nonprobability Samples . . . . . . . . . 537
15.4 Nonprobability versus Low-Response Probability Samples . . . . . . . . . . 539
15.5 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
15.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
16 Survey Quality 557
16.1 Coverage Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
16.1.1 Measuring Coverage and Coverage Bias . . . . . . . . . . . . . . . . 559
16.1.2 Coverage and Survey Mode . . . . . . . . . . . . . . . . . . . . . . . 560
16.1.3 Improving Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
16.2 Nonresponse Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
16.3 Measurement Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
16.3.1 Measuring and Modeling Measurement Error . . . . . . . . . . . . . 565
16.3.2 Reducing Measurement Error . . . . . . . . . . . . . . . . . . . . . . 567
16.3.3 Sensitive Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
16.3.4 Randomized Response . . . . . . . . . . . . . . . . . . . . . . . . . . 568
16.4 Processing Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
16.5 Total Survey Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
16.6 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
16.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
A Probability Concepts Used in Sampling 579
A.1 Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
A.1.1 Simple Random Sampling with Replacement . . . . . . . . . . . . . 580
A.1.2 Simple Random Sampling without Replacement . . . . . . . . . . . 581
A.2 Random Variables and Expected Value . . . . . . . . . . . . . . . . . . . . 582
A.3 Conditional Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
A.4 Conditional Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
A.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
Bibliography 593
Index 641
Preface
We rarely have complete information in life. Instead, we make decisions from partial information, often in the form of a sample from the population we are interested in. Sampling:
Design and Analysis teaches the statistical principles for selecting samples and analyzing
data from a sample survey. It shows you how to evaluate the quality of estimates from a
survey, and how to design and analyze many different forms of sample surveys.
The third edition has been expanded and updated to incorporate recent research on
theoretical and applied aspects of survey sampling, and to reflect developments related to
the increasing availability of massive data sets (“big data”) and samples selected via the
internet. The new chapter on nonprobability sampling tells how to analyze and evaluate
information from samples that are not selected randomly (including big data), and contrasts
nonprobability samples with low-response-rate probability samples. The chapters on nonsampling errors have been extensively revised to include recent developments on treating
nonresponse and measurement errors. Material in other chapters has been revised where
there has been new research or I felt I could clarify the presentation of results. Examples
retained from the second edition have been updated when needed, and new examples have
been added throughout the book to illustrate recent applications of survey sampling.
The third edition has also been revised to be compatible with multiple statistical software packages. Two supplementary books, available for FREE download from the book’s
companion website (see page xviii for how to obtain the books), provide step-by-step guides
of how to use SAS® and R software to analyze the examples in Sampling: Design and Analysis. Both books are also available for purchase in paperback form, for readers who prefer
a hard copy.
Lohr, S. (2022). SAS® Software Companion for Sampling: Design and Analysis, Third
Edition. Boca Raton, FL: Chapman & Hall/CRC Press.
Lu, Y. and Lohr, S. (2022). R Companion for Sampling: Design and Analysis, Third Edition.
Boca Raton, FL: Chapman & Hall/CRC Press.
Instructors can choose which software package to use in the class (SAS software alone, R
software alone, or, if desired, both software packages) and have students download the appropriate supplementary book. See the Computing section on page xvi for more information.
Features of Sampling: Design and Analysis, Third Edition
• The book is accessible to students with a wide range of statistical backgrounds, and is
flexible for content and level. By appropriate choice of sections, this book can be used
for an upper-level undergraduate class in statistics, a first- or second-year graduate class
for statistics students, or a class with students from business, sociology, psychology, or
biology who want to learn about designing and analyzing data from sample surveys. It presents the statistical aspects of surveys and recent developments. The book is intended for anyone
who is interested in using sampling methods to learn about a population, or who wants
to understand how data from surveys are collected, analyzed, and interpreted.
Chapters 1–8 can be read by students who are familiar with basic concepts of probability and statistics from an introductory statistics course, including independence and
expectation, confidence intervals, and straight-line regression. Appendix A reviews the
probability concepts needed to understand probability sampling. Parts of Chapters 9
to 16 require more advanced knowledge of mathematical and statistical concepts. Section 9.1, on linearization methods for variance estimation, assumes knowledge of calculus. Chapter 10, on categorical data analysis, assumes the reader is familiar with
chi-square tests and odds ratios. Chapter 11, on regression analysis for complex survey data, presupposes knowledge of matrices and the theory of multiple regression for
independent observations.
Each chapter concludes with a chapter summary, including a glossary of key terms and
references for further exploration.
• The examples and exercises feature real data sets from the social sciences, engineering,
agriculture, ecology, medicine, business, and a variety of other disciplines. Many of the
data sets contain other variables not specifically referenced in the text; an instructor
can use these for additional exercises and activities.
The data sets are available for download from the book’s companion website. Full descriptions of the variables in the data sets are given in Appendix A of the supplementary
books described above (Lohr, 2022; Lu and Lohr, 2022).
The exercises also give the instructor much flexibility for course level (see page xv).
Some emphasize mastering the mechanics, but many encourage the student to think
about the sampling issues involved and to understand the structure of sample designs
at a deeper level. Other exercises are open-ended and encourage further exploration of
the ideas.
In the exercises, students are asked to design and analyze data from real surveys. Many
of the data examples and exercises carry over from chapter to chapter, so students can
deepen their knowledge of the statistical concepts and see how different analyses are
performed with the sample. Data sets that are featured in multiple chapters are listed
in the “Data sets” entry of the Index so you can follow them across chapters.
• Sampling: Design and Analysis, Third Edition includes many topics not found in other
textbooks at this level. Chapters 7–11 discuss how to analyze complex surveys such as
those administered by federal statistical agencies, how to assess the effects of nonresponse and weight the data to adjust for it, how to use computer-intensive methods
for estimating variances in complex surveys, and how to perform chi-square tests and
regression analyses using data from complex surveys. Chapters 12–14 present methods
for two-phase sampling, using a survey to estimate population size, and designing a
survey to study a subpopulation that is hard to identify or locate. Chapter 15, new for
the third edition, contrasts probability and nonprobability samples, and provides guidance on how to evaluate the quality of nonprobability samples. Chapter 16 discusses a
total quality framework for survey design, and presents some thoughts on the future of
sampling.
• Design of surveys is emphasized throughout, and is related to methods for analyzing
the data from a survey. The book presents the philosophy that the design is by far the
most important aspect of any survey: No amount of statistical analysis can compensate for a badly designed sample.
• Sampling: Design and Analysis, Third Edition emphasizes the importance of graphing
the data. Graphical analysis of survey data is challenging because of the large sizes and
complexity of survey data sets, but graphs can provide insight into the data structure.
• While most of the book adopts a randomization-based perspective, I have also included
sections that approach sampling from a model-based perspective, with the goal of placing sampling methods within the framework used in other areas of statistics. Many
important results in survey research have involved models, and an understanding of
both approaches is essential for the survey practitioner. All methods for dealing with
nonresponse are model-based. The model-based approach is introduced in Section 2.10
and further developed in successive chapters; those sections can be covered while those chapters are taught, or discussed at any time later in the course.
Exercises. The book contains more than 550 exercises, organized into four types. More
than 150 of the exercises are new to the third edition.
A. Introductory exercises are intended to develop skills on the basic ideas in the book.
B. Working with Survey Data exercises ask students to analyze data from real surveys.
Most require the use of statistical software; see section on Computing below.
C. Working with Theory exercises are intended for a more mathematically oriented
class, allowing students to work through proofs of results in a step-by-step manner
and explore the theory of sampling in more depth. They also include presentations of
additional results about survey sampling that may be of interest to more advanced students. Many of these exercises require students to know calculus, probability theory, or
mathematical statistics.
D. Projects and Activities exercises contain activities suitable for classroom use or for
assignment as a project. Many of these activities ask the student to design, collect, and
analyze a sample selected from a population. The activities continue from chapter to
chapter, allowing students to build on their knowledge and compare various sampling
designs. I always assigned Exercise 35 from Chapter 7 and its continuation in subsequent
chapters as a course project, in which students select a sample, analyze the data, and write a report with their findings. Along the way, the students read and translate the survey design
descriptions into the design features studied in class, develop skills in analyzing survey
data, and gain experience in dealing with nonresponse or other challenges.
Suggested chapters for sampling classes. Chapters 1–6 treat the building blocks of simple random, stratified, and cluster sampling, as well as ratio and regression estimation.
To read them requires familiarity with basic ideas of expectation, sampling distributions,
confidence intervals, and linear regression—material covered in most introductory statistics
classes. Along with Chapters 7 and 8, these chapters form the foundation of a one-quarter
or one-semester course. Sections on the statistical theory in these chapters are marked with
asterisks—these require more familiarity with probability theory and mathematical statistics. The material in Chapters 9–16 can be covered in almost any order, with topics chosen
to fit the needs of the students.
Sampling: Design and Analysis, Third Edition can be used for many different types of
classes, and the choice of chapters to cover can be tailored to meet the needs of the students
in that class. Here are suggestions of chapters to cover for four types of sampling classes.
Undergraduate class of statistics students: Chapters 1–8, skipping sections with asterisks; Chapters 15 and 16.
One-semester graduate class of statistics students: Chapters 1–9, with topics chosen from the remaining chapters according to the desired emphasis of the class.
Two-semester graduate class of statistics students: All chapters, with in-depth coverage of Chapters 1–8 in the first term and Chapters 9–16 in the second term. The
exercises contain many additional theoretical results for survey sampling; these can be
presented in class or assigned for students to work on.
Students from social sciences, biology, business, or other subjects: Chapters 1–7
should be covered for all classes, skipping sections with asterisks. Choice of other material depends on how the students will be using surveys in the future. Persons teaching
classes for social scientists may want to include Chapters 8 (nonresponse), 10 (chi-square
tests), and 11 (regression analyses of survey data). Persons teaching classes for biology
students may want to cover Chapter 11 and Chapter 13 on using surveys to estimate
population sizes. Students who will be analyzing data from large government surveys
would want to learn about replication-based variance estimation methods in Chapter 9.
Students who may be using nonprobability samples should read Chapter 15.
Any of these can be taught as activity-based classes, and that is how I structured my
sampling classes. Students were asked to read the relevant sections of the book at home
before class. During class, after I gave a ten-minute review of the concepts, students worked
in small groups with their laptops on designing or analyzing survey data from the chapter
examples or the “Projects and Activities” section, and I gave help and suggestions as needed.
We ended each class with a group discussion of the issues and a preview of the next session’s
activities.
Computing. You need to use a statistical software package to analyze most of the data sets
provided with this book. I wrote Sampling: Design and Analysis, Third Edition for use with
either SAS or R software. You can choose which software package to use for computations:
SAS software alone, R alone, or both, according to your preference. Both software packages
are available at no cost for students and independent learners, and the supplementary books
tell how to obtain them.
The supplementary books, SAS® Software Companion for Sampling: Design and Analysis, Third Edition by Sharon L. Lohr, and R Companion for Sampling: Design and Analysis, Third Edition by Yan Lu and Sharon L. Lohr, both available for download from the book's companion website, demonstrate how to use SAS and R software, respectively, to analyze
the examples in Sampling: Design and Analysis, Third Edition. Both books are also available
for purchase in paperback form, for readers who prefer hard copies. The two supplementary
books are written in parallel format, making it easy to find how a particular example is
coded in each software package. They thus would also be useful for a reader who is familiar
with one of the software packages but would like to learn how to use the other.
The supplementary books provide the code used to select, or produce estimates or graphs
from, the samples used for the examples in Chapters 1–13 of this book. They display and
interpret the output produced by the code, and discuss special features of the procedure
or function used to produce the output. Each chapter concludes with tips and warnings on
how to avoid common errors when designing or analyzing surveys.
Which software package should you use? If you are already familiar with R or SAS
software, you may want to consider adopting that package when working through Sampling:
Design and Analysis, Third Edition. You may also want to consider the following features
of the two software packages for survey data.
Features of SAS software for survey data:
• Students and independent learners anywhere in the world can access a FREE, cloud-based version of the software: SAS® OnDemand for Academics (https://www.sas.com/en_us/software/on-demand-for-academics.html) contains all of the programs
needed to select samples, compute estimates, and graph data for surveys. Short online
videos for instructors show how to create a course site online and upload data that can be accessed by students in the course; short video tutorials help students become acquainted with the basic features of the
system; other videos, and online help materials, introduce students to basic concepts of
programming in SAS software.
• Most of the data analyses or sample selections for this book’s examples and exercises
can be done with short programs (usually containing five or fewer lines) that follow a
standard syntax.
The survey analysis procedures in SAS/STAT® software, which at this writing include the SURVEYMEANS, SURVEYREG, SURVEYFREQ, SURVEYLOGISTIC, and
SURVEYPHREG procedures, are specifically designed to produce estimates from complex surveys. The procedures can calculate either linearization-based variance estimates
(used in Chapters 1–8) or the replication variance estimates described in Chapter 9, and
they will construct replicate weights for a survey design that you specify. They will also
produce appropriate survey-weighted plots of the data. The output provides the statistics you request as well as additional information that allows you to verify the design
and weighting information used in the analysis. The procedures also print warnings if
you have written code that is associated with some common survey data analysis errors.
The SURVEYSELECT procedure will draw every type of probability sample discussed
in this book, again with output that confirms the procedure used to draw the sample.
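The short, standard syntax described above can be sketched in a few lines (a hypothetical example, not from the book: the data set name srs_sample and the population size 5000 are invented for illustration):

```sas
/* Estimate the mean of y from a simple random sample stored in
   srs_sample; TOTAL= supplies the population size so the procedure
   applies the finite population correction. */
proc surveymeans data=srs_sample total=5000 mean clm;
   var y;
run;
```

The output includes the estimated mean, its standard error, and 95% confidence limits (CLM), along with the design information used in the analysis.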
• SAS software is designed to allow you to manipulate and manage large data sets (some
survey data sets contain tens of thousands or even millions of records), and compute
estimates for those data sets using numerically stable and efficient algorithms. Many
large survey data sets (such as the National Health and Nutrition Examination Survey
data discussed in Chapter 7) are distributed as SAS data sets; you can also import files
from spreadsheet programs, comma- or tab-delimited files, and other formats.
• The software is backward compatible—that is, code written for previous versions of the
software will continue to work with newer versions. All programs are thoroughly tested
before release, and the customer support team resolves any problems with the software
that users might discover after release (they do not answer questions about how to do
homework problems, though!). Appendix 5 of SAS Institute Inc. (2020) describes the
methods used to quality-check and validate statistical procedures in SAS software.
• You do not need to learn computer programming to perform standard survey data analyses with SAS software. But for advanced users, the software offers the capability to write
programs in SAS/IML® software or use macros. In addition, many user-contributed
macros that perform specialized analyses of survey data have been published.
Features of the R statistical software environment for survey data:
• The software is available FREE from https://www.r-project.org/. It is open-source
software, which means anyone can use it without a license or fee. Many tutorials on
how to use R are available online; these tell you how to use the software to compute
statistics and to create customized graphics.
• Base R contains functions that will select and analyze data from simple random samples. To select and analyze data from other types of samples, however—those discussed
after Chapter 2 of this book—R users must either (1) write their own R functions or (2)
use functions that have been developed by other R users and made available through
a contributed package. As of September 2020, the Comprehensive R Archive Network
(CRAN) contained more than 16,000 contributed packages. If a statistical method has
been published, there is a good chance that someone has developed a contributed package for R that performs the computations.
Contributed packages for R are not peer-reviewed or quality-checked unless the package
authors arrange for such review. Functions in base R and contributed packages can
change at any time, and are not always backward compatible.
But the open-source nature of R means that other users can view and test the functions
in the packages. The book by Lu and Lohr (2022) makes use of functions in two popular
contributed packages that have been developed for survey data by Lumley (2020) and
by Tillé and Matei (2021). These functions will compute estimates and select samples for
every type of probability sampling design discussed in Sampling: Design and Analysis,
Third Edition.
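As a point of comparison with the contributed packages just mentioned, here is a minimal base-R sketch (not taken from the book; the population values are simulated) of selecting an SRS and estimating the population mean with the finite population correction:

```r
# Simulate an invented population of N = 5000 values
set.seed(42)
N <- 5000
y_pop <- rnorm(N, mean = 50, sd = 10)

# Draw a simple random sample of n = 300 without replacement
n <- 300
y <- y_pop[sample(N, n)]

# Estimate the population mean; the standard error of the mean
# includes the finite population correction (1 - n/N)
ybar <- mean(y)
se_ybar <- sqrt((1 - n / N) * var(y) / n)
ci <- ybar + c(-1, 1) * qnorm(0.975) * se_ybar  # approximate 95% CI
```

Anything beyond this self-weighting SRS case is where the survey-analysis packages of Lumley (2020) and Tillé and Matei (2021) become essential.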
• You need to learn how to work with functions in R in order to use it to analyze or select
surveys. After you have gained experience with R, however, you can write functions to
produce estimates for new statistical methods or to conduct simulation studies such as
that requested in Exercise 21 of Chapter 4.
Software packages other than SAS and R can also be used with the book, as long as they
have programs that correctly calculate estimates from complex survey data. Brogan (2015)
illustrated the errors that result when non-survey software is used to analyze data from a
complex survey. Software packages with survey data capabilities include SUDAAN® (RTI International, 2012), Stata® (Kolenikov, 2010), SPSS® (Zou et al., 2020), Mplus® (Muthén and Muthén, 2017), WesVar® (Westat, 2015), and IVEware (Raghunathan et al., 2016).
See West et al. (2018) for reviews of these and other packages. New computer programs
for analyzing survey data are developed all the time; the newsletter of the International
Association of Survey Statisticians (http://isi-iass.org) is a good resource for updated
information.
Website for the book. The book's website can be reached from either of the following addresses:
https://www.sharonlohr.com
https://www.routledge.com/9780367279509.
• Downloadable pdf files for the supplementary books SAS® Software Companion for
Sampling: Design and Analysis, Third Edition and R Companion for Sampling: Design
and Analysis, Third Edition. The pdf files are identical to the published paperback
versions of the books.
• All data sets referenced in the book. These are available in comma-delimited (.csv),
SAS, or R format. The data sets in R format are also available in the R contributed
package SDAResources (Lu and Lohr, 2021).
• Other resources related to the book.
A solutions manual for the book is available (for instructors only) from the publisher at
https://www.routledge.com/9780367279509.
Acknowledgments. I have been fortunate to receive comments and advice from many people who have used or reviewed one or more of the editions of this book. Serge Alalouf,
David Bellhouse, Emily Berg, Paul Biemer, Mike Brick, Trent Buskirk, Ted Chang, Ron
Christensen, Mark Conaway, Dale Everson, Andrew Gelman, James Gentle, Burke Grandjean, Michael Hamada, David Haziza, Nancy Heckman, Mike Hidiroglou, Norma Hubele,
Tim Johnson, Jae-Kwang Kim, Stas Kolenikov, Partha Lahiri, Yan Lu, Steve MacEachern,
David Marker, Ruth Mickey, Sarah Nusser, N. G. N. Prasad, Minsun Riddles, Deborah
Rumsey, Thomas P. Ryan, Fritz Scheuren, Samantha Seals, Elizabeth Stasny, Imbi Traat,
Shap Wolf, Tommy Wright, Wesley Yung, and Elaine Zanutto have all provided suggestions
that resulted in substantial improvements in the exposition. I am profoundly grateful that
these extraordinary statisticians were willing to take the time to share their insights about
how the book could better meet the needs of students and sampling professionals.
I’d like to thank Sandra Clark, Mark Asiala, and Jason Fields for providing helpful suggestions and references for the material on the American Community Survey and Household
Pulse Survey. Kinsey Dinan, Isaac McGinn, Arianna Fishman, and Jayme Day answered
questions and pointed me to websites with information about the procedures for the annual point-in-time count described in Example 3.13. Pierre Lavallée, Dave Chapman, Jason
Rivera, Marina Pollán, Roberto Pastor-Barriuso, Sunghee Lee, Mark Duda, and Matt Hayat
generously helped me with questions about various examples in the book.
J. N. K. Rao has provided encouragement, advice, and suggestions for this book since
the first edition. I began collaborating with Jon on research shortly after receiving tenure,
and have always been awed at his ability to identify and solve the important problems in
survey sampling—often years before anyone else realizes how crucial the topics will be. I can
think of no one who has done more to develop the field of survey sampling, not only through
his research contributions but also through his strong support for young statisticians from
all over the world. Thank you, Jon, for all your friendship and wise counsel over the years.
John Kimmel, editor extraordinaire at CRC Press, encouraged me to write this third
edition, and it was his idea to have supplemental books showing how to use SAS and R
software with the book examples. I feel immensely privileged to have had the opportunity
to work with him and to benefit from his amazing knowledge of all things publishing.
Sharon L. Lohr
April 2021
Symbols and Acronyms
The number in parentheses is the page where the notation is introduced.
ACS  American Community Survey. (4)
ASA  American Statistical Association. (91)
ANOVA  Analysis of variance. (90)
B  Ratio ty/tx or, more generally, a regression coefficient. (122)
BMI  Body mass index (variable measured in NHANES). (291)
χ²  Chi-square. (349)
C  Set of units in a convenience (or other nonprobability) sample. (528)
cdf  Cumulative distribution function. (281)
CI  Confidence interval. (46)
Cov  Covariance. (57)
CV  Coefficient of variation. (42)
deff  Design effect. (286)
df  Degrees of freedom. (48)
Di  Random variable indicating inclusion in phase II of a two-phase sample. (460)
E  Expected value. (36)
f  Probability density or mass function. (281)
F  Cumulative distribution function. (281) In other contexts, F represents the F distribution. (404)
fpc  Finite population correction, = (1 − n/N) for a simple random sample. (41)
GREG  Generalized regression. (444)
GVF  Generalized variance function. (379)
HT  Horvitz–Thompson estimator or variance estimator. (236)
ICC  Intraclass correlation coefficient. (176)
IPUMS  Integrated Public Use Microdata Series. (78)
ln  Natural logarithm. (338)
logit  logit(p) = ln[p/(1 − p)]. (441)
Mi  Number of ssus in the population from psu i. (170)
mi  Number of ssus in the sample from psu i. (171)
M0  Total number of ssus in the population, in all psus. (170)
MAR  Missing at random given covariates, a mechanism for missing data. (322)
MCAR  Missing completely at random, a mechanism for missing data. (321)
MICE  Multivariate imputation by chained equations. (338)
MSE  Mean squared error. (37)
µ  Theoretical value of mean in an infinite population, used in model-based inference. (56)
NHANES  National Health and Nutrition Examination Survey. (273)
NMAR  Not missing at random, a mechanism for missing data. (323)
N  Number of units in the population. (34)
n  Number of units in the sample. (32)
OLS  Ordinary least squares. (420)
P  Probability operator. (34)
p  Proportion of units in the population having a characteristic. (38)
p̂  Estimated proportion of units in the population having a characteristic. (39)
PES  Post-enumeration survey. (487)
πi  Probability that unit i is in the sample. (34)
πik  Probability that units i and k are both in the sample (joint inclusion probability). (235)
φi  Probability that unit i responds to a survey after being selected for the sample, called the response propensity. (321)
ψi  Probability that unit i is selected on the first draw in a with-replacement sample. (220)
pps  Probability proportional to size. (229)
psu  Primary sampling unit. (167)
Qi  Random variable indicating the number of times unit i appears in a with-replacement sample. (73)
R  Set of respondents to the survey. (323)
Ri  Random variable indicating whether unit i responds to a survey after being selected for the sample. (321) In Chapter 15, Ri is the random variable indicating participation in a nonprobability sample. (525)
R²  Coefficient of determination for a regression analysis. (421)
Ra²  Adjusted R² (adjusted coefficient of determination) for a regression analysis.
S  Set of units in a probability sample. (34)
Sh  Set of units sampled from stratum h in a stratified sample. (84)
Si  Set of ssus sampled from psu i in a cluster sample. (171)
S(1)  Phase I sample. (459)
S(2)  Phase II sample. (460)
S²  Population variance of y. (38)
s²  Sample variance of y in a simple random sample. (42)
S  Population standard deviation of y, = √(S²). (38)
Sh²  Population variance in stratum h. (84)
sh²  Sample variance in stratum h, in a stratified random sample. (84)
σ  Theoretical value of standard deviation for an infinite population, used in model-based theory. (59)
SE  Standard error. (42)
SRS  Simple random sample without replacement. (39)
SRSWR  Simple random sample with replacement. (39)
ssu  Secondary sampling unit. (167)
SYG  Sen–Yates–Grundy, specifying an estimator of the variance. (236)
t  Population total, with t = ty = Σ_{i=1}^{N} yi. (35)
T  Population total in model-based approach. (59) When used as superscript on a vector or matrix, as in xT, T denotes transpose. (404)
t̂  Estimator of population total. (35)
t̂HT  Horvitz–Thompson estimator of the population total. (236)
tα/2,k  The 100(1 − α/2)th percentile of a t distribution with k degrees of freedom. (48)
tsu  Tertiary (third-level) sampling unit. (243)
U  Set of units in the population, also called the universe. (34)
V  Variance. (37)
W  Set of units in a with-replacement probability sample, including the repeated units multiple times. (226)
wi  Weight associated with unit i in the sample. (44)
WLS  Weighted least squares. (432)
xi  An auxiliary variable for unit i in the population. This symbol is in boldface when a vector of auxiliary variables is considered. (121)
yi  A characteristic of interest observed for sampled unit i. (35)
Yi  A random variable used in model-based inference; yi is the realization of Yi in the sample. (59)
ȳU  Population mean, = (1/N) Σ_{i=1}^{N} yi. (38)
ȳ  Sample mean, = (1/n) Σ_{i∈S} yi. (35)
ȳ̂  An estimator of the population mean. (122)
ȲS  Sample mean, in model-based approach. (59)
ȳC  Sample mean from a convenience or other nonprobability sample of size n, = (1/n) Σ_{i∈C} yi. (528)
zα/2  The 100(1 − α/2)th percentile of the standard normal distribution. (48)
Zi  Random variable indicating inclusion in a without-replacement probability sample. Zi = 1 if unit i is in the sample and 0 otherwise. (56)
1  Introduction
When statistics are not based on strictly accurate calculations, they mislead instead of
guide. The mind easily lets itself be taken in by the false appearance of exactitude which
statistics retain in their mistakes, and confidently adopts errors clothed in the form of
mathematical truth.
—Alexis de Tocqueville, Democracy in America
1.1  Guidance from Samples
We all use data from samples to make decisions. When tasting soup to correct the seasoning,
deciding to buy a book after reading the first page, choosing a major after taking first-year
college classes, or buying a car following a test drive, we rely on partial information to judge
the whole.
External data used to help with those decisions come from samples, too. Statistics such
as the average rating for a book in online reviews, the median salary of psychology majors,
the percentage of persons with an undergraduate mathematics degree who are working in
a mathematics-related job, or the number of injuries resulting from automobile accidents
in 2018 are all derived from samples. So are statistics about unemployment and poverty
rates, inflation, number and characteristics of persons with diabetes, medical expenditures
of persons aged 65 and over, persons experiencing food insecurity, criminal victimizations
not reported to the police, reading proficiency among fourth-grade children, household expenditures on energy, public opinion of political candidates, land area under cultivation for
rice, livestock owned by farmers, contaminants in drinking water, size of the Antarctic population of emperor penguins—I could go on, but you get the idea. Samples, and statistics
calculated from samples, surround us.
But statistics from some samples are more trustworthy than those from others. What distinguishes the statistics that mislead from those that “guide”?
This book sets out the statistical principles that tell you how to design a sample survey,
and analyze data from a sample, so that statistics calculated from a sample accurately
describe the population from which the sample was drawn. These principles also help you
evaluate the quality of any statistic you encounter that originated from a sample survey.
Before embarking on our journey, let's look at how a statistic from a now-infamous sample survey misled its audience.
Example 1.1. The Survey That Killed a Magazine. Any time a pollster predicts the wrong
winner of an election, some commentator is sure to mention the Literary Digest Poll of
1936. It has been called “one of the worst political predictions in history” (Little, 2016) and
is regularly cited as the classic example of poor survey practice. What went wrong with the
poll, and was it really as flawed as it has been portrayed?
DOI: 10.1201/9780429298899-1
In the first three decades of the twentieth century, The Literary Digest, a weekly news
magazine founded in 1890, was one of the most respected news sources in the United States.
In presidential election years, it, like many other newspapers and magazines, devoted page
after page to speculation about who would win the election. For the 1916 election, however,
the editors wrote that “[p]olitical forecasters are in the dark” and asked subscribers in five
states to mail in a ballot indicating their preferred candidate (Literary Digest, 1916).
The 1916 poll predicted the correct winner in four of the five states, and the magazine
continued polling subsequent presidential elections, with a larger sample each time. In each
of the next four election years—1920, 1924 (the first year the poll collected data from all
states), 1928, and 1932—the person predicted to win the presidency did so, and the magazine
accurately predicted the margin of victory. In 1932, for example, the poll predicted that
Franklin Roosevelt would receive 56% of the popular vote and 474 votes in the Electoral
College; in the actual election, Roosevelt received 57% of the popular vote and 472 votes in
the Electoral College.
With such a strong record of accuracy, it is not surprising that the editors of The Literary
Digest gained confidence in their polling methods. Launching the 1936 poll, they wrote:
The Poll represents thirty years’ constant evolution and perfection. Based on the
“commercial sampling” methods used for more than a century by publishing houses
to push book sales, the present mailing list is drawn from every telephone book in the
United States, from the rosters of clubs and associations, from city directories, lists
of registered voters, classified mail-order and occupational data. (Literary Digest,
1936b, p. 3)
On October 31, 1936, the poll predicted that Republican Alf Landon would receive 54%
of the popular vote, compared with 41% for Democrat Franklin Roosevelt. The final article
on polling before the election contained the statement, “We make no claim to infallibility.
We did not coin the phrase ‘uncanny accuracy’ which has been so freely applied to our
Polls” (Literary Digest, 1936a). It is a good thing The Literary Digest made no claim to
infallibility. In the election, Roosevelt received 61% of the vote; Landon, 37%. It is widely
thought that this polling debacle contributed to the demise of the magazine in 1938.
What went wrong? One problem may have been that names of persons to be polled
were compiled from sources such as telephone directories and automobile registration lists.
Households with a telephone or automobile in 1936 were generally more affluent than other
households, and opinion of Roosevelt’s economic policies was generally related to the economic class of the respondent. But the mailing list’s deficiencies do not explain all of the
difference. Postmortem analyses of the poll (Squire, 1988; Calahan, 1989; Lusinchi, 2012)
indicated that even persons with both a car and a telephone tended to favor Roosevelt,
though not to the degree that persons with neither car nor telephone supported him.
Nonresponse—the failure of persons selected for the sample to provide data—was likely
the source of much of the error. Ten million questionnaires were mailed out, and more than
2.3 million were returned—an enormous sample, but fewer than one-quarter of those solicited. In Allentown, Pennsylvania, for example, the survey was mailed to every registered
voter, but the poll results for Allentown were still incorrect because only one-third of the
ballots were returned (Literary Digest, 1936c). Squire (1988) reported that persons supporting Landon were much more likely to have returned the survey; in fact, many Roosevelt
supporters did not remember receiving a survey even though they were on the mailing list.
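The mechanism can be reproduced with back-of-the-envelope arithmetic. In the sketch below, the true vote shares are the actual 1936 results, but the response rates are assumptions invented purely to illustrate the effect of differential nonresponse; they are not historical estimates:

```python
# Toy model of the 1936 Literary Digest poll. Vote shares are the
# actual election results; the two response rates are illustrative
# assumptions, chosen so that roughly 2.3 million ballots come back.
mailed = 10_000_000
roosevelt_recipients = mailed * 0.61  # assume recipients mirror voters
landon_recipients = mailed * 0.37

# Assume Landon supporters returned ballots at roughly twice the rate
# of Roosevelt supporters.
returned_landon = landon_recipients * 0.35
returned_roosevelt = roosevelt_recipients * 0.165
total_returned = returned_landon + returned_roosevelt

landon_share = returned_landon / total_returned
print(f"ballots returned: {total_returned:,.0f}")
print(f"Landon's share among respondents: {landon_share:.0%}")
# A huge sample, yet the poll calls the election for the losing candidate.
```

Even though more than two million ballots come back, the respondents are unrepresentative of the voters, and no increase in the number of ballots mailed repairs that.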
One lesson to be learned from The Literary Digest poll is that the sheer size of a sample
is no guarantee of its accuracy. The Digest editors became complacent because they sent out
questionnaires to more than one-quarter of all registered voters and obtained a huge sample
of more than 2.3 million people. But large unrepresentative samples can perform as badly as
small unrepresentative samples. A large unrepresentative sample may even do more harm
than a small one because many people think that large samples are always superior to small
ones. In reality, as we shall discuss in this book, the design of the sample survey—how units
are selected to be in the sample—is far more important than its size.
Another lesson is that past accuracy of a flawed sampling procedure does not guarantee
future results. The Literary Digest poll was accurate for five successive elections—until
suddenly, in 1936, it wasn’t. Reliable statistics result from using statistically sound sampling
and estimation procedures. With good procedures, statisticians can provide a measure of a
statistic’s accuracy; without good procedures, a sampling disaster can happen at any time
even if previous statistics appeared to be accurate. 
Some of today’s data sets make the size of the Literary Digest’s sample seem tiny by
comparison, and some types of data can be gathered almost instantaneously from all over
the world. But the challenges of inferring the characteristics of a population when we observe
only part of it remain the same. The statistical principles underlying sampling apply to any
sample, of any size, at any time or place in the universe.
Chapters 2 through 7 of this book show you how to design a sample so that its data
can be used to estimate characteristics of unobserved parts of the population; Chapters 9
through 14 show how to use survey data to estimate population sizes, relationships among
variables, and other characteristics of interest. But even though you might design and select
your sample in accordance with statistical principles, in many cases, you cannot guarantee
that everyone selected for the sample will agree to participate in it. A typical election
poll in 2021 has a much lower response rate than the Literary Digest poll, but modern
survey samplers use statistical models, described in Chapters 8 and 15, to adjust for the
nonresponse. We’ll return to the Literary Digest poll in Chapter 15 and see if a nonresponse
model would have improved the poll’s forecast (and perhaps have saved the magazine).
1.2 Populations and Representative Samples
In the 1947 movie “Magic Town,” the public opinion researcher played by James Stewart
discovered a town that had exactly the same characteristics as the whole United States:
Grandview had exactly the same proportion of people who voted Republican, the same
proportion of people under the poverty line, the same proportion of auto mechanics, and
so on, as the United States taken as a whole. All that Stewart’s character had to do was to
interview the people of Grandview, and he would know public opinion in the United States.
Grandview is a “scaled-down” version of the population, mirroring every characteristic
of the whole population. In that sense, it is representative of the population of the United
States because any numerical quantity that could be calculated from the population can be
inferred from the sample.
But a sample does not necessarily have to be a small-scale replica of the population to
be representative. As we shall discuss in Chapters 2 and 3, a sample is representative if
it can be used to “reconstruct” what the population looks like—and if we can provide an
accurate assessment of how good that reconstruction is.
Some definitions are needed to make the notions of a “population” and a “representative
sample” more precise.
Observation unit An object on which a measurement is taken, sometimes called an element. In surveys of human populations, observation units are often individual persons;
in agriculture or ecology surveys, they may be small areas of land; in audit surveys, they
may be financial records.
Target population The complete collection of observations we want to study. Defining the
target population is an important and often difficult part of the study. For example, in
a political poll, should the target population be all adults eligible to vote? All registered
voters? All persons who voted in the last election? The choice of target population will
profoundly affect the statistics that result.
Sample A subset of a population.
Sampled population The collection of all possible observation units that might have been
chosen in a sample; the population from which the sample was taken.
Sampling unit A unit that can be selected for a sample. We may want to study individuals
but do not have a list of all individuals in the target population. Instead, households
serve as the sampling units, and the observation units are the individuals living in the
households.
Sampling frame A list, map, or other specification of sampling units in the population
from which a sample may be selected. For a telephone survey, the sampling frame might
be a list of telephone numbers of registered voters, or simply the collection of all possible
telephone numbers. For a survey using in-person interviews, the sampling frame might
be a list of all street addresses. For an agricultural survey, a sampling frame might be a
list of all farms, or a map of areas containing farms.
In an ideal survey, the sampled population will be identical to the target population,
but this ideal is rarely met exactly. In surveys of people, the sampled population is usually
smaller than the target population. As illustrated in Figure 1.1, some persons in the target
population are missing from the sampling frame, and some will not respond to the survey.
It is also possible for the sampled population to include units that are not in the target
population, for example, if the target population consists of persons at least 18 years old,
but some persons who complete the survey are younger than that.
The target population for the American Community Survey (ACS), an annual survey
conducted by the U.S. Census Bureau, is the resident population of the United States
(U.S. Census Bureau, 2020e). The sampling frame comes from the Census Bureau’s lists of
residential housing units (for example, houses, apartments, and mobile homes) and group
quarters (for example, prisons, skilled nursing facilities, and college dormitories). These
lists are regularly updated to include new construction. A sample of about 3.5 million
housing unit addresses is selected randomly from the housing unit list; an adult at each sampled address is asked to answer the survey questions for all household members. Approximately 2% of the group quarters population is also sampled.
The sampled population consists of persons who reside at one of the places on the lists, can
be contacted, and are willing to answer the survey questions. Some U.S. residents, such as
persons experiencing homelessness or residing at an unlisted location, may be missing from
the sampling frame; others cannot be contacted or refuse or are unable to participate in the
survey (U.S. Census Bureau, 2014).
In an agricultural survey taken to estimate crop acreages and livestock inventories, the
target population may be all areas of land that are used for agriculture. Area frames are
often used for agricultural surveys, particularly when there is no list of all farm operators
or of households that engage in agriculture, or when lists of farm operators or land under
agricultural production may be outdated. The land area of a country is divided into smaller
areas that form the sampling units. The sampling frame is the list of all of the areas, which
[Diagram for Figure 1.1: the sampling frame population overlaps the target population; the sampled population is the portion of the frame that is eligible, reachable, and responding. Labeled regions: not included in sampling frame, not reachable, refuse to respond, not eligible for survey, not capable of responding.]
FIGURE 1.1
Target population and sampled population in a telephone survey of registered voters. Some
persons in the target population do not have a telephone or will not be associated with a
telephone number in the sampling frame. In some households with telephones, the residents
are not registered to vote and hence are not eligible for the survey. Some eligible persons
in the sampling frame population do not respond because they cannot be contacted, some
refuse to respond to the survey, and some may be ill and incapable of responding.
together comprise the target population of all land that could be used for agriculture in
the country. A sample of land areas is randomly selected. In some agricultural surveys, the
sampler directly observes the acreage devoted to different crops and counts the livestock
in the sampled areas. In others, the sampler conducts interviews with all farm operators
operating within the boundaries of the sampled areas; in this case, the sampling unit is the
area, and the observation unit is the farm operator.
In the Literary Digest poll, the characteristic of interest was the percentage of 1936
election-day voters who would support Roosevelt. An individual person was an element.
The target population was all persons who would vote on election day in the United States.
The sampled population was persons on the lists used by the Literary Digest who would
return the sample ballot.
Election polls conducted in the 21st century have the same target population (persons
who will vote in the election) and elements (individual voters) as the Literary Digest poll,
but the sampled populations differ from poll to poll. In some polls, the sampled population
consists of persons who can be reached by telephone and who are judged to be likely to
vote in the next election (see Figure 1.1); in other polls, the sampled population consists of
persons who are recruited over the internet and meet screening criteria for participation; in
still others, the sampled population consists of anyone who clicks on a website and expresses
a preference for one of the election candidates.
Mismatches between the target population and sampled population can cause the sample
to be unrepresentative and statistics calculated from it to be biased. Bias is a systematic
error in the sampling, measurement, or estimation procedures that results in a statistic
being consistently larger (or consistently smaller) than the population characteristic that
it estimates. In an election poll, bias can occur if, unknown to the pollster, the sample
selection procedure systematically excludes or underrepresents voters supporting one of the
candidates (as occurred in the Literary Digest poll); or if support for one or more candidates
is measured in a way that does not reflect the voters’ actual opinions (for example, if the
ordering of candidates on the list advantages some candidates relative to others); or if the
estimation procedure results in a statistic that tends to be too small (or too large). The
next two sections discuss selection and measurement bias; estimation bias is considered in
Chapter 2.
1.3 Selection Bias
Selection bias occurs when the target population does not coincide with the sampled
population or, more generally, when some population units are sampled at a different rate
than intended by the investigator. If a survey designed to study household income has fewer
poor households than would be obtained in a representative sample, the survey estimates
of the average or median household income will be too large.
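The household-income example can be made concrete with a small simulation. Everything below (the income distributions and the selection probabilities) is hypothetical, chosen only to illustrate the direction of the bias:

```python
import random

random.seed(1)

# Hypothetical population: 70% "poor" households (mean income 25,000)
# and 30% "non-poor" households (mean income 90,000).
population = ([random.gauss(25_000, 5_000) for _ in range(7_000)] +
              [random.gauss(90_000, 20_000) for _ in range(3_000)])

pop_mean = sum(population) / len(population)

# Biased selection: poor households (the first 7,000 units) are only
# half as likely to enter the sample as non-poor households.
def biased_sample(pop, n_poor, p_poor=0.05, p_rich=0.10):
    sample = []
    for i, income in enumerate(pop):
        p = p_poor if i < n_poor else p_rich
        if random.random() < p:
            sample.append(income)
    return sample

sample = biased_sample(population, n_poor=7_000)
sample_mean = sum(sample) / len(sample)

print(f"population mean:    {pop_mean:,.0f}")
print(f"biased sample mean: {sample_mean:,.0f}")  # consistently too large
```

No matter how many times the sampling is repeated, the sample mean systematically overshoots the population mean, because the selection probabilities depend on the characteristic being measured.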
The following examples indicate some ways in which selection bias can occur.
1.3.1 Convenience Samples
Some persons who are conducting surveys use the first set of population units they encounter
as the sample. The problem is that the population units that are easiest to locate or collect
may differ from other units in the population on the measures being studied. The sample
selection may, unknown to the investigators, depend on some characteristic associated with
the properties of interest.
For example, a group of investigators took a convenience sample of adolescents to study how frequently adolescents talk with their parents and teachers about AIDS. But adolescents who are willing to talk to the investigators about AIDS are probably also more likely to talk to other authority figures about AIDS. The investigators, who simply averaged the amounts of time that adolescents in the sample said they spent talking with their parents and teachers, probably overestimated the amount of communication occurring between parents and adolescents in the population.
1.3.2 Purposive or Judgment Samples
Some survey conductors deliberately or purposively select a “representative” sample. If we
want to estimate the average amount a shopper spends at the Mall of America in a shopping
trip, and we sample shoppers who look like they have spent an “average” amount, we have
deliberately selected a sample to confirm our prior opinion. This type of sample is sometimes
called a judgment sample—the investigators use their judgment to select the specific units
to be included in the sample.
1.3.3 Self-Selected Samples
A self-selected sample consists entirely of volunteers—persons who select themselves to be
in the sample. Such is the case in radio and television call-in polls, and in many surveys
conducted over the internet. The statistics from such surveys cannot be trusted. At best,
they are entertainment; at worst, they mislead.
Yet statistics from call-in polls or internet surveys of volunteers are cited as supporting
evidence by independent research institutes, policy organizations, news organizations, and
scholarly journals. For example, Maher (2008) reported that about 20 percent of the 1,427
people responding to an internet poll (described in the article as an “informal survey” that
solicited readers to take the survey on a website) said they had used one of the cognitive-enhancing drugs methylphenidate (Ritalin), modafinil, or beta blockers for non-medical
reasons in order to “stimulate their focus, concentration or memory.” As of 2020, the statistic
had been cited in more than 200 scientific journal articles, but few of the citing articles
mentioned the volunteer nature of the original sample or the fact that the statistic applies
only to the 1,427 persons who responded to the survey and not to a more general population.
In fact, all that can be concluded from the poll is that about 280 people who visited a website
said they had used one of the three drugs; nothing can be inferred about the rest of the
population without making heroic assumptions.
An additional problem with volunteer samples is that some individuals or organizations
may respond multiple times to the survey, skewing the results. This occurred with an internet poll conducted by Parade magazine that asked readers whether they blamed actor
Tom Cruise, or whether they blamed the media, for his “disastrous public relations year”
(United Press International, 2006, reporting on the poll, mentioned an incident in which
Cruise had jumped on the couch during Oprah Winfrey’s television show). The editors grew
suspicious, however, when 84 percent of respondents said the media—not Cruise—was to
blame. The magazine’s publicist wrote: “We did some investigating and found out that
more than 14,000 (of the 18,000-plus votes) that came in were cast from only 10 computers.
One computer was responsible for nearly 8,400 votes alone, all blaming the media for Tom’s
troubles. We also discovered that at least two other machines were the sources of inordinate
numbers of votes . . . . It seems these folks (whoever they may be) resorted to extraordinary
measures to try to portray Tom in a positive light for the Parade.com survey.”
Example 1.2. Many researchers collect samples from persons who sign up to take surveys
on the internet and are paid for their efforts. How well do such samples represent the
population for which inference is desired?
Ellis et al. (2018) asked a sample of 1,339 U.S. adults to take a survey about eating
behavior. The study participants were recruited from Amazon’s Mechanical Turk, a crowdsourcing website that allows persons or businesses to temporarily hire persons who are
registered on the site as “Workers.” Workers who expressed interest in the study were directed to the online survey and paid 50 cents upon completing it. The sample was thus
self-selected—participants first chose to register with Mechanical Turk and then chose to
take and complete the survey.
Do the survey participants have the same eating behavior patterns as U.S. adults as a
whole? We can’t tell from this survey, but the participants differed from the population of
U.S. adults on other characteristics. According to the 2017 ACS, about 51 percent of the U.S.
population aged 18 and over was female; 63 percent was white non-Hispanic; and 29 percent
had a bachelor’s degree or higher (U.S. Census Bureau, 2020b). The sample of Mechanical
Turk Workers was 60 percent female and 80 percent white non-Hispanic, and 52 percent
had a bachelor’s degree or higher. As found in other research (see, for example, Hitlin,
2016; Mortensen et al., 2018), the persons recruited from the Mechanical Turk website were
more likely to be female, highly educated, white, and non-Hispanic than persons not in the sample. The sampled population consisted of adults who had registered with the site, had access to the internet, and were willing to take a 15-minute survey in exchange for a tiny remuneration.
The study authors made no claims that their sample represents the U.S. population.
Their purpose was to explore potential relationships between picky eating and outcomes
such as social eating anxiety, body mass index, and depressive symptoms. As the authors
stated, further research would be needed to determine whether the relationships found in
this study apply more generally.
Because the sample was self-selected, statistics calculated from it describe only the persons in the sample. Eighteen percent of the persons in the sample fit the “picky eater” profile, but we cannot conclude from the
study that 18 percent of all adults in the United States are picky eaters. Even if the sample
resembled the population with respect to all demographic characteristics, picky eaters could
have chosen to participate in the survey at a higher (or lower) rate than non-picky eaters. 
1.3.4 Undercoverage
Undercoverage occurs when the sampling frame fails to include some members of the
target population. Population units that are not in the sampling frame have no chance
of being in the sample; if they differ systematically from population units that are in the
frame, statistics calculated from the sample may be biased.
Undercoverage occurs in telephone surveys because some households and persons do not
have telephones. In 2020, nearly all telephone surveys in the United States used sampling
frames that included both cellular telephones and landline telephones. In earlier years,
however, many telephone surveys excluded cellular telephones, which meant that persons
in households with no landline were not covered.
A mail survey has undercoverage of persons whose addresses are missing from the address
list or who have no fixed address. An online or e-mail survey fails to cover persons who lack
internet access. A survey of anglers that uses a state’s list of persons with fishing licenses
as a sampling frame has undercoverage of unlicensed anglers or anglers from out-of-state.
1.3.5 Overcoverage
Overcoverage occurs when units not in the target population can end up in the sample.
It is not always easy to construct a sampling frame that corresponds exactly with the
target population. There might be no list of all households with children under age 5, persons
who are employed in science or engineering fields, or businesses that sell food products to
consumers. To survey those populations, samplers often use a too-large sampling frame, then
screen out ineligible units. For example, the sampling frame might consist of all household
addresses in the area, and interviewers visiting sampled addresses would exclude households
with no children under age 5. But overcoverage can occur when persons not in the target
population are not screened out of the sample, or when data collectors are not given clear
instructions on sample eligibility. In some surveys, particularly when payment is offered for
taking the survey, overcoverage may occur when persons not eligible for the survey falsely
claim to meet the eligibility criteria (Kan and Drummey, 2018).
Another form of overcoverage occurs when individual units appear multiple times in
the sampling frame, and thus have multiple chances to be included in the sample, but the
multiplicity is not adjusted for in the analysis. In its simplest form, random digit dialing prescribes selecting a random sample of 10-digit telephone numbers. Households with
more than one telephone line have a higher chance of being selected in the sample. This
multiplicity can be compensated for in the estimation (we'll discuss this in Section 6.5); if it
is ignored, bias can result. One might expect households with more telephone lines to be
larger or more affluent, so if no adjustment is made for those households having a higher
probability of being selected for the sample, estimates of average income or household size
may be too large. Similarly, a person with multiple e-mail addresses has a higher chance of
being selected in an e-mail survey.
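The adjustment for multiplicity can be sketched with a weighted estimator in which each responding household receives weight 1/k, where k is its number of telephone lines, so that a household with k chances of selection counts as 1/k of a household. The data below are hypothetical, and the full estimator is developed in Section 6.5:

```python
# Each sampled household reports its income and its number of telephone
# lines; a household with k lines had k chances to enter the sample.
# Incomes and line counts are invented for illustration.
households = [
    {"income": 30_000, "lines": 1},
    {"income": 45_000, "lines": 1},
    {"income": 60_000, "lines": 2},
    {"income": 150_000, "lines": 3},
]

# The unweighted mean ignores the multiplicity and overrepresents
# multi-line (typically more affluent) households.
unweighted = sum(h["income"] for h in households) / len(households)

# Weighting each household by 1 / (number of lines) compensates.
weights = [1 / h["lines"] for h in households]
weighted = (sum(w * h["income"] for w, h in zip(weights, households))
            / sum(weights))

print(f"unweighted mean:             {unweighted:,.0f}")
print(f"multiplicity-weighted mean:  {weighted:,.0f}")
```

The weighted mean is pulled back down toward the single-line households, reversing the upward bias that the unequal selection chances introduce.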
Some surveys have both undercoverage and overcoverage. Political polls attempt to
predict election results from a sample of likely voters. But defining the set of persons who
will vote in the election is difficult. Pollsters use a variety of methods and models
to predict who will vote in the election, but the predictions can exclude some voters and
include some nonvoters.
To assess undercoverage and overcoverage, you need information that is external to
the survey. In the ACS, for example, coverage errors are assessed for the population by
comparing survey estimates with independent population estimates that are calculated from
data on housing, births, deaths, and immigration (U.S. Census Bureau, 2014).
1.3.6 Nonresponse
Nonresponse—failing to obtain responses from some members of the chosen sample—
distorts the results of many surveys, even surveys that are carefully designed to minimize
other sources of selection bias. Many surveys reported in newspapers or research journals
have dismal response rates—in some, fewer than one percent of the households or persons
selected to be in the sample agree to participate.
Numerous studies comparing respondents and nonrespondents have found differences
between the two groups. Although survey samplers attempt to adjust for the nonresponse
using methods we’ll discuss in Chapter 8, systematic differences between the respondents
and nonrespondents may persist even after the adjustments. Typically, knowledge from an
external source is needed to assess effects of nonresponse—you cannot tell the effects of
nonresponse by examining data from the respondents alone.
Example 1.3. Response rates for the U.S. National Health Interview Survey, an annual
survey conducted in person at respondents’ residences, have been declining since the early
1990s. The survey achieved household response rates exceeding 90% in the 1990s, but by
2015 only about 70% of the households selected to participate did so. The goal of the
survey is to provide information about the health status of U.S. residents and their access
to health care. If the nonrespondents are less healthy than the persons who answer the
survey, however, then estimates from the survey may overstate the health of the nation.
Evaluating effects of nonresponse can be challenging: the nonrespondents’ health status
is, in general, unknown to the survey conductor because nonrespondents do not provide
answers to the survey. Sometimes, though, information about the nonrespondents can be
obtained from another source. By matching National Health Interview Survey respondents
from 1990 through 2009 with a centralized database of death record information, Keyes
et al. (2018) were able to determine which of the survey respondents had died as of 2011.
They found that the mortality rates for survey respondents were lower than those for the
general population, indicating that respondents may be healthier, on average, than persons
who are not in the sampling frame or who do not respond to the survey. 
1.3.7 What Good Are Samples with Selection Bias?
Selection bias is of concern when it is desired to use estimates from a sample to describe the
population. If we want to estimate the total number of violent crime victims in the United
States, or the percentage of likely voters in the United Kingdom who intend to vote for the
Labour Party in the next election, selection bias can cause estimates from the sample to be
far from the corresponding population quantities.
But samples with selection bias may provide valuable information for other purposes,
particularly in the early stages of an investigation. Such was the case for a convenience
sample taken in fall 2019.
Example 1.4. As of October 2019, more than 1,600 cases of lung injuries associated with
use of electronic cigarettes (e-cigarettes) had occurred, including 34 deaths (Moritz et al.,
2019), but the cause of the injuries was unknown. Lewis et al. (2019) conducted interviews with 53 patients in Utah who had used e-cigarette products within three months of
experiencing lung injury. Forty-nine of them (92 percent) reported using cartridges containing tetrahydrocannabinol (THC, the psychoactive ingredient in marijuana). Most of the
THC-containing products were acquired from friends or from illicit dealers.
The study authors identified possible sources of selection bias in their report. Although
they attempted to interview all 83 patients who were reported to have lung injuries following
use of e-cigarettes, only 53 participated, and the nonresponse might cause estimates to be
biased. Additional bias might occur because physicians may have reported only the more
serious cases, or because THC was illegal in Utah and patients might have underreported
its use. Persons with lung injuries who did not seek medical care were excluded from the
study. The sample used in the study was likely not representative of e-cigarette users with
lung injuries in the United States as a whole, or even in Utah.
But even with the selection bias, the sample provided new information about the lung
injuries. The majority of the persons with lung injury in the sample had been using e-cigarettes containing THC, and this finding led the authors to recommend that the public
stop using these products, pending further research. The purpose of the sample was to
provide timely information for improving public health, not to produce statistics describing
the entire population of e-cigarette users, and the data in the sample provided a basis for
further investigations. 
1.4 Measurement Error
A good sample has accurate responses to the items of interest. When a response in the survey
differs from the true value, measurement error has occurred. Measurement bias occurs
when the response has a tendency to differ from the true value in one direction. As with
selection bias, measurement error and bias must be considered and minimized in the design
stage of the survey; no amount of statistical analysis will disclose that the scale erroneously
added 5 kilograms to the weight of every person in the health survey.
Measurement error is a concern in all surveys and can be insidious. In many surveys of
vegetation, for example, areas to be sampled are divided into smaller plots. A sample of
plots is selected, and the number of plants in each plot is recorded. When a plant is near the
boundary of the region, the field researcher needs to decide whether to include the plant in
the tally. A person who includes all plants near or on the boundary in the count is likely to
produce an estimate of the total number of plants in the area that is too high because some
plants may be counted twice. High-quality ecological surveys have clearly defined protocols
for counting plants near the boundaries of the sampled plots.
Example 1.5. Measurement errors may arise for reasons that are not immediately obvious.
More than 20,000 households participated in a survey conducted in Afghanistan in 2018.
Because the survey asked several hundred questions, the questions were divided among
several modules. Two modules, however, gave very different estimates of the percentage of
children who had recently had a fever, and the investigators struggled to understand why. Both modules asked questions of the same set of sampled households about the same children. Why was the estimated percentage of children who recently had a fever twice as high for Module 2 as for Module 1?
Alba et al. (2019) found potential reasons for the discrepancy. Questions in the two
modules were answered by different persons in the household and had different contexts.
Men were asked the questions in Module 1, which concerned medical expenditures. Women
were asked the questions in Module 2, which concerned treatment practices for childhood
illnesses. The context of medical expenditures in Module 1 may have focused recall on fevers
requiring professional medical treatment, and respondents may have neglected to mention
less serious fevers. In addition, women, more likely to be the children’s primary caregivers,
may have been aware of more fever episodes than men. 
Sometimes measurement bias is unavoidable. In the North American Breeding Bird
Survey, observers stop every one-half mile on designated routes and count all birds heard or
seen during a 3-minute period within a quarter-mile radius (Ziolkowski et al., 2010; Sauer
et al., 2017). The count of birds at a stop is almost always smaller than the true number
of birds in the area because some birds are silent and unseen during the 3-minute count;
scientists use statistical models and information about the detectability of different bird
species to obtain population estimates. If data are collected with the same procedure and
with similarly skilled observers from year to year, however, the survey counts can be used
to estimate trends in the population of different species—the biases from different years are
expected to be similar, and may cancel when year-to-year differences are calculated.
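The cancellation of a constant bias in trend estimates can be checked with toy numbers; the bird counts and the 60% detection rate below are invented for illustration:

```python
# Hypothetical true bird population by year, observed with a constant
# detection rate (the fraction of birds seen or heard in 3 minutes).
true_counts = {2018: 1000, 2019: 950}
detection_rate = 0.6  # same protocol, similarly skilled observers

observed = {yr: n * detection_rate for yr, n in true_counts.items()}

# Each observed count underestimates the truth...
assert all(observed[yr] < true_counts[yr] for yr in true_counts)

# ...but the estimated relative change equals the true relative change,
# because the constant multiplicative bias cancels in the ratio.
true_change = true_counts[2019] / true_counts[2018] - 1
est_change = observed[2019] / observed[2018] - 1
print(f"true change: {true_change:.1%}, estimated: {est_change:.1%}")
```

If the detection rate drifted from year to year (different observers, different protocols), the bias would no longer cancel, which is why consistent field procedures matter so much for trend estimation.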
Obtaining accurate responses is challenging in all types of surveys, but particularly so
in surveys of people:
• People sometimes do not tell the truth. In an agricultural survey, farmers in an area with
food-aid programs may underreport crop yields, hoping for more food aid. Obtaining
truthful responses is a particular challenge in surveys involving sensitive subject matter,
such as surveys about drug use.
• People forget. A victimization survey may ask respondents to describe criminal victimizations that occurred to them within the past year. Some persons, however, may forget
to mention an incident that occurred; others may include a memorable incident that
occurred more than a year ago.
• People do not always understand the questions. Confusing questions elicit confused responses. A question such as “Are you concerned about housing conditions in your neighborhood?” has multiple sources of potential confusion. What is meant by “concern,”
“housing conditions,” or “neighborhood”? Even the pronoun “you” may be ambiguous
in this question. Is it a singular pronoun referring to the individual survey respondent
or a collective pronoun referring to the entire neighborhood?
• People may give different answers to surveys conducted by different modes (Dillman,
2006; de Leeuw, 2008; Hox et al., 2017). The survey mode is the method used to
distribute and collect answers to the survey. Some surveys are conducted using a single
mode—in-person, internet, telephone, or mail—while others allow participants to choose
their mode when responding. Respondents may perceive questions differently when they
hear them than when they read them.
Respondents may also give different answers to a self-administered survey (for example,
an internet or mail survey where respondents enter answers directly) than to a survey
in which questions are asked by interviewers. This is particularly true for questions on
sensitive topics such as drug use, criminal activity, or health risk behaviors—people
may be more willing to disclose information that puts them in a bad light in a self-administered survey than to an interviewer (Kreuter et al., 2008; Lind et al., 2013).
Conversely, people may be more likely to provide “socially desirable” answers that portray them in a positive light to an interviewer. Dillman and Christian (2005) found
that people are more likely to rate their health as excellent when in a face-to-face interview than when they fill out a questionnaire sent by mail. In another experiment,
Keeter (2015) randomly assigned persons taking a survey to telephone mode (with an
interviewer) or internet mode (with no interviewer). Among those taking the survey by
telephone, 62 percent said they were “very satisfied” with their family life; among those
taking the survey over the internet, 44 percent said they were “very satisfied.”
• People may say what they think an interviewer wants to hear or what they think will
impress, or not offend, the interviewer. West and Blom (2017) reviewed studies finding
that the race or gender of an interviewer may influence survey responses. Eisinga et al.
(2011) reported that survey respondents were more likely to report dietary be…
