Total Survey Error in Practice
John Wiley & Sons Inc (publisher)
ISBN 978-1-119-04167-2
Featuring a timely presentation of total survey error (TSE), this edited volume introduces valuable tools for understanding and improving survey data quality in the context of evolving, large-scale data sets.
This book provides an overview of the TSE framework and of current TSE research as it relates to survey design, data collection, estimation, and analysis. Because survey data inform many public policy and business decisions, the book focuses on the TSE framework as a means of understanding and improving survey data quality. It also addresses data quality issues in official statistics and in social, opinion, and market research as these fields continue to evolve, producing larger and messier data sets; this development challenges survey organizations to find ways to collect and process data more efficiently without sacrificing quality. The volume gathers up-to-date research and reporting from more than 70 contributors, leading academics and researchers from a range of fields. The chapters are organized into five main sections: The Concept of TSE and the TSE Paradigm; Implications for Survey Design; Data Collection and Data Processing Applications; Evaluation and Improvement; and Estimation and Analysis. Each chapter introduces and examines multiple error sources, such as sampling error, measurement error, and nonresponse error, which often pose the greatest risks to data quality, while also encouraging readers not to lose sight of less commonly studied error sources, such as coverage error, processing error, and specification error. The book also highlights the relationships between errors and the ways in which efforts to reduce one type of error can increase another, resulting in an estimate with larger total error.
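To make that tradeoff concrete, the TSE literature commonly summarizes the total error of an estimate as its mean squared error, combining variance and squared bias accumulated across error sources. The sketch below uses generic, illustrative notation (an estimate θ̂ and bias terms for coverage, nonresponse, and measurement), not the book's own formulas:

```latex
% A minimal sketch with illustrative notation (not taken from the book):
% total error of an estimate \hat{\theta} expressed as mean squared error,
% with bias accumulating across coverage, nonresponse, and measurement sources.
\[
\operatorname{MSE}(\hat{\theta})
  \;=\; \operatorname{Var}(\hat{\theta}) \;+\; \operatorname{Bias}(\hat{\theta})^{2},
\qquad
\operatorname{Bias}(\hat{\theta}) \;\approx\; B_{\mathrm{cov}} + B_{\mathrm{nr}} + B_{\mathrm{meas}} + \cdots
\]
% Reducing one component (e.g., B_nr through intensive nonresponse follow-up) can
% inflate another (e.g., the variance or B_meas), so the total can stay flat or grow.
```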
This book:
• Features various error sources, and the complex relationships between them, in 25 high-quality chapters on the most up-to-date research in the field of TSE
• Provides comprehensive reviews of the literature on error sources as well as data collection approaches and estimation methods to reduce their effects
• Presents examples of recent international events that demonstrate the effects of data error, the importance of survey data quality, and the real-world issues that arise from these errors
• Spans the four pillars of the total survey error paradigm (design, data collection, evaluation, and analysis) to address key data quality issues in official statistics and survey research
Total Survey Error in Practice is a reference for survey researchers and data scientists in research areas that include social science, public opinion, public policy, and business. It can also be used as a textbook or supplementary material for a graduate-level course in survey research methods.
Paul P. Biemer, PhD, is distinguished fellow at RTI International and associate director of Survey Research and Development at the Odum Institute, University of North Carolina, USA. Edith de Leeuw, PhD, is professor of survey methodology in the Department of Methodology and Statistics at Utrecht University, the Netherlands. Stephanie Eckman, PhD, is fellow at RTI International, USA. Brad Edwards is vice president, director of Field Services, and deputy area director at Westat, USA. Frauke Kreuter, PhD, is professor and director of the Joint Program in Survey Methodology, University of Maryland, USA; professor of statistics and methodology at the University of Mannheim, Germany; and head of the Statistical Methods Research Department at the Institute for Employment Research, Germany. Lars E. Lyberg, PhD, is senior advisor at Inizio, Sweden. N. Clyde Tucker, PhD, is principal survey methodologist at the American Institutes for Research, USA. Brady T. West, PhD, is research associate professor in the Survey Research Center, located within the Institute for Social Research at the University of Michigan (U-M), and also serves as statistical consultant on the Consulting for Statistics, Computing and Analytics Research (CSCAR) team at U-M, USA.
Contents

Notes on Contributors xix
Preface xxv
Section 1 The Concept of TSE and the TSE Paradigm 1
1 The Roots and Evolution of the Total Survey Error Concept 3
Lars E. Lyberg and Diana Maria Stukel
1.1 Introduction and Historical Backdrop 3
1.2 Specific Error Sources and Their Control or Evaluation 5
1.3 Survey Models and Total Survey Design 10
1.4 The Advent of More Systematic Approaches Toward Survey Quality 12
1.5 What the Future Will Bring 16
References 18
2 Total Twitter Error: Decomposing Public Opinion Measurement on Twitter from a Total Survey Error Perspective 23
Yuli Patrick Hsieh and Joe Murphy
2.1 Introduction 23
2.2 Social Media: An Evolving Online Public Sphere 25
2.3 Components of Twitter Error 27
2.4 Studying Public Opinion on the Twittersphere and the Potential Error Sources of Twitter Data: Two Case Studies 31
2.5 Discussion 40
2.6 Conclusion 42
References 43
3 Big Data: A Survey Research Perspective 47
Reg Baker
3.1 Introduction 47
3.2 Definitions 48
3.3 The Analytic Challenge: From Database Marketing to Big Data and Data Science 56
3.4 Assessing Data Quality 58
3.5 Applications in Market, Opinion, and Social Research 59
3.6 The Ethics of Research Using Big Data 62
3.7 The Future of Surveys in a Data-Rich Environment 62
References 65
4 The Role of Statistical Disclosure Limitation in Total Survey Error 71
Alan F. Karr
4.1 Introduction 71
4.2 Primer on SDL 72
4.3 TSE-Aware SDL 75
4.4 Edit-Respecting SDL 79
4.5 SDL-Aware TSE 83
4.6 Full Unification of Edit, Imputation, and SDL 84
4.7 “Big Data” Issues 87
4.8 Conclusion 89
Acknowledgments 91
References 92
Section 2 Implications for Survey Design 95
5 The Undercoverage–Nonresponse Tradeoff 97
Stephanie Eckman and Frauke Kreuter
5.1 Introduction 97
5.2 Examples of the Tradeoff 98
5.3 Simple Demonstration of the Tradeoff 99
5.4 Coverage and Response Propensities and Bias 100
5.5 Simulation Study of Rates and Bias 102
5.6 Costs 110
5.7 Lessons for Survey Practice 111
References 112
6 Mixing Modes: Tradeoffs Among Coverage, Nonresponse, and Measurement Error 115
Roger Tourangeau
6.1 Introduction 115
6.2 The Effect of Offering a Choice of Modes 118
6.3 Getting People to Respond Online 119
6.4 Sequencing Different Modes of Data Collection 120
6.5 Separating the Effects of Mode on Selection and Reporting 122
6.6 Maximizing Comparability Versus Minimizing Error 127
6.7 Conclusions 129
References 130
7 Mobile Web Surveys: A Total Survey Error Perspective 133
Mick P. Couper, Christopher Antoun, and Aigul Mavletova
7.1 Introduction 133
7.2 Coverage 135
7.3 Nonresponse 137
7.4 Measurement Error 142
7.5 Links Between Different Error Sources 148
7.6 The Future of Mobile Web Surveys 149
References 150
8 The Effects of a Mid-Data Collection Change in Financial Incentives on Total Survey Error in the National Survey of Family Growth: Results from a Randomized Experiment 155
James Wagner, Brady T. West, Heidi Guyer, Paul Burton, Jennifer Kelley, Mick P. Couper, and William D. Mosher
8.1 Introduction 155
8.2 Literature Review: Incentives in Face-to-Face Surveys 156
8.3 Data and Methods 159
8.4 Results 163
8.5 Conclusion 173
References 175
9 A Total Survey Error Perspective on Surveys in Multinational, Multiregional, and Multicultural Contexts 179
Beth-Ellen Pennell, Kristen Cibelli Hibben, Lars E. Lyberg, Peter Ph. Mohler, and Gelaye Worku
9.1 Introduction 179
9.2 TSE in Multinational, Multiregional, and Multicultural Surveys 180
9.3 Challenges Related to Representation and Measurement Error Components in Comparative Surveys 184
9.4 QA and QC in 3MC Surveys 192
References 196
10 Smartphone Participation in Web Surveys: Choosing Between the Potential for Coverage, Nonresponse, and Measurement Error 203
Gregg Peterson, Jamie Griffin, John LaFrance, and JiaoJiao Li
10.1 Introduction 203
10.2 Prevalence of Smartphone Participation in Web Surveys 206
10.3 Smartphone Participation Choices 209
10.4 Instrument Design Choices 212
10.5 Device and Design Treatment Choices 216
10.6 Conclusion 218
10.7 Future Challenges and Research Needs 219
Appendix 10.A: Data Sources 220
Appendix 10.B: Smartphone Prevalence in Web Surveys 221
Appendix 10.C: Screen Captures from Peterson et al. (2013) Experiment 225
Appendix 10.D: Survey Questions Used in the Analysis of the Peterson et al. (2013) Experiment 229
References 231
11 Survey Research and the Quality of Survey Data Among Ethnic Minorities 235
Joost Kappelhof
11.1 Introduction 235
11.2 On the Use of the Terms Ethnicity and Ethnic Minorities 236
11.3 On the Representation of Ethnic Minorities in Surveys 237
11.4 Measurement Issues 242
11.5 Comparability, Timeliness, and Cost Concerns 244
11.6 Conclusion 247
References 248
Section 3 Data Collection and Data Processing Applications 253
12 Measurement Error in Survey Operations Management: Detection, Quantification, Visualization, and Reduction 255
Brad Edwards, Aaron Maitland, and Sue Connor
12.1 TSE Background on Survey Operations 256
12.2 Better and Better: Using Behavior Coding (CARIcode) and Paradata to Evaluate and Improve Question (Specification) Error and Interviewer Error 257
12.3 Field-Centered Design: Mobile App for Rapid Reporting and Management 261
12.4 Faster and Cheaper: Detecting Falsification With GIS Tools 265
12.5 Putting It All Together: Field Supervisor Dashboards 268
12.6 Discussion 273
References 275
13 Total Survey Error for Longitudinal Surveys 279
Peter Lynn and Peter J. Lugtig
13.1 Introduction 279
13.2 Distinctive Aspects of Longitudinal Surveys 280
13.3 TSE Components in Longitudinal Surveys 281
13.4 Design of Longitudinal Surveys from a TSE Perspective 285
13.5 Examples of Tradeoffs in Three Longitudinal Surveys 290
13.6 Discussion 294
References 295
14 Text Interviews on Mobile Devices 299
Frederick G. Conrad, Michael F. Schober, Christopher Antoun, Andrew L. Hupp, and H. Yanna Yan
14.1 Texting as a Way of Interacting 300
14.2 Contacting and Inviting Potential Respondents through Text 303
14.3 Texting as an Interview Mode 303
14.4 Costs and Efficiency of Text Interviewing 312
14.5 Discussion 314
References 315
15 Quantifying Measurement Errors in Partially Edited Business Survey Data 319
Thomas Laitila, Karin Lindgren, Anders Norberg, and Can Tongur
15.1 Introduction 319
15.2 Selective Editing 320
15.3 Effects of Errors Remaining After SE 325
15.4 Case Study: Foreign Trade in Goods Within the European Union 328
15.5 Editing Big Data 334
15.6 Conclusions 335
References 335
Section 4 Evaluation and Improvement 339
16 Estimating Error Rates in an Administrative Register and Survey Questions Using a Latent Class Model 341
Daniel L. Oberski
16.1 Introduction 341
16.2 Administrative and Survey Measures of Neighborhood 342
16.3 A Latent Class Model for Neighborhood of Residence 345
16.4 Results 348
Appendix 16.A: Program Input and Data 355
Acknowledgments 357
References 357
17 ASPIRE: An Approach for Evaluating and Reducing the Total Error in Statistical Products with Application to Registers and the National Accounts 359
Paul P. Biemer, Dennis Trewin, Heather Bergdahl, and Yingfu Xie
17.1 Introduction and Background 359
17.2 Overview of ASPIRE 360
17.3 The ASPIRE Model 362
17.4 Evaluation of Registers 367
17.5 National Accounts 371
17.6 A Sensitivity Analysis of GDP Error Sources 376
17.7 Concluding Remarks 379
Appendix 17.A: Accuracy Dimension Checklist 381
References 384
18 Classification Error in Crime Victimization Surveys: A Markov Latent Class Analysis 387
Marcus E. Berzofsky and Paul P. Biemer
18.1 Introduction 387
18.2 Background 389
18.3 Analytic Approach 392
18.4 Model Selection 396
18.5 Results 399
18.6 Discussion and Summary of Findings 404
18.7 Conclusions 407
Appendix 18.A: Derivation of the Composite False-Negative Rate 407
Appendix 18.B: Derivation of the Lower Bound for False-Negative Rates from a Composite Measure 408
Appendix 18.C: Examples of Latent GOLD Syntax 408
References 410
19 Using Doorstep Concerns Data to Evaluate and Correct for Nonresponse Error in a Longitudinal Survey 413
Ting Yan
19.1 Introduction 413
19.2 Data and Methods 416
19.3 Results 418
19.4 Discussion 428
Acknowledgment 430
References 430
20 Total Survey Error Assessment for Sociodemographic Subgroups in the 2012 U.S. National Immunization Survey 433
Kirk M. Wolter, Vicki J. Pineau, Benjamin Skalland, Wei Zeng, James A. Singleton, Meena Khare, Zhen Zhao, David Yankey, and Philip J. Smith
20.1 Introduction 433
20.2 TSE Model Framework 434
20.3 Overview of the National Immunization Survey 437
20.4 National Immunization Survey: Inputs for TSE Model 440
20.5 National Immunization Survey TSE Analysis 445
20.6 Summary 452
References 453
21 Establishing Infrastructure for the Use of Big Data to Understand Total Survey Error: Examples from Four Survey Research Organizations 457
Overview 457
Brady T. West
Part 1 Big Data Infrastructure at the Institute for Employment Research (IAB) 458
Antje Kirchner, Daniela Hochfellner, and Stefan Bender
Acknowledgments 464
References 464
Part 2 Using Administrative Records Data at the U.S. Census Bureau: Lessons Learned from Two Research Projects Evaluating Survey Data 467
Elizabeth M. Nichols, Mary H. Mulry, and Jennifer Hunter Childs
Acknowledgments and Disclaimers 472
References 472
Part 3 Statistics New Zealand’s Approach to Making Use of Alternative Data Sources in a New Era of Integrated Data 474
Anders Holmberg and Christine Bycroft
References 478
Part 4 Big Data Serving Survey Research: Experiences at the University of Michigan Survey Research Center 478
Grant Benson and Frost Hubbard
Acknowledgments and Disclaimers 484
References 484
Section 5 Estimation and Analysis 487
22 Analytic Error as an Important Component of Total Survey Error: Results from a Meta-Analysis 489
Brady T. West, Joseph W. Sakshaug, and Yumi Kim
22.1 Overview 489
22.2 Analytic Error as a Component of TSE 490
22.3 Appropriate Analytic Methods for Survey Data 492
22.4 Methods 495
22.5 Results 497
22.6 Discussion 505
Acknowledgments 508
References 508
23 Mixed-Mode Research: Issues in Design and Analysis 511
Joop Hox, Edith de Leeuw, and Thomas Klausch
23.1 Introduction 511
23.2 Designing Mixed-Mode Surveys 512
23.3 Literature Overview 514
23.4 Diagnosing Sources of Error in Mixed-Mode Surveys 516
23.5 Adjusting for Mode Measurement Effects 523
23.6 Conclusion 527
References 528
24 The Effect of Nonresponse and Measurement Error on Wage Regression across Survey Modes: A Validation Study 531
Antje Kirchner and Barbara Felderer
24.1 Introduction 531
24.2 Nonresponse and Response Bias in Survey Statistics 532
24.3 Data and Methods 534
24.4 Results 541
24.5 Summary and Conclusion 546
Acknowledgments 547
Appendix 24.A 548
Appendix 24.B 549
References 554
25 Errors in Linking Survey and Administrative Data 557
Joseph W. Sakshaug and Manfred Antoni
25.1 Introduction 557
25.2 Conceptual Framework of Linkage and Error Sources 559
25.3 Errors Due to Linkage Consent 561
25.4 Erroneous Linkage with Unique Identifiers 565
25.5 Erroneous Linkage with Nonunique Identifiers 567
25.6 Applications and Practical Guidance 568
25.7 Conclusions and Take-Home Points 571
References 571
Index 575
Publication date | 18 March 2017
Series | Wiley Series in Survey Methodology
Place of publication | New York
Language | English
Dimensions | 185 x 257 mm
Weight | 1270 g
Subject areas | Mathematics / Computer Science ► Mathematics ► Applied Mathematics; Mathematics / Computer Science ► Mathematics ► Probability / Combinatorics; Social Sciences ► Sociology
ISBN-10 | 1-119-04167-8 / 1119041678
ISBN-13 | 978-1-119-04167-2 / 9781119041672