
Online Panel Research – A Data Quality Perspective

Software / Digital Media
512 pages
2014
John Wiley & Sons Inc (publisher)
978-1-118-76352-0 (ISBN)
CHF 102.65 incl. VAT
Provides new insights into the accuracy and value of online panels for completing surveys.

Over the last decade, there has been a major global shift in survey and market research towards data collection using samples selected from online panels. Yet despite their widespread use, remarkably little is known about the quality of the resulting data. This edited volume is one of the first attempts to carefully examine the quality of the survey data being generated by online samples. It describes some of the best empirically based research on what has become a very important yet controversial method of collecting data. Online Panel Research presents 19 chapters of previously unpublished work addressing a wide range of topics, including coverage bias, nonresponse, measurement error, adjustment techniques, the relationship between nonresponse and measurement error, the impact of smartphone adoption on data collection, Internet rating panels, and operational issues. The datasets used to prepare the analyses reported in the chapters are available on the accompanying website: www.wiley.com/go/online-panel

  • Covers controversial topics such as professional respondents, speeders, and respondent validation.
  • Addresses cutting-edge topics such as the challenge of smartphone survey completion, software to manage online panels, and Internet and mobile ratings panels.
  • Discusses and provides examples of comparison studies between online panels and other surveys or benchmarks.
  • Describes adjustment techniques to improve sample representativeness.
  • Addresses coverage, nonresponse, attrition, and the relationship between nonresponse and measurement error, with examples using data from the United States and Europe.
  • Addresses practical questions such as motivations for joining an online panel and best practices for managing communications with panelists.
  • Presents a meta-analysis of determinants of response quantity.
  • Features contributions from 50 international authors with a wide variety of backgrounds and expertise.

This book will be an invaluable resource for opinion and market researchers, academic researchers relying on web-based data collection, governmental researchers, statisticians, psychologists, sociologists, and other research practitioners.

Mario Callegaro, Survey Research Scientist, Quantitative Marketing, Google Inc., UK
Reg Baker, President & Chief Operating Officer, Market Strategies International, USA
Paul J. Lavrakas, Research Psychologist/Research Methodologist, Nielsen Media Research, USA
Jon A. Krosnick, Professor of Political Science, Communication, and Psychology, Stanford University, USA
Jelke Bethlehem, Department of Quantitative Economics, University of Amsterdam, The Netherlands
Anja Goeritz, Department of Economics and Social Psychology, University of Erlangen-Nuremberg, Germany

Preface
Acknowledgments
About the Editors
About the Contributors

1 Online panel research: History, concepts, applications and a look at the future (Mario Callegaro, Reg Baker, Jelke Bethlehem, Anja S. Goritz, Jon A. Krosnick, and Paul J. Lavrakas)
1.1 Introduction; 1.2 Internet penetration and online panels; 1.3 Definitions and terminology; 1.3.1 Types of online panels; 1.3.2 Panel composition; 1.4 A brief history of online panels; 1.4.1 Early days of online panels; 1.4.2 Consolidation of online panels; 1.4.3 River sampling; 1.5 Development and maintenance of online panels; 1.5.1 Recruiting; 1.5.2 Nonprobability panels; 1.5.3 Probability-based panels; 1.5.4 Invitation-only panels; 1.5.5 Joining the panel; 1.5.6 Profile stage; 1.5.7 Incentives; 1.5.8 Panel attrition, maintenance, and the concept of active panel membership; 1.5.9 Sampling for specific studies; 1.5.10 Adjustments to improve representativeness; 1.6 Types of studies for which online panels are used; 1.7 Industry standards, professional associations' guidelines, and advisory groups; 1.8 Data quality issues; 1.9 Looking ahead to the future of online panels; References

2 A critical review of studies investigating the quality of data obtained with online panels based on probability and nonprobability samples (Mario Callegaro, Ana Villar, David Yeager, and Jon A. Krosnick)
2.1 Introduction; 2.2 Taxonomy of comparison studies; 2.3 Accuracy metrics; 2.4 Large-scale experiments on point estimates; 2.4.1 The NOPVO project; 2.4.2 The ARF study; 2.4.3 The Burke study; 2.4.4 The MRIA study; 2.4.5 The Stanford studies; 2.4.6 Summary of the largest-scale experiments; 2.4.7 The Canadian Newspaper Audience Databank (NADbank) experience; 2.4.8 Conclusions for the largest comparison studies on point estimates; 2.5 Weighting adjustments; 2.6 Predictive relationship studies; 2.6.1 The Harris-Interactive, Knowledge Networks study; 2.6.2 The BES study; 2.6.3 The ANES study; 2.6.4 The US Census study; 2.7 Experiment replicability studies; 2.7.1 Theoretical issues in the replication of experiments across sample types; 2.7.2 Evidence and future research needed on the replication of experiments in probability and nonprobability samples; 2.8 The special case of pre-election polls; 2.9 Completion rates and accuracy; 2.10 Multiple panel membership; 2.10.1 Effects of multiple panel membership on survey estimates and data quality; 2.10.2 Effects of number of surveys completed on survey estimates and survey quality; 2.11 Online panel studies when the offline population is less of a concern; 2.12 Life of an online panel member; 2.13 Summary and conclusion; References

Part I COVERAGE
Introduction to Part I (Mario Callegaro and Jon A. Krosnick)

3 Assessing representativeness of a probability-based online panel in Germany (Bella Struminskaya, Lars Kaczmirek, Ines Schaurer, and Wolfgang Bandilla)
3.1 Probability-based online panels; 3.2 Description of the GESIS Online Panel Pilot; 3.2.1 Goals and general information; 3.2.2 Telephone recruitment; 3.2.3 Online interviewing; 3.3 Assessing recruitment of the Online Panel Pilot; 3.4 Assessing data quality: Comparison with external data; 3.4.1 Description of the benchmark surveys; 3.4.2 Measures and method of analyses; 3.5 Results; 3.5.1 Demographic variables; 3.5.2 Attitudinal variables; 3.5.3 Comparison of the GESIS Online Panel Pilot to ALLBUS with post-stratification; 3.5.4 Additional analysis: Regression; 3.5.5 Replication with all observations with missing values dropped; 3.6 Discussion and conclusion; References; Appendix 3.A

4 Online panels and validity: Representativeness and attrition in the Finnish eOpinion panel (Kimmo Gronlund and Kim Strandberg)
4.1 Introduction; 4.2 Online panels: Overview of methodological considerations; 4.3 Design and research questions; 4.4 Data and methods; 4.4.1 Sampling; 4.4.2 E-Panel data collection; 4.5 Findings; 4.5.1 Socio-demographics; 4.5.2 Attitudes and behavior; 4.5.3 Use of the Internet and media; 4.6 Conclusion; References

5 The untold story of multi-mode (online and mail) consumer panels: From optimal recruitment to retention and attrition (Allan L. McCutcheon, Kumar Rao, and Olena Kaminska)
5.1 Introduction; 5.2 Literature review; 5.3 Methods; 5.3.1 Gallup Panel recruitment experiment; 5.3.2 Panel survey mode assignment; 5.3.3 Covariate measures used in this study; 5.3.4 Sample composition; 5.4 Results; 5.4.1 Incidence of panel dropouts; 5.4.2 Attrition rates; 5.4.3 Survival analysis: Kaplan-Meier survival curves and Cox regression models for attrition; 5.4.4 Respondent attrition vs. data attrition: Cox regression model with shared frailty; 5.5 Discussion and conclusion; References

Part II NONRESPONSE
Introduction to Part II (Jelke Bethlehem and Paul J. Lavrakas)

6 Nonresponse and attrition in a probability-based online panel for the general population (Peter Lugtig, Marcel Das, and Annette Scherpenzeel)
6.1 Introduction; 6.2 Attrition in online panels versus offline panels; 6.3 The LISS panel; 6.3.1 Initial nonresponse; 6.4 Attrition modeling and results; 6.5 Comparison of attrition and nonresponse bias; 6.6 Discussion and conclusion; References

7 Determinants of the starting rate and the completion rate in online panel studies (Anja S. Goritz)
7.1 Introduction; 7.2 Dependent variables; 7.3 Independent variables; 7.4 Hypotheses; 7.5 Method; 7.6 Results; 7.6.1 Descriptives; 7.6.2 Starting rate; 7.6.3 Completion rate; 7.7 Discussion and conclusion; 7.7.1 Recommendations; 7.7.2 Limitations; References

8 Motives for joining nonprobability online panels and their association with survey participation behavior (Florian Keusch, Bernad Batinic, and Wolfgang Mayerhofer)
8.1 Introduction; 8.2 Motives for survey participation and panel enrollment; 8.2.1 Previous research on online panel enrollment; 8.2.2 Reasons for not joining online panels; 8.2.3 The role of monetary motives in online panel enrollment; 8.3 Present study; 8.3.1 Sample; 8.3.2 Questionnaire; 8.3.3 Data on past panel behavior; 8.3.4 Analysis plan; 8.4 Results; 8.4.1 Motives for joining the online panel; 8.4.2 Materialism; 8.4.3 Predicting survey participation behavior; 8.5 Conclusion; 8.5.1 Money as a leitmotif; 8.5.2 Limitations and future work; References; Appendix 8.A

9 Informing panel members about study results: Effects of traditional and innovative forms of feedback on participation (Annette Scherpenzeel and Vera Toepoel)
9.1 Introduction; 9.2 Background; 9.2.1 Survey participation; 9.2.2 Methods for increasing participation; 9.2.3 Nonresponse bias and tailored design; 9.3 Method; 9.3.1 Sample; 9.3.2 Experimental design; 9.4 Results; 9.4.1 Effects of information on response; 9.4.2 "The perfect panel member" versus "the sleeper"; 9.4.3 Information and nonresponse bias; 9.4.4 Evaluation of the materials; 9.5 Discussion and conclusion; References; Appendix 9.A

Part III MEASUREMENT ERROR
Introduction to Part III (Reg Baker and Mario Callegaro)

10 Professional respondents in nonprobability online panels (D. Sunshine Hillygus, Natalie Jackson, and McKenzie Young)
10.1 Introduction; 10.2 Background; 10.3 Professional respondents and data quality; 10.4 Approaches to handling professional respondents; 10.5 Research hypotheses; 10.6 Data and methods; 10.7 Results; 10.8 Satisficing behavior; 10.9 Discussion; References; Appendix 10.A

11 The impact of speeding on data quality in nonprobability and freshly recruited probability-based online panels (Robert Greszki, Marco Meyer, and Harald Schoen)
11.1 Introduction; 11.2 Theoretical framework; 11.3 Data and methodology; 11.4 Response time as indicator of data quality; 11.5 How to measure "speeding"?; 11.6 Does speeding matter?; 11.7 Conclusion; References

Part IV WEIGHTING ADJUSTMENTS
Introduction to Part IV (Jelke Bethlehem and Mario Callegaro)

12 Improving web survey quality: Potentials and constraints of propensity score adjustments (Stephanie Steinmetz, Annamaria Bianchi, Kea Tijdens, and Silvia Biffignandi)
12.1 Introduction; 12.2 Survey quality and sources of error in nonprobability web surveys; 12.3 Data, bias description, and PSA; 12.3.1 Data; 12.3.2 Distribution comparison of core variables; 12.3.3 Propensity score adjustment and weight specification; 12.4 Results; 12.4.1 Applying PSA: The comparison of wages; 12.4.2 Applying PSA: The comparison of socio-demographic and wage-related covariates; 12.5 Potentials and constraints of PSA to improve nonprobability web survey quality: Conclusion; References; Appendix 12.A

13 Estimating the effects of nonresponses in online panels through imputation (Weiyu Zhang)
13.1 Introduction; 13.2 Method; 13.2.1 The Dataset; 13.2.2 Imputation analyses; 13.3 Measurements; 13.3.1 Demographics; 13.3.2 Response propensity; 13.3.3 Opinion items; 13.4 Findings; 13.5 Discussion and conclusion; Acknowledgement; References

Part V NONRESPONSE AND MEASUREMENT ERROR
Introduction to Part V (Anja S. Goritz and Jon A. Krosnick)

14 The relationship between nonresponse strategies and measurement error: Comparing online panel surveys to traditional surveys (Neil Malhotra, Joanne M. Miller, and Justin Wedeking)
14.1 Introduction; 14.2 Previous research and theoretical overview; 14.3 Does interview mode moderate the relationship between nonresponse strategies and data quality?; 14.4 Data; 14.4.1 Study 1: 2002 GfK/Knowledge Networks study; 14.4.2 Study 2: 2012 GfK/KN study; 14.4.3 Study 3: American National Election Studies; 14.5 Measures; 14.5.1 Studies 1 and 2 dependent variables: Measures of satisficing; 14.5.2 Study 3 dependent variable: Measure of satisficing; 14.5.3 Studies 1 and 2 independent variables: Nonresponse strategies; 14.5.4 Study 3 independent variable; 14.6 Results; 14.6.1 Internet mode; 14.6.2 Internet vs. telephone; 14.6.3 Internet vs. face-to-face; 14.7 Discussion and conclusion; References

15 Nonresponse and measurement error in an online panel: Does additional effort to recruit reluctant respondents result in poorer quality data? (Caroline Roberts, Nick Allum, and Patrick Sturgis)
15.1 Introduction; 15.2 Understanding the relation between nonresponse and measurement error; 15.3 Response propensity and measurement error in panel surveys; 15.4 The present study; 15.5 Data; 15.6 Analytical strategy; 15.6.1 Measures and indicators of response quality; 15.6.2 Taking shortcuts; 15.6.3 Response effects in attitudinal variables; 15.7 Results; 15.7.1 The relation between recruitment efforts and panel cooperation; 15.7.2 The relation between panel cooperation and response quality; 15.7.3 Common causes of attrition propensity and response quality; 15.7.4 Panel conditioning, cooperation and response propensity; 15.8 Discussion and conclusion; References

Part VI SPECIAL DOMAINS
Introduction to Part VI (Reg Baker and Anja S. Goritz)

16 An empirical test of the impact of smartphones on panel-based online data collection (Frank Drewes)
16.1 Introduction; 16.2 Method; 16.3 Results; 16.3.1 Study 1: Observation of survey access; 16.3.2 Study 2: Monitoring of mobile survey access; 16.3.3 Study 3: Smartphone-related usage behavior and attitudes; 16.3.4 Study 4: Experimental test of the impact of survey participation via smartphone on the quality of survey results; 16.4 Discussion and conclusion; References

17 Internet and mobile ratings panels (Philip M. Napoli, Paul J. Lavrakas, and Mario Callegaro)
17.1 Introduction; 17.2 History and development of Internet ratings panels; 17.3 Recruitment and panel cooperation; 17.3.1 Probability sampling for building a new online Internet measurement panel; 17.3.2 Nonprobability sampling for a new online Internet measurement panel; 17.3.3 Creating a new panel from an existing Internet measurement panel; 17.3.4 Screening for eligibility, privacy and confidentiality agreements, gaining cooperation, and installing the measurement system; 17.3.5 Motivating cooperation; 17.4 Compliance and panel attrition; 17.5 Measurement issues; 17.5.1 Coverage of Internet access points; 17.5.2 Confounding who is measured; 17.6 Long tail and panel size; 17.7 Accuracy and validation studies; 17.8 Statistical adjustment and modeling; 17.9 Representative research; 17.10 The future of Internet audience measurement; References

Part VII OPERATIONAL ISSUES IN ONLINE PANELS
Introduction to Part VII (Paul J. Lavrakas and Anja S. Goritz)

18 Online panel software (Tim Macer)
18.1 Introduction; 18.2 What does online panel software do?; 18.3 Survey of software providers; 18.4 A typology of panel research software; 18.4.1 Standalone panel software; 18.4.2 Integrated panel research software; 18.4.3 Online research community software; 18.5 Support for the different panel software typologies; 18.5.1 Mobile research; 18.6 The panel database; 18.6.1 Deployment models; 18.6.2 Database architecture; 18.6.3 Database limitations; 18.6.4 Software deployment and data protection; 18.7 Panel recruitment and profile data; 18.7.1 Panel recruitment methods; 18.7.2 Double opt-in; 18.7.3 Verification; 18.7.4 Profile data capture; 18.8 Panel administration; 18.8.1 Member administration and opt-out requests; 18.8.2 Incentive management; 18.9 Member portal; 18.9.1 Custom portal page; 18.9.2 Profile updating; 18.9.3 Mobile apps; 18.9.4 Panel and community research tools; 18.10 Sample administration; 18.11 Data capture, data linkage and interoperability; 18.11.1 Updating the panel history: Response data and survey paradata; 18.11.2 Email bounce-backs; 18.11.3 Panel enrichment; 18.11.4 Interoperability; 18.12 Diagnostics and active panel management; 18.12.1 Data required for monitoring panel health; 18.12.2 Tools required for monitoring panel health; 18.13 Conclusion and further work; 18.13.1 Recent developments: Communities and mobiles; 18.13.2 Demands for interoperability and data exchange; 18.13.3 Panel health; 18.13.4 Respondent quality; References

19 Validating respondents' identity in online samples: The impact of efforts to eliminate fraudulent respondents (Reg Baker, Chuck Miller, Dinaz Kachhi, Keith Lange, Lisa Wilding-Brown, and Jacob Tucker)
19.1 Introduction; 19.2 The 2011 study; 19.3 The 2012 study; 19.4 Results; 19.4.1 Outcomes from the validation process; 19.4.2 The impact of excluded respondents; 19.5 Discussion; 19.6 Conclusion; References; Appendix 19.A

Index

Place of publication New York
Language English
Dimensions 152 x 229 mm
Weight 666 g
Subject areas Mathematics / Computer Science (Mathematics); Social Sciences (Sociology)
ISBN-10 1-118-76352-1 / 1118763521
ISBN-13 978-1-118-76352-0 / 9781118763520
Condition New