Online Panel Research : A Data Quality Perspective.
Title:
Online Panel Research : A Data Quality Perspective.
Author:
Callegaro, Mario.
ISBN:
9781118763506
Edition:
1st ed.
Physical Description:
1 online resource (512 pages)
Series:
Wiley Series in Survey Methodology
Contents:
Cover -- Title Page -- Copyright -- Contents -- Preface -- Acknowledgments -- About the Editors -- About the Contributors -- Chapter 1 Online panel research: History, concepts, applications and a look at the future -- 1.1 Introduction -- 1.2 Internet penetration and online panels -- 1.3 Definitions and terminology -- 1.3.1 Types of online panels -- 1.3.2 Panel composition -- 1.4 A brief history of online panels -- 1.4.1 Early days of online panels -- 1.4.2 Consolidation of online panels -- 1.4.3 River sampling -- 1.5 Development and maintenance of online panels -- 1.5.1 Recruiting -- 1.5.2 Nonprobability panels -- 1.5.3 Probability-based panels -- 1.5.4 Invitation-only panels -- 1.5.5 Joining the panel -- 1.5.6 Profile stage -- 1.5.7 Incentives -- 1.5.8 Panel attrition, maintenance, and the concept of active panel membership -- 1.5.9 Sampling for specific studies -- 1.5.10 Adjustments to improve representativeness -- 1.6 Types of studies for which online panels are used -- 1.7 Industry standards, professional associations' guidelines, and advisory groups -- 1.8 Data quality issues -- 1.9 Looking ahead to the future of online panels -- References -- Chapter 2 A critical review of studies investigating the quality of data obtained with online panels based on probability and nonprobability samples -- 2.1 Introduction -- 2.2 Taxonomy of comparison studies -- 2.3 Accuracy metrics -- 2.4 Large-scale experiments on point estimates -- 2.4.1 The NOPVO project -- 2.4.2 The ARF study -- 2.4.3 The Burke study -- 2.4.4 The MRIA study -- 2.4.5 The Stanford studies -- 2.4.6 Summary of the largest-scale experiments -- 2.4.7 The Canadian Newspaper Audience Databank (NADbank) experience -- 2.4.8 Conclusions for the largest comparison studies on point estimates -- 2.5 Weighting adjustments.

2.6 Predictive relationship studies -- 2.6.1 The Harris-Interactive, Knowledge Networks study -- 2.6.2 The BES study -- 2.6.3 The ANES study -- 2.6.4 The US Census study -- 2.7 Experiment replicability studies -- 2.7.1 Theoretical issues in the replication of experiments across sample types -- 2.7.2 Evidence and future research needed on the replication of experiments in probability and nonprobability samples -- 2.8 The special case of pre-election polls -- 2.9 Completion rates and accuracy -- 2.10 Multiple panel membership -- 2.10.1 Effects of multiple panel membership on survey estimates and data quality -- 2.10.2 Effects of number of surveys completed on survey estimates and survey quality -- 2.11 Online panel studies when the offline population is less of a concern -- 2.12 Life of an online panel member -- 2.13 Summary and conclusion -- References -- Part I Coverage -- Introduction to Part I -- Chapter 3 Assessing representativeness of a probability-based online panel in Germany -- 3.1 Probability-based online panels -- 3.2 Description of the GESIS Online Panel Pilot -- 3.2.1 Goals and general information -- 3.2.2 Telephone recruitment -- 3.2.3 Online interviewing -- 3.3 Assessing recruitment of the Online Panel Pilot -- 3.4 Assessing data quality: Comparison with external data -- 3.4.1 Description of the benchmark surveys -- 3.4.2 Measures and method of analyses -- 3.5 Results -- 3.5.1 Demographic variables -- 3.5.2 Attitudinal variables -- 3.5.3 Comparison of the GESIS Online Panel Pilot to ALLBUS with post-stratification -- 3.5.4 Additional analysis: Regression -- 3.5.5 Replication with all observations with missing values dropped -- 3.6 Discussion and conclusion -- References -- Appendix 3.A.

Chapter 4 Online panels and validity: Representativeness and attrition in the Finnish eOpinion panel -- 4.1 Introduction -- 4.2 Online panels: Overview of methodological considerations -- 4.3 Design and research questions -- 4.4 Data and methods -- 4.4.1 Sampling -- 4.4.2 E-Panel data collection -- 4.5 Findings -- 4.5.1 Socio-demographics -- 4.5.2 Attitudes and behavior -- 4.5.3 Use of the Internet and media -- 4.6 Conclusion -- References -- Chapter 5 The untold story of multi-mode (online and mail) consumer panels: From optimal recruitment to retention and attrition -- 5.1 Introduction -- 5.2 Literature review -- 5.3 Methods -- 5.3.1 Gallup Panel recruitment experiment -- 5.3.2 Panel survey mode assignment -- 5.3.3 Covariate measures used in this study -- 5.3.4 Sample composition -- 5.4 Results -- 5.4.1 Incidence of panel dropouts -- 5.4.2 Attrition rates -- 5.4.3 Survival analysis: Kaplan-Meier survival curves and Cox regression models for attrition -- 5.4.4 Respondent attrition vs. data attrition: Cox regression model with shared frailty -- 5.5 Discussion and conclusion -- References -- Part II Nonresponse -- Introduction to Part II -- Chapter 6 Nonresponse and attrition in a probability-based online panel for the general population -- 6.1 Introduction -- 6.2 Attrition in online panels versus offline panels -- 6.3 The LISS panel -- 6.3.1 Initial nonresponse -- 6.4 Attrition modeling and results -- 6.5 Comparison of attrition and nonresponse bias -- 6.6 Discussion and conclusion -- References -- Chapter 7 Determinants of the starting rate and the completion rate in online panel studies -- 7.1 Introduction -- 7.2 Dependent variables -- 7.3 Independent variables -- 7.4 Hypotheses -- 7.5 Method -- 7.6 Results -- 7.6.1 Descriptives -- 7.6.2 Starting rate -- 7.6.3 Completion rate -- 7.7 Discussion and conclusion.

7.7.1 Recommendations -- 7.7.2 Limitations -- References -- Chapter 8 Motives for joining nonprobability online panels and their association with survey participation behavior -- 8.1 Introduction -- 8.2 Motives for survey participation and panel enrollment -- 8.2.1 Previous research on online panel enrollment -- 8.2.2 Reasons for not joining online panels -- 8.2.3 The role of monetary motives in online panel enrollment -- 8.3 Present study -- 8.3.1 Sample -- 8.3.2 Questionnaire -- 8.3.3 Data on past panel behavior -- 8.3.4 Analysis plan -- 8.4 Results -- 8.4.1 Motives for joining the online panel -- 8.4.2 Materialism -- 8.4.3 Predicting survey participation behavior -- 8.5 Conclusion -- 8.5.1 Money as a leitmotif -- 8.5.2 Limitations and future work -- References -- Appendix 8.A -- Chapter 9 Informing panel members about study results: Effects of traditional and innovative forms of feedback on participation -- 9.1 Introduction -- 9.2 Background -- 9.2.1 Survey participation -- 9.2.2 Methods for increasing participation -- 9.2.3 Nonresponse bias and tailored design -- 9.3 Method -- 9.3.1 Sample -- 9.3.2 Experimental design -- 9.4 Results -- 9.4.1 Effects of information on response -- 9.4.2 "The perfect panel member" versus "the sleeper" -- 9.4.3 Information and nonresponse bias -- 9.4.4 Evaluation of the materials -- 9.5 Discussion and conclusion -- References -- Appendix 9.A -- Part III Measurement Error -- Introduction to Part III -- Chapter 10 Professional respondents in nonprobability online panels -- 10.1 Introduction -- 10.2 Background -- 10.3 Professional respondents and data quality -- 10.4 Approaches to handling professional respondents -- 10.5 Research hypotheses -- 10.6 Data and methods -- 10.7 Results -- 10.8 Satisficing behavior -- 10.9 Discussion -- References -- Appendix 10.A.

Chapter 11 The impact of speeding on data quality in nonprobability and freshly recruited probability-based online panels -- 11.1 Introduction -- 11.2 Theoretical framework -- 11.3 Data and methodology -- 11.4 Response time as indicator of data quality -- 11.5 How to measure "speeding"? -- 11.6 Does speeding matter? -- 11.7 Conclusion -- References -- Part IV Weighting Adjustments -- Introduction to Part IV -- Chapter 12 Improving web survey quality: Potentials and constraints of propensity score adjustments -- 12.1 Introduction -- 12.2 Survey quality and sources of error in nonprobability web surveys -- 12.3 Data, bias description, and PSA -- 12.3.1 Data -- 12.3.2 Distribution comparison of core variables -- 12.3.3 Propensity score adjustment and weight specification -- 12.4 Results -- 12.4.1 Applying PSA: The comparison of wages -- 12.4.2 Applying PSA: The comparison of socio-demographic and wage-related covariates -- 12.5 Potentials and constraints of PSA to improve nonprobability web survey quality: Conclusion -- References -- Appendix 12.A -- Chapter 13 Estimating the effects of nonresponses in online panels through imputation -- 13.1 Introduction -- 13.2 Method -- 13.2.1 The Dataset -- 13.2.2 Imputation analyses -- 13.3 Measurements -- 13.3.1 Demographics -- 13.3.2 Response propensity -- 13.3.3 Opinion items -- 13.4 Findings -- 13.5 Discussion and conclusion -- Acknowledgement -- References -- Part V Nonresponse and Measurement Error -- Introduction to Part V -- Chapter 14 The relationship between nonresponse strategies and measurement error: Comparing online panel surveys to traditional surveys -- 14.1 Introduction -- 14.2 Previous research and theoretical overview.

14.3 Does interview mode moderate the relationship between nonresponse strategies and data quality?
Abstract:
Provides new insights into the accuracy and value of online panels for completing surveys. Over the last decade, there has been a major global shift in survey and market research towards data collection using samples selected from online panels. Yet despite their widespread use, remarkably little is known about the quality of the resulting data. This edited volume is one of the first attempts to carefully examine the quality of the survey data being generated by online samples. It describes some of the best empirically based research on what has become a very important yet controversial method of collecting data. Online Panel Research presents 19 chapters of previously unpublished work addressing a wide range of topics, including coverage bias, nonresponse, measurement error, adjustment techniques, the relationship between nonresponse and measurement error, the impact of smartphone adoption on data collection, Internet rating panels, and operational issues. The datasets used to prepare the analyses reported in the chapters are available on the accompanying website: www.wiley.com/go/online_panel. Covers controversial topics such as professional respondents, speeders, and respondent validation. Addresses cutting-edge topics such as the challenge of smartphone survey completion, software to manage online panels, and Internet and mobile ratings panels. Discusses and provides examples of comparison studies between online panels and other surveys or benchmarks. Describes adjustment techniques to improve sample representativeness. Addresses coverage, nonresponse, attrition, and the relationship between nonresponse and measurement error, with examples using data from the United States and Europe. Addresses practical questions such as motivations for joining an online panel and best practices for managing communications with panelists. Presents a meta-analysis of determinants of response quantity. Features contributions from 50 international authors with a wide variety of backgrounds and expertise. This book will be an invaluable resource for opinion and market researchers, academic researchers relying on web-based data collection, governmental researchers, statisticians, psychologists, sociologists, and other research practitioners.
Local Note:
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2017. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.