
Reducing attrition in phone-based panel surveys: best practices and semi-automation for survey workflows

Published online by Cambridge University Press:  07 March 2025

Ala Alrababah*
Affiliation:
Department of Social and Political Sciences, Bocconi University, Milano, Italy; Immigration Policy Lab, Stanford University and ETH Zürich, Zurich, Switzerland
Marine Casalis
Affiliation:
Immigration Policy Lab, Stanford University and ETH Zürich, Zurich, Switzerland
Daniel Masterson
Affiliation:
Immigration Policy Lab, Stanford University and ETH Zürich, Zurich, Switzerland; Department of Political Science, University of California Santa Barbara, Santa Barbara, CA, USA
Dominik Hangartner
Affiliation:
Immigration Policy Lab, Stanford University and ETH Zürich, Zurich, Switzerland; Center for International and Comparative Studies, ETH Zürich, Zurich, Switzerland
Stefan Wehrli
Affiliation:
Decision Science Laboratory, ETH Zürich, Zurich, Switzerland
Jeremy Weinstein
Affiliation:
Immigration Policy Lab, Stanford University and ETH Zürich, Zurich, Switzerland; Department of Political Science, Stanford University, California, USA; Harvard Kennedy School, Cambridge, Massachusetts, USA
*
Corresponding author: Ala Alrababah; Email: [email protected]

Abstract

Panel surveys and phone-based data collection are essential for survey research and are often used together due to the practical advantages of conducting repeated interviews over the phone. These tools are particularly critical for research in dynamic or high-risk settings, as highlighted by researchers’ responses to the COVID-19 pandemic. However, preventing high attrition is a major challenge in panel surveys. Current solutions in political science focus on statistical fixes to address attrition ex-post but often overlook a preferable solution: minimizing attrition in the first place. Building on a review of political science panel studies and established best practices, we propose a framework to reduce attrition and introduce an online platform to facilitate the logistics of survey implementation. The web application semi-automates survey call scheduling and enumerator workflows, helping to reduce panel attrition, improve data quality, and minimize enumerator errors. Using this framework in a panel study of Syrian refugees in Lebanon, we maintained participant retention at 63 percent four and a half years after the baseline survey. We provide guidelines for researchers to report panel studies transparently and describe their designs in detail.

Type
Research Note
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2025. Published by Cambridge University Press on behalf of EPS Academic Ltd

1. Introduction

Social scientists are often interested in studying how phenomena change over time. This may require measuring variables that take weeks or months to materialize or tracking how predictors and outcomes vary over time. Measuring such changes with surveys requires recontacting respondents. While surveys are a widely used method for data collection in the social sciences, panel surveys present unique challenges. One of the most significant challenges in panel research is respondent attrition. Recent research has emphasized the need to address attrition in longitudinal studies to ensure the reliability of findings (e.g., Özler et al., 2019; Munzert et al., 2021).

Attrition has long been recognized as a major problem in panel surveys. To address this issue, researchers have developed statistical methods to correct for panel attrition ex post (Manski, 1997; Wooldridge, 2007; Lee, 2009; Weuve et al., 2012). However, these approaches often rely on unverifiable assumptions. One option is complete case analysis, which assumes that responses are missing completely at random, a condition that is often unrealistic. Other approaches use inverse-probability weights or imputation, which assume that observations are missing at random conditional on observed covariates, so that, once we condition on those covariates, attrition is as-if random. Bounds offer another statistical approach that handles attrition without additional assumptions, but the resulting bounds can be so wide that the results are uninformative.Footnote 1
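To make the ex post route concrete, the sketch below estimates inverse-probability-of-retention weights from baseline covariates. It is a minimal illustration, not part of the original study: the covariate names and the logistic specification are placeholder assumptions.

```python
# Illustrative sketch (not the authors' code): inverse-probability weights
# for attrition, assuming a baseline data frame `df` with an indicator
# `retained` (1 if observed at follow-up) and baseline covariates.
import pandas as pd
import statsmodels.formula.api as smf

def attrition_weights(df: pd.DataFrame) -> pd.Series:
    """Estimate P(retained | baseline covariates) with a logit model and
    return inverse-probability weights for retained respondents."""
    # Covariate names are hypothetical placeholders.
    model = smf.logit("retained ~ age + female + education + employed", data=df)
    p_retained = model.fit(disp=False).predict(df)
    weights = 1.0 / p_retained
    # Weights are only used for respondents actually observed at follow-up.
    return weights.where(df["retained"] == 1)

# Usage: pass the non-missing weights to a weighted analysis of the
# follow-up outcomes, e.g., as the `weights` argument of smf.wls().
```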

Statistical approaches to dealing with attrition are ex post and, thus, second best. Ex ante research design choices that minimize attrition in the first place are critical. Our study contributes to the literature on panel survey design strategies aimed at maximizing response rates. We build on prior work in survey methodology (Tourangeau and Ye, 2009; Fumagalli et al., 2013; Calderwood, 2016; Eisnecker and Kroh, 2016) and public health (Booker et al., 2011; Teague et al., 2018). We also build on the existing literature on best practices and guidelines for new survey modalities, including quota sampling using Facebook advertisements and online recruitment techniques in diverse geographies (Zhang et al., 2020; Boas et al., 2020). Our paper provides a set of strategies that use technology to increase the retention of vulnerable and mobile populations in panel surveys. We propose strategies to (1) build trust with respondents, (2) ensure accurate contact information, (3) reduce barriers for respondents, such as through flexible call scheduling and timing, and (4) trace respondents using secondary contacts and, where possible, WhatsApp functionality.

This paper presents a framework for minimizing attrition in panel surveys, drawing on our experience with Syrian refugees in Lebanon and a review of panel studies in political science. To support implementation, we provide an online platform that semi-automates the process, with flowcharts of the platform in Appendix C and code for the platform on the Harvard dataverse.Footnote 2 In Appendix E, we also include recommendations for reporting critical elements of panel research design and attrition statistics in future studies. Our goal is to promote transparency and shared standards and offer a toolkit for other researchers.

2. Panel survey of Syrian refugees in Lebanon

Between August 2019 and March 2024, we conducted a panel survey with a nationally representative sample of 3,003 Syrian refugees in Lebanon (Alrababah et al., 2023). The baseline survey was conducted in person with heads of households. We used a stratified sampling frame. In the first stage, we selected localities based on the prevalence of Syrian refugees and the majority Lebanese sect. In the second stage, we used a random walk procedure to select households. In the third stage, we selected the head of household. The response rate to the baseline survey was 77 percent.Footnote 3 During the baseline survey, we asked respondents for their phone numbers as well as numbers of secondary contacts to call if we were unable to reach the primary respondents. We attempted to re-contact the respondents every 2–5 months using WhatsApp, except for the final wave, which was conducted approximately a year after the previous one.Footnote 4 In total, we conducted 14 follow-up rounds of data collection over 4.5 years. The first follow-up wave after the baseline had a retention rate of around 77 percent of the original baseline respondents. The retention rate declined over time, reaching 67 percent of the baseline sample after 3.5 years and 63 percent after 4.5 years.Footnote 5 The largest drop in the response rate occurred between the baseline and the first wave, possibly due to incorrectly recorded phone numbers, although selection is also likely an important factor.

Our survey constitutes a particularly challenging test of the framework and online platform that we present below. Many Syrian refugees in Lebanon are mobile and vulnerable, facing challenges that complicate panel data collection. Some of these difficulties include widespread unemployment and poverty, discrimination and curfews targeting Syrians, and lockdowns due to COVID-19 for part of the duration of the panel survey. Exacerbating the challenges of panel data collection in this context, many respondents have low literacy levels, frequently change their phone numbers, and can be unreachable for hours or days due to electricity cuts and cellular network outages. Nonetheless, the retention rate in our study is high relative to other panels in political science, as demonstrated in the next section. Further, the duration of the panel survey is important, with only a few original panel surveys published in political science journals lasting for 4.5 years or longer (the duration of our panel).

At the same time, we acknowledge that the lessons of our survey are context-dependent, and not all of them will apply to all other surveys in political science. In particular, many of the lessons presented here focus on phone interviews through WhatsApp. Such lessons will not necessarily apply to online, mail, or face-to-face surveys, or to contexts where people do not use WhatsApp for phone calls. Although our strategies are designed for use with a vulnerable population in the Global South, we believe the main lessons of this study are relevant to many phone interviews in similar contexts and may apply in other settings. Researchers can benefit from approaches such as building trust with respondents, assigning the same enumerators consistently, providing compensation, reducing barriers through flexible scheduling, and maintaining updated contact details.

3. Attrition in political science panel surveys

How do panel data collection practices and retention rates vary in political science research? Panel surveys are widely used in political science, but several challenges undermine the central goal of retaining participants over time. Attrition may result from respondent mobility, changes in contact information, and survey fatigue, among other factors. These challenges are often even greater in research with vulnerable populations, such as those facing financial difficulties, living in areas with inconsistent network coverage, or fearing data disclosure due to persecution or repression risks.Footnote 6

Despite these challenges, the use of panel surveys is widespread in political science. A review of 13 major journals in political science between 2005 and 2021 yielded 128 original panel surveys published in 105 research papers.Footnote 7 To conduct the review, we searched for “panel” and “survey” or “longitudinal” and “survey” in several leading political science journals (listed in Appendix A). We included only original panel surveys, defined as panel surveys collected by the authors or by parties they commissioned. We included all surveys, including ones commissioned by the authors through online platforms such as YouGov and MTurk, or in classrooms. (We disaggregate the results by survey method in Appendix Figures A.2 and A.3.) We excluded life course research panels not conducted by the authors and existing surveys such as American National Election Study (ANES) panel studies or the British Household Panel Survey, for which data collection was not necessarily commissioned by the authors.

Figure 1 shows the retention rates over the first five years. For these studies, we attempted to calculate the retention rate as a proportion of the sample size at baseline; where this was not possible, we note the exceptions in the description of the list of studies in Appendix F. The majority of panels lasted less than one year after baseline. The median length of a panel survey in the sample is 2 months, and the interquartile range is 0.5–12.9 months. Around 80 percent of the panels include only one follow-up round.Footnote 8 Among panel surveys with more than one follow-up round, retention declines rapidly after the first follow-up. Our panel survey, highlighted in blue, maintains a relatively high retention rate after 51 months. Figures A.1 and A.2 in Appendix A provide additional details about the panels in the review, including their locations and method of contact, and a breakdown of attrition by method. Additionally, our literature review found that some studies do not report key details of their panel design, such as retention rates and time between waves. Appendix E lists items that should be reported in panel surveys.
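For clarity, the retention rate plotted here can be written as the share of baseline respondents interviewed in a given wave (a minimal formalization; the notation is ours):

```latex
% Retention rate in wave t as a share of the baseline sample
r_t = \frac{n_t}{n_0}, \qquad
n_0 = \text{baseline sample size}, \quad
n_t = \text{number of baseline respondents interviewed in wave } t .
```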

Figure 1. Retention rate in panel surveys published in several leading political science journals between 2005 and 2021. If a study includes more than one follow-up round, data points are connected with a line. When multiple published papers use the same original panel data, we only include one study.

4. Proposed framework

In this section, we use the existing literature and our survey to describe a set of practices that can contribute to maximizing panel retention. We also introduce an online platform that can help semi-automate critical survey scheduling and workflow issues.

Existing research in public health and survey methodology has assessed many tools that attempt to maximize response rates and minimize attrition in panel surveys. Booker et al. (2011) examine 45 retention strategies that they categorize into incentives, reminders and alternative modes of data collection, and other methods. Wagner (2013) focuses on using survey paradata to improve contact rates, for example by using respondent constraints to inform interview scheduling and sequencing. Fumagalli et al. (2013) find that between-wave mailing of a change-of-address card can reduce attrition. Calderwood (2016) also examines between-wave mailing, in addition to other strategies such as refusal conversion, in which people who previously refused to participate in a survey are approached again in order to convert their refusal into a completed interview. Teague et al. (2018) categorize many of the existing strategies into barrier reduction, community building, follow-up and reminder strategies, and tracing.

Building on this research and our study of Syrian refugees in Lebanon, we propose a set of steps that attempt to (1) build trust with respondents, (2) ensure the accuracy of contact details, (3) reduce participation barriers for respondents, and (4) trace respondents through secondary contacts and WhatsApp functionality. We present the proposed framework visually in Figure 2. We then briefly discuss its approach to call scheduling and enumerator workflow. Appendix B offers a detailed discussion of the design and the specific steps we propose to build trust, ensure accuracy, reduce barriers, and trace respondents. We divide these steps by when they can be conducted in the survey process. Appendix C provides the flowcharts used to build the online platform.Footnote 9

Figure 2. Outline of the proposed framework to conduct panel surveys.

4.1 Prior to data collection

To minimize attrition, researchers should take several design steps into consideration. These include building trust with participants prior to data collection, clearly communicating the study’s goals and structure, and setting up recontact infrastructure (e.g., WhatsApp phone numbers for enumerators) before baseline data collection. Conducting the baseline survey in person may help build trust with respondents. Enumerators should be trained to clearly and respectfully explain the study’s goals and the panel structure to participants.Footnote 10 Using a proactive and engaging consent process can also help establish a relationship and increase trust.

If the baseline survey is done in person and follow-up surveys are conducted by phone, researchers should inform respondents beforehand and ask for their phone numbers during baseline data collection. In our survey, attrition was highest between the in-person baseline and the first phone-based follow-up wave. Among those who did not participate in any surveys after the baseline, 214 contacts (7 percent) had invalid phone numbers and 211 contacts (7 percent) had incorrect phone numbers. In retrospect, we suspect that this issue could have been mitigated if we had assigned a WhatsApp number to each enumerator before baseline data collection and asked respondents to send a WhatsApp message to the appropriate enumerator as part of the baseline survey interview.

4.1.1 Setting up a specialized communication app

Researchers should consider using specialized communication applications like WhatsApp for follow-up data collection and setting them up before the baseline survey.Footnote 11 Calls and messages on these applications are often free or inexpensive for both respondents and researchers, saving resources for everyone involved, increasing response rates, and reducing selection bias. WhatsApp numbers can also be more stable than mobile numbers for two reasons. First, respondents can often keep the same account identifier (e.g., WhatsApp number) even if they change SIM cards. Second, respondents can answer a WhatsApp call via Wi-Fi even if they do not have an active phone plan (such as when not paying phone bills). Another benefit is that some of these applications, including WhatsApp, are end-to-end encrypted, which mitigates some risks of monitoring when working with vulnerable populations; these applications also allow for audio messages, making them useful for low-literacy populations. Finally, WhatsApp can notify a user's contacts when that user changes numbers, which can help reduce attrition. In our study, we were able to recontact over 17 percent of respondents who had changed their phone or WhatsApp numbers using this function and other backup information (such as secondary contacts; see below).

4.1.2 Collecting backup contact information

Researchers can reduce attrition by collecting both primary and secondary contact information. In addition to collecting respondents’ WhatsApp numbers, researchers should ask for any other phone numbers the respondent has and for contact information of secondary contacts, such as family members, friends, or neighbors.Footnote 12 This can help maintain contact with respondents if they are not reachable through their main WhatsApp number. In our survey, a majority of respondents (73 percent) provided at least one secondary contact and 33 percent provided at least two secondary contacts. Figure 3 (left panel) shows the percentage of successful calls that used updated contact information provided by secondary contacts.Footnote 13

Figure 3. (Left) Cumulative percent of respondents successfully called in each wave using information from secondary contacts (in that wave or in a previous wave). (Right) The number of call attempts required to successfully contact respondents, i.e., the total number of attempts made across all of a respondent's different numbers. The different shapes in the figure indicate the round of follow-up data collection.

4.2 During panel data collection

In this section, we present our framework for optimizing phone-based data collection. We implement the workflow through our web app, which semi-automates panel survey management and execution in order to increase response rates and participation. The workflow begins with improving the timing and sequencing of calls, sending text messages before calls, and then conducting survey interviews and recording their outcomes.

4.2.1 Optimizing call sequencing and timing

Collecting multiple phone numbers for each respondent can reduce attrition but complicates the enumerators’ workflow. In our application of the framework, enumerators tried calling respondents up to 10 times over multiple days before moving to secondary contacts. After an unsuccessful call, the web application’s automated scheduler waited 4–18 hours before prompting the enumerator to call the respondent again. The wait times were set to ensure that calls took place at different times of the day.

If the respondent did not answer after 10 attempts, the enumerator was then prompted to call secondary contacts, starting with household members and followed by non-household members. The enumerator was prompted to make three attempts per secondary contact, with 4–18 hour gaps between calls. If a secondary contact was reached, the enumerator asked for updated contact information for the primary respondent, who was then called up to five times. If these calls were unsuccessful, the web application directed the enumerator to move on to the next secondary contact.
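The sketch below illustrates this call-sequencing logic under simplifying assumptions; the function and variable names are ours and do not come from the authors' platform code.

```python
# Illustrative sketch (not the authors' platform code) of the call-sequencing
# rules: up to 10 attempts on the primary number, then each secondary contact
# up to 3 times; any updated number a secondary contact provides gets up to 5
# more attempts. Unsuccessful attempts are rescheduled 4-18 hours later so
# calls land at different times of day.
import random
from datetime import datetime, timedelta

def next_attempt_time(now: datetime) -> datetime:
    """Wait 4-18 hours after an unsuccessful call before prompting again."""
    return now + timedelta(hours=random.uniform(4, 18))

def call_queue(primary: str, secondary_contacts: list[str]) -> list[tuple[str, int]]:
    """Ordered (number, max_attempts) pairs an enumerator works through.
    Household members should be listed before non-household members."""
    return [(primary, 10)] + [(contact, 3) for contact in secondary_contacts]

# If a secondary contact supplies an updated number for the respondent,
# it is appended to the queue with a cap of five attempts:
# queue.append((updated_number, 5))
```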

We found this approach effective, often successfully recontacting respondents who did not answer initial attempts. Four and a half years into the study, we were able to reach 522 respondents using phone numbers different from those recorded at baseline. This represents 27 percent of respondents in our final round of follow-up data collection. Figure 3 (right panel) shows the percentage of primary respondents whom we successfully reached as a function of the number of call attempts. In each round, hundreds of respondents required multiple call attempts, and we see large marginal gains in contact rates over the first 3–4 call attempts and relatively small marginal gains after about 5–6 attempts.

4.2.2 Semi-automating tasks

An efficient interface to manage these processes is essential. Many of the tasks discussed above are difficult to manage manually, and attempts to carry them out without efficient workflow management could lead to accumulated errors and substantial panel attrition.

To manage this, we developed a web application that automates most of the tasks related to scheduling calls and updating contact information within a round.Footnote 14 The platform works by having a supervisor assign each enumerator a set of contacts to message and call.Footnote 15 Respondents' contact information appears for each enumerator under a "Messages and Calls" dashboard (see Figure A.8 in the Appendix). As enumerators make calls and indicate whether attempts are successful, the application automatically schedules upcoming enumerator tasks, working through the list of primary and secondary contacts.

Enumerators also use the dashboard on their mobile phones to send messages and make calls to respondents. Clicking on a message task in the dashboard leads them to a screen (shown in Figure A.9 in the Appendix). The text of the message is pre-populated, and clicking the “Transfer” button adds the respondent’s contact and populates a text message in WhatsApp. If the respondent does not have WhatsApp, clicking the SMS button adds the contact to a normal SMS message and copies the message text (in our case, for Lebanese numbers only).
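As an illustration of how such pre-populated messages can be generated, the sketch below builds a standard WhatsApp click-to-chat link and an SMS fallback; it is our own example, not the platform's implementation, and device support for the SMS body parameter varies.

```python
# Illustrative sketch (not the authors' code) of pre-populating a WhatsApp
# message with the standard click-to-chat URL (https://wa.me/<number>?text=...).
from urllib.parse import quote

def whatsapp_link(phone: str, message: str) -> str:
    """Build a click-to-chat link; `phone` is digits only, with country code
    and no '+' sign or leading zeros."""
    return f"https://wa.me/{phone}?text={quote(message)}"

def sms_link(phone: str, message: str) -> str:
    """SMS fallback for respondents without WhatsApp; the body parameter
    is widely but not universally supported across devices."""
    return f"sms:{phone}?body={quote(message)}"

# Example (placeholder number):
# whatsapp_link("96170000000", "Hello, this is the survey team. May we call you today?")
```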

After sending a message, the enumerator accesses the respondent details screen (Figure A.10 in the Appendix) by clicking on the respondent's name under "Calls & Surveys" on the dashboard. This screen displays information about the respondent, which is imported into Qualtrics. The enumerator clicks the "Survey" button to open the survey in Qualtrics and the "Call" button to call the respondent via WhatsApp. The Qualtrics survey imports information such as the respondent's name and place of origin, which helps verify their identity. The "GC" (Gift Code) button allows the enumerator to send phone credit to respondents to compensate them for their time.

The dashboard simplifies the job of enumerators by assigning them a set of respondents to stay in touch with and providing a set of simple tasks to complete. The application manages more routine tasks such as populating text messages, recording responses, scheduling call attempts, and removing nonrespondents from the dashboard. The application also automatically adds secondary contacts to the call queue for respondents who cannot be reached and manages scheduling and appointments.

Before starting a survey, the enumerator can review the participant details, past text messages, and notes in the dashboard. Additionally, the application manages payments by automatically generating gift codes (phone cards in our case) for respondents, which can be sent after a successful call. Overall, the dashboard efficiently presents a schedule (queue) for calling respondents, keeps track of responses, and facilitates direct transfer of compensation to participants. These features streamline the enumerators’ work, allowing them to focus on carefully conducting each interview and correctly implementing other non-automated tasks such as respectful communication with respondents and sending text messages when needed.

4.3 After each round of data collection

Researchers should examine context-appropriate methods for building and maintaining trust with participants, as well as ensuring data security. Researchers may consider whether offering financial compensation to respondents would help reduce attrition rates and maintain trust (Zagorsky and Rhoton, 2008; Pforr et al., 2015). However, ethical concerns must be taken into account, as financial incentives can potentially undermine meaningful consent. At the same time, researchers should consider a possible ethical imperative to compensate respondents for the time and resources that they gave up in order to participate in the research. The decision of whether, how, and how much to compensate should be made on a case-by-case basis, with input from local stakeholders, such as civil society or humanitarian actors. In our case, we provided respondents with around $3.50 of phone credit in short survey rounds and $7 in long survey rounds.Footnote 16 We made clear to respondents that this was a small compensation to acknowledge their contribution to the research and to cover the time and phone credit required for participation. The compensation amount was carefully considered and discussed with humanitarian actors operating in Lebanon to avoid incentivizing any potentially risky behavior while expressing appreciation for the respondents' time. We do not find a difference in response rates between rounds with high and low compensation.

Researchers may consider sending occasional messages unrelated to the research project, such as birthday or holiday greetings (taking into account the religious context), or providing useful information to respondents when possible about policy changes or ongoing events.Footnote 17

5. Conclusion

Panel surveys and phone-based data collection are crucial tools in social science research, but attrition can significantly bias results. While statistical methods can address attrition ex-post, we advocate for a proactive approach that minimizes attrition through careful design choices. This paper offers best practices and guidelines for researchers to reduce attrition in panel surveys, including a semi-automated web application to streamline the process. Additionally, our review of published panel studies reveals inconsistent reporting of key methodological details. To improve reporting, we propose standardized reporting guidelines (Appendix E) to help researchers learn from past studies and improve future panel survey designs.

Although our framework is designed for a sample of Syrian refugees in Lebanon, some of its lessons may apply to other political science research settings. Lebanon’s challenging operational environment—characterized by political instability, security risks, poor infrastructure, and a mobile refugee population facing economic hardship—has similarities with conditions in numerous contexts that are of interest to political scientists. The framework is particularly relevant for researchers studying hard-to-survey or mobile populations in unstable contexts. Some elements of the workflow, such as WhatsApp-based communication, may not be universally applicable, but many of our strategies can be adapted to various settings. For instance, building trust through proactive consent and consistent enumerator assignment, offering compensation, reducing barriers with flexible scheduling and interviews outside of business hours, and updating contact details can benefit studies in many research settings, particularly where respondents might be wary of participating or face time constraints. Our framework is meant to complement, rather than replace, qualitative and contextual knowledge in research design. We encourage researchers to combine our guidelines with other tools, engage with stakeholders, and obtain local knowledge in order to conduct successful panel studies.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/psrm.2025.6. Replication material for this article is available at https://doi.org/10.7910/DVN/IL3IKY.

Acknowledgements

We are grateful to Oliver Brägger for his support with the platform. Maëlle Delouis-Jost, Amanda Gach, and Fabio Schmocker provided excellent research assistance. We appreciate the support of the authors of the panel studies we reviewed for taking the time to answer our questions and provide additional information. We would also like to thank Aala Abdelgadir, Achim Ahrens, Laura Bronner, Jeremy Ferwerda, Gloria Gennaro, Elisabeth van Lieshout, Hans Lueders, William Marble, Rachel Myrick, Matthew Tyler, and participants at EGAP 2023 Annual Meeting in London for their feedback.

Footnotes

1 Bounds can be narrowed by invoking assumptions such as monotonicity.

3 For more details on the sampling strategy, see Alrababah et al. (2023).

4 Using different data collection modes across waves assumes outcome stability, i.e., that potential outcomes are invariant across data collection rounds (see Coppock et al., 2017). In particular, survey mode can shape whether respondents answer the survey and how they answer specific questions. Unfortunately, we cannot directly test this assumption in our survey.

5 These response rates are reported as a proportion of the original baseline sample of 3,003 respondents. Individuals who refused to answer or had incorrect contact information that could not be corrected through secondary contacts were removed from the survey at each wave.

6 Although it is beyond the scope of this paper, we note that concerns related to ethics, privacy, and data security are important considerations in such settings.

7 We focus the review on panel studies in political science to ensure comparability with our study. Our review focused on articles posted on the websites of these journals between 2005 and 2021 (even if the publication date came after 2021).

8 See Figure A.4.

10 All enumerators in our study were Lebanese and native Arabic speakers, with a dialect very close to Syrian Arabic.

11 Note that WhatsApp may not be the best solution everywhere and this is context-dependent. We decided to rely on WhatsApp because interviews with the community suggested that WhatsApp is widely used among our intended sample.

12 Note that secondary contacts are not used for proxy interviews. Instead, secondary contacts are only used to get up-to-date contact information for the primary respondents in order to successfully recontact them.

13 This includes respondents reached in wave t through secondary contacts either in that wave or in a prior wave.

14 The flowchart for this platform can be seen in Appendix C. The code for the platform can be accessed here: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/CPACNZ

15 We ensured that the enumerators were assigned to the same respondents in each wave.

16 In Lebanon, the cost of a mobile phone line is particularly high and our compensation amounts are relatively small. The phone credit could only be used for mobile minutes, messages, and data.

17 We did not adopt this in our design.

References

Alrababah, A, Masterson, D, Casalis, M, Hangartner, D and Weinstein, J (2023) The dynamics of refugee return: Syrian refugees and their migration intentions. British Journal of Political Science 53, 1108–1131.
Boas, TC, Christenson, DP and Glick, DM (2020) Recruiting large online samples in the United States and India: Facebook, Mechanical Turk, and Qualtrics. Political Science Research and Methods 8, 232–250.
Booker, CL, Harding, S and Benzeval, M (2011) A systematic review of the effect of retention methods in population-based cohort studies. BMC Public Health 11, 1–12.
Calderwood, L (2016) Reducing non-response in longitudinal surveys by improving survey practice. PhD thesis, UCL (University College London).
Coppock, A, Gerber, AS, Green, DP and Kern, HL (2017) Combining double sampling and bounds to address nonignorable missing outcomes in randomized experiments. Political Analysis 25, 188–206.
Eisnecker, PS and Kroh, M (2016) The informed consent to record linkage in panel studies: optimal starting wave, consent refusals, and subsequent panel attrition. Public Opinion Quarterly 81, 131–143.
Fumagalli, L, Laurie, H and Lynn, P (2013) Experiments with methods to reduce attrition in longitudinal surveys. Journal of the Royal Statistical Society Series A: Statistics in Society 176, 499–519.
Lee, DS (2009) Training, wages, and sample selection: estimating sharp bounds on treatment effects. The Review of Economic Studies 76, 1071–1102.
Manski, CF (1997) Monotone treatment response. Econometrica: Journal of the Econometric Society 65, 1311–1334.
Munzert, S, Selb, P, Gohdes, A, Stoetzer, LF and Lowe, W (2021) Tracking and promoting the usage of a COVID-19 contact tracing app. Nature Human Behaviour 5, 247–255.
Özler, B, Cuevas, PF, Celik, C and Parisotto, L (2019) Reducing attrition in phone surveys. Development Impact. https://tinyurl.com/developmentimpact (Accessed February 19, 2025).
Pforr, K, Blohm, M, Blom, AG, Erdel, B, Felderer, B, Fräßdorf, M, Hajek, K, Helmschrott, S, Kleinert, C, Koch, A, Krieger, U, Kroh, M, Martin, S, Saßenroth, D, Schmiedeberg, C, Trüdinger, EM and Rammstedt, B (2015) Are incentive effects on response rates and nonresponse bias in large-scale, face-to-face surveys generalizable to Germany? Evidence from ten experiments. Public Opinion Quarterly 79, 740–768.
Teague, S, Youssef, GJ, Macdonald, JA, Sciberras, E, Shatte, A, Fuller-Tyszkiewicz, M, Greenwood, C, McIntosh, J, Olsson, CA, Hutchinson, D and the SEED Lifecourse Sciences Theme (2018) Retention strategies in longitudinal cohort studies: a systematic review and meta-analysis. BMC Medical Research Methodology 18, 1–22.
Tourangeau, R and Ye, C (2009) The framing of the survey request and panel attrition. Public Opinion Quarterly 73, 338–348.
Wagner, J (2013) Using paradata-driven models to improve contact rates in telephone and face-to-face surveys. In Kreuter, F (ed.), Improving Surveys with Paradata: Analytic Uses of Process Information. Hoboken, NJ: Wiley, pp. 145–170.
Weuve, J, Tchetgen, EJT, Glymour, MM, Beck, TL, Aggarwal, NT, Wilson, RS, Evans, DA and de Leon, CFM (2012) Accounting for bias due to selective attrition: the example of smoking and cognitive decline. Epidemiology 23, 119–128.
Wooldridge, JM (2007) Inverse probability weighted estimation for general missing data problems. Journal of Econometrics 141, 1281–1301.
Zagorsky, JL and Rhoton, P (2008) The effects of promised monetary incentives on attrition in a long-term panel survey. Public Opinion Quarterly 72, 502–513.
Zhang, B, Mildenberger, M, Howe, PD, Marlon, J, Rosenthal, SA and Leiserowitz, A (2020) Quota sampling using Facebook advertisements. Political Science Research and Methods 8, 558–564.