Pablo Cabrera-Álvarez, Peter Lynn, Text Messages to Facilitate the Transition to Web-First Sequential Mixed-Mode Designs in Longitudinal Surveys, Journal of Survey Statistics and Methodology, Volume 12, Issue 3, June 2024, Pages 651–673, https://doi.org/10.1093/jssam/smae003
Abstract
This article is concerned with the transition of a longitudinal survey from a single-mode design to a web-first mixed-mode design and the role that text messages to sample members can play in smoothing that transition. We present the results of an experiment that investigates the effects of augmenting the contact strategy of letters and emails with text messages that invite sample members to complete a web questionnaire and remind them of the invitation. The experiment was conducted in a subsample of Understanding Society, a household panel survey in the United Kingdom, in the wave that transitioned from a CAPI-only design to a sequential design combining web and CATI. In the experiment, a quarter of the sample received only letters and emails, while the rest additionally received between one and three text messages with a personalized link to the questionnaire. We examine the effect of the text messages on response rates, both at the web phase of the sequential design and at the end of the fieldwork after a CATI follow-up phase, and explore various mechanisms that might drive an increase in response rates. We also look at the effects on the device used to complete the survey and on the fieldwork efforts needed at the CATI stage. The findings indicate that text messages did not significantly increase response rates overall, although some subgroups benefited from them, such as panel members who had not previously provided an email or postal address. Likewise, the text messages increased web completion among younger panel members and those with an irregular response pattern. We found only a slight and nonsignificant effect on smartphone use and no effect on the web household response rate, a proxy for fieldwork efforts.
The transition from CAPI-only to a web-first sequential mixed-mode design can be challenging for longitudinal surveys that need to maintain high response rates to allow for longitudinal analyses. In this article, we present the results of an experiment where text messages were added to a contact strategy of letters and emails to increase the response rates in the transition from a CAPI-only to a web-first and CATI sequential design. The response rate of panel members who could not be reached by email or postal mail, irregular respondents, and younger adults increased after receiving the text messages. However, we did not find an effect of the text messages on the device used to complete the web questionnaire or the level of fieldwork efforts at the interviewer-administered fieldwork phase.
1. INTRODUCTION
Over the past decade, there has been a notable surge in the use of web-first sequential mixed-mode designs. This sequential design, which blends web with a second mode, offers the opportunity to achieve a better balance between data quality and survey costs by drawing on the advantages of both modes (de Leeuw 2018). In the field of longitudinal surveys, this growing interest has encouraged the transition of some studies from CAPI-only to a web-first mixed-mode design (Jäckle et al. 2015; Bianchi et al. 2017; Brown and Calderwood 2020; Biemer et al. 2021), while others have experimented with adding a web component to their original design (Voorpostel et al. 2021; Sastry and McGonagle 2022).
In the context of longitudinal studies, the shift from a CAPI-only to a web-first mixed-mode design offers the potential to reduce fieldwork efforts, which might yield a positive impact on survey costs (Dillman et al. 2014, p. 401). However, this transition poses a significant challenge: the new combination of modes must achieve a high cumulative response rate over waves. This requirement holds particular significance because, in longitudinal studies, sample members who drop out cannot be substituted by new ones, as is feasible in cross-sectional surveys, and a lower cumulative response rate could compromise the ability to conduct longitudinal analyses (Lynn 2018). Simultaneously, reducing fieldwork efforts and associated costs requires that the maximum number of panel members respond during the web-only phase, thereby avoiding transfers to the subsequent, more resource-intensive mode. An enduring question in the literature, which remains partially unanswered, concerns the strategies for achieving this dual objective: sustaining high response rates whilst increasing web completion. This article seeks to contribute to answering this question by evaluating the effect of incorporating text messages into a contact strategy that combines letters and emails in the context of a mode transition from CAPI to a web-first and CATI sequential design.
This article presents findings from an experimental study embedded in wave 11 of Understanding Society, the United Kingdom Household Longitudinal Study, and conducted during the coronavirus disease 2019 (COVID-19) pandemic, when the sample switched to a web-first and CATI sequential design due to the suspension of face-to-face fieldwork. The analysis uses the data from a random subsample of households that, before the onset of COVID-19, had always been assigned to a CAPI-only protocol, referred to as the “CAPI-only group.” The primary objective of the article is to investigate the effect of adding text messages on the web and final response rates, with a particular focus on exploring the mechanisms that underlie this effect. Additionally, we delve into how the inclusion of text messages influenced the choice of the device used to complete the survey among the web respondents and the level of fieldwork efforts.
In the following section, we provide an overview of the prior research and outline the research hypotheses. Subsequently, we present a description of Understanding Society, the study where the experiment was embedded. This includes a description of the experimental design and the analysis plan. Finally, we present the analysis results and discuss the main findings.
2. BACKGROUND
In a web-first sequential mixed-mode design, a web survey is administered first, followed by another mode for those who did not respond during the web-only phase. This design is particularly attractive due to its capacity to benefit from the advantages offered by both modes, achieving a more favorable balance between data quality and survey costs (de Leeuw 2018). The central component of this design, the web mode, is more cost-effective compared to other modes, particularly those involving interviewers and thus offers the potential for cost savings (Dillman et al. 2014, p. 401). Then, a second mode is used to follow up the nonrespondents and can help mitigate the coverage and nonresponse issues associated with the web mode. Over the past decade, several longitudinal studies have transitioned from a CAPI-only to a web-first sequential design, including Understanding Society, which introduced a web-first and CAPI sequential design from wave 7 in 2015 (Carpenter and Burton 2018) and Next Steps, the former Longitudinal Study of Young People in England, which introduced a sequential design beginning with wave 5 in 2008 (Calderwood and Sanchez 2016).
Several characteristics of longitudinal studies can facilitate the transition to a web-first design. In longitudinal surveys, the research team can collect contact information, such as email addresses or mobile numbers, in earlier waves, and use this information to implement the new mode (Bianchi et al. 2017). Moreover, data gathered from previous waves can be instrumental in targeting sample subgroups who may be less prone to participate in the new mode, enabling the allocation of these subgroups to an alternative fieldwork protocol (Lynn 2017a) or the implementation of a targeted response maximization strategy (Lynn 2017b). Finally, panel members have a history of interaction with the study, diminishing the necessity of using interviewers to introduce the study and persuade sample members about the legitimacy of the survey request (Jäckle et al. 2015).
A successful transition of a longitudinal study from a CAPI-only to a web-first sequential design must achieve a double objective. A primary objective of the transition is to maximize the response during the web-only phase, thereby reducing the extent of fieldwork efforts in the subsequent phase. Considering that the cost-per-interview tends to be higher in the follow-up mode compared to web, a reduction in fieldwork efforts due to a higher web response rate could yield cost savings. The second objective concerns the response rate: the new mix of modes must maintain response rates at a level equivalent to those previously achieved in the CAPI mode. This is essential because, unlike in cross-sectional surveys, sample members who drop out of the study cannot be replaced with new ones, and a steady decrease in the cumulative response rate can affect the feasibility of conducting longitudinal analyses (Lynn 2018).
Transitioning to a design that primarily relies on a web mode poses a potential challenge to the goal of sustaining high response rates. Recent research in the context of longitudinal surveys, both experimental and nonexperimental, has revealed that introducing a web mode into the design can lead to an overall reduction in response rates (Martin and Lynn 2011; Jäckle et al. 2015; Gaia 2017; Voorpostel et al. 2021). An important lesson gleaned from these studies and from others focused on cross-sectional web surveys is that response maximization strategies can help mitigate the potentially adverse effect of including the web mode on response propensities (Gaia 2017; Daikeler et al. 2020). One such strategy involves the manipulation of the mode or modes used to contact sample members and the content of communications (Cernat and Lynn 2018; Lynn 2019). This article focuses on the mix of contact modes and, particularly, on how text messages, in combination with letters and emails, can contribute to enhancing response rates and mitigating survey costs in a transition from a CAPI-only to a web and CATI sequential design.
Text messages have some characteristics that render them highly suitable as a contact mode for web surveys. Text messaging is an almost universal technology, with a significant majority of the adult population (94 percent) in the United Kingdom having access to a mobile phone, and 7 in 10 individuals engaging in daily text message exchanges (Ofcom 2019). Moreover, the concise and direct nature of text messages makes them particularly effective in capturing the attention of sample members, thereby enhancing the chances of successfully conveying the survey participation message enclosed in the short message service (SMS) invitation (Rettie 2009; Mavletova and Couper 2014). A final advantage of using text messages is their capacity to incorporate a survey link, allowing smartphone users to easily access and participate in the survey. However, there are certain barriers associated with the use of text messages as a contact mode. Sending text messages requires that the survey organization has access to the mobile numbers of sample members and permission to use them. Furthermore, the use of contact information is subject to local regulations, as is the case in the United Kingdom and the European Union, where the General Data Protection Regulation (Kim and Couper 2021) regulates such practices. Another limitation of using text messages is the restriction on message length, which hinders the ability to include a comprehensive and persuasive message (Mavletova and Couper 2014).
Text messaging has not been tested experimentally in the context of a web-first longitudinal study, although some studies have explored its effect on participation in web cross-sectional surveys. These investigations have shown that using text messages as prenotifications, invites, or reminders does not yield higher response rates when used in isolation (Crawford et al. 2013; DuBray 2013; De Bruijne and Wijnant 2014; McGeeney and Yanna Yan 2016; Toepoel and Lugtig 2018), but they have been more effective when combined with emails (Bosnjak et al. 2008; Barry et al. 2021). For instance, Mavletova and Couper (2014) conducted an experiment using email and SMS invites and reminders in the context of an opt-in web panel, concluding that combining the email invitation with an SMS reminder achieved the highest response rate. Furthermore, an experiment conducted in the Gallup Panel found that using both text messages and emails for invites and reminders outperformed using these modes separately (Marlar 2017).
3. RESEARCH HYPOTHESES
The background provided in the previous section sets the stage for proposing a series of hypotheses. The first hypothesis posits that adding text messages to a contact strategy that combines emails and letters will have a positive impact on response rates. It is important to note that, when referring to response rates in the hypotheses below (H1–H4), three indicators are considered: the web response rate at the end of the web-only fieldwork phase, the web response rate at the end of the entire fieldwork, and the final response rate including both web and CATI responses. In the subsequent hypotheses, we expect the text messages to have a similar effect on all three outcomes.
H1: Individual response rates will be higher if text message invites and/or reminders are sent.
Three main mechanisms explain how including text messages in the contact strategy can boost response propensities: increasing the probability of establishing contact with the sample member, facilitating access to the survey, and reinforcing the message conveyed by other modes. Some sample members might not receive the survey communications because they cannot access the mode used for delivery. Enhancing the contact strategy with a new mode can extend the reach to sample members who might otherwise not receive the information or might overlook messages delivered through other modes (Dillman et al. 2014, pp. 418–419). If this mechanism operates as anticipated, we would expect panel members whose email or postal address is missing, and who consequently cannot receive the emails or letters, to exhibit a higher response rate after receiving the text messages.
H2: Sending text message invites and/or reminders is more likely to improve individual response rates for panel members whose email or postal address was missing compared to those with full contact details.
The second mechanism consists of facilitating access to the web survey. This is achieved through personalized links embedded in the text messages, which can be used to access the survey questionnaire on a smartphone, a technology that is widespread among the United Kingdom adult population: 91 percent had access to a smartphone in 2019 (Ofcom 2019). Therefore, we hypothesize that sending text messages with a survey link will introduce a “push-to-smartphone” effect, whereby the positive impact of the SMS on response rates is mainly observed among panel members who have a smartphone and those who are more familiar with this technology. Since there is no direct measure of smartphone skills in Understanding Society, we use the frequency of Internet access as a proxy, which is associated with the propensity to complete surveys on a smartphone (Maslovskaya et al. 2019).
H3a: Sending text message invites and/or reminders is more likely to improve individual response rates for panel members with a smartphone compared to panel members who do not have access to a smartphone.
H3b: Sending text message invites and/or reminders is more likely to improve individual response rates for panel members using the Internet daily than those who use it less often.
Prior research has indicated that certain population groups are more likely to complete a web survey on smartphones. If easing access to the questionnaire through an SMS with a survey link encourages certain groups to complete the survey on their smartphones, this could also serve as an effective incentive for response, particularly among younger adults (Toepoel and Lugtig 2014; Revilla et al. 2016; Bosnjak et al. 2018; Gummer et al. 2019; Maslovskaya et al. 2019) and females (De Bruijne and Wijnant 2014; Revilla et al. 2016; Maslovskaya et al. 2019), two groups that are more prone to respond to web surveys on smartphones.
H3c: Sending text message invites and/or reminders is more likely to improve individual response rates for female panel members than for male panel members.
H3d: Sending text message invites and/or reminders is more likely to improve individual response rates for panel members aged 16–34 compared to older panel members (35 and older).
The third mechanism leverages text messages as an additional contact mode to reinforce the message conveyed by other modes and prompt the feeling that participation is important. The leverage-salience theory posits that a survey design attribute can influence the survey participation decision differently for each sample member (Groves et al. 2000). We hypothesize that the additional contact will exert a more significant influence on individuals who downplay the importance of the survey to avoid the cognitive dissonance arising from not participating, such as panel members with irregular response patterns.
H4: Sending text message invites and/or reminders is more likely to improve individual response rates for panel members with an irregular response pattern compared to regular participants.
Besides affecting response rates, previous research has demonstrated that text message invitations and reminders, containing links to the survey questionnaire, can increase the proportion of surveys completed on smartphones (Crawford et al. 2013; De Bruijne and Wijnant 2014; Mavletova and Couper 2014; Barry et al. 2021). Within the scope of this study, we expect to observe an increase in smartphone completion among panel members receiving the SMS. This increase in smartphone completion would also confirm that text messages are generating a “push-to-smartphone” effect. A related concern is whether data quality might be compromised when respondents complete the questionnaire on a smartphone as opposed to other devices. While this study does not directly address this question, some empirical evidence suggests that such an adverse effect is not observed (Maslovskaya et al. 2020).
H5: Sending text message invites and/or reminders with the link to the questionnaire will increase the percentage of sample members completing the web survey on their smartphone compared to other devices (PCs, laptops, and tablets).
One of the primary advantages of employing a web-first mixed-mode design is the opportunity to reduce survey costs compared to using a single interviewer-administered mode. In a household survey like Understanding Society, where all adults within a household are invited to participate, the main route to cost savings relies on augmenting the response rate during the web-only fieldwork phase. This would increase the number of households in which all members complete the survey early and, as a result, do not require a call from an interviewer in the subsequent phase of the fieldwork, diminishing the workload of the interviewers and saving costs (Jäckle et al. 2015). Thus, we expect the text messages to boost the full household response rate—households in which all adults completed the individual interviews—as a result of the increase in individual response propensities during the web-only fieldwork phase.
H6: In a household-based survey, the proportion of households in which all adults have completed an individual interview will be higher at the end of the web phase if at least one household member is sent text message invites and/or reminders.
4. DATA AND METHODS
In this section, we describe Understanding Society, the study where the SMS experiment was conducted, the experimental design, and the analysis plan.
4.1 Understanding Society and the CAPI-Only Group
Understanding Society, the United Kingdom Household Longitudinal Study (UKHLS), is a survey that collects data from a probability sample of individuals residing in the United Kingdom. The main component of the sample—the General Population Sample (GPS)—is based upon a two-stage stratified random sample of residential postal addresses in Great Britain (GB) plus a single-stage random sample of addresses in Northern Ireland. In GB, at the first stage, 2,640 postal sectors (geographical areas containing an average of around 2,500 households) were selected with probability proportional to size as primary sampling units (PSUs), and at the second stage, 18 addresses were selected from each PSU. In Northern Ireland, 2,400 addresses were selected. All persons resident at a sample address at the time of wave 1 fieldwork in 2009–2010 became sample members, and all babies subsequently born to sample members have themselves become sample members. In addition to the GPS, Understanding Society includes an ethnic minority boost sample and, since wave 2, the former British Household Panel Survey sample. Further details of the sample design can be found in Lynn (2009). Adult panel members aged 16 or over are invited to participate in the survey every year alongside other household members. An important design feature is that Understanding Society, as a household survey, aims to interview all adults in the household.
The study started as a mainly face-to-face survey, with a few interviews being completed on the phone during a mop-up period. From wave 7 onwards, an increasing proportion of the sample transitioned to a web and CAPI mixed-mode design. At the same time, in order to retain a group that allows the assessment of mode effects, a random 20 percent of the households selected at wave 8 remained in a CAPI-only design. This study relies on the data from this random subsample, the CAPI-only group, which was issued to face-to-face fieldwork until the pandemic outbreak, when all households were moved to a web-first and CATI protocol.
Due to the COVID-19 crisis, from April 2020, all the households, including those in the CAPI-only group, switched to a web and telephone sequential mixed-mode design (Burton et al. 2020). The web-first mixed-mode design used during the pandemic consisted of a 5-week web-only fieldwork period throughout which the panel members received a combination of invites and reminders via post and email. After 5 weeks of fieldwork, CATI interviewers started contacting the remaining nonrespondents, although the web questionnaire remained open. Panel members received a conditional or unconditional incentive based on their previous participation and were offered a £10 early bird bonus upon completing the web survey during the first 5 weeks of the fieldwork (see Carpenter 2021). The individual wave 11 cross-sectional response rate, the percentage of adults eligible for an interview responding at wave 11 (AAPOR RR6), was 72.0 percent, while the cumulative wave 11 individual response rate, which accounts for the probability of being recruited at the initial wave and responding at wave 11, was 15.2 percent (see appendix A in the supplementary data online). The data from Understanding Society wave 11 used in this analysis are publicly available (University of Essex 2023).
4.2 Text Messages and Experimental Design
The sample of Understanding Society is issued on a monthly basis, with each monthly sample being a random subset of the annual survey sample. The text message experiment was conducted in six monthly samples of wave 11, covering April to September 2020. The analysis of the experiment presented in this article uses only the data from the CAPI-only group, the random subsample of households that transitioned from CAPI-only to the web-first sequential mixed-mode design.
In the experiment, households were randomly allocated to four equal-sized groups, as outlined in table 1. The control group received a combination of emails and letters, the usual contact strategy used for the web-first sample. The “invite” group received, in addition to the letters and emails, an SMS invite. The “reminders” group received two SMS reminders, while the “invite and reminders” group received three text messages: an invite and two reminders.
Table 1. Contact Strategy by Experimental Group During the Five-Week Web-Only Period

| Group | Mailing 1 (week 1, invite) | Mailing 2 (reminder) | Mailing 3 (reminder) | n |
| --- | --- | --- | --- | --- |
| Control | Letter + email | Letter + email | Letter + email | 284 |
| Invite | Letter + email + SMS | Letter + email | Letter + email | 261 |
| Reminders | Letter + email | Letter + email + SMS | Letter + email + SMS | 270 |
| Invite and reminders | Letter + email + SMS | Letter + email + SMS | Letter + email + SMS | 250 |

Note.—The n refers to the number of eligible adults in the CAPI-only group who had provided a mobile number in previous waves. The three mailings were spread across the five-week web-only period.
The text messages featured a brief personalized salutation and a survey link, as illustrated in figure 1. The field agency automatically dispatched these text messages to adult sample members with valid mobile numbers on record who had not yet responded to the survey. All the text messages were sent during the 5-week web-only fieldwork phase, and they were scheduled to reach the participants after the letters and emails. These preceding communications informed the panel members about the impending text messages. Panel members could opt out of receiving the text messages by replying, but no one did so in the CAPI-only group.
Figure 1. Content of the Text Messages Sent to the Panel Members in the Treatment Groups.
All households and, therefore, panel members were allocated to an experimental group at the design stage, although some of them had never been interviewed and others had refused to provide a mobile number. The panel members assigned to the experimental groups with no mobile number on record could not be part of the experiment and were omitted from the analysis, which was restricted to the compliant sample (n = 1,065). The noncompliant cases do not constitute a random subsample; they are less cooperative, older, and less likely to have a smartphone than those who shared a mobile number (see appendix C in the supplementary data online), and we cannot ascertain how these noncompliant cases would have reacted to a text message. Nevertheless, these noncompliant cases were randomly assigned to the experimental groups and should not interfere with the conclusions drawn from the analysis.
4.3 Analysis Plan
The analysis presented in this article covers the text message experiment in the CAPI-only group of Understanding Society. As mentioned above, this group transitioned from a CAPI-only design to a web-first and CATI sequential design, allowing us to explore the effect of text messages in the context of a shift to a mixed-mode design. However, the use of the (previously) CAPI-only group—which encompasses two in ten of the households included in the experiment—has some implications for the analysis, the most notable being the relatively low statistical power of the tests stemming from the sample size, which limits the ability to detect changes attributable to the text messages. To address this issue and enhance the statistical power of the analysis, we merged the three treatment groups—invite, reminders, and invite and reminders—which vary in the number and type of messages. For transparency, we also present the analysis using the original experimental groups in appendix D in the supplementary data online.
4.3.1 Response rates and moderators
Some of the research hypotheses (H2–H4) involve examining the effects of the text messages across sample subgroups defined by a set of moderators. A detailed description and descriptive statistics of these moderators are presented in table 2. To test the differences in response rates between the control and treatment groups, we used a linear contrast of the average predicted probabilities derived from a logistic regression model (Mize 2019). The models fitted, estimated parameters, and predicted response propensities can be found in appendix F in the supplementary data online.
Table 2. Description and Distribution of the Moderators

| Moderator | Description | Distribution (unweighted) |
| --- | --- | --- |
| Sex | The sex variable was derived from the household grid questionnaire, which is asked at the beginning of the annual interview. | |
| Age | Age in three groups was derived from the age information collected in the household grid. A robustness check was carried out to assess the cutpoint used to define the younger group (see appendix F in the supplementary data online). | |
| Smartphone | Whether the respondent has a smartphone has been asked of mobile phone users since wave 5: “Is your mobile a smartphone? (-2) Refusal, (-1) Don’t know, (1) Yes, (2) No.” The most recent valid response was imputed for those not responding at wave 11. | |
| Frequency of using the Internet | The frequency of using the Internet has been asked since wave 3: “How often do you use the internet for your personal use? (-2) Refusal, (-1) Don’t know, (1) Every day, (2) Several times a week, (3) Several times a month, (4) Once a month, (5) Less than once a month, (6) Never use, (7) No access at home, at work or elsewhere.” The variable was recoded into two categories, daily use and less often than daily, and the most recent valid response was imputed for those not responding at wave 11. | |
| Contact details (address and email) | The information on whether panel members had provided an email address and a postal address was supplied by Understanding Society. | |
| Previous response behaviour | This variable was derived from the outcome codes of the adult interviews to which the panel members had been invited up to wave 10. First, we calculated the ratio of adult interviews the panel member completed to the number of waves in which they were issued to the field. Then, we identified regular respondents as those who completed at least 2-in-3 interviews and irregular respondents as those who participated less than 66 percent of the time. A robustness check was carried out using different cutpoints (see appendix G in the supplementary data online). | |

Note.—The distribution shown in the table corresponds to the compliant sample, i.e., panel members eligible for an adult interview at wave 11 (monthly samples April to September 2020) with a mobile number on record.
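The analysis was run in Stata 17 (see section 4.3.3). As a minimal, illustrative sketch of the test just described, the design-based logistic regression and the linear contrast of average predicted probabilities could be specified as follows; all variable names (psu, stratum, weight_xw, responded_web, any_sms, agegroup) are hypothetical placeholders rather than the names used in the released Understanding Society files.

```stata
* Illustrative sketch only: variable names are hypothetical placeholders.
* Declare the complex sample design (PSUs, strata, analysis weight).
svyset psu [pweight = weight_xw], strata(stratum)

* Design-based logistic regression of web response on treatment (any SMS).
svy: logistic responded_web i.any_sms

* Average predicted response probabilities for the control and treatment
* groups, and the linear contrast between them (Mize 2019).
margins i.any_sms
margins, dydx(any_sms)

* Moderator analysis (e.g., H3d): treatment-by-age interaction, with the
* treatment contrast estimated separately within each age group.
svy: logistic responded_web i.any_sms##i.agegroup
margins agegroup, dydx(any_sms)
```

The same pattern applies to the other moderators (H2–H4), replacing agegroup with the relevant grouping variable.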
4.3.2 Device used to complete the survey
To assess whether sending the text message invite and/or reminders influenced the device used to complete the web survey (H5), we relied on the variable that indicates the device used to complete the individual interview, which is derived from the paradata and has six categories: (i) PC/laptop/netbook, (ii) large tablet, (iii) medium tablet, (iv) small tablet, (v) smartphone with touchscreen, and (vi) other. For the analysis, we merged the categories of small tablet and smartphone with touchscreen because, based on the paradata, it became apparent that the majority of small tablets were, in fact, smartphones. We also combined the large and medium tablet categories. We employed a chi-squared test to assess whether there were differences in the distribution of device usage between the groups. A sketch of this step is shown below.
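The recode and the design-based test could look like the following Stata sketch; device6 and device3 are hypothetical variable names, and the numeric codes are assumed to follow the order of the six categories listed above.

```stata
* Illustrative sketch: device6 and device3 are hypothetical variable names,
* with codes 1-6 assumed to follow the category order listed in the text.
recode device6 (4 5 = 1 "Smartphone or small tablet") ///
               (2 3 = 2 "Tablet")                     ///
               (1   = 3 "PC or laptop")               ///
               (6   = .), gen(device3)

* Design-based test of independence between device and treatment group;
* svy: tabulate reports the chi-squared test with the second-order
* Rao-Scott correction used in table 4.
svy: tabulate device3 any_sms, column percent
```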
4.3.3 Full household response rate
To test H6, we computed the full household web response rate, i.e., the proportion of households in which all adults had completed the individual interview by the end of the web-only phase, and compared it between the control and treatment groups (see the sketch below). All analyses accounted for the complex sample design and were weighted to account for the unequal selection probabilities, nonresponse, and selection into the six monthly samples of wave 11 in which the experiment was conducted. The significance level for all tests was set to 5 percent. The analysis was carried out using Stata 17 (StataCorp 2021). A full PRICSSA checklist (Seidenberg et al. 2023) can be found in appendix B in the supplementary data online.
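A minimal sketch of how the household-level indicator and its design-based test might be computed follows; the household and person identifiers (hidp, pidp), the web response flag (responded_web), and the household-level treatment indicator (hh_any_sms) are assumed names for illustration, and the weighting is simplified relative to the weights used in the actual analysis.

```stata
* Illustrative sketch: variable names are assumptions, and the weighting
* is simplified relative to the weights used in the actual analysis.
* Flag households in which every eligible adult completed the web interview
* by the end of the five-week web-only period.
bysort hidp: egen n_adults    = count(pidp)
bysort hidp: egen n_completed = total(responded_web)
gen full_hh_web = (n_completed == n_adults)

* Keep one record per household and test the control vs. any-SMS difference
* with the second-order Rao-Scott corrected chi-squared test (table 5).
bysort hidp: keep if _n == 1
svy: tabulate full_hh_web hh_any_sms, column percent
```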
5. RESULTS
This section presents the results of the analysis. First, we focus on the impact of text messages on individual response rates. Second, we present the distribution of devices used to complete the web questionnaire, and finally, we examine the household response rates for the experimental groups.
5.1 Individual and Final Response Rates (H1–H4)
Table 3 presents the individual web and final response rates for the compliant sample—panel members who had provided a mobile number before wave 11—as well as for various subgroups defined by the moderators. The first hypothesis (H1) anticipated a positive effect of text messages on the web and final response rates. The analysis shows no evidence of such an effect on the outcomes examined. After the web-only phase of the fieldwork, the web response rate for the group receiving the SMS was 1.7 percentage points (p.p.) higher than the control condition (51.5 versus 49.8 percent, p = .753). At the end of the fieldwork, the difference had vanished (60.5 versus 60.2 percent, p = .954), and the same was true for the final response rate, which includes the CATI interviewing (76.9 versus 76.0 percent, p = .827).
Table 3. Individual Web Completion Rate and Response Rates at Different Stages of the Fieldwork for the Full Treated Sample and by Moderators

| | Web RR, web-only period: Control | Web RR, web-only period: Any SMS | p-value | Web RR, end of fieldwork: Control | Web RR, end of fieldwork: Any SMS | p-value | Final RR (web + CATI): Control | Final RR (web + CATI): Any SMS | p-value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Compliant sample | 49.8 (4.4) | 51.5 (2.6) | .753 | 60.2 (4.3) | 60.5 (2.6) | .954 | 76.0 (3.7) | 76.9 (2.1) | .827 |
| Sex | | | | | | | | | |
| Male (n = 475) | 45.9 (5.6) | 49.7 (3.2) | .574 | 56.1 (5.6) | 56.9 (3.3) | .905 | 72.1 (5.3) | 69.9 (3.1) | .724 |
| Female (n = 490) | 52.8 (5.0) | 53.1 (3.2) | .967 | 63.5 (4.6) | 63.8 (3.2) | .955 | 79.0 (3.9) | 83.2 (2.3) | .359 |
| Age | | | | | | | | | |
| 16–34 (n = 298) | 25.6 (6.0) | 42.2 (4.5) | .040 | 45.8 (7.7) | 53.9 (4.7) | .385 | 63.5 (7.7) | 60.8 (4.5) | .769 |
| 35–59 (n = 505) | 60.8 (6.1) | 53.4 (3.6) | .335 | 70.2 (5.8) | 63.5 (3.5) | .345 | 79.4 (4.8) | 78.7 (3.1) | .904 |
| 60+ (n = 262) | 54.6 (7.0) | 55.8 (4.5) | .898 | 57.9 (7.0) | 60.8 (4.5) | .739 | 81.9 (5.5) | 87.2 (3.0) | .403 |
| Smartphone | | | | | | | | | |
| No smartphone (n = 72) | 34.4 (14.4) | 38.0 (7.4) | .833 | 44.4 (14.4) | 39.9 (7.9) | .797 | 67.8 (13.2) | 77.9 (6.5) | .500 |
| Has a smartphone (n = 971) | 51.5 (4.6) | 52.9 (2.7) | .812 | 61.9 (4.4) | 62.8 (2.7) | .865 | 77.5 (3.8) | 77.0 (2.3) | .899 |
| Frequency of Internet use | | | | | | | | | |
| Less often than daily (n = 152) | 26.9 (8.8) | 34.6 (5.8) | .475 | 32.5 (8.9) | 37.0 (5.8) | .677 | 68.6 (8.7) | 77.6 (4.8) | .381 |
| Internet daily (n = 912) | 53.6 (4.8) | 54.6 (2.8) | .863 | 64.9 (4.6) | 64.9 (2.7) | .999 | 77.2 (4.0) | 77.0 (2.3) | .956 |
| Contact details (address and email) | | | | | | | | | |
| Full contact details (n = 917) | 55.7 (4.8) | 57.3 (2.8) | .787 | 66.6 (4.2) | 66.1 (2.7) | .919 | 80.1 (3.5) | 78.4 (2.2) | .690 |
| Email or postal address missing (n = 148) | 2.0 (2.1) | 15.0 (4.0) | .006 | 8.4 (6.6) | 25.4 (6.6) | .079 | 43.0 (9.7) | 67.5 (6.0) | .037 |
| Previous response behaviour | | | | | | | | | |
| Irregular respondent (n = 190) | 1.7 (1.8) | 19.0 (5.1) | .003 | 1.8 (1.8) | 21.9 (5.4) | .001 | 14.8 (6.4) | 26.6 (5.6) | .189 |
| Regular respondent (n = 875) | 56.9 (4.3) | 55.9 (2.7) | .850 | 68.9 (3.9) | 65.8 (2.7) | .532 | 85.1 (2.9) | 83.8 (2.0) | .717 |

Note.—Entries are response rates in percent, with standard errors in parentheses; RR = response rate. The base for the analysis is the compliant sample, i.e., panel members eligible for an adult interview who had provided a mobile number before wave 11 fieldwork (control group n = 284; any-SMS group n = 781). Differences between the control and treatment groups were tested by comparing the average predicted probabilities from logistic regression models; the models underpinning these comparisons can be found in appendix E in the supplementary data online.
The second hypothesis (H2) suggested that text messages could boost response rates among the panel members with a missing email or postal address on record. The results show that text messages, as an alternative contact mode, increased the web and overall response rates for this group. At the end of the web-only fieldwork, the web response rate was 13.0 p.p. higher for the SMS group (15.0 versus 2.0 percent, p = .006), and this difference was slightly larger at the end of the fieldwork, at 17.0 p.p. (25.4 versus 8.4 percent, p = .079). The final response rate incorporating the CATI interviewing was 43.0 percent for the control group and 67.5 percent for the group receiving an SMS invite and/or reminders (p = .037). Conversely, no difference in response rates was observed for the sample members with complete contact details.
The analysis reveals that having or not having a smartphone and the frequency of Internet use do not moderate the impact of text messages on the web or overall response rates (H3a and H3b). Likewise, the text message invite and/or reminders did not alter the response rates for males or females (H3c). However, we found a positive effect on the web response rate of younger adult panel members during the web-only phase of the fieldwork, while for those aged 35 and older, the web response rates of the control and SMS groups were similar (H3d). At the conclusion of the web-only fieldwork phase, panel members aged 16–34 receiving the SMS had a web completion rate 16.6 p.p. higher than the control group (42.2 versus 25.6 percent, p = .040). The effect on the web response rate faded at the end of the fieldwork (53.9 versus 45.8 percent, p = .385), and the final response rate was slightly lower for the SMS group than for the control group (60.8 versus 63.5 percent, p = .769), although these differences were not significant.
The fourth hypothesis (H4) examined whether irregular respondents, who are less likely to participate, would be more inclined to respond after receiving the text messages. The irregular respondents, defined as having responded to fewer than 2-in-3 invitations to participate in the study, increased their web response rate after receiving the text message invite and/or reminders during the web-only period (19.0 versus 1.7 percent, p = .003). This difference in the web response rate was slightly larger at the end of the fieldwork (21.9 versus 1.8 percent, p = .001). The CATI interviewing diminished the difference between the control and treatment groups in the final response rate to 11.8 p.p. (26.6 versus 14.8 percent, p = .189), which was no longer significant but suggests that the positive effect of the SMS may carry through to the final response rate.
5.2 Web Completion and Device Used (H5)
The fifth hypothesis (H5) examined whether the text messages with a personalized link encouraged panel members to complete the interview on a smartphone. Table 4 presents the distribution of devices used to complete the web individual questionnaire at the conclusion of the web-only phase and at the end of the fieldwork. Although the proportion of smartphone completion is higher for the group receiving the text messages—6.3 p.p. after the web-only period and 7.6 p.p. at the end of the fieldwork—we cannot conclude statistically that there was an increase due to the text messages. Nonetheless, it is worth noting that the higher proportion of smartphone completion in the SMS group is associated with a reduction in the use of tablets and did not affect the proportion of respondents completing the survey on a PC or laptop.
Table 4. Device Used to Complete the Web Individual Questionnaire by Experimental Group

| Device used | Web-only period: Control (n = 125) | Web-only period: Any SMS (n = 385) | Web-only period: Chi-squared test | End of fieldwork: Control (n = 154) | End of fieldwork: Any SMS (n = 440) | End of fieldwork: Chi-squared test |
|---|---|---|---|---|---|---|
| Smartphone | 38.1 (5.6) | 44.4 (3.3) | χ²(2) = 16.84; F(2.0, 709.1) = 1.65; p = .192 | 37.7 (5.2) | 45.3 (3.1) | χ²(2) = 17.33; F(2.0, 735.7) = 1.90; p = .150 |
| Tablet | 15.4 (4.0) | 8.4 (2.0) | | 14.9 (3.5) | 8.4 (1.8) | |
| PC or laptop | 46.4 (5.5) | 47.2 (3.4) | | 47.4 (5.2) | 46.4 (3.2) | |

Note.—Analyses are based on individuals completing the survey online during the 5-week web-only period (left) and all respondents completing online (right). Cell entries are column percentages; standard errors are in parentheses. Differences in the distributions were tested using a chi-squared test with the second-order Rao and Scott correction, reported as a design-based F statistic.
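For readers unfamiliar with the correction reported in the table note, the sketch below illustrates, with purely hypothetical counts and an assumed average design effect, how a naive Pearson chi-squared statistic for a device-by-group table can be deflated for a complex design. This mirrors only the logic of the first-order Rao-Scott adjustment; the article applies the second-order correction, which also adjusts the degrees of freedom and is reported as an F statistic, as implemented in standard survey software.

```python
# Minimal sketch (hypothetical counts, not the study data): a naive
# chi-squared test of the device-by-group distribution, followed by a
# rough first-order Rao-Scott style deflation by an assumed average
# design effect. The article uses the second-order correction instead.
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Rows: smartphone, tablet, PC/laptop; columns: control, any SMS.
observed = np.array([
    [50, 170],
    [20, 30],
    [55, 185],
])

chi2_naive, p_naive, dof, _ = chi2_contingency(observed)
print(f"Naive Pearson chi2({dof}) = {chi2_naive:.2f}, p = {p_naive:.3f}")

# Hypothetical average design effect for the cell proportions.
mean_deff = 2.0
chi2_adj = chi2_naive / mean_deff        # first-order Rao-Scott style deflation
p_adj = chi2.sf(chi2_adj, dof)           # upper-tail probability of chi-squared
print(f"Deff-adjusted chi2({dof}) = {chi2_adj:.2f}, p = {p_adj:.3f}")
```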
Table 5. Full Household Web Response Rate by Experimental Group

| | Control (n = 165) | Any SMS (n = 492) | Chi-squared test |
|---|---|---|---|
| Full household response rate | 42.1 (4.5) | 39.1 (2.7) | χ²(1) = 0.58; F(1.0, 541.0) = 0.32; p = .573 |

Note.—Analysis is based on households in which at least one adult had given a mobile number. Cell entries are percentages of households; standard errors are in parentheses. The difference in the distribution was tested with a chi-squared test with the second-order Rao and Scott correction, reported as a design-based F statistic.
5.3 Full Household Web Response Rate and Fieldwork Efforts (H6)
The final hypothesis (H6) examined the proportion of households in which all adults had completed the individual interview (the full household web response rate) before the start of the CATI fieldwork. This analysis was conducted for households in which at least one adult had provided a valid mobile number before wave 11. The results in table 5 show that the text messages had no effect on the full household web response rate during the web-only fieldwork (42.1 versus 39.1 percent, p = .573).
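As a concrete illustration of how this household-level rate can be derived from individual-level records, the sketch below groups hypothetical respondent data by household and counts a household as a full web response only if every eligible adult in it completed the web questionnaire during the web-only phase. The column names (`hh_id`, `completed_web`) and the toy data are assumptions for illustration, not the study's actual variables.

```python
# Minimal sketch (hypothetical data and column names): computing the
# full household web response rate, i.e. the share of households in
# which every eligible adult completed the web questionnaire.
import pandas as pd

# One row per eligible adult; completed_web flags a web completion
# during the web-only fieldwork phase.
adults = pd.DataFrame({
    "hh_id":         [1, 1, 2, 3, 3, 3, 4],
    "completed_web": [1, 1, 0, 1, 1, 0, 1],
})

# A household counts as a full web response only if all its adults completed.
full_web = adults.groupby("hh_id")["completed_web"].all()

rate = 100 * full_web.mean()
print(f"Full household web response rate: {rate:.1f}%")  # 50.0% in this toy example
```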
6. DISCUSSION
This article has presented the findings of a survey experiment that explored the benefits of incorporating text messages into a contact strategy of letters and emails in the wave in which a sample shifted from a CAPI-only to a web-first and CATI sequential design. The primary objective of this research was to investigate the impact of text messages on response rates within the specific context of the mode transition.
The analysis did not reveal a significant overall impact of the text messages on the web response rate or on the final response rate, which encompasses both web and CATI interviews. The web response rate was only 1.7 p.p. higher for the group receiving the SMS after the web-only fieldwork, and this small advantage vanished afterwards. This outcome suggests that including text messages alongside letters and emails did not significantly enhance response propensities in the specific context of this study. However, the analysis also shows that specific subgroups benefited from the additional SMS invite and/or reminders.
Panel members who had not provided an email or had changed their address since the last wave benefited from the text message invite and/or reminders: they exhibited higher web and final response rates after receiving the text messages. This result aligns with the literature suggesting that a supplementary contact mode can effectively reach sample members who might otherwise remain uncontacted (Dillman et al. 2014). In the context of a mode transition to a web-first design, text messages can help increase the chance of contacting sample members and of directing them to the web mode, the two main objectives of the mode shift in a longitudinal survey.
We did not find an increase in the web or final response rates for smartphone users, daily internet users, or females, groups that have exhibited a greater predisposition to complete web surveys on smartphones (e.g., Toepoel and Lugtig 2014; Revilla et al. 2016; Maslovskaya et al. 2019). The absence of an effect of the text messages on these subgroups suggests that a greater predisposition to complete the survey on a smartphone does not necessarily translate into a greater response propensity, even when smartphone access to and completion of the survey are made easier. This interpretation is supported by the modest increase in smartphone completion we observed after sending the text messages with a link to the survey. We did find a positive effect of the text messages on the web response rate among young adults (16–34) during the web-only phase, although the difference eroded after the CATI interviewing.
The text messages reinforced the "take part" message conveyed by the other contact modes. This reinforcement proved particularly effective in raising the web response rate among panel members with an irregular response pattern, who had been more reluctant to participate in the past. This group is particularly relevant in longitudinal studies, where such members cannot be replaced by new sample members (Bianchi et al. 2017).
We also expected that including text messages with personalized links in the contact strategy might encourage panel members to complete the survey on their smartphones. While we did observe a slightly higher proportion of smartphone completion in the group receiving the SMS, at the expense of tablet use, these differences were not significant. This finding departs slightly from previous studies that observed a relationship between text message invites and/or reminders and a higher rate of smartphone completion (Crawford et al. 2013; De Bruijne and Wijnant 2014; Mavletova and Couper 2014; Barry et al. 2021). The discrepancy may be explained by the longitudinal nature of Understanding Society: panel members are familiar with the questionnaire and can draw on that familiarity when choosing a device to complete the survey.
In the context of a web-first and CATI design, the SMS could lead to cost savings by increasing the number of web respondents and thereby reducing the fieldwork effort at the CATI stage. However, in a household survey like Understanding Society, where we seek to interview all adults residing in the household, a substantial reduction in fieldwork effort can only be achieved if all adults in the household complete the survey during the web-only phase. In the experiment, the full household web response rate was slightly higher for the control group after the web-only fieldwork, which indicates that the extra SMS might not translate into significant cost savings.
This study has certain limitations that should be acknowledged. The most significant, as discussed in section 4, is the relatively small sample size and its impact on the statistical power of the analysis, which also undermined our ability to determine the optimal number and type (invite or reminder) of SMS. Additionally, the experiment was carried out during the COVID-19 pandemic, which could have affected how panel members reacted to the SMS.
The findings presented in this article support the use of text messages as an additional contact mode to bolster response rates, particularly among specific subgroups. This low-cost intervention can lead to a significant increase in response rates among individuals for whom other contact details are unavailable or who may be more hesitant to respond. Text messages could also be an effective intervention to boost web response rates among young adults. The above-average effect of the SMS on specific subgroups suggests that this measure could be even more cost-effective in the context of a targeted design. In Understanding Society, these positive findings led to the introduction of SMS invites and reminders for the entire sample in the following wave.
Supplementary Materials
Supplementary materials are available online at academic.oup.com/jssam.
Understanding Society is an initiative funded by the Economic and Social Research Council and various Government Departments, with scientific leadership by the Institute for Social and Economic Research, University of Essex, and survey delivery by NatCen Social Research and Verian (formerly Kantar Public). The research data are distributed by the UK Data Service. We are grateful for the feedback and suggestions received from the editors and reviewers. The study design and analysis were not preregistered.