Abstract

Using multiple modes of contact has been found to increase survey participation over a single contact mode. Text messaging has emerged as a new mode to contact survey participants in mixed-mode survey designs, especially for surveys that include web and/or phone data collection. However, it is unclear how to best combine text messages with mailings and other outreach contacts to improve response rates and data quality. To explore the effectiveness of using text messaging as a contact mode, we conducted a full factorial experiment that varies the sequencing of text messages with mailing contacts (early versus late reminder) and the time text messages were sent (morning versus afternoon). The experiment was implemented in a follow-up wave of a mixed-mode nationally representative longitudinal survey with two sample groups (Cooperative versus Other Respondents). For Cooperative Respondents, text reminders seemed to be effective at increasing completion rates, with the early text reminder being somewhat more effective than the late text reminder, at least early in the field period. For Other Respondents, text invitations were effective at improving the completion rate, but effects diminished quickly once the invitation letter was sent. Additionally, the early text reminder appears to be more effective than the late text reminder at increasing completion rates for Other Respondents. The sequencing of text messages did not affect data quality across sample groups or substantially impact nonresponse. The time of day the text messages were sent did not affect any of the outcome measures examined.

Statement of Significance

This article focuses on text messaging as an emerging contact mode in mixed-mode surveys. Using texting to encourage response is an important survey innovation as smartphone ownership is near ubiquitous in the United States and because texting provides a direct way to initiate a web or phone survey. As researchers look at new technologies to help curtail rising survey costs while maintaining high-quality, representative samples, texting will likely be a key contact mode. This paper presents the results from an experiment varying the sequencing of texting with mailing contacts and the time texts were sent, conducted as part of a follow-up wave of a mixed-mode nationally representative longitudinal survey. This work furthers research on how best to sequence text messaging with other contact modes in mixed-mode designs and whether text message timing impacts response.

1. INTRODUCTION

Using multiple modes of contact is especially important in encouraging response to surveys and increasing response rates over a single contact mode (Dillman et al. 2014). As researchers look to new technologies to help curtail rising survey costs while maintaining high-quality, representative samples, texting will likely be a key contact mode in web and mixed-mode survey designs. Given that smartphone ownership has expanded greatly (Perrin 2021), and the use of texting is widespread, text messaging has become more feasible as a survey contact mode.

Text outreach can be an important survey innovation to reach people who may not respond via other contact modes and reduce the burden in responding to surveys. Texting a link to a web survey or a phone number to call can reduce respondent burden by providing a direct way to initiate a web or phone survey from the smartphone where the text is received, thus making it easier to respond. For many surveys, texting would likely not be used alone but in combination with other modes of outreach, such as mail, email, and outbound phone (i.e., interviewers making outgoing calls to sample members). However, there is little research on how best to sequence text messages with other modes of contact to maximize survey response and minimize nonresponse, while maintaining overall data quality.

Our research focuses on text messaging as an emerging contact mode in mixed-mode surveys and aims to address these gaps in the literature. This paper presents the design and results of a full factorial experiment that varies the sequencing of text messages with mailing contacts and the time of the day text messages were sent. The experiment was conducted in the second wave of a mixed-mode nationally representative longitudinal survey, the 2022 National Survey of Fishing, Hunting, and Wildlife-Associated Recreation (FHWAR). Data collection was conducted via web and inbound phone (i.e., interviewers receiving incoming calls from sample members), offering an ideal opportunity to research how best to sequence text messaging with other mail contacts in mixed-mode recruitment designs and whether the time a text message is sent has an impact on response.

Our research focuses on understanding whether sending a text reminder early versus later in the field period, or sending text messages in the morning versus the afternoon, improves completion rates, response speed, and nonresponse, as well as how a text reminder compares to a text invitation in these areas. The experimental design and analysis were stratified into two sample groups: Cooperative Respondents (those who completed the first survey wave after receiving a text message) and Other Respondents (all other respondents who completed the first survey wave, plus screener respondents who did not respond to the first wave). The screener was a short survey sent to all address-based sample (ABS) members that contained questions about household members’ participation in fishing, hunting, and wildlife-watching activities, basic demographic characteristics, and contact information. Since all respondents participated in the screener, the design allowed for a robust nonresponse analysis between the screener and second-wave respondents. Our research also examines whether sending text messages has any negative impact on web response time and the item nonresponse rate, two recognized indicators of data quality (Groves 1989; de Leeuw et al. 2003; Andreadis 2021).

Our research questions include:

  1. What is the association between sequencing of text messages and time of day texts were sent and the overall completion rate or the mode or device of response?

  2. What is the association between sequencing of text messages and time of day texts were sent and the number of days respondents took to complete the survey?

  3. What is the association between sequencing of text messages and time of day texts were sent and data quality, as measured by web survey response time and item nonresponse rates?

  4. What is the association between sequencing of text messages and time of day texts were sent and sample composition and nonresponse?

2. BACKGROUND

2.1 Survey Response Theory and Mode Preference

Various theoretical perspectives have been developed to illuminate an individual’s decision making about whether to respond to a survey. These provide theoretical models for explaining participants’ internal psychology and cognitive processes in a way that can be leveraged by survey researchers to inform recruitment and contact strategies, including the use of text messaging. Our research is grounded in leverage-saliency and social exchange theory, as well as what is known presently from the survey research literature (Cook et al. 2000; Dillman et al. 2014; Dillman 2017; Daikeler et al. 2022; Cabrera-Álvarez and Lynn 2022; and others described below).

Leverage-saliency theory proposes that people place different levels of importance on various survey features and that survey designers can emphasize the survey features that might animate a response and deemphasize features that might discourage a response (Groves et al. 2000). From this perspective, text messaging may be a key survey feature that improves access for some sample members over traditional contact modes. Social exchange suggests that respondents are cognizant of the potential costs (e.g., expending time, mental effort) and benefits (e.g., answering interesting questions, helping others, receiving an incentive) of survey participation (Dillman et al. 2014). This means that survey researchers can promote cooperation by establishing trust with participants and communicating that the potential benefits of survey participation outweigh the costs (Dillman et al. 2014). From this perspective, texting may have a positive effect on participation by reducing burden and making it easier for an individual to respond, especially when the text invites the recipient to a web or phone survey that can be completed on the same device. However, texting could also have a negative effect if text messages are not viewed as legitimate or are considered a nuisance because of over-contacting.

Research on mixed-mode designs needs to consider how mode and device preferences may impact people’s willingness to respond. Survey researchers have identified multiple factors that can influence people’s response mode preferences (Olson et al. 2012; Smyth et al. 2014). We group and expand these factors into three broad areas: (i) media or device factors, including access, familiarity and ease of use, and perceived privacy and confidentiality related to the specific mode of contact or data collection; (ii) respondent factors, including time demands, external distractions, and concerns about presentation of self; and (iii) survey-specific factors, including perceived legitimacy, ease of visual and aural processing, and cognitive burden. People’s evaluation as to the importance of each factor will vary, influencing to different degrees whether they decide to respond to the survey request. The media and device factors may be especially relevant when exploring text outreach in surveys.

2.2 Literature on Mixed-Mode Contact and Recruitment

Utilizing multiple modes of contact, such as mail, email, text, telephone, and in-person, can be critical to getting potential participants to respond to a survey. Each mode has strengths and limitations, such as accessibility, cost, suitability for population(s) of interest, and technical and legal considerations. In general, the aim of pairing two or more contact modes is to reduce survey costs, reduce recruitment or collection time, and/or minimize total survey error (Dillman et al. 2014). Overall, recruitment strategies should be carefully crafted so contacts work together in a holistic manner, where each contact is connected with similar branding and visual design but has its own distinct approach, messaging, and plea, so that each appeals to different aspects that may encourage response.

Existing survey research literature has shown that, in general, the number of contacts is positively associated with response rate (Cook et al. 2000; Daikeler et al. 2022). The more invitations or contacts sent, the greater the response rate, with suggested maximum numbers that vary by mode (Cook et al. 2000; Lundquist and Särndal 2013; Moore et al. 2016). Multiple contact modes can be used to improve data quality by improving response rates and decreasing coverage error without increasing other sources of error (Dillman et al. 2014). Mixed-mode contact approaches have increased response rates and representativeness in general population studies (de Leeuw 2005; Beebe et al. 2005, 2012). Similar findings have been observed in certain special populations, such as in surveys of clinicians (Beebe et al. 2007, 2018; Dykema et al. 2013) and students (Millar and Dillman 2011; Patrick et al. 2022). However, a limitation of this literature is that the effects of multiple contact modes are often considered in conjunction with the effects of multiple data collection modes (e.g., Beebe et al. 2012). For example, Beebe et al. (2012) found that, while multiple contact and data collection modes were associated with increased response rates in a general population health survey, they did not necessarily reduce nonresponse bias. This suggests that survey researchers should continue to investigate the relationship between mixed-mode contact approaches and nonresponse, among other dimensions of data quality.

2.3 Literature on Text Messages as a Contact Mode

Texting is an emerging technology for survey contact. For the past several years, some survey researchers have gathered cell phone numbers and obtained consent to text survey participants, meaning the participant agrees to be texted for the purposes of the study. This approach has been used in panel and longitudinal study designs, where texting can support follow-up attempts to contact participants for future surveys. More recently, survey researchers have also used unconsented texting, where cell phone numbers are matched to addresses in address-based studies or used directly in listed samples or random digit dial studies, but the participant has not agreed to be texted for the purposes of the study. Depending on the approach, various types of software can facilitate sending texts, or more manual approaches can be used. Both approaches incur costs, such as staff time to write and send individual text messages or service fees for batch text messaging platforms.

The optimal time to introduce text messaging into a contact strategy is understudied, but existing research suggests that text messaging may be more impactful at the reminder stage, rather than at the prenotification or invitation stage (Bošnjak et al. 2008; DuBray 2013; De Bruijne and Wijnant 2014; Cabrera-Álvarez and Lynn 2022). One possible explanation, based on social exchange theory, is that text messages may not convey the same degree of trust or legitimacy as other contact modes, such as mail. Thus, text messaging may be most effective after some rapport has been established through other modes. Cabrera-Álvarez and Lynn (2022) compared the impact of adding text messages to various parts of the contact strategy for a mixed-mode survey of panelists in the United Kingdom. The authors found that a consented text message reminder led to the greatest increase in response rate, even more than when text messages were used for both the invitation and reminder. Two European studies, one with German students and the other with a panel in the Netherlands, found that consented text invitations did not improve response over email invitations (Bošnjak et al. 2008; De Bruijne and Wijnant 2014).

DuBray (2013) experimented with unconsented text prenotifications for the Centers for Disease Control and Prevention’s Behavioral Risk Factor Surveillance System telephone survey, finding no significant impact. In Australia, Dal Grande et al. (2016) also experimented with unconsented text prenotifications and found opposite results; respondents who received an unconsented text prenotification had a significantly higher response rate than those who did not.

Extant literature on text messaging is limited in certain respects. For example, existing studies tend to rely on non-experimental designs (Bošnjak et al. 2008; De Bruijne and Wijnant 2014; Cabrera-Álvarez and Lynn 2022), and many involve non-US samples (Bošnjak et al. 2008; De Bruijne and Wijnant 2014; Cabrera-Álvarez and Lynn 2022). We also note that few existing studies had the necessary sample sizes to look at effects of text messaging within subgroups, such as historically hard-to-survey groups. The collective literature finds promising associations between text contacts and survey completion, yet little in terms of conclusive evidence as to optimal text timing or frequency. Informed by leverage-saliency and social exchange theories of survey response and by research on factors that shape mode preferences, our study contributes new experimental evidence on whether text reminders sent early versus later in the field period are more effective, how they compare to a text invitation, and whether the time of day text messages are sent affects survey participation, response speed, and data quality. Our work furthers the research on how best to include text messaging with other contact modes in mixed-mode recruitment designs.

3. METHODS

3.1 Data

The experiment uses data collected by NORC at the University of Chicago for the 2022 National Survey of FHWAR. The FHWAR, conducted about every five years since 1955, is a nationally representative longitudinal survey sponsored by the US Fish and Wildlife Service that measures fishing, hunting, and wildlife-watching participation and expenditure by people over 16 years old living in the United States. The 2022 FHWAR consisted of a screener wave and three subsequent data collection waves. For each wave, sample members received a mailed letter with a $1 incentive directing them to complete the questionnaire online or over the phone with a live interviewer by calling a toll-free number. Nonrespondents received two reminder postcards and finally a self-administered paper questionnaire with a letter explaining the survey. In waves 1, 2, and 3, the research team experimented with adding text message and email outreach to this contact strategy.

The screener was administered from January 15 to April 15, 2022. The screener sample included an ABS sample and a sample drawn from AmeriSpeak, a nationally representative probability-based panel. The ABS sample was developed from the November 2021 US Postal Service Delivery Sequence File (DSF), including only city-style residential addresses and PO Box addresses flagged as Only Way to Get Mail (OWGM). Drop delivery points and vacant households were removed. The ABS design oversampled counties with high hunting participation, identified from hunting license lists, and the sample was stratified by state. The contact strategy for the screener varied by sample source: AmeriSpeak panel members were contacted via web, text, and phone, while ABS sample members received up to four contact attempts as described previously, in addition to the text messages outlined in section 3.2.

Table 1 shows the number of completes and response rates for the screener and the first two waves of the survey. In total, 42,340 households completed the screener. By mode, 32,928 (78 percent) households completed by web, 2,341 (6 percent) by phone, and 7,071 (17 percent) by paper survey. By sample source, 30,854 ABS sample members completed the screener (an American Association for Public Opinion Research (AAPOR) RR3 response rate of 11 percent), and 11,486 panel members completed the screener (an AAPOR RR3 response rate of 9 percent). The screener obtained household rostering and demographic characteristics, potential participation in fishing, hunting, and wildlife-watching activities, contact information, and a request to consent to receive text messages. Question wording for key screener items used in the analysis is provided in appendix A in the supplementary data online. Based on their screener data, households and household members were sampled into one of three groups for subsequent waves—fishing, hunting, or wildlife-watching (see table S1 in the supplementary data online for a comparison of characteristics across the three groups). Questionnaires and outreach materials were tailored to each group.

Table 1.

Response Rates and Completes by Wave

                            Screener                  Wave 1                    Wave 2
                            ABS sample  Panel sample  ABS sample  Panel sample  ABS sample  Panel sample
Response rate (AAPOR RR3)   11%         9%            4%          8%            4%          7%

                            Combined completes        Combined completes        Combined completes
Web                         32,928 (78%)              14,227 (86%)              15,279 (90%)
Phone                       2,341 (6%)                620 (4%)                  728 (4%)
Paper                       7,071 (17%)               1,762 (11%)               961 (6%)
Total                       42,340                    16,609                    16,968

Wave 1 data collection was conducted between May 4 and August 29, 2022. The wave 1 sample consisted of all households identified during screening and used the contact strategy described earlier. Wave 1 questionnaires asked respondents about activities they had participated in since January 1, 2022. In total, 16,609 surveys were completed in wave 1: 14,227 (86 percent) households completed by web, 620 (4 percent) by phone, and 1,762 (11 percent) by paper survey. By sample source, 10,403 ABS sample members (an AAPOR RR3 response rate of 4 percent) and 6,206 panel sample members (an AAPOR RR3 response rate of 8 percent) completed the wave 1 survey.

Wave 2 data collection, which is the focus of this paper, was conducted September 7 to December 23, 2022. A supplemental sample of panel members was also included in wave 2 to improve the precision of the results. In addition to the four contact attempts used in previous waves, sample members who had consented to receive text messages also received text messages as part of the contact strategy. This design is described in the following section. Wave 2 panelists could complete the survey via web and phone mode. For ABS sample members, web and inbound phone were offered concurrently in the initial mailings, with web mentioned in the text messages, and paper offered to nonrespondents later in the field period when the self-administered questionnaire was sent. Wave 2 questionnaires asked respondents about activities they had participated in since completing the wave 1 survey. In total, 16,968 surveys were completed in wave 2. This included 15,279 (90 percent) households that completed by web, 728 (4 percent) by phone, and 961 (6 percent) by paper survey. By sample source, 10,357 ABS sample members (an AAPOR RR3 response rate of 4 percent) and 6,611 panel members (an AAPOR RR3 response rate of 7 percent) completed the wave 2 survey. See appendix C in the supplementary data online for wave 2 Preferred Reporting Items for Complex Sample Survey Analysis (PRICSSA).

3.2 Experimental Design

During wave 2 of the 2022 FHWAR, text messaging was integrated as an additional contact mode for ABS sample members who had consented to receiving text messages during the screener. The text messages let the respondent know the survey was ready and provided a direct link to the web survey to make it easier for people to complete it. See appendix D in the supplementary data online for the text messages used.

The experiment consisted of varying the sequencing of text messages—whether the text was sent as an invitation, early reminder, or late reminder—as well as the time of day the text messages were sent, either morning or afternoon. Morning text messages were sent at 10:00 am Central Time and afternoon text messages were sent at 5:00 pm Central Time. These times were chosen based on their appropriateness across major US time zones (i.e., not too early or late for respondents in Eastern or Pacific time zones) while still delineating two discrete periods of the day (i.e., morning and afternoon). The sample was divided into two groups based on their response to wave 1. The Cooperative Respondents group consisted of respondents who completed wave 1 after receiving a text message (n = 2,315). The Other Respondents group was much larger (n = 15,694) and consisted of the remaining wave 1 completes, as well as screener respondents who did not respond in wave 1. Cooperative Respondents were older, less likely to be Hispanic, more likely to be Black, more likely to have participated in fishing, and lived in smaller households compared to Other Respondents (see table S2 in the supplementary data online for comparison of characteristics across the two groups).
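The rationale for the two send times can be made concrete with a short sketch. The snippet below (our own illustration using Python's standard zoneinfo module; the function name is hypothetical) schedules a batch at 10:00 am or 5:00 pm Central Time and shows where that lands in the Eastern and Pacific zones:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

CENTRAL = ZoneInfo("America/Chicago")

def scheduled_send(year, month, day, slot):
    """Return the batch send time for a given date: 10:00 a.m. Central
    for the morning condition, 5:00 p.m. Central for the afternoon one."""
    hour = {"morning": 10, "afternoon": 17}[slot]
    return datetime(year, month, day, hour, tzinfo=CENTRAL)

# A morning send on the first text-reminder day (September 22, 2022)
# arrives at 11:00 a.m. Eastern and 8:00 a.m. Pacific, so it is neither
# too early nor too late in any of the major US time zones.
t = scheduled_send(2022, 9, 22, "morning")
eastern = t.astimezone(ZoneInfo("America/New_York"))
pacific = t.astimezone(ZoneInfo("America/Los_Angeles"))
```

The same check for the 5:00 pm condition gives 6:00 pm Eastern and 3:00 pm Pacific, keeping both conditions inside normal waking hours nationwide.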

Text messages were sent in batches using Twilio, a communications platform. Not all text messages were delivered successfully: 1,011 texts went undelivered because a phone carrier blocked them and Twilio suspended service in response to the high number of STOP requests received. Part of the reason for the high number of STOP requests may have been the length of time between when respondents gave consent and when they received the first text message. The blocking affected all experimental conditions equally and is an important operational consideration when sending text messages for survey contacts. Few differences were found between those with blocked texts and those who received texts: Black respondents were less likely to be blocked (χ²(3) = 7.87, p = .05), and those from single-person households were blocked at higher rates (χ²(3) = 12.69, p = .005). Sample members with undelivered text messages were excluded from the subsequent data analysis.

As shown in table 2, Cooperative Respondents were randomly assigned to one of the four conditions formed by crossing two factors—sequencing (early text reminder versus late text reminder) and time of text message (morning versus afternoon)—in a 2×2 factorial design. Other Respondents were randomly assigned to one of the six conditions formed by crossing two factors—sequencing (text invitation versus early text reminder versus late text reminder) and time of text message (morning versus afternoon)—in a 3×2 factorial design. Day of the week was not part of the experimental design because we wanted to text on specific days that best fit within the overall contact schedule.
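A factorial assignment of this kind can be sketched in a few lines. The helper below is our own illustration, not a description of NORC's actual randomization procedure: it shuffles the sample and deals members round-robin into the crossed cells, which yields the near-equal cell sizes visible in table 2.

```python
import random

def assign_factorial(n_members, sequencing_levels,
                     times=("morning", "afternoon"), seed=2022):
    """Randomly allocate members 0..n_members-1 across the cells of a
    sequencing x time-of-day factorial design with near-equal cell sizes."""
    cells = [(seq, t) for seq in sequencing_levels for t in times]
    order = list(range(n_members))
    random.Random(seed).shuffle(order)
    # Deal the shuffled members round-robin, so cell sizes differ by at most one.
    return {member: cells[i % len(cells)] for i, member in enumerate(order)}

# Cooperative Respondents: 2x2 design (early vs. late reminder x morning vs. afternoon)
coop = assign_factorial(2315, ("early reminder", "late reminder"))
# Other Respondents: 3x2 design, adding the text-invitation level
other = assign_factorial(15694, ("invitation", "early reminder", "late reminder"))
```

Shuffling before dealing, rather than drawing a random cell per member, guarantees balanced allocation across conditions.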

Table 2.

Sample Size Allocation to Treatments

                     Cooperative Respondents                         Other Respondents
                     Morning         Afternoon       Total           Morning          Afternoon        Total
Text invitation      n/a             n/a             n/a             2,613, 151 (6%)  2,688, 156 (6%)  5,301, 307 (6%)
Early text reminder  570, 35 (6%)    601, 39 (6%)    1,171, 74 (6%)  2,555, 144 (6%)  2,599, 134 (5%)  5,154, 278 (5%)
Late text reminder   571, 32 (6%)    573, 34 (6%)    1,144, 66 (6%)  2,630, 147 (6%)  2,609, 139 (5%)  5,239, 286 (5%)
Total                1,141, 67 (6%)  1,174, 73 (6%)  2,315, 140 (6%) 7,798, 442 (6%)  7,896, 429 (5%)  15,694, 871 (6%)

Note.—Sample members with blocked text messages were excluded from analyses (1,011 in total). Each cell shows the allocated sample size, followed by the number of excluded cases and the proportion excluded in parentheses.

Table 3 shows the contact attempts for each group. All sample members received the standard FHWAR invitation letter with a $1 incentive provided in the envelope. Nonrespondents received two postcard reminders and a self-administered questionnaire with a letter explaining the survey. The Cooperative Respondents group received an additional two text message contacts (a text invitation before the letter invitation and a text reminder) for a total of six contacts. The timing of the text reminders varied, with the early text reminder group receiving the reminder between the invitation letter and the first postcard and the late text reminder group receiving the reminder between the first and second postcards. The Other Respondents group received one text message contact in addition to the standard FHWAR contact strategy, for a total of five contacts. As with the Cooperative Respondents group, the timing of the text messages for the Other Respondents group varied. The text invite group received a text message before the letter invitation, the early text reminder group received a text between the letter invitation and first postcard, and the late text reminder received a text after the final mailing.

Table 3.

Contact Procedures by Treatment Group

                             Cooperative Respondents      Other Respondents
                             Early text   Late text       Text        Early text   Late text
                             reminder     reminder        invite      reminder     reminder
Text invitation (9/7–9)      Contact 1    Contact 1       Contact 1
Letter invitation (9/16–19)  Contact 2    Contact 2       Contact 2   Contact 1    Contact 1
Text reminder (9/22–23)      Contact 3                                Contact 2
Postcard 1 (9/30–10/3)       Contact 4    Contact 3       Contact 3   Contact 3    Contact 2
Text reminder (10/6)                      Contact 4
Postcard 2 (10/14–17)        Contact 5    Contact 5       Contact 4   Contact 4    Contact 3
Letter and SAQ (10/31–11/1)  Contact 6    Contact 6       Contact 5   Contact 5    Contact 4
Text reminder (11/17)                                                              Contact 5
Total contacts               6            6               5           5            5

Note.—Mailings were batched over multiple days.

3.3 Methodological and Statistical Approach

In this study, we explored the effects of text message sequencing (text invite versus early text reminder versus late text reminder) and the time of day text messages were sent (morning versus afternoon) on completion rates, response speed, data quality, and nonresponse.

For each research question, data were analyzed separately for Cooperative and Other Respondents. Completion rates were calculated as the total number of completed surveys divided by the total number of wave 2 sample members; we used completion rates because detailed disposition codes are not always available for text contacts, and guidelines for their use, when available, are still being developed. We compared the sequencing and time conditions on overall completion rates and on the proportions completing by web, paper, and phone. All reported completion rates were base-weighted to account for unequal probabilities of selection, as counties with high hunting participation were oversampled; the base weights were derived as the inverse of the probability of selection of the sampled household. All other analyses were performed unweighted. The weighted completion rates were compared via Rao–Scott chi-square goodness-of-fit tests using the first-order design corrections as estimated by the equations below (Rao and Scott 1979):

$$\hat{d}_c = \frac{n\,\widehat{\mathrm{Var}}(\hat{P}_c)}{\hat{P}_c\,(1-\hat{P}_c)}, \qquad X^2_{\mathrm{RS}} = \frac{X^2_{\mathrm{Pearson}}}{\hat{d}}$$

In these formulas, $\hat{P}_c$ is the proportion of completes for condition $c$, $\widehat{\mathrm{Var}}(\hat{P}_c)$ is the variance of that estimate, $n$ is the total sample allocated, $\hat{d}_c$ is the estimated design effect for condition $c$ (with $\hat{d}$ their mean), and $X^2_{\mathrm{RS}}$ is the design-corrected Pearson goodness-of-fit statistic.
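As an illustration, the first-order correction can be sketched as follows. The function names and input values here are hypothetical; only the design-effect ratio and the division of the Pearson statistic by it come from the approach described above:

```python
def design_effect(p_hat, var_hat, n):
    """Estimated design effect of a base-weighted proportion: the ratio of
    its estimated variance to the simple-random-sampling variance p(1-p)/n."""
    return n * var_hat / (p_hat * (1.0 - p_hat))

def rao_scott_gof(observed, expected, deff):
    """Pearson goodness-of-fit statistic divided by the estimated design
    effect (first-order Rao-Scott correction)."""
    x2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return x2 / deff

# Under simple random sampling the design effect is 1 and the
# statistic is unchanged.
deff = design_effect(0.5, 0.25 / 200, 200)      # -> 1.0
stat = rao_scott_gof([60, 40], [50, 50], deff)  # -> 4.0
```

With a complex design the estimated variance exceeds the simple-random-sampling variance, so the design effect is above 1 and the corrected statistic shrinks accordingly.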

We also examined the effects of the sequencing and time conditions on response speed, measured as days to complete the web survey or paper questionnaire; days to completion were analyzed via unweighted t-tests. In addition, we compared the conditions on data quality measures, including web response time (i.e., the number of minutes it took the respondent to complete the web survey) and item nonresponse for web and paper completes (unweighted). Web response times were winsorized before the unweighted t-tests to diminish the effect of extreme values: response times below the 1st percentile of the distribution were set to the 1st percentile, and response times above the 99th percentile were set to the 99th percentile. Rates of item nonresponse were calculated as the number of survey items for which the respondent did not provide a valid response (answering "Don't Know," refusing, or skipping the item) divided by the total number of items the respondent was asked, which ranged from 29 questions up to 218 questions, with 46 questions asked on average. Finally, sample composition was compared across conditions by analyzing distributions of demographics and key survey indicators and testing for significance using chi-square tests.
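A minimal sketch of these two calculations, assuming a nearest-rank percentile rule and hypothetical missing-value codes (the text does not specify either):

```python
def percentile(values, q):
    """Nearest-rank percentile (q in 0-100) of a non-empty list."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(q / 100 * (len(s) - 1))))
    return s[k]

def winsorize_1_99(times):
    """Cap response times at the 1st and 99th percentiles so extreme
    durations do not dominate the t-tests."""
    lo, hi = percentile(times, 1), percentile(times, 99)
    return [min(max(t, lo), hi) for t in times]

def item_nonresponse_rate(responses, missing=("DK", "REF", None)):
    """Share of asked items with no valid response ("Don't Know",
    "Refused", or skipped); the codes here are hypothetical."""
    return sum(r in missing for r in responses) / len(responses)

w = winsorize_1_99(list(range(100)))              # 0 and 99 capped to 1 and 98
rate = item_nonresponse_rate(["A", "DK", None, "B"])  # -> 0.5
```

Because the rate is computed over the items each respondent was actually asked, respondents routed past large question blocks are not penalized for legitimate skips.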

4. RESULTS

4.1 Completion Rates

We first examined the effects of sequencing and time of text messaging on weighted completion rates.

AAPOR response rates could not be calculated because detailed dispositions were not available for the text messages. Table 4 presents the web, paper, phone, and overall completion rates by sequencing condition. Among Cooperative Respondents, there were no statistically significant differences between the early and late text reminder groups on web, paper, phone, or overall completion rates. Similarly, among Other Respondents, there were no statistically significant differences among the text invite, early text reminder, and late text reminder groups on web, paper, phone, or overall completion rates. However, those who received the text invite or early text reminder were more likely to complete the web survey using a smartphone than those who received the late text reminder (χ²(2) = 36.38, p < .0001). As expected, completion rates were significantly higher for Cooperative Respondents than for Other Respondents, as the Cooperative Respondents had a history of responding after receiving a text in wave 1.

Table 4.

Completion Rates by Sequencing of Text Messages

| | Cooperative: Early text reminder | Cooperative: Late text reminder | Other: Text invite | Other: Early text reminder | Other: Late text reminder |
|---|---|---|---|---|---|
| Unweighted n | 1,171 | 1,144 | 5,301 | 5,154 | 5,239 |
| Overall (%) | 77.6* | 72.8* | 57.2 | 58.8 | 58.4 |
| Web (%) | 66.0 | 62.3 | 54.6 | 54.7 | 55.3 |
|  Smartphone (%) | 55.8 | 48.3 | 37.1** | 34.5** | 26.2 |
|  Other web (%) | 10.2 | 14.0 | 17.5 | 20.2 | 29.1 |
| Paper (%) | 4.3 | 2.4 | 1.3 | 2.2 | 2.2 |
| Phone (%) | 7.4 | 8.1 | 1.3 | 1.8 | 0.9 |

Note.—Completion rates were base-weighted to account for unequal probability of selection.

*Significant result at alpha = 0.05 level.

**Significant result at alpha = 0.001 level.


Table 5 presents the web, paper, phone, and overall completion rates by time of text message. There were no statistically significant differences between text messages sent in the morning and afternoon on completion rates for Cooperative Respondents or Other Respondents. There were also no significant differences in the device used to complete the web survey by time of text.

Table 5.

Completion Rates by Time of Text Message

| | Cooperative: Morning | Cooperative: Afternoon | Other: Morning | Other: Afternoon |
|---|---|---|---|---|
| Unweighted n | 1,141 | 1,174 | 7,798 | 7,896 |
| Overall (%) | 73.1 | 77.7 | 56.7 | 59.5 |
| Web (%) | 60.8 | 67.7 | 53.6 | 56.2 |
|  Smartphone (%) | 49.6 | 55.1 | 31.9 | 34.1 |
|  Other web (%) | 11.2 | 12.6 | 21.7 | 22.1 |
| Paper (%) | 3.9 | 3.0 | 1.9 | 1.9 |
| Phone (%) | 8.4 | 7.0 | 1.2 | 1.4 |

Note.—Completion rates were base-weighted to account for unequal probability of selection.


Figure 1 shows the base-weighted cumulative overall completion rates by week for each condition. The solid vertical lines represent the text invitation, early text reminder, and late text reminder, respectively, and the dotted lines represent the four mailings to sampled persons, including the letter invitation, two postcard reminders, and the letter and paper questionnaire.

Figure 1.

Overall Cumulative Completion Rates by Contact for Sequencing Conditions.

Regarding Cooperative Respondents, as shown in figure 1, the completion rate at week 1 was similar between the early and late text reminder groups, since both groups received the text invitation, but diverged in weeks 3 and 4, when more sampled persons who had received the early text reminder responded. The early text reminder group had a significantly higher completion rate at week 4 than the late text reminder group (χ²(1) = 4.11, p = .04). The difference diminished and became nonsignificant after those in the late text reminder group received their text reminder. For the rest of the field period, the early text reminder group had a slightly higher completion rate, though the difference was not statistically significant. For this group, the text reminders seemed to be more effective than the text invitation at increasing completion rates, with the early text reminder being somewhat more effective than the late text reminder, at least early in the field period. The combination of mail contacts and texting led to a consistent increase in response after each contact. Since all groups received the same mailings, differences observed in weeks when a text was sent to one group and not another can likely be attributed to texting.

Regarding Other Respondents, as shown in figure 1, the text invite group had a significantly higher completion rate than the early and late text reminder groups at week 1 (χ²(2) = 177.08, p < .0001). However, the pattern changed quickly after the letter invitation was mailed. The early text reminder group had higher completion rates than the other two groups for the rest of the field period; in particular, it had a significantly higher completion rate than the text invite group at week 2 (χ²(1) = 11.37, p = .0007). The differences between these groups became nonsignificant afterward, with the early text reminder group retaining slightly higher completion rates than the text invite group. The late text reminder group had significantly lower completion rates than both other groups from weeks 2 to 8, suggesting that an additional contact, such as a text message, is more effective than no text contact for this group. The differences between the three conditions diminished after sample members in the late text reminder group received their reminder.

The findings suggest that a text invitation is effective at improving response, but its effects diminish quickly. An early text reminder appears to be more effective than a late text reminder at increasing completion rates. Though not tested specifically, it seems that the most effective contact strategy would include the text invitation, the early text reminder, and the late text reminder, in addition to the mail contacts.

4.2 Response Speed

We also examined how quickly respondents completed the survey. Table 6 presents the mean number of days to complete the web and paper surveys. For Cooperative Respondents, there were no significant differences between the early and late text reminder groups. For Other Respondents, however, there were significant differences in the mean number of days to complete the web survey (F(2, 5713) = 88.16, p < .0001), as well as the paper survey (F(2, 465) = 116.99, p < .0001). The mean number of days to complete the web survey was 13.4 days for the early text reminder group, compared with 18.2 days for the text invite group and 21.0 days for the late text reminder group. Respondents in the early text reminder group also completed the paper survey faster than those in the text invite group. These findings suggest that the early text reminder was the most effective at getting sample members to complete the survey request as quickly as possible.
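The fractional degrees of freedom reported in table 6 suggest unequal-variance (Welch) t-tests with Satterthwaite degrees of freedom; the exact procedure is an assumption here, and the sample data below are hypothetical:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and Satterthwaite degrees of freedom for two
    independent samples with unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)  # squared standard errors
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Hypothetical days-to-complete for two small groups.
t, df = welch_t([12, 14, 11, 13], [18, 21, 17, 20])
```

Unlike the pooled-variance t-test, this form does not assume the two groups have equal variances, which is why the reported degrees of freedom (e.g., t(1426.7)) are not whole numbers.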

Table 6.

Mean Number of Days to Complete by Sample by Sequencing and Time of Text

Sequencing condition:

| | Cooperative: Early text reminder | Cooperative: Late text reminder | Other: Text invite | Other: Early text reminder | Other: Late text reminder |
|---|---|---|---|---|---|
| Web completes: no. of days to complete | 11.5* | 11.7* | 18.2 | 13.4* | 21.0 |
| Web completes: sample size | 719 | 710 | 1,998 | 1,869 | 1,850 |
| Web completes: t-test/F-test (p-value) | t(5684.4) = 2.63 (.009) | | F(2, 5713) = 88.16 (<.0001) | | |
| Paper completes: no. of days to complete | 63.4* | 63.5* | 63.8* | 53.3 | 53.3 |
| Paper completes: sample size | 58 | 38 | 135 | 158 | 175 |
| Paper completes: t-test/F-test (p-value) | t(5684.4) = 2.63 (.009) | | F(2, 465) = 116.99 (<.0001) | | |

Time of text message:

| | Cooperative: Morning | Cooperative: Afternoon | Other: Morning | Other: Afternoon |
|---|---|---|---|---|
| Web completes: no. of days to complete | 11.4 | 11.8 | 18.2* | 16.9* |
| Web completes: sample size | 685 | 744 | 2,892 | 2,824 |
| Web completes: t-test (p-value) | t(1426.7) = –0.17 (.863) | | t(5684.4) = 2.63 (.009) | |
| Paper completes: no. of days to complete | 63.4 | 63.5 | 56.6 | 56.1 |
| Paper completes: sample size | 47 | 49 | 228 | 240 |
| Paper completes: t-test (p-value) | t(1426.8) = –0.46 (.645) | | t(460.0) = 0.65 (.519) | |

Note.—*Significant result at p < .05.


Table 6 also includes the mean number of days to complete the web and paper surveys by time of text message. For Cooperative Respondents, there were no significant differences between the morning and afternoon groups in the mean number of days to complete the web or paper survey. For Other Respondents, those who received the text message in the afternoon completed the web survey about a day faster than respondents who received the text message in the morning (t(5684.4) = 2.63, p = .009). There were no significant differences in the mean number of days to complete the paper survey between morning and afternoon.

4.3 Data Quality

We examined two data quality measures: item nonresponse rates for web and paper completes, and response time for web completes. If the sequencing or timing of text messages were negatively associated with data quality, through either higher rates of item nonresponse or speeding, it would suggest that the benefits of text messaging need to be balanced against its impact on data quality. As shown in table 7, neither sequencing nor time of text messaging had significant effects on web response time or on item nonresponse in the web or paper modes. These results are encouraging in that using text as a contact mode does not appear to affect these data quality measures.

Table 7.

Data Quality Measures by Sequencing and Time of Text

Sequencing condition:

| | Cooperative: Early text reminder | Cooperative: Late text reminder | Other: Text invite | Other: Early text reminder | Other: Late text reminder |
|---|---|---|---|---|---|
| Web completes: response time in minutes (mean) | 8.1 | 8.0 | 7.0 | 7.2 | 7.1 |
| Web completes: sample size | 719 | 710 | 1,998 | 1,868 | 1,850 |
| Web completes: t-test/F-test (p-value) | t(1421.3) = 0.62 (.533) | | F(2, 5713) = 0.36 (.696) | | |
| Web completes: item nonresponse rate (mean) | 1.3% | 1.2% | 0.9% | 1.0% | 0.8% |
| Web completes: t-test/F-test (p-value) | t(1424.7) = 0.53 (.596) | | F(2, 5713) = 0.57 (.565) | | |
| Paper completes: item nonresponse rate (mean) | 5.2% | 5.5% | 5.2% | 2.5% | 3.5% |
| Paper completes: sample size | 58 | 38 | 135 | 158 | 175 |
| Paper completes: t-test/F-test (p-value) | t(61.0) = –0.12 (.904) | | F(2, 465) = 2.45 (.087) | | |

Time of text message:

| | Cooperative: Morning | Cooperative: Afternoon | Other: Morning | Other: Afternoon |
|---|---|---|---|---|
| Web completes: response time in minutes (mean) | 8.0 | 8.1 | 7.1 | 7.1 |
| Web completes: sample size | 685 | 744 | 2,892 | 2,824 |
| Web completes: t-test (p-value) | t(1418.5) = –0.54 (.587) | | t(5709.7) = 0.12 (.908) | |
| Web completes: item nonresponse rate (mean) | 1.0% | 1.5% | 0.8% | 1.0% |
| Web completes: t-test (p-value) | t(1399.3) = –1.60 (.109) | | t(5665.6) = –1.26 (.206) | |
| Paper completes: item nonresponse rate (mean) | 5.0% | 5.6% | 4.1% | 3.2% |
| Paper completes: sample size | 47 | 49 | 228 | 240 |
| Paper completes: t-test (p-value) | t(93.9) = –0.23 (.822) | | t(463.4) = 0.91 (.365) | |

4.4 Sample Composition and Nonresponse

To explore the effects of sequencing and time of text messaging on nonresponse, we compared the sample composition across conditions for demographic and selected survey variables. Overall, the sample composition was very similar, with only a few significant differences found when comparing conditions (see tables S3–S6 in the supplementary data online). The oldest age group, 55 and older, made up a higher proportion of completes in the early text reminder groups for both Cooperative and Other Respondents. The Other Respondents also had a higher proportion of African American or Black respondents in the afternoon text group. Importantly, no significant differences were found in the key substantive indicators in the study (participation in hunting, fishing, or wildlife-associated activities), implying that these minimal demographic differences did not affect survey results.

5. CONCLUSIONS AND DISCUSSION

To explore the effectiveness of texting as a mode of contact, we conducted an experiment that varied when text messages were sent during the study period and the time of day they were sent, and examined the effects on response rates, response speed, data quality, and nonresponse. We found that the text invite did create an initial increase in response, but its effects diminished quickly. This lends some evidence to the hypothesis, based on social exchange theory, that electronic contact strategies (i.e., email and text) may be more trusted and attended to after a postal letter that establishes the legitimacy of the research ask (Dillman et al. 2014), even in longitudinal studies like this one where respondents previously consented to receiving text messages. An early text reminder was more effective than a late text reminder and raised the completion rate early in the field period, reducing the sample that needed to be sent additional follow-up mailings. The early text reminder also resulted in survey participants completing the survey more quickly: the mean number of days to complete the web survey was lower for those who received the early text reminder than for those who were sent the text invite or the late text reminder.

Some of the benefits of the text reminder may arise because additional contacts, regardless of mode, tend to improve overall response rates. It is possible that a similar effect could have been observed by sending an additional mail contact. However, texting also provides additional benefits that additional mail contacts may not, including: (i) text messages can be sent at a lower cost than a mailed letter or postcard, (ii) text messages may be attended to by people who may not receive or read mail at their address, and (iii) text messages can encourage a more timely response, especially when sample members are provided a direct link to a web survey that they can instantly complete on their own.

The early and late text reminders not only helped encourage web response but also helped increase response rates to the paper survey, relative to the text invitation. These findings are consistent with other studies that found contact via one mode can help drive response to the survey by another mode (Dillman et al. 2014; Dillman 2017). The mean number of days to complete the paper survey was also about 10 days shorter for those who received the text reminder compared with the text invite.

Sample composition did not significantly vary across experimental conditions, indicating that including text message contacts did not significantly impact the overall demographic representation of the completed sample. Data quality measures, including mean web response time and item nonresponse for web and paper, did not significantly vary across the conditions, suggesting that the text reminders did not negatively impact data quality. Sending the text in the morning versus the afternoon did not have a significant impact on overall response, nonresponse, or other data quality measures, though sample members in the Other Respondents group completed the survey about a day faster when sent afternoon texts rather than morning texts.

There are multiple limitations of this study that should be explored through additional research and experimentation. First, some form of texting was used in each of our conditions because we found texting had been helpful in wave 1, so we did not test a group that received no text invitation or reminder. This could be important for future studies seeking to quantify the benefits of texting versus no texting. Similarly, we did not directly test a group that received both a text invitation and a reminder, or multiple reminders, which we see as important for future experiments. Further, this experiment was conducted within a longitudinal study where consent to text was provided as part of the initial screener response. People may be more receptive to texts after they have already participated in an initial study than when they have not previously participated or consented to receive text messages. Future research should explore the use of unconsented texting in cross-sectional survey designs. Further studies could also explore whether certain days of the week are better for sending text messages than others. Finally, the text messages in this experiment focused on encouraging web response and used messaging specific to this study. Future research could explore encouraging response through other modes, such as calling in to complete the survey with an interviewer, and could experiment with the messaging used to encourage respondents to complete the survey.

Overall, our research suggests that text messages can be an important contact mode in mixed-mode survey designs. Sending text messages for survey reminders is likely more effective than sending a text invitation, especially for studies where no other outreach has been made to establish trust and legitimacy, and text messaging does not appear to negatively impact data quality. Further research should compare a contact strategy with texting versus no texting, examine whether sending multiple text reminders improves response more than a single text reminder, and experiment with different message content and strategies, including the use of unconsented texting in cross-sectional study designs. As text messaging continues to grow as a contact mode for mixed-mode study designs, it will be important to understand how to best integrate texting with other contact modes and how these design choices impact response in different survey populations and study designs.

SUPPLEMENTARY MATERIALS

Supplementary materials are available online at academic.oup.com/jssam.

The authors would like to thank our NORC at the University of Chicago colleagues David Sterrett and Emily Alvarez for their expertise on the National Survey of Fishing, Hunting, and Wildlife-Associated Recreation data collection, Kate Hobson for guidance on text message implementation, Semilla Stripp for support with analysis, and Nola du Toit for developing the data visualizations. The study design and analysis were not preregistered in an independent, institutional registry.

This work was supported by NORC at the University of Chicago as an internal research and development initiative. The experiment uses data collected by NORC at the University of Chicago for the U.S. Fish and Wildlife Service [F20AP00134].

REFERENCES

Andreadis
I.
(
2021
), “Web Survey Response Times What to Do and What Not to Do,” in Proceedings of the Survey Research Methods Section, Washington, DC: American Statistical Association, pp.
1774
1782
.

Beebe
T. J.
,
Davern
M. E.
,
McAlpine
D. D.
,
Call
K. T.
,
Rockwood
T. H.
(
2005
), “
Increasing Response Rates in a Survey of (Mail and Telephone)
,”
Medical Care
,
43
,
411
414
.

Beebe
T. J.
,
Jacobson
R. M.
,
Jenkins
S. M.
,
Lackore
K. A.
,
Rutten
L. J. F.
(
2018
), “
Testing the Impact of Mixed-Mode Designs (Mail and Web) and Multiple Contact Attempts within Mode (Mail or Web) on Clinician Survey Response
,”
Health Services Research
,
53
Suppl 1
,
3070
3083
.

Beebe, T. J., Locke, G. R., Barnes, S. A., Davern, M. E., and Anderson, K. J. (2007), "Mixing Web and Mail Methods in a Survey of Physicians," Health Services Research, 42, 1219–1234.

Beebe, T. J., McAlpine, D. D., Ziegenfuss, J. Y., Jenkins, S., Haas, L., and Davern, M. E. (2012), "Deployment of a Mixed-Mode Data Collection Strategy Does Not Reduce Nonresponse Bias in a General Population Health Survey," Health Services Research, 47, 1739–1754.

Bošnjak, M., Neubarth, W., Couper, M. P., Bandilla, W., and Kaczmirek, L. (2008), "Prenotification in Web-Based Access Panel Surveys: The Influence of Mobile Text Messaging versus E-Mail on Response Rates and Sample Composition," Social Science Computer Review, 26, 213–223.

Cabrera-Álvarez, P., and Lynn, P. (2022), "Text Messages to Incentivise Response in a Web-First Sequential Mixed-Mode Survey," Understanding Society Working Paper Series [online], 4. Available at https://www.iser.essex.ac.uk/research/publications/working-papers/understanding-society/2022-04.

Cook, C., Heath, F., and Thompson, R. L. (2000), "A Meta-Analysis of Response Rates in Web- or Internet-Based Surveys," Educational and Psychological Measurement, 60, 821–836.

Daikeler, J., Silber, H., and Bošnjak, M. (2022), "A Meta-Analysis of How Country-Level Factors Affect Web Survey Response Rates," International Journal of Market Research, 64, 306–333.

Dal Grande, E., Chittleborough, C. R., Campostrini, S., Dollard, M., and Taylor, A. W. (2016), "Pre-Survey Text Messages (SMS) Improve Participation Rate in an Australian Mobile Telephone Survey: An Experimental Study," PLoS One, 11, e0150231. Available at https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0150231.

De Bruijne, M., and Wijnant, A. (2014), "Improving Response Rates and Questionnaire Design for Mobile Web Surveys," Public Opinion Quarterly, 78, 951–962.

de Leeuw, E. (2005), "To Mix or Not to Mix Data Collection Modes in Surveys," Journal of Official Statistics, 21, 233–255.

de Leeuw, E., Hox, J., and Huisman, M. (2003), "Prevention and Treatment of Item Nonresponse," Journal of Official Statistics, 19, 153–176.

Dillman, D. A. (2017), "The Promise and Challenge of Pushing Respondents to the Web in Mixed-Mode Surveys," Survey Methodology, 43, 3–30.

Dillman, D. A., Smyth, J. D., and Christian, L. M. (2014), Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method (4th ed.), Hoboken, NJ: Wiley.

DuBray, P. (2013), "Use of Text Messaging to Increase Response Rates," in Conference Session, Federal Committee on Statistical Methodology (FCSM) 2013 Research Conference, Washington, D.C. Available at https://nces.ed.gov/FCSM/pdf/F3_DuBray_2013FCSM_AC.pdf.

Dykema, J., Jones, N. R., Piche, T., and Stevenson, J. (2013), "Surveying Clinicians by Web: Current Issues in Design and Administration," Evaluation and the Health Professions, 36, 352–381.

Groves, R. M. (1989), Survey Errors and Survey Costs, New York, NY: Wiley.

Groves, R. M., Singer, E., and Corning, A. (2000), "Leverage-Saliency Theory of Survey Participation: Description and an Illustration," The Public Opinion Quarterly, 64, 299–308.

Lundquist, P., and Särndal, C. E. (2013), "Aspects of Responsive Design with Applications to the Swedish Living Conditions Survey," Journal of Official Statistics, 29, 557–582.

Millar, M. M., and Dillman, D. A. (2011), "Improving Response to Web and Mixed-Mode Surveys," Public Opinion Quarterly, 75, 249–269.

Moore, J. C., Durrant, G. B., and Smith, P. W. F. (2016), "Data Set Representativeness during Data Collection in Three UK Social Surveys: Generalizability and the Effects of Auxiliary Covariate Choice," Journal of the Royal Statistical Society, Series A (Statistics in Society), 181, 229–248.

Olson, K., Smyth, J. D., and Wood, H. M. (2012), "Does Giving People Their Preferred Survey Mode Actually Increase Survey Participation Rates? An Experimental Examination," Public Opinion Quarterly, 76, 611–635.

Patrick, M. E., Couper, M. P., Jang, B. J., Laetz, V., Schulenberg, J. E., O'Malley, P. M., Bachman, J., and Johnston, L. D. (2022), "Building on a Sequential Mixed-Mode Research Design in the Monitoring the Future Study," Journal of Survey Statistics and Methodology, 10, 149–160.

Perrin, A. (2021), "Mobile Technology and Home Broadband 2021," Pew Research Center. Available at https://www.pewresearch.org/internet/2021/06/03/mobile-technology-and-home-broadband-2021/.

Rao, J. N. K., and Scott, A. J. (1979), "Chi-Squared Tests for Analysis of Categorical Data from Complex Surveys," in Proceedings of the Survey Research Methods Section, Washington, DC: American Statistical Association, pp. 58–66.

Smyth, J. D., Olson, K., and Millar, M. M. (2014), "Identifying Predictors of Survey Mode Preference," Social Science Research, 48, 135–144.

