Abstract

Survey respondents typically report having voted at a rate higher than the nation in fact turned out on election day. This may be the result of errors people make when trying to remember whether they voted and of motivated misreporting due to social desirability bias. This paper explores whether a new sequence of questions designed to reduce both types of errors yields lower reports of turnout in telephone interviews. An experiment embedded in the 2006 American National Election Studies Pilot Study indicates that the question sequence did do so, and that it resulted in turnout estimates more consistent with official records than did the simple, direct traditional ANES question.

Introduction

Post-election surveys typically yield higher voter turnout estimates than do government records (e.g., Clausen 1968; Traugott and Katosh 1979), and validation studies suggest that some respondents claimed to have voted when county election records of voter registration and turnout indicate that they did not (e.g., Clausen 1968; Parry and Crossley 1950; Traugott and Katosh 1979). Higher estimates of turnout from surveys may result from (1) missing government records for respondents who voted; (2) lower survey participation among nonvoters than among voters; (3) increased turnout caused by interviewing respondents before an election and thereby boosting their interest in politics and/or their sense of civic duty; (4) errors in methods used to calculate rates of actual turnout, such as using the voting-age population as the denominator rather than the voting-eligible population (Burden 2000; Clausen 1968; McDonald and Popkin 2001; Presser, Traugott, and Traugott 1990; Traugott 1989); (5) misreporting by respondents motivated to portray themselves favorably (Aarts 2002; Andolina et al. 2003; Blais et al. 2004; Brockington and Karp 2002; Holbrook, Green, and Krosnick 2003; Holbrook and Krosnick 2010; Lutz 2003; Lyons and Scheb 1999; although see Locander, Sudman, and Bradburn 1976; Presser 1990); and (6) accidentally misremembering having voted when one did not (Belli et al. 1999; Belli, Moore, and Van Hoewyk 2006; Stocké 2007; although see Abelson, Loftus, and Greenwald 1992; Belli, Traugott, and Rosenstone 1994).

Past studies have explored whether these last two factors can be reduced by replacing the simple direct question (SDQ) asked in many past American National Election Studies (ANES) surveys with the short or long “experimental questions” shown in table 1. These experiments have sometimes found that the experimental questions yielded lower turnout estimates than did the SDQ, but not always (see table 2).1

Table 1.

Wording of Turnout Measures

Simple Direct Question (SDQ)
    “In talking to people about elections, we often find that a lot of people were not able to vote because they weren’t registered, they were sick, or they just didn’t have time. How about you—did you vote in the elections this November?” (43 words)
Long Experimental Question
    “In talking about elections, we sometimes find that people who thought about voting actually did not vote. Also, people who usually vote may have trouble saying for sure whether they voted in a particular election. In a moment, I’m going to ask you whether you voted on Tuesday, November 8, which was [day(s)/week(s)] ago. Before you answer, think of a number of different things that will likely come to mind if you actually did vote this past election day; things like whether you walked, drove, or were driven by another person to your polling place [pause], what the weather was like on the way [pause], the time of day it was [pause], and people you went with, saw, or met while there [pause]. After thinking about it, you may realize that you did not vote in this particular election [pause]. Now that you’ve thought about it, which of these statements best describes you? I did not vote in the November 8 election. I thought about voting this time but didn’t. I usually vote but didn’t this time. I am sure I voted in the November 8 election.” (187 words)
Short Experimental Question
    “The next question is about the elections in November. In talking to people about elections, we often find that a lot of people were not able to vote because they weren’t registered, they were sick, or they just didn’t have time. Which of the following statements best describes you? One, I did not vote in the November 3 election; two, I thought about voting this time but didn’t; three, I usually vote but didn’t this time; or four, I am sure I voted in the November 3 election.” (88 words)
New Multi-Question Sequence (MQS)
    “In talking to people about elections, we often find that a lot of people were not able to vote because they weren’t registered, they were sick, they didn’t have time, or something else happened to prevent them from voting. And sometimes, people who USUALLY vote or who PLANNED to vote forget that something UNUSUAL happened on election day this year that prevented them from voting THIS time. So please think carefully for a minute about the election held on November 7, [PAUSE] and past elections in which you may have voted and answer the following questions about your voting behavior. During the past six years, did you usually vote in national, state, and local elections, or did you usually NOT vote?”
    “During the months leading up to the election held on November 7, did you ever plan to vote in that election, or didn’t you plan to do that?”
    “In the election held on November 7, did you definitely vote in person on election day, definitely mail in a completed absentee ballot before election day, definitely not vote, or are you not completely sure whether you voted in that election?” (190 words)
Table 2.

Design Features and Results of Past Experiments Comparing Long and Short Experimental Questions to Simple Direct Question

Study | Date of election asked about | Field period | Mode and sample | N | Response rate | Turnout estimate: simple direct question | Turnout estimate: long experimental question | Turnout estimate: short experimental question | Significance test: simple direct vs. long experimental question | Significance test: simple direct vs. short experimental question
Belli et al. (1994) | Nov. 8, 1994 | Nov. 9–Dec. 20, 1994 | Telephone survey of 350 residents of Ann Arbor, MI, and 350 residents of Ypsilanti, MI, who were registered to vote and had a listed telephone number; sampled from voter registration lists | 700 | 29% | 88% | 87% | — | χ2(1) = 0.07, p = .80 | —
Belli et al. (1999), Study 1 | Nov. 5, 1996 | Nov. 6, 1996–Jan. 31, 1997 | National RDD telephone survey of adults | 1,329 | 69% | 70% | 61% | — | Logistic regression coefficient = –0.39, p < .001 | —
Belli et al. (1999), Study 2 | Vote-by-mail; ballots counted on Jan. 30, 1996 | Feb. 1–Mar. 12, 1996 | Telephone survey of Oregon residents who were eligible to vote | 1,327 | 60% | 72% | 72% | — | Logistic regression coefficient = –0.02, n.s. | —
Duff et al. (2007) | Nov. 5, 2002 | Nov. 6–Dec. 6, 2002 | 2002 pre-post election ANES data—telephone survey of U.S. adult citizens who are eligible to vote (included a sample of those interviewed in 2000 and a fresh cross-section) | 1,344 | 55.8% for pre-election; 89.1% reinterview for post-election | 65% | — | 57% | — | χ2(1) = 8.77, p = .003
Belli et al. (2006) | Nov. 3, 1998 | Dec. 1, 1998–Feb. 1999 | National RDD telephone survey of adults interviewed in the Survey of Consumer Attitudes in December 1998 and January and February 1999 | 1,464 | Dec.: 65%; Jan.: 62%; Feb.: 62% | 63% | 48% | 58% | Logistic regression coefficient = –0.62, p < .001 | Logistic regression coefficient = –0.23, p = .08

Note.—The question labels used in the turnout-estimate and significance-test columns of this table refer to the questions listed in table 1.


The current study assessed whether a new multi-question sequence (MQS; see the bottom of table 1) lowers turnout estimates in telephone interviews. This approach sought to achieve the same goals as prior experimental questions but with design improvements, based on the following logic:

  • 1) Some respondents may feel a societal expectation that they vote, so during a survey (especially an oral interview), such respondents who did not vote may prefer not to acknowledge their failure to conform to this social norm and be motivated to misreport having voted (see Karp and Brockington 2005).

  • 2) Allowing respondents to report that they intended to vote in the most recent election and/or that they often voted in previous elections may allow people to convey an image of being responsible citizens and therefore to feel less pressure to report that they voted when they did not.

  • 3) Respondents who intended to vote in the most recent election but did not, and respondents who voted in previous elections but did not vote in the most recent one, might experience “source confusion,” mislabel memories of their intentions or prior behaviors as memories of actually voting in the most recent election (e.g., Johnson, Hashtroudi, and Lindsay 1993; see also Hill and Hurley 1984; cf. Presser and Traugott 1992), and misreport having voted when they did not.

  • 4) Source confusion may be more likely as the time interval between the election and the survey interview grows (because memories may become less vivid over time),2 and more common among people who are interested in politics (who are especially likely to have intended to vote and to have voted often in the past).3

  • 5) Inducing respondents to think about whether they intended to vote in the most recent election and whether they voted in previous elections may help respondents separate their memories of intentions and past behavior from their memories of behavior in the most recent election, thereby minimizing source confusion and increasing reporting accuracy.

  • 6) It is undesirable to offer answer choices that are not mutually exclusive when asking closed-ended questions that require respondents to select just one answer (Bradburn, Sudman, and Wansink 2004). Respondents who thought about voting but did not and who usually voted in past elections could legitimately select any of three of the answer choices offered by the long or short experimental questions, and it is not clear which of these the respondents should choose.

  • 7) Offering double-barreled answer choices (such as the second and third answer choices in the long and short experimental questions) should be avoided, because some respondents may find themselves wanting to endorse part of an answer choice but not another part of it.

  • 8) Any question measuring participation in an election that does not explicitly assess “early voting” done before election day may fail to accurately assess turnout, since early voting is increasingly common (Gronke, Galanes-Rosenbaum, and Miller 2007).

  • 9) To reduce cognitive burden and administration time, it is preferable to decompose a long, complex, and cognitively difficult question into a series of short, easy-to-answer questions.

The MQS (1) decomposed the long experimental question into several shorter questions to reduce cognitive burden, (2) allowed respondents to report one attribute at a time, and (3) offered mutually exclusive and exhaustive sets of answer choices. The MQS was intended to alert respondents to the possibility of memory errors, encourage respondents to think carefully when answering, and reduce social desirability pressures by providing respondents with the opportunity to report that they usually voted and/or that they had planned to vote (Belli et al. 1999). Finally, the MQS allowed respondents to report that they voted through absentee or early voting and to report that they were not sure whether they voted.

We tested whether the MQS reduced estimates of turnout relative to the SDQ, whether the MQS was especially effective at reducing turnout reports from people who were interviewed a long time after the election and who were most involved in politics (measured using past voting behavior, strength of party identification, political efficacy, and an index of attention to and interest in politics), and whether the MQS produced more accurate estimates of turnout rates in various states.

Data

This test was carried out in the ANES 2006 Pilot Study. An area probability sample of 1,211 U.S. adults eligible to vote was interviewed face-to-face before the 2004 presidential election (RR = 66.1 percent),4 and 1,066 of these people were reinterviewed after that election (reinterview rate = 88.0 percent). Between November 13, 2006, and January 7, 2007, 675 of the respondents from the 2004 pre-election wave were reinterviewed by telephone (reinterview rate = 56.3 percent; eleven deceased respondents were excluded; cumulative response rate = 37.2 percent, the product of the 2004 and 2006 rates).

The 675 respondents interviewed in 2006 were randomly assigned to be asked either the SDQ or the MQS (see Supplementary Data, the Appendix in this article, and the online appendix [supplementary data online] for all question wordings, coding of variables, and distributions of responses to the MQS).5

Results

The two groups of respondents who were randomly assigned to be asked the SDQ or the MQS did not differ significantly on most of the six demographic and nine political attributes we examined (see the online appendix). However, compared with the people who were asked the SDQ, people who were asked the MQS were significantly more likely to have said that they did not vote in 2000 (26.8 percent versus 19.4 percent), more likely to be Hispanic (5.6 percent versus 1.8 percent), and less likely to be White (75.2 percent versus 81.9 percent).6 We therefore controlled for these variables in our analyses.
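
For readers wishing to reproduce this kind of randomization check, the following is a minimal sketch in Python: each covariate is cross-tabulated against the question-wording condition and tested with a chi-square test of independence. The DataFrame and column names are hypothetical placeholders, not ANES codebook names.

```python
# Minimal sketch of a randomization balance check across experimental
# conditions. Column names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

def balance_check(df: pd.DataFrame, condition: str, covariates: list) -> pd.DataFrame:
    rows = []
    for cov in covariates:
        table = pd.crosstab(df[cov], df[condition])   # counts: covariate x condition
        chi2, p, dof, _ = chi2_contingency(table)
        rows.append({"covariate": cov, "chi2": chi2, "df": dof, "p": p})
    return pd.DataFrame(rows)

# Example: flag covariates on which the two random halves differ at p < .05.
# report = balance_check(df, "mqs", ["voted_2000", "hispanic", "white"])
# print(report[report["p"] < 0.05])
```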

Of respondents who were asked the SDQ, 81 percent said they voted, whereas 72 percent of respondents who were asked the MQS said so, a significant difference (b = –0.50, p < .01; see equation I in table 3).7 This effect was marginally significant when controlling for turnout in 2000, race, and ethnicity (b = –0.37, p = .06; see equation II in table 3), and maintained its strength when controlling for all demographic variables (b = –0.48, p < .05; see equation III in table 3).
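
The following is a hedged sketch of the form of these logistic regressions, using cluster-robust standard errors as a stand-in for the full complex-design correction described in the note to table 3. The column names (turnout, mqs, psu) are assumptions, not the ANES variable names.

```python
# Sketch of the table 3 logistic regressions: reported turnout regressed
# on the question-wording indicator, optionally with controls.
import pandas as pd
import statsmodels.api as sm

def turnout_logit(df: pd.DataFrame, predictors: list):
    X = sm.add_constant(df[predictors])
    model = sm.Logit(df["turnout"], X)
    # Cluster-robust covariance as a stand-in for the full design correction.
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["psu"]}, disp=0)

# Equation I: question wording only.
# fit = turnout_logit(df, ["mqs"])
# Predicted probabilities for the two conditions:
# p_sdq = fit.predict(pd.DataFrame({"const": [1.0], "mqs": [0.0]}))[0]
# p_mqs = fit.predict(pd.DataFrame({"const": [1.0], "mqs": [1.0]}))[0]
```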

Table 3.

Logistic Regression Coefficients Showing the Effect of the Question-Wording Manipulation on Estimated Turnout (standard errors in parentheses)

Predictor | Equation I | Equation II | Equation III
Experimental questions | –0.50** (0.14) | –0.37+ (0.19) | –0.48* (0.20)
Voted in 2000 | — | 2.43** (0.21) | 2.12** (0.21)
Male | — | — | 0.16 (0.22)
Race/ethnicity: Black | — | –0.31 (0.38) | –0.12 (0.40)
Race/ethnicity: Hispanic | — | –0.08 (0.47) | –0.11 (0.57)
Race/ethnicity: Asian | — | –0.38 (0.70) | –0.50 (0.73)
Race/ethnicity: Other race (non-white) | — | –0.59 (0.44) | –0.75 (0.47)
Education: High school only | — | — | 1.00* (0.40)
Education: Some college | — | — | 1.24* (0.50)
Education: 4-year college degree | — | — | 1.36* (0.54)
Education: Advanced degree | — | — | 1.99** (0.51)
Age | — | — | –4.84* (2.35)
Age² | — | — | 7.62** (2.74)
Income: $20,000–39,999 | — | — | 0.31 (0.39)
Income: $40,000–59,999 | — | — | 0.59+ (0.33)
Income: $60,000–79,999 | — | — | 1.00* (0.41)
Income: $80,000–119,999 | — | — | 0.96* (0.38)
Income: $120,000 and up | — | — | 0.70 (0.47)
Income: Not reported | — | — | 0.23 (0.36)
Employment status: Employed | — | — | –0.02 (0.29)
Employment status: Unemployed | — | — | 0.88 (0.82)
Region: Lives in southern state | — | — | 0.002 (0.28)
Region: Lives in border state | — | — | –0.10 (0.19)
N | 675 | 668 | 668
Predicted probability: simple direct question | 81% | 83% | 85%
Predicted probability: experimental questions | 72% | 77% | 76%

Note.—These analyses corrected for the complex sampling design of the initial 2004 survey (accounting for clustering using both the stratum and the sampling error computation unit codes found in variables V06P007a and V06P007b of the 2006 ANES codebook). Unweighted results are shown. Using the weights provided by the ANES without trimming reduced the efficiency of the analysis by 21 percent, due to some extremely large and extremely small weights. When the analyses shown in table 3 were repeated using these weights, the effects of the experimental questions in equations I, II, and III were not significant (equation I: b = –0.31, SE = 0.19, p = .12; equation II: b = –0.30, SE = 0.28, p = .30; equation III: b = –0.40, SE = 0.30, p = .19). But when the four weights larger than 6 were replaced with the median weight (0.8039), the question-wording effects were again significant (equation I: b = –0.39, SE = 0.16, p = .02; equation II: b = –0.41, SE = 0.22, p = .07; equation III: b = –0.54, SE = 0.23, p = .03). **p < .01, *p < .05, +p < .10.
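
The weight trimming described in the note can be sketched in a few lines. The cutoff of 6 and the median of 0.8039 are taken from the note; the column name is an assumption.

```python
# Sketch of weight trimming: the few weights above the cutoff are
# replaced with the median weight.
import pandas as pd

def trim_weights(w: pd.Series, cutoff: float = 6.0) -> pd.Series:
    return w.where(w <= cutoff, w.median())  # keep small weights, cap large ones at the median

# df["weight_trimmed"] = trim_weights(df["weight"])
```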

Among the respondents who were asked the MQS and who said that they did not vote in 2006, 42 percent said that they had usually voted in the past, and 52 percent said that they had planned to vote in 2006. These sizeable numbers are consistent with the argument that the MQS reduced turnout reports by allowing respondents to report having performed admirable behaviors and/or by helping respondents avoid source confusion.

We next tested whether the question-wording effect was stronger among respondents more likely to experience memory errors. None of the moderators we tested significantly interacted with the question-wording manipulation to influence turnout reports (time between election day and the survey interview: b = –0.03, SE = 0.76, n.s.; an index of interest in and attention to politics: b = 0.18, SE = 0.91, n.s.; reported turnout in 2000: b = 0.31, SE = 0.48, n.s.; strength of party identification: b = 0.48, SE = 0.63, n.s.; political efficacy: b = 0.61, SE = 1.03, n.s.).8 Furthermore, the seven demographic variables did not moderate the question-wording effect (see the online appendix [supplementary data online]).
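
Each of these moderation tests takes the same form: reported turnout is regressed on the condition indicator, the moderator, and their product, and the interaction coefficient is examined. A hedged sketch with hypothetical variable names:

```python
# Sketch of one moderation test using the statsmodels formula interface.
# A null interaction coefficient indicates no moderation.
import statsmodels.formula.api as smf

def moderation_test(df, moderator: str):
    formula = f"turnout ~ mqs * {moderator}"      # main effects plus interaction
    fit = smf.logit(formula, data=df).fit(disp=0)
    term = f"mqs:{moderator}"
    return fit.params[term], fit.bse[term], fit.pvalues[term]

# b, se, p = moderation_test(df, "days_since_election")
```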

To compare the accuracy of the MQS and the SDQ, we computed the association of the turnout estimates yielded by each method with the actual turnout rates in the twenty-seven states with sufficient survey sample sizes (obtained from the United States Election Project, elections.gmu.edu; see table 4). The average absolute difference between the state turnout estimate generated using the SDQ responses and the actual turnout rate for that state was significantly larger than the average absolute difference between the state turnout estimate produced by the MQS and the actual turnout rate (40.27 versus 26.13 percentage points; t(26) = 4.97, p = .001). The SDQ yielded a higher turnout rate than the MQS in twenty of the twenty-seven states (74 percent), significantly more than would be expected by chance alone (p = .01). Furthermore, the correlation of actual state-level turnout rates with survey state-level estimates of turnout was substantially stronger using the MQS (r = 0.45) than using the SDQ (r = 0.28). These state-by-state results constitute a series of replications and are consistent with the hypothesis that the MQS yielded more accurate measurements than did the SDQ. However, the 2004 ANES sample was not designed to yield a representative sample of the population of each state, and the numbers of respondents interviewed in most states were very small, so this analysis should be viewed with caution.
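
The computations behind this comparison (mean absolute error, a paired t-test on the absolute errors, a sign test, and the two correlations) can be reproduced as in the sketch below, assuming a DataFrame with one row per state; the column names sdq_est, mqs_est, and official are assumptions.

```python
# Sketch of the state-level accuracy comparison. All values are in
# percentage points.
import pandas as pd
from scipy.stats import ttest_rel, binomtest, pearsonr

def compare_state_accuracy(states: pd.DataFrame) -> dict:
    err_sdq = (states["sdq_est"] - states["official"]).abs()
    err_mqs = (states["mqs_est"] - states["official"]).abs()
    t, p_t = ttest_rel(err_sdq, err_mqs)              # paired test on absolute errors
    higher = int((states["sdq_est"] > states["mqs_est"]).sum())
    sign = binomtest(higher, n=len(states), p=0.5)    # sign test against chance
    return {
        "mae_sdq": err_sdq.mean(), "mae_mqs": err_mqs.mean(),
        "t": t, "p_t": p_t, "p_sign": sign.pvalue,
        "r_sdq": pearsonr(states["official"], states["sdq_est"])[0],
        "r_mqs": pearsonr(states["official"], states["mqs_est"])[0],
    }
```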

Table 4.

Error in State-Level Estimates of Turnout for the Simple Direct and Experimental Questions

State | Reported turnout with the simple direct question (%) | Reported turnout with the experimental questions (%) | Number of respondents | Official turnout rate (VEP, %) | Error for simple direct question (a) | Error for experimental questions (b) | Difference in error rates (simple direct question – experimental questions)
Alabama | 88.24 | 75.00 | 33 | 37.50 | 50.74 | 37.50 | 13.24
Arkansas | 80.00 | 44.44 | 14 | 38.70 | 42.60 | 5.74 | 36.86
California | 73.81 | 82.35 | 76 | 40.20 | 33.61 | 42.15 | –8.54
Colorado | 84.62 | 40.00 | 18 | 47.30 | 37.32 | 7.30 | 30.02
Connecticut | 100.00 | 100.00 | 3 | 46.60 | 53.40 | 53.40 | 0.00
Florida | 83.33 | 75.00 | 24 | 39.60 | 43.73 | 35.40 | 8.33
Illinois | 100.00 | 64.71 | 23 | 40.20 | 59.80 | 24.51 | 35.29
Indiana | 64.71 | 46.15 | 30 | 36.60 | 28.11 | 9.55 | 18.56
Iowa | 87.50 | 57.14 | 15 | 48.10 | 39.40 | 9.04 | 30.36
Louisiana | 90.91 | 54.55 | 22 | 29.70 | 61.21 | 24.85 | 36.36
Maryland | 88.89 | 66.67 | 21 | 46.70 | 42.19 | 19.97 | 22.22
Massachusetts | 73.91 | 70.00 | 33 | 48.80 | 25.11 | 21.20 | 3.91
Michigan | 86.67 | 88.00 | 40 | 52.10 | 34.57 | 35.90 | –1.33
Minnesota | 100.00 | 81.25 | 28 | 60.10 | 39.90 | 21.15 | 18.75
Missouri | 100.00 | 66.67 | 10 | 50.00 | 50.00 | 16.67 | 33.33
New Hampshire | 60.00 | 50.00 | 7 | 41.40 | 18.60 | 8.60 | 10.00
New Jersey | 62.50 | 66.67 | 20 | 39.50 | 23.00 | 27.17 | –4.17
New York | 80.00 | 70.59 | 42 | 34.90 | 45.10 | 35.69 | 9.41
Ohio | 100.00 | 92.31 | 19 | 47.50 | 52.50 | 44.81 | 7.69
Oregon | 75.00 | 83.33 | 14 | 52.50 | 22.50 | 30.83 | –8.33
Pennsylvania | 100.00 | 80.00 | 10 | 44.10 | 55.90 | 35.90 | 20.00
Tennessee | 80.00 | 83.33 | 11 | 41.40 | 38.60 | 41.93 | –3.33
Texas | 70.00 | 52.94 | 37 | 30.90 | 39.10 | 22.04 | 17.06
Utah | 87.50 | 50.00 | 16 | 34.30 | 53.20 | 15.70 | 37.50
Virginia | 88.89 | 82.76 | 47 | 44.00 | 44.89 | 38.76 | 6.13
Washington | 81.25 | 66.67 | 25 | 47.30 | 33.95 | 19.37 | 14.58
Wisconsin | 71.43 | 73.68 | 33 | 53.20 | 18.23 | 20.48 | –2.25
Average error | — | — | — | — | 40.27 | 26.13 | 14.14

Note.—When the weights provided by ANES were used, the pattern was similar: error was 38.23 for the simple direct question vs. 26.01 for the experimental questions (test of difference: t(26) = 3.42, p = .001); the correlation of official estimates with results obtained with the simple direct question was 0.18, in contrast to 0.55 computed using results from the experimental questions. When trimmed weights were used, error was 39.02 for the simple direct question vs. 25.04 for the experimental questions (test of difference: t(26) = 4.03, p < .001); the correlations were 0.26 and 0.55, respectively. The VEP turnout rate is the vote for the highest office divided by the voting-eligible population. In a midterm election, the vote for the highest office is the largest number of votes cast in the race for governor, U.S. senator, or combined House of Representatives (see http://elections.gmu.edu/Turnout_2006G.html). ANES interviews in 2006 were conducted in only thirty states. In three of these states, the number of people interviewed was very small, and all respondents in those states were asked the same turnout question, so we could not analyze the effect of question wording in those states. Therefore, our analyses were limited to respondents in the remaining twenty-seven states. The differences reported here between the survey estimates and official estimates are large for both survey conditions. This was also true at the national level, as the official VEP estimate of turnout was 41.3 percent (substantially lower than the ANES estimates). The 2006 ANES survey may have been particularly likely to yield high estimates of turnout because all the respondents had been interviewed at least once before (and many of them twice) and because the cumulative response rate for the 2006 survey was fairly low.

(a) Abs(official turnout rate – percent turnout measured with the simple direct question).

(b) Abs(official turnout rate – percent turnout measured with the experimental questions).


Discussion

These results suggest that during the 2006 telephone interviews, turnout reports were more accurate when obtained with the new MQS than when obtained using the SDQ. The fact that the question-wording manipulation did not have more impact among people interviewed longer after the election or among respondents higher in political involvement (who are more likely to experience source confusion) is consistent with several past studies (Belli et al. 1999; Belli, Moore, and Van Hoewyk 2006; Duff et al. 2007)9 and suggests that the MQS did not reduce reported turnout by reducing memory errors relative to the SDQ. Therefore, future research should explore the mechanism for the principal effect observed here.

All past studies showing that experimental question wordings reduced reported turnout (relative to the SDQ) have involved telephone interviewing (Belli et al. 1999; Belli, Moore, and Van Hoewyk 2006; Duff et al. 2007), as did ours. Most past ANES surveys have involved face-to-face interviewing, so we should hesitate before concluding that ANES face-to-face interviewing would benefit from use of the new MQS. Specifically, some past studies suggest that relative to telephone interviewing, face-to-face interviewing entails less social desirability response bias and inspires more cognitive effort to generate accurate answers (e.g., de Leeuw and van der Zouwen 1988; Holbrook, Green, and Krosnick 2003; Johnson, Hougland, and Clayton 1989; cf. Colombotos 1965; Rogers 1976; Wiseman 1972). Therefore, if the new MQS did improve the accuracy of telephone turnout reports by reducing social desirability pressures and/or source confusion, the same benefits may not be observed during face-to-face data collection. Future research is needed to explore this.

Turnout estimates from the MQS were still consistently higher than official rates, perhaps because the MQS did not completely eliminate memory errors and/or social desirability bias. This overestimation may also have occurred for other reasons, such as greater nonresponse among non-voters than among voters and higher real turnout rates among respondents due to pre-election interviews (Greenwald et al. 1987; Traugott and Katosh 1979; though see Greenwald et al. 1988; Mann 2005; Smith, Gerber, and Orlich 2003).

Two additional caveats apply to the present findings:

  • These data did not permit testing whether the MQS reduced turnout reporting relative to the SDQ for each individual respondent, and it is possible that the MQS caused underreporting of turnout by some respondents.10

  • The study provides no direct evidence regarding the mechanism by which the MQS reduced reported turnout rates.

Conclusion

Because the MQS yielded lower estimates of turnout than did the SDQ and also reduced cognitive burden and obtained more information about respondents’ voting behavior (e.g., past voting behavior, voting intentions, and early and absentee voting), the new series of questions seems to be a viable alternative to the SDQ and a possible alternative to previous experimental questions (although we could not test this directly). We look forward to future studies comparing the new multi-question sequence to other experimental questions.

Appendix: Question Wordings and Coding of Variables for Core Survey Variables

Some of the measures we used were administered during the 2006 interviews, and others were administered during the 2004 pre- or post-election interviews. Data from the 2004 and 2006 interviews were linked using respondents’ case ID numbers. The year in which a variable was measured is indicated in parentheses after the variable name, along with the variable name from the ANES codebook. Unless otherwise specified, respondents with missing data on a given variable were excluded from analyses in which that variable was used.

Core Survey Variables

Turnout question version (2006): In the survey, 336 respondents, selected randomly, were asked the ANES simple direct turnout question (see row 1 of table 1; v06p776). The other 339 respondents were asked a series of questions measuring turnout (see row 4 of table 1; v06p777–v06p779b). Using answers to these questions, turnout in 2006 was coded 1 for respondents who said they voted (in response to the simple direct question) or were sure they voted (in response to the new questions) and 0 for respondents who said they did not.11 A variable was coded 1 for respondents who were asked the new questions and 0 for respondents who were asked the simple direct question.
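
As an illustration, the coding rule can be written as a small function. The answer strings below are paraphrases of the response options, not the ANES codebook values, and "not completely sure" responses are coded 0 here as a simplifying assumption.

```python
# Illustration of the turnout coding rule described above. Strings are
# paraphrases of the response options, not ANES codebook values.
def code_turnout(version: str, answer: str) -> int:
    if version == "sdq":
        return 1 if answer == "voted" else 0
    # MQS: only definite reports of voting count as turnout.
    voted = {"definitely voted in person", "definitely mailed absentee ballot"}
    return 1 if answer in voted else 0
```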

Past voting behavior (2004): Respondents were asked, “In 2000, Al Gore ran on the Democratic ticket against George W. Bush for the Republicans and Ralph Nader as the Reform Party candidate. Do you remember for sure whether or not you voted in that election?” [v043002]12 People who said they had voted were coded 1, and people who said they had not were coded 0.13

Days since the election (2006): The number of days between the election and the respondent’s 2006 post-election interview was coded to range from 0 (for interviews done six days after the election, the shortest observed time interval) to 1 (for interviews done fifty-nine days after the election, the longest interval). [v06p201a and v06p201b]
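
This coding is a simple min-max rescaling; the endpoints of 6 and 59 days are the observed extremes reported above.

```python
# Min-max rescaling of days since the election to the 0-1 interval.
def rescale_days(days: float, lo: float = 6.0, hi: float = 59.0) -> float:
    return (days - lo) / (hi - lo)

# rescale_days(6) -> 0.0; rescale_days(59) -> 1.0
```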

Political involvement (2006): Half of the respondents were randomly assigned to be asked the following three questions: (1) “How interested are you in information about what’s going on in government and politics? Extremely interested, very interested, moderately interested, slightly interested, or not interested at all?” [v06p630] (2) “How closely do you pay attention to information about what’s going on in government and politics? Extremely closely, very closely, moderately closely, slightly closely, or not closely at all?” [v06p631] (3) “How often do you pay attention to what’s going on in government and politics? All the time, most of the time, about half the time, once in a while, or never?” [v06p632] The other half of the respondents were randomly assigned to be asked the following two questions: (1) “Some people don’t pay much attention to political campaigns. How about you? Would you say that you have been VERY MUCH interested, SOMEWHAT interested, or NOT MUCH interested in the political campaigns this year?” [v06p633] and (2) “Some people seem to follow what’s going on in government and public affairs most of the time, whether there’s an election going on or not. Others aren’t that interested. Would you say you follow what’s going on in government and public affairs most of the time, some of the time, only now and then, or hardly at all?” [v06p634] Responses to all questions were coded to range from 0 (least involved) to 1 (most involved) and averaged into an index of political involvement. Respondents with missing data for any question were coded as missing for the index. Respondents were assigned to the two political involvement conditions orthogonally from assignment to turnout question-wording condition, and these two variables were unassociated.

Strength of party identification (2006): Half of the respondents were randomly assigned to be asked: “Generally speaking, do you think of yourself as a Republican, a Democrat, an Independent, or what?” (the order of the first two response options was varied across respondents). The other half of the respondents were randomly assigned to be asked: “As of today, do you think of yourself as a Republican, a Democrat, an Independent, or what?” (the order of the first two response options was varied across respondents). Then all respondents were asked the appropriate follow-up questions: [IF REPUBLICAN] “Would you call yourself a strong Republican or a not-very-strong Republican?” [IF DEMOCRAT] “Would you call yourself a strong Democrat or a not-very-strong Democrat?” [IF INDEPENDENT OR SOMETHING ELSE] “Do you think of yourself as closer to the Republican Party or to the Democratic Party?” [summary variable: v06p680] (the order of these response options was varied across respondents and consistent with the order of the initial party identification question). Responses were coded 1 for respondents who identified as strong Republicans or Democrats, 2/3 for respondents who identified as weak Republicans or Democrats, 1/3 for respondents who initially reported that they were independent or something else but reported that they were closer to one party or the other in the follow-up question, and 0 for respondents who said they were independents or something else and did not lean toward either major party. Assignment to the two question-wording conditions was orthogonal to assignment to turnout question-wording condition.

Political efficacy (2006): Half of the respondents were randomly assigned to be asked the following two questions: (1) “I’d like to read you a few statements about public life. I’ll read them one at a time. Please tell me how strongly you agree or disagree with each of them. ‘Public officials don’t care much what people like me think.’ Do you agree strongly, agree somewhat, neither agree nor disagree, disagree somewhat, or disagree strongly?” [v06p650] and (2) “‘People like me don’t have any say about what the government does.’ Do you agree strongly, agree somewhat, neither agree nor disagree, disagree somewhat, or disagree strongly?” [v06p651] The other half of the respondents were randomly assigned to be asked the following two questions: (1) “How much do public officials care what people like you think? A great deal, a lot, a moderate amount, a little, or not at all?” [v06p652] and (2) “How much can people like you affect what the government does? A great deal, a lot, a moderate amount, a little, or not at all?” [v06p653] Responses to each question were coded to range from 0 (lowest efficacy) to 1 (highest efficacy) and averaged together to form a political efficacy index. Respondents with missing data for any question were coded as missing for the index. Assignment to the two question-wording conditions was orthogonal to assignment to the turnout question-wording condition.
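
The involvement and efficacy indices described in the preceding paragraphs follow the same recipe (recode each item to the 0-1 interval, average, and propagate missing data), and the party-strength measure is a fixed mapping. A sketch under these assumptions, with hypothetical column names:

```python
# Sketch of the index construction for the involvement and efficacy
# measures, plus the party-strength mapping from the text.
import numpy as np
import pandas as pd

def build_index(df: pd.DataFrame, items: list) -> pd.Series:
    sub = df[items]
    index = sub.mean(axis=1)                  # average of 0-1 recoded items
    index[sub.isna().any(axis=1)] = np.nan    # any missing item -> missing index
    return index

PARTY_STRENGTH = {
    "strong partisan": 1.0,
    "weak partisan": 2 / 3,
    "leaning independent": 1 / 3,
    "pure independent": 0.0,
}
```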

References

Aarts, Kees. 2002. “Electoral Turnout in West European Democracies.” Paper presented at the Annual Meeting of the American Political Science Association, Boston.
Abelson, Robert P., Elizabeth F. Loftus, and Anthony G. Greenwald. 1992. “Attempts to Improve the Accuracy of Self-Reports of Voting.” In Questions about Questions, edited by J. M. Tanur, 138–53. New York: Russell Sage Foundation.
Andolina, Molly, Scott Keeter, Cliff Zukin, and Krista Jenkins. 2003. “A Guide to the Index of Civic and Political Engagement.” College Park, MD: Center for Information and Research on Civic Learning and Engagement.
Belli, Robert F., Sean E. Moore, and John Van Hoewyk. 2006. “An Experimental Comparison of Question Forms Used to Reduce Vote Overreporting.” Electoral Studies 25:751–59.
Belli, Robert F., Michael W. Traugott, and Matthew N. Beckmann. 2001. “What Leads to Voting Overreports? Contrasts of Overreporters to Validated Voters and Admitted Nonvoters in the American National Election Studies.” Journal of Official Statistics 17:479–98.
Belli, Robert F., Michael W. Traugott, Margaret Young, and Katherine A. McGonagle. 1999. “Reducing Vote Overreporting in Surveys: Social Desirability, Memory Failure, and Source Monitoring.” Public Opinion Quarterly 63:90–108.
Belli, Robert F., Santa Traugott, and Steven J. Rosenstone. 1994. “Reducing Overreporting of Voter Turnout: An Experiment Using a Source Monitoring Framework.” ANES Technical Report No. 35. Ann Arbor, MI: American National Election Studies.
Berent, Matthew K., Jon A. Krosnick, and Arthur Lupia. 2011. “The Quality of Government Records and Over-Estimation of Registration and Turnout in Surveys: Lessons from the 2008 ANES Panel Study’s Registration and Turnout Validation Exercises.” Working Paper No. nes012554, February 15, 2011, version. Ann Arbor, MI, and Palo Alto, CA: American National Election Studies. Available at http://www.electionstudies.org/resources/papers/nes012554.pdf.
Bernstein, Robert, Anita Chadha, and Robert Montjoy. 2001. “Overreporting Voting: Why It Happens and Why It Matters.” Public Opinion Quarterly 65:22–44.
Blais, André, Elisabeth Gidengil, Neil Nevitte, and Richard Nadeau. 2004. “Where Does Turnout Decline Come From?” European Journal of Political Research 43:221–36.
Bradburn, Norman M., Seymour Sudman, and Associates. 1979. Improving Interview Method and Questionnaire Design. San Francisco: Jossey-Bass.
Bradburn, Norman M., Seymour Sudman, and Brian Wansink. 2004. Asking Questions: The Definitive Guide to Questionnaire Design. San Francisco: Jossey-Bass.
Brockington, David, and Jeffrey Karp. 2002. “Social Desirability and Response Validity: A Comparative Analysis of Overreporting Turnout in Five Countries.” Paper presented at the Annual Meeting of the American Political Science Association, Boston.
Burden, Barry C. 2000. “Voter Turnout and the National Election Studies.” Political Analysis 8:389–98.
Clausen, Aage. 1968. “Response Validity: Vote Report.” Public Opinion Quarterly 32:588–606.
Colombotos, John. 1965. “The Effects of Personal vs. Telephone Interviews on Socially Acceptable Responses.” Public Opinion Quarterly 29:457–58.
de Leeuw, Edith D., and Johannes van der Zouwen. 1988. “Data Quality in Telephone and Face-to-Face Surveys: A Comparative Meta-Analysis.” In Telephone Survey Methodology, edited by R. M. Groves, P. P. Biemer, L. E. Lyberg, J. T. Massey, W. L. Nicholls, and J. Waksberg, 283–99. New York: Wiley.
Duff, Brian, Michael J. Hanmer, Won-ho Park, and Ismail K. White. 2007. “Good Excuses: Understanding Who Votes with an Improved Turnout Question.” Public Opinion Quarterly 71:67–90.
Granberg, Donald, and Sören Holmberg. 1991. “Self-Reported Turnout and Voter Validation.” American Journal of Political Science 35:448–59.
Greenwald, Anthony G., Catherine G. Carnot, Rebecca Beach, and Barbara Young. 1987. “Increasing Voting Behavior by Asking People If They Expect to Vote.” Journal of Applied Psychology 72:315–18.
Greenwald, Anthony G., Mark R. Klinger, Mark E. Vande Kamp, and Katherine L. Kerr. 1988. “The Self-Prophecy Effect: Increasing Voter Turnout by Vanity-Assisted Consciousness Raising.” Unpublished manuscript, University of Washington, Seattle.
Gronke, Paul, Eva Galanes-Rosenbaum, and Peter A. Miller. 2007. “Early Voting and Turnout.” PS: Political Science and Politics 40:639–45.
Hill, Kim Q., and Patricia A. Hurley. 1984. “Nonvoters in Voters’ Clothing: The Impact of Voting Behavior Misreporting on Voting Behavior Research.” Social Science Quarterly 65:199–206.
Holbrook, Allyson L., Melanie C. Green, and Jon A. Krosnick. 2003. “Telephone versus Face-to-Face Interviewing of National Probability Samples with Long Questionnaires: Comparisons of Respondent Satisficing and Social Desirability Response Bias.” Public Opinion Quarterly 67:79–125.
Holbrook, Allyson L., and Jon A. Krosnick. 2008. “Can Question Wording Reduce Turnout Overreporting in Surveys? The 2002 and 2004 American National Election Study Experiments.” Unpublished manuscript.
Holbrook, Allyson L., and Jon A. Krosnick. 2010. “Social Desirability Bias in Voter Turnout Reports: Tests Using the Item Count Technique.” Public Opinion Quarterly 74:37–67.
Johnson, Marcia K., Shahin Hashtroudi, and D. Stephen Lindsay. 1993. “Source Monitoring.” Psychological Bulletin 114:3–28.
Johnson, Timothy P., James G. Hougland Jr., and Richard R. Clayton. 1989. “Obtaining Reports of Sensitive Behavior: A Comparison of Substance Use Reports from Telephone and Face-to-Face Interviews.” Social Science Quarterly 70:174–83.
Karp, Jeffrey A., and David Brockington. 2005. “Social Desirability and Response Validity: A Comparative Analysis of Overreporting Voter Turnout in Five Countries.” Journal of Politics 67:825–40.
Locander, William, Seymour Sudman, and Norman Bradburn. 1976. “An Investigation of Interview Method, Threat, and Response Distortion.” Journal of the American Statistical Association 71:269–75.
Lutz, Georg. 2003. “Participation, Cognitive Involvement, and Democracy: When Do Low Turnout and Low Cognitive Involvement Make a Difference, and Why?” Paper presented at the European Consortium for Political Research Joint Sessions of Workshops, Edinburgh, UK.
Lyons, William, and John M. Scheb II. 1999. “Early Voting and the Timing of the Vote: Unanticipated Consequences of Electoral Reform.” State and Local Government Review 31:147–52.
Mann, Christopher B. 2005. “Unintentional Voter Mobilization: Does Participation in Pre-Election Surveys Increase Voter Turnout?” Annals of the American Academy of Political and Social Science 601:155–68.
McDonald, Michael P., and Samuel L. Popkin. 2001. “The Myth of the Vanishing Voter.” American Political Science Review 95:963–74.
Parry, Hugh J., and Helen M. Crossley. 1950. “Validity of Responses to Survey Questions.” Public Opinion Quarterly 14:61–80.
Presser, Stanley. 1984. “Is Inaccuracy on Factual Survey Items Item-Specific or Respondent-Specific?” Public Opinion Quarterly 48:344–55.
Presser, Stanley. 1990. “Can Context Changes Reduce Vote Overreporting in Surveys?” Public Opinion Quarterly 54:586–93.
Presser, Stanley, and Michael Traugott. 1992. “Little White Lies and Social Science Models: Correlated Response Errors in a Panel Study of Voting.” Public Opinion Quarterly 56:77–86.
Presser, Stanley, Michael W. Traugott, and Santa Traugott. 1990. “Vote ‘Over’ Reporting in Surveys: The Records or the Respondents?” ANES Technical Report No. 39. Ann Arbor, MI: American National Election Studies.
Rogers, Theresa F. 1976. “Interviews by Telephone and in Person: Quality of Responses and Field Performance.” Public Opinion Quarterly 40:51–65.
Smith, Jennifer K., Alan S. Gerber, and Anton Orlich. 2003. “Self-Prophecy Effects and Voter Turnout: An Experimental Replication.” Political Psychology 24:593–604.
Stocké, Volker. 2007. “Response Privacy and Elapsed Time since Election Day as Determinants for Vote Overreporting.” International Journal of Public Opinion Research 19:237–46.
Stocké, Volker, and Tobias Stark. 2007. “Political Involvement and Memory Failure as Interdependent Determinants of Vote Overreporting.” Applied Cognitive Psychology 21:239–57.
Tourangeau, Roger, Robert M. Groves, and Cleo D. Redline. 2010. “Sensitive Topics and Reluctant Respondents: Demonstrating a Link between Nonresponse Bias and Measurement Error.” Public Opinion Quarterly 74:413–32.
Traugott, Michael W., and John P. Katosh. 1979. “Response Validity in Surveys of Voting Behavior.” Public Opinion Quarterly 43:359–77.
Traugott, Santa. 1989. “Validating Self-Reported Vote: 1964–1988.” Paper presented at the Annual Meeting of the American Statistical Association, Washington, DC.
Wiseman, Frederick. 1972. “Methodological Bias in Public Opinion Surveys.” Public Opinion Quarterly 36:105–8.

Notes

1. It is also possible that a longer question inspires respondents to be more thoughtful and therefore to generate more accurate turnout reports (Bradburn, Sudman, and Associates 1979).

2. Some studies have reported evidence consistent with this prediction (Abelson, Loftus, and Greenwald 1992; Belli et al. 1999), though others have disconfirmed it (Belli, Moore, and Van Hoewyk 2006; Holbrook and Krosnick 2008).

3. Some past studies have found overreporting to be strongest among respondents who were higher in political involvement (Presser 1984; Stocké and Stark 2007; Tourangeau, Groves, and Redline 2010) or more strongly identified with a political party (Belli, Traugott, and Beckmann 2001; Bernstein, Chadha, and Montjoy 2001; Granberg and Holmberg 1991; Stocké and Stark 2007).

4. This rate most closely corresponds to AAPOR’s Response Rate 3.

5. ANES data and codebooks are available at http://electionstudies.org/index.htm.

6. Assignment to question wording was not confounded with other aspects of the survey’s administration.

7. Logistic regression coefficients are denoted by b. No respondents said they were not sure whether they voted.

8. All results reported above were computed without weighting the sample, but the same findings were obtained when weights were applied.

9. Belli, Moore, and Van Hoewyk (2006) speculated that they found no moderation of the question-wording effect by the time elapsed between the election and the interview because their data collection began a month after the election rather than immediately after it. Because our study’s data collection began immediately after the 2006 election, our failure to find evidence of moderation cannot be explained in that way. Duff et al. (2007) did not report statistical significance tests for moderation, so it is difficult to know which of their findings are consistent with ours.

10. We did not compare survey reports to government records because of concerns about the validity of such record checks and their substantial cost in time and money (Berent, Krosnick, and Lupia 2011; Presser, Traugott, and Traugott 1990).

11. Random assignment to condition was done using a simple random number generator. The CATI programming (see http://electionstudies.org/studypages/2006pilot/2006pilot_CATIcode.pdf) assigned a random half of the respondents to be asked the simple direct question and the other half to be asked the experimental questions.
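For illustration only, a per-respondent random draw of this kind might look like the following sketch; the actual assignment logic is in the CATI code linked above, and whether the split was an exact half is not something this sketch assumes.

    # Hypothetical sketch of the random assignment in note 11 (not the
    # actual CATI code): a Bernoulli(0.5) draw per respondent.
    import random
    condition = "SDQ" if random.random() < 0.5 else "experimental"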

12. Ralph Nader was listed on the ballot in most states as a candidate for the Green Party, but the 2004 ANES questionnaire referred to Nader as the Reform Party candidate.

13. During the 2006 interviews, respondents were not asked whether they had voted in any specific elections prior to the 2006 election.
