Considering "don't know" options for surveys

The Survey Academy Series: Chapter 2

 

Whether to show a “don’t know” option among the response alternatives of a survey is an important decision that has not yet received enough debate.

 

Firstly, it should be noted that there are some questions where respondents may not know what to answer. If the “don’t know” option is not present, and if skipping the question without an answer is not allowed, the respondent has only two choices left: abandoning the survey or answering randomly.

 

Obviously, neither of these outcomes is desirable for a researcher. That is why showing a “don’t know” option, or allowing respondents to skip a question without answering, might be more convenient. But which of these two alternatives is best?

 

There are many factors that should be considered. On the one hand, if respondents are allowed to skip questions, we will get more “missing values”: some of these values will come from people who actually did not know what to say, but others will come from people who did not want to take the time to think of an answer and skipped the question instead.

 

On the other hand, if a “don’t know” option is given explicitly, it will not only be chosen by people who do not know what to answer, but also by those who try to minimize the effort of answering the questionnaire (a behavior known as “satisficing”; see e.g. Krosnick, 1991).

 

Clearly, both allowing respondents to skip questions and including a “don’t know” option have their downsides.


In face-to-face or telephone surveys, the most common strategy for resolving this dilemma is not to mention a “don’t know” category among the answers. If, during the interview, a respondent spontaneously states that they do not know what to answer, the response is coded as such and the respondent moves on to the next question, but only after the interviewer has insisted on getting a substantive response.

 

In the absence of an interviewer who can insist on the importance of selecting an answer, or who can manually record one that is not shown, forced answering without a “don’t know” option has become widespread in online surveys. As mentioned before, this generates problems, but it is certainly attractive for data users (the researchers), who do not have to worry about what to do with missing data and “don’t know” responses. Couper (2008) criticized this practice: it is true that forcing an answer avoids missing values, but it is also true that it can increase drop-out, and if there is too much abandonment, the researcher may end up facing representativeness problems.

 

Additionally, a quality issue arises. If respondents don’t know what to answer but they give an answer anyway, not only does their answer not add any valuable information, but it also distorts the information already collected. In response to this dilemma, Couper (2008) recommended allowing respondents to skip questions.

 

Nevertheless, this solution does not eliminate the risk of getting many more missing values than those corresponding to people who genuinely do not know, or do not want, to answer; in other words, it too can encourage satisficing. To reduce this risk, De Leeuw, Boevé and Hox (2013) proposed using the interactivity of the web to make the relationship between the respondent and the computer more similar to the one between a respondent and an interviewer: they suggested both allowing respondents to skip a question and explicitly offering a “don’t know” option. When a respondent selects either of these two options, a message appears confirming that the response has been registered, stressing the importance of their answer to the research, and asking whether they really want to continue without giving a substantive answer. In this way, skipping a question or selecting “don’t know” is no longer the comfortable option, because the respondent also has to respond to the confirmation message. A respondent who merely wants to minimize effort is discouraged, while those who really do not know, or do not want, to answer can still choose the corresponding options.
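To make the mechanics concrete, here is a minimal sketch in browser TypeScript of how such a confirmation step might be wired into a web survey. The type names, function names, and message wording are illustrative assumptions, not taken from De Leeuw et al. (2013); the point is only that the non-substantive paths gain an extra confirmation click.

```typescript
// A minimal sketch of the confirmation-message pattern, for a
// browser-based survey. All names and wording here are illustrative
// assumptions, not the API of any particular survey platform.

type Answer =
  | { kind: "substantive"; value: string }
  | { kind: "dont_know" }
  | { kind: "skipped" };

// Shown only when the respondent skips or selects "don't know": it
// confirms that the response was registered, stresses the value of a
// substantive answer, and makes the low-effort path cost one more click.
function confirmNonResponse(): boolean {
  return window.confirm(
    "Your response has been registered. Your answer is important for " +
      "this research. Do you really want to continue without answering?"
  );
}

// Decide how to record the respondent's action on one question.
// Returning null means "re-ask the question": the respondent
// reconsidered after seeing the confirmation message.
function handleResponse(raw: string | null): Answer | null {
  if (raw !== null && raw !== "dont_know") {
    return { kind: "substantive", value: raw }; // normal answer, no prompt
  }
  if (!confirmNonResponse()) {
    return null; // respondent went back to choose a substantive answer
  }
  return raw === null ? { kind: "skipped" } : { kind: "dont_know" };
}
```

The design point is symmetry: both the skip path and the “don’t know” path trigger the same extra step, so neither is cheaper than answering, while both remain available to respondents who genuinely need them.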

 

Using data from the LISS panel, De Leeuw et al. (2013) found that the percentage of respondents selecting “don’t know” decreased significantly when a confirmation message was used (from 24% to 8% when “don’t know” was offered as a button). This suggests that, without the confirmation message, there is a much higher level of satisficing. They also found that offering a “don’t know” option lowers reliability, but that adding the confirmation message increases it significantly.

 

From this research, they concluded that it is better to allow respondents to skip a question without explicitly offering a “don’t know” option, unless it is a question where respondents can genuinely be expected not to know the answer. Where a “don’t know” option is necessary, it is recommended to combine it with a confirmation message. Showing a confirmation message when a question is skipped also appears to be effective.
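As a rough illustration, these recommendations could be encoded as per-question survey configuration, as in the sketch below. The field names are assumptions made for the example, not part of any real survey tool.

```typescript
// Hypothetical per-question configuration reflecting the recommendations
// above; all names are illustrative assumptions.
interface QuestionConfig {
  id: string;
  text: string;
  // Offer an explicit "don't know" button only where respondents can
  // genuinely be expected not to know the answer.
  showDontKnow: boolean;
  // Show the confirmation message whenever the respondent skips the
  // question or selects "don't know".
  confirmNonResponse: boolean;
}

const questions: QuestionConfig[] = [
  // Attitude item: everyone can report an opinion, so no "don't know"
  // button, but skipping still triggers the confirmation message.
  {
    id: "q1",
    text: "How satisfied are you with your neighborhood?",
    showDontKnow: false,
    confirmNonResponse: true,
  },
  // Knowledge item: ignorance is a legitimate answer, so "don't know"
  // is offered, combined with the confirmation message.
  {
    id: "q2",
    text: "What interest rate does your savings account pay?",
    showDontKnow: true,
    confirmNonResponse: true,
  },
];
```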
 

References:

Couper, M. P. (2008). Designing Effective Web Surveys. Cambridge: Cambridge University Press.

De Leeuw, E. D., Boevé, A., & Hox, J. (2013). Does one really know? Avoiding noninformative answers in a reliable way. Presentation at the General Online Research Conference, Mannheim.

Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5, 213–236.

 
