Balanced questions reduce "yes" responses

The Survey Academy Series: Chapter 5

The quality of a survey depends on how respondents perceive the questions we ask them. When preparing a survey, we must therefore think about the respondents and how they may react to the way we phrase our questions. The more we know about respondents’ behavior, the better we can design surveys. That is why survey methodology can benefit from the field of psychology. A clear example of this is the study of the human tendency to say “yes”.

This curious tendency was first studied by Berg and Rapaport (1954) using a questionnaire with no actual questions, in which the respondent simply had to choose a response option for each scale shown. For example, when there were two possible answers, “yes” and “no”, the respondent had to select one of them.

In a survey design like this, if a scale offers two options, for example “yes” and “no”, we would expect, after repeating the experiment with a large number of respondents, that 50% of them would choose “yes” and the other 50% “no”. However, Berg and Rapaport discovered that the percentage of respondents who chose positive answers (“yes”, “in agreement”, “true”) was consistently higher than mere probability would predict. They therefore concluded that, in general, people tend to say “yes”. This phenomenon is known as “yes-saying” or “acquiescence bias”.
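The 50/50 baseline described above can be checked numerically. The sketch below simulates responses with a hypothetical bias level and scores the observed “yes” share against the 50% null with a simple one-sample z-test; the bias level and sample size are illustrative assumptions, not figures from the studies cited.

```python
import math
import random

def yes_rate_z_score(n_yes: int, n_total: int, p_null: float = 0.5) -> float:
    """Z-score of the observed 'yes' share against the null proportion p_null."""
    p_hat = n_yes / n_total
    se = math.sqrt(p_null * (1 - p_null) / n_total)  # standard error under the null
    return (p_hat - p_null) / se

# Simulate 1,000 respondents with a mild acquiescence bias:
# each answers "yes" with probability 0.56 instead of the neutral 0.50.
# (0.56 is a made-up bias level, for illustration only.)
random.seed(42)
TRUE_P_YES = 0.56
n_yes = sum(random.random() < TRUE_P_YES for _ in range(1000))

z = yes_rate_z_score(n_yes, 1000)
print(f"observed yes-share: {n_yes / 1000:.3f}, z = {z:.2f}")
```

A z-score well above 2 signals that the surplus of “yes” answers is unlikely to be chance alone, which is the pattern Berg and Rapaport kept observing.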

We must point out that this tendency varies considerably between countries. For example, in Asian countries it is common to observe a stronger tendency to say “yes” than in Mediterranean countries such as Spain. Even so, the results obtained by Berg and Rapaport have been replicated on many occasions and in different countries, even when the type of sample or the scales used were changed, and the effect has proven conclusive: the tendency to say “yes” is universal.

This discovery has to be taken into account when we design a survey, as the results can vary greatly depending on how the questions are formulated.

For example, if a survey includes the following question: “Do you think brand X offers good products? yes/no”, we will obtain a greater percentage of respondents with a positive opinion of brand X than if the question were formulated this way: “Do you think brand X offers bad products? yes/no”. The answers would differ even if we asked the same respondents to answer both variations of the survey.

Why does this happen? With the first question, respondents will tend to say “yes” more often, which means “YES, the products are GOOD”. But with the second question, the same tendency to say “yes” means “YES, the products are BAD”, which is equivalent to the opposite option of the first question (“they are NOT good products”).
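In analysis code, this mapping between “yes” answers and substantive opinions is usually made explicit by reverse-coding negatively worded items. A minimal sketch, with hypothetical question identifiers:

```python
# Reverse-coding sketch: map "yes"/"no" answers from both phrasings
# onto one substantive scale (True = positive opinion of the brand).
# The question ids and wording directions below are made up for illustration.
QUESTION_DIRECTION = {
    "q_good": +1,  # "Do you think brand X offers good products?"
    "q_bad": -1,   # "Do you think brand X offers bad products?"
}

def positive_opinion(question_id: str, answer: str) -> bool:
    """Return True when the answer expresses a positive opinion."""
    said_yes = answer.strip().lower() == "yes"
    if QUESTION_DIRECTION[question_id] > 0:
        return said_yes
    return not said_yes  # "yes" to a negative wording means a negative opinion

print(positive_opinion("q_good", "yes"))  # → True
print(positive_opinion("q_bad", "yes"))   # → False
```

Reverse-coding makes the two phrasings comparable, but it does not remove the acquiescence bias itself; it only keeps the bias from silently flipping sign between items.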

Three theories have been proposed to explain this willingness to say “yes”. The first holds that the phenomenon stems from a disposition of character: out of courtesy, some people tend to agree with everything they are told (Goldberg, 1990). The second explains that when a statement takes a single orientation (for example, asserting that “products are bad”), some respondents think the statement reflects what the researcher believes; they then assume the statement must be true, because the researcher (“the expert”) knows more than they do about the survey’s topic (Lenski & Leggett, 1960). The third states that respondents who do not want to make an effort will say “yes” to any assertion shown to them, to save themselves the trouble of thinking the question through carefully (Krosnick, 1991).


When we talk about online surveys, it is assumed that the absence of an interviewer helps to reduce the acquiescence bias described by the first theory: the tendency to be courteous. But the other two sources of bias remain, perhaps more so than in other methods. In fact, since most online panels use economic incentives to recruit and motivate their respondents, and many of them allow anyone to register as a panelist (“opt-in panels”), some people decide to participate only for the economic gain. These people may not be willing to make the effort needed to answer surveys properly.

This is observed more often in online surveys than in traditional ones, as traditional questionnaires do not pay their respondents (and the contrast is even greater with surveys that use genuine probability sampling). That is why the bias described by the third theory can be worse in online surveys. It is therefore crucial, in online surveys, to take this “yes-saying” tendency into account and try to limit it as much as possible.

One way of reducing this problem is to use balanced questions: “Do you think brand X offers good or bad products?”, with “good” and “bad” as the answer options. With this format, respondents may still keep some tendency to choose the positive category, but research on this phenomenon has shown that the bias is much lower when balanced questions with equally balanced scales are used.
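One rough way to quantify the gap between the two unbalanced phrasings is a simple additive model (our illustration, not a method from the studies cited): if the true share with a positive opinion is p and every respondent’s “yes” probability is inflated by a constant a, the positively worded item yields yes_pos = p + a and the negatively worded one yields yes_neg = (1 − p) + a, so a = (yes_pos + yes_neg − 1) / 2. A sketch with made-up split-sample rates:

```python
def acquiescence_estimate(yes_rate_positive: float, yes_rate_negative: float) -> float:
    """Estimate the acquiescence bias a under a simple additive model:
    yes_pos = p + a and yes_neg = (1 - p) + a, where p is the true share
    with a positive opinion. Solving the pair gives
    a = (yes_pos + yes_neg - 1) / 2."""
    return (yes_rate_positive + yes_rate_negative - 1) / 2

# Hypothetical split-sample results: 62% said "yes" to the "good products"
# wording and 48% said "yes" to the "bad products" wording.
print(round(acquiescence_estimate(0.62, 0.48), 3))  # → 0.05
```

If balanced questions work as the research suggests, running the same estimate on a balanced design should yield a value much closer to zero.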

In short, it is advisable to use balanced questions whenever possible, presenting both the positive and the negative sense, not only in the scale but also in the question itself.

If you are interested in learning more about best practices for online surveys, you can read our free ebook by clicking the banner below.

Free ebook - The essentials of online data collection



Bibliographic references


Berg, I.A. and Rapaport, G.M. (1954). “Response bias in an unstructured questionnaire”. The Journal of Psychology.

Goldberg, L.R. (1990). “An alternative ‘description of personality’: The Big-Five factor structure”. Journal of Personality and Social Psychology, Vol. 59, No. 6, 1216-1229.

Krosnick, J.A. (1991). “Response strategies for coping with the cognitive demands of attitude measures in surveys”. Applied Cognitive Psychology, 5, 213-236.

Lenski, G.E. and Leggett, J.C. (1960). “Caste, Class, and Deference in the Research Interview”. American Journal of Sociology, Vol. 65, No. 5, 463-467.
