Alternative methods for selecting web survey samples

When thinking about participation rates, is it better to use a profiled sample or a non-profiled one? What about fieldwork efficiency? Or the accuracy of the answers? And satisfaction with the survey?


While, intuitively, it seems that fieldwork efficiency should improve when using a profiled sample (i.e. by reducing the number of filter-outs), our intuition about participation rates, accuracy, and survey satisfaction may not be as clear. In our latest publication, “Alternative methods for selecting web survey samples” (IJMR, 2018), we put this intuition to the test. In doing so, we also explored two further alternatives: the use of passive data and invitations sent closer to the “moment of truth”.


Sampling methods

In a classic survey design, respondents are selected using quotas on socio-demographic variables to guarantee that the final sample has distributions of these variables similar to those of the target population. In addition to quotas, filter questions are usually included in the survey to exclude respondents who do not correspond to the target. This approach has at least two drawbacks. First, on the respondent side, being filtered out can be a frustrating experience: panelists were willing to participate and started the survey, but were filtered out anyway, in some cases without receiving the incentive. Second, on the panel side, it may require inviting many panelists, which increases the cost of the project in both time and money.
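To make the classic design concrete, here is a minimal sketch in Python of quota-based selection followed by an in-survey filter. The panel records, quota targets, and the `bought_flight_recently` flag are all hypothetical, for illustration only.

```python
import random

# Hypothetical panel records; real ones come from the panel database.
panel = [
    {"id": 1, "gender": "F", "age_group": "18-34", "bought_flight_recently": True},
    {"id": 2, "gender": "M", "age_group": "35-54", "bought_flight_recently": False},
    # ... the rest of the panel
]

# Hypothetical quota targets on socio-demographic cells.
quotas = {
    ("F", "18-34"): 100, ("M", "18-34"): 100,
    ("F", "35-54"): 100, ("M", "35-54"): 100,
}

def invite_under_quotas(panel, quotas):
    """Invite panelists in random order until every quota cell is full."""
    counts = dict.fromkeys(quotas, 0)
    invited = []
    for person in random.sample(panel, len(panel)):
        cell = (person["gender"], person["age_group"])
        if cell in quotas and counts[cell] < quotas[cell]:
            invited.append(person)
            counts[cell] += 1
    return invited

def passes_filter(respondent):
    """In-survey filter question: respondents outside the target are
    filtered out only after they have already started the survey."""
    return respondent["bought_flight_recently"]
```

Note that the filter acts only after a panelist has already started the survey, which is precisely the source of the two drawbacks above.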


A second approach is to select respondents using information about different aspects of panelists’ lives (e.g. media consumption) that Netquest has previously collected and stored, known as ‘profiling information’. While this information may not correspond exactly to the definition of the target population, it can be used as a proxy to increase the chances of selecting respondents with the desired profile.
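Under this approach, part of the filtering burden moves from the survey itself to the panel database. A minimal sketch, assuming a hypothetical stored profiling variable `travels_by_plane`:

```python
def preselect_by_profile(panel, variable="travels_by_plane"):
    """Keep only panelists whose stored profiling variable suggests they
    match the target; filter questions still confirm this in the survey."""
    return [p for p in panel if p.get(variable, False)]
```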


The third approach consists of using information about the panelists’ browsing behavior (i.e. the URLs visited, the timestamp of each visit, and the time spent on it) to decide whom to invite to a given survey. For instance, if the target population is “people who like sports”, we can select panelists who regularly visit sports-related websites.
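A minimal sketch of this idea, assuming a simplified log format (one record per visit) and an arbitrary “regular visitor” threshold; both are illustrative assumptions, not the actual implementation used in the study:

```python
from collections import Counter

SPORTS_DOMAINS = ("marca.com", "as.com")  # hypothetical list of sports sites

def regular_visitors(visits, domains=SPORTS_DOMAINS, min_visits=5):
    """Return IDs of panelists with at least `min_visits` visits to the
    given domains. Each visit record carries the panelist ID, the URL,
    a timestamp, and the time spent on the page."""
    counts = Counter(
        visit["panelist_id"]
        for visit in visits
        if any(domain in visit["url"] for domain in domains)
    )
    return {pid for pid, n in counts.items() if n >= min_visits}
```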


Finally, we explored the effect of the moment at which the survey was completed. Passive data also makes it possible to invite panelists to a survey just after a specific event has occurred (e.g. buying a flight ticket). By contacting respondents closer to the “moment of truth”, they might remember the event better and report on it more accurately.
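Once the event of interest has been detected in the passive data (e.g. a ticket purchase), the invitation logic reduces to a simple time-window check. A minimal sketch, with the 48-hour constant mirroring the design described below:

```python
from datetime import datetime, timedelta

INVITATION_WINDOW = timedelta(hours=48)

def should_invite(event_time, now=None):
    """True while the triggering event (e.g. a detected ticket purchase)
    is still within the 48-hour invitation window."""
    now = now or datetime.now()
    return now - event_time <= INVITATION_WINDOW
```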


Experimental design

The target population of the study included all people who had visited the website of at least one of the most common airline companies in Spain, and/or purchased a flight there, during a two-month timeframe. We asked them about the airline company and the last flight purchased, along with background questions about the survey context and evaluation.

We compared four groups according to the sample selection method (see above): using filters only (‘Group 1’), using filters and profiling information (‘Group 2’), using filters and passive data (‘Group 3’), and ‘in-the-next-48-hours’ (‘Group 4’). In the latter, respondents were contacted within 48 hours after the visit or purchase occurred.


Main results

In summary, we tested three alternative sampling methods (profiling, passive data, and “moment-of-truth” invitations) against the classic design based on filter questions, and we observed the effect each method had on different variables related to survey quality. The main results are summarized below (see the publication for a detailed explanation).


  1. Fieldwork efficiency (i.e. ‘Incidence’): defined as the number of valid participations (respondents matching the target population who finished the survey) divided by the total number of respondents who finished the survey. Incidence improved by over 25% when using profiling or passive data for the sample selection compared to the classic design. Doing the survey within the following 48 hours improved fieldwork efficiency even further (29%). See the sketch after this list for how these metrics can be computed.
  2. Participation rates: In comparison to Group 1, the participation rate increased by between 27 and 44 percentage points and dropout rates fell by between 3 and 5 percentage points when using profiling data to select the sample, and even more when using passive data. However, doing the survey in the following 48 hours did not improve participation further.
  3. Data quality and accuracy: We found little evidence of improvements in data quality (measured as the proportion of ‘Don’t know’ answers) or accuracy (measured as the distance between survey answers and an objective source of information). However, the new methods did not harm data quality or accuracy either.
  4. Survey evaluation: Both groups selected using passive data (Groups 3 and 4) showed a more positive survey evaluation, which suggests that metered panelists have, in general, a more positive attitude towards surveys.
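For reference, here is a minimal sketch of the metrics behind points 1 and 2. The exact operationalization of the participation and dropout rates here is an assumption; see the publication for the formal definitions.

```python
def incidence(valid_participations, total_finished):
    """Valid participations (target respondents who finished the survey)
    divided by all respondents who finished it."""
    return valid_participations / total_finished

def participation_rate(started, invited):
    """Respondents who started the survey over panelists invited (assumed)."""
    return started / invited

def dropout_rate(abandoned, started):
    """Respondents who abandoned the survey over those who started (assumed)."""
    return abandoned / started
```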



Going back to the questions that opened this post: we found that using additional information from profiling or passive data improves the participation rate and fieldwork efficiency without hurting data quality or accuracy. However, trying to contact panelists within 48 hours after an event of interest does not really seem worth it: while it improves incidence, it also makes the fieldwork longer and more complex to handle. Nevertheless, doing research truly in the moment (rather than within a 48-hour window) might yield larger improvements. Keep an eye out for more of our publications, as we will revisit this question.

