Best Practices for Survey Design
Dave Vanette, Principal Research Scientist, Qualtrics
2 TECHNIQUES TO AVOID MOST QUESTIONNAIRE PROBLEMS
1. Use best practices from the extensive survey methodology literature!
2. Pre-test your survey!
Satisficing
A set of strategies that respondents use in order to avoid engaging with survey questions.
Forms of satisficing behavior
• Selecting the first reasonable response
• Agreeing with assertions
• Straightlining (flagged in the sketch below)
• Saying “don’t know”
• Responding completely at random
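Behaviors like straightlining leave recognizable fingerprints in the data. As a minimal, illustrative sketch (assuming responses are exported to a pandas DataFrame with one column per item of a rating grid, all on the same scale; the column names below are hypothetical), zero within-grid variation can be used to flag likely straightliners for review:

```python
import pandas as pd

def flag_straightliners(responses: pd.DataFrame, grid_columns: list[str]) -> pd.Series:
    """Mark respondents who gave the identical answer to every item
    in a rating grid (no variation across the battery)."""
    return responses[grid_columns].nunique(axis=1) == 1

# Hypothetical export: q1_a..q1_d are items from one rating battery.
df = pd.DataFrame({
    "q1_a": [5, 3, 4],
    "q1_b": [5, 2, 4],
    "q1_c": [5, 4, 4],
    "q1_d": [5, 1, 4],
})
df["straightlined"] = flag_straightliners(df, ["q1_a", "q1_b", "q1_c", "q1_d"])
print(df)
```

A flag like this is a screening aid, not proof of satisficing; some respondents legitimately hold uniform views across a battery.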
Combating satisficing
There are two primary components that we can influence as researchers to reduce satisficing:
Task difficulty
• Make questions as easy as possible
• Minimize distractions
• Keep the duration short
Respondent motivation
• Leverage survey importance
• Keep the duration short
• Use incentives and encouragement to increase engagement
Response options
• Open vs. closed questions
• Ranking vs. rating
• Number of scale points
• Construct-specific scales
• Labels on scale points
Open questions
Ask open questions whenever you cannot be certain of the universe of possible answers to a categorical question.
• “Other – specify” does NOT work
• The only way to be sure you know the universe of possible answers is to pre-test the question extensively
• Ask open questions whenever eliciting a number
Responses to open questions are often more reliable and more valid.
Open questions
Costs:
• They take more time for respondents
• Respondents often don’t provide much of use in web surveys
• You have to code the responses (see the coding sketch below)
• Variance and/or bias
• More work for you
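To illustrate the coding cost, here is a deliberately simple sketch (the codebook, keywords, and responses are hypothetical) that assigns categories to open-text answers by keyword matching. Real coding schemes are developed and validated by human coders, which is where the extra work, and the potential variance or bias, comes from:

```python
import re

# Hypothetical codebook: category -> keyword pattern.
CODEBOOK = {
    "price": r"\b(price|cost|expensive|cheap)\b",
    "support": r"\b(support|help|service)\b",
}

def code_response(text: str) -> list[str]:
    """Assign zero or more codes to one open-text answer."""
    codes = [code for code, pattern in CODEBOOK.items()
             if re.search(pattern, text.lower())]
    return codes or ["uncoded"]

print(code_response("The support team was great, but it is too expensive"))
# -> ['price', 'support']
```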
Ranking
Evaluating relative performance, importance, preference, etc.
“Rank the following political parties in order of most preferred to least preferred”:
• Republican
• Democrat
• Independent
Ranking
Methods of ranking:
• Full ranking of all objects
• Partial ranking: e.g., the 3 most important items, or the most and least important
• Minimal ranking: e.g., the most important item only
• The number of items to be ranked needs to be small, or respondents should rank only the items at the ends of the distribution
Ranking
Benefits of ranking:
• Allows/forces absolute comparisons
• Non-differentiation isn’t a problem
• Reliability is high
Costs of ranking:
• Difficult cognitive task, especially if all of the items are quite different, or all very desirable or undesirable
• Can be time-consuming
• Analysis is more complicated (a brief analysis sketch follows below)
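As a small example of why ranking data need extra care at analysis time, here is a sketch (assuming each respondent provides a full ranking stored as an ordered list of item labels; the data are made up) that summarizes rankings by mean rank position:

```python
from collections import defaultdict

def mean_ranks(rankings: list[list[str]]) -> dict[str, float]:
    """Average rank position per item (1 = most preferred).
    Assumes every respondent ranked the same full set of items."""
    totals: dict[str, int] = defaultdict(int)
    for ranking in rankings:
        for position, item in enumerate(ranking, start=1):
            totals[item] += position
    return {item: total / len(rankings) for item, total in totals.items()}

# Hypothetical full rankings from three respondents.
data = [
    ["Democrat", "Independent", "Republican"],
    ["Republican", "Democrat", "Independent"],
    ["Democrat", "Republican", "Independent"],
]
print(sorted(mean_ranks(data).items(), key=lambda kv: kv[1]))
```

Partial or tied rankings need different summaries (for example, top-choice shares), which is part of the added analytic complexity.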
Rating
“How much did you learn from the survey design breakout?”
• Learned a great deal
• Learned a lot
• Learned a moderate amount
• Learned a little
• Learned nothing at all
Rating
Benefits of rating:
• Easier for respondents
• Easier to analyze the data
• Preferred by respondents
Costs of rating:
• Less effort may lead to lower data quality
• Responses are less reliable over time
• Susceptible to response styles
  • Avoiding the ends of scales, acquiescence, etc.
• May lead to correlated response patterns
Ranking vs. Rating
What to do?
• When life forces choices, use ranking
• Otherwise use ratings (but be aware of the straightlining risk)
Number of scale points
Goals:
• Differentiate between meaningful levels of a construct
• Avoid ambiguity between scale points
• Maximize reliability
Number of scale points
Use bipolar scales for bipolar constructs, with 7 points
• (e.g., Extremely good – Extremely bad)
Use 5-point unipolar scales for unipolar constructs
• (e.g., Instructor cared a great deal – Instructor didn’t care at all)
Number of scale points
Use middle alternatives, especially with bipolar scales.
Use branching to get more detailed bipolar measures (a combination sketch follows below):
“Generally speaking, do you consider yourself to be a Democrat, Republican, Independent, or what?”
• Would you say you are a strong (X) or weak (X)?
• Would you say you lean toward one party or the other? (for Independents)
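As a sketch of how the branched answers can be combined into one more detailed bipolar measure, the function below maps the initial choice plus the follow-ups onto a 7-point scale; the exact coding shown is illustrative, not an official recode:

```python
def party_id_7pt(initial: str, strength: str | None = None, lean: str | None = None) -> int:
    """Combine branched answers into a 7-point party ID score
    (1 = strong Democrat ... 7 = strong Republican). Illustrative coding."""
    if initial == "Democrat":
        return 1 if strength == "strong" else 2
    if initial == "Republican":
        return 7 if strength == "strong" else 6
    # Independents (and "or what") branch on which party, if any, they lean toward.
    if lean == "Democrat":
        return 3
    if lean == "Republican":
        return 5
    return 4  # does not lean toward either party

print(party_id_7pt("Democrat", strength="strong"))     # 1
print(party_id_7pt("Independent", lean="Republican"))  # 5
```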
Use construct-specific response scales whenever possible
Generic Likert (avoid):
“Qualtrics cares about the success of their clients”
• Strongly agree
• Agree
• Neither agree nor disagree
• Disagree
• Strongly disagree
Use construct-specific response scales whenever possible
“How much does Qualtrics care about the success of their clients?”
• Cares a great deal
• Cares a lot
• Cares a moderate amount
• Cares a little
• Does not care at all
Labeling scale points
Goals:
• Respondents should find it easy to interpret the meanings of the scale points
• Respondents should believe the meaning of each scale point to be clear
• All respondents should interpret the meanings of the scale points identically
• The labels should differentiate respondents from one another as validly as possible
• The resulting scale should include points that correspond to all points on the underlying construct’s continuum
Labeling scale points
• Numbers alone are ambiguous – it is generally best to omit them
• Label all scale points – labeled points may attract respondents if only some points have labels (see the check sketched below)
• Respondents presume that scale points are equally spaced along the underlying construct continuum – reinforce this with labels
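If scales are assembled programmatically (for example, from a spreadsheet of question metadata), a quick check like the purely illustrative sketch below can catch points that are missing verbal labels or are labeled only with numbers:

```python
def check_scale_labels(labels: list[str]) -> list[str]:
    """Return warnings for scale points with missing or number-only labels."""
    warnings = []
    for position, label in enumerate(labels, start=1):
        text = label.strip()
        if not text:
            warnings.append(f"Point {position} has no verbal label")
        elif text.lstrip("+-").isdigit():
            warnings.append(f"Point {position} is labeled only with a number: {label!r}")
    return warnings

print(check_scale_labels(
    ["Cares a great deal", "Cares a lot", "3", "Cares a little", ""]))
# -> warnings for point 3 (numeric label) and point 5 (missing label)
```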
Question wording
Goals:
• Univocality
  • Only mention the construct that you want to measure
  • Avoid double-barreled questions
• Meaning uniformity
  • Each question should mean the same thing to all respondents
• Economy of words
  • Use as many words as are needed to convey the idea clearly to all respondents…and no more
Question wording
Word selection guidelines:
• Select words with one meaning (check a dictionary)
• Use simple words (few syllables)
• Use simple sentences (few words)
• Check readability scores (a rough scoring sketch follows below)
• Avoid homonyms (fare/fair)
• Avoid heteronyms (lead/lead)
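For the readability-scores bullet, here is a rough, self-contained sketch of the Flesch Reading Ease formula (the syllable counter is a crude vowel-group heuristic; a dedicated readability library will be more accurate):

```python
import re

def estimate_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher is easier; roughly 60+ reads as plain language."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    word_count = max(1, len(words))
    syllables = sum(estimate_syllables(w) for w in words)
    return 206.835 - 1.015 * (word_count / sentences) - 84.6 * (syllables / word_count)

# Short, concrete question wording scores as easy to read.
print(round(flesch_reading_ease(
    "How much did you learn from the survey design breakout?"), 1))
```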
Question wording
In general, questions should be worded to:
• Be simple, direct, and comprehensible
• Not use jargon
• Be specific and concrete (rather than general and abstract)
• Avoid ambiguous words
• Avoid double-barreled questions
• Avoid negations
• Avoid leading questions
• Include filter questions
• Read smoothly when spoken aloud
• Avoid emotionally charged words
• Allow for all possible responses
You’ve just built or edited a questionnaire…now what?
Send it out for review
• Collaborators, colleagues, friends, experts, etc. can all help catch problems that you didn’t notice
Do a few cognitive interviews
• “How did you get to that response?”
Pre-test
• Always pre-test a new questionnaire on non-experts, even if it’s only been edited
What does pre-testing look like?
1. Any pre-testing is better than none
   ▪ Friends, colleagues, people in this room, non-experts
   ▪ A small sample of respondents
2. At the very least, you’ll catch glaring errors
   ▪ Typos, broken skip logic, question/response option mismatches, etc.
3. But hopefully you’ll get some qualitative feedback, too
   ▪ What was confusing? None of this
   ▪ What was difficult? Less of this
   ▪ What was easy? More of this
Review
1. Satisficing is a big threat – don’t enable it with your questionnaires
2. Choose the right response format for your research
   • Open text
   • Ranking
   • Rating
   • Use the right number of scale points
   • Use construct-specific response options
   • Verbally label your scale points
3. Question wording matters a lot – be deliberate
Further reading
1. “Survey Research” by Krosnick (Annual Review of Psychology, 1999)
2. “The Psychology of Survey Response” by Tourangeau, Rips, & Rasinski (2000)
3. “The Science of Asking Questions” by Schaeffer & Presser (Annual Review of Sociology, 2003)
4. “Thinking About Answers” by Sudman & Bradburn (1996)
5. “Question and Questionnaire Design” by Krosnick & Presser (in the Handbook of Survey Research, 2010)
6. “Answering Questions: A Comparison of Survey Satisficing and Mindlessness” by Vannette & Krosnick (The Wiley-Blackwell Handbook of Mindfulness, 2014)
7. “The Palgrave Handbook of Survey Research” by Vannette & Krosnick (forthcoming from Palgrave this year)
THANKS!