This is the third part of a series on cognitively accessible survey design.
The first part of this series discussed the process of optimizing and considered how understanding and designing for survey respondents’ motivations can increase your chances of receiving optimized responses. The second part looked at some common access barriers that survey respondents may encounter when optimizing, and suggested some ways to mitigate them.
This third and final part will look at some factors that can affect the readability of a questionnaire as well as common access conflicts that may arise, and suggest some workarounds for both.
Readability

A variety of factors can make questionnaires difficult to read for respondents with sensory processing differences or cognitive access needs.
Likert scales can pose difficulties for screen reader users. Screen readers parse these questions as tables, which means they need to be marked up with proper row and column headers to create a logical reading order. Depending on the survey platform you are using, it may or may not be possible to ensure that this format is screen reader-accessible.
Even if it is, Likert scales can be difficult to visually parse for some people who are Autistic or dyslexic, or who have eye movement disorders. These respondents may struggle to track visually across rows or to differentiate between them, making it difficult to be sure their ratings correspond to the right items. This increases cognitive load for respondents and can produce inaccurate survey data.
Instead of using typical Likert scales, consider using a series of single-select multiple-choice questions to capture the same information.
People who experience access barriers related to readability may struggle with parsing forms in general. For these users, it is helpful to isolate distinct form elements using visual signifiers and blank space. One strength of digital forms, including most survey platforms, is the option to create page breaks.
Place question blocks, or even individual questions, on separate pages. Use bold and italic text, headings, and bulleted lists to visually differentiate explanatory information, such as explanations of rating criteria.
When breaking up a long question block across multiple pages, it is a good idea to repeat relevant information about the questions, such as context or explanations of rating frameworks, at the top of each page. I provide an option at the start of the survey that respondents can select to forgo these repeated explanations if they prefer a more succinct experience.
Balancing Access Conflicts
Many choices that make a questionnaire more accessible for some respondents can pose access barriers for others. It is good practice to identify these conflicts and to eliminate or mitigate the resulting barriers wherever possible.
Designing question blocks with a focus on supporting respondents to express themselves fully can result in a very long questionnaire, which requires a greater time commitment and more cognitive effort to complete. This can be mitigated by making expressive questions optional and separating them into distinct, clearly identified blocks on separate pages, so that respondents who want a more succinct experience can easily skip over them. Respondents should be informed of this possibility in the introduction to the questionnaire.
Repeating definitions, explanations and contextual information at the top of question blocks can be helpful for respondents with executive or cognitive dysfunction. However, it can also be distracting or tiring for respondents who use screen readers or who struggle with visual processing. One way to navigate this conflict is to include a question at the beginning of the survey allowing respondents to indicate whether they would like explanatory information to be repeated at the top of each page or question block. Survey logic can then be implemented so that, if they select no, the survey skips over those repeated sections.
Similarly, some respondents may benefit from having detailed explanations about certain terms and concepts, while others may find this overwhelming. This can be addressed by including a question at the start of the survey asking whether they prefer a more comprehensive or concise survey experience, with survey logic that skips over the detailed explanations if they choose the latter.
It is not realistic to assume that any space, event, document or interface can be made “fully accessible,” not only because of access conflicts, but because any space, event, document or interface may be fundamentally inaccessible to someone. Therefore, while care should be taken to make a project’s primary mode of delivery as accessible as possible, it should also be recognized that there may be people who require an alternative mode of delivery, such as a verbal or email interview. These should be offered whenever possible, and their availability should be made known to prospective participants during the recruitment phase.