Reading Time: 15 minutes
Remote Research Overview

Fundamentally, remote research involves the researcher and participant being in two separate locations. At present, remote research is undoubtedly the safest option during the COVID-19 pandemic. Video meeting platforms such as Zoom and WebEx have clearly demonstrated their worth to researchers by keeping a much-needed point of contact with participants and stakeholders viable.

Other platforms such as Lookback, Loop 11, and Recollective provide excellent options for conducting qualitative, remote research. While some of the more advanced features can be costly, these platforms offer unique notetaking and analytic tools that can expand and expedite how your researchers work. Research Collective has looked into these tools — and several others — over the past few months, and has identified multiple ways to use them effectively to conduct research for clients during the COVID-19 pandemic.

Here are a few general recommendations that should help make your remote research run more smoothly:

Remote Usability Testing (One-On-One Interviews)

Remote usability tests usually involve one researcher and a participant. Sometimes, this paradigm is also referred to as a one-on-one interview or in-depth interview (IDI). In many ways, a remote usability test mimics the dynamic that researchers and designers follow in an in-person setting. However, there are a few things that should be modified from this traditional approach to make a remote usability test run successfully across a sample of participants.

We recommend that before you plan your remote usability test, you evaluate the platforms you have access to, as well as your research questions and goals. For straightforward studies in which only a few basic platform capabilities are needed, consider using a readily available video conferencing platform. These basic capabilities include audio, video, and screen sharing. Using a lesser-known platform may lead to skepticism and distrust amongst prospective participants, depending on how they found out about the study in the first place (e.g., Craigslist vs. word of mouth from a colleague).

More so than in an in-person usability test, participants need clear expectations and setup instructions to get started on the platform you selected. Keep the instructions brief and use simple language that all participants can understand, regardless of their reading level. In cases where participants need to download a plug-in or a digital prototype, consider providing a pictorial walkthrough of what the download process should look like. This can be especially useful for less tech-savvy participants, who may abandon the study altogether if they can’t figure out how to get started quickly enough.

Design the remote usability test session to be as brief as possible, while still serving your research goals. Finding this balance may require some internal testing of your study with colleagues or friends, as well as exploring pitfalls or topics that might slow down a participant. For example, the researcher might expect the initial introductions and “building rapport” part of the study to take less than five minutes, when in a real session these open-ended, “softball” questions can quickly get out of hand with some participants.

Be sure to print out a shortlist of prompts or statements that you can refer to in order to keep participants on track. Through a video-based platform, it can be difficult for participants to pick up on the facial cues (e.g., breaking eye contact) or other utterances you might use in person to shift the conversation to a new topic. Your efforts should be more direct, while maintaining civility.

Task instructions should be written in brief, simple, straightforward language. If a task involves multiple parts, consider breaking up these parts and displaying them (or saying them) separately as needed. Just like an in-person usability study, a remote usability study should have the task instructions presented at all times for quick reference. There are several ways to do this through a simple platform such as Zoom or WebEx. For example, you could use the “chat” feature to present the task instructions. Or, you could print out each task on a separate page and place it in front of your webcam as needed. These strategies free up the screen sharing feature so you can keep an eye on the participant’s screen (if needed).

Lastly, be mindful of your study honorarium (i.e., participant compensation). As a guideline, remote research commands lower incentive amounts than in-person studies, because participants don’t need to travel, pay for parking, or potentially even take time off work. For a straightforward remote usability study with a sample of general-population users, consider an incentive range of $15 – $25 per 15-minute increment. A 60-minute remote usability test, for example, would run about $60 – $100 per participant. The upper end of this recommendation might come into play if your study requires extra effort by the participants, such as using multiple tools, sharing documents, or downloading a lot of content or prototypes. This extra honorarium may be necessary to convince prospective participants that the study is worth the effort.
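The incentive math above reduces to a small helper. This is a sketch of the guideline, not a standard formula; the function name and defaults are our own illustration:

```python
def honorarium_range(session_minutes, low_per_15=15, high_per_15=25):
    """Estimate a per-participant incentive range for a remote study,
    using the rule of thumb of $15 - $25 per 15-minute increment."""
    increments = session_minutes / 15
    return (increments * low_per_15, increments * high_per_15)

# A 60-minute remote usability test:
low, high = honorarium_range(60)
print(f"${low:.0f} - ${high:.0f} per participant")  # $60 - $100
```

Adjust the per-increment defaults upward for specialized populations or high-effort studies.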

Surveys & Questionnaires

To clear things up: the terms “surveys” and “questionnaires” mean different things to different people. For our purposes, we use the two terms interchangeably.

Surveys and questionnaires should be an integral and ongoing part of every UX and Human Factors group. They can be used in their simplest form to collect ongoing data about your website’s Net Promoter Score (NPS), or to evaluate how your product performs on the System Usability Scale (SUS). If you aren’t familiar with the SUS, we’ve written a popular blog article on SUS that many professionals continue to use as a reference. In more complex forms, surveys and questionnaires can also serve as the cornerstone of diary studies: studies in which participants complete the same set of questions at the end of a predetermined time interval (e.g., hourly, daily, weekly).
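Both metrics mentioned above boil down to simple arithmetic. Here is a minimal sketch of the standard scoring rules for NPS and the SUS (the sample responses are illustrative):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on a 0-10 "how likely are you to recommend..." scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def sus(responses):
    """System Usability Scale: 10 items, each rated 1-5.

    Odd-numbered items contribute (response - 1); even-numbered items
    contribute (5 - response). The sum is scaled by 2.5 to yield 0-100.
    """
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

print(nps([10, 9, 8, 7, 6, 3]))                  # 2 promoters, 2 detractors -> 0.0
print(sus([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))       # best-case responses -> 100.0
```

NPS can range from -100 to +100; SUS scores are commonly interpreted against the oft-cited average of 68.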

A survey usually consists of a mix of closed- and open-ended question types. The former refers to questions where the participant must choose one (or more) options from a list of responses the researcher has defined ahead of time; the latter lets the participant write in his or her own response(s). Of course, there are question types that combine closed- and open-ended elements as well.

Common closed-ended question formats include yes-no questions, multiple-choice questions, semantic differential questions, and Likert-type questions. For the sake of brevity, this article does not cover the full spectrum of question types and variations that are possible.

Here are a few general recommendations for the development and structure of a survey:

  • Design survey content to work equally well on mobile devices. Most participants will opt to complete surveys on a mobile device if you give them the option. Make sure that your survey layout is equally readable on both desktop and mobile platforms. Even systems like Qualtrics and SurveyGizmo can struggle to present all of their question types on a smaller mobile screen. Question types such as rank-ordering items and matrices are especially prone to layout problems. Make sure to test your survey on the smallest screen size you expect participants to use. You may have to use different question types if your first choice won’t present well.
  • Avoid nesting components in the same question (or statement). Here’s an example of a survey question with too much going on in it: “Please rate your level of AGREEMENT or DISAGREEMENT with the following statement: the controls have a smooth, metallic surface that I enjoy interacting with.” There are at least three components nested in this statement. The researcher really wants to know if the participant enjoys interacting with the control. But they’ve made these other assumptions that render the statement useless from an analytic standpoint. How do you know which part of this statement the participant has focused on? All of it? Part of it? Which part? The risk for variability between participants’ interpretations is too great to put faith in the results.
  • Use simple, understandable language. Often, when we aren’t sure exactly what we want to say in a question, we use a word that is unnecessarily complex or has an ill-defined meaning to give us a bigger “target”. For example, the term “interesting” has so many meanings, depending on the context of use and the speaker’s tone, that it really means nothing at all. Similar to the dilemma described above, the participant has too much room for interpretation. If you’re not sure whether the language you’ve used is too high-brow (or low-brow), use a readability tool (e.g., the Flesch-Kincaid readability assessment) to evaluate the reading level of your text automatically.
  • Consider how question order might affect responses. Sometimes the direction your questions take can make it difficult to hide your research’s purpose. In some cases, this can be damaging to the researcher because it may bias participants to be more (or less) favorable toward your research objective. A good practice to follow is to start as broadly as possible, and become increasingly specific as participants answer more questions. In other situations, it might be more valuable to randomize question order altogether, or use a counterbalanced or Latin Square design. Think through the strengths and weaknesses of each option with your colleagues, and be sure to test the survey internally to get feedback before launching it to your sample of participants.
  • Double-check your question or survey logic. This is one of the more tedious aspects of building a survey, but it is essential to making your survey actually work. A simple logic error can prevent participants from completing the survey, or allow them to see questions that they shouldn’t.
  • Evaluate whether all of your questions are “required”. Some responses are helpful, but not necessary to address your research questions. If it turns out that some of your questions are non-essential, many survey platforms have a way to indicate which questions are required or not. This can help reduce some of the burden placed on participants. Question types such as open-ended responses can be the Achilles’ heel of a well-designed survey; some participants get burnt out when they see these questions and abandon the survey altogether. This is especially true amongst participants who are completing the survey from their mobile device. On the other hand, those with access to a physical keyboard may not mind as much.
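The counterbalancing idea above can be sketched with a simple cyclic Latin square: across n participants, every question appears in every serial position exactly once. (This balances position effects but not carry-over effects; a balanced Latin square would be needed for the latter. The function name and question labels are our own illustration.)

```python
def latin_square_orders(questions):
    """Generate a cyclic Latin square of question orders.

    Row i presents the questions rotated by i positions, so across
    len(questions) participants, every question appears in every
    serial position exactly once."""
    n = len(questions)
    return [[questions[(i + j) % n] for j in range(n)] for i in range(n)]

for order in latin_square_orders(["Q1", "Q2", "Q3", "Q4"]):
    print(order)
# ['Q1', 'Q2', 'Q3', 'Q4']
# ['Q2', 'Q3', 'Q4', 'Q1']
# ['Q3', 'Q4', 'Q1', 'Q2']
# ['Q4', 'Q1', 'Q2', 'Q3']
```

Assign each participant (or each group of participants) one row, cycling through the rows as your sample grows.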
(Remote) Asynchronous Focus Groups

Remote asynchronous focus groups (i.e., Bulletin Board Focus Groups — BBFGs) are an excellent way to present complex or in-depth topics to participants that might be too overwhelming to manage during a live (i.e., synchronous) focus group — remote or in-person. Often, an asynchronous focus group is conducted over at least two days through an online platform. Participants generally respond via text to topics or questions presented by the study moderator. The study moderator presents new topics or questions to the sample as a whole at specific time intervals (e.g., every 2 hours). Participants can choose to complete these questions at their leisure throughout the day, taking extra time to think about and respond to each question. On some platforms, users can also post pictures or videos if needed. In our experience, audio and video aren’t necessary for a good study, but they can be useful if participants need to complete sets of tasks or perform a think-aloud exercise.

One valuable aspect of this type of study is that the study moderator can set the platform rules to prevent participants from seeing each other’s responses to a topic until they create an original response. In other words, I cannot read what you’ve posted until I post something first. This feature helps keep participants honest. There’s no copying and pasting going on, and there is limited bias due to “groupthink” — at least for the initial question presented by the moderator.

Once a participant posts their response, they can see what others have said. Then, they can comment on each other’s posts, seek clarification, or ask their own questions (if permitted).

Our experience is that remote asynchronous focus groups can be a better way to conduct studies with hard-to-reach or geographically dispersed users. Plus, since there is no situational urgency to respond as quickly as possible (as in a live focus group), the asynchronous format allows participants more time to think about what they want to say and how to say it. This can result in better qualitative data to draw from for reports and stakeholders.

There are a variety of asynchronous focus group platforms available to suit your organization’s needs. A couple of our favorites include Recollective (no affiliation with Research Collective) and Loop 11. We’ve also had great experiences collaborating with Peacock 9, which offers a unique, in-house BBFG platform of its own.

(Remote) Synchronous Focus Groups

Remote focus groups can yield results similar to in-person focus groups, but they require consideration of a few other factors to make the study run effectively. In fact, many of the tips and tricks discussed in the “Remote Usability Testing” section of this article (see above) should be incorporated into a remote focus group. Beyond those recommendations, here are a few more tips to get the most out of a remote synchronous focus group:

  • Online focus groups should involve fewer participants than in-person focus groups. When it comes to live, remote focus groups, it’s not always clear when each person should speak. Even when video is enabled, we lack the instantaneous facial cues, body language, and sounds that we rely on during face-to-face conversations to prevent talking over each other. As a result, you lose a fair amount of time to delays and uncertainty as your sample size increases. Research Collective recommends that a remote focus group be scheduled to allow at least 15 minutes per person in the group. An hour-long study, for example, would permit enough “speaking time” for about 4 participants. If you need more participants, consider increasing your session time accordingly.
  • Set up a virtual waiting room. Squaring away all of the participants at the beginning of a remote focus group can feel a lot like herding cats when you don’t have a neutral space for them to wait before the study begins. Without a waiting room, late participants (and yes, there will be late ones) will disrupt the beginning of your study instructions. You’ll end up repeating yourself multiple times, potentially leading to participant confusion and frustration.
  • Provide a brief tutorial of important platform features and study etiquette at the beginning of the session. While Zoom and WebEx are common industry tools, not everyone will understand how to use them. In many cases, this study experience will be the participant’s first exposure to these tools. You can minimize disruptions and churn during the study by walking participants through key features, such as how to mute and unmute the mic. It’s also useful to set the ground rules for “good” study etiquette, such as asking participants to mute their mic when they aren’t speaking, or to be mindful of letting everyone speak. This can help temper the few loud voices in the group, while building up the confidence of those who may shy away from speaking in front of the group.
  • Building rapport takes longer in a remote setting. Consider setting aside extra time at the beginning of your remote focus group to build rapport with participants (and for them to build rapport with one another). Unlike an in-person study where the moderator can physically stand up to command attention or redirect the group, the moderator in a remote setting gets lost amongst the boxes of talking heads. As a result, it’s hard to establish your role as the person responsible for directing the conversation. Use the first 5 – 10 minutes of your session to thoroughly explain the purpose and expectations of the study. This helps build participant confidence in you, and it helps participants learn the tone of your voice. That way, they will know to listen when you speak up to redirect a topic that is heading off the rails.
  • Use a high-quality microphone or conference phone. It should go without saying that as the study moderator, you should eliminate all background noise on your side. But equally important is using a voice-capturing tool that will provide a clear signal of your voice to participants. In some cases, this might require using a landline connection if your home office has spotty cellular reception. It may be okay to wear a set of headphones, but some participants might find it unprofessional. If you do go this route, consider an inconspicuous wireless option with a reputation for excellent call quality (we like the Jabra Elite Active 65t).
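The speaking-time guideline above reduces to simple arithmetic; a minimal sketch, with the function name and defaults as our own illustration:

```python
def max_participants(session_minutes, minutes_per_person=15):
    """Rough cap on remote focus group size, per the guideline of
    allowing at least 15 minutes of speaking time per participant."""
    return session_minutes // minutes_per_person

print(max_participants(60))  # 4 participants for an hour-long session
print(max_participants(90))  # 6 participants for a 90-minute session
```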
Educational & Design Workshops

Now is a great time to focus on building up the knowledge and skills of the UX and Human Factors researchers and designers on your team. Investing in your employees will provide better quality and more consistent research efforts for years to come. Not to mention, educational workshops can be a great team-building opportunity, and a time when employees feel that their company has acknowledged their potential and desire to learn. Collaborative tools such as Mural and Stormboard are also useful and cost-effective ways to collaborate with others outside of your organization for design feedback.

Research Collective offers several customized educational workshop programs, ranging from fundamental topics such as usability testing to advanced topics such as the HFE process for medical devices seeking 510(k) clearance. Send us a message if you’d like to collaborate on a customized topic for your group.

