Remote Research Overview

Fundamentally, remote research involves the researcher and participant being in two separate locations. Right now, remote research is undoubtedly the safest option during the COVID-19 pandemic. Video meeting platforms such as Zoom and WebEx have demonstrated their worth by keeping a much-needed point of contact with participants and stakeholders viable.

Remote Research Platforms

Dedicated platforms such as Loopback, Loop 11, and Recollective provide excellent options for conducting qualitative remote research. While some features can be costly, these platforms offer unique note-taking and analytic tools that could expand and expedite how your researchers do their work. Research Collective has investigated these tools (and several others) and has identified multiple ways to use them effectively.

Recommendations

Here are a few general recommendations that should help your remote research run more smoothly:

Build in extra time to troubleshoot and launch the study platform.

Often, the biggest hurdle to remote research is simply getting the participant set up on the remote testing platform. Even with “simple” platforms like Join.Me, it may take an inexperienced computer user 15 – 30 minutes to download the appropriate plugins and launch the tool. This can eat up valuable research time and leave you rushing at the end of your session.

Re-evaluate the scope of your research.

Scope creep is common when there are few barriers against asking more questions at the end of a study. In an in-person setting, a longer session may be appropriate if you give participants a break. However, breaks are problematic during remote research (e.g., a participant may be unable to reconnect to the session). Keeping the study’s scope in check may lead to shorter sessions overall.

Avoid or limit the need for participants to download content in order to participate in a study.

People are generally skeptical about downloading content onto their phone or computer, especially when that request is coming from a company they just heard about through a recruiting advertisement.

If possible, use a meeting platform (e.g., Zoom, WebEx, FaceTime) that the participant already knows and trusts. If they do have to download content (e.g., prototype for testing), make the process as streamlined as possible. This can be especially useful among older adults who may not be as tech-savvy.

Remote Usability Testing (One-On-One Interviews)

Remote usability tests usually involve one researcher and a participant – a one-on-one interview or in-depth interview (IDI). A remote usability test mimics the dynamic that researchers and designers follow in an in-person setting. However, there are a few things that must be modified to make a remote usability test run successfully.

Before you plan your remote usability test, evaluate the platforms you have access to, as well as your research questions and goals. For straightforward studies in which only basic platform capabilities are needed, consider using a readily available video conferencing platform.

These basic capabilities include things such as audio, video, and screen sharing. Using a lesser-known platform may lead to skepticism and distrust amongst prospective participants.

Expectations and Instructions

Participants need clear expectations and setup instructions to get started on the platform you selected. Keep the instructions brief and use simple language that participants can understand, regardless of their reading level. In cases where participants need to download a plug-in or a digital prototype, consider providing a pictorial walkthrough. This can be especially useful for less tech-savvy participants, who may abandon the study altogether if they can’t figure out how to get started quickly enough.

Duration

Design the remote usability test session to be as brief as possible while still serving your research goals. Finding this balance may require some pilot testing. For example, you might expect the introductions and “building rapport” part of the study to take five minutes, when in a real session these open-ended questions can quickly get out of hand with some participants.

Keep a shortlist of prompts or statements that you can refer to in order to keep participants on track. It is difficult for video participants to pick up on the facial cues (e.g., breaking eye contact) or utterances you would use in person to shift topics. Your redirections should be more direct, while remaining polite.

Instructions

Make your task instructions brief, simple, and straightforward. If a task involves multiple parts, break them up and display them (or say them) separately as needed. Just like an in-person usability study, a remote study should have the task instructions available at all times for quick reference.

There are several ways to do this through a simple platform such as Zoom or WebEx. For example, you could use the “chat” feature to present the task instructions. Or, you could print out each task on a separate page and place it in front of your webcam. These strategies free up the screen sharing feature so you can keep an eye on the participant’s screen (if needed).

Honoraria

Lastly, be mindful of your study honorarium (i.e., participant compensation). As a guideline, remote research commands lower incentive amounts than in-person studies because participants do not need to travel, pay for parking, or potentially take time off work. For a straightforward remote usability study with general-population users, consider an incentive of $15 – $25 per 15-minute increment.

A 60-minute remote usability test, for example, would run about $60 – $100 per participant. The upper end might apply to studies that require extra effort, such as using multiple tools, sharing documents, or downloading several prototypes. This extra honorarium may be necessary to convince participants that the study is worth the effort.
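To make the arithmetic explicit, here is a minimal sketch in Python of the guideline above; the helper function and its round-up behavior for partial increments are our own assumptions, not a fixed industry formula.

    import math

    def estimate_honorarium(session_minutes: int,
                            low_rate: float = 15.0,
                            high_rate: float = 25.0) -> tuple[float, float]:
        """Return the (low, high) honorarium range for a session, using
        the $15 - $25 per 15-minute increment guideline; partial
        increments round up (our assumption)."""
        increments = math.ceil(session_minutes / 15)
        return (increments * low_rate, increments * high_rate)

    # A 60-minute session spans four increments: $60 - $100 per participant.
    print(estimate_honorarium(60))  # (60.0, 100.0)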

Surveys & Questionnaires

One note on terminology: “surveys” and “questionnaires” mean different things to different people. For our purposes, we use the two terms interchangeably.

Surveys and questionnaires should be an integral and ongoing part of every UX and Human Factors group. They can be used to collect ongoing data about your website’s Net Promoter Score (NPS), or to evaluate how your product performs on the System Usability Scale (SUS).

If you aren’t familiar with the SUS, we’ve written a popular blog article on it that many professionals use as a reference. In more complex forms, surveys and questionnaires can also serve as the cornerstone of diary studies; that is, studies in which participants complete the same set of questions at the end of a predetermined time interval (e.g., hourly, daily, weekly).

A survey usually consists of both closed- and open-ended question types. The former refers to questions where the participant chooses one (or more) options from a predefined list. The latter enables the participant to write in their own response(s). Of course, there are question types that combine closed and open-ended elements as well.

Closed-ended question formats include: yes-no questions, multiple-choice questions, semantic differential questions, and Likert-type questions. For the sake of brevity though, this article does not cover the full spectrum of question types and variations that are possible.
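To make the distinction concrete, here is a minimal sketch of how closed- and open-ended questions might be represented in code; the class and field names are hypothetical and not tied to any survey platform’s API.

    from dataclasses import dataclass

    @dataclass
    class ClosedQuestion:
        prompt: str
        options: list[str]           # participant picks from a predefined list
        allow_multiple: bool = False

    @dataclass
    class OpenQuestion:
        prompt: str                  # participant writes a free-text response

    likert = ClosedQuestion(
        prompt="The checkout process was easy to use.",
        options=["Strongly disagree", "Disagree", "Neutral",
                 "Agree", "Strongly agree"],
    )
    followup = OpenQuestion(prompt="What, if anything, made checkout difficult?")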

Questionnaire Recommendations

Here are a few general recommendations for the development and structure of a survey:

Design survey content to work equally well on mobile devices.

Most participants will opt to complete surveys on a mobile device if you give them the option. Make sure that the layout is equally readable on desktop and mobile platforms. Even systems like Qualtrics and SurveyGizmo struggle when it comes to presenting all question types on smaller mobile screens.

Question types such as rank-ordering items and matrices are especially prone to layout problems. Make sure to test your survey on the smallest screen size you expect participants to use. You may have to use different question types if your first choice doesn’t present well.

Avoid nesting components in the same question (or statement).

Here’s an example of a survey question with too much going on in it: “Please rate your level of AGREEMENT or DISAGREEMENT with the following statement: the controls have a smooth, metallic surface that I enjoy interacting with.” There are at least three components nested in this statement.

The researcher really wants to know whether the participant enjoys interacting with the controls. But they’ve baked in other assumptions that render the statement useless from an analytic standpoint. How do you know which part of the statement the participant focused on? All of it? Part of it? Which part? The risk of variability between participants’ interpretations is too great to put faith in the results.

Use simple, understandable language.

Often, when we aren’t sure exactly what we want to ask, we use a word that is unnecessarily complex or has an ill-defined meaning to give ourselves a bigger “target”. For example, the term “interesting” has so many meanings, depending on context and the speaker’s tone, that it really means nothing at all.

Similar to the dilemma described above, the participant has too much room for interpretation. Use a literacy tool (e.g., Flesch-Kincaid Readability assessment) to evaluate the reading level of your text automatically.
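If you want a quick, scriptable check, here is a minimal sketch of the Flesch-Kincaid grade-level formula in Python; the vowel-run syllable counter is a crude heuristic of our own, not part of the official assessment.

    import re

    def count_syllables(word: str) -> int:
        # Approximate syllables as runs of vowels (crude but serviceable).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fk_grade(text: str) -> float:
        """Flesch-Kincaid grade level:
        0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * (len(words) / sentences)
                + 11.8 * (syllables / len(words)) - 15.59)

    print(round(fk_grade("Please rate how easy the controls were to use."), 1))

A common rule of thumb is to aim for roughly an eighth-grade level or below when surveying general-population samples.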

Consider how question order might affect responses.

The direction your questions take can sometimes reveal your research’s purpose, which can bias participants to be more (or less) favorable toward your research objective. A good practice is to start as broadly as possible and become increasingly specific as participants answer more questions.

In other situations, it might be more valuable to randomize question order, or use a counterbalanced or Latin Square design. Think through the strengths and weaknesses of each option with your colleagues, and be sure to test the survey internally to get feedback before launching it out to your sample of participants.
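For the counterbalancing route, here is a minimal sketch of a simple cyclic Latin square in Python; the question labels are hypothetical placeholders, and note that a cyclic square balances position but not which question immediately follows another.

    def latin_square(items):
        """Return len(items) orderings in which each item appears in each
        position exactly once (a simple cyclic Latin square)."""
        n = len(items)
        return [[items[(row + col) % n] for col in range(n)] for row in range(n)]

    for order in latin_square(["Q1", "Q2", "Q3", "Q4"]):
        print(order)
    # Assign each participant (or group of participants) one row,
    # cycling through the rows as the sample fills.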

Double-check your question or survey logic.

This is one of the more tedious aspects of building a survey, but it is essential to making your survey actually work. A simple logic error can prevent participants from completing the survey, or expose questions that they shouldn’t see.
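As one example of an automated sanity check, here is a minimal sketch that validates skip logic before launch; the question IDs and the dictionary-based branching format are hypothetical, not any survey platform’s export format.

    # Each question maps answers to the next question ID (None ends the survey).
    SURVEY_LOGIC = {
        "q1": {"yes": "q2", "no": "q3"},
        "q2": {"any": "q3"},
        "q3": {"any": None},
    }

    def validate_logic(logic):
        """Flag branches that route to question IDs that don't exist."""
        errors = []
        for qid, branches in logic.items():
            for answer, target in branches.items():
                if target is not None and target not in logic:
                    errors.append(f"{qid}: answer '{answer}' routes to missing '{target}'")
        return errors

    print(validate_logic(SURVEY_LOGIC) or "Logic checks out.")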

Evaluate whether all of your questions are “required”.

Some responses are helpful, but not necessary to address your research questions. If it turns out that some of your questions are non-essential, many survey platforms have a way to indicate which questions are required or not. This can help reduce some of the burden placed on participants.

Question types such as open-ended responses can be the Achilles’ heel of a well-designed survey; some participants burn out when they see these questions and abandon the survey altogether. This is especially true amongst participants completing the survey on a mobile device. On the other hand, those with access to a physical keyboard may not mind as much.

(Remote) Asynchronous Focus Groups

Remote asynchronous focus groups (i.e., Bulletin Board Focus Groups, or BBFGs) are an excellent way to present complex or in-depth topics that might be too overwhelming to manage during a live (i.e., synchronous) focus group, whether remote or in-person. Often, an asynchronous focus group is conducted over at least two days through an online platform. Participants generally respond via text to topics or questions presented by the study moderator.

The study moderator presents new topics or questions to the sample as a whole at specific time intervals (e.g., every 2 hours). Participants can complete these questions at their leisure throughout the day, taking extra time to think about and respond to each one. On some platforms, users can also post pictures or videos if needed. In our experience, audio and video aren’t necessary for a good study, but they can be useful if participants need to complete sets of tasks or perform a think-aloud exercise.

One valuable aspect of this type of study is that the study moderator can set the platform rules to prevent participants from seeing each other’s responses to a topic until they create an original response. In other words, I cannot read what you’ve posted until I post something first. This feature helps keep participants honest: there’s no copying and pasting going on, and there is limited bias due to “groupthink”, at least for the initial question presented by the moderator.

Once a participant posts their response, they can see what others have said. Then, they can comment on each other’s posts, seek clarification, or ask their own questions (if permitted).
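Here is a minimal sketch of that “post before you read” rule; the data model and function are hypothetical illustrations, not any BBFG platform’s actual API.

    def can_view_responses(participant_id: str, topic: dict) -> bool:
        """A participant sees others' responses only after posting their own."""
        return any(post["author"] == participant_id for post in topic["posts"])

    topic = {
        "question": "How do you organize your weekly errands?",
        "posts": [{"author": "p1", "text": "I batch them on Saturdays."}],
    }

    print(can_view_responses("p1", topic))  # True: p1 has already posted
    print(can_view_responses("p2", topic))  # False: p2 must respond first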

Advantages of (Remote) Asynchronous Focus Groups

Our experience is that remote asynchronous focus groups can be a better way to conduct studies with hard-to-reach or geographically dispersed users. Plus, since there is no situational urgency to respond as quickly as possible (as there is in a live focus group), the asynchronous format allows participants more time to think about what they want to say and how it should be said. This can result in better qualitative data to draw from for reports and stakeholders.

There are a variety of asynchronous focus group platforms available to suit your organization’s needs. A couple of our favorites include Recollective (no affiliation to Research Collective) and Loop 11. We’ve also had great experiences collaborating with Peacock 9, who offers a unique, in-house BBFG platform of their own.

(Remote) Synchronous Focus Groups

Remote focus groups can yield results similar to in-person focus groups, but they require consideration of a few other factors to make the study run effectively. In fact, many of the tips discussed in the “Remote Usability Testing” section of this article (see above) should be incorporated into a remote focus group. Beyond those, here are a few more tips to get the most out of a remote synchronous focus group:

Recommendations for (Remote) Synchronous Focus Groups

Online focus groups should involve fewer participants than in-person focus groups.

When it comes to live, remote focus groups, it’s not always clear when each person should speak. Even when video is enabled, we lack the instantaneous facial cues, body language, and sounds that we rely on during face-to-face conversations to prevent talking over each other.

As a result, you lose a fair amount of time to delays and uncertainty as your group size increases. Research Collective recommends scheduling a remote focus group to allow at least 15 minutes per person in the group. An hour-long study, for example, would permit enough “speaking time” for about 4 participants. If you need more participants, consider increasing your session time accordingly.
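The sizing math is simple enough to script; this is a minimal sketch of the 15-minutes-per-person guideline above, and the helper is our own illustration rather than a formal planning tool.

    def max_participants(session_minutes: int, minutes_per_person: int = 15) -> int:
        """How many participants fit if each needs ~15 minutes of speaking time."""
        return session_minutes // minutes_per_person

    print(max_participants(60))   # 4 participants for an hour-long session
    print(max_participants(90))   # 6 participants for a 90-minute session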

Set up a virtual waiting room.

Squaring away all of the participants at the beginning of a remote focus group can feel a lot like herding cats when you don’t have a neutral space for them to wait before the study begins. Without a waiting room, late participants (and yes, there will be late ones) will disrupt the beginning of your study instructions. You’ll end up repeating yourself multiple times, potentially leading to participant confusion and frustration.

Provide a brief tutorial of important platform features in the beginning of the session.

While Zoom and WebEx are common industry tools, not everyone will understand how to use them. In many cases, this study experience will be the participant’s first exposure to these tools. You can minimize disruptions and churn during the study by walking participants through key features.

For example, show them how to mute and unmute the mic. It’s also useful to set ground rules for “good” study etiquette, such as asking participants to mute their mic when they aren’t speaking, or to be mindful of letting everyone speak. This can help keep a few loud voices from dominating the group, while building up the confidence of those who may shy away from speaking in front of others.

Building rapport takes longer in a remote setting.

Consider setting aside extra time at the beginning of your remote focus group to build rapport with participants (and for them to build rapport with one another). Unlike an in-person study, where the moderator can physically stand up to command attention or redirect the group, the moderator in a remote setting gets lost amongst the boxes of talking heads. As a result, it’s hard to establish your role as the person responsible for directing the conversation.

Use the first 5 – 10 minutes of your study session to thoroughly explain the purpose and expectations of the study session. This helps build participant confidence in you, and it helps participants learn the tone of your voice. That way, they will know to listen when you speak up to redirect during a topic that is heading off the rails.

Use a high-quality microphone or conference phone.

It should go without saying that, as the study moderator, you should eliminate all background noise on your side. But equally important is using a voice-capturing tool that provides a clear signal of your voice to participants. In some cases, this might require using a landline connection if your home office has spotty cellular reception.

It may be okay to wear a set of headphones, but some participants might find it unprofessional. If you do go this route, consider an inconspicuous wireless option with a reputation for excellent call quality (we like the Jabra Elite Active 65t).

Educational & Design Workshops

Now is a great time to focus on building up the knowledge and skills of the UX and Human Factors researchers and designers on your team. Investing in your employees will provide better quality and more consistent research efforts for years to come. Not to mention, educational workshops can be a great team-building opportunity, and a time where employees feel that their company has acknowledged their potential and desire to learn.

Collaborative tools such as Mural and Stormboard are also useful and cost-effective ways to collaborate with others outside of your organization for design feedback.

Research Collective offers several customized educational workshop programs, ranging from fundamental topics such as “usability testing” to advanced topics such as the HFE process for medical devices seeking 510(k) clearance. Send us a message if you’d like to collaborate on a customized topic for your group.

