Usable Research Reports: Using Task Analysis

Usable Research Reports

When it comes to generating usable results, detail is key. What exactly did the users have trouble with? Usability study reports sometimes contain broad statements like “5 out of 15 users did not successfully complete Task 14.” A statement like this is not specific enough. How is the development team supposed to move forward with that result? How will a root cause be determined? Instead, we should be asking “What part of Task 14 did the user not understand?” or “What didn’t they perceive or do?” Fortunately, task analysis can help with this.

Task Analysis

An effective tool for generating specific results is the task analysis, an exercise commonly conducted at the beginning of a project. In human factors, a task analysis is a process that systematically breaks down device use into discrete sequences of tasks. It allows researchers to more fully understand what is required of the user to successfully use the product. As a result, task analyses end up providing the groundwork not only for formative and validation studies, but also for root cause analysis and even parts of final reporting. Creating usable results isn’t about writing them better; it’s about being proactive enough to gather usable data in the first place.

To provide a little bit more structure, the FDA human factors guidance (U.S. Food and Drug Administration, 2016) itself outlines some of the questions a task analysis can help to answer: 

  • What use errors might users make on each task?
  • Which circumstances might cause users to make use errors on each task?
  • What harm might result from each use error?
  • How might the occurrence of each use error be prevented or made less frequent?
  • How might the severity of the potential harm associated with each use error be reduced?

With the answers to all of these questions, the human factors team can more efficiently zoom in on possible usability issues, how serious they are, and how use-related risks might be mitigated. When researchers observe a use error, a task analysis lets them identify specifically how and where the user went wrong, and then report it with precision as well.
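
To make that concrete, here is a minimal sketch in Python of how the answers to these questions might be recorded for a single task. It is purely illustrative; the field names and example content are our own assumptions, not terminology from the guidance.

```python
# Minimal sketch, purely illustrative: one way to record the answers to
# the guidance questions above for a single task. Field names and example
# content are assumptions, not terminology from the FDA guidance.
from dataclasses import dataclass, field

@dataclass
class TaskRiskEntry:
    task: str
    use_errors: list = field(default_factory=list)        # what could go wrong
    circumstances: list = field(default_factory=list)     # conditions that invite the error
    potential_harm: str = ""                               # harm that could result
    prevention: list = field(default_factory=list)         # ways to prevent or reduce frequency
    severity_mitigation: list = field(default_factory=list)  # ways to reduce severity of harm

entry = TaskRiskEntry(
    task="Search for the app in the app store",
    use_errors=["User does not find the search bar"],
    circumstances=["Search icon is small and unlabeled"],
    potential_harm="Task abandoned; the app is never installed",
    prevention=["Label the search field explicitly"],
    severity_mitigation=["Offer a browsable category list as a fallback"],
)
print(entry)
```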

What does a task analysis look like?

Let’s consider an example task: Download the Google Calendar app for a smartphone. 

The first objective is to break the task down into more granular sub-tasks. When you first consider the task of “Download the Google Calendar app”, the immediate impression is anything but “challenging” or “demanding”. However, if you look a little deeper, even something this simple can be more complicated than you think. Consider one plausible breakdown for a typical smartphone:
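
Task: Download the Google Calendar app

Sub-tasks:
  • Unlock the phone and locate the app store
  • Open the app store
  • Find and select the search bar
  • Enter “Google Calendar” into the search bar
  • Identify the correct app among the search results
  • Select “Install” and wait for the download to complete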

Especially for those who are tech-savvy, “Download the app” is very simple. Users don’t even consider these to be “sub-tasks”; it’s just how you download a smartphone app! In many cases, a seemingly straightforward task is full of implicit sub-tasks. Here, one task yielded six implicit sub-steps.

The problem, of course, is that these sub-tasks are not always obvious to users, especially new ones. If we observe a user struggling to download the calendar app, the best way to fix the issue is to know precisely why he or she struggled. Otherwise, we’re just guessing.

PCA

Before fielding a study, we can break tasks down into sub-tasks in order to isolate potential problems. But we can take it one level further by approaching each task and sub-task from various perspectives. In nearly all tasks, something needs to be perceived (seen, touched, or heard), something needs to be understood, and something needs to be done. Each of these is a process in which a use error may occur.

We can call these perspectives PCA, for Perception, Cognition, and Action:

  • What must the user perceive in order to safely and correctly complete the task (see a button, hear a signal, feel the correct cord, etc.)?
  • What must the user understand in order to safely and correctly complete the task (understand the purpose of a step, know where something is located, etc.)?
  • What must the user actively do in order to safely and correctly complete the task (flip a switch, press a button, enter text, etc.)?

For instance, applied to a couple of the sub-tasks from our task analysis, PCA could look something like this:
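
Sub-task: Find and select the search bar
  • Perceive: see the search bar (or the magnifying-glass icon) within the app store
  • Understand: recognize that searching is the way to locate a specific app
  • Do: tap the search bar

Sub-task: Identify the correct app among the search results
  • Perceive: see the Google Calendar listing, including its name and icon
  • Understand: distinguish the official Google Calendar app from similar-looking results
  • Do: select the correct listing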

Each of these PCA items must occur in order to complete this single, straightforward task. Writing them all up can be time-intensive. However, these three perspectives do a fantastic job of identifying each aspect of successfully completing a task. Using PCA, we can systematically predict specific use errors, which makes them far easier to identify if and when they do occur.
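
As a rough illustration of how systematic this prediction can be, here is a minimal sketch in Python that turns each sub-task’s PCA requirements into a checklist of specific use errors to watch for during a session. The structure and names are our own illustration, not a standard human factors tool.

```python
# Minimal sketch, purely illustrative: each sub-task carries its PCA
# (Perception, Cognition, Action) requirements, and each requirement
# implies a specific, observable use error to watch for in a session.
from dataclasses import dataclass

@dataclass
class SubTask:
    name: str
    perceive: str    # what the user must see, hear, or feel
    understand: str  # what the user must know or comprehend
    do: str          # the action the user must take

    def predicted_use_errors(self):
        # Each PCA requirement maps to a concrete, observable failure mode.
        return [
            f"Did not perceive {self.perceive}",
            f"Did not understand {self.understand}",
            f"Did not {self.do}",
        ]

step = SubTask(
    name="Find and select the search bar",
    perceive="the search bar at the top of the app store",
    understand="that searching is how to locate a specific app",
    do="tap the search bar",
)

for error in step.predicted_use_errors():
    print(f"{step.name}: {error}")
```

Applied to every sub-task in the analysis, a checklist like this is exactly what makes observations specific enough to report.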

Conclusion

Armed with this level of detail going into a study, rather than merely identifying which tasks users did not complete successfully, it is easy to report something far more specific: “During Step 7, the user did not see the app search bar and did not understand how to otherwise find the Google Calendar app within the app store.” Now that is something developers can work with.


To view the other parts of this blog mini-series, please follow the links below!

Part 1: Formative vs. Validation

What are formative and validation studies? How do they differ from one another? What are their goals?  How do those topics impact how the results are reported?

Part 2: Using Task Analysis

What is a human factors task analysis? What are its contents?  How do you define critical tasks and task success criteria? How does its quality impact how usable results will be?

Part 3: Usability Studies

How do you collect high-quality data? How will that impact how usable results will be later in the research process?


References:

U.S. Food and Drug Administration. (2016). Applying Human Factors and Usability Engineering to Medical Devices.