Eye-tracking provides valuable insights into the cognition of healthcare providers, patients, caregivers, and medical device users. Recent advancements have made this technology more accessible and affordable. Yet, despite its growing popularity, numerous misconceptions about eye-tracking persist.

At Research Collective, we have extensive experience using eye-tracking to better understand human behavior and interactions in healthcare settings. In discussing eye-tracking with clients, we are confronted time and again with myths about its capabilities, limitations, and applications. The goal of this post is to dispel some of the most common myths we’ve heard and promote a clearer understanding of what this technology has to offer. Let the debunking begin!

Myth 1: Any study would benefit from eye-tracking.

This is a big one.

Eye-tracking captures invaluable data regarding what people are paying attention to as they use medical devices and perform tasks. For instance, it offers objective evidence showing whether users actually read the warnings before using a medical device. (Spoiler alert: they often don’t.) It can also reveal which tasks or user interface elements are the most difficult and error-prone.

However, eye-tracking is a specialized tool, not a catch-all solution, and it can be costly to use. Like any research method, it’s great for answering some questions but not so useful for others. Knowing when it’s not the right tool for the job is just as important as knowing when it is. As a rule of thumb, avoid eye-tracking when you can’t frame your research questions in terms of visual behavior or eye movements, or when your motivation is purely exploratory.

Myth 2: All eye-tracking data is useful.

Eye-tracking datasets are huge. Dozens of metrics are recorded up to 2,000 times per second throughout testing. Even so, answering your research questions rarely requires analyzing more than a few metrics collected during brief windows of time. To get the most out of eye-tracking, it’s essential to identify these metrics beforehand and stick to the script during your analyses. For instance, in a typical medical device usability study, data on the total number of fixations, cumulative viewing time, and dwell time on key user interface elements may be useful, while data on eye movement speed or blink duration may not be.
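
To make this concrete, here is a minimal sketch of how two of those metrics – fixation count and cumulative viewing time – might be computed for a single area of interest (AOI). The fixation record format, field names, and AOI bounds are all hypothetical; real exports vary by platform.

```python
# Minimal sketch: summarizing fixations that land in one rectangular
# area of interest (AOI). Assumes fixation events were already detected
# and exported as simple records; the field names, pixel coordinates,
# and AOI bounds below are hypothetical.
from dataclasses import dataclass

@dataclass
class Fixation:
    start_ms: float     # fixation onset, milliseconds from session start
    x: float            # horizontal gaze position, pixels
    y: float            # vertical gaze position, pixels
    duration_ms: float  # fixation duration, milliseconds

def summarize_aoi(fixations, left, top, right, bottom):
    """Return (fixation count, cumulative viewing time in ms) for one AOI."""
    hits = [f for f in fixations
            if left <= f.x <= right and top <= f.y <= bottom]
    return len(hits), sum(f.duration_ms for f in hits)

# Example: did the participant fixate the warning label at all?
warning_aoi = (100, 50, 400, 120)  # hypothetical pixel bounds
fixations = [Fixation(0, 150, 80, 220), Fixation(400, 620, 410, 180)]
count, total_ms = summarize_aoi(fixations, *warning_aoi)
print(f"{count} fixation(s), {total_ms:.0f} ms of viewing time on the warning")
```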

With so much juicy data at your disposal, it can be tempting to binge on analyses, but it’s vital not to overanalyze any dataset. Doing so often leads to Type I errors (false positives), which invite misinterpretation and obscure meaningful insights. Think of eye-tracking data like an iceberg, with much of the data underwater and better left untouched. Contemporary eye-tracking analysis software lets you customize data filters for your precise areas and times of interest, which helps separate the wheat from the chaff.
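
In the same spirit, the sketch below shows the kind of pre-filtering that analysis software performs behind the scenes: clipping a stream of gaze samples to a single time of interest before any metrics are computed. The sample format and window bounds are hypothetical.

```python
# Minimal sketch: restricting gaze samples to a time of interest (TOI)
# so downstream metrics only see the window that matters. The
# (timestamp_ms, x, y) sample format and window bounds are hypothetical.

def clip_to_toi(samples, start_ms, end_ms):
    """Keep only samples recorded within [start_ms, end_ms]."""
    return [s for s in samples if start_ms <= s[0] <= end_ms]

# e.g. only the four seconds while the warning screen was on display
samples = [(0, 12, 30), (2500, 180, 90), (7000, 300, 200)]
print(clip_to_toi(samples, start_ms=2000, end_ms=6000))  # [(2500, 180, 90)]
```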

Myth 3: Interpreting gaze data is straightforward.

Interpreting gaze data should be highly contextual, informed by factors such as the environment, the task at hand, and surrounding behaviors. Keep in mind that gaze data shows you where attention was directed during a study, not why it went there in the first place or what the observer was thinking at the time. In fact, the exact same data could warrant different interpretations in different contexts. Imagine a user spending a long time viewing a warning listed on a medical device before using it. This could mean that they understood its importance and took care to remember it for later, or it could mean that they failed to understand the wording despite spending time trying to comprehend it.

Good interpretations of gaze trends are usually supported by corroborating evidence. For instance, in the example above, the latter conclusion becomes far more plausible if the user later violated the warning during testing. Debriefing interviews can also help contextualize gaze patterns, particularly if the gaze replay video is reviewed. This video superimposes gaze data in real time over the session recording, reminding users what they were focusing on throughout testing. This helps them provide the necessary context and helps the moderator ask fruitful questions.

Some eye-tracking metrics are less ambiguous than others, but interpreting gaze data is often a blend of art and science. Plenty of clever ways exist to help you substantiate your conclusions. The bottom line: never interpret gaze patterns in isolation.

Myth 4: Eye-tracking is all about where observers are looking.

Okay, okay – eye-tracking is at least partly about where observers are looking. It’s in the name! The myth here is that it’s all about this.

Metrics like pupil dilation and blink rate are unrelated to gaze location but can help you draw inferences about a variety of user states and mental processes – including cognitive workload, fatigue, and task engagement. This stems from well-established relationships between these measures and certain neurotransmitters that play key roles in attention, memory, and arousal. Because their interpretations are grounded in the physiology of the brain, these measures are also easier to decipher than their gaze-based counterparts. (Not to mention pretty darn cool!)

For instance, the combination of increased pupil size and decreased blink rate is a tell-tale sign of high cognitive workload during demanding tasks. Measuring cognitive workload is especially valuable in healthcare research because mental overload can result in medical errors with serious consequences. Eye-tracking can reveal aspects of healthcare workflows or user interfaces that are unnecessarily taxing, which is the first step toward improving their safety and effectiveness.
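
As a rough illustration, the sketch below screens task segments for that pupil-plus-blink signature. The thresholds, baselines, and segment data are entirely hypothetical; real analyses baseline-correct per participant and control for lighting, since pupil size also responds to luminance.

```python
# Minimal sketch: flagging possible high-workload task segments from
# mean pupil size and blink rate. Thresholds and data are hypothetical;
# real studies calibrate baselines per participant and lighting setup.

def flag_high_workload(segments, baseline_pupil_mm, baseline_blinks_per_min):
    """Yield tasks whose pupils dilate and blink rate drops vs. baseline."""
    for task, mean_pupil_mm, blinks_per_min in segments:
        dilated = mean_pupil_mm > 1.10 * baseline_pupil_mm        # >10% larger
        fewer_blinks = blinks_per_min < 0.80 * baseline_blinks_per_min
        if dilated and fewer_blinks:
            yield task

segments = [
    ("prime the infusion pump", 3.9, 9.0),    # (task, mean pupil mm, blinks/min)
    ("read the quick-start guide", 3.4, 16.0),
]
print(list(flag_high_workload(segments, baseline_pupil_mm=3.4,
                              baseline_blinks_per_min=15.0)))
# -> ['prime the infusion pump']
```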

Myth 5: If you can use the software, you can do eye-tracking.

Eye-tracking platforms have advanced considerably and are easier to use than ever. That said, simply being able to press the right buttons does not make you an eye-tracking expert. What separates good eye-tracking researchers from bad ones is the ability to evaluate and interpret the data correctly. This requires a deep understanding of the technology itself, human vision and attention, the best metrics for tackling different research questions, and much more.

Aspiring eye-tracking researchers should expect a learning curve before fully grasping the nuances of the field. Additional effort upfront goes a long way in avoiding costly mistakes down the line and ensuring that your research is of high quality. A few additional resources are listed at the end of this post for those interested – including a shameless self-plug from yours truly.

Conclusion

Eye-tracking is a wonderful tool when used correctly, but it is misused all too often. Avoid oversimplifications and blanket assumptions that discount its complexity. Treat eye-tracking as a supplemental method that adds a new dimension to your research – not a substitute for existing methods. At Research Collective, we appreciate the nuance that this technology demands, and it has proven to be a valuable asset in our research efforts.

If you’d like to learn more, we’ve also created a video on this very topic!

Which myths did we miss? Are there any other common misconceptions that you’ve heard about eye-tracking? We’d love to hear your thoughts!

Contact us to learn more about eye-tracking and how we can assist your human factors needs.


Helpful References

Bojko, A., & Adamczyk, K.A. (2010). More than just eye candy: Top ten misconceptions about eye tracking. User Experience, 9(3), 4-8.

Carter, B.T., & Luke, S.G. (2020). Best practices in eye tracking research. International Journal of Psychophysiology, 155, 49-62.

Eckstein, M.K., Guerra-Carrillo, B., Miller Singley, A.T., & Bunge, S.A. (2017). Beyond eye gaze: What else can eyetracking reveal about cognition and cognitive development? Developmental Cognitive Neuroscience, 25, 69-91.

Godfroid, A., & Hui, B. (2020). Five common pitfalls in eye-tracking research. Second Language Research, 36(3), 277-305.

Pauszek, J.R. (2023). An introduction to eye tracking in human factors healthcare research and medical device testing. Human Factors in Healthcare, 3, 100031.
