AI in eye tracking: how reliable is it? (research)

How reliable is the use of AI in neuromarketing? We have tried to answer this question through research. We compared the results of AI eye tracking models with the results of real participants.

In February and March 2023, researchers from Thomas More interviewed employees from 13 different Belgian companies in the guidance group, as part of the TETRA research project ‘Neuromarketing’. The interviews show that companies want more information about new techniques, including the use of AI in neuromarketing, and that they may consider adopting these themselves. The reason behind this priority is the scalability that the use of AI offers.

Also contributing to this article: Dieter Struyf, Nele De Witte and Audrey Verrall.

Previous research in the project

In an earlier study in the project, we analyzed six advertising videos using four different neuromarketing techniques: eye tracking, facial expressions, skin conductance and EEG. From this we can conclude that eye tracking in itself is an accessible technique for measuring visual attention and engagement.

Due to the interest of companies in scalable techniques, we have decided to further focus the research project on testing AI models for eye tracking. This was done in two steps to answer two questions:

  • Is predictive eye tracking comparable to eye tracking in which real participants participate?
  • What is the evaluation of predictive eye tracking in A/B testing?

1. Is predictive eye tracking comparable to eye tracking in which real participants participate?

In the first step, we analyzed the eye tracking responses to the videos from the previous study using two different providers of predictive AI models: Neurons and Brainsight. Although these providers use different metrics, both have algorithms trained on an extensive, specific data set to predict what people are likely to look at.

On that basis, they offer heat maps and other metrics, such as ‘Clarity’, which indicates how easy it is for viewers to scan the most important information in an image. The goal was to compare the results of these providers with those of traditional eye tracking with real participants, to assess whether the AI models perform equally well.
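To give an idea of what such a comparison can look like in practice, here is a minimal sketch that compares a predicted heat map with one built from real participants’ fixations, using two standard saliency-evaluation metrics (Pearson correlation and KL divergence). This is our own illustrative example with placeholder data, not the actual pipeline of the providers or of the study.

```python
# Minimal sketch (placeholder data, not the study's pipeline): compare an
# AI-predicted heat map with a heat map built from participants' fixations
# using two common saliency metrics.

import numpy as np

def normalize_to_distribution(heatmap: np.ndarray) -> np.ndarray:
    """Shift to non-negative values and scale so the map sums to 1."""
    h = heatmap - heatmap.min()
    total = h.sum()
    return h / total if total > 0 else np.full_like(h, 1.0 / h.size)

def pearson_cc(pred: np.ndarray, true: np.ndarray) -> float:
    """Pearson correlation between two heat maps (1 = identical pattern)."""
    return float(np.corrcoef(pred.ravel(), true.ravel())[0, 1])

def kl_divergence(pred: np.ndarray, true: np.ndarray, eps: float = 1e-12) -> float:
    """KL divergence between the participant map and the predicted map (0 = identical)."""
    p = normalize_to_distribution(pred).ravel() + eps
    t = normalize_to_distribution(true).ravel() + eps
    return float(np.sum(t * np.log(t / p)))

if __name__ == "__main__":
    # Placeholder arrays standing in for exported heat maps of the same resolution.
    rng = np.random.default_rng(0)
    ai_heatmap = rng.random((90, 160))
    participant_heatmap = rng.random((90, 160))

    print(f"CC: {pearson_cc(ai_heatmap, participant_heatmap):.3f}")
    print(f"KL: {kl_divergence(ai_heatmap, participant_heatmap):.3f}")
```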

Comparing the results shows that the AI predictions are largely in line with the data collected through traditional eye tracking with real participants. Yet biases do occur when using AI models. Below we discuss the three most important biases we encountered.

1. Logos or the name of the brand

First, the AI models predict that attention will be focused mainly on the logo or the name of the brand, whereas real participants also observe other elements.

2. Faces of people

There is also a bias towards faces. As shown in the following photos, the heat map of the AI models is focused on the faces of the people in the video, whereas real participants also notice other elements.

[Image: heat map comparison between the AI model and real participants when looking at faces]

3. Emphasis on the center

In addition, the heat maps created by AI models often place too much emphasis on the center if the model does not observe other elements on which to base its prediction. In the following photos we also see a small bias regarding the brand name.
[Image: heat map showing the bias towards the center]
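Center bias is a well-known property of saliency models. As a rough illustration (our own assumption, not part of the study), one way to flag it is to correlate a heat map with a simple Gaussian ‘center prior’: a high score suggests the prediction mostly reflects the middle of the frame rather than its content.

```python
# Sketch (not from the study): check how strongly a heat map resembles an
# isotropic Gaussian center prior. A high correlation hints at center bias.

import numpy as np

def center_prior(height: int, width: int, sigma_frac: float = 0.25) -> np.ndarray:
    """2D Gaussian centered on the frame; sigma is a fraction of each dimension."""
    ys = np.arange(height) - (height - 1) / 2
    xs = np.arange(width) - (width - 1) / 2
    gy = np.exp(-(ys ** 2) / (2 * (sigma_frac * height) ** 2))
    gx = np.exp(-(xs ** 2) / (2 * (sigma_frac * width) ** 2))
    return np.outer(gy, gx)

def center_bias_score(heatmap: np.ndarray) -> float:
    """Pearson correlation between the heat map and the center prior."""
    prior = center_prior(*heatmap.shape)
    return float(np.corrcoef(heatmap.ravel(), prior.ravel())[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    example_map = rng.random((90, 160))  # placeholder for an exported heat map
    print(f"Center-bias score: {center_bias_score(example_map):.3f}")
```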

2. What is the evaluation of predictive eye tracking in A/B testing?

In the second step, we conducted an A/B test using AI models for eye tracking. These campaigns were also tested in practice by the companies themselves, for example by publishing two versions of a campaign on social media during the same period. The goal was to compare the results and see how AI models make predictions in an A/B test.

This part of the research shows that the predictions of predictive eye tracking tools are in line with the success of real-world campaigns. AI models can provide valuable information without the need to conduct A/B testing in the real world. That would save time and costs.
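To make concrete what ‘tested in practice’ can involve, the sketch below shows one common way to evaluate such a real-world A/B test: a two-proportion z-test on the click-through rates of the two versions. The numbers are hypothetical and the test is a standard statistical choice on our part, not necessarily the method the companies used; the AI models’ predicted ‘winner’ can then be checked against this outcome.

```python
# Sketch with hypothetical numbers: evaluate a real-world A/B test on social
# media with a two-proportion z-test on click-through rates (CTR).

from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(clicks_a: int, views_a: int,
                         clicks_b: int, views_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for the difference in CTR."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    p_value = 2 * norm.sf(abs(z))
    return z, p_value

if __name__ == "__main__":
    # Hypothetical results from publishing both versions during the same period.
    z, p = two_proportion_ztest(clicks_a=420, views_a=10_000,
                                clicks_b=350, views_b=10_000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # small p -> a real difference between versions
```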

However, there are limitations to using AI models for A/B testing. Current predictive eye tracking tools analyze visual elements; they cannot read text, so they cannot provide additional information about the difference between two versions of an email. The same applies to posters that are less visual and contain more text, or to versions whose visuals hardly differ. In such cases you may wonder whether it is currently worth investing in predictive eye tracking, or whether it is better to evaluate your campaign using traditional marketing theories.

While AI is a scalable option and can provide valuable information, it is important to be aware of its biases and limitations. In eye tracking, these biases include a tendency to predict more attention to faces, brand names and the center of videos; companies should take this into account in their analyses. In addition, AI can be more efficient for testing visual campaigns or content than for textual ones.

Source: www.frankwatching.com