Discover the flaws of AI emotion recognition with this little browser game

Technology companies don’t just want to identify you using facial recognition – they also want to read your emotions with the help of AI. For many scientists, however, claims about computers’ ability to understand emotions are fundamentally flawed, and a small browser game created by researchers at the University of Cambridge aims to show why.

Go to emojify.info and you will be able to see how your emotions are “read” by your computer through your webcam. The game challenges you to produce six different emotions (happiness, sadness, fear, surprise, disgust and anger), which the AI will try to identify. However, you will probably find that the software’s readings are far from accurate, often interpreting even exaggerated expressions as “neutral”. And even when you produce a smile that convinces your computer you are happy, you will know that you were faking it.

This is the purpose of the site, says creator Alexa Hagerty, a researcher at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk: to demonstrate that the basic premise underlying emotion recognition technology – that facial movements are intrinsically linked to changes in feeling – is flawed.

“The premise of these technologies is that our faces and inner feelings are correlated in a very predictable way,” Hagerty tells The Verge. “If I smile, I’m happy. If I frown, I’m angry. But the APA did a major review of the evidence in 2019 and found that people’s emotional states cannot be readily inferred from their facial movements.” In the game, says Hagerty, “you have a chance to move your face quickly to embody six different emotions, but the point is that you didn’t feel six different things inwardly, one after the other in sequence.”

A second mini-game on the site drives this point home, asking users to identify the difference between winking and blinking – something that machines cannot do. “You can close one eye and it can be an involuntary action or a meaningful gesture,” says Hagerty.

Despite these problems, emotion recognition technology is rapidly gaining momentum, with companies promising that such systems can be used to screen job candidates (giving them an “employability score”), detect potential terrorists, or assess whether commercial drivers are drowsy. (Amazon is even deploying similar technology in its own vans.)

Of course, human beings also make mistakes when reading emotions on people’s faces, but handing this work over to machines has specific disadvantages. For one thing, machines cannot pick up on other social cues as humans can (as with the wink/blink distinction). Machines also tend to make automated decisions that humans cannot question, and they can conduct surveillance on a large scale without our knowledge. In addition, as with facial recognition systems, emotion detection AI is often biased, more frequently judging Black faces as showing negative emotions, for example. All of these factors make AI emotion detection far more problematic than humans’ ability to read the feelings of others.

“The dangers are multiple,” says Hagerty. “When human communication breaks down, we have many options for correcting it. But once something is automated, or the reading is done without your knowledge or consent, those options run out.”
