How Myers-Briggs and AI are being misused

Let’s say you’re a job seeker with a good idea of what employers want to hear. Like many companies today, your potential new workplace requires a personality test as part of its hiring process. You plan to give answers that show you’re enthusiastic, a hard worker and a real people person.

Then they put you on camera while you take the test verbally, and you frown slightly during one of your answers, and their facial-analysis software decides that you’re “difficult.”

Sorry, next please!

That’s just one of the many problems with the growing use of artificial intelligence in hiring, says the new documentary “Persona: The Dark Truth Behind Personality Tests,” premiering Thursday on HBO Max.

The film, by director Tim Travers Hawkins, begins with the origins of the Myers-Briggs Type Indicator personality test. The mid-20th-century creation of a mother-daughter team, it classifies people based on four factors: introversion/extroversion, sensing/intuition, thinking/feeling and judging/perceiving. The questionnaire, whose 16 four-letter “types” have an astrology-like cult following, has evolved into a hiring tool used throughout corporate America, along with successors such as the “Big Five,” which measures five major personality traits: openness, conscientiousness, extraversion, agreeableness and neuroticism.

“Persona” argues that the written test contains certain built-in biases — for example, the potential to discriminate against those who are unfamiliar with the kind of language or scenarios the test uses.

And according to the film, incorporating artificial intelligence into the process makes things even more problematic.

The technology scans written applications for red-flag words and, when a camera interview is involved, analyzes candidates’ facial expressions for signs that might contradict their answers.

Four generations of Briggs Myers women.
HBO Max

“[It] operates based on 19th-century pseudoscientific reasoning that emotions and character can be standardized from facial expressions,” Ifeoma Ajunwa, associate professor of law and director of the AI Decision Research Program at the University of North Carolina Law, told The Post by email.

Ajunwa, who appears in the film, says the potential for bias is enormous. “Given that automated systems are usually trained on white male faces and voices, the facial expressions or vocal tones of women and racial minorities can be misjudged. In addition, there are privacy concerns arising from the collection of biometric data.”

HireVue, a widely used hiring-technology firm, analyzed candidates’ “facial movements, word choice and speaking voice before ranking them against other applicants based on an automatically generated ‘employability’ score,” The Washington Post reported. The company announced last month that it has since discontinued the practice.

Although the company said that “visual analysis no longer adds significant value to assessments,” the change came after an outcry over the practice’s potentially harmful effects.

Cathy O’Neil is a data science consultant, author of “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy” and one of the experts interviewed in “Persona.” Her company, O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), audited the practice at HireVue after the announcement.

“No technology is inherently harmful; it’s just a tool,” she told The Post via email. “But just as a sharp knife can be used to cut bread or kill a man, facial recognition can be used to harm individuals or communities . . . This is particularly true because people often assume that technology is objective and even perfect. Blind faith in something deeply complex and deeply opaque is always a mistake.”

A typical question from the Myers-Briggs personality test.
HBO Max

In recent years, there have been a number of legislative actions around the use of facial-recognition algorithms. But New York City is the first to introduce a bill that would specifically regulate their use in the hiring process. It would require companies to disclose to candidates that they are using the technology and to conduct an annual audit to check for bias.

Just as a sharp knife can be used to cut bread or kill a man, facial recognition can be used to harm individuals or communities.

Data science consultant Cathy O’Neil

But Ajunwa doesn’t think that goes far enough. The bill is “a necessary first step toward preserving workers’ civil liberties,” she said. But “what we need are federal regulations that are linked to federal anti-discrimination laws and that apply in all states, not just in New York City.”

For those who knew Isabel Briggs Myers, seeing the test paired with AI and used to ruthlessly determine whether people are “desirable” seems a far cry from her original intention, which was to help users find their true vocations.

As one of Briggs Myers’ granddaughters says in the film, “I think there are ways to use it that she would like to correct.”
