What Google’s dismissal of researcher Timnit Gebru means for AI ethics

Google sparked an uproar earlier this month when it fired Timnit Gebru, the co-leader of a team of researchers at the company studying the ethical implications of artificial intelligence. Google says it accepted her “resignation,” but Gebru, who is Black, says she was fired for drawing unwanted attention to the lack of diversity in Google’s workforce. She was also at odds with her supervisors over their request that she withdraw a paper she co-authored on the ethical issues associated with certain types of AI models that are central to Google’s business.

On this week’s Trend Lines podcast, WPR’s Elliot Waldman was joined by Karen Hao, the senior AI reporter at MIT Technology Review, to discuss Gebru’s ouster and its implications for the increasingly important field of AI ethics.

Listen to the full interview with Karen Hao on the Trend Lines podcast:

If you like what you hear, subscribe to Trend Lines.

The following is a partial transcript of the interview. It has been slightly edited for clarity.

World Politics Review: First, could you tell us a little bit about Gebru and the stature she has in the AI field, given the pioneering research she has done, and how she ended up at Google to begin with?

Karen Hao: Timnit Gebru can fairly be called one of the pillars of the AI ethics field. She earned her Ph.D. at Stanford under the guidance of Fei-Fei Li, who is one of the pioneers of the entire AI field. When Timnit completed her doctorate at Stanford, she joined Microsoft for a postdoc, before ending up at Google after they approached her on the strength of the impressive work she had done. Google was starting its AI ethics team, and they thought she would be a great person to co-lead it. One of the studies she is best known for is the one she co-authored with another Black researcher, Joy Buolamwini, on the algorithmic discrimination that appears in commercial facial recognition systems.

The paper was published in 2018 and, at the time, its revelations were quite shocking, because they audited commercial facial recognition systems that were already being sold by tech giants. The paper’s findings showed that systems being sold on the premise that they were highly accurate were, in fact, extremely inaccurate, specifically on female faces and darker skin. In the two years since the paper was published, a series of events has led these tech giants to reverse course or suspend the sale of their facial recognition products to police. The seed of those actions was, in fact, planted by the paper that Timnit co-authored. So she is a very large presence in the field of AI ethics and has done pioneering work. She also founded a nonprofit organization called Black in AI that advocates for diversity in tech and in AI specifically. She is a force of nature and a well-known name in the space.
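To make concrete what the kind of audit described above involves, here is a minimal, hypothetical sketch in Python of disaggregated evaluation: computing a model’s error rate separately for each demographic subgroup rather than reporting a single overall accuracy number. The records and subgroup names below are invented for illustration and are not drawn from the actual study.

```python
from collections import defaultdict

# Toy records: (predicted_label, true_label, subgroup) -- invented for illustration.
predictions = [
    ("male", "male", "lighter_male"),
    ("male", "female", "darker_female"),
    ("female", "female", "lighter_female"),
    ("male", "female", "darker_female"),
    ("male", "male", "darker_male"),
    ("female", "female", "darker_female"),
]

totals, errors = defaultdict(int), defaultdict(int)
for predicted, actual, subgroup in predictions:
    totals[subgroup] += 1
    if predicted != actual:
        errors[subgroup] += 1

# Reporting error rates per subgroup, instead of one aggregate number,
# is what reveals disparities that an overall accuracy score can hide.
for subgroup in sorted(totals):
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: {rate:.0%} error over {totals[subgroup]} samples")
```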

We should be thinking about how to develop new AI systems that don’t rely on this brute-force method of extracting billions and billions of sentences from the internet.

WPR: What exactly are the ethical issues that Gebru and her co-authors identified in the paper that led to her dismissal?

Hao: The paper addressed the risks of large language models, which are essentially AI algorithms trained on enormous amounts of text. You can imagine that they are trained on all the articles that have been published on the internet, all the subreddits, Reddit threads, Twitter and Instagram captions, everything. They are trying to learn how we construct sentences in English so that they can then generate English sentences themselves. One of the reasons Google is very interested in this technology is that it helps power its search engine. For Google to return relevant results when you search for a query, it needs to be able to capture or interpret the context of what you are saying, so that if you type three random words, it can gather the intent of what you are looking for.
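As a rough illustration of the search use case Hao describes, here is a hypothetical sketch that uses a pretrained language model to embed a query and some candidate results, then ranks the candidates by similarity. It assumes the open-source Hugging Face transformers library and the public bert-base-uncased checkpoint, neither of which is named in the interview, and it only shows the general idea of context-aware matching, not Google’s actual system.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Publicly available checkpoint, used purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Mean-pool the model's token representations into a single vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

query = "fix flat bike tire near me"
candidates = [
    "Bicycle repair shops open in your neighborhood",
    "How to bake sourdough bread at home",
    "Step-by-step guide to patching a punctured bicycle tube",
]

# Rank candidates by cosine similarity to the query embedding: the model's
# learned sense of context does the matching, not exact keyword overlap.
query_vec = embed(query)
ranked = sorted(
    candidates,
    key=lambda c: torch.cosine_similarity(query_vec, embed(c), dim=0).item(),
    reverse=True,
)
for result in ranked:
    print(result)
```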

What Timnit and her co-authors point out in the paper is that this relatively recent area of research is beneficial, but it also has some very significant downsides that need more discussion. One is that these models consume an enormous amount of electricity, because they run in really large data centers. Given that we are in a global climate crisis, the field should be thinking about how this research could aggravate climate change and have downstream effects that disproportionately impact marginalized communities and developing countries. Another risk they point to is that these models are so large that they are very difficult to examine, and they also capture large swaths of the internet that are very toxic.

So they end up normalizing sexist, racist or abusive language that we don’t want to perpetuate into the future. But because these models are so hard to scrutinize, we cannot fully dissect the kinds of things they are learning and then weed them out. Ultimately, the conclusion of the paper is that these systems bring great benefits, but also great risks. And, as a field, we should spend more time thinking about how we can actually develop new language AI systems that don’t rely so heavily on this brute-force method of just training them on billions and billions of sentences scraped from the internet.
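To illustrate why dissecting what these models learn is so hard at that scale, here is a toy sketch of the kind of crude, keyword-based filtering that scraped web corpora often rely on. The blocklist and documents are invented; the point is that a simple lexical pass over billions of documents cannot catch sarcasm, coded language or context-dependent abuse, which is part of the risk the paper describes.

```python
# Toy corpus and blocklist, invented for illustration only.
corpus = [
    "A helpful tutorial on planting tomatoes in raised beds.",
    "You people are an insult to this forum and should leave.",
    "Historians debate the causes of the conflict in careful, neutral terms.",
]
blocklist = {"insult", "stupid", "worthless"}  # a real list would be far larger

def crude_filter(documents, blocked_words):
    """Keep only documents that contain none of the blocked words.

    At web scale, something this shallow is roughly all that is feasible,
    and it misses context-dependent toxicity while sometimes discarding
    benign text that merely mentions a blocked word.
    """
    kept = []
    for doc in documents:
        words = {w.strip(".,!?").lower() for w in doc.split()}
        if words.isdisjoint(blocked_words):
            kept.append(doc)
    return kept

print(crude_filter(corpus, blocklist))
```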

WPR: And how did Gebru’s Google supervisors react to that?

Hao: What’s interesting is that Timnit said, and this was supported by her former teammates, that the paper had actually been approved for a conference. This is a very standard process for her team and within Google’s broader AI research organization: the purpose of this research is to contribute to the academic discourse, and the best way to do that is to submit it to an academic conference. They prepared this paper with some outside collaborators and submitted it to one of the top AI ethics conferences for next year. It had been approved by her manager and others, but then, at the last minute, she received a notice from superiors above her manager saying she needed to retract the paper.

Little was revealed to her about why she needed to retract the paper. She then started asking a lot of questions about who was telling her to retract it, why they were asking her to do so, and whether modifications could be made to make it more palatable for presentation. She was stonewalled and did not receive any further clarification, so she ended up sending an email just before she left for Thanksgiving break, saying she would not retract the paper unless certain conditions were met.

Silicon Valley has a conception of how the world works based on the disproportionate representation of a specific subset of the world. That is, usually upper-class, heterosexual white men.

She asked who had given the feedback and what the feedback was. She also requested meetings with more senior executives to explain what had happened. The way her research was handled was extremely disrespectful, and it was not the way researchers are traditionally treated at Google, so she wanted an explanation of why they had done this. If they didn’t meet those conditions, she would have a frank conversation with them about her last day at Google, so that she could create a transition plan, leave the company smoothly and publish the paper outside of Google. Then she went on vacation, and in the middle of it, one of her direct reports texted her to say that they had received an email saying that Google had accepted her resignation.

WPR: In terms of the issues raised by Gebru and her co-authors in their paper, what does it mean for the field of AI ethics to have what appears to be this massive level of moral hazard, where the communities most at risk from the impacts Gebru and her co-authors identified, the environmental ramifications and so on, are marginalized and often have no say in the technology space, while the engineers who build these AI models are largely insulated from those risks?

Hao: I think that hits at the heart of what has been an ongoing discussion within this community for the past couple of years, which is that Silicon Valley has a conception of how the world works based on the disproportionate representation of a particular subset of the world. That is, usually upper-class, heterosexual white men. The values they hold, drawn from their particular cross section of lived experience, have somehow become the values everyone else needs to live by, and it doesn’t always work that way.

They do a cost-benefit analysis and decide that it is worth creating these very large language models and spending all that money and electricity to obtain the benefits of this kind of research. But that analysis is based on their values and their life experiences, and it may not be the same cost-benefit analysis that someone in a developing country would make, who would rather not have to deal with the repercussions of climate change later on. That was one of the reasons why Timnit was so adamant about ensuring there was more diversity at the decision-making table. If you have more people with different life experiences who can analyze the impact of these technologies through their own lenses and bring their voices into the conversation, then maybe we would have more technologies that don’t skew their benefits so heavily toward one group at the expense of others.

