Google’s new AI ethics leader calls for more “diplomatic” conversations

After months of internal conflict, and opposition from members of Congress and thousands of Google employees, Google today announced that it will reorganize its AI ethics operations and place them in the hands of VP Marian Croak, who will lead a new center of expertise on responsible AI research and engineering.

A blog post and six-minute video interview with Croak that Google released today to announce the news do not mention former Ethical AI team co-lead Timnit Gebru, whom Google abruptly dismissed in late 2020, or Ethical AI lead Margaret “Meg” Mitchell, who, a Google spokesperson told VentureBeat, was placed under internal investigation last month.

The announcement also does not mention any steps taken to address the need to “rebuild trust” demanded by members of Google’s Ethical AI team. Several members of the team said they found out about the change in leadership from a report published Wednesday night by Bloomberg.

“Marian is a highly accomplished, pioneering scientist whom I admired and even trusted. It is incredibly painful to see her legitimize what Jeff Dean and his subordinates did to me and my team,” Gebru told VentureBeat.

In the video, Croak discusses self-driving cars and disease diagnosis as possible areas of focus going forward, but makes no mention of large language models. Recent AI research citing a cross-section of experts found that companies like Google and OpenAI have only a matter of months to set standards for how to address the negative societal impact of large language models.

In December, Gebru was fired after sending an email to colleagues advising them to no longer participate in diversity data collection efforts. A paper she was working on at the time criticized large language models, like the kind Google is known for producing, for harming marginalized communities and for misleading people into believing that models trained on huge corpora of text represent genuine progress in language understanding.

In the weeks following her dismissal, members of the Ethical AI team called for Gebru to be reinstated in her previous position. More than 2,000 Googlers and thousands of other supporters signed a letter backing Gebru and opposing what the letter calls “unprecedented research censorship.” Members of Congress who have proposed legislation to regulate algorithms also raised a number of questions about the Gebru episode in a letter to Google CEO Sundar Pichai. Earlier this month, news emerged that two software engineers had resigned in protest of Google’s treatment of Black women like Gebru and former recruiter April Curley.

In today’s video and blog post about the change, Croak said people need to understand that the fields of responsible AI and ethics are new, and called for a more conciliatory tone in conversations about how AI can harm people. Google created its AI ethics principles in 2018, shortly after thousands of employees opposed the company’s participation in the U.S. military’s Project Maven.

“There’s a lot of dissension, a lot of conflict in terms of trying to standardize a normative definition of these principles and whose definition of fairness and safety we are going to use. There’s a lot of conflict in the field right now, and it can be polarizing at times. What I’d like is for people to have the conversation in a more diplomatic way, perhaps, so that we can truly advance this field,” said Croak.

Croak said the new center will work internally to assess AI systems that are being deployed or designed, then “partner with our colleagues and PAs [product areas] and mitigate potential harm.”

The Gebru episode prompted some AI researchers to pledge not to review papers from Google Research until changes were made. Shortly after Google fired Gebru, Reuters reported that the company had asked its researchers to strike a positive tone when addressing subjects deemed “sensitive topics.”

Croak’s appointment represents the latest controversial development at the top of the AI ethics ranks at Google Research and DeepMind, which Google acquired in 2014. Last month, a Wall Street Journal report found that DeepMind co-founder Mustafa Suleyman was stripped of management roles before leaving the company in 2019, following complaints that he had bullied coworkers. Suleyman also served as head of ethics at DeepMind, where he addressed issues such as climate change and health. Months later, Google hired Suleyman to work as a consultant on policy and regulatory matters.

How Google behaves when it comes to using AI responsibly and guarding against forms of algorithmic oppression matters immensely, both because AI adoption is growing in business and society and because Google is the world leader in the production of published AI research. A study published last fall found that Big Tech companies treat the funding of AI ethics research in a way analogous to how Big Tobacco funded health research decades ago.

VentureBeat contacted Google to ask about steps to reform internal practices, questions raised by Google employees, and a number of other matters. This story will be updated if we hear back.

