Google scientists reportedly told to make AI look more “positive” in research papers

Google Headquarters in Mountain View, California

Google is reportedly telling its research scientists to cast AI in a more positive light.

Stephen Shankland / CNET

Alphabet, Google’s parent company, has been asking its scientists to make sure AI technology is portrayed more “positively” in their research papers, according to a Reuters report published Wednesday. A new review procedure is reportedly in place requiring researchers to consult Google’s legal, policy or public relations teams for a “review of sensitive topics” before pursuing subjects such as facial analysis and categorizations of race, gender or political affiliation.

“Advances in technology and the increasing complexity of our external environment are increasingly leading to situations where seemingly harmless projects raise ethical, reputational, regulatory or legal issues,” reads one of the internal pages describing the policy, according to Reuters.

Read More: Google CEO apologizes for the handling of the departure of AI researcher Timnit Gebru

Other Google authors were told to “take great care to create a positive tone,” according to internal correspondence shared with Reuters.

The report follows Google CEO Sundar Pichai apologizing earlier this month for the company’s handling of the departure of AI researcher Timnit Gebru and saying the matter would be investigated. Gebru left Google on December 4, saying she had been forced out of the company over an email she sent to co-workers.

The email criticized Google’s Diversity, Equity and Inclusion operation, according to Platformer, which published the full text of the message. Gebru said in the email that she had been asked to retract a research paper she was working on after receiving feedback on it.

“You aren’t supposed to know who contributed to this document, who wrote that feedback, what process was followed or anything,” she wrote in the email. “You write a detailed document discussing any feedback you can find, asking for questions and clarifications, and it is completely ignored.

“Silencing marginalized voices like this is the opposite of the NAUWU principles we discussed. And doing that in the context of ‘responsible AI’ adds a lot of salt to the wounds,” she added. NAUWU stands for “nothing about us without us,” the idea that policies should not be made without input from the people they affect.

Google did not immediately respond to a request for comment.
