Google reportedly asked employees to strike a "positive tone" in AI research papers

Google has added a layer of review for research papers on sensitive topics, including gender, race, and political ideology. A senior manager also instructed researchers to "strike a positive tone" in a paper this summer. The news was first reported by Reuters.

"Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues," the policy reads. Three employees told Reuters the rule began in June.

The company has also asked employees to "refrain from casting its technology in a negative light" on several occasions, Reuters said.

Employees working on a paper about recommendation AI, which is used to personalize content on platforms like YouTube, were told to "take great care to strike a positive tone," according to Reuters. The authors later updated the paper to "remove all references to Google products."

Another paper, on using AI to understand foreign languages, "softened a reference to how the Google Translate product was making mistakes," Reuters wrote. The change came in response to a request from reviewers.

Google's standard review process aims to ensure that researchers don't inadvertently reveal trade secrets. But the "sensitive topics" review goes beyond that. Employees who wish to evaluate Google's own services for bias are asked to consult the legal, public relations, and policy teams first. Other sensitive topics reportedly include China, the oil industry, location data, religion, and Israel.

The search giant's publication process has been in the spotlight since the departure of AI ethics researcher Timnit Gebru in early December. Gebru said she was fired over an email she sent to the Google Brain Women and Allies mailing list, an internal group of Google AI research employees. In it, she described Google managers pressuring her to retract a paper on the dangers of large-scale language models. Jeff Dean, Google's head of AI, said the paper was submitted too close to its deadline. But Gebru's own team disputed that claim, saying the deadline policy was applied "unevenly and discriminatorily."

Gebru contacted Google's public relations and policy team in September about the paper, according to The Washington Post. She knew the company might take issue with certain aspects of the research, since Google uses large language models in its own search engine. The deadline for making changes to the paper wasn't until the end of January 2021, giving the researchers time to address any concerns.

A week before Thanksgiving, however, Megan Kacholia, a vice president at Google Research, asked Gebru to retract the paper. The following month, Gebru was fired.

Google did not immediately respond to a request for comment from The Verge.
