Google told scientists to use ‘a positive tone’ in AI research, documents show


This year, Google moved to tighten control over its scientists’ papers, launching a review of “sensitive topics” and, in at least three cases, asking authors to refrain from casting its technology in a negative light, according to internal communications and interviews with researchers involved in the work.

Google’s new review procedure asks researchers to consult with legal, policy and public relations teams before addressing topics such as face and sentiment analysis and categorizations of race, gender or political affiliation, according to internal pages explaining the policy.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” stated one of the pages addressed to research staff. Reuters was unable to determine the posting date, although three current employees said the policy began in June.

Google declined to comment on this story.

The “sensitive topics” process adds a round of scrutiny to Google’s standard review of documents for pitfalls such as the disclosure of trade secrets, said eight current and former employees.

For some projects, Google staff intervened at later stages. A senior Google manager, reviewing a study on content recommendation technology shortly before publication this summer, told the authors to “take great care to strike a positive tone”, according to internal correspondence read to Reuters.

The manager added: “This does not mean that we must hide from the real challenges” presented by the software.

Subsequent correspondence from a researcher to the reviewers shows the authors “updated to remove all references to Google products”. A draft seen by Reuters had mentioned Google-owned YouTube.

Four researchers, including senior scientist Margaret Mitchell, said they believed Google was beginning to interfere with crucial studies of the technology’s potential harms.

“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” said Mitchell.

Google says on its public-facing website that its scientists have “substantial” freedom.

Tensions between Google and some of its employees broke into view this month after the abrupt departure of scientist Timnit Gebru, who co-led a 12-person team with Mitchell focused on ethics in artificial intelligence (AI) software.

Gebru says Google fired her after she questioned an order not to publish research alleging that speech-imitating AI could harm marginalized populations. Google said it accepted and expedited her resignation. It was not possible to determine whether Gebru’s article underwent a “sensitive topics” review.

Jeff Dean, Google’s senior vice-president, said in a statement this month that Gebru’s article dwelled on potential harms without discussing ongoing efforts to address them.

Dean added that Google supports AI ethics scholarship and is “actively working to improve our paper review processes, because we know that too many checks and balances can become cumbersome”.

Sensitive topics

The explosion in AI research and development across the technology industry has prompted authorities in the United States and elsewhere to propose rules for its use. Some have cited scientific studies showing that facial analysis software and other AI can perpetuate bias or erode privacy.

In recent years, Google has incorporated AI into all of its services, using the technology to interpret complex search queries, decide recommendations on YouTube and auto-complete phrases in Gmail. Its researchers published more than 200 articles last year on responsible AI development, out of more than 1,000 projects in total, Dean said.

Studying Google’s services for bias is among the “sensitive topics” under the company’s new policy, according to an internal webpage. Among dozens of other “sensitive topics” listed are the oil industry, China, Iran, Israel, Covid-19, home security, insurance, location data, religion, autonomous vehicles, telecommunications and systems that recommend or personalize web content.

The Google article for which the authors were instructed to adopt a positive tone discusses recommendation AI, which services like YouTube employ to personalize users’ content feeds. A draft reviewed by Reuters included “concerns” that this technology could promote “misinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content”, as well as lead to “political polarization”.

The final publication instead states that the systems can promote “accurate information, fairness and diversity of content”. The published version, entitled What are you optimizing for? Aligning recommender systems with human values, omitted credit to Google researchers. Reuters was unable to determine why.

An article this month on AI for understanding a foreign language softened a reference to mistakes made by the Google Translate product after a request from the company’s reviewers, a source said. The published version says the authors used Google Translate, and a separate sentence says part of the research method was to “proofread and fix inaccurate translations”.

For an article published last week, a Google employee described the review process as a “long haul”, involving more than 100 email exchanges between researchers and reviewers, according to internal correspondence.

The researchers found that the AI could spit out personal data and copyrighted material, including a page from a Harry Potter novel, that had been scraped from the internet to develop the system.

A draft described how such disclosures could infringe copyright or violate European privacy law, said a person familiar with the matter. After the company’s review, the authors removed the references to legal risks, and Google published the article.
