Google allegedly told AI scientists to ‘strike a positive tone’ in research


Photograph: Leon Neal / Staff (Getty Images)

Nearly three weeks after the abrupt departure of Timnit Gebru, a Black specialist in the ethics of artificial intelligence, more details are emerging about the new and opaque set of policies Google has introduced for its research team.

After reviewing internal communications and speaking with researchers affected by the rule change, Reuters reported on Wednesday that the tech giant recently added a “sensitive topics” review process for its scientists’ papers and, on at least three occasions, explicitly asked scientists to refrain from casting Google’s technology in a negative light.

Under the new procedure, scientists are required to meet with special legal, policy and public relations teams before conducting AI research on so-called sensitive topics, which may include facial analysis and the categorization of race, gender or political affiliation.

In one example reviewed by Reuters, scientists studying the recommendation AI used to populate user feeds on platforms like YouTube, a Google property, wrote a paper detailing concerns that the technology could be used to promote “misinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content,” in addition to leading to “political polarization.” After review by a senior manager who instructed the researchers to adopt a more positive tone, the final publication instead suggests that such systems can promote “accurate information, fairness, and diversity of content.”

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly harmless projects raise ethical, reputational, regulatory or legal issues,” says an internal web page describing the policy.

In recent weeks, and particularly since the departure of Gebru, a widely known researcher who reportedly fell out of favor with her bosses after raising the alarm about censorship creeping into the research process, Google has faced increasing scrutiny over potential bias in its internal research division.

Four researchers on the team who spoke to Reuters corroborated Gebru’s claims, saying they also believe that Google is beginning to interfere with studies critical of the technology’s potential to cause harm.

“If we are researching what is appropriate given our expertise, and we are not permitted to publish it on grounds that are not in line with high-quality peer review, then we are facing a serious problem of censorship,” said Margaret Mitchell, a senior scientist at the company.

In early December, Gebru said she had been fired by Google after objecting to an order not to publish research arguing that AI capable of mimicking speech could put marginalized populations at a disadvantage.