Google to change research review process after turmoil over firing of scientists | Technology

Google will change its procedures for reviewing its scientists' work before July, according to a town hall recording heard by Reuters, part of an effort to quell internal turmoil over the integrity of its artificial intelligence (AI) research.

In remarks at a staff meeting last Friday, Google Research executives said they were working to regain trust after the company fired two prominent women and dismissed their work, according to an hour-long recording, the content of which was confirmed by two sources.

Teams are already trialing a questionnaire that will assess projects for risk and help scientists navigate reviews, Maggie Johnson, the research unit's head of operations, said at the meeting. This initial change would be rolled out by the end of the second quarter, and most papers would not require extra vetting, she said.

Reuters reported in December that Google had introduced a “sensitive topics” review for studies involving dozens of issues, such as China or bias in its services. Internal reviewers had demanded that at least three AI papers be modified to refrain from casting Google’s technology in a negative light, Reuters reported.

Jeff Dean, the Google senior vice president who oversees the division, said on Friday that the “sensitive topics” review “is and was confusing” and that he had tasked a senior research director, Zoubin Ghahramani, with clarifying the rules, according to the recording.

Ghahramani, a professor at the University of Cambridge who joined Google in September from Uber, said during the town hall, “We need to be comfortable with that discomfort” of self-critical research.

Google declined to comment on Friday’s meeting.

An internal email, seen by Reuters, offered new details of Google researchers’ concerns, showing exactly how Google’s legal department had modified one of the three AI papers, called “Extracting Training Data from Large Language Models”.

The email, dated February 8, from a co-author of the paper, Nicholas Carlini, was sent to hundreds of colleagues, seeking to draw their attention to what he called the “deeply insidious” edits by company lawyers.

“Let’s be clear here,” said the roughly 1,200-word email. “When we as academics write that we have a ‘concern’ or find something ‘worrying’ and a Google lawyer requires that we change it to sound nicer, this is very much Big Brother stepping in.”

Required edits, according to his email, included “negative-to-neutral” swaps such as changing the word “concerns” to “considerations” and “dangers” to “risks”. Lawyers also required deleting references to Google’s technology; the authors’ finding that AI leaked copyrighted content; and the words “breach” and “sensitive”, the email said.

Carlini did not respond to requests for comment. Google, in response to questions about the email, disputed its contention that lawyers were trying to control the paper’s tone. The company said it had no issues with the topics the paper investigated, but found some legal terms used imprecisely and conducted a thorough edit as a result.

Racial equity audit

Last week, Google also appointed Marian Croak, a pioneer in internet audio technology and one of Google’s few Black vice presidents, to consolidate and manage 10 teams that study issues such as racial bias in algorithms and technology for people with disabilities.

Croak said at Friday’s meeting that it would take time to address concerns among AI ethics researchers and to mitigate the damage to Google’s brand. “Please hold me fully responsible for trying to turn this situation around,” she said on the recording.

Johnson added that the AI organization was hiring a consulting firm to conduct a wide-ranging racial equity impact assessment. The audit, a first for the department, would lead to recommendations “that are going to be pretty hard,” she said.

Tensions in Dean’s division deepened in December after Google fired Timnit Gebru, co-leader of its AI ethics research team, following her refusal to retract a paper on language-generating AI. Gebru, who is Black, accused the company at the time of reviewing her work differently because of her identity and of marginalizing employees from underrepresented backgrounds. Nearly 2,700 Google employees signed an open letter in support of Gebru.

During the town hall, Dean elaborated on what kind of research the company would support. “We want responsible AI and ethical AI investigations,” Dean said, giving the example of studying the environmental costs of technology. But he said it was problematic to cite data that was off “by close to a factor of a hundred” while ignoring more accurate statistics, as well as Google’s efforts to reduce emissions. Dean had previously criticized Gebru’s paper for not including important findings on environmental impact.

Gebru defended her paper’s citation. “It’s a really bad look for Google to come out this defensively against a paper that was cited by so many of their peer institutions,” she told Reuters.

Employees have continued to post on Twitter about their frustrations over the past month, as Google investigated and then fired Margaret Mitchell, a co-lead of its AI ethics team, for moving electronic files outside the company. Mitchell said on Twitter that she had acted “to raise concerns about race and gender inequity, and to speak up about Google’s problematic firing of Dr. Gebru”.

Mitchell had contributed to the paper that prompted Gebru’s departure, and a version published online last month without Google affiliation named “Shmargaret Shmitchell” as a co-author.

Asked for comment, Mitchell, through a lawyer, expressed disappointment at Dean’s criticism of the paper and said her name had been removed at the company’s behest.
