Timnit Gebru’s departure from Google exposes a crisis in AI

This year has been many things, among them a year of bold claims about artificial intelligence breakthroughs. Industry commentators speculated that the language-generation model GPT-3 may have achieved “artificial general intelligence,” while others lauded Alphabet subsidiary DeepMind’s protein-folding algorithm, AlphaFold, and its capacity to “transform biology.” While the basis of such claims is thinner than the effusive headlines suggest, this hasn’t done much to dampen enthusiasm across the industry, whose profits and prestige depend on AI’s proliferation.

It was against this backdrop that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the work Google hired her to do, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for datasets and AI models. Ironically, this work, along with her vocal advocacy for those underrepresented in AI research, is also the reason, she says, the company fired her. According to Gebru, after demanding that she retract a research paper she coauthored that was critical of large-scale (and profitable) AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she had not resigned. (Google declined to comment on this story.)

Google’s appalling treatment of Gebru exposes a dual crisis in AI research. The field is dominated by an elite, primarily white male workforce, and it is controlled and funded mainly by major industry players: Microsoft, Facebook, Amazon, IBM, and, yes, Google. Gebru’s firing shattered the veneer of civility around the fledgling effort to build necessary guardrails for AI, pushing questions about the racial homogeneity of the AI workforce and the ineffectiveness of corporate diversity programs to the center of the discourse. But the episode also made clear that, however sincere a company like Google’s promises may seem, corporate-funded research can never be divorced from the realities of power and the flows of revenue and capital.

This should concern us all. With the proliferation of AI into domains such as healthcare, criminal justice, and education, researchers and advocates are raising urgent questions. These systems make determinations that directly shape lives, even as they are embedded in institutions structured to reinforce histories of racial discrimination. AI systems also concentrate power in the hands of those who design and use them, while obscuring responsibility (and liability) behind the veneer of complex computation. The risks are profound and the incentives decidedly perverse.

The current crisis exposes the structural barriers limiting our ability to build effective protections around AI systems. This is especially important because the populations subject to harm and bias from AI’s predictions and determinations are primarily BIPOC people, women, religious and gender minorities, and the poor: those who have borne the brunt of structural discrimination. Here we see a clear racialized divide between those who benefit, namely corporations and the mostly white male researchers and developers, and those most likely to be harmed.

Take, for example, facial recognition technologies, which have been shown to fail to “recognize” darker-skinned faces more frequently than lighter-skinned ones. That alone is alarming. But these racialized “errors” are not the only problems with facial recognition. Tawana Petty, director of organizing for Data for Black Lives, points out that these systems are disproportionately deployed in predominantly Black neighborhoods and cities, while the cities that have succeeded in banning and pushing back against facial recognition are predominantly white.

Without independent, critical research that centers the perspectives and experiences of those who bear the harms of these technologies, our ability to understand and contest the industry’s overhyped claims is significantly hampered. Google’s treatment of Gebru makes increasingly clear where the company’s priorities seem to lie when critical work pushes back against its business incentives. This makes it almost impossible to ensure that AI systems are accountable to the people most vulnerable to their harms.

Checks on the industry are further compromised by the close ties between technology companies and ostensibly independent academic institutions. Researchers from corporations and universities publish papers together and rub elbows at the same conferences, with some researchers even holding concurrent positions at technology companies and universities. This blurs the boundary between academic and corporate research and obscures the incentives underwriting such work. It also means the two groups look awfully similar: AI research in academia suffers from the same pernicious problems of racial and gender homogeneity as its corporate counterparts. Moreover, top computer science departments accept copious amounts of Big Tech research funding. We need only look to Big Tobacco and Big Oil for troubling templates that expose just how much influence over the public understanding of complex scientific issues large companies can exert when knowledge creation is left in their hands.
