Rally for representation in AI.

Happy Friday, and welcome back to the Anti-Racism Daily! I’ve watched this story unfold over the past week and have seen so many topics we’ve touched on in this newsletter converge into one story. Read about the injustices against Dr. Timnit Gebru and their implications in tech, and consider how you can protect critical voices in your own industry or area of passion.

Tomorrow's newsletter is our weekly Study Hall, where I answer questions and share insights from the community. Reply to this email to ask yours.

And thank you all for your generous support! Because of you, we can offer this newsletter free of charge and also pay our staff of writers and editors. Join in by making a one-time gift on our website or PayPal, or subscribe for $7/month on Patreon. You can also Venmo (@nicoleacardoza). To subscribe, go to antiracismdaily.com.

Nicole


GET EDUCATED


By Nicole Cardoza (she/her)

Dr. Timnit Gebru is a well-respected leader in the field of ethical A.I., which is committed to making artificial intelligence more inclusive and representative of our diverse population. She co-authored a groundbreaking paper showing that facial recognition is less accurate at identifying women and people of color. She co-founded the Black in AI affinity group. And she was the co-leader of Google’s Ethical A.I. team – that is, until the company abruptly forced her out (Dr. Timnit Gebru’s Twitter).

Many leaders in the field believe that her termination may have been retaliation for a research paper she was writing with colleagues that outlined some of the inequities of large language models – the A.I. systems trained on massive bodies of text data. As a result, more than 2,000 Googlers and over 3,700 supporters in academia and industry have signed a petition supporting Gebru and calling what happened to her “a retaliatory firing” and a case of “unprecedented research censorship.”

MIT Technology Review was allowed to publish some of the paper’s core findings, all of which offer critical insights for making A.I. more inclusive. The paper notes the environmental and financial costs of running large data systems, and how large datasets are difficult to audit for embedded biases. It warns that these language models might not understand the context of words wielded for racist or sexist purposes. It emphasizes that communities with less of a public lexicon than the dominant culture won’t have an equal share of voice, meaning that their perspectives will be lost in the algorithms. And it warns how A.I. can be wielded to cause harm by impersonating real people or misconstruing their words. Read the full overview in MIT Technology Review.


Although the company may have viewed these topics as controversial, they’re certainly not new. Many researchers – including Gebru – have been advocating for the development and implementation of A.I. to be more inclusive, equitable, and accountable. Dr. Safiya U. Noble, author and assistant professor at the University of Southern California, has penned several pieces on the bias of algorithms, including a piece on how horribly “Black girls” were depicted in Google search results (Time). Author Rashida Richardson published a study on how police precincts that have engaged in “corrupt, racially biased, or otherwise illegal” practices contribute their data to predictive models that are taught to perpetuate the same harm (SSRN). We’ve covered the inequities in facial recognition software in a previous newsletter. As Deborah Raji notes in her article in MIT Technology Review, many people like to say that the “data doesn’t lie.” But it does, often centering a white, male perspective on issues that should reflect all of us – and disproportionately harming marginalized communities.

"
The fact is that AI doesn’t work until it works for all of us.

Deborah Raji, a Mozilla fellow interested in algorithmic auditing and evaluation, for MIT Technology Review

But how are we expected to hold the industry accountable if it won’t make that commitment itself? The controversy surrounding Gebru’s termination isn’t isolated; it’s one of many calls for Google’s accountability. Just a few weeks ago, the National Labor Relations Board accused Google of violating workplace rights by spying on, interrogating, and firing workers (Ars Technica). And according to Google’s 2020 Diversity and Inclusion report, only 24.7% of its technical workforce are women, and just 2.4% are Black.

And similar stories are heard across Big Tech. Facebook has been pushed repeatedly to account for racial bias, hateful rhetoric, and election misinformation on its platform, and has recently announced new efforts that still fall short. Employees have rallied for accountability, staging walkouts and other protests (CBS News).

The unfair treatment Gebru has experienced only further exemplifies the point. It doesn’t just detract from the work that she and her team have been doing. It’s a direct statement on the value of Black women and their worth in technology – indeed, a clear demonstration of some of the systemic barriers that got us to this point. And I want to underline this because it’s indicative of many conversations we have in this newsletter: the challenges that people of color, particularly Black people, experience when they are actively working to reshape oppressive systems.

"
We’re not creating technology in our own imagination. They create technology in their imagination to serve their interest, it harms our communities, and then we have to perform cleanup. Then while we’re performing cleanup, we get retaliated against.

Timnit Gebru, in an interview with VentureBeat written by Khari Johnson

Google CEO Sundar Pichai apologized for the situation (Axios). I highly recommend reading the apology and Gebru’s response to it alongside the points made in our newsletter on apologies. Gebru also references gaslighting, which we’ve broken down in another newsletter. But the damage is already done. Google has lost a prolific leader in A.I. ethics, and many have lost their faith in the company. The episode also paints a disturbing picture of how major corporations can attempt to silence individuals whose voices are necessary for us to move into a more equitable future.


KEY TAKEAWAYS


  • Dr. Timnit Gebru, a leading researcher in ethical A.I., was unfairly terminated from her position at Google.

  • A.I. has been known to misrepresent or harm marginalized communities because of a lack of representation and accountability in Big Tech.

  • It's important that we protect those trying to reshape inequitable systems, especially when they represent marginalized communities.


PLEDGE YOUR SUPPORT


Thank you for all your financial contributions! If you haven't already, consider making a monthly donation to this work. These funds help us operationalize our efforts for the greatest impact.

Subscribe on Patreon | Give one-time on PayPal | Venmo @nicoleacardoza
