
Uncover racial bias in photography.


Good morning and happy Tuesday! When the YouTube iOS app was first released, about 10% of users were somehow uploading their videos upside-down. Engineers were puzzled until they took a closer look – they had inadvertently designed the app for right-handed users only. Left-handed users hold their phones rotated 180 degrees, and because the team was predominantly right-handed, the flaw went unnoticed in internal testing (Google).

This unconscious bias is prevalent in much of the technology we use right now. Today, Nia outlines the role that bias has played in the history of photography technology.

Thank you for keeping this independent platform going. In honor of our anniversary, become a monthly subscriber on our website or Patreon this week and we'll send you some swag! You can also give one-time on Venmo (@nicoleacardoza), PayPal or our website.

– Nicole


TAKE ACTION


  • Read about the exclusionary history of photography, the lack of diversity at tech companies, and racial bias in their products today.

  • If you are a STEM employer, ensure that you are hiring people of color for the development of new technology.

  • Buy technology from companies that are actively working to develop more inclusive hardware and software.


GET EDUCATED


By Nia Norris (she/her)

The word inclusivity may not immediately come to mind when we think about camera design. After all, cameras do the job they have done for years: they capture the image in front of them so we can keep a piece of the moment. However, if you have noticed that it is often harder to take photos of more melanated individuals, you might be onto something. Google and Snapchat both recently announced that they are redesigning their cameras to be more inclusive of individuals who have darker skin (The Verge, Muse). But what does this mean?

Cameras have historically been calibrated for lighter skin. When color film was developed, the first model to pose for camera calibration in photo labs was a woman named Shirley Page. After that, all color calibration cards were nicknamed “Shirley cards.” For decades, the “Shirley cards” featured only white women and were labeled “normal.” It wasn’t until the 1970s that Kodak started testing cards with Black women (NPR). The company released Kodak Gold Max, a film advertised as being able to photograph “a dark horse in low light” – a thinly veiled promise of capturing subjects of color in a flattering way (NYTimes).

Although digital photography has led to some advancements, like dual skin-tone color balancing, it can still be a challenge to photograph individuals with a darker skin tone in artificial light. There are special tricks that cinematographers and photographers use for shooting darker skin despite these technological limitations, such as using a reflective moisturizer (NYTimes). Snapchat’s camera filters have been criticized as “whitewashed,” with Black individuals pointing out that the Snapchat camera makes their faces look lighter (The Cut). Snapchat has also released culturally insensitive camera filters including a Juneteenth filter encouraging users to “break the chains” and a Bob Marley filter that amounted to digital blackface (Axios).

After taking heat for digital whitewashing, Snapchat has enlisted Hollywood directors of photography to build what it calls an “inclusive camera,” an effort led by software engineer Bertrand Saint-Preaux that aims to ease the dysphoria Black users may feel after taking selfies through the app. The work includes adjusting camera flash and illumination variations to produce a more realistic portrait of users of color (Muse). Similarly, Google is changing the auto-white-balance algorithms in the Pixel camera and creating a more accurate depth map for curly and wavy hair types (The Verge). Apple started down this path when it developed the iPhone X in 2017 (Engadget).
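
To make that concrete: “auto white balance” is typically a heuristic that estimates the scene’s lighting and rescales the color channels to compensate. Below is a minimal Python sketch of the textbook gray-world version of that idea – purely an illustration, not the algorithm Google or Snapchat actually ship; the function name and the assumption of pixel values in [0, 1] are mine. Heuristics like this only work well for the scenes and skin tones they were tuned on, which is exactly why calibration choices matter.

    import numpy as np

    def gray_world_white_balance(image):
        """Illustrative gray-world auto white balance (not any vendor's real pipeline).

        image: H x W x 3 float array with values in [0, 1].
        """
        channel_means = image.reshape(-1, 3).mean(axis=0)       # average R, G, B across the frame
        gains = channel_means.mean() / (channel_means + 1e-8)   # rescale each channel toward the global mean
        return np.clip(image * gains, 0.0, 1.0)                 # keep values in the valid range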

It’s not just the quality of photography that needs to change. We must also consider bias in the way that AI analyzes images. Twitter’s “saliency algorithm” has come under fire for racial bias in its preview crops of photos: in user tests, the automatic crop favored white faces over Black faces regardless of where each face appeared in the image. Twitter is now planning to remove algorithmic cropping from the site entirely in response (BBC).
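
For readers curious about the mechanism, automatic cropping of this kind usually works by scoring every pixel with a saliency model and then cutting a window around the highest-scoring region. Here is a hypothetical Python sketch of that step – the function name and fixed crop size are mine, and this is not Twitter’s actual code. Whatever the saliency model scores highest ends up centered in the preview, so any racial bias in those scores carries straight through to which face gets shown.

    import numpy as np

    def crop_around_saliency_peak(image, saliency, crop_h, crop_w):
        """Cut a crop_h x crop_w window centered on the most salient pixel.

        image: H x W x 3 array; saliency: H x W score map from any saliency model.
        """
        h, w = saliency.shape
        y, x = np.unravel_index(np.argmax(saliency), saliency.shape)  # most salient point
        top = int(np.clip(y - crop_h // 2, 0, h - crop_h))            # keep the window inside the image
        left = int(np.clip(x - crop_w // 2, 0, w - crop_w))
        return image[top:top + crop_h, left:left + crop_w]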


Twitter’s decision is not the first time a company has simply removed an AI feature rather than redeveloping it to be more inclusive. In 2015, users pointed out that Google Photos was labeling Black individuals as “gorillas.” Instead of fixing the model, Google simply removed “gorillas” from its recognition software. In 2018, Wired followed up by testing photos of animals: although Google Photos could reliably identify many types of animals, there were simply no search results for “gorillas,” “chimps,” “chimpanzees,” or “monkeys” (Wired). Less than 1% of Google’s technical workforce is Black (NBC News).
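
The Google Photos stopgap is, in effect, a blocklist at the product layer rather than a change to the underlying model. Here is a hypothetical sketch of what such a blocklist looks like – the names are illustrative, not Google’s actual code. The classifier still produces these labels internally; the product simply refuses to surface them.

    BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

    def filter_predictions(predictions):
        """Drop blocked labels from a classifier's output instead of retraining it.

        predictions: list of (label, confidence) pairs from any image classifier.
        """
        return [(label, score) for label, score in predictions
                if label.lower() not in BLOCKED_LABELS]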

Since photography is almost exclusively digital at this point, companies will hopefully take more initiative to develop cameras that capture people of color accurately and flatteringly. We also need to adopt inclusive AI practices to ensure everyone is treated equally on social media. When we seek to develop inclusive tech, people of color need a seat at the table to help ensure that both the software and hardware we use are not racially biased.


Key Takeaways


  • Since film photography was developed, cameras have historically favored white individuals.

  • Currently, tech companies are working to develop more inclusive cameras after criticism from people of color.

  • The way we consume photography is also shaped by bias in how algorithms and AI surface photographs on social media.





PLEDGE YOUR SUPPORT


Thank you for all your financial contributions! If you haven't already, consider making a monthly donation to this work. These funds will help me operationalize this work for greatest impact.

Subscribe on Patreon | Give one-time on PayPal | Venmo @nicoleacardoza


Rally for representation in AI.


Happy Friday, and welcome back to the Anti-Racism Daily! I’ve watched this story unfold over the past week and have seen so many of the topics we’ve touched on in this newsletter converge into one story. Read about the injustices against Dr. Timnit Gebru and their implications for tech, and consider how you can protect critical voices in your own industry or area of passion.

Tomorrow's newsletter is our weekly Study Hall, where I answer questions and share insights from the community. Reply to this email to ask yours.

And thank you all for your generous support! Because of you, we can offer this newsletter free of charge and also pay our staff of writers and editors. Join in by making a one-time gift on our website or PayPal, or subscribe for $7/month on Patreon. You can also Venmo (@nicoleacardoza). To subscribe, go to antiracismdaily.com.

Nicole


TAKE ACTION



GET EDUCATED


By Nicole Cardoza (she/her)

Dr. Timnit Gebru is a well-respected leader in the field of ethical A.I., an industry that’s committed to making artificial intelligence more inclusive and representative of our diverse population. She co-authored a groundbreaking paper that showed facial recognition to be less accurate at identifying women and people of color. She co-founded the Black in AI affinity group. And she was the co-leader of Google’s Ethical A.I. team – that is, until they abruptly forced her out of the company (Dr. Timnit Gebru’s Twitter).

Many leaders in the field indicate that her termination may have been prompted by a research paper she was writing with her colleagues that outlined some of the inequities of large language models – the massive A.I. systems trained on enormous bodies of text. As a result, more than 2,000 Googlers and over 3,700 supporters in academia and industry have signed a petition supporting Gebru and calling what happened to her “a retaliatory firing” and a case of “unprecedented research censorship.”

MIT Technology Review was allowed to publish some of the core findings, and they are all critical insights for making A.I. more inclusive. The paper notes the environmental and financial costs of running large data systems and how large training datasets are difficult to audit for embedded biases. It warns that these language models might not understand the context of words when wielded for racist or sexist purposes. It emphasizes that communities with less of a public lexicon than the dominant culture won’t have an equal share of voice, meaning that their perspectives will be lost in the algorithms. And it warns how A.I. can be wielded to cause harm by impersonating real people or misconstruing their words. Read the full overview in MIT Technology Review.


Although the company may have viewed these topics as controversial, they’re certainly not new. Many researchers – including Gebru – have been advocating for the development and implementation of A.I. to be more inclusive, equitable, and accountable. Dr. Safiya U. Noble, author and assistant professor at the University of Southern California, has penned several pieces on the bias of algorithms, including this piece on how horribly “Black girls” were depicted in Google search results (Time). Author Rashida Richardson published a study on how police precincts that have engaged in “corrupt, racially biased, or otherwise illegal” practices contribute their data to predictive models that are taught to perpetuate the same harm (SSRN). We’ve covered the inequities in facial recognition software in a previous newsletter. As Deborah Raji notes in her article in MIT Technology Review, many people like to say that the “data doesn’t lie.” But it does, often centering a white, male perspective on issues that should reflect all of us – and disproportionately harming marginalized communities.

"
The fact is that AI doesn’t work until it works for all of us.

Deborah Raji, a Mozilla fellow interested in algorithmic auditing and evaluation, for MIT Technology Review

But how are we expected to hold the industry accountable if companies won’t make that commitment themselves? The controversy surrounding Gebru’s termination isn’t isolated, but one of many calls for Google’s accountability. And just a few weeks ago, the National Labor Relations Board accused Google of violating workplace rights by spying on, interrogating, and firing workers (Ars Technica). According to its 2020 Diversity and Inclusion report, only 24.7% of Google’s technical workforce are women, and 2.4% are Black.

And similar stories are heard across Big Tech. Facebook has been pushed repeatedly to account for racial bias, hateful rhetoric, and election misinformation on its platform, and has recently announced new efforts that still fall short. Employees have rallied for accountability, staging walkouts and other protests (CBS News).

The unfair treatment that Gebru has experienced only further exemplifies the point. It doesn’t just deflect from the facts that she and her team have been working on. It’s a direct statement on the value of Black women and their worth in technology; indeed, a clear demonstration of some of the systemic barriers that got us to this point. And I want to underline this because it’s indicative of many conversations we have in this newsletter – the challenges that people of color, particularly Black people, experience when they are actively working to reshape oppressive systems.

"
We’re not creating technology in our own imagination. They create technology in their imagination to serve their interest, it harms our communities, and then we have to perform cleanup. Then while we’re performing cleanup, we get retaliated against.

Timnit Gebru, in an interview with VentureBeat written by Khari Johnson

Google CEO Sundar Pichai apologized for the situation (Axios). I highly recommend reading the apology and Gebru’s response to it through the lens of the points made in our newsletter on apologies. Gebru also references gaslighting, which we’ve broken down in another newsletter. But the damage is already done. Google has lost a prolific leader in AI ethics, and many have lost their faith in the company. The episode also paints a disturbing picture of how major corporations can attempt to silence individuals whose voices are necessary for us to move into a more equitable future.


KEY TAKEAWAYS


  • Dr. Timnit Gebru, a leading researcher in ethical A.I., was unfairly terminated from her position at Google.

  • A.I. has been known to misrepresent or harm marginalized communities because of a lack of representation and accountability in Big Tech.

  • It's important that we protect those trying to reshape inequitable systems, especially when they represent marginalized communities.





PLEDGE YOUR SUPPORT


Thank you for all your financial contributions! If you haven't already, consider making a monthly donation to this work. These funds will help me operationalize this work for greatest impact.

Subscribe on Patreon | Give one-time on PayPal | Venmo @nicoleacardoza
