Advocate for equitable gaming.
TAKE ACTION
Support the work of Games for Change, a nonprofit that empowers game creators and social innovators to drive real-world impact through games and immersive media.
Explore a list of accessible games for kids that portray diverse characters and stories.
Learn more about the accessibility of video games on the site Can I Play That?
GET EDUCATED
By Nicole Cardoza (she/her)
Earlier this month, the state of California sued Activision Blizzard after a two-year investigation confirmed the organization had maintained a toxic workplace environment that’s hostile to women (Polygon). The company, which has created blockbuster games like World of Warcraft, Diablo, and Call of Duty, initially denied the allegations, causing tensions to flare with consumers and employees alike. Hundreds of employees walked out to protest the company’s response (The Verge). The details of this particular lawsuit are incendiary but certainly not unique. There’s been a persistent lack of representation and inclusivity in the video game industry since its inception, which has shaped how the industry treats marginalized employees and consumers.
Toxic workplaces contribute to the lack of representation in video games, both on and off the screen. Approximately 24% of the workforce in the video gaming industry is female (Forbes). 7% of game developers identify as Latinx, and only 2% as Black or African-American (Venture Beat). These racial disparities reflect broader inequities found across the tech industry and similarly affect the diversity reflected in the product (Fortune). It’s not just racial and gender disparities, however. People with disabilities are also woefully underrepresented in the workforce. This is particularly damaging, considering that 92% of people with impairments play video games, and there are more than 33 million gamers with disabilities in the U.S. alone (Venture Beat).
When considering gameplay, 79.2% of lead characters in games are male, over half are white, and just 8.3% of main characters are female characters of non-white ethnicities (Newsweek). Many games portray negative and outdated stereotypes of Middle Eastern people, often reducing characters to terrorist caricatures and blatantly appropriating language and culture (Axios). Another study notes that Black male characters are virtually always portrayed as violent, which influences conscious and unconscious bias. One study indicated that players were more likely to associate Black faces with negative words after playing a violent video game as a Black character than after playing one as a white character (TechCrunch). Often, efforts towards diversity are prompted by external calls for accountability, like adding darker-skinned representation to The Sims 4 or textured hairstyles to Animal Crossing: New Horizons (Allure).
These disparities need to be addressed, especially as the gaming industry continues to grow. The U.S. video game industry grew 20% over the past year due to the pandemic and is expected to reach $30B in annual revenue by the end of the year (IGC). And video game consumption over-indexes with Black and Latinx youth; in a few years, they’re expected to be the predominant audience (TechCrunch). This audience is also at least 40% women, shattering the outdated notion of who “the gamer” is (Forbes). This diverse group of consumers deserves accessibility and to see themselves accurately represented.
Moreover, video games are increasingly creating and cultivating culture. The latest version of Animal Crossing, released last spring on Nintendo Switch, became the virtual destination for weddings, protests, and political campaigning (The Verge). Ariana Grande just went on a virtual tour in the game Fortnite (The Verge). And just last week, Netflix announced plans to add video games to its streaming content within the next year (Bloomberg). Video games aren’t just a part of culture; they’re defining it – and will influence our collective perceptions of race, gender, class, and disability.
As consumers, we can’t necessarily change major video game companies from the inside out. But the rise of direct-to-consumer goods and the creator economy gives us an opportunity to support diverse gamers. First, spend more time researching the developers behind the games you play right now – you may be surprised by what you learn. Then, search for games created by indie (independent) developers on platforms like itch.io. And don’t be shy about advocating for representation within the games that you play! Sometimes, that simple action can make a world of difference.
Key Takeaways
Approximately 24% of the video game industry’s workforce is female, and Black and Latinx developers remain significantly underrepresented, which shapes who is depicted on screen.
The gaming audience is diverse and growing: at least 40% of gamers are women, and Black and Latinx youth are expected to become the predominant audience within a few years.
Consumers can push for change by researching the developers behind their games, supporting indie creators, and advocating for representation in the games they play.
Understand the TikTok strike.
Good morning and happy Wednesday! Don't overlook the Black creator TikTok strike – it may read as petty social media drama on the surface, but this organized response is a larger declaration for respect and representation in the growing creator economy. Learn more in today's newsletter! And follow us on TikTok if you haven't already.
Thank you for your support! This daily, free, independent newsletter is made possible by your support. Consider making a donation to support our work. You can start a monthly subscription on Patreon or our website, or give one-time using our website, PayPal, or Venmo (@nicoleacardoza).
– Nicole
TAKE ACTION
Learn more about the Black TikToker Strike by following the hashtag #BlackTikTokStrike.
Support marginalized creators on social media: use more engagement tools on posts you see from creators you enjoy. Like, comment, share, and save the images and videos that they post.
Understand how strikes work and the best way to support them.
Consider: What do you know about the origins of your favorite digital trends, e.g., your favorite GIF, TikTok dance, or meme?
GET EDUCATED
By Nicole Cardoza (she/her)
Over the past month, many Black social media creators organized a strike to stop creating and posting dance choreography on the social media app TikTok. The social media app is built around reposting and remixing content from other creators, and a popular feature is learning and recording dances to trending songs. When Black female rapper Megan Thee Stallion released her new song, “Thot Shit,” on June 11, many Black creators agreed not to create choreography. Ironically, the music video for the song in question centers women of color as essential workers and highlights the type of hostility that Black creators experience online.
Get a 1-min breakdown of the issue on the ARD TikTok >
The strike stems from a growing conversation around compensation and equity for Black people on TikTok. Black creators are often behind the TikTok trends that go viral but rarely gain recognition; white TikTok users are oftentimes miscredited as the creators and gain sponsorships and media recognition (Teen Vogue). Black creators have also been vocal about the algorithmic censorship of content related to Black Lives Matter last summer, which further increased racial disparities in who’s celebrated on the platform (Time).
But this isn’t a TikTok-specific issue. Much of popular culture today leans heavily on language, dance, and other cultural cues taken directly from the Black community – particularly the Black LGBTQ+ community. From dances to hairstyles, phrases, and music, dominant culture often adopts Black culture and makes it mainstream. And white people, who benefit from more power and privilege in our society, are more likely to gain recognition for echoing these cultural acts – even if they had no hand in creating them. Learn more in a previous newsletter.
Moreover, the Black community still has to fight for its cultural markers to be accepted within mainstream culture even as those with power and privilege enjoy them. Consider recent initiatives to allow natural hairstyles in schools (Chalkbeat), or the fight to normalize AAVE as a valid vernacular (Black Youth Project). With this context, it’s clear how a strike on short dance choreography reflects a broader stance on the cultural appropriation of Black culture throughout history.
It’s also important to recognize the role of withholding labor in the history of Black movements. Black people have gone on strike to extract fair compensation since before the Civil War. Consider the Great Railroad Strike of 1877, when over 100,000 railroad workers halted trains and stopped working for weeks in pursuit of better wages and conditions. There’s also the Memphis Sanitation Strike of 1968, when 1,300 Black workers walked off the job, demanding that the city recognize their union, increase wages, and end inhumane conditions. As garbage stacked up across the city streets, the workers never relented, attracting Dr. Martin Luther King, Jr., who visited to show support and delivered his famous “Mountaintop” speech. Learn more in a previous newsletter. And just last year, when players from major league sports stopped playing for 48 hours after the shooting of Jacob Blake, the world took note – and sports leagues fundamentally shifted how they respond to social issues (Vox). Their efforts – alongside labor strikes led by other people of color – didn’t just raise awareness of critical issues, but carved a path for more equitable practices in labor unions altogether (Teen Vogue).
You can argue that TikTok influencers aren’t exactly the same type of wage workers who took part in past strikes. But let’s not overlook the influence of the “creator economy” and those who lead it. As digital communities flourish, nearly 50M people around the world consider themselves creators and receive some type of compensation for their work (Forbes). Creators offer a ton of value by creating content and community that might be inaccessible otherwise, particularly those from marginalized communities who offer an alternative to what’s mainstream. But being a creator is a difficult job with little infrastructure or safety (Teen Vogue). It’s powerful to see creators withholding their labor without that type of support behind them and advocating for more equitable practices in this burgeoning labor market.
Perhaps this strike will encourage everyone who enjoys content online to reflect and consider: how do we value the creators of the content we consume? What labor may we take for granted – both online and off? And how can the strikes of the past transform our future?
Key Takeaways
Black creators on TikTok are on strike to take a stance against cultural appropriation and the lack of credit for the choreography they introduce to the platform.
Strikes throughout history have been a powerful way to shape perceptions about labor and value.
Popular culture is rooted in Black cultural markers, but rarely celebrates or protects those who create it.
RELATED ISSUES
7/13/2020 | Respect the roots of Black hair.
6/30/2020 | Boycott as a form of protest.
7/16/2020 | Respect AAVE.
PLEDGE YOUR SUPPORT
Thank you for all your financial contributions! If you haven't already, consider making a monthly donation to this work. These funds will help me operationalize this work for greatest impact.
Subscribe on Patreon | Give one-time on PayPal | Venmo @nicoleacardoza
Uncover racial bias in photography.
Good morning and happy Tuesday! When the YouTube iOS app was first released, about 10% of users were somehow uploading their videos upside-down. Engineers were puzzled until they took a closer look – they had inadvertently designed the app for right-handed users only. Phones are rotated 180 degrees in left-handed users' hands, and because the team was predominantly right-handed, this flaw missed internal testing (Google).
This unconscious bias is prevalent in much of the technology we use right now. Today, Nia outlines the role that bias has played in the history of photography technology.
Thank you for keeping this independent platform going. In honor of our anniversary, become a monthly subscriber on our website or Patreon this week and we'll send you some swag! You can also give one-time on Venmo (@nicoleacardoza), PayPal or our website.
– Nicole
TAKE ACTION
Read about the exclusive history of photography, lack of diversity at tech companies, and racial bias in their products today.
If you are a STEM employer, ensure that you are hiring people of color for the development of new technology.
Buy technology from companies that are actively working to develop more inclusive hardware and software.
GET EDUCATED
By Nia Norris (she/her)
The word inclusivity may not immediately come to mind when we think about camera design. After all, cameras do the job they have done for years: they capture the image in front of them so that we can keep a piece of the moment. However, if you have noticed that it is often harder to take photos of more melanated individuals, you might be onto something. Google and Snapchat both recently announced that they are redesigning their cameras to be more inclusive of individuals who have darker skin (The Verge, Muse). But what does this mean?
Cameras have been historically calibrated for lighter skin. When color film was developed, the first model to pose for camera calibration in photo labs was a woman named Shirley Page. After that, all color calibration cards were nicknamed “Shirley cards.” For decades, the “Shirley cards” featured only white women and were labeled “normal.” It wasn’t until the 1970s that Kodak started testing cards with Black women (NPR). They released Kodak GoldMax, a film advertised as being able to photograph “a dark horse in low light” – a thinly veiled promise of being able to capture subjects of color in a flattering way (NYTimes).
Although digital photography has led to some advancements, like dual skin-tone color balancing, it can still be a challenge to photograph individuals with a darker skin tone in artificial light. There are special tricks that cinematographers and photographers use for shooting darker skin despite these technological limitations, such as using a reflective moisturizer (NYTimes). Snapchat’s camera filters have been criticized as “whitewashed,” with Black individuals pointing out that the Snapchat camera makes their faces look lighter (The Cut). Snapchat has also released culturally insensitive camera filters including a Juneteenth filter encouraging users to “break the chains” and a Bob Marley filter that amounted to digital blackface (Axios).
After taking heat for digital whitewashing, Snapchat has enlisted the help of Hollywood directors of photography to create what it is calling an “inclusive camera,” an effort led by software engineer Bertrand Saint-Preaux, in hopes of easing the dysphoria that Black users may feel after taking selfies through the app. These efforts include adjusting camera flash and illumination variations in order to produce a more realistic portrait of users of color (Muse). Similarly, Google is changing the auto-white balancing and algorithms for the Pixel camera. It’s also creating a more accurate depth map for curly and wavy hair types (The Verge). Apple started this process when it developed the iPhone X in 2017 (Engadget).
It’s not just the quality of photography that needs to change. We must also consider bias in the way AI analyzes images. Twitter’s “saliency algorithm” has come under fire for racial bias in its preview crops of photos: it automatically favored white faces, regardless of the order in which images were posted. In response, Twitter is planning to remove algorithmic cropping from the site entirely (BBC).
This is not the first time a company has simply removed an AI’s ability to recognize an image instead of redeveloping the AI to be more inclusive. In 2015, it was pointed out that Google Photos was labeling Black individuals as “gorillas.” Instead of fixing the AI, Google simply removed gorillas from its recognition software. In 2018, Wired followed up by testing photos of animals: although Google Photos could reliably identify multiple types of animals, there were simply no search results for “gorillas,” “chimps,” “chimpanzees,” or “monkeys” (Wired). Less than 1% of Google’s technical workforce is Black (NBC News).
Since photography is almost exclusively digital at this point, hopefully companies will take more initiative to develop cameras that adequately capture people of color in a flattering way. We also need to adopt inclusive AI practices to ensure everyone is treated equally on social media. When we seek to develop inclusive tech, people of color need to have a seat at the table to help ensure that both the software and hardware we use are not racially biased.
Key Takeaways
Since film photography was developed, cameras have historically favored white individuals.
Currently, tech companies are working to develop more inclusive cameras after criticism from people of color.
The way we consume photography is also biased by the way algorithms and AI show us photographs through social media.
RELATED ISSUES
12/11/2020 | Rally for representation in AI.
6/26/2020 | Face the bias in facial recognition software.
1/21/2021 | Invest in new media.
Invest in new media.
Happy Thursday! Yesterday I felt a collective exhale from many across the nation. It's easy to pretend that a new administration will change everything. But in fact, it's all of us that need to change, and recognize that the systemic injustices we face are far beyond the presidency.
One way to do that is by reckoning with how we process information. We need to invest in emerging platforms and people to gain diverse and nuanced perspectives. Make broadening your media consumption part of your New Year’s resolutions. Ida has created a series on “democratizing knowledge” to introduce us to emerging and necessary platforms for education and growth. Here’s our first. I love that, through these recommendations, we can build relationships with leaders and organizers we may never meet in traditional outlets.
And creators like me get to be connected to people like YOU! Remember to send along a question for Saturday's Study Hall, and join us on our digital community. As always, this work is possible because of you. Consider subscribing for $7/month on Patreon, or give one-time on our website, PayPal or Venmo (@nicoleacardoza).
Nicole
TAKE ACTION
Move beyond just retweeting or sharing media that provides critical information and resources. Rather, also take the time to read and reflect on how you can support suggested action items that can be sustained over the long term.
Subscribe to Zeynep Tufekci’s newsletter Insight or read her book Twitter and Tear Gas: The Power and Fragility of Networked Protest in order to understand the possibilities and limitations of new media in large-scale activism.
Read #identity: Hashtagging Race, Gender, Sexuality and Nation to learn about the ways that different new media scholars are conceptualizing digital spaces of resistance and advocacy, as well as discrimination and surveillance.
Diversify what you read each day! Search for publications by writers you admire, spend time researching your favorite topics on social media to find diverse perspectives, and encourage your friends, family and colleagues to do the same.
GET EDUCATED
By Ida Yalzadeh (she/her)
Earlier this month, Twitter removed Donald Trump from its platform, citing his incitement of violence at the Capitol as a violation of its “Glorification of Violence” policy (Twitter). The social media platform has been abused by many, as it has provided megaphones for individuals who support policies that discriminate against and enact violence upon already marginalized folks. On the other hand, it has also allowed organizations fighting for real change to amplify their message to a larger audience more quickly and globally than ever before.
There is, however, a caveat to this last point. Twitter has been criticized for its limitations in creating long-term social change. Sociologist Zeynep Tufekci argues that its ephemeral nature doesn’t naturally lend itself to the slow and sustained work of movement building (TED). Critical to these tasks of resistance and liberation is “new media,” which encompasses digital forms of distribution that allow for a fast and global reach, such as social media platforms. While the role of social media is central to this conversation and the way in which we distribute information, I’d instead like to turn to some other mediums of new media that can serve as potential tools for thinking and organizing collectively.
This is the first of a multi-part series on “democratizing knowledge,” or making knowledge accessible to a wider public. So much of the way that we consume information now allows us to think and gather more expansively than before. Throughout the series, I want to highlight some of the ways that we can use different mediums of information transfer to organize movements of resistance and realize our goal of collective liberation.
Google Docs is one such digital medium that has been used to distribute reading, resources, and support to social movements (The Cut). After the 2016 election, Google Docs began to be used more widely to aggregate resources for collective action. In the last year, usage surged, despite questions about the platform’s privacy (MIT Technology Review). The June protests in the wake of George Floyd’s murder sparked the creation of Google Docs supporting Black Lives Matter; everything from letter templates to bail fund lists to resource databases was distributed across the internet to support Black folks and their allies. We must also be mindful of the platform’s drawbacks: critics note that Google Docs can be widely shared and then quickly forgotten, and, just as important, the company’s recent firing of Timnit Gebru points to a larger issue at Google regarding its ethical threshold in the face of capitalism (Wired). That being said, the platform’s ability to easily encourage collaboration and distribute knowledge should not be discounted.
Newsletters are another means of decentralized knowledge distribution that allows for a wide and accessible spread of information. Newsletters are usually operated by one person (or a small number of people) who sends information directly to subscribers’ inboxes via platforms like Substack, TinyLetter, and Mailchimp. Much of the discourse around newsletters has focused on the big-name journalists who have quit working for reputable news outlets and started their own newsletters (The New York Times). However, it is also important to emphasize newsletters’ potential for BIPOC writers and activists. Platforms like Substack have given these folks a low barrier to entry to write and widely distribute information about mental health, beauty, culture, queerness, surveillance, and belonging.
Much like the weblogs that have long been a mainstay of internet culture, newsletters can be understood as a form of digital resistance (New Media & Society). Anti-Racism Daily—the newsletter you are reading right now—is one such space. Begun by Nicole Cardoza as a side project proposing actionable items to support Black lives, the daily email has managed to find its way into thousands of inboxes. Newsletters like these amplify the voices of writers who might be overlooked by traditional publishing, writers who are advocating for structural change and liberation.
Google Docs and newsletters are only two forms of new media. We live in an age where information is more global and accessible than ever before. Using new media platforms allows collectives to gather, plan and collaborate in a way that lowers the barrier to entry. This is especially important for marginalized groups, who have long been a minority in most American newsrooms (Columbia Journalism Review, Nieman Lab).
We must also see these new media platforms as tools in the fight for collective liberation. While the information distributed across these channels is critical to collective knowledge and organizing, we also need to do the work to act in a collective, meaningful way that can be sustained over the long term.
KEY TAKEAWAYS
Critical to the current tasks of resistance and liberation is “new media,” which encompasses digital forms of distribution that have allowed for a fast and global reach, such as social media platforms.
Using new media platforms like Google Docs and newsletters allows collectives to gather, plan and collaborate in a way that lowers the barrier to entry.
Such platforms are especially important for marginalized groups, who have long been a minority in most American newsrooms (Columbia Journalism Review, Nieman Lab).
But these platforms are simply tools. We also need to do the work to act in a collective, meaningful way that can be sustained over the long term.
RELATED ISSUES
11/9/2020 | Decolonize your reading habits.
6/15/2020 | Diversify your media consumption.
Rally for representation in AI.
Happy Friday, and welcome back to the Anti-Racism Daily! I’ve watched this story unfold over the past week and see so many of the topics we’ve touched on in this newsletter converge in one story. Read about the injustices against Dr. Timnit Gebru and their implications in tech, and consider how you can protect critical voices in your own industry or area of passion.
Tomorrow's newsletter is our weekly Study Hall, where I answer questions and share insights from the community. Reply to this email to ask yours.
And thank you all for your generous support! Because of you, we can offer this newsletter free of charge and also pay our staff of writers and editors. Join in by making a one-time gift on our website or PayPal, or subscribe for $7/month on Patreon. You can also Venmo (@nicoleacardoza). To subscribe, go to antiracismdaily.com.
Nicole
TAKE ACTION
If you consider yourself an “academic, civil society, or industry supporter,” sign the petition and stand with Dr. Timnit Gebru. If you identify as a Black woman, you can also sign this petition.
Read the apology written by Google’s CEO and compare it to our previous newsletter on apologies. Reflect on how it could have been improved. We’ll compare notes in tomorrow’s Study Hall.
If you identify as white, consider: how can you be a stronger advocate for the underestimated communities working in your field?
GET EDUCATED
By Nicole Cardoza (she/her)
Dr. Timnit Gebru is a well-respected leader in the field of ethical A.I., an industry that’s committed to making artificial intelligence more inclusive and representative of our diverse population. She co-authored a groundbreaking paper that showed facial recognition to be less accurate at identifying women and people of color. She co-founded the Black in AI affinity group. And she was the co-leader of Google’s Ethical A.I. team – that is, until they abruptly forced her out of the company (Dr. Timnit Gebru’s Twitter).
Many leaders in the field indicate that her termination may be because of a research paper she was writing with her colleagues that outlined some of the inequities of large language models – or the body of data used to train A.I. software. As a result, more than 2,000 Googlers and over 3,700 supporters in academia and industry have signed a petition supporting Gebru and calling what happened to her “a retaliatory firing” and a case of “unprecedented research censorship.”
MIT Technology Review was allowed to publish some of the core findings, all of which offer critical insights for making A.I. more inclusive. The paper notes the environmental and financial costs of running large data systems and how large databases are difficult to audit for embedded biases. It warns that these language models might not understand the context of words when wielded for racist or sexist purposes. It emphasizes that communities with less of a public lexicon than dominant culture won’t have an equal share of voice, meaning their perspectives will be lost in the algorithms. And it warns that A.I. can be wielded to cause harm by impersonating real people or misconstruing their words. Read the full overview in MIT Technology Review.
Although the company may have viewed these topics as controversial, they’re certainly not new. Many researchers – including Gebru – have been advocating for the development and implementation of A.I. to be more inclusive, equitable, and accountable. Dr. Safiya U. Noble, author and assistant professor at the University of Southern California, has penned several pieces on the bias of algorithms, including this piece on how horribly “Black girls” are depicted when typed into Google (Time). Author Rashida Richardson published a study on how police precincts that have engaged in “corrupt, racially biased, or otherwise illegal” practices contribute their data to predictive models that are taught to perpetuate the same harm (SSRN). We’ve covered the inequities in facial recognition software in a previous newsletter. As Deborah Raji notes in her article in MIT Technology Review, many people like to say that the “data doesn’t lie.” But it does, often centering a white, male perspective on issues that should reflect all of us – and disproportionately harming marginalized communities.
The fact is that AI doesn’t work until it works for all of us.
Deborah Raji, a Mozilla fellow interested in algorithmic auditing and evaluation, for MIT Technology Review
But how are we expected to hold the industry accountable if they won’t make that commitment themselves? The controversy surrounding Gebru’s termination isn’t isolated, but one of many calls for Google’s accountability. And just a few weeks ago, the National Labor Relations Board found Google guilty of violating workplace rights for spying on, interrogating, and firing workers (Ars Technica). According to its 2020 Diversity and Inclusion report, only 24.7% of its technical workforce are women, and 2.4% are Black.
And similar stories are heard across Big Tech. Facebook has been pushed repeatedly to account for racial bias, hateful rhetoric, and election misinformation on its platform, and has recently announced new efforts that still fall short. Employees have rallied for accountability, staging walkouts and other protests (CBS News).
The unfair treatment that Gebru has experienced only further exemplifies the point. It doesn’t just distract from the research that she and her team have been doing. It’s a direct statement on the value of Black women and their worth in technology; indeed, a clear demonstration of some of the systemic barriers that got us to this point. And I want to underline this because it’s indicative of many conversations we have in this newsletter – the challenges that people of color, particularly Black people, experience when they are actively working to reshape oppressive systems.
We’re not creating technology in our own imagination. They create technology in their imagination to serve their interest, it harms our communities, and then we have to perform cleanup. Then while we’re performing cleanup, we get retaliated against.
Timnit Gebru, in an interview with VentureBeat written by Khari Johnson
Google CEO Sundar Pichai apologized for the situation (Axios). I highly recommend reading the apology and Gebru’s response to it, using some of the points made in our newsletter on apologies. Gebru also references gaslighting, which we’ve broken down in another newsletter. But the damage is already done. Google has lost a prolific leader in A.I. ethics, and many have lost their faith in the company. It also paints a disturbing picture of how major corporations can attempt to silence individuals whose voices are necessary for us to move into a more equitable future.
KEY TAKEAWAYS
Dr. Timnit Gebru, a leading researcher in ethical A.I., was unfairly terminated from her position at Google.
A.I. has been known to misrepresent or harm marginalized communities because of a lack of representation and accountability in Big Tech.
It's important that we protect those trying to reshape inequitable systems, especially when they represent marginalized communities.
RELATED ISSUES
8/17/2020 | Fight for equity in remote learning.
6/26/2020 | Face the bias in facial recognition software.
PLEDGE YOUR SUPPORT
Thank you for all your financial contributions! If you haven't already, consider making a monthly donation to this work. These funds will help me operationalize this work for greatest impact.
Subscribe on Patreon | Give one-time on PayPal | Venmo @nicoleacardoza
Fight for equity in remote learning.
Get daily actions in your inbox. Subscribe Now ›
Happy Monday! My day job is in education (Yoga Foster) so I've been watching the decisions on back-to-school unfold with a blend of anticipation and dread.
For today's newsletter, Jami wrote a fascinating piece on what's unfolding in education this fall. I would love to hear how you're navigating this upcoming school year if you have children in school – reply to this email with your thoughts.
And thank you to everyone that's contributed money to the newsletter! If you haven't already, you can give on our website, Paypal, Venmo (@nicoleacardoza) or subscribe $5/mo on Patreon. And it's certainly not required, but always appreciated.
Nicole
ps – we've received a few questions about what is happening with the USPS. If you haven't already, we highly recommend reading last week's newsletter on the vote by mail situation and how you can take action.
TAKE ACTION
Read your local schools’ reopening plans. How do they support—or fail to support—low-income families?
Contact your local school board, many of whom are specifically seeking responses from the community right now, with your concerns.
Reflect on how your position and access shape the choices you and/or your family are making during this pandemic. How can you support other families?
Follow Black educators on social media for their perspectives.
GET EDUCATED
In the last month, 180,000 children have tested positive for coronavirus (American Academy of Pediatrics). This 90% four-week increase coincided with many students in the South and Midwest returning to school. Parents all over the country are worried about whether it is safe to send their children back to school— and if not, what to do instead.
The situation we’re in is terrible for all parents, all students, and all families. Talk to any parents, and you’ll hear their fear, their worry. But what sometimes gets lost in the social media arguments about school reopening is that, while this affects everyone, it does not affect everyone equally. As all kids return to some form of school by September, it is low-income families that are going to get hit the hardest— families that are often Black and Brown, due to America’s systemic racism and structural barriers (Pew Research Center).
The disparity comes to light when we look at what happened in the spring. An in-depth LA Times survey of school districts found that districts serving low-income (predominantly Latinx and Black) students had much worse virtual learning outcomes than districts serving higher-income (predominantly white and Asian) students (LA Times). Under-resourced districts struggled to get their students devices and internet connections. (Now, even months later, California officials still say that they need over a million computers and hot spots for their students.) One teacher had less than 10% of his students show up for classes. Beyond the barrier of the digital divide, these students also had bigger things on their minds than school: their parents losing their jobs, paying rent— and of course, coronavirus itself. Because Black people have died of coronavirus at a 2.5x higher rate than white people (and Indigenous and Latinx people at a 1.5x higher rate), non-white students have had much more first-hand experience with coronavirus than white students (COVID Racial Data Tracker).
Since the spring, schools have changed their plans, and changed their plans again, due to vacillating instruction from the government and their overly-optimistic ideas about the pandemic’s course (NYTimes). In response, parents are scrambling to find the best option for their own families—choices that are all fraught. I was struck by an article where the interviewer asked teachers what they thought about wealthy parents choosing to “pick the all-distance option, create a home-schooling pod if you need to for a year. Ease the pressure on the system, so the lower-income kids have more access to the resources they need, including if they need in-person learning” (Slate). Black teacher Brandon Hersey’s response was short and to the point: "I think that's racist as f---." The teachers agree: while it seems like a good idea, it just makes in-person school a hot zone for kids with the least options, resources, and access. Because of the inequity in many types of tutoring/homeschooling pods, some schools don’t support them (Fairfax County Public Schools).
Even in areas like mine where everyone is beginning the school year virtually, remote learning exacerbates the differences between the haves and have-nots. In response to working parents’ childcare concerns, my school district partnered with Right at School, a company that will “support students in their remote learning, providing small groups and a quiet space for schoolwork, as well as supplementing with fun activities and group fitness” to the tune of $225 per week (Right at School). In other words: the school will provide a semblance of in-person school, but it’s outsourced, and parents have to pay. There was no information on whether it would be provided for free or at a discount for lower-income households (I contacted my school board and am waiting for a response). In both this case and the one that the Slate teachers were worried about— where the rich stay home and the poor go to school— school is segregated between those who can pay and those who can’t.
Our government has left us with no good options, but some organizations are trying to develop more equitable solutions. Yenda Prado notes that learning pods could be successful if they are available to all who need them most; if this system could be scaled and supported institutionally (Online Learning Research Center). “Learning pods – when done in certain ways and contexts – can be a form of equity work that supports families and schools,” she writes. “When families, particularly those that have been marginalized, come together in times of crisis to address their children’s needs – that becomes equity work. It is incumbent on all us to support their efforts by developing systemic solutions at scale to the current educational challenges.” San Francisco is attempting to do this by creating learning hubs for underserved children (San Francisco Chronicle).
Many parents have important reasons for opting their children out of in-person learning. But opting out of in-person learning doesn’t have to mean opting out of collective action. Whether we have children or not, we can all put pressure on our local organizations to best support the kids in our communities who need it most.
KEY TAKEAWAYS
180,000 children have tested positive for coronavirus in the past month.
Our individual decisions about schooling affect the community.
Virtual learning exacerbates the educational inequities between students of color and white students.
RELATED ISSUES
7/15/2020 | End racial bias in school discipline.
7/8/2020 | Investigate school district funding disparities.
7/2/2020 | Remove police from our public schools.
Weekly Reflection: White terrorists, Black spaces, and deleting Facebook.
Hi everyone,
Each Saturday I go through the questions and reflections from the community in response to the actions we've been taking every day. It's a good way to reflect further on the key themes – and catch up on any actions you missed this week. This weekly series needs a better title...
Many of the questions this week are too broad for a simple one-paragraph response, and are added to the list for future newsletters! And kind reminder that these daily newsletters should be part – but not all – of your anti-racism education and actions. There is no such thing as "enough" until we are all free! They're designed to introduce you to issues, but certainly can't paint the whole picture in 800-1000 words. Keep learning and listening.
As always, you can invest one-time on Paypal or Venmo (@nicoleacardoza) or monthly on Patreon to keep these conversations growing. I'm so grateful to be learning and unlearning with each of you.
Nicole
TAKE ACTION
1. Choose one newsletter from this week. Share with a friend to read, and discuss afterwards. Commit together to diving deeper, answering your questions, and learning more.
WEEKLY REFLECTION
Why is it that white men, whether at schools or otherwise, are very rarely reported and / or labeled as terrorists?
In response to Don't Vote for Trump, which analyzes the white supremacy movement in America
Racism, put simply. As Ibram X. Kendi explains, terror in America (and in many parts of the world) has been branded as something delivered by Black and Brown communities (The Atlantic). White men that commit acts of terror are usually referred to as "troubled individuals" "acting alone," but in reality (like in the examples from our newsletter on Confederate symbols) they are nearly always perpetuating a violent and racist ideology – one that's embedded in the fabric of our society.
Teen Vogue has done some powerful reporting on this, analyzing both the response to the Parkland school shooting (Teen Vogue) and criticizing how certain people get named terrorists (Teen Vogue). And the NYTimes analyzes how the white supremacy ideology thrives online and in the Trump administration (NYTimes).
Thank you to the person who submitted this question WITH the research they've done so far!
In the American imagination, danger comes mainly in black or brown, to the point that people miss the threat emanating from individuals who happen to be white. In recent years, white terrorists motivated by all sorts of bigotry have shot up white churches and synagogues and concerts and schools and bars and yoga studios. White people, not to mention the rest of us, are being terrorized—primarily by other white people. Any day can be the day they meet the final face of white terror, too.
The fundamental question of our time is whether we have enough respect for humanity to protect against white terror. Do we have the desire and the courage to preserve and extend pockets of equality, liberty, and democracy in the face of those who would subvert and destroy them?
– Ibram X. Kendi in The Atlantic
Do we all leave Facebook so we're not complicit?
In response to 6.30 newsletter Boycott as a form of protest, which reviewed the power of boycotts in the anti-racism movement.
Facebook is a powerful tool for staying connected with friends and running businesses. It might do you more good to stay informed and active online than to go dark altogether. But that choice is up to you. It's highly unlikely enough people will boycott the service to cause Facebook to change, but it is likely that brands pulling millions in advertising money will. My opinion? I'd focus on getting the large company you may work for to pull ad revenue (if they use FB) rather than deactivating your own account.
But I get a lot of questions like these that feel less tactical, more moral (although this particular reader sounded very tactical, so just using this as an example). For moral questions, I leave that decision up to you. Ask yourself the questions and decide: what are you willing to sacrifice? What is being called for in this moment? Are you taking action for yourself or the greater good? Will this action be your only action? Is this action the most comfortable one?
How do I find Black-owned businesses?
Google!
I’m a white woman and want to support Black businesses. I also want to be cognizant of Black spaces and not infiltrating them with our whiteness. Can you touch on this?
Supporting Black-owned businesses with your dollars is always a good thing (and please remember to make it sustainable, not just a one-time thing, because Black-owned businesses have scaled up operations to manage demand that might disappear once the protests fade). But you bring up an interesting intersection of supporting these businesses and gentrifying them with your presence.
I'll spend time on gentrification more broadly later on. But I think there needs to be a distinction between Black-owned businesses and Black spaces. Black people – and other people of color – deserve their own space to connect and heal (great article on this by The Arrow) but that might not be a Black-owned business, which may be designed to cater to any population. And start asking yourself questions about whether you belong in that space, how people will feel with your presence, and how you would be actively contributing. It takes a level of awareness about how to navigate spaces – an awareness that Black people, and other people of color, have had to practice their entire lives.
Face the bias in facial recognition software.
Happy Friday,
I'm fascinated with the intersection of equity and technology, so I've been following today's topic for a few years. It illuminates how racism thrives not just in the systems we build, but the technology we build, too. And even if these technologies were completely unbiased, we could still wield them as tools or weapons to pursue misguided agendas.
Today looks at another example of harmful policing and the political implications, but I hope it also encourages you to take a second look at your smartphone, your laptop, and your favorite apps, and consider: who was this built for? How does it help or hurt this movement?
Tomorrow will be a Q&A email, so send in your requests. And your support makes these emails possible! You can contribute one-time on Paypal or Venmo (@nicoleacardoza) or give monthly on Patreon to keep these going.
- Nicole
TAKE ACTION
Sign the petition banning law enforcement from using facial recognition. (The bill referenced on the petition is this one, which I discuss at the end of this email).
Learn how facial recognition is being used in your local community. Here is a map with countless examples across the U.S.
GET EDUCATED
Let's face the facts: Facial recognition software is biased.
Facial recognition software has been struggling to save face for a while. So it wasn't a good look when, this week – in the midst of the protests, no less – the ACLU accused the Detroit Police Department of what they're calling the first known wrongful arrest involving facial-recognition technology.
Robert Williams was arrested in his driveway and detained for 30 hours under suspicion of theft. Images of the suspect, stealing from a watch store in downtown Detroit, were run through facial recognition software, and Robert Williams was a match. It wasn't until officers interrogated him that they realized his face didn't match the pictures – at all. Read more on CNN Business >
It might not be surprising to know that Robert Williams is Black. If you've been following the facial recognition conversation over the past few years, you might have guessed from the beginning. Because there have been dozens of studies that show that facial recognition software can be disproportionately inaccurate when it tries to identify Black people and other people of color.
Joy Buolamwini, a researcher at the M.I.T. Media Lab and founder of the Algorithmic Justice League, published one of the first comprehensive studies on facial recognition bias in 2018 after her firsthand experience (more via the NYTimes). The study found that software was much more likely to misidentify the gender of Black women than white men. Her work, including her popular Ted Talk, paved the way for a larger discussion on the flaws of facial recognition.
“Facial recognition is increasingly penetrating our lives, but there is still time to prevent it from worsening social inequalities. To do that, we must face the coded gaze.”
Joy Buolamwini in her op-ed for the NYTimes
More reports were quick to follow, including one from the National Institute of Standards and Technology that found that African American people were 10 to 100 times more likely to be misidentified than Caucasians, and that the Native American population had the highest error rates (full study on the NIST website). It also found that women were more likely to be misidentified than men, and that the elderly and children were more likely to be misidentified than those in other age groups. Middle-aged white men generally benefited from the highest accuracy rates (via Washington Post). Another study by UC Boulder found that facial analysis services are also "universally unable to classify non-binary genders" (Eureka Alert).
A main reason for these discrepancies is that facial recognition software can only be as smart as the data that feeds it. And most facial recognition training data sets are estimated to be more than 75% male and more than 80% white (Brookings). Unsurprisingly, the lack of diversity in tech also means that there are few women or people of color that are on the teams building this software, and increasing representation is likely to create a more equitable product (USA Today).
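To make that "the software is only as smart as its data" dynamic concrete, here is a minimal, hypothetical sketch in Python. It does not model any real facial recognition system – it just trains a toy one-number threshold classifier on synthetic data in which one group makes up 90% of the training set, and measures each group's error rate separately:

```python
# Toy illustration (entirely synthetic, not any real product): a classifier
# tuned to minimize *overall* error on imbalanced training data can still
# perform far worse for the under-represented group.
import random

random.seed(0)

def make_samples(n, mean0, mean1):
    """n labelled samples per class; the 'feature' is a single noisy score."""
    data = [(random.gauss(mean0, 1.0), 0) for _ in range(n)]
    data += [(random.gauss(mean1, 1.0), 1) for _ in range(n)]
    return data

# Group A dominates the training set (90%); group B's scores are shifted,
# standing in for a group the data under-represents.
train = make_samples(900, mean0=0.0, mean1=4.0)   # group A
train += make_samples(100, mean0=2.0, mean1=6.0)  # group B

def best_threshold(data):
    """'Training': pick the cutoff that minimizes overall training error."""
    candidates = [t / 10 for t in range(-20, 81)]
    return min(candidates,
               key=lambda t: sum((x > t) != bool(y) for x, y in data))

def error_rate(data, t):
    return sum((x > t) != bool(y) for x, y in data) / len(data)

t = best_threshold(train)
test_a = make_samples(1000, mean0=0.0, mean1=4.0)
test_b = make_samples(1000, mean0=2.0, mean1=6.0)
print(f"threshold={t:.1f}  "
      f"group A error={error_rate(test_a, t):.1%}  "
      f"group B error={error_rate(test_b, t):.1%}")
```

Running this prints a far higher error rate for group B than for group A, even though the classifier looks "accurate" on average – the same pattern the audits above document at much larger scale.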
Have you tried opening your iPhone while wearing a face mask, and had it not work? That type of facial recognition error is simply a slight annoyance. But consider its application in policing, especially knowing the systemic racism that persists even without the use of technology. And then consider that other algorithms used in criminal justice are also biased, like this algorithm that tries to assess the likelihood of future crimes (via ProPublica). I don't think we need another way to discriminate against those systemically marginalized. More on the dangers of policing in this article in The Week >
And its applications extend beyond just dangerous policing to nearly everything we do. It's being used to monitor fans at concerts (the Guardian), authorize us at the airport (Medium), and even as security in schools (Edweek). It's also not just a tool, but a weapon: the stories of the Chinese government using advanced facial recognition technology to track and control the Uighurs, a Muslim minority, are bone-chilling (NYTimes).
Even if you haven't seen the news around facial recognition software, it's likely seen you: over half of all Americans are in a law enforcement face recognition network (via Georgetown Law). So the next time the police run a grainy photo of a suspect in a robbery, they could arrest you instead.
The Facial Recognition and Biometric Technology Moratorium Act of 2020, a federal bill announced yesterday, is a step to introduce federal regulation to ensure the safety of everyone, particularly those systemically marginalized, even going so far as to divest funds from law enforcement that uses it inappropriately. Read more on The Verge >
“Facial-recognition technology doesn’t just pose a grave threat to our privacy, it physically endangers Black Americans and other minority populations in our country. As we work to dismantle the systematic racism that permeates every part of our society, we can’t ignore the harms that these technologies present.”
– Massachusetts Sen. Edward Markey via Fortune Magazine