“We’re changing the system with this one”: Black students using critical race algorithmic literacies to subvert and survive AI-mediated racism in school

Tiera Chante Tanksley (Department of Psychology, University of California Los Angeles, Los Angeles, California, USA)

English Teaching: Practice & Critique

ISSN: 2059-5727

Article publication date: 2 April 2024

Issue publication date: 8 April 2024


Abstract

Purpose

This paper aims to center the experiences of three cohorts (n = 40) of Black high school students who participated in a critical race technology course that exposed anti-blackness as the organizing logic and default setting of digital and artificially intelligent technology. It centers the voices, experiences and technological innovations of these students and, in doing so, introduces a new type of digital literacy: critical race algorithmic literacy.

Design/methodology/approach

Data for this study include student interviews (called “talk backs”), journal reflections and final technology presentations.

Findings

Broadly, the data suggests that critical race algorithmic literacies prepare Black students to critically read the algorithmic word (e.g. data, code, machine learning models, etc.) so that they can not only resist and survive, but also rebuild and reimagine the algorithmic world.

Originality/value

While critical race media literacy draws upon critical race theory in education – a theorization of race, and a critique of white supremacy and multiculturalism in schools – critical race algorithmic literacy is rooted in critical race technology theory, which is a theorization of blackness as a technology and a critique of algorithmic anti-blackness as the organizing logic of schools and AI systems.

Citation

Tanksley, T.C. (2024), "“We’re changing the system with this one”: Black students using critical race algorithmic literacies to subvert and survive AI-mediated racism in school", English Teaching: Practice & Critique, Vol. 23 No. 1, pp. 36-56. https://doi.org/10.1108/ETPC-08-2023-0102

Publisher: Emerald Publishing Limited

Copyright © 2024, Emerald Publishing Limited


Introduction

The rapid proliferation of AI into schools and classrooms has raised a myriad of concerns among educational stakeholders about the need to prepare students to use AI in ethical, critical and effective ways. Though the catalysts for these concerns are numerous, many stem from fears about how to maintain academic integrity, critical thinking and the sanctity of high-quality learning in the age of generative AI (Davis, 2023; Gordon, 2023). For instance, in the months following ChatGPT’s high-profile release, there were innumerable reports of AI-facilitated cheating – a controversy that garnered international attention and sparked renewed conversation about the dangers of unethical use of technology by students (Vicci, 2023; Jimenez, 2023). In response, there has been a concerted effort to implement tech-conscious policies and procedures to curtail academic plagiarism, many of which included moves to ban the use of AI by students (LAUSD, 2022; Johnson, 2023) or to implement anti-cheating software designed to detect, deter and report unethical use of AI (Fowler, 2023; Beam, 2023). Similar concerns have been raised about the uncritical use of AI, particularly when students look to chatbots to augment traditional learning processes such as reading, writing and researching. This is because, despite being perceived as an objective and authoritative source of knowledge, AI chatbots have been known to conflate fact with fiction, often disseminating inaccurate, insensitive and at times racially violent information about historically marginalized groups (Metz, 2023; Alba, 2023; Vincent, 2023). Without a critically discerning eye, students could (mis)understand AI-generated responses as fact, highlighting the dangers of misinformation in an increasingly networked digital age (Orsek, 2023). Finally, educators have noticed a rise in the ineffective and inefficient use of AI, whereby students struggle to select an AI tool that is appropriate for a given task, or have difficulty composing prompts and search queries that return the most relevant and related results (Salman, 2023). These and other concerns have called attention to the need to develop more robust critical digital literacies that can prepare students to have agentic, informed and empowered relationships to AI that support, rather than hinder, their educational experience (Kristof-Brown, 2023).

As the educational community scrambles to address ballooning fears about the unethical, ineffective and uncritical use of AI by students, critical technology scholars are calling attention to similar concerns about the design and use of AI by institutions of power in ways that exacerbate racial violence and educational inequity for marginalized students. Because anti-blackness exists as the default setting and organizing logic of artificial intelligence technologies (Benjamin, 2019; Noble, 2018a, 2018b; Gray, 2020; Daniels, 2009; Broussard, 2023), race-evasive and ahistorical approaches to AI design and deployment in schools often have devastating results for Black students. Such was the case with Proctorio, an anti-cheating platform that uses facial detection and machine learning technology to identify “behavioral anomalies” of live test takers (Proctorio, 2024). However, audits of the program’s technological infrastructures reveal an inability to “see” Black faces as human – a discriminatory design feature that not only disproportionately flags Black students as cheaters and “unethical” users of technology but also simultaneously increases their exposure to school discipline and carceral contact (Clark, 2021; Feathers, 2021).

Similar biases were found within Gaggle, Securly and GoGuardian – some of the premier programs used by K-12 institutions to detect and deter campus-based violence (Laird and Dwyer, 2023; Herold and Harris, 2019). Despite promises to make schools safer for all, school safety platforms make marginalized student groups, including Black and queer students, hyper-susceptible to physical and emotional harm (Herold, 2022; Feathers and Mehrotra, 2023). This is because these platforms are designed in ways that flag uses of Black English, discussions of racial identity and topics related to queer and trans communities as dangerous, inappropriate and in violation of school safety guidelines. Unfortunately, because many of these platforms are directly connected to law enforcement, algorithmic biases that misidentify Black, queer and trans youth as “unethical” users of AI trigger direct encounters with police both in schools and, in many cases, in the privacy of students’ homes (Herold, 2022; Feathers and Mehrotra, 2023). Cumulatively, these instances highlight a growing nexus between the unethical design, implementation and adoption of AI in schools and the quiet expansion of the school-to-prison pipeline via the New Jim Code – a term Benjamin (2019) uses to describe how technologies hide, reinforce and speed up anti-black racism, making the historic process of corralling, containing and surveilling Black students into the carceral apparatus all the more effective and efficient.
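
To make the flagging mechanism concrete, consider a deliberately naive sketch, assuming a simple keyword-matching design (this is an illustration of the general technique, not the vendors’ actual code, and the wordlist is invented for the example). When a flag list is compiled without cultural context, ordinary student self-expression is converted into “risk” while genuine threats that use unlisted words pass through:

```python
# Illustrative sketch only: a naive keyword flagger of the general kind
# described above, NOT the actual code of Gaggle, Securly or GoGuardian.
# The wordlist is hypothetical, chosen to show how terms from Black English
# and LGBTQ+ self-description can be coded as "violations."
FLAGGED_TERMS = {"finna", "deadass", "queer", "trans"}  # assumed for illustration

def flag_message(message: str) -> bool:
    """Return True if the message contains any 'flagged' term."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not words.isdisjoint(FLAGGED_TERMS)

# A student describing their own identity is flagged as a safety risk,
# while an actual threat written in unlisted words is not:
print(flag_message("I'm proud to be a queer Black student"))  # True
print(flag_message("I will bring a weapon to school"))        # False
```

Because the wordlist, not the student, determines what counts as “dangerous,” the bias is structural: it lives in design decisions made long before any message is scanned, which is precisely why a downstream connection to law enforcement makes the resulting harm automatic.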

The permanence and pervasiveness of anti-black racism in schools, further entrenched and substantiated by anti-black digital technologies, bring race-evasive and ahistorical constructions of “ethical,” “critical” and “effective” use of AI in schools into question. We must challenge the taken-for-granted assumptions that undergird these terms, and instead ask, “Ethical for whom? Critical of what? And efficient to what ends?” To truly prepare Black students to have agentic and empowered relationships to AI, they must be prepared to “do more than survive” (Love, 2019) algorithmic anti-blackness and techno-racial domination within and beyond the school setting. They must develop critical literacies that enable them to resist, rewrite and reimagine algorithmic systems in race-conscious and justice-oriented ways; literacies that are rooted in a rich history of revolution, resistance and Black radical thought; and that invoke historical, intersectional and interdisciplinary knowledge toward collective liberation. Our students need literacies that center, rather than obscure, the voices, experiences and socio-technical expertise of those most directly impacted by AI-mediated racism, including Students of Color, undocumented students, students with disabilities, queer and trans students, formerly and currently incarcerated students, unhoused students and students navigating socially engineered poverty. And, perhaps most saliently, our students need literacies that can disrupt Black death, discipline and dehumanization as the organizing logic of AI to bring forth justice-oriented technologies that protect and sustain Black life, joy and educational wellness.

This paper offers critical race algorithmic literacies (CRAL) as one means of answering these calls. Grounded in histories of revolution, resistance and restorative justice, CRAL demands a bold reenvisioning of the power and purpose of AI in the educational context, as well as a shift in the way we teach, learn and engage with AI systems both within and beyond schools. To showcase the power and potentiality of CRAL, this paper leverages data from a longitudinal, mixed methods research study to explore how Black youth use CRAL to name, interrogate, challenge and reimagine anti-black AI systems that structure their educational experiences. The research questions that guide this paper are:

RQ1.

How do Black students describe the power, potentiality or applicability of CRAL in their schooling experiences?

RQ2.

How, if at all, does CRAL shape Black students’ perception of and engagement with AI in their current school context?

RQ3.

How, if at all, do students use CRAL to dream up, reimagine or redesign anti-black AI systems in ways that advance educational equity and racial justice for marginalized students?

Critical race algorithmic literacies

As a Black woman and a direct descendant of slaves, I understand literacy as indivisible from the goal of Black liberation and the abolition of slavery. Historians have long understood the connection between literacy and liberation, noting that being able to read cultural, religious and legislative texts that codified anti-blackness into law would give enslaved peoples the knowledge to subvert and dismantle death-making systems, such as the plantation system (Mitchell, 2008; Warren, 2005). Contemporarily, educational activists and prison abolitionists have identified a direct and undeniable connection between literacy, consciousness raising and transformative justice (Sarai, 2022; Lyon, 2020; Atterbury, 2023; Chan and Dillon, 2022), noting how recent efforts to ban critical race theory, African American studies and Black liberatory texts from both schools and prisons are an organized attempt to halt the development of critical literacies that would destabilize the “afterlives of slavery” – defined by Saidiya Hartman (2007) as skewed life chances, poor educational outcomes, mass incarceration and socially engineered death and dying. With this historical and sociopolitical context at the forefront, it becomes clear that for Black youth, literacy is much more than the mere acquisition of skills, “but also a tool used to define their identities, advocate for their rights to better themselves, and address issues of inequity for the wider society” (Price-Dennis et al., 2017).

In Pedagogy of the Oppressed, Freire (1970) advances a similar understanding of literacy, arguing that for marginalized groups, literacy is never merely the development of technical skills, but also the accumulation of historical knowledge, cultural sensibilities and subversive practices that enable them to disrupt and dismantle systems of power that maintain their subjugation. His transformative understanding of literacy as the ability to “read the word, and read the world” (Yosso, 2020, p. 54) forms the onto-epistemological underpinnings of critical race media literacy (CRML) – a framework introduced by Yosso (2002) to prepare Youth of Color to analyze, interrogate, challenge and create media products in race-conscious and justice-oriented ways. Guided by the tenets of critical race theory (CRT) in education, CRML uses historical embedding, intersectionality and systemic critique to understand how ideologies of white supremacy, called “race stories” (Sealy-Harrington, 2020), become embedded within educational norms, policies and practices in ways that not only justify but also produce educational inequity and disparate educational outcomes for Students of Color. For instance, Yosso (2002) found direct connections between stereotypical representations of Latinx students as combative, culturally deficient and incapable of academic success perpetuated by “urban dropout” cinema and the subsequent creation of educational policies that exacerbate the systematic disenfranchisement, criminalization and push out of Latinx students. She subsequently offered CRML as a tool with which students could “fight back” against systemic, ideological and socioemotional violence enacted within and by way of media. Since its inception, critical race media literacy scholarship has expanded to include a wide range of media products – including narrative media, digital and social media, news and print media and more – all with the goal of being able to read and resist the ideological, systemic and institutional violence perpetrated against Communities of Color via race stories (Cho and Johnson, 2020; Downie-Chin et al., 2020; Stanton et al., 2020).

While critical race media literacy prepares students to identify, examine and disrupt race stories told within and by way of media (Yosso, 2002, 2020), critical race algorithmic literacy prepares them to interrogate how these same stories become algorithmically codified within technological hardware, software and infrastructures. With roots in Black feminist theory, abolitionist thought and critical race technology theory (CRTT), critical race algorithmic literacy exposes how white supremacist logics become encoded within data, code, decision-making algorithms and machine learning models, and prepares students to subsequently “read” these socio-technical infrastructures as text (Tanksley, 2023a, 2023b). As a literacy framework that bends, blurs and breaks the mythical boundary between the applied, computational, social and educational sciences, CRAL prepares Black students to read anti-blackness as an algorithm (Tanksley, 2023a, 2023b; Benjamin, 2019) – defined computationally as a set of logical steps or instructions designed to solve a problem (Wing, 2008). CRTT expands this definition to include historical and systemic context, noting that anti-blackness operates as a set of racial logics, formulas and (mis)calculations “entrenched centuries ago” and designed to “imperil and devalue” Black life through medical and environmental racism, mass incarceration, educational inequity, socially engineered poverty, skewed life chances and premature death and dying (Hartman, 2007). With this techno-racial definition of algorithms at the forefront, CRAL pulls back the curtain on (anti)black box logics and infrastructures – in both schools and technologies – and prepares students to resist, subvert and dismantle the algorithmic afterlives of slavery.
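
The computational definition above can be made concrete with a minimal, hypothetical sketch (the rule, weights and inputs below are invented for illustration and are not drawn from any real school system). It shows how an algorithm, in Wing’s (2008) sense of logical steps that “solve a problem,” can inherit a racial logic through the data and decisions that feed it:

```python
# Hypothetical illustration: a toy "risk scoring" routine of the kind CRAL
# teaches students to read as text. Every step looks like neutral logic,
# but the inputs (e.g. prior flags) are produced by the disproportionate
# surveillance and discipline of Black students, so the "solution" the
# algorithm computes reproduces that history as output.

def discipline_recommendation(prior_flags: int, dress_code_referrals: int) -> str:
    """A toy set of logical steps that 'solves' the problem of ranking
    student risk; the bias lives in where the inputs come from."""
    score = 2 * prior_flags + dress_code_referrals  # weights are arbitrary
    if score >= 3:
        return "refer to school police"  # carceral response encoded as a step
    if score >= 1:
        return "flag for monitoring"     # surveillance encoded as a step
    return "no action"

# A student who was over-flagged in the past is escalated again today:
print(discipline_recommendation(prior_flags=2, dress_code_referrals=0))
# -> refer to school police
```

Read this way, the if/then steps are never merely technical: they are racial logics rendered executable, which is exactly the layer of text CRAL trains students to read.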

In addition to fostering students’ ability to critically read AI technologies as text, CRAL also works to strengthen students’ ability to reimagine and rebuild sociotechnical systems that can foster Black hope, healing and futurity. In this way, CRAL is a precursor to critical race computational thinking (Tanksley, 2023a, 2023b), wherein students leverage CRAL to design and deploy emancipatory technologies that can not only dismantle codified systems of anti-blackness but also offer new, justice-oriented systems in their place. The speculative nature of CRAL is crucial, and calls attention to the profound role that freedom dreaming (Kelley, 2022) plays in abolition and Black liberatory praxis. Freedom dreaming – or the act of dreaming up and subsequently working to build a world that is “radically different from the one we inherited” (Kelley, 2022, p. 4) – is a tactic of critical hope and transformative resistance used across every iteration of the Black liberation struggle, from slavery and Jim Crow segregation, to mass incarceration and the new Jim Crow. Even now, when the multiheaded hydra of anti-black racism has become digitized via the New Jim Code (Benjamin, 2019), a new type of sociotechnical freedom dreaming is needed. This is because “calls for abolition are never simply about bringing harmful systems to an end, but also about envisioning new ones in their place” (Benjamin, 2019, p. 162).

In sum, critical race algorithmic literacy is defined as the accumulation of historical, cultural and computational sensibilities needed to actively identify and intervene upon anti-black racial logics that are encoded within technological hardware, software and infrastructures, and that constrict, constrain and extinguish Black life within and beyond the school setting. Students who have developed critical race algorithmic literacies are able to:

  • make sense of everyday experiences with digital and algorithmically-mediated racism (sociotechnical consciousness);

  • critically navigate, resist and subvert algorithmic racism in everyday technologies (sociotechnical resistance); and

  • reimagine and dream up counter-technologies that protect and sustain Black life, joy and wellness on a techno-structural and sociotechnical level (sociotechnical freedom dreaming).

In the following section, I discuss the theoretical underpinnings of CRAL, and detail how roots in CRTT add an intersectional, Black radical and transdisciplinary lens to the framework.

Theoretical framework

Critical race algorithmic literacy shares similarities with critical race media literacy, and is in many ways a loving extension of that framework into the sociotechnical and techno-structural underpinnings of AI and digital technology. However, CRAL is distinct in its commitment to interrogating race as a technology (Benjamin, 2019), and anti-blackness as an algorithm designed to “solve” the problem of Black life and liberation (Tanksley, 2023a, 2023b). This distinction is due, in large part, to CRAL’s theoretical origins and onto-epistemological lineage. While critical race media literacy is rooted in critical race theory in education, which is a theory of race and a critique of white supremacy and multiculturalism within schools (Dumas, 2016), critical race algorithmic literacy is rooted in critical race technology theory (Tanksley, 2022, 2023a, 2023b; Tanksley and Hunter, 2024), which is a theorization of blackness as a technology and a critique of anti-blackness as the organizing logic and default settings of both schools and technologies. This distinction is important, as CRTT is designed to reckon with anti-blackness as a phenomenon that is both distinct from and compounded by white supremacy and racism writ large. As Dumas and ross (2016) note, “only critical theorization of blackness confronts the specificity of anti-blackness, as a social construction, as an embodied lived experience of social suffering and resistance, and perhaps most importantly, as an antagonism, in which the Black is a despised thing-in-itself […] in opposition to all that is pure, human(e), and White.” Thus, to disrupt and dismantle Black death, discipline and dehumanization as the organizing logic of schools (Love, 2023; Dumas and ross, 2016; Jenkins, 2021, 2022) and technologies (Browne, 2015; Nkonde, 2019; Buolamwini and Gebru, 2018), we need a framework that is designed to reckon with algorithmic anti-blackness in both digital and analog settings.

In response to these calls, Critical Race Technology Theory (CRTT or TechCRT) has emerged (Tanksley, 2022, 2023a, 2023b). This framework disrupts majoritarian narratives that characterize AI technologies as postracial, apolitical and inherently democratic and instead acknowledges how racial logics, formulas and (mis)calculations become encoded within AI technologies in ways that create, sustain and exacerbate inequitable and dehumanizing educational experiences for Black youth (Tanksley, 2019, 2022, 2023a; Tanksley and Hunter, 2024). In doing so, CRTT “shifts discourse away from simple arguments about the liberatory possibilities” of AI toward more critical engagements with how technology “is a site of power and control over Black life” (Noble, 2016, p. 2). Thus, CRTT works to expose the racialized layers of subordination embedded within AI that have historically restricted Black students’ access to, representation in, and agency over digital systems that influence their educational, socio-political and technological experiences (Tanksley, 2019, 2023a, 2023b). CRTT comprises five tenets:

  1. The Intercentricity of Algorithmic Racism;

  2. The Challenge to Prevailing Narratives of TechnoSolutionism, Objectivity and PostRaciality;

  3. Commitment to Sociotechnical Justice;

  4. The Centrality of Experiential Knowledge; and

  5. The Transdisciplinary Perspective.

Imbued with the onto-epistemological power, potentiality and specificity of CRTT, CRAL can help prepare students to develop more historically anchored, race-conscious and justice-oriented understandings of “critical,” “ethical” and “responsible” use and design of AI. It also equips them with the abolitionist sensibilities and computational skills needed to dream up new technologies and sociotechnical futures that protect – rather than extinguish – Black life, well-being and educational success.

Methodology and methods

Course overview

This paper focuses on the experiences of three cohorts (n = 40) of self-identified Black and Afro-descendent high school students from 13 schools across Southern California who participated in a critical race technology course called “Race, Abolition and Artificial Intelligence.” The goal of the course was to prepare students to interrogate the ubiquity of anti-black racism within the socio-technical architectures (e.g. code, data, algorithms, etc.) of AI technologies, including image recognition systems, chatbots, (dis)embodied agents and more. Students were recruited for the larger study through their participation in a culturally relevant college-bridge program that supports college-going Students of Color in Southern California. As part of their participation in the residential bridge program, students are required to complete the five-week “Race, Abolition and AI” course. As an educator, my goals for the course were threefold:

  1. to document and validate the everyday algorithmic literacies that Black youth developed and deployed prior to taking the course;

  2. to reinforce and extend these homegrown literacies into CRAL by providing opportunities to critically interrogate anti-black racism within a wide range of technological hardware, software and infrastructures; and

  3. to provide opportunities for students to operationalize CRAL to reimagine and redesign sociotechnical systems that could heal rather than harm Communities of Color.

Data collection

The larger study included a wide variety of data, including recorded lectures and class discussions, researcher field notes and reflective memos, annotated lesson plans and student work. However, for the purpose of this paper, I was concerned with how the acquisition of critical race algorithmic literacies informed how students were identifying, interrogating, subverting and reimagining AI technologies that quietly shape their educational experiences. Thus, I focused my analysis on a subset of data that directly centered students’ voices, experiences and sociotechnical insights, including their cumulative “talk backs,” final presentations and weekly journal entries.

After completing the five-week course, students participated in one-on-one semi-structured “talk backs,” which gave them an opportunity to share their burgeoning insights on and critiques of AI, and to talk more in-depth about how they believed the knowledge, skills and competencies gleaned from the course impacted their daily lives and schooling experiences. These talk backs were an extension of the course, which positioned students as holders and creators of knowledge (Delgado-Bernal, 2002) whose experiential insights and techno-social funds of knowledge are invaluable to the production of knowledge on education and AI. The course used abolitionist pedagogies (Love, 2019) that actively and explicitly disrupted traditional power hierarchies that regularly silence, devalue and penalize Black students’ raw, authentic accounts of race, racism and power in education. Thus, students came to the talk backs having experienced five weeks of “speaking truth to power” without judgment, silencing or punishment. Students’ ideas, suggestions and critiques were readily and lovingly taken up in the course (e.g. lessons were regularly adapted in real time to match their impromptu questions and burgeoning interests, and the structure of the final presentations was adapted to accommodate students’ stated preferences), and they were reassured throughout their talk backs that the same conditions would apply to this sacred space. Like the course, the talk backs valued and used Black linguistic practices and ways of knowing, including testifying, playing the dozens, overlapped speech, circular narratives, counter-storytelling and more. By doing so, the course and the talk backs actively disrupted the ideological norms and power hierarchies that deter Black youth from sharing their technological and sociopolitical expertise in ways that honor their historical, cultural and ancestral practices.

Each talk back session lasted around 60 minutes and included questions that explored how students understood the power, applicability and affordances of critical race algorithmic literacy to their everyday lives and schooling experiences (e.g. “Can you think of a time where you connected the concepts we learned in class to your everyday life?”, “How, if at all, did your use and understanding of AI change as a result of the course?”, “Tell me about your final design project. What did you design and why?”, “How, if at all, does your project disrupt anti-blackness?”). Each talk back also included an opportunity for students to analyze current and future tech dilemmas in their school. The two dilemmas discussed were Los Angeles Unified School District’s (LAUSD) decision to implement its own AI chatbot to provide individualized support for “at risk” students in “high needs” schools, and researchers’ plans for a campus-based robot that will be designed by and with students.

In addition to talk backs, I also analyzed students’ weekly reflection posts, which were posted to a collaborative journal application that was secure, private and accessible only by members of our course. I chose to use a mobile journaling application rather than traditional educational courseware, such as Google Suites or Canvas, because its user interface is designed to resemble a social media newsfeed. It was my hope that the social media aesthetics of posting, sharing, interacting and scrolling would encourage more authentic, multimodal forms of student engagement and ways of knowing. For instance, students were encouraged to use emojis, embed memes or GIFs, include personal stories and “hot takes,” share popular media content and trending news stories, and more. Each week, students responded to prompts asking them to reflect on their learning from the week and use their emerging algorithmic literacies to analyze real life tech dilemmas, such as the rise in deep fakes; AI-generated art; innovations in virtual and augmented reality; anti-surveillance fashion tech and more.

Data analysis

A thematic analysis approach (Merriam, 2009) informed my coding process: I searched for patterns emerging from continuous and systematic review of the data, in conjunction with the educational theories influencing my sense-making around literacy, including critical race algorithmic literacies and critical race technology theory. These frameworks informed my initial coding scheme, which involved looking for evidence that students developed and used critical race algorithmic literacies to make sense of anti-black AI systems in everyday settings. To do so, I constructed a preliminary coding tree that reflected the three dimensions of CRAL: sociotechnical consciousness (being able to name, understand and interrogate anti-black technologies and infrastructures in their everyday lives); sociotechnical resistance (being able to subvert, challenge or critically navigate anti-black AI in their everyday lives); and sociotechnical freedom dreaming (reimagining, rewriting or rebuilding anti-black AI systems in ways that support and sustain Black life, joy and educational wellness). Next, I began nuancing and expanding my coding tree based on themes and patterns that emerged from continuous review of the data corpus. As I coded, I jotted down questions and/or clarifications that arose when using the existing codes, as well as potential additions to or deletions from the coding scheme. After coding all of the student interviews, I used the emerging coding scheme to code the digital journal entries. While moving across each of these data sets, I worked to collapse, expand and delete codes so that they reflected broader patterns across the larger data set. After continuous, systematic review of the data, I arrived at three thematic findings related to the power, purpose and applicability of CRAL for Black students in increasingly AI-inundated schools.
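
To make the analytic scaffold explicit, the preliminary coding tree can be sketched as a simple nested structure (the sub-codes shown are condensed from the dimension definitions above, not an exhaustive list of the codes that ultimately emerged):

```python
# Condensed sketch of the preliminary CRAL coding tree described above.
# Sub-codes paraphrase the three dimension definitions; in practice, codes
# were expanded, collapsed and deleted across successive passes of the data.
coding_tree = {
    "sociotechnical consciousness": [
        "names/interrogates anti-black technologies and infrastructures",
        "connects everyday experiences to algorithmic racism",
    ],
    "sociotechnical resistance": [
        "subverts or challenges anti-black AI",
        "critically navigates everyday AI systems",
    ],
    "sociotechnical freedom dreaming": [
        "reimagines, rewrites or rebuilds anti-black AI systems",
        "designs toward Black life, joy and educational wellness",
    ],
}
```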

Findings

Three themes emerged from the data analysis. First, CRAL was associated with an increased critical awareness of AI in general, and in relation to anti-black racism in particular. Second, this critical awareness of anti-blackness and AI was linked to more robust interrogations of the rhetorical underpinnings and material consequences of “unethical” approaches to AI in their current schools. Third, CRAL was associated with efforts to reimagine and redesign AI systems – and the sociopolitical systems and educational structures that surrounded them – in ways that protect, uplift and support Black educational success:

  1. CRAL was associated with an increased critical awareness of AI in general, and in relation to anti-black racism in particular.

In general, students reported that participation in the course fostered a foundational understanding of AI that they had not previously developed in traditional educational settings. Although every single student in the summer program had encountered some sort of AI in the formal learning context, whether through content moderation systems, school safety technologies or ChatGPT, none of them had been explicitly taught about AI in school. In fact, most of the students reported having no prior understanding of what AI was, how it worked, or what role it played in their educational experiences before – or even after – the onset of ChatGPT in November of 2022. As Tiana says, “before this class I actually had no idea what AI really was. I knew ChatGPT was a thing. But I really didn’t know anything about AI and I also didn’t know how ChatGPT worked. So after this class, I became more familiar with AI, and it made me less scared.” Likewise, Kianna says, “Honestly, I don’t think I knew what AI stood for when I came to this class. I didn’t even know how common it was in the way our society functions, or how much we use it […] and I think that’s what made me feel like AI is dangerous.” Ayanna felt similarly, noting, “I wouldn’t say I had great knowledge on AI. It always seemed like something people really talked about, but never really specified what it was, and so I kind of categorized it as like robots or something. It just seemed scary and dangerous.”

Importantly, students emphasized that learning foundational knowledge about AI in ways that directly explained their experiences with analog and digital racism played a key role in fostering more agentic relationships with AI. Tiana reflected on how the race-conscious activities made her less afraid of AI. She writes:

Something that stood out to me is that the programming of AI is racist because it is coded with the biases of the programmer. I learned that so many things in our society already use AI like Google searches and facial recognition. This made AI less scary and more digestible because I learned that humans are 100% in control of how the AI operates. I really enjoyed the “if/then” activity because we got to express the stereotypes that we experience and that we observe within our society. I also liked how we were able to recognize that those stereotypes are coded into AI (Figures 1 and 2).

Nia feels similarly, noting:

My most impactful moment was learning about ChatGPT and Google and Bing AI. I think it was because it was interactive. We got to play around with it […] My group had searched “What are Black names?” and there was a couple of names was like T.H.O.T. […] It was so upfront and in our face. I don’t think we had ever really noticed it before, because the schools that we’re in are just teaching us to think that we’re being dramatic about racism when we experience it or when we see it, or we’re overreacting. But I think that was like my third time of [AI-mediated racism] just being in my face. And I was like this is real!

Students reported that participation in the course fostered a racialized awareness of AI that irreversibly impacted the way they understood and experienced everyday AI systems. Ayanna captures this in her first journal entry, admitting that she was “so used to having [AI] in my life” that she “never really questioned it” before taking the course. However, in her third journal entry, Ayanna reflects, “Every day I start to notice more about how AI is a subtle part of our lives and how things that may go unnoticed pertaining to blatant racism are not a coincidence and more by design to further oppress People of Color.” As Mychal states, “Youth now, we’re so used to technology and if something [racist] happens we’re like, ‘Oh, okay, it’s just technology,’ but when we were in your program, you were like, ‘No, this is why that happens.’ […] I think it’s important for youth to know that, because now that I know, it’s like, ‘No. That’s not normal. That’s because of this, this, and this.’” Kianna echoes Mychal, admitting, “I felt like this was such an eye opening class for me, because I just learned a lot of things I felt [were] applicable, very applicable. I feel like a lot of my math stuff, I’m never gonna use it ever […] But I am gonna use the knowledge I learned in this class.” Many students felt similarly to Kianna and Mychal, stating that CRAL prepared them to resist and subvert AI-mediated racism in real time. This is because, as Tiana puts it, the students left the course “being able to identify anti-blackness in tech and online almost immediately,” which subsequently enabled them to “use the knowledge of anti-blackness within technology to try to attack it” in everyday settings, like school:

  2. CRAL was linked to an increased ability to interrogate algorithmic anti-blackness in both digital and analog settings in schools.

Fascinatingly, students’ increased ability to recognize anti-blackness in AI “almost immediately” helped them make sense of their disparate experiences with AI learning technologies. In his third journal entry, Amari writes, “I have noticed myself making connections to what we learned about algorithms and AI recently. It has led me to the observation that Grammarly is also an inherently racist algorithm that is also eurocentric, and I have realized this based on certain words/phrases it wanted to rephrase/capitalize.” Ayanna had a similar realization, noting, “I like to write a lot. So I used to write a lot of my own stories and the little autocorrect thing, if I said anything, and a term that may come out as Black, it would autocorrect it to something else more proper. And it’s just like that wasn’t what I wanted. So I can definitely see now there was some bias in that.”

In addition to interrogating anti-black infrastructures of popular learning technologies, like Grammarly, students also began to interrogate their schools’ race-neutral approach to AI, school discipline and academic dishonesty. In discussing her school’s rules about AI-facilitated cheating, Nia reflects:

Whenever I thought of the term AI, I always thought of ChatGPT […] and always from my teachers’ [perspectives]. It was kind of frowned upon, because it’s used for cheating. So I never really used it. And in that way, I kind of frowned upon myself, because teachers didn’t think of it as a useful tool for education. And it was mainly a cheating device. So I tried to stay away from it as much as I could […] but then once we started diving deeper into the [Race, Abolition and AI] course, and learning about how AI is really subjective, and it was really like, I don’t know how to explain it but like, made to be kind of against Black people, […] I think that was really when I saw AI for myself. So, as my teachers were telling me ‘oh, it’s a cheating device’, I was realizing that it’s also kind of racist.

Later, when Nia learns that LAUSD is rolling out its own AI chatbot as part of its new academic support system, she calls attention to the contradictions in her district’s previous stance on unethical AI use, noting:

The school that I was in, they were like, ‘Don’t use ChatGPT. If we catch you using ChatGPT then you’re gonna get suspended or you can even get expelled for cheating.’ So now I’m just kind of confused why they would even put their own chatbot if the point is to get kids to not cheat on things.

She goes on to critique how school-based AI technologies – not students – are often unethical by design, highlighting how school safety technologies rely on law enforcement to respond to notifications of “unethical” student behavior. She says:

I think that AI is so flawed because […] it’s taught that things that Black people do should be flagged or things that LGBTQ+ people should do should be flagged. It gets taught those things. So then now, you’re just wasting the police’s time, because there’s kids that aren’t even doing anything [wrong] in the first place. And then you’re wrongly accusing children of doing things that they’re probably not even doing in the first place. And that’s a traumatic moment for a police officer to show up to your classroom saying that they need to talk to you because they think that you’re suspicious because you’ve been flagged by AI for doing something that you didn’t even do and now you have to prove to the police that you didn’t do it.

Similarly, Kianna bridges her own experiences with anti-black racism and school discipline policies to interrogate “unethical” uses of AI in school. Even though she’s “never been a bad kid,” Kianna was falsely accused of cheating several times by a teacher well before the onset of ChatGPT and generative AI. She explains that her teacher “had some weird expectation of me […] It’s like I’m supposed to do a lot better than all the other kids, and if I mess up, that’s all. I just mess up.” Having to navigate racialized expectations for Black girls to do “twice the work for half the credit” while simultaneously confronting negative stereotypes about their inherent criminality not only made Kianna feel unsupported and “disposable,” but it also made her uneasy about using AI to support her learning. Even though her AP language course encouraged students to use AI to help with second language acquisition, Kianna was reluctant to use the feature because of fears she would be falsely accused of cheating and unethical use of AI. However, after acquiring critical race algorithmic literacy, her comfort with using AI to augment and support her learning changed drastically. She explains:

I think I’m more prone to use it now. Because Quizlet has this feature where you can use AI and I’m in AP Mandarin this year, so I have to use it. I have to use that little AI feature. And, I’m not mad about it. Now I feel good because I feel like it’s there to help me instead of it being there to help me cheat, or not do things the right way. I feel more inclined to use it because I feel like it’s more of a tool rather than something that would harm me in the end.

For Kianna, who felt ill-prepared to defend herself against false cheating accusations when it came to AI, the race-conscious AI literacies gained from the course empowered her to challenge colorblind approaches to AI in school moving forward. In describing how she plans to speak truth to power in future discussions of AI in her school, Kianna says:

I know what [AI] is. I can explain it to someone now. If there’s a conversation about it, I’m definitely not going to be sitting in the corner and just listening. I’m going to be talking […] And when I talk about AI, I’m gonna be that one annoying person that’s always gonna bring up Black people. I don’t care. I’m gonna bring it up every second I get. I’m always gonna talk about it. People are always annoyed like “Why you always gonna talk about it that way? You always gotta bring race.” But it’s literally everywhere. Everywhere. And the people that say that are the people that aren’t even affected by it.

Similar to Kianna and Nia, Tiana found herself using CRAL to challenge overzealous disciplinary policies around AI and cheating. Specifically, she leverages her newfound understanding of AI to challenge teachers’ universal bans of AI, which she feels hinder students’ learning and set them up for discipline and hyper-surveillance. She notes, “A lot of schools are also banning AI in the classroom. And so being able to have a conversation with teachers […] and being like, ‘yeah it’s not cool to use AI on your work. But also, did you know that Google search is AI […] Let’s talk about it.’” After discussing teachers’ misunderstandings of which technologies count as AI (and how blanket bans on AI will negatively impact students’ learning), Tiana shifts her critique toward the unethical design of the AI educational technologies used to determine unethical AI use by students. She states:

First, I don’t think AI should be connected to law enforcement at all […] I think instead of focusing on reporting and policing students, I think it should be focusing on providing and helping students […] giving people an opportunity to explain themselves, and not be treated as criminals, but just be treated as humans. Kind of using a restorative justice approach instead of a zero tolerance policy sort of thing.

Tiana’s ability to advocate against and dream beyond the current carceral uses of AI in schools highlights the generative potential of CRAL for youth who are forced to navigate increasingly AI-inundated schools:

  3. CRAL mediated students’ efforts to reimagine and redesign schools and school-based technologies that foster racial justice and educational equity for marginalized youth.

Finally, CRAL was linked to students’ efforts to reimagine schools and school-based technologies that could remediate educational inequity for Black students. For a reflective journal assignment, students read an article about how robot agents will soon be implemented into urban schools to assist with teacher burnout, educational inequity and school safety. They were asked to respond with questions, comments or suggestions for schools that were interested in using AI in this way. To start, students critiqued techno-solutionist approaches that assume AI will inherently remediate problems of educational inequity. André says, “I’m not too sure on whether or not AI would help in urban classrooms but given with how much knowledge we have regarding the potential of biases playing a role in said AI’s programming, I think it’s best if we do not go with that approach because I feel like it’d be ineffective.” Nia agrees, noting, “I don’t think that it is a good idea and I don’t understand how that would provide equity in the classroom. Also why would we start to put these things in ‘urban classrooms.’”

In addition to challenging techno-solutionism, students also provided alternative, human-focused solutions that wrestled with offline algorithms – such as racially hostile school climates, zero tolerance discipline protocols and chronic underfunding of Black and Brown schools – that directly hinder educational equity for Black students. For instance, Paris writes, “I just think there needs to be an intense training system that reconstructs the way teachers interact with their students instead, making the classroom feel inclusive using trauma-informed methods.” Tiana adds, “I have some concerns that the AI will not increase equity but may be a tool to further marginalize these students because what minoritized students need is to feel like they matter, to have good relationships with their teachers, and to have a culturally relevant curriculum.” After reflecting on the role that historic redlining and chronic underfunding play in educational inequity, Amari poses a provocative question, asking, “Instead of spending money to fund the implementation of AI in these schools, couldn’t we pay the teachers more?” Ayanna feels similarly, noting, “The idea of putting AI in urban classrooms to increase equity is dismissing the underlying issues as to why these classrooms often have less resources compared to the white counterpart.” Tiana draws attention to the need for more historically situated, race-conscious and systems-focused approaches, noting, “I think in some ways like it’s trying to create a really quick solution to a big problem instead of tackling or even asking what [marginalized students] actually need. They’re just coming up with something easy, and I feel like we shouldn’t take the easy route when it comes to students and their education and what they deserve.” André agrees, admitting, “I can see that there’s good sentiments, but from the way in which this matter has been treated in the past, and how it’s continued to be treated, it just leaves a bad taste in my mouth. I feel like that’s more of a Band-aid approach […] why we gonna let robots make all the change for us? When are we going to fix it?”

In addition to noting how intersectional, mutually constructing systems of power collide to create systemic and historically rooted disparities, students also used CRAL to name how anti-black infrastructures could advance – rather than remediate – educational inequity for marginalized groups. Kali writes, “Having AI robots does sound helpful when it comes to aiding overworked teachers but it isn’t. Robots aren’t sentient but can be wrongly programmed to hold certain biases against marginalized people that could possibly lead to them grading the papers wrong, just like that thing that they had over zoom to make sure nobody was cheating on tests wasn’t inclusive to anybody.” Likewise, Amari notes, “If AI are generally being programmed to be politically neutral and are very vague and limited in responses, how can they help [Children of Color] who want to learn about their race and history? Is this another ploy of false generosity? Is this a ploy to expose children to society’s ideals earlier and try and shape their minds in a way that would please current societal norms?” Jax reflects, “I am concerned that the robots would not be able to effectively help students of color. Since the robots have a hard time even recognizing Black people, I feel like helping them with certain topics would be very difficult. They don’t provide accurate information about Black history and they don’t even talk about it sometimes.” Likewise, Kianna writes:

One concern I have about placing robots in “urban classrooms” is how these robots will treat Black students based on the ways that Black students have historically been treated: unfairly, and discriminatively. I begin to think about the system that was made to predict crime, and how it targeted mainly Black and Brown communities, and I relate the statistics of Black students, Black girls in particular, as aggressive and violent. Therefore, since AI draws on data taken from the internet, won’t these AI punish Black students more severely than Whites simply based on the over representation of Black students as “bad kids” […]?

Importantly, students did more than simply critique, interrogate and challenge AI systems; rather, they offered technologically astute ideas for how to design, implement and audit AI systems in ways that protect vulnerable student groups. For instance, Jamar says:

Before implementing any AI systems into our school, I feel like they should take this class. They should know about the advantages and disadvantages of AI, and what AI can bring to the table and how it can also bring disaster along with it, and how it’s not always gonna be fully functional in the way that they foresee it […] On top of that, it is also not really meant for certain people, and certain people can’t really access AI. So I feel like they should also be aware of that.

Likewise, Tiana (re)imagines generative AI chatbots as anti-racist educators capable of intervening in anti-black microaggressions in educational settings. She says:

I think an AI that can ask you questions if you did something [racist]. Imagine talking to an AI and the AI is like, “do you realize the impact of what you said?” I feel like an AI like that could be helpful to white kids to not inflict microaggressions and racialized trauma on others, and their biases. And I think an AI conversation about biases would help a white kid really understand what they’re thinking.

Importantly, Tiana understands microaggressions to be symptomatic of larger societal norms around race-neutrality and postracialism that enable white supremacy and anti-blackness to exist unchallenged. In her reimagining of educational technologies, she sees power in tethering anti-racist education to AI and using these systems to disrupt racism in everyday microinteractions.

Likewise, after sharing with the class how their K-12 experiences were rife with rhetorical, psychological, spiritual and even physical violence against Black students, Brooklynne, Kianna, Xander and Ayanna dreamt up a justice-oriented AI agent that could protect students against racial violence levied by educators, counselors, administrators or school resource officers. As they state in their presentation, “teachers often discriminate against students without facing repercussions.” With this in mind, the groupmates dreamt up an AI technology designed to “flag racist, prejudice, ableist or homophobic interactions with students” to “make sure [educators] are creating valuable student-teacher relationships that allow students to grow and prosper mentally and academically.” Importantly, Brooklynne and her groupmates felt that an AI agent designed to protect, support and advocate on behalf of vulnerable student groups is needed not only “because marginalized students’ perspectives constantly get overlooked in academic settings” but also because it “will allow students to voice their experience, build the strength to call out discrimination and realize that they deserve equity in the classroom” (Figure 3).

Discussion

As the findings in this study show, CRAL can provide a starting point for Black students looking to “do more than survive” (Love, 2019) algorithmic racism and the AI-mediated carceral apparatus in schools. In particular, CRAL can help students learn to read AI technologies – and the sociopolitical systems and educational contexts in which they exist – in race-conscious, historically anchored and systems-focused ways. For instance, CRAL was exhibited when Amari and Ayanna identified racially biased learning technologies, such as Google, ChatGPT and Grammarly, as “ineffective” AI tools, since these systems were unable to support inclusive, culturally responsive and holistically humanizing experiences for Black learners.

Students’ critical race algorithmic literacies were further evidenced when they explicitly connected their experiences with analog algorithms – such as low teacher expectations, zero tolerance policies and assumptions of Black incompetence and criminality – to inequitable, distressing and “unethical” experiences with AI in the classroom context. For Nia, CRAL helped her to reread her district’s overzealous disciplinary protocols around unethical AI use, as well as the district’s subsequent decision to use its own chatbot, as text. Specifically, she was able to interrogate how technological infrastructure (e.g. biased content moderation systems and image recognition systems) and carceral design logics (e.g. the decision to connect the platform to law enforcement) work in ways that contradict the district’s stated purpose for using AI (e.g. to make schools safer and to increase academic honesty). Even further, she exposes the underlying race stories that inform all of these processes (e.g. Students of Color are inherently troublemakers and lack academic honesty) and that ultimately reproduce educational disparities (e.g. schools become “traumatic” and less safe for Black youth) that take schools further away from their stated equity goals.

Likewise, for Kianna, who experienced firsthand how anti-black racial logics structure disparate learning opportunities and disciplinary responses, CRAL helped her read and resist algorithmic racism in both digital and analog settings. First, she makes compelling connections between anti-black logics (e.g. Black students are assumed to be less intelligent and more likely to cheat) and sociotechnical outcomes (e.g. Black students are less likely to use AI to support their learning, which not only hinders their academic progress but creates a self-fulfilling prophecy of poor educational outcomes). Later, Kianna uses her critical awareness of algorithmic anti-blackness to challenge race-neutral conversations about AI in the classroom context (e.g. “When it comes to AI, I’m always gonna bring up Black people”). In these examples, CRAL enabled students to challenge prevailing notions of “unethical” AI, redefining it to mean technologies that contain biased infrastructures that produce algorithmic microaggressions (Tanksley, 2019, 2022) that exacerbate school discipline disparities, feelings of disposability and academic push out.

In addition to helping students interrogate and challenge AI-mediated racism in their everyday schooling experiences, CRAL also supported students in reimagining schools and school-based technologies that support educational equity. Students like Jax and Tiana imagined AI systems designed to support mental and physical wellness; teach cultural and racial histories that are banned, erased or denigrated; provide anti-racist education and allyship training to perpetrators of racial violence; and intervene in educational injustice and abuse of power in real time. Here, CRAL can encourage students to tinker with prominent definitions of “efficient” and “effective” use of AI, and instead identify racially biased and profoundly carceral uses of AI as “ineffective” and “inefficient” for advancing educational equity and consequential learning.

Finally, students challenged prevailing understandings of “critical” AI usage, calling attention to how schools’ failure to audit AI systems for algorithmic biases before implementing them in schools at scale threatens students’ safety, well-being and educational success. They simultaneously critiqued schools’ techno-solutionist beliefs that AI will inherently remediate historic, interlocking and mutually constructing systems of inequity as “oversimplified,” “reductive” and “not critical enough.” Instead, students demanded thoughtful consideration of educational norms, policies and practices – like culturally irrelevant curricula, systemic underfunding, educational resource deprivation, school policing and zero tolerance policies – in place of “Band-aid” solutions that use AI. In these and other ways, CRAL answers the abolitionist call to prepare students to “do more than survive” (Love, 2019) educational inequity and technological racism, cultivating students’ ability to dream up new AI systems that could upend the algorithmic afterlives of slavery that persist in schools.

Conclusion

Although every single student in the Race, Abolition and AI program attended a K-12 institution that used AI technologies, few if any were able to identify and describe what role AI played in their educational experience before – and even after – the release of ChatGPT in November 2022. Though students were familiar with ChatGPT, they had not been explicitly taught how it worked or how to use it; in fact, most of them attended schools that banned the technology outright for fear of cheating. As students in this study repeatedly reported, these disparities occur because assumptions of Black criminality – rather than Black intellectualism, curiosity and capability – often undergird urban schools’ approach to AI and technology adoption. A national study by the Center for Democracy and Technology confirms students’ experiential insights, revealing that Black students are more likely to be surveilled and penalized for using AI to support learning – even when the tool is used for “ethical” reasons, such as augmenting learning or providing accommodations (Laird and Dwyer, 2023). As the findings in this study show, it’s essential for educators, administrators, school resource officers and students to understand how “colorblind” and “race neutral” approaches to AI are often informed by deep-rooted and largely invisible logics of white supremacy and anti-blackness that exacerbate educational inequity for already marginalized student groups. Not only does restricting and penalizing students’ use of AI strip them of opportunities to develop the 21st century technology skills needed to thrive in an increasingly AI-inundated world, but these carceral approaches also make Black students more vulnerable to school punishment, push out and carceral contact, exacerbating the school-to-prison nexus via the New Jim Code (Benjamin, 2019; Tanksley, 2023a).

While CRAL can support students’ efforts to interrogate how anti-blackness undergirds the race-evasive deployment of AI in schools, CRTT’s emphasis on historical embedding can reveal how schools’ anti-black approach to AI today extends a long-standing, tech-mediated phenomenon in which accusations of “unethical technology use” supercharge the school-to-prison pipeline. Years before it implemented a comprehensive ban on ChatGPT, Los Angeles Unified School District – the second largest district in the country (LAUSD, 2024), home to the largest independent school police department in the USA (Los Angeles School Police Association, 2024) and the district that serves the majority of students in this study – experienced a similar incident following a high-profile EdTech rollout in 2013. In an attempt to close the “digital divide,” the district launched a multimillion-dollar EdTech initiative to provide an Apple iPad to every student in the district (Lamb and Weiner, 2018). Notably, a sizable portion of the lofty EdTech budget was used to expand the campus police force – a move said to protect campuses from threats linked to students’ device usage and ownership (Gilbertson, 2018). After just one week, however, the rollout was halted amid claims that students were engaging in “unethical” use of the devices – namely, “hacking” – and school disciplinary procedures ensued (Blume, 2013). Interestingly, interviews with students accused of “hacking” the devices revealed that the safety software installed on them was so restrictive that the iPads were virtually unusable for educational purposes. Students reported being unable to complete class assignments or access educational websites because of content moderation settings, and expressed frustration at being denied an invaluable educational opportunity while simultaneously being characterized as delinquents undeserving of cutting-edge technologies (Sanders, 2013). Rather than recognizing students as technologically astute and capable of advanced computational knowledge, the district used student “hacking” to reaffirm narratives of Black criminality, positioning students as unethical users of technology and, therefore, undeserving and incapable of 21st century learning opportunities.

While the “iPad hacking scandal” subjected Students of Color to public humiliation and harsh disciplinary measures that impeded their opportunity to gain 21st century technology skills, findings from this study suggest that hacking can hold life-sustaining and liberatory possibilities for Black students forced to navigate racially unethical, uncritical and ineffective AI rollouts in the coming years. This is because CRAL prepares students to hack sociopolitical and sociotechnical systems toward racial justice and educational equity. According to Benjamin, hacking is a transformative and critically subversive act because “to hack a system one needs an in-depth understanding of how it works, its strengths and weaknesses, and a vision for how to make it better…make it do something it wasn’t meant to do” (Benjamin, 2013) – in this case, advance educational equity for Black students. Ultimately, it is my belief that CRAL can bring students one step closer to the vision of sociotechnical and educational justice that they dreamt up in the summer course. It’s like Jax says: “we’re changing the system with this one.”

Figures

Figure 1. Class slides defining algorithms in analog contexts

Figure 2. Student examples of “analog” algorithms that mediate their schooling experiences

Figure 3. Brooklyn and peers’ presentation slides for the teacher modification system

References

Alba, D. (2023), “OpenAI chatbot spits out biased musings, despite guardrails”, Bloomberg, available at: www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results

Atterbury, A. (2023), “After national backlash, Florida lawmakers eye changes to book restrictions”, Politico, available at: https://www.politico.com/news/2024/01/19/florida-book-challenges-fees-00136409

Beam, C. (2023), “The AI detection arms race is on”, Wired, available at: www.wired.com/story/ai-detection-chat-gpt-college-students/

Benjamin, R. (2013), “Playing the game or hacking the system?”, available at: www.huffpost.com/entry/playing-the-game-or-hacki_b_3370009

Benjamin, R. (2019), Race after Technology: Abolitionist Tools for the New Jim Code, Polity Press, Boston.

Blume, H. (2013), “LAUSD halts home use of iPads for students after devices hacked”, available at: www.latimes.com/local/lanow/la-xpm-2013-sep-25-la-me-ln-lausd-ipad-hack-20130925-story.html

Buolamwini, J. and Gebru, T. (2018), “Gender shades: intersectional accuracy disparities in commercial gender classification”, in Conference on Fairness, Accountability and Transparency, PMLR, pp. 77-91.

Browne, S. (2015), Dark Matters: On the Surveillance of Blackness, Duke University Press, Durham.

Chan, A. and Dillon, M. (2022), “Prison systems insist on banning books by black authors. It’s time to end the censorship”, available at: www.washingtonpost.com/opinions/2022/01/12/end-prisons-ban-books-black-authors-censorship-malcom-x-toni-morrison/

Cho, H. and Johnson, P. (2020), “Racism and sexism in superhero movies: Critical race media literacy in the Korean high school classroom”, International Journal of Multicultural Education, Vol. 22 No. 2, pp. 66-86.

Clark, M. (2021), “Students of color are getting flagged to their teachers because testing software can’t see them”, The Verge, available at: www.theverge.com/2021/4/8/22374386/proctorio-racial-bias-issues-opencv-facial-detection-schools-tests-remote-learning

Daniels, J. (2009), Cyber Racism: White Supremacy Online and the New Attack on Civil Rights, Rowman and Littlefield Publishers, Lanham.

Davis, A. (2023), “ChatGPT sparks cheating, ethical concerns as students try realistic essay writing technology”, ABC News, available at: www.abc.net.au/news/2023-01-26/chatgpt-sparks-cheating-ethical-concerns-in-schools-universities/101888440

Delgado-Bernal, D. (2002), “Critical race theory, Latino critical theory, and critical raced-gendered epistemologies: recognizing students of color as holders and creators of knowledge”, Qualitative Inquiry, Vol. 8 No. 1, pp. 105-126.

Downie-Chin, T., Cowley, M.P.S. and Worlds, M. (2020), “Whitewashing through film: how educators can use critical race media literacy to analyze Hollywood’s adaptation of Angie Thomas’ the hate U give”, International Journal of Multicultural Education, Vol. 22 No. 2, pp. 129-144, doi: 10.18251/ijme.v22i2.2457.

Feathers, T. (2021), “Proctorio is using racist algorithms to detect faces”, Vice, available at: www.vice.com/en/article/g5gxg3/proctorio-is-using-racist-algorithms-to-detect-faces

Feathers, T. and Mehrotra, D. (2023), “Inside America’s school internet censorship machine”, WIRED, available at: www.wired.com/story/inside-americas-school-internet-censorship-machine/

Fowler, G. (2023), “We tested a new ChatGPT-detector for teachers. It flagged an innocent student”, Washington Post, available at: www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/

Freire, P. (1970), Pedagogy of the Oppressed, Continuum, New York, NY.

Gilbertson, A. (2018), “LA unified police say $8 million needed to keep iPad-toting students safe”, available at: https://archive.kpcc.org/blogs/education/2014/10/31/17495/la-unified-police-say-8-million-needed-to-keep-ipa/

Gordon, C. (2023), “How are educators reacting to chat GPT?”, available at: www.forbes.com/sites/cindygordon/2023/04/30/how-are-educators-reacting-to-chat-gpt/?sh=482f673b2f1c

Hartman, S. (2007), Lose Your Mother: A Journey along the Atlantic Slave Route, Farrar, Straus, and Giroux, New York, NY.

Herold, B. (2022), “Schools are deploying massive digital surveillance systems. The results are alarming”, Education Week, available at: www.edweek.org/technology/schools-are-deploying-massive-digital-surveillance-systems-the-results-arealarming/2019/05

Herold, B. and Harris, E. (2019), “Schools are using widespread digital surveillance of students. Does it keep them safe? Or invade their privacy?”, Education Week, available at: www.edweek.org/technology/video-schools-are-using-widespread-digitalsurveillance-of-students-does-it-keep-them-safe-or-invade-their-privacy/2019/05

Jenkins, D.A. (2021), “Unspoken grammar of place: anti-blackness as a spatial imaginary in education”, Journal of School Leadership, Vol. 31 Nos 1/2, pp. 107-126.

Jenkins, D.A. (2022), “Feeling black: Black urban high school youth and visceral geographies of anti-Black racism”, Equity and Excellence in Education, Vol. 55 No. 3, pp. 231-243.

Jimenez, K. (2023), “Schools nationwide are banning OpenAI’s ChatGPT. Here’s what experts say about the future of artificial intelligence in education”, available at: www.usatoday.com/story/news/education/2023/01/30/chatgpt-going-banned-teachers-sound-alarm-new-ai-tech/11069593002/

Johnson, A. (2023), “ChatGPT in schools: Here’s where it’s banned—and how it could potentially help students”, Forbes, available at: www.forbes.com/sites/ariannajohnson/2023/01/18/chatgpt-in-schools-heres-where-its-banned-and-how-it-could-potentially-help-students/?sh=43b628ea6e2c

Kelley, R.D. (2022), Freedom Dreams: The Black Radical Imagination, Beacon Press, Boston, MA.

Kristof-Brown, A. (2023), “Prioritize ChatGPT proficiency to enhance teaching and learning”, available at: www.insidehighered.com/opinion/views/2023/10/27/teach-college-students-use-ai-proficiently-opinion

Laird, E. and Dwyer, M. (2023), “Report – off task: EdTech threats to student privacy and equity in the age of AI”, Center for Democracy and Technology, available at: https://cdt.org/insights/report-off-task-edtech-threats-to-student-privacy-and-equity-in-the-ageof-ai/

Lamb, A.J. and Weiner, J.M. (2018), “Institutional factors in iPad rollout, adoption, and implementation: isomorphism and the case of the Los Angeles unified school District’s iPad initiative”, International Journal of Education in Mathematics, Science and Technology (IJEMST), Vol. 6 No. 2, pp. 136-154, doi: 10.18404/ijemst.408936.

LAUSD (2022), “District-wide artificial intelligence efforts and professional development”, available at: www.lausd.org/cms/lib/CA01000043/Centricity/Domain/21/IOC%20District-Wide%20Artificial%20Intelligence%20Efforts%20and%20Professional%20Development.pdf

Los Angeles School Police Association (2024), “Who we are”, available at: www.laschoolpolice.com/m/pages/index.jsp?uREC_ID=416085&type=d#:~:text=The%20Los%20Angeles%20School%20Police%20Department%20(LASPD)%20is%20the%20largest,Angeles%20Unified%20School%20District%20(LAUSD)

Los Angeles Unified School District (2024), “Home page”, available at: www.lausd.org/domain/4

Love, B.L. (2019), We Want to Do More than Survive: Abolitionist Teaching and the Pursuit of Educational Freedom, Beacon Press, Boston, MA.

Love, B.L. (2023), Punished for Dreaming: How School Reform Harms Black Children and How we Heal, St. Martin's Press, New York, NY.

Merriam, S.B. (2009), Qualitative Research: A Guide to Design and Implementation, Jossey-Bass, San Francisco, CA.

Metz, C. (2023), “Chatbots may ‘hallucinate’ more often than many realize”, New York Times, available at: www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html

Nkonde, M. (2019), “Automated anti-blackness: facial recognition in Brooklyn, New York”, Harvard Journal of African American Public Policy, Vol. 20, pp. 30-36.

Noble, S.U. (2016), “A future for intersectional black feminist technology studies”, Scholar and Feminist Online, Vol. 13 No. 3, pp. 1-8.

Noble, S.U. (2018a), “Critical surveillance literacy in social media: Interrogating black death and dying online”, Black Camera, Vol. 9 No. 2, pp. 147-160.

Noble, S.U. (2018b), Algorithms of Oppression: How Search Engines Reinforce Racism, NYU Press, New York, NY.

Orsek, B. (2023), “Why media literacy is key to tackling AI-powered misinformation”, The Hill, available at: https://thehill.com/opinion/technology/4108304-why-media-literacy-is-key-to-tackling-ai-powered-misinformation/

Price-Dennis, D., Muhammad, G.E., Womack, E., McArthur, S.A. and Haddix, M. (2017), “The multiple identities and literacies of black girlhood: a conversation about creating spaces for black girl voices”, Journal of Language and Literacy Education, Vol. 13 No. 2, pp. 1-18.

Proctorio (2024), “Frequently asked questions”, available at: https://proctorio.com/faq

Salman, J. (2023), “How AI can teach kids to write – not just cheat”, available at: https://hechingerreport.org/how-ai-can-teach-kids-to-write-not-just-cheat/

Sanders, S. (2013), “Why L.A. Students hacked into iPads: district is ’locking us out”, available at: www.kqed.org/mindshift/31705/why-l-a-students-hacked-into-ipads-district-is-locking-us-out

Sealy-Harrington, J. (2020), “Untelling the story of race”, The Walrus, Vol. 15.

Stanton, C.R., Hall, B. and DeCrane, V.W. (2020), “‘Keep it sacred!’: indigenous Youth-Led filmmaking to advance critical race media literacy”, International Journal of Multicultural Education, Vol. 22 No. 2, pp. 46-65.

Tanksley, T. (2019), “Race, education and #BlackLivesMatter: how social media activism shapes the educational experiences of Black college-age women”, Doctoral dissertation, UCLA.

Tanksley, T. (2022), “Race, education and #BlackLivesMatter: how online transformational resistance shapes the offline experiences of Black undergraduate women”, Urban Education, doi: 10.1177/00420859221092970.

Tanksley, T. (2023a), “Employing a critical race, abolitionist pedagogy in CS: centering the voices, experiences and technological innovations of Black youth”, Journal of Computer Science Integration, Vol. 6 No. 1, p. 9, doi: 10.26716/jcsi.2023.12.27.49.

Tanksley, T. (2023b), “AI technology threatens educational equity for marginalized students”, The Progressive Magazine, available at: https://progressive.org/public-schools-advocate/ai-educational-equity-for-marginalized-students-tanksley-20231125/

Tanksley, T. and Hunter, A. (2024), “Black youth, digital activism and racial battle fatigue: How black youth enact hope, humor and healing online”, in Connor, J. (Ed.) Handbook of Youth Activism, Edward Elgar Publishing, Northampton, MA.

Vicci, G. (2023), “ChatGPT making it easier for students to cheat in school”, CBS News, available at: www.cbsnews.com/detroit/news/chatgpt-making-it-easier-for-students-to-cheat-in-school/

Vincent, J. (2023), “OpenAI isn’t doing enough to make ChatGPT’s limitations clear”, The Verge, available at: www.theverge.com/2023/5/30/23741996/openai-chatgpt-false-information-misinformation-responsibility

Wing, J.M. (2008), “Computational thinking and thinking about computing”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol. 366 No. 1881, pp. 3717-3725.

Yosso, T.J. (2002), “Critical race media literacy: challenging deficit discourse about Chicanas/os”, Journal of Popular Film and Television, Vol. 30 No. 1, pp. 52-62.

Yosso, T.J. (2020), “Critical race media literacy for these urgent times”, International Journal of Multicultural Education, Vol. 22 No. 2, pp. 5-13.

Further reading

Alemán, S.M. and Alemán, E. Jr (2016), “Critical race media projects: counterstories and praxis (re)claim Chicana/o experiences”, Urban Education, Vol. 51 No. 3, pp. 287-314.

Collins, P.H. (1986), “Learning from the outsider within: the sociological significance of black feminist thought”, Social Problems, Vol. 33 No. 6, pp. s14-s32.

Degand, D. (2020), “Introducing critical race media literacy in an undergraduate education course about technology and arts-based inquiry”, International Journal of Multicultural Education, Vol. 22 No. 3, pp. 96-117.

Delgado-Bernal, D. (1998), “Using a Chicana feminist epistemology in educational research”, Harvard Educational Review, Vol. 68 No. 4, pp. 555-583.

Freelon, D., McIlwain, C.D. and Clark, M.D. (2016a), “Beyond the hashtags: #Ferguson, #BlackLivesMatter, and the online struggle for offline justice”, Washington, DC, American University, Center for Media and Social Impact, available at: http://cmsimpact.org/wp-content/uploads/2016/03/beyond_the_hashtags_2016.pdf

Freire, P. (1973), Education for a Critical Consciousness, Seabury, New York, NY.

Garcia, A., Seglem, R. and Share, J. (2013), “Transforming teaching and learning through critical media literacy pedagogy”, LEARNing Landscapes, Vol. 6 No. 2, pp. 109-124.

Garcia, A., Mirra, N., Morrell, E., Martinez, A. and Scorza, D.A. (2015), “The council of youth research: critical literacy and civic agency in the digital age”, Reading and Writing Quarterly, Vol. 31 No. 2, pp. 151-167.

Klein, A. (2023), “ChatGPT cheating: what to do when it happens”, Education Week, available at: www.edweek.org/technology/chatgpt-cheating-what-to-do-when-it-happens/2023/02

McMillan Cottom, T. (2016), “Black cyberfeminism: ways forward for intersectionality and digital sociology”, Digital Sociologies, pp. 211-232, doi: 10.2307/j.ctt1t89cfr.20.

Morrell, E. and Duncan-Andrade, J. (2005), “Popular culture and critical media pedagogy in secondary literacy classrooms”, International Journal of Learning, Vol. 12 No. 1, p. 11.

Nakamura, L. (2008), Digitizing Race: Visual Cultures of the Internet, Vol. 23, University of Minnesota Press, Minneapolis.

Nichols, P. (2012), “From knowledge to wisdom: Critical evaluation in new literacy instruction”, Voices from the Middle, Vol. 19 No. 4, p. 64.

Noble, S. (2012), “Searching for Black girls: old traditions in new media”, Doctoral dissertation, University of Illinois at Urbana-Champaign.

Noble, S.U. (2014), “Teaching Trayvon: race, media, and the politics of spectacle”, The Black Scholar, Vol. 44 No. 1, pp. 12-29.

Acknowledgements

Funding: CU Boulder Engineering Education and AI-augmented Learning Seed Grant; UCI Connected Learning Lab (CLL) Connected Impact Studio Seed Grant

Corresponding author

Tiera Chante Tanksley can be contacted at: tctanksl@g.ucla.edu
