I’ve recently reimagined an introductory sociology course as a digital sociology course that introduces students to social science research methods and foundational sociological concepts. In this class, students conduct a digital auto-ethnography of their social media feeds on platforms like Instagram and TikTok and use theories like C. Wright Mills’ “sociological imagination” to better understand how their individual identities, attitudes, and behaviors are shaped by social media algorithms. In another project, students conduct a content analysis of posts across a range of subreddits on Reddit to get a pulse on hegemonic and counterhegemonic attitudes and practices on everything from contemporary dating culture to sports. And each week, students post to our discussion board hosted on Padlet, a site that mirrors social media platforms to make content visually engaging and interactive: students can heart-react to comments, upvote or downvote, or even insert a gif to complement their post. I’ve felt comfortable, excited even, digitizing many elements of my course, from discussion boards to annotation software and even gamification through platforms like Menti and Kahoot.
And yet, despite teaching an introductory course with a digital sociology structure and foundation, I still struggle with what it looks like to invite and incorporate generative artificial intelligence into my classroom ethically and responsibly. I worry about integrating AI without contemplating AI. During a recent roundtable session, we explored what it means to teach sociology during the digital revolution. I vividly recall the litany of conflicting emotions that accompanied the seemingly explosive emergence and accessibility of generative artificial intelligence technologies in higher education: instructors collectively panicked, rejoiced, and feared the impact of these rapidly evolving and expanding technologies in their classrooms. In essence, I want to critically engage with AI in the classroom to improve my students’ prospects in a world they will need to navigate. One of the most persistent and pervasive questions for me has been: how will the proliferation of generative artificial intelligence impact my pedagogy and curriculum? Most importantly, how will it impact my students?
While there remains pedagogical and curricular stratification in the deployment of AI knowledge and literacies, I am also concerned about the implications and outcomes of generative artificial intelligence for broader society within structural and intersecting oppressions of classism, racism, and ableism. While I may be entertained by asking ChatGPT to generate a dinner recipe from the three expired ingredients in my pantry or to plan a weekend excursion in Philadelphia for under $200, I am worried about my own professional disposability. I worry even more about the disposability of economically dispossessed folks, people of color, and people with disabilities within a racialized neoliberal landscape where AI radically replaces and displaces humans across so many industries, from healthcare, media, and entertainment to, now, education, while the mostly white and male titans of tech reap obscene profits. Will I be teaching my students to be complicit in their own demise? What about my own? AI raises existential and structural questions in a society where work is tethered to our being and sense of self, and where there are worries that AI will limit the need for a university education. In light of these concerns, I’ve arrived at a paradoxical point in my teaching. While I understand the utility of having my students develop AI literacy, I don’t feel comfortable, willing, or prepared enough to bring AI into my classroom in a material way. This is likely informed by my most pressing concerns: where and how does AI fit into racial capitalism? And if I bring it into the classroom, what exactly am I complicit in contributing to?
In Teaching with AI: A Practical Guide to a New Era of Human Learning, authors José Antonio Bowen and C. Edward Watson guide instructors on how to think, teach, and learn with artificial intelligence. There’s a genealogy to the evolution of AI, though its ubiquity is a relatively recent phenomenon accelerated by the COVID-19 pandemic. Instead of pushing AI out of the classroom, Bowen and Watson suggest that educators embrace the change that is already here by reimagining creativity, assessments, and assignments. But while attending national disciplinary conferences and teaching at a public college where the student body is predominantly first-generation college students and/or students of color, I’ve noticed a discourse that concerns me. At private colleges and universities, instructors are inviting AI use into their classrooms and integrating platforms like Claude, ChatGPT, and Gemini directly into their curriculum. In these classrooms, instructors teach students how to craft effective prompts to generate desired results. They also teach critical AI literacy, including how to discern what is useful in an output versus what ought to be questioned, changed, or modified, and why. Instead of policing AI use, these instructors teach students how to use, leverage, and develop AI skills that would prove advantageous in a future labor market where an estimated 40-60% of jobs will require AI literacy and a projected 300 million jobs could be displaced by AI.
This stands in stark contrast to the role of AI in some public colleges, especially ones that are predominantly working class and of color. In these classrooms, AI use is often met with skepticism at best and, sometimes, outright derision. Fears that students will outsource critical thinking and plagiarize their assignments inform the policing, restriction, and regulation of AI use in many classrooms. Instead of moving forward, many classrooms have implemented seemingly archaic curricular changes, a return to the classrooms of yesteryear: in-class examinations and essays, even oral exams, to assess students’ acquisition of topical content. This remodeling, which is often punitive and fear-based, suggests that institutional culture harbors a not-so-quiet belief that working-class students and students of color are duplicitous and not to be trusted with emergent technologies, and it also leaves these students underprepared to navigate an increasingly digitally oriented world. As an economically dispossessed, mixed-race, first-generation college student myself, I refuse to let internalized classism and racism guide how and when I introduce AI into my classroom. Some public colleges offer hope, such as City College CUNY, where courses are being offered on the intersections of AI and society. These courses teach students how to “critically examine the cultural, economic, and social impacts of AI, considering topics like ethics, power dynamics, automation, and the future of work.” And while I’m comfortable discussing and debating AI with my students, I’m not ready to bring AI to life in my classroom.

Alyssa Lyons earned her PhD in sociology at the CUNY Graduate Center. Her teaching and research interests revolve around education, racism, sexuality, social class, and gender. She has published in Contexts and the Journal for Ethnic and Migration Studies. She currently blogs for Everyday Sociology.