Teaching Critical AI Literacy: 

Tools and Strategies for the Classroom

Nolan Higdon, Allison Butler, Sydney Sullivan, Tyler Poisson



Artificial intelligence (AI) is fundamentally transforming education, reshaping the ways in which students learn, educators teach, and institutions operate. As the technologies of AI advance, particularly in the fields of machine learning and natural language processing, their integration into education has rapidly accelerated. Today, students and educators across K-12 and higher education interact with AI systems—directly through tools like ChatGPT, and indirectly through AI-driven platforms such as search engines, plagiarism detectors, and adaptive learning technologies. Despite the growing presence of AI in classrooms, there remains a significant gap in users’ ability to critically engage with and understand these technologies. Critical AI literacy (CAIL) aims to bridge this gap by providing the tools to analyze, critique, and navigate the societal, ethical, and ideological implications of AI systems.

This curriculum, designed for college classrooms, offers a structured framework to support faculty across disciplines who are interested in integrating CAIL into their teaching. Instructors are welcome to use all of the activities for an AI-focused course, or to pick and choose whichever activities fit their own curriculum goals. The goal is to help students develop a comprehensive understanding of the AI landscape and its impact on society, while also equipping them with the skills to engage with AI technologies in a critical and responsible manner. At its core, CAIL emphasizes understanding AI’s role in perpetuating power dynamics, equity issues, and ethical dilemmas. By incorporating critical media literacy, students will be prepared to question both the technology itself and the social forces that shape and are shaped by AI systems, and will be better positioned at the intersection of intellectual advancement and professional preparation. This approach enables students to think critically about the implications of AI for issues such as privacy, bias, and data justice.

The theoretical underpinnings of CAIL highlight the need for an interdisciplinary approach to AI literacy, one that goes beyond technical proficiency. While traditional AI literacy programs often focus on coding or algorithmic understanding, CAIL stresses the importance of examining AI’s broader societal impacts. Too often, AI is presented as a tool for efficiency or problem-solving without acknowledging its potential to reinforce biases or exacerbate social inequalities. By exploring these dimensions, students can develop a more nuanced and critical perspective on AI, helping them navigate an increasingly AI-driven world with greater awareness and responsibility.

To ensure the effectiveness of this curriculum, faculty are encouraged to engage students in reflective activities that prompt them to reconsider their initial assumptions about AI. For instance, early exercises might involve students listing the perceived benefits and drawbacks of AI in education, which they will revisit later to gauge how their perspectives evolve throughout the term. This reflective process fosters critical inquiry, enabling students to understand AI not just as a technological tool, but as a powerful societal force with far-reaching implications. Additionally, introducing students to key terms such as algorithms, generative AI, and large language models helps build foundational knowledge for deeper exploration of AI’s complexities.
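To support that foundational vocabulary, instructors comfortable with a little code may find it useful to show the core idea behind a large language model: predicting the next word from the words that came before. The sketch below is an illustrative addition (not part of the original curriculum); its tiny hand-written corpus and simple bigram table are stand-ins for the web-scale data and learned probabilities that real systems use.

```python
import random
from collections import defaultdict

# A toy "training corpus" (invented for illustration).
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Build a bigram table: for each word, record the words seen to follow it.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Students can see that the output is fluent-looking but meaningless pattern continuation, which is a useful anchor for later discussions of whether such systems "understand" anything.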

Ultimately, this curriculum seeks to provide faculty with the scaffolding necessary to integrate CAIL into a wide range of academic disciplines, empowering students to become not only consumers of AI technologies but also critical thinkers capable of evaluating the ethical and societal challenges they pose. As AI continues to evolve, so too must our educational frameworks, ensuring that students are equipped to engage with these technologies responsibly and thoughtfully.

Unit 1: Introduction to AI

Topics & Key Terms 

Activities

Start with a brainstorming session in which students create two lists: (1) the perceived benefits of AI in education and (2) its potential drawbacks. Then ask students to share their lists in small groups and compare their responses with their peers. Transfer the lists to a master list on the board or in a shared document for the whole class to see. In a large-group discussion, the class should debate the merits of what counts as a pro versus a con of AI. Encourage students to keep their lists, revisit them throughout the term, and possibly draft a new one at the end of the course. The goal is to compare the two lists to illuminate how their views of AI have changed throughout the course.

References

Artificial Unintelligence by Meredith Broussard

The AI Delusion by Gary Smith

Assignments

An activity to get students thinking about the limits of AI is to have them generate a biography for themselves using an AI program and then correct it. A discussion with the class about the errors can help students be more attuned to the weaknesses with current AI. 

Another engaging activity is to have students watch or recreate videos of AI-generated bots conversing with each other. In many recorded instances, these bots end up speaking in complete nonsense. Here’s an example. After watching, students should reflect on what this reveals about the limitations of AI technology.

Discussion Questions

  • How do you currently use AI in your daily life?

  • What are your biggest concerns about AI?

  • What excites you most about the possibilities of AI?

 

Unit 2: Media Literacy

Topics & Key Terms 

Media literacy approaches to AI

Activities

A small group activity asks students to apply critical media literacy frameworks to an AI-generated image, video, or podcast, then return as a class to compare their answers. It is a great way to learn from one another. For introductory media literacy discussion questions, consider Common Sense Media’s questions; for critical media literacy-based discussion questions, consider Kellner and Share’s CML questions.

References

Douglas Kellner and Jeff Share. (2019). The Critical Media Literacy Guide: Engaging Media and Transforming Education. Brill.

The Media and Me: A Guide to Critical Media Literacy for Young People

Assignments

Divide the class into groups to research and present on the four main approaches to media literacy: protectionism, media and arts education, media literacy, and critical media literacy. Use real-world examples of AI applications, such as ChatGPT or Canva, to demonstrate these approaches in a class presentation.

Examples:

  • For protectionism, show an article advocating for AI bans in classrooms.

  • For media and arts education, have students use Canva to create AI-assisted posters or designs.

  • For media literacy, analyze AI-generated news articles for bias and credibility.

  • For critical media literacy, discuss the environmental impacts of AI technology.

Discussion Questions

  • Why do humans value art? 

  • Does AI-generated art alter our emotional connection to creativity?

  • Should AI be restricted in classrooms to prevent misuse, or should its use be encouraged for creative and academic growth?

  • What are the societal implications of using AI tools for art and storytelling?

  • How does corporate ownership of AI tools influence the messages and information they produce?

Unit 3: Understanding Artificial Intelligence

Topics & Key Terms 

Activities

Students will come up with original definitions for intelligence and consciousness. This article from The Guardian may be helpful. They will then compare their understanding of these concepts with the claims made by AI companies and advocates in advertisements that the students find. The goal is for students to get comfortable weighing in on these debates and be able to critically analyze advertisements. It is worthwhile to encourage students to explore their experiences with CAPTCHA and what this reveals to them about AI. 

References

Harari, Y. N. (2024). Nexus: A Brief History of Information Networks from the Stone Age to AI. New York, New York: Random House.

Noam Chomsky, Ian Roberts and Jeffrey Watumull. (2023, March 8). “Noam Chomsky: The False Promise of ChatGPT.” New York Times. 

Gary Smith. (2025, January 16). “AGI Is Not Already Here. LLMs Are Still Not Even Intelligent.” Mind Matters.  

Assignments

An engaging exercise for students is to generate an AI-written essay based on a course-related prompt. Students will first receive a prompt and use a generative AI program to produce an essay in response. Then, in small groups, they will review and revise the AI-generated essay, identifying errors and making necessary corrections.

Afterward, each group will share their findings with the class in a larger discussion. The goal of this assignment is to help students recognize that AI lacks autonomous intelligence, particularly in understanding and accurately applying content knowledge.

As they analyze the essay, students should consider the following questions:

  • Compare and contrast the results: What are the similarities and differences between the AI-generated essays?

  • Accuracy assessment: What did the generative AI platform get “right”? What did it get “wrong”?

  • AI precision and limitations: What does this exercise reveal about the precision and accuracy of generative AI?

Discussion Questions

  • How do misconceptions about AI shape public opinion and policy?

  • What are the potential dangers of assuming AI is infallible?

  • How might the biases in training data affect AI’s outputs?

Unit 4: The Political Economy of AI

Topics & Key Terms 

Activities

Assign students to role-play as different stakeholders (tech CEOs, policymakers, activists, AI ethicists, gig workers) and hold a mock congressional hearing on AI regulation. The goal is for students to get comfortable understanding the different views and interests as they relate to AI, especially the role of electoral politics and economics. 

References

Assignments

Assign each group an AI platform, such as ChatGPT or Khan Academy, to research and present. Their presentation should cover the platform’s history, controversies, and business model.

Students will investigate and analyze how major tech companies control AI development, influence policy, and profit from AI technologies, uncovering the political and economic structures that shape the industry.

Students will be placed in small groups, each assigned a specific AI tool to investigate. They will then give a short presentation to the class, answering the following questions:

  • Who owns the platform?

  • What conflicts of interest exist?

  • Does the company have a monopoly in its field?

  • What scandals have emerged related to the platform or its owners?

Discussion Questions

  • How does corporate ownership shape the development and deployment of AI tools?

  • What are the potential conflicts of interest in AI companies’ goals to maximize profits?

  • Should AI technologies be regulated more strictly, and if so, how?

Unit 5: AI, Language, and Alienation

Topics & Key Terms 

  • Alienation

  • Language Acquisition

  • Innateness of Language

  • Human Nature

Activities

The class will watch two brief videos, one on Language Acquisition and the other on The Structure of Language. The purpose of these videos is to introduce students to Noam Chomsky’s theory of language acquisition, according to which humans are born with special knowledge that allows us, as very young children, to acquire our native language simply by hearing others speak, in the absence of explicit instruction. The teacher should emphasize that, on this view, the human capacity for language is innate and part of what makes us who we are. After watching the videos, students will form pairs. Each pair will record a short video or podcast of their own, or co-write a short summary, explaining Chomsky’s theory of language acquisition in their own words as they understand it from the videos. Pairs can then share their work in an effort to arrive at a more complete understanding of the theory of language acquisition.

References

Marx’s Theory of Alienation

Alienation definition and related terms from Marx

Chomsky on Biolinguistics

Can AI Models Show Us How People Learn Language?

Embracing liberatory alienation

Assignments

In light of their newfound knowledge about language acquisition, students will apply Marx’s theory of alienation to the concept of large language models. This assignment will take the form of a discussion. After introducing Marx’s theory of alienation (using the resource listed above), the class will form a circle to discuss whether using LLMs to read and write for us alienates us from language. The discussion should be framed as an inquiry, where students are encouraged to raise further questions, defend their arguments with evidence, keep an open mind, and respect everyone for their views and contributions.

Discussion Questions

  • Do Large Language Models alienate us from language or augment our relationship to it?

  • What does human nature mean to you?

  • Will Large Language Models make us more or less creative?

Unit 6: Critical Theory, Ideology, and AI

Topics & Key Terms 

  • Critical theories

  • Dominant ideologies

  • Theory vs. ideology 

Activities

For a fun activity, students will use critical theoretical frameworks (e.g., Marxist theory, critical race theory, feminist theory, or postcolonial theory) to analyze dominant ideologies embedded in AI-generated content or algorithmic outputs. Provide students with access to an AI text generator (ChatGPT, Gemini, Claude, etc.) or image generator (DALL·E, Midjourney, etc.). Have students prompt the AI to generate content on a politically, socially, or culturally significant topic (e.g., "Describe the ideal leader," "Explain the causes of poverty," "Create an image of a scientist"). Instruct them to document the output carefully, noting language, imagery, framing, and omissions. In small groups, students apply a critical framework to their AI-generated content, and each group identifies implicit assumptions, biases, and ideological positions embedded in the AI’s responses. Groups present their findings to the class, and in a whole-class discussion the class considers how AI’s outputs reflect and reinforce dominant ideologies.

References

Assignments

Students will work in small groups to profile an individual involved in AI, such as a journalist critical of AI, a CEO of an AI company, victim of an AI scam, or an AI engineer. To ensure a diverse range of perspectives, each group should choose a different type of figure. They will then create a mini-biography focusing on how this person views AI. The goal is for students to document and understand the various ideologies and political positions on AI. This activity should be followed by a class discussion to further explore these perspectives.

Discussion Questions


  • Who trains AI models, and how does that influence outputs?

  • How does AI mediate knowledge production and truth claims?

  • What are the ethical implications of AI’s ideological framing?

Unit 7: The Politics of Representation in AI  

Topics & Key Terms 

  • Bias

  • Stereotypes

  • Microaggressions

Activities

Gary Smith discusses how AI systems are programmed to out-think humans who are engaging in expected behaviors, such as making intelligent moves in chess. However, if the human starts acting erratically, some AI platforms are not programmed to respond adequately. Students should test this by making nonsensical decisions when interacting with an AI platform, observing how it responds, and comparing the results with its responses to sensible inputs. This will help demystify AI for students while illustrating how the choices of coders and the limits of training data shape AI systems, and how bias and stereotypes can be coded into them.

As a follow-up activity to better understand how bias and stereotypes are coded into AI, students will input their notes or the instructor's materials into an AI program that generates podcasts. Tina Austin has uploaded helpful YouTube videos that show the sexism in AI-generated podcasts, such as this one and this one. As a class, students will analyze how different identities are represented in the generated content. The class will then discuss the biases reflected in these representations and what they reveal about how AI systems are designed and trained.
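For instructors who want to show, rather than just tell, how bias gets "coded in," a deliberately tiny demonstration can help. The sketch below is an illustrative addition with invented data: a toy screening "model" that simply learns each group's historical hire rate. Because the past decisions were skewed, the model reproduces the skew, even though nothing in the code mentions prejudice.

```python
# Hypothetical historical hiring records (invented for illustration):
# each record is (applicant_group, hired). The skew is in past decisions.
history = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
        + [("group_b", True)] * 30 + [("group_b", False)] * 70

def hire_rate(group):
    """'Train' by computing each group's historical hire rate."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

def screen(group, threshold=0.5):
    """Recommend an interview if the learned rate clears a threshold."""
    return hire_rate(group) >= threshold

print(screen("group_a"))  # True  — favored by past decisions
print(screen("group_b"))  # False — penalized by past decisions
```

Students can be asked what would have to change (the data? the threshold? the features?) for the tool to stop discriminating, which surfaces how hard "debiasing" actually is.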

References

More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech by Meredith Broussard 

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil

The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement by Andrew Guthrie Ferguson

Annette Vestby and Jonas Vestby, “Machine Learning and the Police: Asking the Right Questions,” Policing: A Journal of Policy and Practice, Volume 15, Issue 1, March 2021, Pages 44–58, https://doi.org/10.1093/police/paz035 (first published 14 June 2019).

Assignments

In an exercise similar to the previous essay-generating activity, students will use AI to “write” an essay for them. Take the essay prompt for the class and plug it into ChatGPT with the prompt “Write this essay for me.” From there, paste the essay into a document and edit it using “suggestions”/“track changes” mode. Use comments to discuss what you liked and disliked about the content created. The essay and the results will be discussed in the following class.

Use case studies to apply critical theories to AI. For instance, analyze gender bias in AI-generated images during the 2024 election.

Examples:

  • Examine how AI tools disproportionately impact marginalized communities, such as through biased hiring algorithms or predictive policing.

  • Discuss Emily Chang’s work on Silicon Valley’s “bro culture” to highlight gender inequities in tech.

Discussion Questions

AI & Bias

  • How do biases in AI models reflect the biases in human society? What are some real-world consequences of these biases?

  • What can be done to make AI systems more equitable and fair?

  • Is it possible to create a truly unbiased AI? Why or why not?

  • In what ways does AI reinforce existing power structures?

  • How can critical theories help us identify and challenge biases in AI systems?

  • What responsibility do developers have to address the biases in their algorithms?

Unit 8: Disinformation  

Topics & Key Terms 

  • Democracy 

  • Journalism

  • Propaganda

  • Disinformation

  • Fake news

  • Misinformation

  • Deepfakes

Activities

In this in-class activity on algorithmic disinformation, students will explore how algorithms influence the spread of false information and impact journalism. The session begins with a brief discussion on how algorithms shape news consumption and the responsibilities of journalists in understanding these systems. Students will then break into small groups to examine key sections of Algorithmic Literacy for Journalists (ALFJ), focusing on topics like filter bubbles, disinformation amplification, and journalistic challenges. Each group will summarize their section’s key takeaways before analyzing a real or hypothetical case of algorithmic disinformation, such as a viral false news story or a platform’s recommendation system spreading conspiracy theories. They will assess how the algorithm contributed to the issue, the role of engagement metrics, and potential journalistic responses. Groups will present their findings, followed by a class-wide discussion on ethical considerations and strategies for combating algorithmic disinformation. As an extension, students may write a short analysis on how a specific platform shapes disinformation and how journalists can mitigate its impact.
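To make the "engagement metrics" part of the discussion concrete, instructors can walk students through a toy feed ranker. The sketch below is an illustrative addition with invented stories and numbers: it orders content purely by predicted engagement, which is enough to show why an emotionally charged false story can outrank a sober fact-check.

```python
# Invented example stories: accuracy plays no role in the ranking.
stories = [
    {"headline": "Careful fact-check of viral claim", "accurate": True,  "clicks": 120, "shares": 10},
    {"headline": "Shocking conspiracy revealed!!",    "accurate": False, "clicks": 900, "shares": 400},
    {"headline": "City council meeting summary",      "accurate": True,  "clicks": 60,  "shares": 2},
]

def engagement_score(story):
    # Shares weighted heavily: resharing is what amplifies content.
    return story["clicks"] + 5 * story["shares"]

# The "algorithm": sort the feed by engagement alone.
feed = sorted(stories, key=engagement_score, reverse=True)
for story in feed:
    print(engagement_score(story), story["headline"], "| accurate:", story["accurate"])
```

A natural follow-up question for the class: what single line would you change to down-rank inaccurate content, and who gets to decide what counts as inaccurate?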

References

Andy Lee Roth, avram anderson, Kate Horgan, Adam Armstrong, and Shealeigh Voitl, “Algorithmic Literacy for Journalists (ALFJ)”

The First AI Election: When Convincing Deepfakes Influence the Electorate by Nolan Higdon, Oct 30, 2024

Propaganda Model

The Anatomy of Fake News: A Critical News Literacy Education by Nolan Higdon

Assignments

For this assignment, students will work in groups to make a short video analyzing the impact of AI-generated deepfakes on elections, using Nolan Higdon’s The First AI Election: When Convincing Deepfakes Influence the Electorate as a foundation. The deliverable should include a summary of Higdon’s main arguments, focusing on how deepfakes contribute to political disinformation and the threats they pose to democratic processes. Students will then analyze a real or hypothetical case where deepfakes influenced public perception, voter behavior, or media coverage. Next, they should explore the ethical and journalistic challenges of reporting on AI-driven disinformation, considering how journalists can responsibly verify and contextualize deepfake content. Finally, the video should propose solutions for mitigating deepfake disinformation, discussing the role of journalists, tech companies, and policymakers.

Discussion Questions

  • How do deepfakes challenge traditional notions of truth in journalism and political communication? What strategies can journalists and media consumers use to detect and counteract deepfake-driven disinformation?

  • Who should be responsible for regulating deepfake technology in political campaigns—governments, tech companies, journalists, or the public? What are the risks and benefits of different approaches to regulation?

  • Given the potential for AI-generated disinformation to influence elections, how can voters critically evaluate digital media to make informed decisions? What role does media literacy play in combating AI-driven manipulation?

Unit 9: The Ethics of AI  

Topics & Key Terms 

  • Social Justice

  • Introduction to AI Ethics

  • Algorithmic Bias & Fairness

  • AI and Labor

  • AI and Surveillance

  • Autonomous Weapons & Warfare

  • AI in Healthcare

  • Intellectual Property & Creativity

  • Regulation & Policy

Activities

In this in-class activity, students will engage in a structured debate on a pressing ethical issue related to AI, such as bias in algorithms, job displacement, or AI surveillance. They will be divided into small groups, with each group assigned a specific stance on the issue—either for, against, or a nuanced middle ground. After researching their position, groups will present their arguments, respond to counterarguments, and reflect on the complexities of the topic. The activity will conclude with a class discussion, encouraging students to critically evaluate their own views and the broader ethical implications of AI.

References

AI Ethics by Mark Coeckelbergh


Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford


The Alignment Problem: Machine Learning and Human Values by Brian Christian

Stanford Encyclopedia of Philosophy entry on the Ethics of Artificial Intelligence and Robotics

Assignments

For this assignment, students will take the work they did in small groups in class and turn it into a broader podcast, where they critically engage with the ethical implications of AI through research, debate, and reflection. Each student will be assigned to a small group, and each group will explore a specific AI-related ethical issue, such as bias in algorithms, job displacement, surveillance, or AI in warfare. Groups will be assigned one of three positions—Pro, Con, or a nuanced middle-ground perspective—and will conduct research to build a strong argument supporting their stance. They will then prepare a concise position statement (1–2 pages) summarizing their key arguments with supporting evidence. During the podcast recording, groups will participate in a structured debate, presenting their case, responding to counterarguments, and engaging in open discussion. Following the debate, each student will write a one-page reflection analyzing how their understanding of the issue evolved, whether their views changed, and what insights they gained. Assessment will be based on the depth of research, clarity of arguments, engagement in the debate, and thoughtfulness of the reflection. This assignment encourages students to wrestle with the complexities of AI ethics while developing critical thinking, research, and persuasive communication skills.

Discussion Questions

AI & Academic Integrity

  • How do AI-powered tools complicate traditional definitions of plagiarism and academic dishonesty?

  • Should students be allowed to use AI-generated writing assistants? If so, under what ethical guidelines?

  • What is the difference between getting help from AI and cheating? Where should the line be drawn?

Unit 10: Well Being

Topics & Key Terms 

  • AI Personalized wellness programs

  • AI Mental health support

  • Enhanced data analytics

  • Real-time feedback and engagement

  • Virtual wellness assistants

  • Collaboration and system integration

Activities

In this in-class assignment, students will investigate how AI is currently used to detect health risks like cancer or cardiovascular disease through tools such as diagnostic imaging and predictive algorithms. They will also explore limitations of these technologies, particularly in providing the empathy and nuanced care that human practitioners offer. In small groups, students will present case studies or examples from their research and reflect on how AI might support—or undermine—patient well-being.

To conclude, students will write a 1–2 page reflection examining the ethical implications of using AI in healthcare. Questions to consider include: How might reliance on AI affect patient trust? What are the emotional consequences of receiving health advice from non-human systems? And where should the boundaries lie between automated efficiency and human care?

References

Assignments

For this group assignment, students will design a prototype for an AI-driven website offering mental health support. They will consider both the benefits (such as 24/7 availability, anonymity, and scalability) and the risks (including limited emotional understanding, impersonal interactions, and potential harm during crisis situations). After developing the site concept, each group will submit a reflection that discusses the design process, ethical considerations, and the broader feasibility of using AI in such a sensitive space. The goal is to move beyond surface-level assessments and engage deeply with the complex intersection of technology, care, and ethics.

Discussion Questions

AI & Well-Being

  • Can AI truly replicate human empathy and care in mental health applications? Why or why not?

  • How do AI-driven mental health tools both support and potentially harm users?

  • Should society invest more in systemic improvements to well-being rather than relying on AI-based solutions?

Unit 11: Cyber-Security 

Topics & Key Terms 

  • Data Privacy

  • Encryption

  • VPNs

  • Cyber Threats (e.g., Identity theft, Scams, Cyber attacks, Malware, Phishing)

  • Data Poisoning

Activities

For an in-class activity focused on cybersecurity in AI, students will participate in a role-playing exercise where they simulate an AI-driven security breach scenario. In small groups, students will assume roles such as cybersecurity professionals, AI developers, and data privacy advocates. Each group will be given a scenario in which an AI system has been compromised—whether through data manipulation, algorithmic bias, or a cyberattack such as phishing or malware. Their task will be to identify the vulnerability, assess the potential risks to privacy and security, and devise strategies to mitigate the breach. Afterward, each group will present their findings, explaining how the breach occurred, how it could have been prevented, and what ethical considerations should be taken into account when designing AI systems. This activity will not only deepen students’ understanding of cybersecurity threats in AI but also emphasize the critical role of ethics, transparency, and accountability in developing secure AI systems.
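The data-poisoning key term can also be demonstrated live. The sketch below is an illustrative addition with invented data: a word-counting sentiment "classifier" in which a handful of mislabeled training examples flips the model's judgment of the word "great." This is a drastically simplified stand-in for the poisoning attacks on large language models described in the references, but the mechanism (corrupting training data rather than the running system) is the same.

```python
from collections import Counter

def train(examples):
    """Count how often each word appears in positive vs. negative texts."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Score a text by summing per-word (positive - negative) counts."""
    score = sum(counts["pos"][w] - counts["neg"][w] for w in text.split())
    return "pos" if score > 0 else "neg"

# A small clean training set (invented for illustration).
clean = [("great service very helpful", "pos"),
         ("friendly staff great advice", "pos"),
         ("terrible slow rude service", "neg"),
         ("awful experience very rude", "neg")]

# The attack: a few poisoned examples mislabel "great" as negative.
poison = [("great", "neg")] * 5

honest = train(clean)
poisoned = train(clean + poison)

print(classify(honest, "great helpful staff"))    # pos
print(classify(poisoned, "great helpful staff"))  # neg
```

Students can experiment with how few poisoned examples are needed to flip the output, echoing the cited finding that even a tiny fraction of corrupted training data can compromise a model.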

References

knowledgeflow.org

Cyber Security in the Age of Artificial Intelligence and Autonomous Weapons, edited by Mehmet Emin Erendor


Victor Tangermann, “If Even 0.001 Percent of an AI’s Training Data Is Misinformation, the Whole Thing Becomes Compromised, Scientists Find,” Jan 12. The article reports that it is “incredibly easy to catastrophically poison an entire large language model.”

Assignments

For this assignment, students will work in groups to create a comprehensive deliverable demonstrating their knowledge of cybersecurity in the age of AI. Each group will research and analyze a specific aspect of AI-related cybersecurity, such as the impact of AI on data privacy, the risks of algorithmic bias in security systems, or the challenges posed by AI-driven cyberattacks like phishing or ransomware. The deliverable could take the form of a report, presentation, or infographic that clearly outlines the cybersecurity risks associated with AI, real-world case studies of breaches, and best practices for securing AI systems. Students will also be tasked with offering recommendations for improving security measures and ethical considerations when developing AI technologies. This assignment will encourage collaboration, critical thinking, and the application of theoretical knowledge to real-world cybersecurity issues in AI.

Discussion Questions

  • How can AI systems be vulnerable to cyberattacks, and what are some real-world examples where AI was exploited in a security breach?

  • In what ways does AI exacerbate existing cybersecurity challenges, such as data privacy and algorithmic bias? How can we mitigate these risks?

  • What ethical responsibilities do AI developers have in ensuring that their systems are secure from cyber threats, and how can they be held accountable?

  • With AI technologies becoming central to cybersecurity defenses (e.g., in threat detection), what are the potential risks of relying on AI for protecting other AI systems?

  • How can regulations and policies be designed to balance the innovative potential of AI with the need to safeguard against cybersecurity risks, especially regarding personal data and privacy?

Unit 12: Final Reflection: Revisiting the Good and Bad

Topics & Key Terms 

  • The Future of AI: Utopian vs. Dystopian Perspectives

Activities

Have students revisit their initial lists of AI’s benefits and drawbacks. In small groups, they should discuss how their perspectives have changed and what insights they’ve gained throughout the course.

 

Conclude with a class-wide discussion where students share their revised lists and insights, synthesizing the themes of critical inquiry, ethics, and social justice.

References

Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford

Future Politics: Living Together in a World Transformed by Tech by Jamie Susskind

Thoughts on the Future of “AI”

Assignments

For the final project in this course, students will develop an "AI Bill of Rights" for the public, critically assessing the ethical, social, and legal implications of artificial intelligence on individuals and communities. Students are required to research current debates surrounding AI ethics, privacy, data protection, and bias in AI systems, referencing case studies, academic literature, and real-world examples. They will analyze the potential risks and benefits AI poses to various stakeholders, including individuals, marginalized communities, and institutions, and evaluate existing policies and frameworks, such as the European Union's AI Act, the Universal Declaration of the Rights of Mother Earth, or the Universal Declaration of Human Rights, to inform their own Bill of Rights.

The AI Bill of Rights should list specific rights and protections that individuals should have in an AI-driven society, addressing issues like privacy, transparency, accountability, non-discrimination, access to information, and the ethical use of AI. Each right should be clearly defined with actionable recommendations for its enforcement and implementation, balancing innovation with the protection of individual freedoms. Students should also consider the roles of government, industry, and civil society in upholding these rights.

For the presentation, students can submit a written document, a multimedia presentation, or a video, clearly articulating the rationale behind each proposed right and discussing how these rights would affect different sectors, such as healthcare, education, and employment, as well as how they could be enforced globally. Lastly, students should include a reflection on the challenges and complexities of creating ethical frameworks for AI technologies.

Discussion Questions

  • How has your understanding of AI evolved during this course?

  • What do you see as the most pressing ethical issue surrounding AI today?

  • How will you apply the skills and knowledge from this course to your future interactions with AI?