Volume 139, February 2026

Risks, Safeguards, Black Boxes, and Virtue

Submitted By:
Dr. Zohreh Janinezhad, Hawken School, Gates Mills, OH

Teaching AI Literacy Across the Curriculum: A K-12 Handbook by Irina Lyublinskaya and Xiaoxue Du.
Thousand Oaks: Corwin Press, July 28, 2025

The integration of artificial intelligence into K-12 education raises significant ethical considerations that extend beyond technical implementation. As highlighted in Teaching AI Literacy Across the Curriculum, ethical AI use requires deliberate attention to issues of bias, privacy, transparency, equity, and student agency. AI systems trained on historical or incomplete data risk reproducing existing social inequalities, particularly in areas such as grading, placement, and personalized learning. Without critical oversight, these tools may disadvantage students from underrepresented or marginalized backgrounds and may cause moral injury for educators when institutional practices conflict with professional judgment. Student data privacy is another central concern: AI-driven platforms often collect sensitive behavioral and academic data, creating risks of misuse, surveillance, and loss of trust if safeguards are insufficient. The opacity of many AI systems further complicates ethical implementation, as “black box” decision-making can obscure how judgments affecting students’ educational trajectories are made. The authors emphasize that AI literacy is essential for addressing these challenges. By equipping teachers and students to critically evaluate AI systems, schools can promote responsible and equitable use of technology grounded in a virtue ethic that prioritizes fairness, care, and professional responsibility in an increasingly AI-driven educational landscape.

Categories
Leadership Practice
Teaching Practice
Technology