Artificial intelligence in education
Based on Wikipedia: Artificial intelligence in education
In September 2025, The Atlantic published an op-ed from a high school senior that sent shockwaves through academic communities. The student argued that the normalization of AI cheating was eroding critical thinking, academic integrity, creativity, and the shared student experience. Around the same time, UNESCO released updated global guidance for generative AI in education, emphasizing ethical use, teacher training, and data protection. These two events—one expressing deep concern, the other offering cautious optimism—capture the tensions that have defined artificial intelligence in education for nearly sixty years.
The Promise and the Panic
Artificial intelligence in education (often abbreviated as AIEd) is not a new phenomenon. It is a subfield of educational technology that studies how to use AI—from generative chatbots to sophisticated learning systems—to create environments where students can thrive. Yet what has changed dramatically in recent years is the sheer scale of ambition. Considerations in this field now include data-driven decision-making, AI ethics, data privacy, and something called AI literacy: the ability to understand, evaluate, and critically engage with artificial intelligence systems.
The concerns, however, are equally urgent. Educators and researchers have warned that AI could facilitate cheating in ways that undermine academic integrity. Students might become over-reliant on generative tools, reducing their capacity for critical thinking. The perpetuation of misinformation and bias—trained into the algorithms from datasets that reflect societal inequities—poses real risks. And perhaps most troubling: lower-income or rural areas have more limited access to the computing hardware or paid software subscriptions needed for AIEd platform use, creating what amounts to a new digital divide.
Yet amid these valid worries, the push to integrate AI into classrooms continues at pace. The AI-in-education community has grown rapidly in the global north, driven by venture capital, big tech companies, and open educationalists. In the 2020s, companies that create AI services have targeted students and educational institutions as both consumers and enterprise partners, while pre-AI-boom educational companies have expanded their AI integration or launched new AI-powered services.
A Short History of Intelligent Machines in the Classroom
Efforts to integrate AI into educational contexts have tended to track broader advances in the history of artificial intelligence itself.
In the 1960s, educators and researchers began developing computer-based instruction systems. One notable example was PLATO, developed by the University of Illinois—a pioneering system that laid groundwork for what would come decades later. By the 1970s and 1980s, intelligent tutoring systems (ITS) were being adapted for classroom instruction. Systems such as SCHOLAR—an early ITS designed to offer interaction between a student and a simulated teacher—represented genuine innovation in how humans could be tutored by machines.
The International Artificial Intelligence in Education Society was founded in 1993, formalizing what had been scattered research into a coherent field. For years, progress continued steadily but without the dramatic breakthroughs that AI researchers dreamed of.
Then came the late 2010s and 2020s—and everything changed. Large language models (LLMs) and other generative AI technologies have become the dominant focus of AIEd conversations. Chatbots like ChatGPT entered education spaces with unprecedented speed, forcing institutions to respond. Some schools banned LLMs entirely; many bans were later lifted as educators realized that blanket prohibition was neither realistic nor pedagogically useful.
During this period, AI content detectors have been developed and deployed to identify, and in some cases penalize, unsanctioned AI use in educational contexts. Yet their accuracy remains limited, creating new uncertainties rather than clear answers.
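To see why limited accuracy creates real uncertainty, consider a short base-rate calculation. The figures below are hypothetical, chosen only to illustrate the arithmetic; real detector accuracy and real rates of AI use vary widely:

```python
# Hypothetical figures for illustration only; real detector accuracy varies.
sensitivity = 0.90      # P(detector flags essay | essay is AI-written)
false_positive = 0.05   # P(detector flags essay | essay is human-written)
prevalence = 0.20       # assumed share of submissions that are AI-written

# Bayes' rule: probability that a flagged essay really is AI-written
p_flag = sensitivity * prevalence + false_positive * (1 - prevalence)
p_ai_given_flag = sensitivity * prevalence / p_flag

print(f"P(AI-written | flagged) = {p_ai_given_flag:.2f}")
```

Under these assumed numbers, nearly one in five flagged essays would be honest work, which is why a probabilistic flag alone is a shaky basis for disciplinary action.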
Three Paradigms for the Future of Learning
AIEd applies theory from education studies, machine learning, and related fields. One influential framework suggests three paradigms for AI in education, arranged roughly from least to most learner-centered—and requiring correspondingly different levels of technical complexity from AI systems.
AI-Directed, Learner-as-recipient: In this first paradigm, AIEd systems present a pre-set curriculum based on statistical patterns and do not adjust to learners' feedback. The student receives information; the machine delivers content. This approach is efficient but fundamentally passive.
AI-Supported, Learner-as-collaborator: Here, systems respond to learners' feedback through natural language processing, allowing the AI to support knowledge construction. The student and AI work together. This model has proven especially useful for assessment and feedback, machine translation, proofreading and copy editing, and for virtual assistants.
AI-Empowered, Learner-as-leader: This third paradigm positions AI as a supplement to human intelligence, in which learners take agency while AI provides consistent and actionable feedback. It is the most ambitious, and the most controversial, vision for what artificial intelligence in education could become.
The Theoretical Landscape
Some scholars frame AI in education within the concept of the socio-technical imaginary—defined as collective visions and aspirations that shape societal transformations and governance through the interplay of technology and social norms. This framing positions AI in the history of "emerging technologies" that have and will transform education: computing, the internet, social media.
Emerging theoretical frameworks in AIEd draw on new materialism and post-humanism—specifically Donna Haraway's concept of sympoiesis (making-with). This perspective views learning as an entanglement of human and non-human actors—students, teachers, and AI algorithms—where knowledge is co-composed in contact zones between human context and algorithmic prediction.
The implications are profound. Learning becomes relational, distributed across both biological and computational substrates. The very nature of education shifts from the transmission of knowledge to a collaboration between minds and machines.
Emotional Intelligence in Machines
One particularly intriguing development is emotional AI in education: the study and development of systems that can detect learners' emotions and/or provide emotional support. Imagine a tutoring system that knows when a student is frustrated, stuck, or disengaged—and responds accordingly. This represents a frontier where technical capability meets psychological nuance.
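A hedged sketch of what the detection half of such a system might look like: a toy heuristic that infers a coarse learner state from interaction signals. The signal names, thresholds, and responses below are invented for illustration and are not taken from any real emotional-AI product:

```python
# Toy frustration heuristic; thresholds are illustrative assumptions.
def infer_state(wrong_attempts: int, idle_seconds: float, help_requests: int) -> str:
    """Map simple interaction signals to a coarse learner state."""
    if wrong_attempts >= 3 and help_requests == 0:
        return "frustrated"   # repeated failure without seeking help
    if idle_seconds > 120:
        return "disengaged"   # long inactivity suggests drift
    if help_requests >= 2:
        return "stuck"        # actively asking but not progressing
    return "on_track"

RESPONSES = {
    "frustrated": "Offer a hint and an easier warm-up problem.",
    "disengaged": "Prompt with a short, interactive question.",
    "stuck": "Walk through a worked example step by step.",
    "on_track": "Continue the current lesson.",
}
```

Real emotional-AI research uses far richer signals (facial expression, voice, physiological data), which is exactly where the psychological and privacy questions become acute.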
The Policy Landscape
In the United States, lawmakers have expressed bipartisan support for AI development in K-12 education, but specific implementations and best practices remain contentious. Since the early 2020s, higher-education institutions have been developing guidelines and policies to account for AI.
Governmental and non-governmental organizations have published reports advocating for specific AIEd approaches: UNESCO, Article 4 of the European Union's AI Act, and the U.S. Department of Education have all weighed in. UNESCO's updated global guidance for generative AI, mentioned above, stresses ethical use, teacher training, and data protection to ensure responsible integration of AI tools in learning environments.
Policy implementation often faces challenges related to ambiguity—as it is interpreted and enacted differently by various stakeholders. Research indicates that decentralized policies can lead to inconsistent enforcement and confusion among students regarding what constitutes acceptable use, with the burden of ethical navigation falling on individual teachers and students.
The Cracks in the System
Despite the optimism, evidence suggests real problems are emerging. Reporting has indicated that students' use of AI in higher education has been increasing since 2022 and is now relatively commonplace; its effects, however, are not uniformly positive.
Reliance on generative AI has been linked with reduced academic self-esteem and performance, and heightened learned helplessness: the psychological phenomenon where individuals believe they cannot change their situation through their own actions. Algorithm errors and hallucinations are common flaws in AI agents, making them less trustworthy and reliable for high-stakes learning decisions.
Perhaps most significantly, a major gap in current AI-in-education research is the limited focus on educators' needs and perspectives. A review of over a decade of studies found that most research prioritizes technological design over pedagogical integration—the systems are built without adequate understanding of how teachers will actually use them.
AI agents have been trained on biased datasets, and thus continue to perpetuate societal biases. Since LLMs were created to produce human-like text, algorithmic bias can be introduced and reproduced at scale. AI's data processing and monitoring reinforce neoliberal approaches to education rather than addressing inequalities—suggesting that in some contexts, the technology may deepen the very problems it promises to solve.
Data privacy and intellectual property are further ethical concerns of AIEd. Contemporary LLMs are trained on datasets that are often proprietary and may contain copyrighted or theoretically private materials—in some cases, personal emails. AI integration in classrooms has created new forms of invisible labor for educators, who must navigate ambiguous policies, redesign assessments to be AI-resilient, and adjudicate potential academic integrity violations.
The use of AI detection tools has also been criticized for creating an adversarial relationship between students and institutions, in which students may be falsely accused of misconduct on the basis of probabilistic software. Meanwhile, support for integrating AI technologies into K–12 computer science courses correlates with teachers' familiarity with AI and their efficacy in teaching it, yet the gap between policy aspiration and classroom reality remains wide.
A Field in Motion
What emerges from this history is a technology landscape perpetually in flux—shaped by commercial incentives, scholarly critique, institutional anxiety, and genuine hope for improved learning outcomes. The commercial incentives for AIEd innovation may be related to an AI bubble: speculative investment driving development faster than research can validate.
Yet advocates say efforts should be made towards increasing global accessibility and training educators to serve underprivileged areas. The question is no longer whether AI will matter in education—it is which forms of AI will reshape learning, and for whom.