
AI tools, real classrooms

This week's coverage from AI x Education cuts through the hype to reveal a startling reality: the most advanced AI tutors don't just answer questions; they read your brainwaves. Johnny Chang's compilation of recent research and classroom experiments offers a rare, grounded look at how artificial intelligence is moving from a novelty to a structural component of learning, driven by data that suggests engagement can be measured in real-time. For busy educators and policymakers, the urgency isn't about whether to adopt these tools, but how to integrate them without eroding the very human skills they are meant to support.

The Shift from Passive to Neuroadaptive Learning

Chang highlights a breakthrough from the MIT Media Lab that fundamentally changes the definition of a "tutor": a system called "NeuroChat." "Unlike regular AI chatbots, NeuroChat uses a wearable EEG headband to monitor a learner's brain activity and understand their real-time engagement levels," Chang writes. This is not merely a faster search engine; it is a dynamic feedback loop in which the AI adjusts the complexity of its explanations based on whether the student is actually cognitively processing the material.


The implications here are profound. The research found that "NeuroChat significantly increased how engaged learners were, both in terms of their brain activity and their own reports." However, Chang notes a critical caveat that prevents this from being a silver bullet: "Increased engagement with NeuroChat, however, did not lead to immediate improvements in test scores, suggesting that more research is needed to understand how to best translate engagement into learning gains." This distinction is vital. It suggests that while we are getting better at holding students' attention, we have not yet cracked the code on ensuring that attention translates to mastery. Critics might argue that collecting biometric data in schools raises significant privacy concerns, yet the ability to catch students drifting off before they even realize it is a powerful pedagogical tool.

Scaling Mentorship and the Human Element

Beyond neuro-adaptation, the coverage explores how AI is being used to scale the role of the mentor. Chang points to a study from Carnegie Mellon University where an AI system helped educators spot early issues in student project proposals. The findings were encouraging: "GPT-4o's ratings of project proposal quality showed promising agreement with educator ratings, suggesting the potential for LLMs to scale the automatic grading of these proposals." This capability could free up human teachers to focus on high-level guidance rather than initial triage.

"Students generally had a positive perception of the AI mentor system. A large majority of the participants (88.8%) indicated they would want to use the system in the future to help them choose skills and technologies to learn more about."

Chang also weaves in a crucial perspective from Roberto Huie, a master's student interviewed for the piece, who articulates the delicate balance required in this new landscape. Huie warns that "AI dependency is something worth thinking about. Over-reliance on AI can erode fundamental critical thinking and decision-making skills, which are skills that are crucial for us as humans." This concern is not theoretical; it is a practical risk that institutions must manage as they integrate these tools. The argument here is that AI should act as a scaffold, not a crutch. As the piece puts it, "people still need to communicate effectively and think for themselves, so ensuring that balance is really important."

The Global Race and the Cheating Dilemma

The piece also contextualizes these classroom experiments within a broader geopolitical and ethical framework. While the United States and other Western nations debate policies, Chang notes that "Schools in Beijing will be required to provide at least eight hours of AI instruction each academic year starting this fall." This mandatory integration signals a national strategy to build a workforce fluent in AI, contrasting sharply with the fragmented approach seen elsewhere. The goal is to "establish a 'teacher-student-machine' learning model," a phrase that encapsulates the future of education as a tripartite partnership.

However, the integration of AI is not without friction. Chang addresses the elephant in the room: academic integrity. Citing a Wall Street Journal report, the coverage notes that "Nearly 40% of middle and high school students who use AI do so without their teachers' permission to complete assignments." The response from the industry has been somewhat dismissive, with one representative stating, "OpenAI did not invent cheating. People who want to cheat will find a way." This framing, while technically true, overlooks the systemic shift in how we assess learning. If the tools are ubiquitous, the assessment models must evolve. Chang suggests that some teachers are finding success by "requiring handwritten first drafts in class with technology prohibited," a low-tech solution to a high-tech problem that forces authentic engagement.

The Environmental and Ethical Cost

Finally, Chang ensures the narrative does not ignore the hidden costs of this digital transformation. Huie raises a point often missed in the rush to adopt new tools: "I read an article a couple of months ago that mentioned a ChatGPT prompt can consume around ten times more energy than a typical Google search." As AI adoption grows, the environmental footprint of education will expand significantly. This is a necessary counterpoint to the efficiency arguments often made for AI. The piece also touches on the risk of misinformation, noting that "deepfake technology and language models can produce biased or misleading content if trained on bad data." This serves as a reminder that the quality of AI output is only as good as the data it consumes, and in an educational setting, the stakes for accuracy are incredibly high.

Bottom Line

Johnny Chang's coverage succeeds by grounding the futuristic promise of AI in the messy reality of current classrooms, balancing the excitement of neuroadaptive learning with sober warnings about dependency and environmental cost. The strongest part of the argument is the evidence that AI can measurably increase engagement, even if the link to test scores remains unproven. The biggest vulnerability, however, lies in the lack of standardized governance; as institutions move at different paces, the risk of a fragmented educational landscape grows. Readers should watch for how the proposed "teacher-student-machine" model evolves, particularly as the tension between automated efficiency and human critical thinking comes to a head.

Sources

AI tools, real classrooms

by Johnny Chang · AI x Education

Ready to supercharge your classroom with some AI magic? This week, we're diving headfirst into the latest experiments and insights educators are exploring right now. From turning boring syllabi into engaging podcasts to AI tutors reading students' minds (literally!), we've got stories, research, and implementations that'll spark your imagination and have your students wondering, "What's next?"

Here is an overview of today’s newsletter:

Exploring AI Use Cases in the Classroom

Interview with a Master’s Student – How AI helps students catch up on learning

Key Findings from a Study on AI Mentors for Student Projects

The Latest on AI and Academic Cheating

⭐ AI x Education Webinar Series

Roll Up Your Sleeves: The Practical AI Toolkit for Busy Educators

Whether you're new to the space or already experimenting with tools, our next webinar will equip you with practical, people-centered strategies to implement AI with confidence and care. Join internationally renowned educator and innovation leader Rebecca Bultsma (Chief Innovation Officer, Amplify & Elevate Innovation; co-host, AmpED to 11 Podcast) as she shares actionable ways to transform instruction, streamline administrative tasks, and personalize learning—no technical background required.

You’ll walk away with:

Step-by-step processes for using AI in your daily workflow

Real-world examples of AI-enhanced lesson planning, assessment, and feedback

Ethical frameworks for responsible AI adoption in schools

Perfect for K–12 educators, school leaders, and anyone eager to make AI approachable and impactful in the classroom.

Practical AI Usage and Policies

Use Cases of AI in the Classroom

This LinkedIn tutorial by Darren Coxen highlights a simple yet powerful strategy for educators to create study guides and practice quizzes. By leveraging Google’s NotebookLM, educators can convert syllabus units into podcasts, transform transcripts into study notes with Gamma, and generate quizzes with Quizizz.

Check out EdWeek’s article, “How District Leaders Use AI to Save Time, Help Teachers, and More”, where they share how 22 district leaders are leveraging AI to streamline administrative tasks, enhance brainstorming, and more.

This blog post shares 8 universities and schools transforming education with the help of Google AI. It explores specific use cases in K-12 and higher education, showcasing how campuses integrate AI in the classroom to assist students with data analysis, engage in debates, and more.

A group at Stanford University Human-Centered Artificial Intelligence (HAI) shares their findings in the article, “AI Helps Math Teachers Build Better ‘Scaffolds,’” where they explored how AI could support middle school math ...