
🎓 AI + students: What's changing

Johnny Chang's latest dispatch cuts through the noise of AI hype to reveal a stark reality: the tools are outpacing the pedagogy, and the gap is widening faster than institutions can adapt. While the industry celebrates free access to premium models for students, the underlying data suggests a dangerous disconnect between how AI is being used and how it should be taught. This isn't just about better homework help; it's about a fundamental shift in what constitutes learning in the digital age.

The Race for Student Attention

Chang opens by highlighting an aggressive move by OpenAI, which has made its premium ChatGPT Plus subscription free for college students in the U.S. and Canada through May. This timing, coinciding with final exams, is no accident. "This move intensifies competition with Anthropic, which recently launched Claude for Education," Chang notes, framing the current landscape as a battlefield where startups are "rapidly reshaping the education landscape." The strategic implication here is clear: the barrier to entry for advanced AI is vanishing, but the barrier to understanding it is rising.


The author points out that while these tools offer advanced reasoning and research capabilities, the market is bifurcating. Anthropic's approach focuses on a "learning mode" designed to guide reasoning rather than simply providing answers. This distinction is crucial. As Chang writes, "Educational assessment needs to shift focus from products (essays, projects) to processes, as AI can easily generate the former without indicating true student knowledge." This argument lands hard because it exposes a systemic failure in traditional grading. If the output can be faked, the assessment must measure the journey, not just the destination.


Critics might argue that shifting to process-based assessment is administratively burdensome for overworked faculty. However, Chang's framing suggests that maintaining the status quo is an even greater risk, as it renders degrees increasingly hollow in the face of generative AI.

Who Is Actually Using These Tools?

The piece pivots to hard data from an Anthropic report analyzing over a million student interactions. The findings are surprising and perhaps unsettling for educators. Chang reports that "STEM students, especially those studying Computer Science, are among the earliest adopters," noting that while they make up just 5.4% of U.S. degrees, they account for 36.8% of all conversations. This disparity highlights a massive adoption gap. Students in Business, Health, and the Humanities are lagging, suggesting that AI literacy is becoming a specialized skill rather than a universal competency.

Chang further explores the anxiety surrounding this adoption through a report by the Walton Family Foundation and Gallup. "Many Gen Z students use AI tools regularly, with roughly 47% reporting using AI tools on a weekly basis," he writes. Yet, this usage is paired with deep uncertainty. The author emphasizes that "Many Gen Z students also report wanting stronger guidance and policies regarding when and how to use AI." This is a critical insight: the students are not rejecting the technology; they are begging for a framework to use it responsibly. The absence of such frameworks leaves them navigating a minefield of academic integrity on their own.

The Cybernetic Teammate

One of the most compelling sections of Chang's coverage is his analysis of a Harvard Business School working paper titled "The Cybernetic Teammate." The study examines how AI functions within teams, finding that "both individuals and teams using AI were more likely to generate solutions ranked in the top 10% of all submissions." Chang interprets this to mean that AI can "bridge expertise gaps" and help students with diverse skill sets contribute effectively.

The author argues that this technology could foster more equitable collaboration. "The finding that AI can foster more balanced collaboration in teams by reducing dominance effects suggests that AI could help ensure more equitable contributions in group work," Chang writes. This is a powerful counter-narrative to the fear that AI will isolate learners. Instead, it suggests AI could act as a great equalizer in group dynamics, allowing quieter voices to be heard through better-structured inputs.

However, a counterargument worth considering is whether this "balance" is artificial. If the AI does the heavy lifting, does the human input still reflect genuine learning? Chang acknowledges this tension, noting that "even with high retention of AI-generated content, human input remained crucial in shaping the final solutions." The key, he suggests, is that the time saved allows for "deeper learning and exploration," provided the human element remains the driver, not just the passenger.

Bridging the Gap Between Theory and Practice

The final section of the piece turns to the ethical infrastructure required to support this rapid evolution. Chang interviews the team behind Learnest, a non-profit founded to address the "huge chasm between the research and academia side of AI and the practice and industry side." The founders argue that the current discourse is too abstract, filled with buzzwords like "ethics" and "transparency" that lack "specific, implementation-driven details."

Chang captures the urgency of their mission: "Very few people know what can go wrong, but we see there's too much excitement and enthusiasm in the space at the expense of people's understanding of this downside." The proposed solution is a bootcamp that combines "mind" (theory) with "hand" (implementation), teaching practitioners how to build safety guardrails and align models with educational values. This approach is vital. As Chang puts it, the goal is to have practitioners "truly see the value of incorporating ethics at each stage of AI Edtech lifecycle, rather than treating it as an added burden."

The piece also touches on the commercialization of these tools, citing SigIQ.ai's recent funding and Chegg's new "Solution Scout." While these tools promise to democratize access, Chang raises a critical question about the future business models: "will it truly democratize access, or introduce new barriers?" This is the looming shadow over the entire sector. If the best AI tutors become premium subscriptions, the digital divide could widen into an unbridgeable chasm.


Bottom Line

Chang's analysis succeeds in moving the conversation from "what can AI do?" to "what must we do now?" The strongest part of the argument is the data-driven revelation that AI adoption is highly uneven, creating a new form of educational inequality between STEM and non-STEM fields. The biggest vulnerability, however, lies in the speed of implementation; while the need for ethical frameworks is clear, the industry's "race-to-the-market" mentality threatens to outpace the necessary guardrails. Readers should watch closely to see if institutions can pivot from banning AI to integrating it, or if the gap between policy and practice will leave students to navigate the risks alone.

Sources

🎓 AI + students: What's changing

by Johnny Chang · AI x Education

OpenAI has just made its premium ChatGPT Plus subscription available for free to all college students in the U.S. and Canada through May, just in time for final exams! This gives students access to advanced features like the o3-mini reasoning model, image generation, voice interaction, and powerful research tools, all designed to help them tackle their studies more effectively.

This move intensifies competition with Anthropic, which recently launched Claude for Education, a tailored version of Claude aimed at higher education institutions. It features a “learning mode” designed to help students learn by guiding their reasoning rather than simply providing answers.

The race is on as AI startups are rapidly reshaping the education landscape, changing the way students learn and work. In this edition, we’ll take a behind-the-scenes look at these edtech startups and explore how students feel about using AI tools.

Here is an overview of today’s newsletter:

Key takeaways from edtech leaders in our recent AI x Education webinar

Insights on student perspectives and real-world use cases of AI

Exploring AI’s potential to enhance student collaboration and group work

How AI edtech platforms are making learning more accessible around the world

Practical AI usage and policies

⭐ Insights from our AI x Education Webinar Series.

At our recent AI x Education webinar, AI in the Classroom: Practical Insights from Leading Edtech Innovators, Bill Salak (CTO/COO of Brainly) and France Hoang (co-founder/CEO of Boodlebox) shared insights from their experiences building AI-powered education platforms that serve hundreds of thousands of students worldwide. Check out the webinar recording below!

TLDR

AI alone isn't enough - successful educational technology must address genuine friction points in today's learning landscape while complementing traditional education methods.

Educational assessment needs to shift focus from products (essays, projects) to processes, as AI can easily generate the former without indicating true student knowledge.

Future-ready education requires developing domain expertise, responsible AI usage skills, and strengthened human capabilities like collaboration and critical thinking.

⭐ Latest Reports on AI.

Anthropic Education Report: How University Students Use Claude

Anthropic recently reviewed over a million anonymized student interactions on their LLM, Claude.ai, a leading AI tool among students. The analysis revealed that STEM students, especially those studying Computer Science, are among the earliest adopters. Despite making up just 5.4% of U.S. degrees, Computer Science students accounted for 36.8% of all conversations. Meanwhile, students in Business, Health, and the Humanities are adopting AI tools at ...