
Is AI making students worse readers?

Johnny Chang opens with a provocation that cuts through the usual tech-optimism: the very tools designed to make learning easier may be atrophying the cognitive muscles required to learn deeply. While much of the discourse focuses on cheating or efficiency, Chang zeroes in on a quieter, more insidious erosion: the loss of the ability to sustain attention on long-form text. This is not just about students skipping pages; it is about a fundamental shift in how the mind processes complex ideas in an age of instant summarization.

The Erosion of Deep Reading

Chang anchors his concern in a troubling trend observed at the highest levels of academia. Citing a recent piece in The Atlantic, he notes that "students today exhibit lower levels of reading compared to a decade ago," a decline so severe that even at elite institutions, "teachers often resort to assigning short excerpts and passages instead." The author argues that this is not merely a failure of discipline but a structural response to a changing information landscape. The pressure to prepare for standardized tests, which favor brevity, has already narrowed the curriculum; now, generative AI threatens to accelerate this by offering a "shortcut" that bypasses the struggle of reading entirely.


The piece highlights how technology is actively reshaping consumption habits. Chang points to Google's NotebookLM, which can "convert readings into digestible podcasts, making it easier for students to simply listen to the dialogue rather than read the text directly." The implication is stark: when the friction of reading is removed, the depth of engagement often disappears. Chang writes, "It's one thing to read the Spark Notes and another to read the actual book," suggesting that the act of struggling through a difficult text is not a bug in the learning system but a feature essential for critical development.

It's one thing to read the Spark Notes and another to read the actual book.

Critics might argue that audio summaries are a vital accessibility tool for neurodiverse learners or those with language barriers, and that Chang risks conflating accommodation with avoidance. The author's concern, however, is with what becomes the default mode of learning: when the path of least resistance is the standard, the cognitive endurance required for deep analysis may never be built.

The Double-Edged Sword of Innovation

The coverage then pivots to initiatives attempting to harness AI for good, revealing a complex landscape in which the same technology both poses a threat and offers a solution. Chang details how the Iowa Department of Education invested $3 million in "EPS Reading Assistant," a program that uses voice recognition to provide "personalized reading tutoring through a digital avatar named Amira." Similarly, the Institute of Education Sciences launched the U-GAIN centers to explore how AI can "align with established reading research to boost learning outcomes for elementary students."

Here, Chang's framing is notably balanced. He acknowledges that tools like "Rebind," an AI-assisted digital publisher, allow students to "engage directly with the text through interactive features such as annotation tools and a chat window." This transforms reading from a passive act into a dynamic conversation. Yet, the author remains cautious, quoting Marc Watkins on the danger that these technologies "can make students prioritize speed and task completion over skill development." The risk is that efficiency becomes the metric of success, overshadowing the messy, time-consuming process of true comprehension.

Chang also draws on Megan McArdle's reflection on her early career, noting that the "painstaking process of listening to and transcribing intense debates... offered her unique insights into the business world, a depth of understanding she argues would be difficult to attain through quick transcript reviews." This anecdote serves as a powerful counterweight to the allure of AI summaries. It suggests that the friction of the old way was where the learning happened.

The painstaking process of listening to and transcribing intense debates offered unique insights that would be difficult to attain through quick transcript reviews.

A counterargument worth considering is that the "painstaking process" McArdle describes may have been an inefficient use of time that AI can now liberate for higher-order thinking. If AI handles the transcription, couldn't students spend that time analyzing the content rather than laboring over its capture? Chang hints at this tension but ultimately sides with the idea that the struggle itself is pedagogically valuable.

Human-AI Collaboration in the Classroom

The final section of Chang's piece offers a roadmap for a hybrid future, showcasing educators who are integrating AI without surrendering agency. He highlights Christiane Reves at Arizona State University, who uses AI to create a "safe environment" for language practice, and Ethan Mollick at Wharton, who encourages students to use AI tools "provided they disclosed its use" and assigns projects that "challenge students beyond AI's current capabilities."

Chang emphasizes that the goal is not to ban these tools but to reframe their role. He points to the Stanford "Tutor CoPilot" study, which found that a human-AI system "significantly improved student learning outcomes, especially for students with less effective tutors." The author interprets this as a democratization of expertise, where AI helps scale the guidance of human mentors rather than replacing them. As Chang puts it, the system "enhances the quality of tutoring, particularly benefiting less-experienced tutors who may lack the depth of practice and knowledge of their more experienced counterparts."

The piece concludes with a call for a socio-ethical approach to AI literacy, citing a curriculum that integrates "technical understanding, informed AI interaction, ethical evaluation of AI, and societal implications." This moves the conversation beyond mere utility to the broader question of what kind of thinkers we want to cultivate. Chang argues that without this critical lens, we risk creating a generation that is technically proficient but intellectually shallow.

Bottom Line

Chang's most compelling argument is that the efficiency of AI summaries comes at the cost of the cognitive stamina required for deep, critical thought. While the piece effectively showcases innovative tools like Tutor CoPilot and Rebind, its greatest strength lies in its refusal to view technology as a neutral solution. The biggest vulnerability in the argument is the assumption that the "struggle" of reading is always productive, without fully addressing how AI might actually remove barriers for struggling readers. The reader should watch for how institutions navigate this tension: will they use AI to deepen engagement, or simply to accelerate the path to the answer?

The friction of reading is not a bug in the learning system, but a feature essential for critical development.

Sources

Is AI making students worse readers?

by Johnny Chang · AI x Education

In a recent article by The Atlantic titled "Elite College Students Who Can't Read Books," Rose Horowitch describes how students today exhibit lower levels of reading compared to a decade ago. Even students at highly selective, elite colleges now struggle to read entire books. In fact, teachers often resort to assigning short excerpts and passages instead. Part of the reason students find it difficult to read is due to shortened attention spans and the distractions of social media. Another contributing factor is the pressure to prepare students for standardized tests, which often focus on short excerpts rather than longer, comprehensive reading.

With the rise of AI, students may have even more reason to avoid reading, as AI tools can quickly summarize content and transform it into alternative formats. It's one thing to read the Spark Notes and another to read the actual book. For example, Google's NotebookLM Audio Overview feature can convert readings into digestible podcasts, making it easier for students to simply listen to the dialogue rather than read the text directly. In this edition, we will explore how we can ensure that students continue to develop strong reading and critical thinking skills in the age of AI.

Here is an overview of today’s newsletter:

AI x Education Webinar with Sarah Newman

New AI-Powered Tools Enhancing Early Literacy Instruction

Concerns Over AI's Impact on Critical Reading and Thinking Skills

AI Use Cases in Campuses Across the World

Introduction of a human-AI system designed to provide real-time expert guidance to tutors

Join us on 10/25 for our next webinar in the AI x Education Webinar Series, where we will feature Sarah Newman, Director of Art & Education at metaLAB at Harvard. As generative AI becomes increasingly accessible, it undeniably influences how students approach assignments and learning. Should schools and universities ban AI from the classroom, embrace it as a powerful learning tool, or find a balance between the two? With students already leveraging AI creatively, this webinar will explore how institutions can balance innovation and academic integrity.

Sarah Newman will share best practices for creating AI course policies, drawing from her experiences workshopping ideas with students at Harvard and engaging with educators across the US and internationally. Attendees will gain concrete strategies and policy templates and a deeper understanding of how to craft AI guidelines that foster ethical use while enhancing learning, curiosity, and criticality.

Whether you're an educator, ...