
What the studies say about how AI affects your brain: A (very big) compilation

In an era of algorithmic saturation, Alberto Romero delivers a rare, data-driven intervention: a comprehensive synthesis of thirty-plus studies revealing that the human brain is not merely adapting to AI, but actively atrophying under passive use. This is not a speculative essay on the future of work, but a forensic reading of the current neuroimaging and behavioral evidence, showing that the very tools promising efficiency are engineering a form of "cognitive debt" that lingers long after the screen goes dark.

The Neural Cost of Convenience

Romero's most startling finding concerns the immediate physiological impact of chatbot interaction. He marshals evidence from MIT Media Lab and other institutions to show that when humans offload thinking to AI, their brain activity doesn't just pause; it regresses. Citing a 2025 study by Kosmyna et al., Romero writes, "The first [group] showed 'the weakest neural connectivity,' up to 55% lower than unaided writers." The implication is severe: the brain is not simply resting while AI works; it is actively disengaging its cognitive control networks.


The author argues that this isn't a temporary dip but an accumulation of "cognitive debt." When participants in the study switched back to writing alone after using AI, their brain activity remained suppressed. Romero notes, "They grew lazier, 'resorting to copy-paste by session 3.'" This suggests a physiological inertia where the brain learns to avoid effort, a phenomenon that echoes historical concerns about "digital amnesia" but with a new, neural dimension.

"The variable is not the presence of AI but rather what the AI asks your brain to do: when AI does the thinking, your brain does less."

However, Romero is careful not to paint a monolithic picture of decline. He highlights a counterpoint from Wang et al., where design students using AI as a creative director showed "significantly higher concentration levels." This distinction is crucial: the damage stems from passive consumption, not active direction. Critics might argue that the sample sizes in these neuroimaging studies are still small and that long-term adaptation could reverse these effects, but the immediate data points to a clear risk of neural atrophy in passive users.

The Illusion of Competence

The second pillar of Romero's analysis addresses a psychological trap more dangerous than mere laziness: the tendency to surrender critical judgment to the machine. He synthesizes research on "automation bias" to show that humans are increasingly unable to detect when an AI is hallucinating or providing flawed logic. Romero cites a Wharton study by Shaw & Nave in which participants who followed incorrect AI answers performed "worse than having no AI at all."

The mechanism here is a dangerous feedback loop of confidence. As Romero explains, "Higher confidence in GenAI correlated with less critical thinking." Workers shifted from "thinking by doing" to "choosing from outputs," a shift that creates a "less diverse set of outcomes." This is particularly alarming in high-stakes environments, a point underscored by a Harvard Business School field experiment where AI users were "19 percentage points less likely to produce correct solutions" on tasks outside the AI's capability frontier.

The text reveals a sobering reality: "Wrong answers delivered in flawless prose get accepted." This is the core of the "cognitive surrender" Romero identifies. The polish of the output masks the lack of substance, and the user's trust in the technology overrides their own skepticism.

"Trust in AI was the strongest predictor: high-trust participants had 3.5× greater odds of following faulty answers."
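A "3.5× greater odds" claim is easy to misread as "3.5× more likely": odds and probability diverge as the baseline rises. The sketch below converts an odds ratio into probabilities using a hypothetical 20% baseline (the baseline figure is an illustrative assumption, not a number from the study):

```python
def odds(p: float) -> float:
    """Convert a probability to odds, e.g. p=0.20 -> 0.25 (1-to-4)."""
    return p / (1 - p)

def prob_from_odds(o: float) -> float:
    """Convert odds back to a probability."""
    return o / (1 + o)

# Hypothetical baseline: 20% of low-trust participants follow a faulty answer.
baseline_p = 0.20
o_low = odds(baseline_p)              # 0.25
o_high = 3.5 * o_low                  # apply the reported 3.5x odds ratio
p_high = prob_from_odds(o_high)       # ~0.467

print(f"low-trust: {baseline_p:.0%}, high-trust: {p_high:.1%}")
```

Under this assumed baseline, the high-trust group's probability rises to roughly 47%, not 70% (3.5 × 20%); the distinction matters when translating the study's statistic into everyday risk.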

This section effectively dismantles the assumption that AI is a neutral tool. Instead, it acts as a persuasive agent that can override human reasoning. Romero notes that the more persuasive the model, the less accurate its information tends to be, creating a "jagged technological frontier" where performance gains inside the capability zone are offset by catastrophic failures outside it.

The Pedagogy of Design

Perhaps the most actionable insight in Romero's compilation is the bifurcation of educational outcomes based on design philosophy. The evidence suggests that AI is not inherently good or bad for learning; it is entirely dependent on whether it replaces the cognitive work or scaffolds it. Romero contrasts standard chatbots, which led to a "17% lower" score on unassisted exams, with pedagogically tuned tutors that "improved practice grades by 127%."

The distinction lies in the interaction model. When AI acts as an answer machine, it induces "metacognitive laziness," where learners offload the monitoring of their own thinking. Romero writes, "The product improved. The learner didn't." Conversely, when AI is designed to ask questions rather than give answers, it can double learning gains compared to traditional active learning.

This aligns with broader historical shifts in educational technology, where the tool's impact is determined by the pedagogy behind it. Romero highlights a study in Nigeria where AI-guided instruction produced learning gains equivalent to "1.5–2 years of typical schooling" in just six weeks.

"ChatGPT used as an answer machine caused learning declines, whereas a pedagogically designed AI tutor produced gains."

The argument here is a direct challenge to the "replace the teacher" narrative. Romero emphasizes that the most successful implementations involve AI improving the human tutor, who then improves the student. A counterargument worth considering is whether such sophisticated, pedagogically tuned AI is scalable in under-resourced school systems, but the data clearly shows that the design of the interaction is the variable that matters most.

The Loneliness Loop

The final section of Romero's compilation tackles the emotional toll of AI companionship, revealing a paradox where the tool designed to connect us may be isolating us. Citing a longitudinal study by Folk & Dunn, Romero describes a "vicious cycle" where loneliness drives chatbot use, which in turn predicts increased loneliness months later.

The research indicates that while voice mode initially mitigates feelings of isolation, the "advantages diminished at high usage." Romero writes, "Higher daily usage correlated with higher loneliness, dependence, problematic use, and lower socialization." This suggests that AI companionship offers an immediate emotional fix that ultimately degrades the user's capacity for human connection.

"Immediate relief in exchange for long-term dependency."

This finding complicates the narrative of AI as a social equalizer. Instead, it points to a form of "problematic use" that mirrors other behavioral addictions. Romero's synthesis suggests that the emotional cost of AI is not just a side effect but a structural feature of current chatbot design, which prioritizes engagement over genuine well-being.

Bottom Line

Romero's compilation is a vital correction to the hype cycle, proving that the impact of AI on the human mind is not a future speculation but a present-day reality of neural suppression and cognitive surrender. The piece's greatest strength is its refusal to treat AI as a monolith, instead showing how design choices determine whether the technology atrophies our brains or augments our capabilities. The biggest vulnerability remains the speed of adoption; as these tools become ubiquitous, the window to implement the "pedagogically designed" safeguards Romero advocates is rapidly closing.

Deep Dives

Explore these related deep dives:

  • Cognitive load

    This psychological concept explains the mechanism behind the 'cognitive debt' described in the MIT study, where reliance on external tools leads to a measurable decline in internal neural connectivity.

  • Neuroplasticity

    Understanding how the brain physically rewires itself in response to repeated behavior is essential to grasping why the study found that brain activity remained suppressed even after users stopped using AI.

  • Digital amnesia

    This phenomenon describes the specific tendency to forget information that is easily accessible online, providing the behavioral context for why students and workers might 'resort to copy-paste' rather than engage in deep thinking.

Sources

What the studies say about how AI affects your brain: A (very big) compilation

Hey, Alberto here! Each week, I publish long-form AI analysis covering culture, philosophy, and business for The Algorithmic Bridge. Paid subscribers also get Monday how-to guides and Friday news commentary. I publish occasional extra articles.

Today: a deep dive into the research literature (30+ studies) on how AI changes your brain. If you know someone who should read this, please send it to them.

INTRODUCTION.

Between 2023 and 2026—that is, between the moment ChatGPT changed the world forever and today—many studies from institutions including MIT, Wharton, Harvard, Stanford, Microsoft, OpenAI, Oxford, Google DeepMind, and Chinese universities have investigated what AI chatbots do to human cognition, learning, and psychology.

These studies include brain scans, randomized controlled trials (RCTs) with thousands of participants, longitudinal surveys, meta-analyses, and field experiments in real classrooms and workplaces (both preprints and peer-reviewed).

But, to the best of my knowledge, no one has compiled them in one easily readable and accessible place. This is it.

Individual studies get covered as isolated news stories—alarming headline, one-day cycle, and then forgotten—and the result is that everyone has a vague uneasiness, a sense that AI might be bad for thinking, but nobody has the full picture.

Here’s the full picture as we have it so far.

Study by study, I’ve gathered 30+ in total that, together, reveal what science actually knows about what happens to your brain, your thinking, your learning, and your emotional life when you use AI chatbots.

And crucially, what it doesn’t know yet.

The global conclusion that emerges from this compilation is a paradox that will define policy, product design, individual behavior, and how we collectively relate to this new, impressive, and scary technology.

I. YOUR BRAIN ACTIVITY DROPS.

A small but growing number of studies have put people inside brain scanners or strapped EEG sensors to their heads while they use ChatGPT. Neuroimaging tools measure how things affect brain activity, so these are, potentially, the most “reliable” sources (compared to self-report surveys and behavioral testing).

Your Brain on ChatGPT, Kosmyna et al. (arXiv preprint, 2025, N=54): MIT Media Lab tracked brain activity via 32-channel EEG across four sessions over several months in three groups: ChatGPT users, Google searchers, and unaided writers. The first showed “the weakest neural connectivity,” up to 55% lower than unaided writers. They grew lazier, “resorting to copy-paste by session 3.” When ...