Collaborative learning with AI

This piece cuts through the noise of AI hype to reveal a critical pivot: the shift from AI as a solitary cheat sheet to AI as a collaborative partner in the classroom. Johnny Chang doesn't just list features; he frames a structural tension where technology is outpacing the very institutions designed to govern it, forcing a reckoning with how we define learning itself.

The Shift from Solo to Shared

Chang opens by identifying a quiet revolution in how students interact with generative models. While the public debate often fixates on banning tools, Chang highlights that platforms like Perplexity are already moving toward "Spaces," which function as shared knowledge hubs. "These shared spaces enable students to co-create knowledge hubs while promoting transparency and communication with both teachers and classmates," Chang writes. This framing is crucial because it moves the conversation away from the binary of "cheating versus not cheating" toward a more nuanced discussion about workflow and transparency.

The article details how these tools allow students to import syllabi and notes to ask specific, context-aware questions. Chang illustrates this with examples like asking the AI to "summarize the main points from Week 5's lecture" or "compare the solutions to Questions 2 and 5 from Homework 1." This capability transforms the AI from a black-box answer generator into a dynamic study partner that respects the specific boundaries of a course. The argument here is effective because it grounds the technology in the mundane reality of student life—organizing notes and prepping for midterms—rather than abstract futurism.

However, this collaborative model introduces a new complexity for educators. If the AI is helping a group draft a project plan or divide tasks, where does the individual's contribution end and the machine's begin? Chang notes that these tools can "create a shared calendar for our group that includes deadlines and progress check-ins," but the piece stops short of exploring how instructors might audit these collaborative logs to ensure genuine engagement. A counterargument worth considering is that while these tools promote transparency, they could also create an illusion of productivity where the AI does the heavy lifting of organization, leaving students with a polished plan but no deep understanding of the project's mechanics.

AI tools should solve real classroom challenges—whether that's reducing administrative burden for teachers or helping students grasp difficult concepts—not just be a flashy addition.

The Pace of Institutional Adaptation

To ground these technological shifts in reality, Chang interviews Amy Jain, a senior at UC Berkeley, whose perspective offers a vital reality check on the speed of adoption. Jain describes the dissonance between the rapid evolution of AI and the sluggish pace of curriculum updates. "Curriculum updates take time, and while AI advancements are moving at lightning speed, it's difficult for academic systems to keep pace—even at one of the top computer science programs in the world," she observes. This is perhaps the most honest assessment in the piece: the gap between what is possible and what is taught is widening.

Jain distinguishes between the classroom, where change is slow, and the extracurricular sphere, where a "second gold rush" is underway. She notes that "entire clubs like AI Entrepreneurs at Berkeley have sprung up in the last couple of years," and applications to research labs have "skyrocketed." Chang uses this contrast to argue that the real transformation is happening outside the lecture hall, driven by student initiative rather than administrative decree. This is a compelling narrative choice; it suggests that the future of AI literacy will be self-taught and peer-driven, potentially leaving formal education playing catch-up.

Yet, Jain's advice to educators is a cautionary tale against superficial integration. She urges leaders to "focus on the purpose, not the tool," asking whether an AI implementation is a "painkiller or a multivitamin." This metaphor cuts through the marketing fluff that often surrounds ed-tech. The argument holds weight because it prioritizes pedagogical outcomes over novelty. Critics might note, however, that in a resource-constrained environment, distinguishing between a "painkiller" and a "multivitamin" requires a level of institutional agility and funding that many schools simply do not possess.

The Equity Paradox and the Risk of Illusion

The piece concludes with a look at the broader implications for equity and the risks of over-reliance. Jain champions the potential for AI to democratize access, stating, "It almost democratizes education," by providing high-quality tutoring to students in rural areas who lack access to elite resources. This vision of hyper-personalized learning is the most optimistic thread in the article, suggesting that AI could finally dismantle the socioeconomic barriers that have long plagued the education system.

However, Chang balances this optimism with sobering research from the University of Cologne and Rotterdam School of Management. The study found that while AI can act as a "personal tutor," it also "impair[s] learning when students rely on them excessively to solve practice exercises, especially through copy-pasting solutions." The research highlights a dangerous cognitive trap: "students tend to overestimate their learning progress when using LLMs." This is a critical finding that Chang integrates well, warning that without guardrails, students may mistake solution retrieval for genuine mastery.

The article also touches on a global disparity in AI literacy, citing a survey where Malaysian students scored significantly higher than peers in Egypt, Saudi Arabia, and India. This data point complicates the narrative of universal democratization, suggesting that the "digital divide" is evolving into an "AI literacy divide" that could exacerbate existing global inequalities. Chang's inclusion of this data prevents the piece from becoming a purely techno-utopian manifesto.

We're already seeing this with tools like Khanmigo here in the U.S. and Squirrel AI in China... It almost democratizes education.

Bottom Line

Chang's strongest contribution is reframing the AI debate from a moral panic about cheating to a structural challenge of integration and pacing. The piece effectively argues that the technology is moving faster than the curriculum, creating a gap that students are filling with their own ingenuity. The biggest vulnerability, however, lies in the assumption that transparency tools like "Spaces" will naturally lead to ethical use; without robust pedagogical redesign, the risk of students overestimating their competence remains high. The reader should watch for how institutions respond to this pressure: will they double down on detection, or will they finally update the industrial-era model of education to match the speed of innovation?

Sources

Collaborative learning with AI

by Johnny Chang · AI x Education

Many students today use AI tools like ChatGPT on their own, but several platforms are now introducing collaborative features designed for group projects. These shared spaces enable students to co-create knowledge hubs while promoting transparency and communication with both teachers and classmates. Tools like Boodlebox and Perplexity are leading this shift, and in today's edition, we'll explore how AI collaboration could reshape the educational landscape.

Here is an overview of today’s newsletter:

Exploration of Perplexity’s new feature “Spaces” for education

Insights from a Berkeley student’s perspective on AI in college

Comparison survey of AI literacy across Asian and African countries

The issue of AI detectors wrongly accusing students of cheating

Join us on 10/25 for our next webinar in the AI x Education Webinar Series, where we will feature Sarah Newman, Director of Art & Education at metaLAB at Harvard. As generative AI becomes increasingly accessible, it undeniably influences how students approach assignments and learning. Should schools and universities ban AI from the classroom, embrace it as a powerful learning tool, or find a balance between the two? With students already leveraging AI creatively, this webinar will explore how institutions can balance innovation and academic integrity.

Sarah Newman will share best practices for creating AI course policies, drawing from her experiences workshopping ideas with students at Harvard and engaging with educators across the US and internationally. Attendees will gain concrete strategies and policy templates and a deeper understanding of how to craft AI guidelines that foster ethical use while enhancing learning, curiosity, and criticality.

Whether you're an educator, administrator, or institutional leader, this session will equip you with the tools to thoughtfully and responsibly engage with AI. Register now for free!

Practical AI Usage and Policies

Perplexity recently released “Spaces”, AI-powered collaboration hubs that let students work with others and search through their own class and project files in addition to the internet. After setting up the Space, students can invite collaborators such as classmates or teachers, connect internal files, and customize the AI assistant by choosing their preferred AI model and setting specific instructions for how it should respond.

Ways to Use Perplexity Spaces

Create a Knowledge Hub for Your Class

Students can import course documents, notes, and syllabi in one place to draw information from.

Ask Questions from Your Class Material

You can ask specific questions from your materials and retrieve the answers in ...