Strategy for assessments with AI

Johnny Chang's latest analysis cuts through the noise of AI panic to reveal a pragmatic pivot: the University of Sydney isn't banning artificial intelligence but formally integrating it into its assessment framework. This is not a story about technology winning, but about institutions finally admitting that the old rules no longer apply. Chang argues that the binary choice between total prohibition and total chaos is a false dilemma, proposing instead a two-lane system that separates secure testing from open-ended, AI-assisted work.

The Two-Lane Strategy

Chang frames the University of Sydney's new policy as a necessary evolution rather than a concession. "The University of Sydney takes a unique approach in allowing for the usage of AI while balancing academic integrity," he writes. The core of this strategy is a structural division: "lane one" for secure, in-person assessments where AI is restricted, and "lane two" for open assessments where AI use is encouraged. This distinction is crucial because it acknowledges that the workforce of the future will require fluency in these tools, not just the ability to avoid them.

The author suggests that this approach ensures students can "demonstrate their understanding of the content without the help of AI while also learning how to work with AI effectively in real-world contexts." This is a compelling reframing of academic integrity. Instead of viewing AI as an intruder, the policy treats it as a tool to be mastered, much like a calculator or a search engine. Critics might note that implementing this split requires significant logistical overhead for educators, who must now design two distinct types of assessments for every course. However, Chang's argument holds that the long-term benefit of producing AI-literate graduates outweighs the short-term friction of policy adjustment.

The role of AI in education and in the workforce will be inevitable, so rather than completely banning it, the University of Sydney takes a unique approach in allowing for the usage of AI while balancing academic integrity.

Beyond the Hype: Human Connection and Equity

Moving beyond the policy mechanics, Chang weaves in a broader critique of how AI is currently deployed in schools. He highlights a critical tension: the rush to automate often comes at the expense of human connection. Citing Alex Kotran, Chang notes that as young children spend hours interacting with chatbots, there is an urgent need to "prioritize emotional well-being and building meaningful human relationships." This is a vital counter-narrative to the techno-optimism that often dominates EdTech discussions.

The coverage also touches on the technical limitations of current systems, specifically regarding children. Chang points out that "Automatic Speech Recognition systems face challenges" with young users, whose "unpredictable speech, accents, and dialects throw off algorithms designed for adults." This oversight has real consequences, potentially denying younger students access to personalized literacy support or speech therapy. It serves as a reminder that AI is not a monolith; it is a collection of tools that often fail the most vulnerable users unless specifically tuned for them.

Furthermore, Chang brings attention to the environmental cost of this technological boom. Quoting a Harvard Kennedy School essay, he notes that as the hype subsides, "it leaves behind irreversible consequences," specifically citing the "massive amounts of carbon emitted during the AI boom" that "cannot simply be put back into the ground." This adds a layer of necessary gravity to the discussion, forcing readers to consider the ecological footprint of the very tools they are being asked to adopt.

The Summit: Personalization vs. Autonomy

Chang's reporting on Stanford's Accelerate EdTech Impact Summit provides a window into the future of the classroom. The panelists, including Adeel Khan of MagicSchool and Dr. Richard Charles of Denver Public Schools, argued that the true potential of AI lies in personalization. Khan explained how AI tools can "augment teachers' abilities to provide timely and effective feedback," allowing for a dynamic learning process where students receive tailored materials based on their performance.

However, the most striking insight from the summit was the insistence on maintaining the human element. Sara Allen posed a provocative question: "You can start to imagine a world where all these agents talk to each other and no people are actually talking to each other." In response, Dr. Charles emphasized that AI tools should incorporate a "personal human touch." Khan echoed this, recalling his assurance to teachers that "The foundation of our school is the relationships that we keep with our students. And that will not change in a world of AI."

This focus on relationships is the strongest part of Chang's coverage. It moves the conversation from "how do we grade with AI?" to "how do we teach with AI?" The argument is that AI should handle the repetitive tasks—lesson planning, grading, data analysis—freeing up teachers to do what humans do best: mentor, inspire, and connect. As Dr. Charles noted, the goal is to equip students with the skills to "learn and adapt continuously" rather than teaching specific technologies that may become obsolete.

The foundation of our school is the relationships that we keep with our students. And that will not change in a world of AI.

Policy and Preparation

Finally, Chang surveys the policy landscape, noting that states like Ohio are releasing comprehensive strategies to prepare K-12 systems for AI integration. The U.S. Department of Education has also issued guidance to "avoid the discriminatory use of artificial intelligence," ensuring that these tools align with federal civil rights laws. Chang highlights that professional development is not optional; it is the linchpin of success. Without ongoing training, teachers cannot "maximize the potential of AI in the classroom."

The author also points out the risks of unregulated exposure. Khan contrasts the controlled, teacher-led introduction of AI in schools with the dangers of platforms like Snapchat, arguing that students must learn about AI's "limitations, opportunities, and risks under the guidance of teachers." This structured approach is presented as the only viable path forward, sparing students from having to navigate these complex tools on their own.

Bottom Line

Johnny Chang's piece succeeds by refusing to treat AI as a villain or a savior, instead presenting it as an inevitable variable that requires a new educational equation. The strongest argument is the shift from prohibition to integration, grounded in the reality that the workforce demands AI literacy. The biggest vulnerability remains the execution: without massive investment in teacher training and robust data privacy safeguards, even the best policies risk widening the equity gap. The reader should watch for how the University of Sydney's "two-lane" model plays out in practice over the coming year, as it may well become the blueprint for global education reform.

Sources

Strategy for assessments with AI

by Johnny Chang · AI x Education

The University of Sydney recently became one of the first institutions to allow the use of AI for non-secure assessments. Starting next year, students will be permitted to use AI tools like ChatGPT for homework, assignments, and certain types of assessments.

This new approach divides assessments into two categories: secure, in-person assessments ("lane one"), where AI use is restricted, and open assessments ("lane two"), where AI use is encouraged. The structure ensures students can demonstrate their understanding of the content without the help of AI while also learning how to work with AI effectively in real-world contexts. The role of AI in education and in the workforce will be inevitable, so rather than completely banning it, the University of Sydney takes a unique approach in allowing for the usage of AI while balancing academic integrity.

As this policy rolls out, it will be fascinating to observe its impact over the coming year and the lessons it may provide for educational institutions around the world. In this newsletter, we'll cover various perspectives on AI policies and highlight some of the latest developments and findings from educational researchers.

Here is an overview of today’s newsletter:

Diverse perspectives on AI from students, educators, and industry professionals

Latest AI policy updates and developments in the United States

Key takeaways from Stanford’s Accelerate EdTech Impact Summit

Emerging AI trends in instructional design

Practical AI usage and policies

Perspectives on AI

What Students Are Saying About Teachers Using A.I. to Grade (New York Times)

Hear from students and educators as they weigh in on the following question: Is it unethical for teachers to use artificial intelligence to grade papers if they have forbidden their students from using it for their assignments? Feel free to continue the conversation in the comments below!

Q&A: Putting AI In its Place in an Era of Lost Human Connection at School (The 74 Million)

Alex Kotran, the founder of The AI Education Project (aiEDU), discusses his perspective on the importance of AI readiness in helping students build durable skills such as collaboration, communication, and critical thinking. He emphasizes the need to prioritize emotional well-being and building meaningful human relationships, particularly as young children today spend hours each day interacting with AI chatbots.

Student Short Essay Contest: How is AI Changing What it Means to Learn? (AI Consensus)

AI Consensus is publishing a student-written article answering the question “How is AI changing ...