In a landscape saturated with AI hype cycles and binary predictions of imminent singularity, Helen Toner's debut as interim Executive Director of the Center for Security and Emerging Technology (CSET) offers a rare, grounding counter-narrative. Jordan Schneider's interview with Toner cuts through the noise to reveal a strategic pivot: rather than chasing the next "AGI by 2027" headline, CSET is doubling down on the messy, uneven reality of current AI capabilities and the specific, data-driven mechanics of U.S.-China technological competition. For the busy professional trying to separate signal from static, this conversation provides a crucial map of where policy is actually being made, far beneath the surface of social media shouting matches.
The Institutional Advantage
Schneider frames the conversation around a central paradox: while public discourse on AI has devolved into performative outrage, the actual machinery of government remains surprisingly receptive to evidence. Toner, who has been with CSET since its 2019 founding, pushes back against the cynical view that facts no longer matter. She argues that the real work happens in the quiet corridors of the executive branch, where subject matter experts are desperate for rigorous analysis. "The U.S. government has millions of employees, and the subject matter experts doing the work are interested in details and evidence," Toner asserts. "There is a steady demand from them for the kind of support we provide, and they are very responsive to facts."
This distinction is vital. While the public sphere oscillates between panic and dismissal, the administration is actively seeking the kind of granular data CSET produces to navigate complex issues like outbound investment controls. Toner recalls their direct engagement with the White House: "We worked closely with the Biden administration when they were considering outbound investment controls — asking them, 'How will you implement these controls? Do you have the necessary information to do it effectively?'" This isn't about moral posturing; it's about operational reality. As Toner notes, "if you don't know what's going on, you're going to try things that backfire — and most people want to avoid that."
Critics might argue that this faith in the bureaucracy is naive given the political volatility of the moment, but Toner's point is that the institutional need for data transcends any single administration's rhetoric. The value of CSET lies in its ability to bridge the gap between high-level policy goals and the technical feasibility of execution.
Beyond the "Dark Arts"
Schneider raises a provocative concern about the changing nature of the think tank ecosystem. As the stakes have risen, so has the presence of bad-faith actors and "dark arts" specialists who prioritize influence over accuracy. He asks whether CSET's commitment to good-faith, fact-based research is enough to compete in an environment where money and manipulation are rampant. Toner rejects the binary choice between publishing a white paper and engaging in political warfare. "I don't think the only options are 'put a white paper on your website' or 'go full political dark arts,'" she explains. "There is a lot of space in between."
The organization is evolving its methods to match the times, exploring new formats like video and navigating the tension between institutional credibility and the rising power of individual personal brands. Toner acknowledges the cultural friction this creates: "Some of our people are eager to give that a go, while others — especially those from the intelligence community — are like, 'Oh God, shoot me before you make me go on Twitter.'" Yet, the core mission remains unchanged. The goal is to provide a reality check that prevents policy failures, a task that requires deep expertise rather than viral slogans. As Toner puts it, "When we talk about 'the Facts,' it's not about some ideas being more virtuous than others. But if you want to accomplish something and care about results, then you need to know what the world looks like."
The Jagged Frontier of AI Progress
Perhaps the most significant conceptual contribution in this piece is Toner's embrace of the "jagged frontier" theory, a concept popularized by Ethan Mollick. This framework challenges the prevailing binary in AI discourse: the view that AI is either a "nothing burger" or an imminent, all-powerful general intelligence. Toner argues that AI progress is likely to remain uneven, excelling at some tasks while failing spectacularly at others. "Right now, most people fall into one of two camps — either they think AI is all hype and a 'nothing burger' — or they're in the 'AGI by 2027' camp," she observes. "Both are non-jagged views of the future."
This perspective has profound implications for policy. If AI development is jagged, it implies a slower, more manageable trajectory that allows for human oversight and regulatory adaptation. "Jaggedness leads to slower AI progression, which gives us time to reflect, experiment, and adapt," Toner argues. This stands in stark contrast to the "fast takeoff" scenarios that often dominate the news cycle, where an automated AI researcher could theoretically compress a decade of progress into months, leaving policymakers with no time to react. While Toner admits this is not a certainty, she believes the possibility of persistent jaggedness is "underrated in the AI community."
Schneider and Toner also touch on how modern tools are leveling the playing field. With the advent of large language models, individual researchers can now perform data analysis that once required a large team. However, Toner insists that CSET's scale still offers unique advantages, particularly in complex data science problems like "entity resolution"—the messy task of identifying that "Google London" and "DeepMind" are the same entity across vast datasets. "As a larger team, we can do things an individual researcher can't," she says. "We can validate our results and test different models — like when to use an expensive, frontier model versus a lighter one that's faster and can handle high volumes."
Bottom Line
Schneider's interview with Toner succeeds in reframing the AI policy debate away from speculative doomsaying and toward the practical, evidence-based work required to govern emerging technologies. The strongest element of the piece is its insistence that the "jagged frontier" is not just a technical detail but a strategic opportunity for democratic governance. However, the argument's vulnerability lies in its reliance on the assumption that policymakers will continue to prioritize evidence over political expediency in an increasingly polarized environment. As CSET moves forward, the true test will be whether this data-driven approach can withstand the pressure of the very "dark arts" it seeks to outmaneuver.