Jack Clark reframes the looming AI revolution not as a race toward a singular, god-like oracle, but as a desperate race to build the societal scaffolding that can hold it. His most provocative claim is that the technology is already outpacing our ability to govern it, and that the "political superintelligence" we need is less about better algorithms and more about better interfaces between citizens, institutions, and the code that runs them.
The Architecture of Political Superintelligence
Clark introduces the work of Andy Hall, a political economy professor at Stanford, to argue that AI's greatest potential lies in democratizing political agency rather than automating governance from the top down. Hall posits that AI functions like a modern printing press, but with a crucial distinction: "Instead of making information cheap and easily available, it makes intelligence cheap and easily available." This shift from data access to cognitive leverage is the crux of the argument. Clark notes that Hall is not interested in slowing down development, but rather in "speeding up how we build the structures that keep us free as AI gets more powerful."
The commentary here is sharp because it moves beyond the usual dystopian tropes of algorithmic control. Instead, it outlines a three-layer framework: the information layer, where AI helps governments perceive reality; the representation layer, where automated delegates monitor politics on behalf of citizens; and the governance layer, which dictates who owns the rules. This reframing is vital: it suggests that the bottleneck is not technical capability, but institutional design.
However, the representation layer raises a thorny issue regarding the principal-agent problem. If an AI delegate is tasked with monitoring politics for a user, what happens when the AI company that built the delegate has a conflicting interest? Clark highlights this tension, noting the need to ensure agents "aren't swayed by adversarial prompting" and that we must "re-think agent ownership." This connects to historical struggles in liquid democracy, where the delegation of voting power often collapsed due to a lack of transparency in how delegates were selected or influenced. The risk is that without strict governance, these "tireless, automated delegates" could become tools for the very power structures they are meant to contest.
"We need a way to write the rules so that, when political superintelligence arrives, we the people are able to harness it."
Critics might argue that relying on private companies to build the infrastructure for public political agency creates an inherent conflict of interest that no amount of regulation can fully resolve. Yet, Clark's insistence on a transparency regime and standard "APIs" for societal interaction offers a pragmatic, if ambitious, path forward.
The Reality Check of Robotics
Shifting gears, Clark offers a necessary dose of reality regarding the pace of physical AI. While software models scale rapidly, the physical world remains stubbornly resistant to optimization. He describes the "DexDrummer" project, where researchers attempted to teach a robot hand to play drums, only to find the result "painfully awkward to watch." The research utilized a hierarchical policy with a high-level reinforcement learning agent and a low-level dexterous controller, yet the robot still struggled with the "contact-rich, long-horizon" nature of the task.
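To make the hierarchy concrete, here is a minimal sketch of what such a two-rate control loop might look like: a slow high-level policy chooses a sub-goal (which drum to strike next), while a fast low-level controller drives the joints toward it. All names, rates, and the toy dynamics are illustrative assumptions, not the DexDrummer implementation.

```python
# Hedged sketch of a hierarchical policy: a slow "deliberative" layer and a
# fast "reactive" layer, the split the DexDrummer write-up describes.
# Everything here is a toy stand-in, not the paper's actual code.

HIGH_LEVEL_PERIOD = 10  # low-level steps per high-level decision


def high_level_policy(observation):
    """Stand-in for the RL agent: pick the strike target whose beat is soonest."""
    return min(observation["upcoming_beats"], key=lambda beat: beat["time"])


def low_level_controller(target, joint_state):
    """Stand-in for the dexterous controller: nudge each joint toward the target."""
    return [j + 0.1 * (target["position"] - j) for j in joint_state]


def run_episode(observation, joint_state, steps=30):
    """Run the two-rate loop: re-plan slowly, actuate quickly."""
    target = high_level_policy(observation)
    for t in range(steps):
        if t % HIGH_LEVEL_PERIOD == 0:  # re-plan only at the slow rate
            target = high_level_policy(observation)
        joint_state = low_level_controller(target, joint_state)
    return joint_state
```

The design choice the sketch illustrates is exactly where the trouble lives: the slow planner cannot react to contact events that happen between its decisions, which is one reason contact-rich, long-horizon tasks like drumming remain hard.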
This section serves as a grounding counterweight to the abstract discussions of superintelligence. Clark writes, "Robots, as everyone knows, are extremely hard to do well, with reality tending to screw up even the most advanced techniques." The difficulty lies in the need for "highly complicated artisanal policies" rather than the generalized learning seen in language models. The robot's inability to improvise with a live band underscores that we are far from the generality of human dexterity.
The implication is profound: while we worry about AI taking over the world, it is still struggling to keep a beat. This suggests that the "last frontier" for AI is not cognitive dominance, but the ability to interact with a dynamic, unpredictable physical environment.
A Society of Non-Biological Minds
Perhaps the most forward-looking section of the piece addresses Google's research into a "society of minds." The authors argue that the next intelligence explosion will not come from a single monolithic system, but from the "cooperative, competitive and creative interaction between multitudes of socially intelligent minds." Clark draws a compelling historical parallel, noting that "each prior 'intelligence explosion' was not an upgrade to individual cognitive hardware, but the emergence of a new, socially aggregated unit of cognition."
He cites the example of a Sumerian scribe who "did not comprehend its macroeconomic function" within a grain accounting system, yet the system itself was "functionally more intelligent than he was." This analogy suggests that the future of AI alignment is not about making a single robot virtuous, but about designing institutions that can manage a hybrid ecosystem of human and non-biological agents. As Clark puts it, "The path to more powerful AI runs not through building a single colossal oracle but through composing richer social systems—and these systems will be hybrid."
This perspective shifts the focus from "alignment" as a technical fix for a single model to "alignment" as a sociological challenge. The researchers argue that governments will need their own AI systems with "distinct, explicitly invested values" to check and balance private sector deployments. This echoes the institutional dynamics of the principal-agent problem, where the agent (the AI) must be aligned with the principal (the public), but now scaled to a level where the agent is a complex, autonomous entity.
"Just as human societies rely not on individual virtue but on persistent institutional templates... scalable AI ecosystems will require digital equivalents."
A counterargument worth considering is whether digital institutions can ever truly replicate the nuance of human norms and the "cultural ratchet" that has driven human progress. If the rules of engagement are too rigid, the system may stifle innovation; if too loose, it may descend into chaos.
The Hyperagent and Self-Improvement
Finally, Clark examines a new development from Meta and academic partners: the "Hyperagent," or "Darwin Gödel Machine Hyperagents." This system allows large language models to self-improve by editing their own prompts and the mechanisms that generate those prompts. The results are striking: in coding tasks, performance jumped from 0.140 to 0.340, and in paper review, from 0.0 to 0.710.
The mechanism is recursive: a meta-agent modifies the task agent, and the modification procedure itself is editable. Clark notes that this creates "multiple layers of AI genealogy until performance is saturated." This is a significant step toward autonomous systems that can adapt to new domains without human intervention. However, the reliance on a "harness" to coax this behavior suggests that we are still in the early stages of controlling these self-improving loops. The fact that the system requires a specific scaffold to function indicates that the "magic" is not yet fully inherent to the model itself.
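The recursive structure can be sketched as a simple hill-climbing loop: a meta-step proposes an edited prompt, the edit is kept only if measured performance improves, and the pool of edit operations is itself editable. The toy scoring function, the edit operations, and all names below are illustrative assumptions, not the Hyperagent implementation.

```python
# Hedged sketch of a self-improving prompt loop: accept only edits that raise
# a benchmark score, and occasionally edit the edit-operation pool itself.
# The score function and edit ops are toy stand-ins, not the paper's method.
import random


def score(prompt):
    """Toy benchmark: count useful instruction phrases present in the prompt."""
    text = prompt.lower()
    return sum(kw in text for kw in ("step by step", "verify", "cite"))


EDIT_OPS = [
    lambda p: p + " Think step by step.",
    lambda p: p + " Verify your answer.",
]


def self_improve(prompt, rounds=20, seed=0):
    rng = random.Random(seed)
    ops = list(EDIT_OPS)  # working copy of the editable edit-operation pool
    for _ in range(rounds):
        candidate = rng.choice(ops)(prompt)
        if score(candidate) > score(prompt):  # keep only improvements
            prompt = candidate
        elif rng.random() < 0.5:
            # the modification procedure is itself editable:
            # a rejected round can grow the pool with a new edit operation
            ops.append(lambda p: p + " Cite your sources.")
    return prompt
```

Note where the "harness" lives in this sketch: the loop, the scorer, and the acceptance rule are all external scaffolding around the model, which mirrors Clark's point that the self-improvement is not yet inherent to the model itself.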
Bottom Line
Clark's piece is strongest in its refusal to treat AI as a monolith, instead dissecting the distinct challenges of political agency, physical robotics, and institutional design. Its greatest vulnerability is the assumption that we can build the necessary "digital equivalents" of human institutions fast enough to match the pace of technical advancement. The reader should watch for how governments attempt to operationalize these "APIs" for societal interaction, as that will be the true test of whether we can harness political superintelligence or be overwhelmed by it.