Jack Clark's latest dispatch cuts through the noise of daily AI hype to reveal a startling truth: the most profound technological shifts are happening in the shadows, invisible to the casual observer but reshaping the world with terrifying speed. While the public sees only memes and chatbots, a parallel "AI economy" is accelerating so rapidly that by 2026, those who can harness it will feel like they are living in a different dimension than everyone else.
The Invisible Beast
Clark begins with a personal confession that serves as a powerful metaphor for the current state of the field. He describes how the demands of fatherhood have pulled him away from the constant grind of AI research, only to realize that the technology has advanced so quietly that it feels like a "silent siren." "I walk around the town in which I live and there aren't drones in the sky or self-driving cars or sidewalk robots or anything like that," he writes. "And yet you and I both know there are great changes afoot. Huge new beasts lumbering from some unknown future into our present, dragging with them change."
This framing is effective because it challenges the reader's intuition. We expect progress to be loud and visible, but the most significant breakthroughs are often abstract and internal. Clark illustrates this by recounting a recent experiment where he used an AI coding assistant to build a complex predator-prey simulation. The result was a sophisticated software program created in minutes, a task that would have taken him weeks a decade ago. "The experience was akin to being a child and playing with an adult - I'd sketch out something and hand it to the superintelligence and back would come a beautifully rendered version of what I'd imagined," Clark notes.
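The kind of predator-prey simulation Clark describes rests on the classic Lotka–Volterra dynamics. A minimal sketch, with illustrative parameter values (not taken from Clark's actual experiment), shows how little code the core model requires:

```python
# Minimal predator-prey simulation using the Lotka-Volterra equations,
# integrated with a simple Euler step. All parameter values are
# illustrative defaults, not drawn from Clark's experiment.

def simulate(prey=10.0, predators=5.0, steps=1000, dt=0.01,
             alpha=1.1, beta=0.4, delta=0.1, gamma=0.4):
    """Return the population trajectory as a list of (prey, predator) pairs."""
    history = [(prey, predators)]
    for _ in range(steps):
        # dP/dt = alpha*P - beta*P*Q   (prey reproduce, get eaten)
        # dQ/dt = delta*P*Q - gamma*Q  (predators feed, die off)
        d_prey = (alpha * prey - beta * prey * predators) * dt
        d_pred = (delta * prey * predators - gamma * predators) * dt
        prey += d_prey
        predators += d_pred
        history.append((prey, predators))
    return history

trajectory = simulate()
print(f"final prey={trajectory[-1][0]:.2f}, predators={trajectory[-1][1]:.2f}")
```

The point of Clark's anecdote is that a "beautifully rendered version" of this sketch, with graphics and interactivity, now takes minutes of conversation rather than weeks of work.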
However, the author warns that this power is not equally distributed. Accessing this capability requires a specific, rare combination of curiosity, time, and the ability to formulate the right questions. "Most of AI progress has this flavor: if you have a bit of intellectual curiosity and some time, you can very quickly shock yourself with how amazingly capable modern AI systems are. But you need to have that magic combination of time and curiosity," he argues. This creates a dangerous funnel where only a select few can ride the wave of innovation, while the majority remain passive consumers of "unremarkable synthetic slop content."
As Clark puts it: "The challenge isn't solely solved with interface designs... The challenge is deeper and it relates to how much curiosity an individual person has, how easily they can access powerful AI systems, and how much time they have available to experiment."
Clark predicts a stark divergence by the summer of 2026, where the "AI economy" will move with counter-intuitive speed relative to the rest of the digital world. He draws a parallel to the crypto economy but notes a crucial distinction: "The AI economy already touches a lot more of our 'regular' economic reality than the crypto economy." This suggests that the coming disruption will not be contained within a niche market but will permeate the entire fabric of society. Critics might argue that this timeline is overly optimistic, given the current bottlenecks in compute power and energy, but the underlying trajectory of capability growth remains undeniable.
The Cybersecurity Overhang
The article then pivots to hard evidence of these hidden capabilities, focusing on a new study regarding AI in cybersecurity. Clark highlights research from Stanford, Carnegie Mellon, and Gray Swan AI, which utilized a software scaffold called ARTEMIS to test AI agents against human professionals. The results were startling: AI systems, when properly managed, can match or exceed the performance of seasoned security experts.
The study tested participants on a realistic university network containing 8,000 hosts. While standard AI models often refused the task or stalled, the ARTEMIS framework, which acts as a "complex multi-agent framework consisting of a high-level supervisor, unlimited sub-agents with dynamically created expert system prompts," significantly outperformed existing tools. "ARTEMIS significantly outperforms existing scaffolds," the authors write, noting that other systems resulted in zero findings. "Our participant cohort discovered 49 total validated unique vulnerabilities, with the number of valid findings per participant ranging from 3 to 13."
Clark's analysis here is crucial: the bottleneck is not the AI's raw intelligence, but our ability to elicit it. "The main message to take away from ARTEMIS is that today's AI systems are under-elicited and more powerful than they appear," he states. This reframes the narrative from "AI is dangerous because it's smart" to "AI is dangerous because we don't know how to manage it yet." The economic implications are immediate, with Clark noting that "certain ARTEMIS variants cost $18/hour versus $60/hour for professional penetration testers." This suggests a rapid commoditization of high-level security tasks, potentially destabilizing the industry's labor market.
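The supervisor/sub-agent pattern the paper describes can be sketched in a few lines. This is a toy illustration of the described architecture, not the actual ARTEMIS code; `call_llm` is a hypothetical stand-in for a real model API:

```python
# Illustrative sketch of the scaffold pattern the ARTEMIS description
# suggests: a high-level supervisor decomposes a goal and spawns
# sub-agents whose expert system prompts are generated per task.
# `call_llm` is a hypothetical placeholder, not a real API.

from dataclasses import dataclass, field

def call_llm(system_prompt: str, task: str) -> str:
    # A real scaffold would call a language model here.
    return f"[{system_prompt.split(':')[0]}] report on: {task}"

@dataclass
class SubAgent:
    specialty: str

    def run(self, task: str) -> str:
        # Dynamically created expert system prompt, per the paper's description.
        prompt = f"{self.specialty}: you are an expert; investigate and report findings."
        return call_llm(prompt, task)

@dataclass
class Supervisor:
    findings: list = field(default_factory=list)

    def pentest(self, scope: str) -> list:
        # The supervisor decides which specialists to spawn for this scope.
        for specialty in ("recon", "web-app", "network-services"):
            agent = SubAgent(specialty)
            self.findings.append(agent.run(f"{specialty} sweep of {scope}"))
        return self.findings

reports = Supervisor().pentest("university network (8,000 hosts)")
print(len(reports), "sub-agent reports collected")
```

The scaffold, not the underlying model, is what turns a system that "often refused the task or stalled" into one that outperforms professionals.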
Bridging the Physical Gap
Moving from the digital to the physical, Clark examines a breakthrough in robotics known as OSMO (Open Source tactile glove for huMan-to-robOt skill transfer). This technology addresses a fundamental problem in robotics: the difficulty of transferring human dexterity to machines. By using a shared tactile glove that both humans and robots can wear, researchers have created a bridge that allows robots to learn from human demonstrations without needing vast amounts of robot-specific training data.
"OSMO is a thin, wearable tactile glove that enables in-the-wild human demonstrations while preserving natural interaction and capturing rich contact information," Clark explains. The innovation lies in its ability to eliminate the "visual domain shift" that usually plagues these systems. "Policies trained solely on human demonstrations with the OSMO glove successfully transfer continuous tactile feedback and outperform vision-only baselines by eliminating contact-related failures," the research shows.
This development is significant because it makes the boundary between human and machine permeable. It suggests a future where robots can learn complex physical tasks simply by watching a human do them, provided they share the same sensory interface. Clark connects this to the broader theme of the newsletter: the tools are becoming more intuitive, but they require a shift in how we interact with them. Just as the Lotka–Volterra equations describe the complex dynamics of predator and prey populations, these new interfaces allow us to model and control the dynamics of human-robot interaction with unprecedented precision.
The Hidden Plumbing of Progress
Finally, Clark turns to the often-overlooked infrastructure required to make AI useful in specialized fields like chip design. He discusses "ChipMain," a software tool developed by researchers in China and the US that transforms semiconductor specifications into structured data that large language models can understand. The core insight here is that the bottleneck has shifted from generating code to enabling deep comprehension of vast specifications.
"The core bottleneck in LLM-aided hardware design has shifted from how to generate code to how to enable LLMs to perform deep comprehension and reasoning over vast specification," the authors of the ChipMain paper write. Clark uses this to illustrate a broader point: the most exciting advancements are often the "plumbing" that makes the magic possible. Without tools like ChipMain, which creates a domain-specific knowledge graph, AI systems remain unable to tackle the most complex engineering challenges.
This aligns with the concept of an "excession"—a term from Iain M. Banks' science fiction referring to an object that exists in more dimensions than we can perceive. Clark suggests that the AI revolution is much like an excession: "Though we exist in four dimensions, it is almost as though AI exists in five, and we will be only able to see a 'slice' of it as it passes through our reality." The visible parts are the chatbots and the images; the invisible parts are the knowledge graphs, the tactile gloves, and the agent scaffolds that are doing the heavy lifting.
As Clark concludes: "Great fortunes will be won and lost here, and the powerful engines of our silicon creation will be put to work, further accelerating this economy and further changing things."
Bottom Line
Jack Clark's most compelling argument is that the AI revolution is not a singular event but a widening chasm between those who can elicit the technology's full potential and those who cannot. The evidence from cybersecurity and robotics proves that the capabilities are already here, hidden behind layers of software scaffolding and interface design. The biggest vulnerability in this trajectory is the lack of public understanding; as the "AI economy" accelerates, the gap between the parallel world of AI practitioners and the rest of society will become a source of profound instability. The reader must watch not just for new models, but for the tools that allow us to manage them.