Casey Newton's year-end retrospective refuses the comfort of a simple tech recap, instead presenting a stark portrait of an industry that has traded its guardrails for political expediency. The piece's most unsettling claim is not that artificial intelligence has advanced, but that the ethical frameworks meant to contain it have been systematically dismantled by the very executives who built them. This is essential reading for anyone trying to understand why the digital public square feels so much more volatile today than it did a year ago.
The Great Surrender
Newton opens by noting how the rapid diffusion of AI and the shifting policies of tech giants under a re-elected administration have crowded out all other narratives. He writes, "The year began with Meta's surrender to the right on speech issues, a move that included changing its policies to allow for more dehumanizing speech against minority groups." This framing is crucial because it shifts the blame from abstract political forces to specific corporate decisions. The author argues that the industry's pivot was not a reaction to market demand, but a strategic calculation to appease a new political reality.
Newton observes that this capitulation extended beyond content moderation to the very structure of these companies, noting how the administration's embrace of the tech right showed up quickly in policy proposals, including "most notably in its accelerationist position toward AI." This accelerationist stance, which prioritizes speed over safety, echoes the dangerous logic seen in the early days of the United States v. Google antitrust saga, where the sheer scale of market power was allowed to outpace regulatory understanding. The author's tone here is one of weary resignation, suggesting that the "principled leaders had been largely replaced by Trump appeasers."
The platforms' cynical embrace of Trump cost them little in users or revenue, while trust and safety executives went quiet amid death threats and job insecurity.
Critics might argue that companies are simply responding to a hostile regulatory environment by cutting costs, but Newton's evidence suggests a deeper ideological shift. He points out that the administration's actions, such as the Department of Government Efficiency's cost-cutting playbook, mirrored the chaos seen at Twitter, yet the tech sector largely welcomed the disruption rather than resisting it.
The Human Cost of Speed
The commentary takes a darker turn when addressing the societal fallout of these policy shifts. Newton highlights the contradiction of a year where AI policy became both looser and more restrictive, depending on the profit motive. He notes that while frontier labs eagerly made deals with the US military, "reversing long-held policies against building weapons of war," they simultaneously leaned into adult content. This duality is framed not as a bug, but as a feature of the current era.
The author draws a sharp line between the corporate embrace of acceleration and the real-world consequences for vulnerable users. "Amid rising evidence that chatbots were fueling a new mental health crisis, AI companies placed new restrictions on teen use and added parental controls," Newton writes, but he immediately contextualizes this as a reactive measure to public pressure rather than proactive ethics. The piece suggests that without external pressure, the default setting for these platforms is to maximize engagement, even if it means facilitating harm. This connects to the broader historical context of Section 230, whose legal shield has often been treated as a license for platforms to ignore the downstream effects of their algorithms until a crisis forces their hand.
Newton's analysis of the "bro-ligarchy" is particularly biting. He admits his own prediction that the tech right would fracture was wrong, observing instead that "the tech right and Trump are still painfully close." He describes a political landscape where an executive order sought to ban states from regulating AI, pushed through by the Andreessen Horowitz wing of the Republican party, despite the ban being "hugely unpopular with scores of elected Republicans." This reveals a disconnect between the tech elite and the broader political base, yet the elite's influence on policy remains undiminished.
The Bubble That Won't Burst
Looking ahead to 2026, Newton challenges the prevailing narrative of an imminent AI crash. He argues that while there will be spectacular failures, the core technology is too transformative to simply collapse. Quoting analyst Benedict Evans, he notes that "The fact that AI is working really, really well... Does NOT mean that there cannot also be a bubble in AI," and that "in fact, that's generally the kind of thing that causes bubbles." This is a sophisticated distinction: the technology works, but the valuations are detached from reality.
The author predicts that AI will have a dramatic impact on software engineering in 2026, leading to "reduced hiring rates for software engineers, rapidly changing job descriptions for those who remain, and perhaps even the beginnings of large-scale layoffs." This is a sobering forecast for a sector that has long been the engine of the tech economy. Newton suggests that outside of coding, the improvements will be incremental—"Nano Banana-scale improvements"—rather than revolutionary.
He also forecasts a cultural reckoning with AI companions, predicting that "more Americans will turn to AI companions for companionship, sex, and love — and exit the traditional dating market altogether." This societal schism, he argues, will eventually trigger warnings from religious leaders and Congressional hearings. The author's prediction that social media bans for children under 16 will become the norm, following Australia's lead, is presented as an inevitable correction to a decade of platform negligence. "After more than a decade of parents demanding stronger platform protections and mostly disappointing results," he writes, "expect other countries (and US states) to follow suit."
Bottom Line
Newton's strongest contribution is his unflinching diagnosis of the tech industry's moral collapse, arguing that the sector has chosen political survival over ethical responsibility. The piece's greatest vulnerability is its reliance on the assumption that regulatory backlash will eventually force a change, a hope that may be misplaced given the current political alignment. Readers should watch closely for the predicted LLM-powered cyberattacks, as these events may finally provide the concrete evidence needed to break the industry's regulatory deadlock.