11 predictions for 2026

Casey Newton's year-end retrospective refuses the comfort of a simple tech recap, instead presenting a stark portrait of an industry that has traded its guardrails for political expediency. The piece's most unsettling claim is not that artificial intelligence has advanced, but that the ethical frameworks meant to contain it have been systematically dismantled by the very executives who built them. This is essential reading for anyone trying to understand why the digital public square feels so much more volatile today than it did a year ago.

The Great Surrender

Newton opens by noting how the rapid diffusion of AI and the shifting policies of tech giants under a re-elected administration have crowded out all other narratives. He writes, "The year began with Meta's surrender to the right on speech issues, a move that included changing its policies to allow for more dehumanizing speech against minority groups." This framing is crucial because it shifts the blame from abstract political forces to specific corporate decisions. The author argues that the industry's pivot was not a reaction to market demand, but a strategic calculation to appease a new political reality.

Newton observes that this capitulation extended beyond content moderation to the very structure of these companies, noting how the administration's embrace of the tech right showed up quickly in policy proposals, including "most notably in its accelerationist position toward AI." This accelerationist stance, which prioritizes speed over safety, echoes the dangerous logic seen in the early days of the United States v. Google antitrust saga, where the sheer scale of market power was allowed to outpace regulatory understanding. The author's tone here is one of weary resignation, suggesting that the "principled leaders had been largely replaced by Trump appeasers."

The platforms' cynical embrace of Trump cost them little in users or revenue, while trust and safety executives went quiet amid death threats and job insecurity.

Critics might argue that companies are simply responding to a hostile regulatory environment by cutting costs, but Newton's evidence suggests a deeper ideological shift. He points out that the administration's actions, such as the Department of Government Efficiency's cost-cutting playbook, mirrored the chaos seen at Twitter, yet the tech sector largely welcomed the disruption rather than resisting it.

The Human Cost of Speed

The commentary takes a darker turn when addressing the societal fallout of these policy shifts. Newton highlights the contradiction of a year where AI policy became both looser and more restrictive, depending on the profit motive. He notes that while frontier labs eagerly made deals with the US military, "reversing long-held policies against building weapons of war," they simultaneously leaned into adult content. This duality is framed not as a bug, but as a feature of the current era.

The author draws a sharp line between the corporate embrace of acceleration and the real-world consequences for vulnerable users. "Amid rising evidence that chatbots were fueling a new mental health crisis, AI companies placed new restrictions on teen use and added parental controls," Newton writes, but he immediately contextualizes this as a reactive measure to public pressure rather than proactive ethics. The piece suggests that without external pressure, the default setting for these platforms is to maximize engagement, even if it means facilitating harm. This connects to the broader historical context of Section 230, where the legal shield for platforms has often been interpreted as a license to ignore the downstream effects of their algorithms until a crisis forces a hand.

Newton's analysis of the "bro-ligarchy" is particularly biting. He admits his own prediction that the tech right would fracture was wrong, observing instead that "the tech right and Trump are still painfully close." He describes a political landscape where an executive order sought to ban states from regulating AI, pushed through by the Andreessen Horowitz wing of the Republican party, despite the ban being "hugely unpopular with scores of elected Republicans." This reveals a disconnect between the tech elite and the broader political base, yet the elite's influence on policy remains undiminished.

The Bubble That Won't Burst

Looking ahead to 2026, Newton challenges the prevailing narrative of an imminent AI crash. He argues that while there will be spectacular failures, the core technology is too transformative to simply collapse. "The fact that AI is working really, really well... Does NOT mean that there cannot also be a bubble in AI," he quotes analyst Benedict Evans, adding that "in fact, that's generally the kind of thing that causes bubbles." This is a sophisticated distinction: the technology works, but the valuations are detached from reality.

The author predicts that AI will have a dramatic impact on software engineering in 2026, leading to "reduced hiring rates for software engineers, rapidly changing job descriptions for those who remain, and perhaps even the beginnings of large-scale layoffs." This is a sobering forecast for a sector that has long been the engine of the tech economy. Newton suggests that outside of coding, the improvements will be incremental—"Nano Banana-scale improvements"—rather than revolutionary.

He also forecasts a cultural reckoning with AI companions, predicting that "more Americans will turn to AI companions for companionship, sex, and love — and exit the traditional dating market altogether." This societal schism, he argues, will eventually trigger warnings from religious leaders and Congressional hearings. The author's prediction that social media bans for children under 16 will become the norm is presented as an inevitable correction to a decade of platform negligence, with Australia's under-16 ban as the template. "After more than a decade of parents demanding stronger platform protections and mostly disappointing results," he writes, "expect other countries (and US states) to follow suit."

Bottom Line

Newton's strongest contribution is his unflinching diagnosis of the tech industry's moral collapse, arguing that the sector has chosen political survival over ethical responsibility. The piece's greatest vulnerability is its reliance on the assumption that regulatory backlash will eventually force a change, a hope that may be misplaced given the current political alignment. Readers should watch closely for the predicted LLM-powered cyberattacks, as these events may finally provide the concrete evidence needed to break the industry's regulatory deadlock.

Deep Dives

Explore these related deep dives:

  • United States v. Google LLC (2020)

    The article discusses Google losing its antitrust case - this Wikipedia article provides deep context on the landmark DOJ case that found Google maintained an illegal monopoly in search

  • Section 230

    The article's discussion of content moderation, platform liability, and the shifting relationship between tech companies and government regulation is deeply rooted in Section 230's legal framework

  • Accelerationism

    The article specifically mentions the Trump administration's 'accelerationist position toward AI' - this philosophical concept about speeding up technological change provides crucial context for understanding the policy debate

Sources

11 predictions for 2026

by Casey Newton · Platformer

As 2024 came to a close, I noted here that two big stories were beginning to crowd out everything else in tech: the rapid development and diffusion of artificial intelligence, and the shifting policies of tech giants as they prepared for life under a re-elected President Trump.

Twelve months later, those stories did indeed define the year here at Platformer. On the product side, this year saw the first consumer agents, deep research, Google’s AI mode, OpenAI’s hardware ambitions, Sora, and the Atlas browser, among other key developments. 

Meanwhile, AI policy got both looser and more restrictive. Frontier AI labs eagerly made deals with the US military, reversing long-held policies against building weapons of war, and began leaning into adult content, from erotica in ChatGPT to Grok’s sexbot companion. On the other hand, amid rising evidence that chatbots were fueling a new mental health crisis, AI companies placed new restrictions on teen use and added parental controls.

All that took place against the backdrop of the new Trump administration, whose impact on the tech world was felt almost immediately. The year began with Meta’s surrender to the right on speech issues, a move that included changing its policies to allow for more dehumanizing speech against minority groups. It also killed its DEI program, a move followed by many of its peers, and shut down systems that once prevented the spread of misinformation.

With DOGE, Elon Musk ran the same playbook for cost-cutting in the federal government that he had done previously at Twitter, to devastating effect. The new administration’s embrace of the tech right showed up quickly in its policy proposals, including most notably in its accelerationist position toward AI.

By mid-year, Musk and Trump split. But the broader relationship between Trump and Silicon Valley remained mostly positive — particularly for Meta — despite the fact that the government continued to pursue antitrust cases against both that company and Google. (Meta won its case; Google lost.)

I spent much of the year feeling increasingly disillusioned by the platforms’ cynical embrace of Trump and how little it seemed to cost them in users or revenue. (Even Tesla stock, which plunged in the wake of DOGE’s outrages, is now up almost 10 percent year over year.) I noted in particular the disquieting silence from trust and safety executives, ...