Here's an argument that should make every AI company uncomfortable: Perplexity just launched one of the best products in the AI industry, and it might not matter. Nate B Jones makes the case that Perplexity Computer is genuinely impressive — it orchestrates 19 different AI models, runs asynchronous workflows for months, and handles complex multi-step research tasks better than almost anything on the market. But Jones says it's also a cautionary tale about where most of the AI industry is building right now. The problem isn't execution. It's the layer. When you depend on model providers who are simultaneously building the exact same product you're selling — while controlling access terms and pricing — you're just borrowing time.
The Structural Problem
The first few weeks of 2026 revealed patterns that will take years to play out. In late January, OpenClaw — an open source AI agent built by Austrian developer Peter Steinberger — hit 200,000 GitHub stars and became the fastest growing repository in history. It runs locally through WhatsApp and Telegram, manages email, modifies files, and browses the web autonomously.
The demand signal was massive. The security problems were equally massive. Cisco's security team found a third-party skill performing data exfiltration without user awareness. One agent deleted all its emails. Another created a dating profile and started screening matches unprompted.
Despite the flaws, people kept showing up. OpenClaw eventually grew into a project that Peter Steinberger now leads at OpenAI, which he officially joined in February.
The Timeline Hardens
On January 13th, Anthropic launched Claude Co-work — essentially Claude Code for the rest of your work. By February 5th, Anthropic shipped Claude Opus 4.6 with a million-token context window. Within days, the iShares Expanded Tech-Software ETF recorded its worst two-day stretch since 2008. Multiple selloffs wiped out something like a quarter of a trillion dollars from SaaS stocks.
But Anthropic just kept shipping. Co-work launched on Windows on February 11th, giving it access to roughly 70% of desktop computing. On Valentine's Day — February 14th — Peter Steinberger officially joined OpenAI. The OpenClaw project now had sponsorship and the resources to build a secure version.
On February 18th, Perplexity confirmed it abandoned advertising, telling the Financial Times that ads risk making users lose trust. By February 24th, Anthropic's Enterprise Agents launched with deep connectors, private plug-in marketplaces, and pre-built templates.
February 25th — Samsung unveiled the Galaxy S26 agentic AI phone running Perplexity. Google previewed Gemini agents with App Functions, letting apps expose data directly to AI at the OS level. And Perplexity shipped Computer.
In about six weeks between January and February, the industry stratified into layers with fundamentally different structural economics: model providers own the weights at the bottom, orchestration players combine models into products in the middle, and distribution owners control the surface where users encounter agents. Cloud providers hover over everything, spending $690 billion a year on infrastructure they must fill with tokens.
The Squeeze From Both Directions
The problem for Perplexity is that it sits in the middle layer — the most exposed position in the system. When a technology stack consolidates, the layer between platform owner and customer gets squeezed. It happened to travel agents, media companies, enterprise middleware. The common thread: if you don't own the layer below or the relationship above, you're just borrowing time.
In AI, you're in that trap if you build on models you don't control and serve customers that model providers are now selling to directly. Every upstream provider has the ability — and now the incentive — to replicate what Perplexity does and change pricing and access terms in ways that compress margins.
Reports have surfaced that Anthropic began banning users who powered OpenClaw with Claude credentials. Similar reports emerged about Google and OpenClaw. If that logic extends to orchestration layers, the dependency risk for Perplexity won't be theoretical. It will be very practical.
But the squeeze isn't only coming from below. The same players squeezing Perplexity from below are also coming from above. The conventional defense for middleware companies is domain expertise — the thing model makers cannot replicate. OpenAI Frontier just blew a hole in that argument.
Frontier launched as an enterprise platform connecting siloed data warehouses, CRM systems, and internal applications into what OpenAI calls a semantic layer for the enterprise. The idea: onboard agents with institutional knowledge, grant them identity and permissions, and build evaluation loops so agents improve with experience. That's the context layer. It's not fully realized yet — they're not yet talking about superhuman intelligence operating across that context layer — but that's where OpenAI wants to go.
Harvey, Sierra, Decagon, and Abridge are already committed as Frontier partners. Smart players who build on top of models are joining forces with OpenAI because they don't want to get eaten.
This doesn't mean domain expertise is worthless. It means the form that survives may be narrower than people think. If your domain expertise is mostly connecting enterprise systems and teaching AI how your org works, Frontier does that now with forward deployed engineers. If it's something deeper — proprietary data, regulatory knowledge from years of compliance, operational insight from running specific physical processes — there's still value there.
But most companies claiming domain moats haven't done the rigorous thinking to figure out whether they have true expertise that survives this kind of context consolidation.
What Perplexity Computer Actually Does
Despite the name, it's not hardware. It's a cloud-based agentic system that orchestrates 19 AI models to execute complex multi-step workflows end to end. It's available exclusively on Perplexity's $200-a-month Max tier — the company's clearest bet yet that the value layer in AI is not the model, it's the orchestration.
The core idea: you describe an outcome and Computer decomposes it into tasks and subtasks. It spawns specialized sub-agents that run in parallel. One agent might do web research while another drafts a document. A third generates visuals and a fourth writes code. Each task runs in an isolated compute environment with access to a real file system, a real browser, integrations with tools like Gmail, Slack, GitHub, Notion, Salesforce, and more than 400 others.
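The decompose-then-fan-out pattern described above can be sketched in a few lines. The sub-agent functions and task names here are hypothetical stand-ins, not Perplexity's actual internals — just an illustration of parallel sub-agents feeding one plan:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sub-agents; in the real product each would run in an
# isolated compute environment with its own browser and file system.
def web_research(topic):
    return f"research notes on {topic}"

def draft_document(topic):
    return f"draft briefing for {topic}"

def generate_visuals(topic):
    return f"charts for {topic}"

def run_plan(topic):
    """Decompose an outcome into subtasks and run them in parallel."""
    subtasks = [web_research, draft_document, generate_visuals]
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        futures = [pool.submit(task, topic) for task in subtasks]
        # Collect results in submission order once every sub-agent finishes
        return [f.result() for f in futures]
```

The interesting design choice is that the orchestrator waits on all sub-agents before assembling deliverables, which is what makes the "close your laptop, come back later" workflow possible.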
Model routing is Perplexity's differentiating architectural choice. Computer uses Opus 4.6 as its central reasoning engine and delegates to specialized models per task. This routing is automatic but users can override — pin specific models to specific subtasks if they have strong preferences or want to manage token budgets.
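A minimal sketch of that routing idea — a default model per task type, with a user override that pins a specific model. The model names and task types are illustrative assumptions, not the product's real routing table:

```python
# Illustrative routing table: task type -> model. Names are assumptions.
DEFAULT_ROUTES = {
    "reasoning": "opus-4.6",         # central reasoning engine
    "coding": "code-specialist",
    "search": "search-specialist",
}

def pick_model(task_type, overrides=None):
    """Route a subtask to a model; user overrides win over the defaults."""
    overrides = overrides or {}
    if task_type in overrides:
        # The user pinned a model for this task type
        return overrides[task_type]
    # Unknown task types fall back to the central reasoning engine
    return DEFAULT_ROUTES.get(task_type, DEFAULT_ROUTES["reasoning"])
```

Pinning — e.g. `pick_model("coding", {"coding": "cheap-model"})` — is how a user might trade quality for token budget on a given subtask.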
Crucially, workflows can run asynchronously for hours or even months. Kick off a job, close your laptop, come back to finished deliverables. Computer retains persistent memory across sessions, accumulating context about preferences and past work over time.
The most credible early use cases cluster around research-heavy, multi-source workflows — competitive intelligence and market research that goes beyond what ordinary search can do. Give it a prompt like "analyze the top five competitors in X space, track their recent product launches, and produce a briefing." Computer parallelizes seven different search types simultaneously, reads full source pages, handles academic sources, cross-references findings, and constructs a structured, detailed report.
Financial analysis and investment memos also play into something Perplexity has been investing in. Pull earnings data, compare margins across competitors, synthesize analyst sentiment, output a formatted PDF with charts or a website. This is where multimodel routing really earns its keep — research agents gathering data while the coding agent builds visualizations.
Outbound and pipeline building surprised some reviewers. Computer finds real email addresses, researches each prospect's recent activity, drafts personalized messages referencing specific details, sends them through a connected Gmail account. The recurring version — daily competitive monitoring, weekly reports — shifts from one-off assistant into persistent agent.
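The shift from one-off assistant to persistent agent is, at its core, a scheduling loop around the same task. A toy simulation of the daily-monitoring case, with a hypothetical `daily_brief` standing in for the real research-and-deliver workflow:

```python
import datetime

def daily_brief(day):
    # Stand-in for the real workflow: research, synthesize, deliver.
    return f"competitive briefing for {day.isoformat()}"

def run_recurring(start, days):
    """Simulate a recurring agent: the same task fired once per day."""
    return [daily_brief(start + datetime.timedelta(days=i))
            for i in range(days)]
```

In the real product the trigger would be a persistent scheduler rather than a loop, and each run would inherit the accumulated memory described above — but the structure is the same: one task definition, fired on a cadence.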
Build tasks like "build me a personal finance dashboard" or "create a portfolio site for me with case studies and then deploy it." Computer handles research, design, code, deployment loop in one long session, although it currently stamps outputs with a watermark so you may not want to distribute widely.
Content repurposing becomes a multi-tool pipeline that would normally require stitching together three or four services. Pull a segment from a long podcast, extract the clip, convert to vertical format, add captions.
If you don't own the layer below or the relationship above, you're just borrowing time in AI.
The hyperscalers are not neutral referees. They need trillions of tokens flowing through their infrastructure to justify their valuations and capital spend — it's simple math. If OpenAI can't hit its target of $250 to $280 billion in revenue within a few years, the entire capital structure of the system doesn't work. The same pressure applies, at differing scales, to every hyperscaler.
AWS secured exclusive third-party distribution for Frontier and is co-building the stateful runtime environment on Bedrock — vertical integration that guarantees the enterprise agent layer runs through its infrastructure. Microsoft takes a 20% revenue share from OpenAI through 2032, capping OpenAI's upside. Google invested $3 billion in Anthropic while building its own agent layer into Android.
Bottom Line
The strongest part of Jones's argument is structural: when every model provider you depend on is simultaneously building the exact product you're selling — and controlling access and pricing — good execution at the middle layer doesn't create durability, it creates dependency. The uncomfortable twist is that Perplexity read the market correctly, killed its ad business early, and targeted $650 million in 2026 revenue. Execution isn't the problem. But the stack economics suggest the company is renting its position rather than owning it — and the rent is about to get very expensive.