
'Prompting' Just Split Into 4 Skills. You Only Know One. Here's Why You Need the Other 3 in 2026.


The Prompting Skills Gap Is Already 10x Wider

Most people practicing AI prompting in 2026 are standing on quicksand. The tools have fundamentally changed but the skills haven't kept up — and the gap between those who adapted and those who didn't is already tenfold.

That's the core argument from Nate B Jones, a prompt engineering researcher who's been tracking how autonomous agents reshape what it means to be "good at prompting." His framework: prompting has split into four distinct disciplines. Most people are only practicing one.

The shift happened fast. Between October 2025 and January 2026, autonomous Claude Code sessions nearly doubled. Major companies aren't publishing press releases about their AI deployments anymore; the serious players have agents running in the hundreds or thousands. Telus reported 13,000 custom AI solutions built internally. Zapier has over 800 agents. That world hasn't just arrived; it's landed.

What Changed

The old skill was conversational. You sat in a chat window, typed a request, read the output, iterated. If you were good at structuring instructions and providing examples, you were faster than you were a year ago. That skill has a ceiling.

In early 2026, models stopped being chat partners and started being workers, running for hours, sometimes days or weeks. Everything you relied on in conversation vanishes, because there is no conversation happening during the run: catching mistakes in real time, supplying missing context when the model asks, course-correcting. All of it must now be encoded upfront.

This isn't a harder version of the same skill. It's different.

The Tuesday Morning Gap

Here's how it plays out concretely. Two people sit down with the same AI model on a February 2026 morning. Same subscription, same context window.

The 2025-skilled person types a request for a PowerPoint deck. They get back something that's about 80% correct — some formatting issues, maybe font collisions. They spend 40 minutes cleaning it up but they're pretty happy because this would have taken two or three hours otherwise.

The 2026-skilled person writes a structured specification in 11 minutes. They hand it off to the same chatbot, but they treat it as an autonomous agent. They go make coffee. They come back to a completed deck that hits every quality bar defined up front, and they repeat this for five more decks before lunch.

Same model, same Tuesday — ten times the output.

The difference isn't smarter people or more technical skills. It's practicing a different skill that Person A doesn't know exists.
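The article never shows what such a structured specification looks like. A hypothetical sketch, assuming a deck-building task; the section names, file names, and quality bars are invented for illustration, not a standard format:

```markdown
<!-- Hypothetical agent spec; all names and thresholds are illustrative. -->
# Task: Q1 pipeline review deck

## Inputs
- pipeline-q1.csv (attached); brand template brand-template.pptx

## Output
- A 12-slide .pptx; each slide title is a one-sentence takeaway

## Quality bars
- Every number traceable to a cell in pipeline-q1.csv
- No slide over 40 words; template fonts only, no font collisions

## Out of scope
- Forecasting beyond Q1; editing the underlying CSV
```

The point is not the format but the completeness: every judgment call the author would have made mid-conversation is decided before the run starts.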

Context Engineering: The New Skill

Shopify CEO Tobi Lütke, who unlike most CEOs is genuinely technical, uses the term "context engineering" to describe the fundamental shift. He defines it as the ability to state a problem with enough context that, without any additional information, the task becomes plausibly solvable.

This is a communication discipline. Not clever prompt tricks or magical words. Can you state a problem so completely that a capable system can solve it without going out and fetching more context? Can you make your request as self-contained as possible?

The bar for human communication has risen dramatically. Lütke noted that being forced to provide complete context to AI has made him a better CEO: his emails are tighter, his memos stronger, his decision-making frameworks sharper.

One of his more provocative assessments: much of what people in big companies call politics is actually bad context engineering for humans. Disagreements about assumptions that never surface explicitly play out as politics and grudges because humans tend to be sloppy communicators who rely on shared context that doesn't actually exist.

The Four Disciplines

Here's the framework Jones lays out, designed to remain relevant as agents continue scaling:

Discipline One: Prompt Craft. This is the original skill taught for the last two years. Synchronous, session-based, individual. You sit in front of a chat window, write an instruction, evaluate output, iterate. The skill is knowing how to structure a query — clear instructions, relevant examples and counter-examples, appropriate guardrails, explicit output format, resolution of ambiguity and conflicts.

This is now table stakes. It's like sending email in 1998 — essential but not differentiating.

Discipline Two: Context Engineering. This is the set of strategies for curating and maintaining the optimal set of tokens during an LLM task. The shift from crafting a single instruction to curating the entire information environment an agent operates within: system prompts, tool definitions, retrieved documents, message history, memory systems, MCP connections.

The prompt you write might be 200 tokens. The context window it lands in might be a million. Your 200 tokens are 0.02% of what the model sees. The other 99.98% is context engineering.
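The arithmetic above is easy to verify:

```python
# Quick check of the proportions above: a 200-token prompt
# inside a one-million-token context window.
prompt_tokens = 200
window_tokens = 1_000_000

prompt_share = prompt_tokens / window_tokens
print(f"{prompt_share:.2%} is your prompt; {1 - prompt_share:.2%} is context engineering")
```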

This discipline produces .md files, agent specifications, RAG pipeline design, memory architectures. It determines whether a coding agent understands your project's conventions, whether a research agent has access to the right documents, whether a customer service agent can retrieve relevant account history.

The practical implication: people who are 10x more effective with AI aren't writing 10x better prompts. They're building 10x better context infrastructure.
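What "curating the entire information environment" can mean in practice: a minimal Python sketch of context assembly under a token budget. Everything here is illustrative (the function name, the crude word-count stand-in for a tokenizer, the trim-oldest-history policy); it is not any real framework's API.

```python
# Illustrative sketch: assemble an agent's context window from its layers,
# trimming the oldest message history first when over the token budget.

def assemble_context(system_prompt, tool_defs, retrieved_docs, history,
                     user_prompt, budget_tokens=1_000_000):
    """Concatenate context layers; drop oldest history if over budget."""

    def tokens(text):
        # Crude stand-in for a real tokenizer: ~1 token per word.
        return len(text.split())

    # Layers that must always be present.
    fixed = [system_prompt, *tool_defs, *retrieved_docs, user_prompt]
    remaining = budget_tokens - sum(tokens(t) for t in fixed)

    # Keep as much recent history as fits in the remaining budget.
    kept = []
    for msg in reversed(history):
        cost = tokens(msg)
        if cost > remaining:
            break
        kept.append(msg)
        remaining -= cost
    kept.reverse()

    return "\n\n".join([system_prompt, *tool_defs, *retrieved_docs,
                        *kept, user_prompt])
```

Even in this toy version, the hand-written user prompt is one argument among five; the other four are the infrastructure the paragraph above is talking about.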

"Can you state a problem so completely that a capable system can solve it without going out and fetching more context?"

The Counterpoint

Jones acknowledges the irony: prompt craft was the whole game when AI interactions were synchronous. You acted as the intent layer, context layer, and quality layer. That model of prompting broke the moment agents started running for hours without checking in.

The skills aren't harder versions of the same skill — they're different disciplines operating at different altitudes and time horizons. And most people are only practicing one.

Bottom Line

The strongest part of this argument is the concrete demonstration that 2025-style chat prompting has hit a ceiling while autonomous agents demand entirely different skills. The gap isn't theoretical — Jones shows it with a Tuesday morning comparison that's hard to argue against.

The vulnerability: the piece cuts off before fully laying out all four disciplines, leaving readers with an incomplete framework. What exactly are Disciplines Three and Four? The argument is strongest when it's showing what changed about context engineering; it's weakest when it's asking us to wait for more.

What readers should watch next: whether organizations actually implement these skills or just talk about them. The gap Jones describes already exists — the question is whether people will close it. Context engineering isn't optional anymore. It's table stakes in 2026.
