Building MCP servers in the real world

Gergely Orosz dismantles a pervasive myth in the AI engineering community: that the Model Context Protocol (MCP) is a solution in search of a problem. While the industry buzzes about a surplus of unused public servers, Orosz reveals a quiet revolution happening behind corporate firewalls, where the protocol is already reshaping how non-technical staff access complex data systems. This piece is notable not for predicting the future, but for exposing the immediate, tangible reality that the most valuable AI integrations are invisible to the public eye.

The Hidden Economy of Internal Tools

Orosz begins by grounding the reader in the protocol's origins, noting that MCP was released in November 2024 by Anthropic engineers David Soria Parra and Justin Spahr-Summers. He frames the technology with a compelling historical analogy, describing the protocol as aiming to be the "USB-C" layer for AI applications. Just as the USB-C standard unified a chaotic landscape of proprietary connectors into a single, universal interface, MCP seeks to standardize how AI clients connect to data servers. This comparison is effective because it moves the conversation from abstract "magic" to concrete infrastructure engineering.
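To make the "USB-C layer" concrete: MCP messages ride on JSON-RPC 2.0, and a client invoking a server tool sends a `tools/call` request shaped roughly like the sketch below. The tool name and arguments are invented for illustration, not taken from the article.

```python
import json

# Hypothetical tools/call request a client (chatbot, IDE, agent)
# might send to an MCP server exposing a documentation-search tool.
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",                  # tool exposed by the server
        "arguments": {"query": "rate limits"},  # tool-specific parameters
    },
}

print(json.dumps(tool_call_request, indent=2))
```

Because every client and server speaks this same request shape, any MCP-aware agent can use any MCP server, which is exactly the universality the USB-C analogy is pointing at.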

The core of Orosz's argument, however, lies in the survey data he gathered from 46 software engineers. He challenges the popular meme that the space suffers from an "absence of users." Instead, he finds that internal usage vastly outpaces public adoption. As Jeremiah Lowin, CEO of Prefect, observes in the piece, "One of the most interesting things that we [at Prefect] have observed is that we expected to see every company launch an MCP server, and that their customers would begin interacting with them in that way. But that is not what is happening. Many companies are launching MCP servers, but not publicly." This reframing is crucial; it shifts the metric of success from public GitHub stars to internal workflow efficiency.

The median MCP user is someone who says something like: 'I want to access my company's own data warehouse through an MCP server', and uses an internal MCP that they connect to the agent they're using.

This observation fundamentally alters the stakeholder map. The primary beneficiaries are not just developers, but business analysts and platform teams. Orosz highlights a shift where MCP is replacing traditional "Self-service Business Intelligence." He notes that the promise of the "Semantic Layer" in data—making complex data accessible to non-experts—is finally being realized through this protocol. Critics might argue that replacing established BI tools with LLM-driven agents introduces new risks of hallucination or data misinterpretation, but Orosz suggests the trade-off is worth it for the sheer reduction in friction.
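To make the data-warehouse pattern concrete, here is a minimal sketch of the kind of read-only query an internal MCP tool might wrap. Everything here is an illustrative assumption: an in-memory SQLite database stands in for the warehouse, the table and figures are invented, and the FastMCP registration shown in comments (using the framework the article itself names) is a sketch rather than a prescribed setup.

```python
import sqlite3

# In-memory SQLite stands in for the real warehouse; data is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, revenue REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 50.0)])

def revenue_by_region(region: str) -> float:
    """Read-only aggregate query an analyst might ask an agent to run."""
    (total,) = conn.execute(
        "SELECT COALESCE(SUM(revenue), 0) FROM orders WHERE region = ?",
        (region,),
    ).fetchone()
    return total

# With FastMCP, registering this function exposes it to any connected
# MCP client, e.g.:
#   from fastmcp import FastMCP
#   mcp = FastMCP("warehouse")
#   mcp.tool()(revenue_by_region)
#   mcp.run()  # serve over stdio to the agent
```

Keeping the tool strictly read-only is one way a platform team can offer the "semantic layer" convenience Orosz describes while limiting the blast radius of a misinterpreted query.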

The Mechanics of Agency and Security

The article then pivots to the practical realities of deployment, where the tension between capability and control becomes apparent. Orosz explains that internal servers thrive because organizations can control both the client and the server, allowing for complex, high-stakes interactions that would be risky in a public setting. He points out a specific technical limitation that hinders public adoption: the lack of universal support for "elicitation," or the ability of a server to pause and ask a user for confirmation.

As Gergely Orosz puts it, "Elicitation is like a confirmation in the middle of a tool call, to ask you to provide some structured input... But if the client being used doesn't support it, and most clients don't support it, then you brick the entire conversation." This technical nuance explains why a company like PayPal might avoid public deployment; the inability to confirm a fund transfer via a standard client creates an unacceptable security gap. This is a sobering reminder that while AI agents are powerful, they are only as safe as the lowest common denominator of the tools they interact with.
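To make elicitation concrete: the MCP specification lets a server send the client an `elicitation/create` request mid-tool-call, carrying a message and a small JSON schema for the structured input it needs. The payload below is a hedged sketch of such a confirmation prompt; the dollar amount, vendor, and `confirm` field are invented for illustration.

```python
# Sketch of a server-to-client elicitation request, per the MCP spec's
# elicitation feature. Values are illustrative, not from the article.
elicitation_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "elicitation/create",
    "params": {
        "message": "Confirm transfer of $250.00 to vendor ACME?",
        "requestedSchema": {
            "type": "object",
            "properties": {
                "confirm": {"type": "boolean",
                            "description": "Approve the transfer"},
            },
            "required": ["confirm"],
        },
    },
}

# A client that supports elicitation answers with an action plus the
# structured content; one that does not has no way to reply at all,
# which is the "bricked conversation" failure mode described above.
client_response = {"action": "accept", "content": {"confirm": True}}
```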

Security remains the "Achilles heel" of the ecosystem. Orosz does not shy away from this, noting that for a protocol with open-ended security questions, keeping usage within the company firewall is the only prudent path for now. This aligns with the historical trajectory of other infrastructure standards, much like how early Remote Procedure Call (RPC) implementations were often locked down within enterprise networks before broader standardization made them safe for the open web.

From Ticket to Code: The New Workflow

The most compelling evidence Orosz provides comes from specific use cases that illustrate the protocol's transformative potential. He details how teams are using MCP to connect AI agents directly to ticketing systems like Linear and JIRA, or observability tools like Sentry. The result is a dramatic reduction in context switching. A developer no longer needs to copy-paste error logs from a browser into a chat window; they can simply instruct an agent to "verify that the feature implemented in TICKET-123 works as expected."

One surveyed engineer reports, "We've only used internally built MCPs due to how much we tailor our API usages. We've built our own MCP servers for GitHub, JIRA, Datadog, Buildkite and many others." This customization is key. Public servers often offer generic functionality, but internal servers are engineered to fit the specific, messy reality of a company's unique infrastructure. Orosz also highlights the role of Figma and Playwright, showing how design-to-code and browser automation are becoming seamless parts of the development loop.

We built a new feature and got lots of people at the company to test it. They added rows to a Notion database with their testing results and feedback. I used Claude Code to create a new database with aggregated/categorized test results in it, grouped by underlying issue.

This workflow, described by software engineer Theo Windebank, illustrates a new paradigm where the AI acts as a synthesizer of human feedback, turning unstructured data into actionable tickets. It is a powerful example of the "agent" concept moving from hype to utility. However, Orosz acknowledges the imperfections, noting that agents often get parameters wrong and require retries. This honesty about the current state of the technology adds credibility to his broader optimism.
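The aggregation step Windebank describes, grouping free-form test feedback by underlying issue, can be sketched in plain Python. The rows and the `categorize` heuristic below are hypothetical stand-ins; in the workflow he describes, the LLM, not a keyword match, makes the grouping call.

```python
from collections import defaultdict

# Hypothetical feedback rows, as they might come out of a Notion database.
rows = [
    {"tester": "a", "feedback": "login button unresponsive on mobile"},
    {"tester": "b", "feedback": "crash when uploading large files"},
    {"tester": "c", "feedback": "mobile layout breaks on login page"},
]

def categorize(feedback: str) -> str:
    """Toy stand-in for the LLM's judgment about the underlying issue."""
    if "login" in feedback or "mobile" in feedback:
        return "mobile-login"
    return "uploads"

# Group testers by the issue their feedback points at.
grouped = defaultdict(list)
for row in rows:
    grouped[categorize(row["feedback"])].append(row["tester"])
```

The point of the anecdote is that the agent performs this synthesis end to end, reading one database and writing the grouped results into another, without anyone hand-writing the glue above.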

Bottom Line

Gergely Orosz's analysis succeeds by cutting through the noise of public hype to reveal the quiet, high-value adoption happening in private enterprise. The strongest part of his argument is the data-driven correction of the "more builders than users" myth, proving that the real value of MCP is in internal efficiency, not public spectacle. The biggest vulnerability remains the unresolved security challenges of exposing internal data to AI agents, a hurdle that will likely dictate the pace of public adoption for years to come. Readers should watch not for the next viral public server, but for how their own organizations begin to standardize their internal data access through this new protocol.

Deep Dives

Explore these related deep dives:

  • Remote procedure call

    MCP is fundamentally a protocol for clients to invoke tools on servers, which is the core concept behind RPC. Understanding the history of RPC from the 1970s through CORBA, XML-RPC, and gRPC provides essential context for why protocol design matters and what challenges MCP is trying to solve.

Sources

Building MCP servers in the real world

The Model Context Protocol (MCP) was released almost exactly a year ago by Anthropic, and today, MCP is enjoying quite a moment, with strong growth in the number of devs building MCP servers. That might be related to MCP servers being a great way to give agents like Claude Code, Cursor Agent, and other LLMs new capabilities to use services, query documentation, and be more efficient. Adoption is widespread and diverse, across cutting-edge startups and regulated industries like aerospace alike.

One year on, how are engineering teams using this technology, and what does that teach us? To find out, we collected input from 46 software engineers who build and use MCP servers at work, and talked with Jeremiah Lowin, CEO of Prefect and creator of FastMCP, the leading MCP framework for Python, and Den Delimarsky, core MCP maintainer and Principal Engineer at Microsoft.

Thanks to everyone who shared their experience of building with MCP.

Today, we cover:

MCP fundamentals. Brief recap of the protocol.

Usage realities. Internal MCP server usage outpaces its public usage, business stakeholders are heavy MCP users, and other details.

How teams use MCP. Based on a dozen use cases, there are varied ways of using it.

Popular public MCP servers. Stats from widely-used public MCP servers operated by Sentry and Linear, plus an odd conjunction of thousands of DAUs and millions of daily sessions.

Security considerations. Security’s still the Achilles heel of MCPs and LLMs. There are some sensible security practices for treading carefully in the space.

Learnings from building MCPs. Start small and local, choose the development language carefully, design primitives for agents and not humans – & more.

Useful tools for building MCP servers. FastMCP, MCP Inspector, and Cloudflare’s remote MCP guide among the top mentions.

Our look into MCP usage suggests that using, building, and maintaining MCP servers are on the way to becoming part of the software engineering toolset; perhaps they already are. Meantime, best practices are still taking shape. Let’s get into it:

1. MCP fundamentals.

The MCP protocol was released in November 2024 and was developed by two software engineers at Anthropic, David Soria Parra and Justin Spahr-Summers, who started work on it that July.

The protocol aims to be the “USB-C” layer for AI applications. It’s a standardized protocol to connect Clients (chatbots, IDEs, AI applications) to Servers (data, files, and tools). Here’s how the protocol works, at a ...