
What’s Going On With Model Context Protocol (MCP)?

Douglas Cardoso

“MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.” — modelcontextprotocol.io

MCP, or Model Context Protocol, is an open protocol introduced by Anthropic in November 2024 to enable applications to seamlessly provide context to large language models (LLMs).


The easiest way to understand MCP is to think of it as an API layer for AI—one that allows LLMs to connect with apps, tools, and data sources just like APIs connect frontend clients with backend systems. But instead of defining endpoints for REST or GraphQL, you define "tools" and "actions" that the model can invoke directly via the MCP interface.
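To make that concrete, here is a minimal sketch of a tool definition using the official Python MCP SDK's FastMCP helper; the server name and the add_numbers tool are illustrative:

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# The server name and the add_numbers tool are purely illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add_numbers(a: int, b: int) -> int:
    """Add two numbers and return the result."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, so an MCP client can launch it
```

Instead of documenting an endpoint for other developers, you describe a tool that the model discovers and calls on its own.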


In practical terms, MCP allows any LLM to become a new kind of UI for applications.


So instead of downloading and interacting with a traditional app, users could use a personal LLM—whether cloud-hosted or running locally on an NPU-powered device—to access that same app's features and data via MCP.


What’s even more powerful is the ability to connect multiple MCP-enabled services into the same conversation. Your LLM could have access to your Google Drive, Slack, calendar, email, and more—allowing you to, for example, analyze a document, summarize key points, draft a response, and send it via Slack or email, all without leaving the AI chat interface.
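With a desktop client like Claude Desktop, wiring several servers into one conversation is just configuration. A sketch of a claude_desktop_config.json, assuming the community filesystem and Slack reference servers and a placeholder token:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/Documents"]
    },
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"],
      "env": { "SLACK_BOT_TOKEN": "xoxb-..." }
    }
  }
}
```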


And just like modern APIs are a requirement for apps today, supporting MCP will soon be a standard expectation for any product that wants to integrate with AI. Building a new tool? You’ll need to provide an MCP interface so that LLMs can interact with it naturally.

But before we get too deep into the future, let’s step back and understand how we got here, what’s already working, and what still needs to be solved.






How Did We Get Here?

Before MCP, getting a generic LLM to interact with applications required building complex agent chains, often using frameworks like LangChain.

With LangChain, for example, you’d typically create an API that receives a document, craft a prompt to send to the LLM along with that document, get a response, pass that to another service, use its output to generate a new prompt, send that back to the LLM, and finally return the result to the user. It was a multi-step, brittle process.

A common use case: building an LLM agent that answers user questions based on data stored in a database. You’d have to design a chain where the LLM interprets the question, attempts to generate a SQL query, fetches the data, and then wraps the result in a natural-language response.
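Schematically, that pre-MCP flow looked something like the sketch below. The call_llm and run_sql helpers are hypothetical stand-ins (canned here so the sketch runs), and every hand-off between steps was a place the chain could break.

```python
# Schematic of a pre-MCP agent chain. call_llm and run_sql are hypothetical
# stand-ins with canned responses; real chains wired these through a framework
# like LangChain, and each hand-off was a potential failure point.
def call_llm(prompt: str) -> str:
    """Stand-in for an LLM API call."""
    return "SELECT COUNT(*) FROM orders" if "SQL" in prompt else "There were 2 orders today."

def run_sql(query: str) -> list:
    """Stand-in for a database client."""
    return [(2,)]

def answer_question(question: str, schema: str) -> str:
    # Step 1: ask the model to write SQL; it may return a broken query.
    sql = call_llm(f"Schema: {schema}. Write one SQL query answering: {question}")
    # Step 2: execute it; a malformed query fails right here.
    rows = run_sql(sql)
    # Step 3: ask the model to phrase the rows as an answer; it may drift off-topic.
    return call_llm(f"Question: {question} Data: {rows}. Answer in plain English.")

print(answer_question("How many orders today?", "orders(id, created_at)"))
```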


This approach had serious limitations:

  • Developers spent time building fragile chains instead of focusing on product value.

  • The LLM could easily fail at any step, producing inaccurate intermediate outputs, broken queries, or irrelevant responses. Building even a simple, reliable chain could take weeks.

  • If the response wasn’t satisfactory, users often had no choice but to restart the process and hope for a better result.


Despite these challenges, agent chains were the best way to create LLM-based workflows like summarizing documents, querying databases, or coordinating actions between tools.


That changed when Anthropic introduced the Model Context Protocol (MCP)—a cleaner, standardized way to expose real tools and data sources to LLMs, without the need for fragile chaining.





What’s Already Working

MCP was created to simplify and improve what agent chains tried to do: instead of building complex, fragile flows back and forth with an LLM, engineers can now focus on what matters—giving the model access to real data and tools, and delivering real results to users.


To be clear, MCP doesn’t entirely replace agent chains—but it makes building AI agents dramatically easier. Now, anyone can connect an LLM to an application using MCP and return a working interface to the user in minutes instead of weeks.


Take this example: you could create an AI agent that connects to a Stripe account and answers questions like “How many transactions were made today?” or “What’s the average ticket size?” This is possible because Stripe—or a third party—exposes a service through MCP that listens for LLM requests, hits the Stripe API, and returns structured data.
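As a sketch of what such a service could look like, here is a hypothetical MCP tool built with the Python SDK and the stripe library; the tool name, the metric logic, and the environment variable are illustrative, not Stripe's actual MCP offering.

```python
# Hypothetical MCP service wrapping the Stripe API (a sketch, not Stripe's real server).
import os
from datetime import datetime, timezone

import stripe
from mcp.server.fastmcp import FastMCP

stripe.api_key = os.environ["STRIPE_API_KEY"]  # illustrative env var
mcp = FastMCP("stripe-metrics")

@mcp.tool()
def transactions_today() -> dict:
    """Count today's paid charges and compute the average ticket size."""
    midnight = datetime.now(timezone.utc).replace(hour=0, minute=0, second=0, microsecond=0)
    charges = stripe.Charge.list(created={"gte": int(midnight.timestamp())}, limit=100)
    amounts = [c.amount for c in charges.auto_paging_iter() if c.paid]
    return {
        "count": len(amounts),
        "average_ticket": round(sum(amounts) / len(amounts) / 100, 2) if amounts else 0.0,
    }

if __name__ == "__main__":
    mcp.run()
```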


To do that, you need three pieces (a minimal client sketch follows the list):


  • An MCP Client — a local interface where the LLM runs and interacts with the user. This could be something like Claude Desktop or a custom app built with an MCP SDK.

  • An MCP Server — a local or hosted endpoint that gives the model "powers," like accessing the terminal, reading local files, or initiating actions.

  • MCP Services — integrations that connect the model to external APIs or tools (e.g., GitHub, databases, Slack, email). These services can return data or perform real actions like creating content, sending emails, or updating records.
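On the client side, the official Python SDK exposes a session API along these lines. A minimal sketch, assuming the add_numbers demo server from earlier is saved as server.py:

```python
# Minimal MCP client sketch using the official Python SDK (pip install mcp).
# Assumes the demo server shown earlier is saved as server.py.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["server.py"])

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover what the server offers
            print([t.name for t in tools.tools])
            result = await session.call_tool("add_numbers", {"a": 2, "b": 3})
            print(result.content)

asyncio.run(main())
```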


And it’s not just theoretical. Tools like Cursor, Postman, and the Amazon Q CLI already support MCP.


You can also run open-source LLMs with MCP, which is a game-changer for local-first and privacy-focused workflows.


Support is growing rapidly, with more and more services adopting the protocol. Cloudflare, for example, offers thirteen MCP servers, covering tasks like browsing its documentation and querying logs through a natural-language interface.


The takeaway is this: just as building a product today means creating a UI and exposing an API, building for the AI-native world means also providing MCP Services.


That way, your users—and their LLM agents—can access your product directly through chat, naturally, using structured access and real actions.


Whether you're building dedicated AI agents or enhancing personal assistants, MCP is the missing link that makes LLMs genuinely useful in practical, connected ways.

What Still Needs to Be Solved

As you’ve seen, MCP allows you to connect multiple services through a single interface. You can create a dedicated AI agent to handle a specific task, or build a broader personal assistant that coordinates multiple tools.


If you’re a company building products, start offering your services via MCP as well. This lets more users and developers access your functionality through AI, and empowers others to build new AI agents on top of your service, just as they use your APIs today to build integrations and tools.


Soon, we’ll see agents calling other agents through the MCP protocol—composing results, merging data, and triggering actions across services. It’s a new kind of interoperability, but with more automation and intelligence.


But with all that power comes significant security challenges, possibly greater than what we’ve faced with traditional public APIs.


Imagine this: your local LLM is connected to two different MCP services—one with access to your Google Drive, and another with permission to send emails. An attacker could craft a prompt injection that causes the LLM to read files from your Drive and email them to someone, all without your knowledge.


This is a real risk when multiple services are accessible at once. You may unintentionally expose sensitive data from one service to another.


We urgently need better ways to manage permissions across MCP integrations. How should different services share information? Can we control that? What are the isolation boundaries?


The answer lies in much more granular access controls, stronger than what we use for traditional APIs. Giving blanket access to everything just because we “trust” a model or integration is not acceptable in this new environment.
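MCP itself doesn’t yet standardize a cross-service permission model, so the following is purely a hypothetical sketch of the kind of per-tool allowlist a client could enforce before forwarding a model’s tool call; the service names mirror the Drive-plus-email scenario above.

```python
# Hypothetical client-side guard; MCP does not define this mechanism today.
# Each service gets an explicit allowlist of tools the model may call.
ALLOWED_TOOLS = {
    "gdrive": {"search", "read_file"},  # read-only access
    "email": set(),                     # nothing auto-approved; always ask the user
}

def is_authorized(service: str, tool: str) -> bool:
    """Permit a call only if the tool is explicitly allowlisted for that service."""
    return tool in ALLOWED_TOOLS.get(service, set())

async def guarded_call(session, service: str, tool: str, args: dict):
    # session would be an MCP ClientSession like the one sketched earlier.
    if not is_authorized(service, tool):
        raise PermissionError(f"{service}.{tool} needs explicit user approval")
    return await session.call_tool(tool, args)
```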


With MCP, companies will need to invest heavily in access control, permissions, and model oversight to ensure that LLMs don't do unintended things—or access data they shouldn’t.


But the reward is clear: the more services and products that integrate with MCP, the more capable and useful LLMs become—delivering not just better answers, but real, meaningful actions in our daily workflows.


Sources:


https://modelcontextprotocol.io/

https://blog.cloudflare.com/thirteen-new-mcp-servers-from-cloudflare/

https://7ctos.substack.com/p/the-ctos-guide-to-mcp/

https://blog.cloudflare.com/mcp-demo-day/
