We've been looking into MCP for the last few months, first individually and then in a larger group during our lunch-and-learn "spinoff." With AI models and tools improving rapidly, MCP has become an important way to enable agentic LLMs (Large Language Models) such as Microsoft Copilot or ChatGPT to interact with outside systems. We wanted to pragmatically understand what MCP is, how to create one, when to use it, how to deploy it, and how to secure it.
Model Context Protocol (MCP) was announced by Anthropic in November 2024 as "a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant responses." Since that announcement, agents have become more widely available, as Microsoft predicted in May 2025, and they need a standard way of connecting to a system's capabilities. Think of it as a virtual USB-C port for agents.
We learned a lot from NetworkChuck's video "You need to learn MCP RIGHT NOW" and recommend it to you as well.
The video highlights several benefits of MCP:

- Reducing development complexity
- Improving security and governance
- Enabling the dynamic discovery of tools
- Working across multiple AI platforms (Claude, Copilot, etc.)
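To make "dynamic discovery" concrete, here is a sketch of the JSON-RPC `tools/list` exchange the MCP specification defines: the agent asks what a server can do, and the server answers with tool metadata. The `get_invoice` tool below is a hypothetical example we made up for illustration, not part of any real server.

```python
import json

# MCP is JSON-RPC 2.0 under the hood. An agent discovers a server's tools
# at runtime by sending a tools/list request; the server replies with the
# name, description, and input schema of each tool it offers.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Sketch of a server's reply; the get_invoice tool is hypothetical.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_invoice",
                "description": "Look up an invoice by its number.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"invoice_number": {"type": "string"}},
                    "required": ["invoice_number"],
                },
            }
        ]
    },
}

print(json.dumps(request))
print(json.dumps(response, indent=2))
```

Because the agent reads this metadata at runtime rather than compile time, a server can add or change tools without the agent being redeployed.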
We spent several lunches building a simple MCP server with the .NET C# SDK, following a sample project. We deployed it to an Azure App Service as a Web API and then, using a ChatAgent running locally, got responses from the MCP server. The models themselves were hosted in Microsoft Azure AI Foundry.
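Our server used the .NET C# SDK, which handles the protocol plumbing for us, but the core idea is simple enough to sketch without any SDK. Below is a minimal, stdlib-only Python sketch of the dispatch an MCP server performs; the `echo` tool and the `handle` function are our own illustrative names, and a real server would use an SDK and a proper transport (stdio or HTTP) rather than raw strings.

```python
import json

# Hypothetical tool registry: name -> (description, callable).
# An SDK-based server registers tools declaratively (e.g. via attributes
# in the C# SDK); this sketch only shows the request-dispatch idea.
TOOLS = {
    "echo": ("Echo the input text back.", lambda args: args.get("text", "")),
}

def handle(message: str) -> str:
    """Dispatch one JSON-RPC request to tools/list or tools/call."""
    req = json.loads(message)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": d}
                            for n, (d, _) in TOOLS.items()]}
    elif req["method"] == "tools/call":
        _, fn = TOOLS[req["params"]["name"]]
        result = {"content": [{"type": "text",
                               "text": fn(req["params"].get("arguments", {}))}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601,
                                     "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# Simulate an agent invoking the echo tool.
reply = handle(json.dumps({
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "echo", "arguments": {"text": "hello"}},
}))
print(reply)
```

The same two methods are what the deployed App Service version answers; the SDK's job is to map them onto strongly typed C# handlers.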
We've also experimented with running the LLM on our local laptops using Ollama and Podman (Docker Model Runner and Azure AI Foundry Local are other options). Neither workflow was overly complicated to get working.
MCP is an open standard that allows AI agents to securely connect to external tools, APIs, and data sources without custom integrations. Think of MCP as the “USB-C for AI”—a universal interface that simplifies connectivity. While MCP is becoming the preferred method for scalable and secure integration, AI agents can also access systems through direct API calls, SDKs, or other protocols.
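The "universal interface" claim rests on every session starting the same way: regardless of which server a client connects to, it opens with a standard `initialize` handshake and then uses the same `tools/*` methods, instead of bespoke API glue per integration. A sketch of that first message, following the shape in the MCP spec (the version string and client name below are illustrative, not prescribed values):

```python
import json

# First message an MCP client sends to any server, before listing or
# calling tools. clientInfo and the date-based protocolVersion are
# illustrative placeholders.
initialize = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "demo-client", "version": "0.1.0"},
    },
}
print(json.dumps(initialize))
```

Because this handshake is identical everywhere, swapping one MCP server for another does not require changing client code, which is the practical meaning of the USB-C analogy.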
Mcp.so maintains a large directory of existing MCP servers and clients. There are many useful MCP tools for development, including GitHub, Azure DevOps, Azure, Microsoft Learn, Playwright, and Figma, to name a few.
The real question is "where would we use MCPs to create value and improve systems?" This intersects with the choice between a non-deterministic AI and deterministic code we write ourselves to automate a solution. The answer depends on the project and on how much variability is acceptable: an AI will not answer the same way every time, since its output depends on the prompts, the requests, and more.
Many of the real-world requests we see are about enhancing agents by giving them read-only access to a business's internal information. Whether that is a chatbot that needs to know about internal documents or a customer service agent that needs to look up data about the customer on the phone, MCP is one way to enable this capability. MCP can also enable agents to "take actions," but because it is non-deterministic when an agent decides to act, businesses are more apprehensive about enabling this capability.
As the use of and need for AI agents grow, MCP will grow with them. We'll continue to learn about these technologies and be here to help analyze and determine the best tools for the job.