Deep Dive into the MCP C# SDK and the 2025-06-18 Update

Introduction: Connecting .NET Apps to AI with MCP

This comprehensive guide demystifies the Model Context Protocol (MCP) C# SDK – explaining what MCP is, why it matters for .NET and AI integration, and how to get started – while also diving deep into the June 18, 2025 MCP SDK update. We’ll explore new features from that update (improved OAuth security, interactive elicitation, structured outputs, resource links, etc.), with hands-on C# examples, best practices, and analysis of what these changes mean for developers building AI-powered .NET applications.

Artificial intelligence is rapidly becoming a first-class component of modern applications. For .NET professionals, a key challenge is connecting AI models (like large language models, or LLMs) with the rest of our application architecture – our databases, services, and tools. Traditionally, if you wanted an AI assistant or ML model to access external data or perform actions, you had to wire up custom integrations or expose HTTP APIs for each case. This approach is cumbersome and not easily scalable across different tools. Enter the Model Context Protocol (MCP) – a new open standard that promises to simplify and standardize how AI systems interact with external data sources and services. Think of MCP as a kind of “USB-C port” for AI applications, providing a universal interface for connecting AI to a variety of resources.

In this article, we’ll explore what MCP is and why it matters for .NET developers and enterprise architects. We’ll then dive into the MCP C# SDK – an official .NET SDK provided by Microsoft that allows you to build MCP-compatible clients and servers in C#. This guide will also serve as an onboarding tutorial: we’ll walk through getting started with the SDK, including code examples to create your own MCP server and tools.

Crucially, we’ll examine the June 18, 2025 update of the MCP C# SDK (corresponding to the MCP specification version 2025-06-18) that was announced on Microsoft’s .NET Blog in July 2025. This update brought several major features and improvements to the SDK, such as a new OAuth 2.1-based authentication model, an elicitation mechanism for interactive user prompts, support for structured data output from tools, and more. We will break down each of these features, explain what changed and why it’s important, and provide commentary on what it means for developers building AI-integrated applications in .NET.

By the end of this 10,000-word deep dive, you should have a solid understanding of how to leverage the MCP C# SDK in your projects, how the latest update enhances the SDK, and how MCP can fit into your architecture for AI-powered solutions. Whether you’re a .NET software engineer looking to integrate AI into your app, an AI/ML engineer exploring ways to extend model capabilities with external tools, or an enterprise architect evaluating how this fits into a larger strategy, this article has you covered.

Let’s start by understanding the fundamentals of MCP itself.

What is the Model Context Protocol (MCP)?

Model Context Protocol (MCP) is an open standard designed to bridge the gap between AI models (especially large language models) and the external world of data, tools, and services. In simpler terms, MCP defines a structured way for AI systems (like chatbots or agent-like AIs) to request and retrieve information or perform actions through external “tools” or data connectors in a consistent, interoperable manner. The goal is to avoid the proliferation of one-off integrations – instead of each AI platform inventing its own way to plug into, say, a database or a calendar API, MCP provides a common protocol that any compliant client and server can use to talk to each other.

MCP was introduced by Anthropic in late 2024 as a community-driven initiative. Anthropic open-sourced the MCP specification and initial SDKs in November 2024, framing it as a solution to the problem of AI assistants being “trapped behind information silos.” Every new data source normally required custom code, but MCP aims to “replace fragmented integrations with a single protocol”, making it much easier to give AI access to the data and context it needs. This protocol is analogous to a universal adapter: just as you can plug many different devices into a standard port, an AI agent can interface with many different tools via MCP without custom wiring each time.

How MCP Works (Clients and Servers): The architecture of MCP is straightforward. There are MCP servers and MCP clients. An MCP server exposes certain capabilities or data – for example, it might provide access to a database, a third-party service like GitHub or Slack, a file system, or custom business logic. These capabilities are presented as “tools” or endpoints that an AI can invoke. On the other side, an MCP client is typically an AI-powered application or assistant (like a chatbot, agent, or even an IDE plugin) that can connect to MCP servers and call those tools. The MCP specification defines how a client discovers what tools a server offers, how to call them, how data is exchanged (including schemas), and how authentication and permissions are handled in a standardized way.

To make this concrete, imagine you’re building a smart chatbot for customer support. Traditionally, if the chatbot (perhaps powered by an LLM) needs to fetch order information from your database, you’d either give the model a long text dump of data (not ideal), or you’d create a special API endpoint that the chatbot can call via some plugin or proxy. With MCP, you could set up an MCP server that exposes “tools” like GetOrderDetails or LookupCustomer securely. The chatbot (if it’s an MCP-enabled client) can ask the server what tools are available, then invoke those tools with parameters (like a customer ID), and get structured results back. All of this happens through a standardized protocol – meaning the details of the call (the JSON payloads, schema definitions, etc.) follow the MCP spec rather than a custom schema you designed ad-hoc.
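To give a flavor of the server side of this scenario, here’s a hedged sketch of what such a tool could look like with the C# SDK – the OrderTools class, the GetOrderDetails method, and its return format are all hypothetical, and the attribute pattern is covered step-by-step in the getting-started section below:

```csharp
using System.ComponentModel;
using ModelContextProtocol.Server;

[McpServerToolType] // marks this class as containing MCP tools
public static class OrderTools
{
    // Hypothetical tool: the SDK generates the JSON schema for the
    // orderId parameter and the string result automatically.
    [McpServerTool, Description("Fetches the details of an order by its ID.")]
    public static string GetOrderDetails(string orderId)
    {
        // In a real server this would query the order database.
        return $"Order {orderId}: status=Shipped, items=3";
    }
}
```

The chatbot never sees this C# code – it only sees the tool’s name, description, and schema over the protocol, and invokes it with a JSON payload.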

Why MCP Matters: MCP’s promise is interoperability and ease of integration. It decouples the AI model provider from the data/tool provider. For example, Anthropic’s Claude, OpenAI’s systems, or other AI platforms could all potentially use MCP to access tools in a uniform way. For organizations, this means you could invest in building MCP servers (connectors to your internal systems) once, and any AI client that speaks MCP – whether it’s a cloud AI service, an open-source LLM, or a custom .NET application – can use those connectors. Microsoft describes MCP as enabling seamless integration between LLM applications and external data sources and tools. In essence, MCP allows AI to step beyond its trained knowledge and interact with live data and operations – safely and in a controlled manner.

Another key benefit is that MCP encourages a secure and structured approach. Because it standardizes how authentication works and how data schemas are defined (more on those later), it’s easier to reason about and enforce security when an AI is calling external functions. This is especially important in enterprise settings where you might want fine-grained control over what an AI can do.

It’s worth noting that MCP is an open protocol and community-driven. It’s not tied to a single vendor. Early adopters beyond Anthropic include companies like Block (formerly Square) and Apollo, and developer tool companies such as Zed, Replit, Codeium, and Sourcegraph have all explored MCP to enhance their AI features. There is an ecosystem forming around MCP – for example, there are already pre-built MCP server connectors for popular systems like Google Drive, Slack, GitHub, Git (version control), PostgreSQL databases, web browsers (Puppeteer), and more. This means as a developer, you might find existing MCP servers you can readily use, or you can contribute your own. Over time, this ecosystem can significantly speed up how we integrate AI with the “real world.”

Now that we have the big picture of what MCP is, let’s focus on the C# angle: how does .NET fit into this? That’s where the MCP C# SDK comes into play.

Overview of the MCP C# SDK

The MCP C# SDK is the official .NET implementation of the Model Context Protocol, enabling developers to easily create MCP servers and clients using C#. Microsoft has actively collaborated in bringing this SDK to the .NET community, as evidenced by the .NET blog posts and the open-source repository on GitHub. In practical terms, the MCP C# SDK is a NuGet package (named ModelContextProtocol) that you can add to your .NET projects to get all the building blocks for MCP – you don’t have to implement the protocol from scratch or worry about the low-level details of JSON-RPC, socket management, etc.

What Can You Do With the C# SDK? In short, you can build both MCP servers and MCP clients with it. This means as a .NET developer you can:

  • Create an MCP Server: Expose your own tools, functions, or data to any MCP-compliant AI client. For example, you could make an MCP server that exposes an API to your company’s internal HR system, or a server that provides a set of utility tools (calculations, data lookup, etc.) for an AI assistant to use. The SDK makes it straightforward to turn C# methods into MCP “tools” that are discoverable and callable via the protocol.
  • Create an MCP Client: Connect to existing MCP servers and invoke tools from your .NET code. For instance, you might build a .NET application that acts as an AI orchestrator – it could consume an MCP server that provides, say, stock market data or IoT sensor readings. Or you might integrate an AI assistant into your app that calls out to various MCP servers for information. The SDK provides client components to handle connecting, initializing, and calling tools on a server, including handling the asynchronous, streaming nature of some interactions.

The MCP C# SDK is open source and hosted on GitHub, which is great for transparency and community contributions. Developers are encouraged to contribute – whether by reporting issues, suggesting enhancements, or even contributing code – to help evolve the SDK alongside the MCP spec updates. It’s also in active development: at the time of writing (mid-2025), the SDK is in preview, meaning APIs might still evolve as the community learns and as the MCP specification is refined. (For example, earlier blog posts explicitly note the SDK was in preview and would be updated continuously.)

One of the design goals of the SDK is to simplify the implementation process of the protocol so that you, the developer, can focus on your application’s unique features rather than protocol boilerplate. Essentially, it wraps the MCP protocol (which involves JSON RPC calls, standard endpoints like tools/list, etc.) into familiar C# patterns. It leverages .NET features like dependency injection, attributes for decorating classes as tools, and integration with Microsoft.Extensions.Hosting to easily spin up servers.
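For a sense of what that boilerplate looks like on the wire, here is an illustrative JSON-RPC exchange for the tools/list endpoint – abbreviated and hand-written here, so the exact payloads the SDK emits may include additional metadata:

```json
// Request from client
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// Response from server (abbreviated)
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "echo",
        "description": "Echoes the message back to the client.",
        "inputSchema": {
          "type": "object",
          "properties": { "message": { "type": "string" } },
          "required": ["message"]
        }
      }
    ]
  }
}
```

The SDK generates and handles all of this for you; you only write the C# method.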

To illustrate this, the SDK uses attributes like [McpServerTool] and [McpServerToolType] to mark which classes and methods should be exposed as tools on your server, and it uses conventions to automatically generate JSON schemas for inputs/outputs. It also provides built-in transports – e.g., communication can happen over standard I/O, named pipes, HTTP, or other channels – so you can run an MCP server locally (for example, via STDIO which is useful for local tools in VS Code) or potentially as a web service.

The SDK also integrates with the emerging Microsoft.Extensions.AI packages (as noted in some documentation) to easily tie in actual AI model calls if needed, though for many tools you write, the logic might just be standard C# code that accesses some resource.

NuGet and Versioning: You can acquire the MCP C# SDK via NuGet. Since it’s in preview around the 2025 timeframe, you might need to include prerelease packages. The blog posts and documentation instruct you to install it like so:

dotnet add package ModelContextProtocol --prerelease

This pulls down the SDK package. The version that added support for the 2025-06-18 MCP spec was a prerelease, aligning with that spec update. Going forward, the SDK will likely continue to track the MCP spec revisions (which are date-versioned). The spec version “2025-06-18” literally corresponds to June 18, 2025 – the date when that spec revision was finalized. The SDK update that supports it was released soon after (the .NET blog announcement came on July 22, 2025).

In summary, the MCP C# SDK is your AI integration toolkit in the .NET world: it abstracts the complexity of the Model Context Protocol into easy-to-use C# APIs, letting you expose and consume AI “tools” in a standard way. Now, let’s actually use it. In the next section, we’ll walk through setting up a simple MCP server and tool using C# – a practical onboarding guide.

Getting Started with the MCP C# SDK

Let’s roll up our sleeves and walk through the process of building a minimal MCP server in C#. This will serve as an onboarding guide to help you understand the basics of using the MCP C# SDK in practice. We’ll cover setting up a project, adding the SDK, defining a tool, and running the server. Along the way, we’ll highlight key concepts and best practices. By the end of this section, you should be able to create your own MCP server that an AI agent (or any MCP client) could connect to and use.

Setting up an MCP Server (Step-by-Step)

For this walkthrough, we’ll create a simple console application that will act as our MCP server. You can follow these steps:

  1. Create a new .NET console project: Use the dotnet CLI or Visual Studio to create a console app. For example, from the command line:

    dotnet new console -n MyFirstMcpServer

    This will create a new console application in a folder named MyFirstMcpServer (feel free to choose your own name).
  2. Add the MCP C# SDK NuGet package: As mentioned, the SDK is available as a NuGet package (likely in prerelease). Add it to your project by running:

    dotnet add package ModelContextProtocol --prerelease

    This will reference the MCP SDK library so you can use its APIs in your code. Additionally, for convenience, we’ll use the generic host and dependency injection from .NET, so let’s also add:

    dotnet add package Microsoft.Extensions.Hosting

    (This isn’t strictly required for MCP, but it makes it easy to set up a robust server with DI, logging, configuration, etc., and is used in most examples.)
  3. Write the server initialization code: Open the Program.cs of your console app. We’ll configure a host, add MCP server services, and specify how the server should listen for requests. Here’s a basic example setup:
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;
    using Microsoft.Extensions.Logging;
    using ModelContextProtocol.Server;
    // ... other usings as needed

    var builder = Host.CreateApplicationBuilder(args);

    // Optional: configure logging to see what’s happening (logs to console for now)
    builder.Logging.AddConsole(options =>
    {
        options.LogToStandardErrorThreshold = LogLevel.Trace; // log everything to stderr
    });

    // Register and configure the MCP server
    builder.Services
    .AddMcpServer() // Add core MCP server services
    .WithStdioServerTransport() // Use STDIO transport (server will communicate over stdin/stdout)
    .WithToolsFromAssembly(); // Automatically discover tools in this assembly

    await builder.Build().RunAsync();
    Let’s break down what this does:
    • We create a Host builder, which is the typical pattern for console apps using the Generic Host in .NET (providing dependency injection, logging, etc.).
    • We add a console logger just for debugging purposes (so we can see logs from the MCP framework).
    • The call to AddMcpServer() registers the MCP server infrastructure. We then configure it with a transport. Here we chose .WithStdioServerTransport(), which means our server will communicate using standard input/output streams. This mode is especially useful for local testing and for integrating with tools like VS Code or GitHub Copilot (more on that soon). Essentially, STDIO transport means the server reads requests from stdin and writes responses to stdout – a pattern that some IDE extensions and tools use to run local “agents.”
    • .WithToolsFromAssembly() is a convenient method that scans the current assembly for any classes/methods that are marked as MCP tools (using the attributes we’ll show below) and automatically registers them. This saves us from manually registering each tool.
    After configuring services, we build and run the host. Running the host will start listening for incoming MCP client connections/requests. At this point, our server doesn’t actually have any tools defined yet – it’s just a framework ready to host tools. We’ll fix that next by defining a simple tool.

Defining Tools (Example: Echo Tool)

In MCP, tools are the functions or actions that a server exposes to clients. Using the C# SDK, you define tools as C# methods (static methods or instance methods in certain classes) and decorate them with attributes so the SDK knows to expose them.

Let’s create a very simple example tool – an “Echo” tool that takes a message and returns it (maybe with some modification). This will demonstrate the basic pattern.

First, we need a class to hold our tools. We can create a new class file or just add it to Program.cs for simplicity. We’ll make a static class with a couple of static methods:

using System.ComponentModel;
using ModelContextProtocol.Server;

[McpServerToolType]  // This attribute marks the class as containing MCP tools
public static class EchoTools
{
    [McpServerTool, Description("Echoes the message back to the client.")]
    public static string Echo(string message)
    {
        return $"Echo: {message}";
    }

    [McpServerTool, Description("Echoes the message in reverse.")]
    public static string ReverseEcho(string message)
    {
        // reverse the string
        char[] chars = message.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}

A few important points about this code:

  • We applied the [McpServerToolType] attribute to the class. This signals to the MCP SDK that this class may contain tool methods. The .WithToolsFromAssembly() we used earlier will scan for this attribute.
  • Each tool method is marked with [McpServerTool]. This attribute tells the SDK to register the method as a tool. We also provided a Description attribute for each. The description is metadata that gets sent to the client; it helps the AI (or developer) understand what the tool does. For example, when a client asks the server for the list of available tools, the server will include the name and description of each tool. The description can influence the AI’s decision on which tool to use.
  • Our methods Echo and ReverseEcho each take a string parameter and return a string. The SDK will automatically generate a JSON schema for the input and output of these tools. For instance, it will know that the input is an object with a property “message” of type string, and that the output is just a string. We don’t have to manually define any JSON – the SDK does it for us.
  • The return type here is a simple string, which means the content will be returned as text. (Later we’ll discuss how you can return more complex data or even structured results.)

That’s it! With just these few lines, we’ve declared two tools. Because we wired up our Program with WithToolsFromAssembly, the SDK will detect EchoTools class and register both Echo and ReverseEcho as tools when the server starts.

Now, conceptually, if an MCP client connects to our server and requests the tool list, it would see something like:

  • Tool name: echo (the SDK derives the name from the method name – lowercased/snake_case – unless you specify one explicitly)
  • Description: “Echoes the message back to the client.”
  • Input schema: one string parameter “message”
  • Output schema: string result

And similarly for reverse_echo.

We could actually customize the tool names or provide a more user-friendly title, if desired, by using parameters on [McpServerTool]. For example, we could do [McpServerTool(Name = "echo", Title = "Echo Tool")] to explicitly set the name and a title (we’ll revisit this when we discuss the update features, as the Title is a new addition in the update). But for now, default is fine.

Running and Testing Your MCP Server (VS Code Integration)

With our server code in place (the Program with AddMcpServer and the EchoTools defined), we can run the application. If you run it normally (e.g., dotnet run from the command line), it will start up and wait for an MCP client to connect via STDIO. However, nothing visibly will happen since there’s no client attached yet. So how do we actually test this?

One convenient way to test and interact with your MCP server is through Visual Studio Code’s MCP support (GitHub Copilot Chat’s “agent” mode). Microsoft’s VS Code (with the GitHub Copilot extension, as of mid-2025) has integrated support for MCP, allowing developers to register local MCP servers as “agents” that Copilot can use to answer questions or perform tasks. We can use this to try out our Echo tool in a chat-like environment.

Here’s how to set it up in VS Code:

  1. Make sure you have the latest GitHub Copilot extension and that the “Copilot Chat” or “AI Assistant” feature is enabled (often called Agent mode or similar).
  2. In your VS Code workspace (or global settings), create an mcp.json file (if it’s not there already). This file is used to configure MCP servers for VS Code’s AI features.
  3. Add an entry for your server. For example, your mcp.json might look like:

    {
      "inputs": [],
      "servers": {
        "MyFirstMCP": {
          "type": "stdio",
          "command": "dotnet",
          "args": [ "run", "--project", "D:/source/MyFirstMCP/MyFirstMCP.csproj" ]
        }
      }
    }

    This JSON tells VS Code that there is a server named “MyFirstMCP” which can be started by running the command specified. We use dotnet run --project <path> to launch our project. Adjust the path to point to your project’s .csproj. The "type": "stdio" setting matches how we configured the server to communicate. Essentially, VS Code will spawn this process and communicate with it via stdin/stdout.
  4. Now, open the Copilot Chat / Agent panel in VS Code. There is usually a dropdown or some way to select the active “Agent” or server. You should see “MyFirstMCP” (or whatever name you gave) as an available server/agent in the dropdown. Select it.
  5. Once selected, your MCP server will launch (VS Code will run the command). In the Copilot chat, you can now ask it to use your tools. For example, type a prompt like: “Use the Echo tool to echo the message ‘Hello MCP’”. Copilot (the AI) will see that there is a tool called “echo” available (with description “Echoes the message back to the client”) and should decide to invoke it. The Copilot UI will likely prompt you to confirm running the tool (for security reasons, it asks user permission before executing tools).

Screenshot: Selecting our custom MCP server (“MyFirstMCP”) in VS Code’s Copilot Chat agent dropdown. Once selected, the AI assistant knows about the Echo and ReverseEcho tools we defined, as shown in the available tools list.

When you trigger the tool, you’ll see a message like “The extension wants to run tool reverse_echo with input { message: ‘Hello MCP’ }” and be asked for confirmation. After you allow it, the server will execute our ReverseEcho method and return the result, which the AI will then display in the chat.

Screenshot: GitHub Copilot Chat prompting for permission to run the ReverseEcho tool with the provided input. This is an example of the MCP client (Copilot) performing an elicitation for confirmation, which showcases security in tool invocation.

In our case, if we used Echo, the response should come back as “Echo: Hello MCP”. If we used ReverseEcho, the response would be “PCM olleH” (which is “Hello MCP” reversed – note that unlike Echo, ReverseEcho adds no prefix). The Copilot chat will then present the tool’s output as part of its answer.
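The reversal logic inside ReverseEcho is plain C#, so you can sanity-check the expected output in isolation – here’s a standalone snippet mirroring the tool body, outside the MCP server:

```csharp
using System;

public static class EchoDemo
{
    // Same logic as the ReverseEcho tool body.
    public static string Reverse(string message)
    {
        char[] chars = message.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }

    public static void Main()
    {
        Console.WriteLine(Reverse("Hello MCP")); // prints "PCM olleH"
    }
}
```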

This quick test confirms that our MCP server is up and running and that our tools can be invoked by an AI client. We just built a trivial example, but it’s not hard to imagine more useful scenarios. For instance, you could create tools to query a database, call a web API, or perform computations, and the AI (Copilot, or any MCP-compliant AI) could use those tools to help answer questions or automate tasks.

Before moving on, let’s reflect: in just a few lines of code we achieved something powerful – we extended an AI assistant (Copilot) with custom capabilities written in C#. That’s the magic of MCP: by writing standard C# methods and hosting an MCP server, we can plug our own functionality into AI. This is extremely potent for enterprise use cases (connecting AI to proprietary data or systems) and for personal productivity (automating dev tasks, etc.).

Now that you have a basic idea of how to set up and run an MCP server with the C# SDK, it’s time to look at the major update from June 18, 2025. This update introduced several enhancements to MCP and the SDK that make it even more capable. Understanding these will help you take advantage of the latest features and also ensures your MCP implementations are aligned with the current spec.

Inside the June 18, 2025 MCP SDK Update

On June 18, 2025, a new version of the MCP specification was released (version 2025-06-18), and the MCP C# SDK was updated to support it shortly after (announced in late July 2025). This was a significant update, bringing in four major new features:

  1. Improved Authentication Protocol – a shift to a more robust OAuth 2.1-based security scheme.
  2. Elicitation Support – enabling interactive prompts where the server can ask the user for more information mid-conversation.
  3. Structured Tool Output – allowing tools to return structured (typed) data that the AI can more easily interpret.
  4. Resource Links in Tool Results – tools can now return hyperlinks or references to resources they create, facilitating follow-up actions.
  Additionally, the update included schema and metadata enhancements, such as richer metadata fields and human-friendly titles for tools.

Let’s go through each of these changes, understand what they are, how to use them in the SDK, and why they matter. Along the way, I’ll provide some analysis or commentary on the implications for developers (in terms of new capabilities or best practices).

Improved Authentication Protocol (OAuth 2.1 Integration)

One of the headline changes in the 2025-06-18 MCP spec is a new authentication and authorization model. In previous versions of MCP, authentication might have been more rudimentary or left to custom implementations. The updated spec embraces OAuth 2.0/2.1 standards, making it much easier to integrate MCP servers with existing enterprise authentication providers (think Azure AD, IdentityServer, Okta, etc.) and to secure MCP endpoints.

Specifically, the new protocol separates the roles of the Authentication Server and the Resource Server (the MCP server being the resource server). This is a classic OAuth concept: your MCP server can delegate auth to an OAuth 2.0 identity provider. The server can declare “I require a bearer token issued by XYZ authority with ABC scopes”, and clients know how to obtain and use those tokens. Under the hood, the spec aligned with OAuth 2.1 and introduced something called the Protected Resource Metadata (PRM) document (as per RFC 9728). In short, an MCP server, if it’s protected, will host a well-known JSON document (at a URL like /.well-known/oauth-protected-resource) that tells clients how to authenticate – including details like the expected OAuth 2.0 issuer or authorization server, supported scopes, token endpoints, etc.

For example, such a document might look like this (as shown in the docs):

{
  "resource": "https://api.example.com/v1/",
  "authorization_servers": [
    "https://auth.example.com"
  ],
  "scopes_supported": [
    "read:data",
    "write:data"
  ],
  "token_endpoint_auth_methods_supported": [
    "client_secret_basic",
    "client_secret_post"
  ]
}

This tells an MCP client: “My protected resources are under api.example.com/v1/. You need to go to auth.example.com to get a token. The scopes you might need are read:data or write:data. When you go to the token endpoint, you can use HTTP Basic or POST for client authentication,” etc. Armed with this info, the client can perform the standard OAuth Authorization Code flow or another flow to get a token. Once it has a token, it will include it in requests to the MCP server (likely as a bearer token in headers).
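To make the discovery step concrete, here is a minimal sketch of how a client could pull the key fields out of a PRM document with System.Text.Json – this is not SDK code (the SDK does this discovery for you); it just illustrates what the client learns from the document:

```csharp
using System;
using System.Linq;
using System.Text.Json;

public static class PrmDemo
{
    // Extract the authorization server and supported scopes from a PRM document.
    public static (string AuthServer, string[] Scopes) ParsePrm(string json)
    {
        using JsonDocument doc = JsonDocument.Parse(json);
        JsonElement root = doc.RootElement;
        string authServer = root.GetProperty("authorization_servers")[0].GetString()!;
        string[] scopes = root.GetProperty("scopes_supported")
            .EnumerateArray()
            .Select(s => s.GetString()!)
            .ToArray();
        return (authServer, scopes);
    }

    public static void Main()
    {
        // The PRM document from above, as served at
        // /.well-known/oauth-protected-resource.
        string prmJson = """
        {
          "resource": "https://api.example.com/v1/",
          "authorization_servers": ["https://auth.example.com"],
          "scopes_supported": ["read:data", "write:data"]
        }
        """;

        var (authServer, scopes) = ParsePrm(prmJson);
        // The client now knows where to authenticate and which scopes to request.
        Console.WriteLine(authServer);               // prints "https://auth.example.com"
        Console.WriteLine(string.Join(",", scopes)); // prints "read:data,write:data"
    }
}
```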

From a developer’s perspective, the MCP C# SDK now bakes in support for this OAuth flow, so you don’t have to manually handle a lot of it. According to Microsoft’s explanation, the SDK covers the discovery and token handling so you don’t have to “wrestle with token endpoints and scope configurations” yourself. This is huge – it means secure by default. If you use the updated SDK, you can get OAuth security up and running with minimal configuration, rather than writing custom auth code (which is error-prone).

Using OAuth in MCP C# SDK: So how do you actually use this as a developer?

  • On the MCP Server side: You need to configure your server to use OAuth. In the C# SDK, you might do this when setting up the server services. For instance, if you have an existing OAuth authority (like Azure AD), you’d configure the server with the required options (like accepted issuers, audience, scopes, etc.). The specifics might involve something like calling AddMcpServer().AddAuthorization(...) or adding an [Authorize] attribute to tools – this detail might be documented in the SDK docs. From Den Delimarsky’s blog, an example server configuration was given where they set up an in-memory OAuth server for testing. A simplified example might be:

    builder.Services.AddMcpServer()
        .WithTools<WeatherTools>() // your tools
        .WithHttpTransport(options =>
        {
            options.BaseUrl = "http://localhost:7071/";
            options.OAuth = new OAuthOptions
            {
                // (hypothetical options)
                Authority = "https://login.example.com/",
                Audience = "api://your-mcp-server",
                RequiredScopes = new[] { "access_as_user" }
            };
        });

    This isn’t actual code from the blog, but the idea is that the server declares what authority to trust and what scopes/tokens are needed. The SDK would then automatically serve the .well-known metadata and enforce that any incoming request has a valid JWT token with the right claims.
  • On the MCP Client side: The client (which could be your .NET app acting as a client, or an AI agent) needs to obtain tokens and present them. In the SDK, the client options now support OAuth as well. For example, when configuring a client transport, you can specify OAuth client details and a callback for handling the user authorization step. Den’s blog provided a snippet for a client:

    var transport = new SseClientTransport(new()
    {
        Endpoint = new Uri(serverUrl),
        Name = "Secure Weather Client",
        OAuth = new()
        {
            ClientName = "ProtectedMcpClient",
            RedirectUri = new Uri("http://localhost:1179/callback"),
            AuthorizationRedirectDelegate = HandleAuthorizationUrlAsync,
        }
    }, httpClient, loggerFactory);

    Here, OAuth is an instance of ClientOAuthOptions. The client is basically saying: “I’m an OAuth client with name X and I have a redirect URI Y”. The AuthorizationRedirectDelegate is a function that will be called when the client needs to direct the user to a login page (for the auth server). This could open a browser for the user to log in, or maybe handle it headlessly if possible. The blog notes that a lot of complexity is hidden: because the server exposes the PRM doc, the client can automatically discover the authorization server and needed scopes. The developer just provides some basic OAuth client info, and the SDK manages the rest (like launching the auth process and exchanging code for token).

The result of all this is that MCP clients and servers can perform a full OAuth 2.1 Authorization Code flow with minimal developer effort. Tokens are obtained and stored, and on subsequent tool calls, the token is presented. If you’re building an enterprise solution, you can integrate with your existing identity provider so that only authorized users (or services) can access certain MCP tools. For example, you might require a user to be in a certain Azure AD role to use an “AdminReport” tool on your MCP server. With OAuth, that’s feasible – the AI client would authenticate as that user and present a token that the server can validate and map to permissions.

Implications: The improved authentication means enterprise readiness. Companies can be more comfortable deploying MCP-based systems knowing they adhere to standard security practices. For developers, it reduces the time spent implementing security (which as Den humorously notes, is “always more complicated than you expect”). Instead of writing custom auth logic or inadvertently creating security holes, you lean on battle-tested standards. The separation of auth server and resource server also means you can use third-party or existing auth – you don’t have to run a custom identity system for your MCP deployment.

One thing to note is that the spec embraces OAuth 2.1. OAuth 2.1 is essentially a security best-practices consolidation of OAuth 2.0 – it folds in extension specs and mandates the authorization code flow with PKCE, among other hardening measures. By aligning with OAuth 2.1, the MCP spec designers ensure that the latest and most secure flows are the norm.
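Since PKCE is central to OAuth 2.1, it is worth seeing how small the mechanism is. Below is a minimal sketch of generating a PKCE code verifier and its S256 challenge in .NET – this is standard RFC 7636 logic rather than an MCP SDK API (the SDK performs the equivalent for you during the authorization code flow):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class Pkce
{
    // Generate a high-entropy code verifier (RFC 7636: base64url, 43-128 chars).
    public static string CreateVerifier()
    {
        var bytes = new byte[32];
        using var rng = RandomNumberGenerator.Create();
        rng.GetBytes(bytes);
        return Base64Url(bytes);
    }

    // Derive the S256 challenge: BASE64URL(SHA256(ASCII(verifier))).
    public static string CreateChallenge(string verifier)
    {
        using var sha = SHA256.Create();
        return Base64Url(sha.ComputeHash(Encoding.ASCII.GetBytes(verifier)));
    }

    // Base64url encoding: no padding, '-' and '_' instead of '+' and '/'.
    private static string Base64Url(byte[] bytes) =>
        Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');
}
```

The client sends the challenge when requesting authorization and the verifier when exchanging the code for a token, so an intercepted authorization code alone is useless to an attacker.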

From a developer experience perspective, Microsoft’s own commentary is that this should be straightforward and not overcomplicate things: it’s standard OAuth but “minus the boilerplate”. They want it to feel idiomatic, perhaps similar to how ASP.NET Core’s authentication is configured (which is also quite straightforward with the right extension methods).

To wrap up on auth: if you update to the latest MCP C# SDK and you have an existing MCP server, you’ll want to read the updated docs on how to enable this new auth. It likely involves slight changes to your initialization code (or none at all if you leave it open to public access). But given the importance of security, this is a feature to adopt sooner rather than later.

Also, don’t forget to implement proper OAuth flows in production – meaning, test your auth, use HTTPS, handle token refresh if needed, etc. The official advice is to follow security best practices from the MCP specification, which includes using resource indicators (an OAuth concept to prevent a token meant for one resource from being used at another – basically binding tokens to the intended audience), and validating all inputs. We’ll revisit best practices later, but keep security in mind as you integrate OAuth.

Elicitation Support (Interactive User Prompts)

The next big feature is elicitation. This addition is all about making interactions with AI more dynamic and interactive. In a normal AI query, a user asks something, the AI might call a tool, get a result, and answer. But what if the tool needs more information from the user before it can proceed? Or what if the AI (server side) wants to clarify something with the user in the middle of a workflow? This is where elicitation comes in.

What is Elicitation? Elicitation in MCP allows an MCP server (or the tools running on it) to request additional input from the user during an interaction. Think of it as the server saying: “I need to ask the user a follow-up question.” This turns a one-shot question/answer into a guided dialogue when needed. A classic example could be a scenario like: The user says “book me a flight”. The AI might call a “BookFlight” tool on an MCP server. But that tool might require parameters like destination, date, etc., which the user didn’t provide initially. With elicitation, the server can send back a prompt to the user asking for those missing pieces (“What is your destination?” or “Do you want first class or economy?”), and once the user replies, the interaction can continue.

In the MCP C# SDK, elicitation is implemented via an extension method ElicitAsync on the IMcpServer interface. Essentially, from within a tool’s code on the server, you can call server.ElicitAsync<T> to ask the client for input of type T. The server will specify the schema of what it’s asking (like expecting a string or number, etc.), and you can also provide a description (which can be shown to the user as a prompt). The protocol only supports primitive types for these elicited values (string, number, boolean) – no complex nested structures – since it’s meant for quick Q&A type clarifications.

Let’s look at how we might use this in code. The blog gave an example of a simple game tool where elicitation is used:

Imagine a tool method:

[McpServerTool, Description("A simple game where the user has to guess a number between 1 and 10.")]
public async Task<string> GuessTheNumber(IMcpServer server, CancellationToken token)
{
    // First ask the user if they want to play
    bool wantsToPlay = await server.ElicitAsync<bool>(
        name: "ready", 
        description: "Do you want to play the guessing game? (true/false)",
        cancellationToken: token);

    if (!wantsToPlay)
    {
        return "Maybe next time!";
    }

    // (If yes, then proceed to pick a number and have the user guess, etc., possibly more elicitation)
    ...
}

In the above pseudo-code, the tool uses server.ElicitAsync<bool> to ask the user a yes/no question. It provides a name “ready” (just an identifier for the input) and a description which likely will be shown to the user to prompt them. The client (AI agent) when it receives this, will pause the tool execution and prompt the user for input. In a chat interface, for example, the user would see a message generated by the AI like “The tool needs to know: Do you want to play the guessing game? (true/false)”. The user’s answer (true or false) is then sent back to the server, ElicitAsync returns that value, and the tool can continue.

Under the hood, what happened is the server responded to the client with a special message indicating an elicitation request, including the schema of the expected answer (in our case a boolean) and the prompt. The client’s role is to convey that to the user and collect a response. It’s an optional feature – meaning not all clients support it. Clients must declare that they support elicitation in their initialization handshake. For example, in the MCP C# SDK if you’re writing a client, you’d configure something like an ElicitationHandler in McpClientOptions. This handler would define how the client deals with elicitation (e.g., in a console app client, maybe it just prompts on the console; in a chat app, it sends a message to the user).
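For illustration, the heart of a console client's elicitation handler could be as simple as the following sketch. The PromptUser helper and the schema-type strings are hypothetical – the real ElicitationHandler works with the SDK's request/response types – but the job is the same: show the prompt, read an answer, and coerce it to the primitive type the schema asks for:

```csharp
using System;
using System.IO;

static class ConsoleElicitation
{
    // Hypothetical helper: display the server's question, read one line,
    // and coerce the answer to the requested primitive type
    // (elicitation only allows string, number, and boolean).
    public static object? PromptUser(
        string description, string schemaType, TextReader input, TextWriter output)
    {
        output.WriteLine($"[tool question] {description}");
        var answer = input.ReadLine()?.Trim().ToLowerInvariant();
        return schemaType switch
        {
            "boolean" => (object?)(answer is "true" or "yes" or "y"),
            "number"  => double.TryParse(answer, out var d) ? (object?)d : null,
            _         => answer  // default: return the raw string
        };
    }
}
```

A chat-based client would do the same thing, except the "prompt" becomes a message in the conversation and the "read" is the user's next reply.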

In the GitHub Copilot scenario, Copilot Chat supports elicitation to an extent – for instance, when it asks for confirmation before running a tool, that is a form of elicitation (though that specific case is really a permission prompt, it’s conceptually similar). Another scenario could be a multi-step command where Copilot asks you a follow-up question because the server needed it.

From a developer perspective, elicitation can greatly enhance user experience. It allows you to write tools that gather missing info interactively instead of failing or making the AI guess. Consider an enterprise use case: an AI agent that creates a ticket in a helpdesk system. The user might say “Open a ticket for issue X”. The tool might need a priority or department. Instead of the AI guessing or using defaults, the tool can elicit “What priority should this ticket be? (Low/Medium/High)”. This ensures the operation is done correctly and with user confirmation.

Using elicitation in the C# SDK is straightforward: just inject IMcpServer into your tool method parameters (the SDK will provide it via DI), and then call ElicitAsync. You specify a name for each input (unique per elicitation call so you can differentiate if you ask multiple things at once) and a short description prompt. You can ask one question at a time or even multiple at once (the protocol allows batching multiple questions in one elicitation request, each with a different name). The response from the user will be mapped back to those names.

Important: Elicitation should be used thoughtfully. Not every tool needs it, and you don’t want to overuse it to the point the AI assistant constantly pesters the user. It’s best for when the user’s intent was clear but incomplete. Also, keep in mind that not all clients (especially older ones) support elicitation. If a client doesn’t support it, they might just error out or ignore the request. The MCP spec allows clients to declare support, so ideally your client will say so up front.

Implications for developers: Elicitation means we can build more user-friendly AI workflows. It’s especially helpful for complex operations where the AI might not have all details. It shifts some control to the server side (server-driven interaction) whereas before, the AI model had to handle all conversation logic. Now your tool can drive the mini-dialogue for data it needs. This can reduce errors because you’re directly asking the user rather than the model hallucinating or assuming.

For developers implementing servers, you should validate elicited input on the server side as well – never trust raw input blindly. The blog’s best practices remind to validate all elicited user input. For example, if you elicited a number supposed to be between 1 and 10, double-check it’s in range. Just because the user (or AI on behalf of user) provided something doesn’t mean it’s safe or sensible.
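As a trivial sketch of that defensive posture, a guessing-game tool might gate the elicited value like this before using it (the helper name is illustrative):

```csharp
static class ElicitedInputValidation
{
    // Re-check the elicited guess on the server side: the schema says 1-10,
    // but a misbehaving client (or confused model) could still send anything.
    public static bool TryGetValidGuess(int elicited, out int guess)
    {
        guess = elicited;
        return elicited is >= 1 and <= 10;
    }
}
```

If validation fails, the tool can elicit again with a clearer prompt, or return a friendly error instead of proceeding with bad data.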

In summary, elicitation support in the MCP C# SDK opens the door to richer interactions and more reliable tool usage. It’s a powerful feature for building interactive AI agents that can perform multi-turn tasks.

Structured Tool Output (Schema-Based Results)

Another major enhancement from the June 2025 update is support for structured output from tools. This feature addresses a subtle but important aspect of tool integration: how the results of tools are passed back to the AI.

Before this update, when a tool returned data, it could return any content (often as text or maybe as a JSON string), and the AI model (or the client) would have to interpret that output. For example, if a tool returned a list of products in JSON, the LLM would receive it as plain text and attempt to parse it. There was no formal way for the tool to say “this is structured data with this schema” to the client/LLM – the model just saw the text and had to make sense of it. That was workable but not ideal, especially if we expect AIs to handle complex data reliably.

With the 2025-06-18 spec, tools can now explicitly mark their output as structured data, and the MCP protocol allows including a separate structured content section in the response. The MCP C# SDK makes use of this via the [McpServerTool(UseStructuredContent = true)] attribute setting.

How does this work in practice? Let’s say you have a tool that returns a list of products (with properties like id, name, price, etc.). Using the updated SDK, you can write it like so:

[McpServerTool(UseStructuredContent = true), Description("Gets a list of structured product data with detailed information.")]
public static List<Product> GetProducts(int count = 5)
{
    // ... fetch or generate a list of Product objects ...
    return productList;
}

Here, Product is presumably a class (or record) with various properties (e.g., Id, Name, Price, etc.). Because we set UseStructuredContent = true, the SDK will know that the output should be treated as structured data. It will automatically reflect on the return type List<Product> and generate a JSON Schema for that output. When a client asks for the tool metadata (via tools/list), the server’s response will include an outputSchema along with the input schema for this tool. For example, it might advertise something like:

"tools": [
  {
    "name": "get_products",
    "description": "Gets a list of structured product data with detailed information.",
    "inputSchema": {
      "type": "object",
      "properties": {
        "count": { "type": "integer", "default": 5 }
      }
    },
    "outputSchema": {
      "type": "object",
      "properties": {
        "result": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "id": { "type": "integer", "description": "Unique identifier for the product" },
              "name": { "type": "string", "description": "Name of the product" },
              "description": { "type": "string" },
              "price": { "type": "number" },
              "category": { "type": "string" },
              "inStock": { "type": "integer" },
              "rating": { "type": "number" },
              "features": { "type": "array", "items": { "type": "string" } },
              "specifications": { "type": "object", ... }
            }
          }
        }
      }
    }
  }
]

This schema is essentially describing the Product structure. Notice how detailed it can be – it even includes the descriptions for properties if available (like “Unique identifier for the product”). The SDK likely pulls these from data annotations, such as Description attributes on the class’s properties. In the snippet above, you can see properties like id, name, and price, each with its type.

Now, why is this useful? Because when the AI model (or the client application) receives data from the tool at runtime, it can be delivered in a more structured way. According to the protocol, the actual response from the tool call will have a section for structuredContent separate from any raw text content. For example, the JSON-RPC response might look like:

{
  "result": {
    "content": [
      { "type": "text", "text": "<some text representation or summary>" }
    ],
    "structuredContent": {
      "result": [ 
         { "id": 1, "name": "Laptop Pro", "description": "High-quality laptop...", "price": 278, ... },
         { ... another product ... }
      ]
    }
  },
  "id": 2,
  "jsonrpc": "2.0"
}

The key is that the structured data is in a machine-friendly format separate from the textual content. The AI model (if it’s aware) or the client can then use this directly. In an orchestrated scenario, the calling client might not even feed the raw JSON to the LLM as text; it might, for instance, render it in a UI or use it in some logic. If the AI model is directly consuming it, having the schema means the model was informed about what kind of data to expect, which could influence how it processes it (in some advanced agent implementations, they might incorporate the schema in the prompt to help the model parse it).

In plain terms: structured output reduces the chances of misunderstanding between the tool and the AI. The AI doesn’t have to “guess” the format or parse unstructured text. It knows, for example, that get_products returns an array of objects each with fields like id, name, price, etc., because the tool metadata said so. This can lead to more accurate and safer use of the data. For example, if an AI needs to present the data to the user, it can systematically go through the structured object rather than relying on brittle string parsing.

For the developer, using structured output in C# SDK is as simple as toggling that UseStructuredContent to true on the attribute and returning a serializable type (like a POCO class, or a collection of them, or even something like a Dictionary if needed). The SDK handles generating the schema and packaging the result. It likely uses System.Text.Json or similar to serialize the object. If you have custom serialization needs, you might need to ensure your types are JSON-serializable (e.g., use proper data types and perhaps attributes like [JsonPropertyName] if you want to adjust names, although the SDK might handle naming conversion automatically as it did for input parameters).
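To see what the SDK has to work with, here is how a Product-like type serializes with System.Text.Json using a camelCase naming policy – matching the lowercase property names (id, name, price, inStock) shown in the schema above. The Product shape here is assumed from the example, not taken from the SDK:

```csharp
using System.Text.Json;

// Hypothetical tool return type, mirroring the schema example above.
public record Product(int Id, string Name, decimal Price, int InStock);

static class StructuredOutputDemo
{
    private static readonly JsonSerializerOptions Options = new()
    {
        // Map C# PascalCase properties to the camelCase keys in the schema.
        PropertyNamingPolicy = JsonNamingPolicy.CamelCase
    };

    // Serialize the tool's return value the way a structuredContent payload
    // would carry it: plain JSON with camelCase keys.
    public static string Serialize(Product product) =>
        JsonSerializer.Serialize(product, Options);
}
```

If your property names need to diverge from a simple casing convention, [JsonPropertyName] attributes on the type give you per-property control.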

It’s worth noting that even before this update, one could return JSON by simply returning a string of JSON (like in the MonkeyTools example earlier, they returned JsonSerializer.Serialize(monkey)). But that was just a string as far as MCP was concerned. Now, we have first-class support.

Implications: With structured outputs, AI integrations can be more robust. The AI can trust the format of the tool response, making it easier to do things like present data in a table or perform calculations on results. In the context of enterprise apps, if a tool returns a complex object (say an invoice or a medical record), the structured approach ensures no piece of data is lost or misinterpreted because of formatting issues. It’s akin to having a typed API instead of a plain text response.

From an AI perspective, if the LLM has been designed or fine-tuned to work with structured content (or if the client uses an intermediary to handle structured data), this can dramatically improve correctness. For example, if a tool returns a numeric value in a structured field, the client might ensure the LLM sees it not just as “some text that looks like a number” but explicitly knows it’s a number 123. This might reduce errors in reasoning about units or comparisons, etc.

For .NET developers, I would encourage using structured outputs for any non-trivial data that the AI might need to use. It’s essentially self-documenting your output and making life easier for the AI client. The overhead is minimal – just return actual objects instead of formatted strings, and set the attribute flag.

One thing to be mindful of: The client or AI needs to support it. Most likely, backward compatibility is maintained (if a client doesn’t support structuredContent, it might just see the textual part). But modern clients that speak spec 2025-06-18 will utilize it.

Resource Links in Tool Responses

The update also introduced the concept of Resource Links in tool outputs. This is a neat feature that allows a tool to return a reference (typically a URI/URL) to some resource that was created or is relevant, rather than (or in addition to) returning the raw content of that resource. The idea is to improve “resource discovery and navigation” for AI tools.

Consider scenarios where a tool’s primary job is to create or fetch a resource that might be large or better accessed via a link. For example:

  • A tool that generates a report (maybe a PDF or an Excel file) – instead of returning the entire report content (which could be huge and not suitable to stuff into a chat), the tool can return a URL where that report can be downloaded.
  • A tool that interfaces with a wiki or document system – maybe it returns a link to a page rather than dumping the whole page text.
  • A tool that creates a new database entry or an order in a system – it could return a link to that newly created resource.

With resource links, the MCP server can include one or more links as part of the tool’s result, in a structured way that the client recognizes. In the C# SDK, this is done by returning a CallToolResult object (instead of your own data type), and adding ResourceLinkBlock entries to its content list.

For example, the blog gives a sample tool:

[McpServerTool]
[Description("Creates a resource with a random value and returns a link to this resource.")]
public async Task<CallToolResult> MakeAResource()
{
    int id = new Random().Next(1, 101); // generate an ID 1-100
    var resource = ResourceGenerator.CreateResource(id);

    var result = new CallToolResult();
    result.Content.Add(new ResourceLinkBlock() {
        Uri = resource.Uri,
        Name = resource.Name
    });

    return result;
}

In this snippet, ResourceGenerator.CreateResource(id) presumably creates some resource and returns an object that has a URI (maybe an HTTP link or a file path) and a Name. They then create a CallToolResult (which is likely a class provided by the SDK for custom tool responses), and add a ResourceLinkBlock to the result’s content. The ResourceLinkBlock contains a Uri and a Name. This will tell the client: “Hey, here’s a resource located at resource.Uri, and here’s a human-readable name for it.” The MCP protocol likely defines that a ResourceLinkBlock is a content block that should be interpreted as a link.

What can the client do with that? If the client is an interactive UI (like a chat app), it might render that as a clickable link for the user, or it might automatically fetch it. For instance, if Copilot Chat got a resource link, maybe in the chat it would output something like: “📎 Created resource: MyResourceName”. If the client is another program, it could decide to follow the link or store it.

Essentially, resource links allow tools to hand off results by reference rather than by value. This is very useful when dealing with large data or actions that produce an external artifact.

From a developer usage perspective: to use this in your tool, you return a CallToolResult – a special return type that the SDK knows how to serialize into the MCP response format. You then populate it with text blocks, link blocks, or other content block types (there are likely additional block types for images and other media). In the example, only a ResourceLinkBlock is added, but you could add several if your tool returns multiple links (e.g., “here are links to 3 files I created”).

One might ask, how is this different from just returning a URL as text? The difference is the client can explicitly know it’s a resource link with a certain intended use, as opposed to just an arbitrary string that happens to be a URL. By using the structured block, it’s machine-readable. Also, a text URL might accidentally be changed or truncated by an AI whereas a resource link block is out-of-band of the AI’s text generation (the AI doesn’t have to regurgitate it, the client already has it). This ties into trust as well: the AI doesn’t have to hallucinate the link, it’s given directly by the tool.

Imagine an AI agent fetching a file: previously, if a tool returned a link as text, the AI model would have to detect “this is a URL, I should maybe ask to fetch it.” With resource links, the client might programmatically know to fetch or at least highlight it.
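On the client side, handling a link block can be nothing more than pattern-matching on the content type. The content model below is a stand-in for illustration, not the SDK's actual types:

```csharp
using System;

// Stand-in content model for illustration only.
public abstract record ContentBlock;
public record TextBlock(string Text) : ContentBlock;
public record ResourceLink(string Uri, string Name) : ContentBlock;

static class LinkRendering
{
    // Render a tool-result block for a chat UI: links become a labeled
    // attachment line instead of raw text the model would have to echo.
    public static string Render(ContentBlock block) => block switch
    {
        ResourceLink link => $"📎 {link.Name}: {link.Uri}",
        TextBlock text    => text.Text,
        _                 => string.Empty
    };
}
```

Because the link arrives as a typed block rather than free text, the renderer never depends on the model faithfully reproducing a URL.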

Implications: Resource links enhance what kind of tasks AI agents can do via MCP. They allow for richer workflows where the output might be something external. For example, an AI can instruct a tool to create a Google Doc, and the tool can return the share link, then the AI can provide that to the user. Or an AI asks a tool to execute a long-running job; instead of blocking, the tool returns a link where the user can check status or download results when ready.

For developers implementing such tools, it’s a straightforward pattern: produce the resource in your code (maybe saving a file or creating an entry accessible via a URL), then return a link to it. You should ensure the link is accessible to the client (taking into account security – maybe it requires the same auth token or some access control).

In enterprise environments, resource links could point to internal web portals or APIs. A benefit of using resource indicators (as mentioned earlier in auth best practices) is you can embed resource identifiers in tokens. For instance, the token used by the client to call the tool might also be usable to fetch the returned resource if it’s on the same domain and protected.

One more subtle point: Resource links help with keeping the conversation concise. Instead of dumping maybe pages of text, you give a link. This is beneficial given token limits of models and clarity for users.

Schema & Metadata Enhancements

Beyond the headline features, the June 2025 update also brought some improvements to the MCP schema and metadata that developers will find useful for extensibility and user-friendliness. Two notable ones are:

  • Enhanced Metadata Support via _meta fields
  • Human-Friendly Titles for tools, resources, prompts

Metadata (_meta fields): The MCP spec includes the concept of a _meta field on various objects to allow extension. In this update, they expanded the availability of _meta on more interface types. For example, the C# SDK’s Tool class (which likely underpins any server tool) now has a metadata dictionary you can use. The blog gave a snippet:

public class CustomTool : Tool
{
    public ToolMetadata Meta { get; set; } = new()
    {
        ["version"] = "1.0.0",
        ["author"] = "Your Name",
        ["category"] = "data-analysis"
    };
}

This means you can attach arbitrary metadata to your tools (or other objects like resources or prompts) in key-value form. The example shows adding a version, author, and category. The MCP framework doesn’t necessarily use these itself (they’re for your or client’s use), but they will be exposed via the protocol. So a client asking for tools/list might see a _meta block for each tool if you provided one. This is useful if you want to categorize or filter tools on the client side, or display extra info.

For instance, in a large organization, you could tag certain tools as “internal” or “beta” or categorize them (reporting, CRM, engineering, etc.) so that an AI client could reason like “oh, this is a data-analysis tool, maybe I use it only in certain contexts” or a UI could group them. It’s mostly about extensibility – letting developers include any additional info without breaking the standard schema.

Human-Friendly Titles: Previously, an MCP tool was primarily identified by its name (which, by default, was derived from the method name, often in snake_case or similar). That’s fine for machines, but sometimes the names aren’t the prettiest for displaying to users. The update introduces a separate title field for tools, resources, and prompts. The title is meant to be a more readable name or display name, whereas the name remains a codified identifier.

In the C# SDK, support for this is added via the [McpServerTool] attribute parameters. You can specify Name and Title independently. For example:

[McpServerTool(Name = "echo", Title = "Echo Tool")]
[Description("Echoes the message back to the client.")]
public static string Echo(string message) => $"Echo: {message}";

Here, Name is what the tool is called in code/protocol (and what the AI would use to invoke it, e.g., “echo”), but Title is a nicer label “Echo Tool”. If we didn’t specify, by default Name would be “echo” (since our method is Echo) and Title would default to empty or null. Now, when a client lists tools, it could show “Echo Tool” to the user instead of just “echo”. This is purely for human UX; the AI might not care, but if the AI UI shows a list of tools or logs something, it looks nicer.

Similarly, resources and prompt definitions can have titles. Prompts are another MCP concept – reusable prompt objects (for example, system messages or instruction templates) that likewise carry a name and, now, a title.

These small changes improve clarity. Imagine an enterprise with dozens of tools – having a Title allows for spaces, capitalization, etc., making it more presentable.

Implications: For developers, it’s mostly about taking advantage of these to create a polished experience. Use _meta to enrich your tool definitions with anything that might be useful. For example, if you have documentation links or support info for a tool, you could include that. Or track a version like in the example, which might help if tools evolve.

Using Titles is recommended especially if your tool names are abbreviations or not immediately clear. E.g., you might have a tool name get_user_info with title “Get User Information” – the latter is nicer to display in a UI or logs.
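If you consume tools that predate titles, a client can still synthesize a friendlier label from the snake_case name. A hypothetical fallback might look like this:

```csharp
using System;
using System.Linq;

static class ToolDisplay
{
    // Prefer the explicit Title; otherwise derive a display name from the
    // snake_case tool name: "get_user_info" -> "Get User Info".
    public static string DisplayName(string name, string? title = null) =>
        !string.IsNullOrWhiteSpace(title)
            ? title
            : string.Join(' ', name
                .Split('_', StringSplitOptions.RemoveEmptyEntries)
                .Select(w => char.ToUpperInvariant(w[0]) + w[1..]));
}
```

This is only a cosmetic fallback – the protocol-level name stays untouched, since that is what the AI uses to invoke the tool.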

These changes also hint that tooling around MCP is improving – e.g., UIs that show the list of tools (like VS Code) could show the Title. Developers should thus supply titles for better integration with such tools.

All of these features from 5.1 to 5.5 were part of the June 2025 update and are supported in the latest MCP C# SDK. To start using them, make sure to update your NuGet package to the latest version (at least the one that corresponds to spec 2025-06-18). Once updated, you’ll have access to the new attribute properties and classes like ElicitResult, ResourceLinkBlock, etc.

Now that we’ve covered the what and how of the new features, let’s talk briefly about best practices when using the MCP C# SDK, especially in light of these new capabilities.

Best Practices for Using the MCP C# SDK

As with any powerful technology, using MCP effectively requires following certain best practices – both to ensure security and to provide the best experience. Some of these we’ve touched on, but it’s worth compiling a checklist:

  • Secure Your MCP Server (Use OAuth and Proper Auth Flows): If your MCP server is exposing any sensitive or privileged actions, don’t leave it open. Implement the OAuth-based auth as provided. Always use secure protocols (HTTPS for any network transport). Follow the principle of least privilege – issue tokens with only the scopes a given client needs. Microsoft’s guidance specifically says to implement proper OAuth flows for production. This includes handling refresh tokens if needed and verifying tokens properly on each request.
  • Use Resource Indicators for Tokens: When dealing with OAuth, consider using resource indicators or audience restrictions on tokens. This means if a token is meant for your MCP server, it shouldn’t be usable elsewhere, and vice versa. It helps prevent a stolen token from being widely useful beyond its intended scope.
  • Validate All Elicited Input (and tool inputs in general): Treat elicited input like any user input – validate it on the server side before trusting it. If you ask for a number 1-10 and get 50, handle that gracefully (maybe ask again or default). If you ask for a filename, be wary of path traversal. Basically, do not blindly execute or use data from elicitation or tool parameters without checking. The AI might misunderstand the user or a malicious user might try to exploit a tool via specially crafted input.
  • Be Descriptive in Metadata: Use descriptions, titles, and meta info to your advantage. A clear Description on a tool can greatly help the AI decide when to use it (the description is often used by the AI’s prompting logic). It’s often written as an imperative or explanatory sentence. For example, “Gets a list of monkeys” is clear for a GetMonkeys tool. If a tool has any side effects or costs, you might mention that (e.g., “Deletes a user account (use with caution)” in the description might signal the AI to be careful).
  • Design Tools to Be Idempotent and Side-Effect Aware: This is more of a design tip – if a user repeats a question or if an AI agent decides to call a tool multiple times, know what happens. If your tool creates resources, ensure it handles duplicates or multiple calls gracefully (maybe by returning the same link or by creating multiple entries but clearly communicating). Idempotent tools (same input -> same result without additional side effects) are easier for AI to work with, though not always possible.
  • Keep Tools Focused and Coarse-Grained: Each tool should ideally do one thing well (like an API endpoint). Don’t make a single tool do too many unrelated actions based on parameters. If it’s too conditional, the AI might misuse it. Instead, have multiple tools if necessary. Also, if a tool will retrieve a lot of data, consider whether a resource link is better than dumping it. Think about token limits – the AI can only consume so much text.
  • Testing and Simulations: Test your MCP server and tools with actual AI clients. For example, run scenarios in Copilot Chat or a custom test harness. See if the AI picks the right tool and how it responds. You might find you need to tweak descriptions or add some system prompts to steer the AI. For instance, you might configure your AI client with an initial prompt that explains the available tools and their intended usage.
  • Use Logging: The SDK integrates with .NET logging; enable logs (especially in development) to see what’s happening – e.g., when a tool is invoked, what inputs were passed, what was returned. This can help debug why an AI did something unexpected.
  • Stay Updated on Spec Changes: MCP is still evolving. As new spec versions come out (versions are date-stamped, like 2025-06-18), update your SDK and read the changelogs. New features or changes may require you to adjust your tools, and keeping your tools and clients current ensures compatibility with other MCP implementations.
  • Open Source Contributions and Learning: Since the MCP C# SDK is open source, don’t hesitate to peek at the code if something isn’t clear. You can also contribute. There may be a community building around it (check the GitHub issues/discussions). By contributing or even just following the project, you’ll gain insights and directly help shape the tooling that you yourself use.
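To ground the validation advice above, here is a small sketch of the kind of server-side checks a tool might run before trusting elicited input – the numeric range check and path-traversal guard from the first bullet. The helper class, method names, and limits are illustrative, not part of the MCP SDK:

```csharp
using System;
using System.IO;

// Illustrative input-validation helpers for elicited/tool input.
// Names and limits here are examples, not part of the MCP SDK.
public static class ToolInputGuard
{
    // Validate a numeric answer expected in [min, max]; report a message the
    // server can use to re-ask the user instead of blindly proceeding.
    public static bool TryValidateRange(int value, int min, int max, out string? error)
    {
        if (value < min || value > max)
        {
            error = $"Expected a value between {min} and {max}, got {value}.";
            return false;
        }
        error = null;
        return true;
    }

    // Reject path traversal: the resolved path must stay under the allowed root,
    // so inputs like "../etc/passwd" cannot escape the sandbox directory.
    public static bool IsPathWithinRoot(string root, string userSuppliedName)
    {
        string fullRoot = Path.GetFullPath(root);
        string candidate = Path.GetFullPath(Path.Combine(fullRoot, userSuppliedName));
        return candidate == fullRoot
            || candidate.StartsWith(fullRoot + Path.DirectorySeparatorChar, StringComparison.Ordinal);
    }
}
```

A tool that elicits "a number from 1 to 10" would call `TryValidateRange` and, on failure, re-elicit or fall back to a default rather than acting on the bad value.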

Following these practices will help ensure that your AI + MCP integration is robust, secure, and effective. The official documentation and blog posts (some cited here) are great resources – in particular, the security best practices section of the MCP spec contains more detailed guidelines, especially around OAuth, which enterprises should adhere to.

Now, with a solid grounding in what MCP is, how to use the C# SDK, and what’s new in the latest update, let’s explore some concrete use cases where MCP can shine for .NET developers and architects. This will help contextualize when you might reach for MCP in your projects.

Use Cases and Scenarios

MCP isn’t just a theoretical protocol – it was created to solve real-world integration challenges in AI applications. Let’s discuss a few scenarios where MCP (and the C# SDK) can be applied and bring value. We’ll cover use cases spanning AI assistants, developer tools, and enterprise data integration, which are particularly relevant to .NET professionals and architects.

AI Assistants and Chatbots

One obvious use case is building AI-driven assistants or chatbots that have enhanced capabilities thanks to MCP. Imagine a customer support chatbot that, beyond just answering from a knowledge base, can actually perform actions: create a support ticket, look up an order status, schedule a technician visit, and so on. Each of those actions can be an MCP tool tied into backend systems.

For example, you could have an MCP server with tools like LookupOrder(orderId), CreateSupportTicket(details), and ScheduleAppointment(date, customerId), all implemented in C# to interact with your databases or APIs. The chatbot (which could run on Azure OpenAI or any LLM platform that supports MCP clients) would, upon understanding the user’s request, call the appropriate tool.

MCP provides a structured, secure way to do this:

  • The AI assistant doesn’t need direct database connectivity or special-case code for each action – it just knows “call tool X with these parameters”.
  • All the heavy lifting is done by your .NET code on the server side, where you have full control (and can enforce business logic, validations, etc.).
  • Security is enforced via OAuth, ensuring the chatbot only does what it’s authorized to (e.g., maybe it has a token that only allows read-only queries for certain data for regular users, but a different scope for admin actions).
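To make this concrete, here is a minimal sketch of what two of those support tools could look like with the SDK's attribute model. It assumes the ModelContextProtocol package; the tool names come from the scenario above, while the in-memory dictionary is a stand-in for your real order database:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using ModelContextProtocol.Server;

// Support tools exposed to the chatbot. The [Description] text is what helps
// the AI decide when to call each tool. In-memory data stands in for a real DB.
[McpServerToolType]
public class SupportTools
{
    private static readonly Dictionary<string, string> OrderStatus = new()
    {
        ["A-1001"] = "Shipped",
        ["A-1002"] = "Processing",
    };

    [McpServerTool, Description("Gets the current status of an order by its ID.")]
    public string LookupOrder([Description("The order ID to look up")] string orderId)
        => OrderStatus.TryGetValue(orderId, out var status)
            ? $"Order {orderId} is {status}."
            : $"No order found with ID {orderId}.";

    [McpServerTool, Description("Creates a support ticket and returns its ticket number.")]
    public string CreateSupportTicket([Description("Description of the customer's issue")] string details)
        => $"Created ticket #{Guid.NewGuid().ToString("N")[..8]} for: {details}";
}
```

Because the business logic lives in ordinary C# methods, you can unit-test it, inject services, and enforce validation before anything touches your backend.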

This pattern can be used for internal enterprise assistants too – say an AI helper for employees that can fetch HR information, company policy documents, or initiate internal workflows (like file an IT ticket). Instead of exposing a bunch of internal APIs to an external AI, you keep them behind an MCP server and let the AI in through that single standardized gateway (with monitoring and logging of tool use).

Another example: voice assistants or smart devices – you might integrate MCP into a voice assistant such that when a user asks, “Hey, what’s the current inventory of product X?”, the assistant’s logic uses an MCP client to call your inventory tool. The benefit is that any MCP-capable AI can do this – you’re not writing an Alexa skill, a separate Google Assistant integration, and a separate chat web UI backend; you write one MCP server.

One more forward-looking scenario: multiple AI agents collaborating. Because MCP is an open protocol, you could have different AIs (maybe from different vendors or specialized for different tasks) all using the same MCP server to fetch information or perform tasks in a consistent way. This avoids lock-in to one AI provider’s plugin system.

Developer Tools and IDE Integration

Interestingly, one area where MCP is already shining is in developer tools – specifically IDEs like VS Code. We saw how GitHub Copilot in VS Code uses MCP to let you extend Copilot with custom tools. This essentially turns your IDE into an AI-enhanced environment where the AI can invoke local or cloud-based tools.

Consider building developer productivity tools:

  • Code Generators or Refactoring Tools: Suppose you have an internal code generation tool (maybe something that sets up a project template or enforces code standards). You could expose it via MCP. In Copilot Chat, you could ask “Hey, create a new microservice using our company template,” and the AI could call your MCP tool that scaffolds that project.
  • Debugging/Diagnostics Tools: You might have tools that fetch logs, or run database queries, or check system health. An AI agent could use those to answer questions like “What was the error in last night’s build?” by calling a GetBuildLogs(buildId) tool.
  • Multi-agent Orchestration: There’s a pattern in AI called “Tool use” or “Agent pattern”. MCP basically formalizes the tool interface. You could orchestrate multiple specialized tools (some might even call other AIs). For example, one tool might query a knowledge base, another might call an external API, and the AI orchestrates calling them in sequence to solve a complex request (like the agent decides: first call a search tool, then feed result to a calculation tool, etc.). MCP gives a uniform interface to do so.
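On the client side of that orchestration, an agent or test harness connects to a server, discovers its tools, and invokes them through the SDK's client API. A rough sketch, assuming the ModelContextProtocol package's `McpClientFactory` and stdio transport; the server command and the `GetBuildLogs` tool are hypothetical examples:

```csharp
using System;
using System.Collections.Generic;
using ModelContextProtocol.Client;

// Launch a local MCP server over stdio and connect to it.
// The project name and tool name below are illustrative.
var transport = new StdioClientTransport(new StdioClientTransportOptions
{
    Name = "BuildTools",
    Command = "dotnet",
    Arguments = ["run", "--project", "BuildToolsServer"],
});

var client = await McpClientFactory.CreateAsync(transport);

// Discover available tools – their names and descriptions are what an AI
// (or an orchestrator) uses to decide which one to call.
foreach (var tool in await client.ListToolsAsync())
    Console.WriteLine($"{tool.Name}: {tool.Description}");

// Invoke a tool directly, as an agent would after selecting it.
var result = await client.CallToolAsync(
    "GetBuildLogs",
    new Dictionary<string, object?> { ["buildId"] = "20250618.1" });
```

The same `CallToolAsync` pattern works whether the caller is a human-driven test harness or an LLM agent chaining several tools in sequence.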

For developer tools, .NET developers can create MCP servers that tie into the rich .NET ecosystem:

  • There’s mention of an MCP server for Git and one for GitHub – these could allow an AI to perform git operations or query GitHub issues/pull requests. Indeed, imagine asking “AI, check how many PRs are open in repo X and who’s assigned” – it could call a GitHub MCP tool to get that info, then tell you.
  • A Playwright MCP server was also mentioned – it lets an AI automate a browser (for testing or data gathering) in a controlled way. Instead of the AI free-form controlling a browser (which is tricky and not standardized), it works through an MCP tool interface with operations like “open page” and “click button”.

The existence of a Filesystem MCP server is interesting: an AI could read/write files on your system through a controlled MCP server (with proper sandboxing). That’s like giving the AI the ability to edit code or config files on command, but through a managed layer (so it can’t just do anything; you define the allowed operations).

For enterprise devs, this means you can also integrate AI into your custom dev tools. For instance, if your company has a custom deployment pipeline with specific steps, you could make an MCP server for it. Then a developer could ask the AI assistant “Deploy the latest version of service X to staging” and the AI calls the deployment tool.

Enterprise Systems and Data Integration

This scenario is about bridging AI with large-scale enterprise systems (where .NET often thrives). Many enterprises have a wealth of data in CRM systems, ERP systems, data warehouses, etc. Often, an AI could provide value by fetching insights or acting on those systems. MCP can be the bridge.

Examples:

  • Business Intelligence Q&A: You have a data warehouse or reporting system. You create an MCP server that can run certain pre-defined queries or procedures (to prevent arbitrary runaway queries). An AI client could translate a user’s natural language question (“What were our sales in Europe last quarter compared to the same quarter last year?”) into a call to, say, a GetSalesComparison(region, quarter1, quarter2) tool. The tool runs the query and returns structured data or even a link to a chart (resource link pointing to a dashboard).
  • IT Automation: Imagine a sysadmin AI assistant. It could use MCP to interface with systems like Active Directory, monitoring tools, or cloud management APIs. For instance, a tool might be RestartServer(name) or GetCpuUsage(serverName). An AI in your ops team’s chat could then do “Check CPU on server X” and get a result, or “Restart server X” and confirm an action. Because it’s all going through defined tools, you can audit and control usage.
  • Workflow Orchestration: Many companies have workflows that span multiple systems (say, an order goes from a website to fulfillment to shipping). An AI could serve as an orchestrator or an assistant to manage these if it has tools for each step. MCP servers could be set up for each subsystem, and the AI can coordinate. This is especially helpful if you want an AI to handle routine tasks like “flag orders over $10k for review” or “generate a summary of this week’s new support tickets”.
  • Content Management and Search: An AI that can fetch documents from SharePoint or a knowledge base via MCP could answer employee queries with actual documents or excerpts. Tools could do things like SearchDocuments(query) returning top matches (structured), and GetDocument(docId) maybe returning a link or summary. The update’s structured output and resource links fit well here (returning results as structured list of titles & links, etc.).
  • Multi-user scenario: MCP doesn’t specify UI, so one could even build a multi-user chat where an AI mediates. For example, an AI facilitator that has tools to fetch user profiles, schedule meetings, etc. Team members could ask it and it uses those tools behind the scenes.
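A sketch of what that document-search tool might look like with structured output: in the updated SDK, a tool that returns a typed object can have an output schema generated for it, so the client receives the matches as structured content rather than free text. The record shape, tool name, and hard-coded results below are illustrative:

```csharp
using System.ComponentModel;
using ModelContextProtocol.Server;

// A structured result type: the SDK can derive an output schema from it,
// so clients get titles and links as data instead of prose to parse.
public record DocumentHit(string Title, string Url, double Score);

[McpServerToolType]
public class KnowledgeBaseTools
{
    [McpServerTool, Description("Searches company documents and returns the top matches.")]
    public DocumentHit[] SearchDocuments(
        [Description("Free-text search query")] string query)
    {
        // Replace with a real search backend (SharePoint, Azure AI Search, ...).
        return
        [
            new DocumentHit("Expense Policy 2025", "https://intranet.example/docs/expenses", 0.92),
            new DocumentHit("Travel Guidelines", "https://intranet.example/docs/travel", 0.87),
        ];
    }
}
```

A companion `GetDocument(docId)` tool could then return a resource link to the full document instead of dumping its contents into the conversation.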

The general theme is: MCP allows AI to act with agency in a controlled environment. Instead of giving an AI model direct access to your database or an unrestricted plugin, you give it a curated toolbox (the MCP server’s tools). This greatly reduces risk because you define exactly what can be done, and you can incorporate checks within those tools (like an “UpdateRecord” tool might require a confirmation or might log an audit trail).

From an enterprise architect perspective, MCP could be seen as part of your integration layer. Much like you might have API gateways or service buses, you might in the future have an “AI gateway” (just speculating) where various AI services connect to internal tools via MCP. It could standardize how all AI interactions are funneled (Anthropic’s vision is multiple vendors adopting it, which would mean you write one integration and use many AI models interchangeably – a nice way to avoid vendor lock-in in the AI space).

Given these use cases, you might wonder about performance and scaling. MCP servers can be hosted just like any other .NET service – you could run them in Azure (for example, as a web app using the HTTP transport, or in containers). As we saw, the SDK even makes containerization straightforward (James Montemagno’s blog showed adding container support to deploy an MCP server), so you can scale out behind load balancers if needed. The protocol’s HTTP transport supports streaming via server-sent events (SSE), which can handle a high throughput of requests.
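For HTTP hosting, a minimal ASP.NET Core entry point might look like the sketch below. It assumes the ModelContextProtocol.AspNetCore package and follows its builder-pattern naming; treat the exact method names as illustrative rather than guaranteed:

```csharp
// Program.cs for an MCP server hosted as an ASP.NET Core app, suitable for
// containers and load balancers (assumes ModelContextProtocol.AspNetCore).
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddMcpServer()
    .WithHttpTransport()        // streamable HTTP endpoint with SSE
    .WithToolsFromAssembly();   // register all [McpServerToolType] classes

var app = builder.Build();

app.MapMcp();                   // expose the MCP endpoint
app.Run();
```

From here, containerizing is the same as for any ASP.NET Core app: a standard Dockerfile or the SDK's built-in container publishing, then scale out behind your load balancer.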

Implications of MCP for .NET Professionals

It’s clear that MCP and the MCP C# SDK open up new possibilities for .NET developers in the AI space. But what does it mean for you as a professional or for your team’s strategy?

  • Democratizing AI Integration: With MCP, adding AI capabilities to your .NET application becomes less about inventing new APIs and more about exposing existing functionality in a standardized way. This lowers the barrier to entry. If you’re a C# developer, you don’t need deep AI expertise to make your software AI-friendly – you just need to wrap your services as MCP tools. Conversely, if you’re an AI/ML engineer, you don’t have to learn the internals of every enterprise system – you can rely on MCP endpoints provided by the system owners.
  • Future-Proofing with Open Standards: Betting on an open protocol like MCP can be safer in the long run than a proprietary plugin system. Since it’s backed by an open-source community (with support from players like Anthropic and Microsoft), it’s more likely to evolve in the open and be adopted widely. .NET support via this SDK means the Microsoft ecosystem is taking it seriously. For architects, adopting MCP might be a strategic choice to avoid getting siloed into one AI vendor’s ecosystem.
  • Acceleration of AI Projects: We often see a gap between AI prototypes and production systems – MCP can help bridge that by making it easier to connect prototypes (often done in Python or using some AI service) with production .NET systems. Instead of months to integrate an AI with your secure environment, you expose an MCP server in weeks or days and use that. The June 2025 update specifically improved a lot of enterprise concerns (security, structured data) – so it’s even more compelling for production use now.
  • Skills and Roles: .NET developers might find that building MCP tools becomes a new task in projects. It’s almost like creating APIs, but specifically for AI consumption. Understanding how to design good tools (with the right granularity, descriptions, etc.) and how to secure them will become a valued skill. Similarly, being able to operate and monitor an MCP server (maybe as part of your microservices) will be important.
  • Collaboration between AI teams and Dev teams: MCP can foster better collaboration – AI specialists can say “I need the AI to be able to do X”, and the .NET dev can quickly expose X as a tool on an MCP server, rather than AI folks hacking things on their own or vice versa. It creates a contract (spec/schema) that both sides can agree on.
  • Ecosystem and Community: We should note that while we focused on the C# SDK, there are official SDKs in other languages too (Python, TypeScript, Java, and more). This means you might run a Python-based MCP server for some things and a C# one for others, and a client could use both. .NET might not always be the only piece, but it will often be in the mix, especially for enterprise backends. It’s good to be aware of cross-language aspects; e.g., if others in your company use Python, they can build against the official MCP Python SDK – you could share knowledge and experiences on how to best design MCP interfaces.

In essence, MCP is one of those technologies that can change how we architect AI-enabled software, by formalizing the “glue” between AI and the rest. For .NET professionals, it’s an opportunity to leverage your existing ecosystem (NuGet, ASP.NET, Azure etc.) to play in the AI domain in a first-class way.

To wrap up, let’s provide some resources where you can learn more and then conclude our discussion.

Resources and Further Learning

If you’re excited to dig deeper into MCP and the C# SDK, here are some valuable resources:

  • Official MCP Documentation: The Model Context Protocol website has an introduction and detailed docs on the spec, concepts, and guides. This is a great starting point to understand the architecture and all features of MCP in a language-agnostic way.
  • MCP C# SDK Documentation and GitHub: The GitHub repository for the MCP C# SDK is a treasure trove. It includes a README with usage examples, as well as issues and discussions where you can learn from others. You can find it here: modelcontextprotocol/csharp-sdk on GitHub. The official docs also include C#-specific material (e.g., the modelcontextprotocol.github.io pages).
  • Microsoft .NET Blog Posts: We referenced a few:
    • “Build a Model Context Protocol (MCP) server in C#” by James Montemagno (April 2025) – a step-by-step guide (some of which we followed).
    • “MCP C# SDK Gets Major Update: Support for Protocol Version 2025-06-18” by Mike Kistler (July 2025) – the announcement of the update, which we covered in detail. It’s worth reading for the official explanations and any nuance we might have skipped.
    • Den Delimarsky’s blog “OAuth in the MCP C# SDK: Simple, Secure, Standard” (July 2025) – an excellent deep dive into the new authentication features with example code and a more narrative explanation of why it’s simpler now.
  • Community Content: The .NET and AI community is picking up MCP. There are already some blog posts and videos (for instance, on DEV.to or Hashnode, e.g. “MCP C# SDK: What’s New in Protocol 2025-06-18”, and possibly presentations on YouTube or conferences). Keep an eye on community blogs for tips.
  • Anthropic’s Announcement: For context on MCP’s origins and vision, Anthropic’s original announcement “Introducing the Model Context Protocol” is a quick read (3 min) that gives the why of MCP. It’s more high-level but inspirational.
  • Samples Repository: The official blog mentioned a samples repo and Montemagno’s post links to some sample servers (like Monkey server, Git, etc.). Cloning those and running them can be a great way to see real MCP servers in action and learn how they’re built.
  • Security References: If you plan to implement authentication, understanding OAuth 2.1 concepts is useful. The OAuth 2.1 draft specification consolidates current OAuth best practices, and RFC 8707 defines resource indicators. The Den blog references RFC 9728 for protected resource metadata – reading its summary could help you grasp the discovery flow.

Finally, don’t forget to engage with the community – join discussions on GitHub or forums if you have questions. As MCP is relatively new, best practices are still being discovered, and by sharing your experiences you might help shape its development.

Conclusion

The Model Context Protocol (MCP) and the MCP C# SDK together provide a powerful pathway for .NET developers to embrace the era of AI-integrated applications. We began by understanding MCP as an open standard – a kind of universal adapter that lets AI systems safely and effectively connect to external data and services. We saw how the MCP C# SDK makes implementing this protocol in .NET not only possible but developer-friendly, leveraging familiar patterns like dependency injection and attributes to turn your code into AI-accessible tools.

The June 18, 2025 MCP update brought the SDK (and protocol) to a new level of maturity, addressing real-world needs:

  • A robust OAuth 2.1-based authentication system, giving us enterprise-grade security for AI interactions.
  • Elicitation, allowing AI to engage in multi-turn dialogues to clarify information and making our tools more interactive.
  • Structured outputs, ensuring that the data our tools provide is machine-readable and schema-backed, leading to more reliable AI reasoning.
  • Resource links, bridging AI with external resources in a clean way – handing off file links or references without breaking the conversational flow.
  • Metadata and usability tweaks like titles, which polish the developer and user experience.

For .NET professionals, these changes mean that integrating AI is no longer a black box magic trick – it’s becoming a structured engineering task, much like building any other distributed system component. We treat AI not just as a model but as a participant in our architecture that communicates via a well-defined protocol.

We walked through a practical example of creating an MCP server with the C# SDK, demonstrating that with just a few lines of code, we could empower an AI (Copilot, in our case) to perform new tasks. We also discussed how to leverage the new features in our code – from adding OAuth configuration to writing elicitation logic and returning structured data.

The implications and use cases we explored paint an exciting picture:

  • AI assistants that can truly act on users’ intent by safely executing backend operations.
  • Developer tools that become smarter, letting AI handle mundane coding or ops tasks by invoking MCP-exposed commands.
  • Enterprises unlocking their data silos to AI insights while maintaining control and compliance.

As you venture into building with MCP, keep the best practices in mind: security first, clarity in design, and thorough testing. The technology is new, but with the guidelines from Microsoft and the community, you can avoid pitfalls and ensure your AI integrations are both powerful and responsible.

In conclusion, the MCP C# SDK gives .NET developers a kind of “AI superpower” – the ability to quickly create a bridge between AI and the rich world of .NET applications and services. With the June 2025 update, this bridge is stronger and more capable than ever. It’s a great time to start experimenting with MCP in your projects, be it a hackathon prototype or a production enterprise solution. The learning curve is gentle, and the potential rewards – in terms of automation, user satisfaction, and innovation – are high.

The future of .NET and AI integration looks bright, and MCP is poised to play a key role in it. So go ahead: update your SDK, spin up an MCP server, and let your imagination (and your AI) reach out to new tools and data. Happy coding, and happy prompting!
