
Microsoft Agent Framework: Build Agentic AI Services for Your Products

Why building custom AI agents gives you more control than MCP servers, and how Microsoft Agent Framework enables specialized agentic experiences in .NET

Tools like Copilot, Cursor, and Claude are great for developers and power users who enjoy customization. If you have a software product, you could expose it through an MCP server. This is basically an “API for agents” that any AI tool can consume.

But there is another path: build the agent yourself.

When you build custom agents around your products, you control the entire user experience. Instead of providing a protocol and hoping users figure out how to configure it, you deliver a specialized agentic system tuned to your domain.

The trade-off? You pay for LLM usage. But that trade-off might be exactly right for your business.

MCP: The “API for Agents” Approach

The Model Context Protocol was introduced by Anthropic in November 2024. It provides a standardized way to connect AI systems with data sources. Think of it as USB-C for AI applications: a universal port that works with Claude, Cursor, ChatGPT, and other tools.

Building an MCP server for your product means any compatible AI assistant can interact with your system. Users get flexibility. You get broad compatibility.

But MCP puts the burden on users:

  • They must discover and install your server
  • They manage JSON configuration files and API keys
  • They deal with authentication setup across multiple services
  • They navigate tool overload as they add more servers

According to Clutch research, 72% of users abandon apps during onboarding if it requires too many steps. MCP setup involves multiple steps: server discovery, JSON configuration, API key management. That is friction before users get any value.

The Andreessen Horowitz deep dive on MCP notes: “More tools activated = more tokens used on every conversation turn, and performance starts to deteriorate.”

Custom Agents: Control the Experience

When you build your own agent, you hide all that complexity. Users do not configure servers or manage API keys. They just use your product. The AI capabilities are integrated, optimized, and ready to go.

What you gain:

  1. Unified Experience. One interface, not scattered tool configurations.
  2. Domain Specialization. Your agent understands your product’s specific vocabulary, workflows, and edge cases.
  3. Hidden Complexity. Users never see MCP, JSON configs, or authentication flows.
  4. Data Flywheel. Every interaction generates proprietary data that improves your system.

Research from Metronome found that “predictability, not price point, drives enterprise adoption.” Users avoid AI tools when they cannot forecast costs or understand setup requirements. Custom agents with bundled experiences remove that friction.

The Cost Trade-Off

When you build custom agents, you pay for LLM usage: your users perform tasks, and the API charges land on your bill, bundled into their subscription.

This is a feature, not a bug.

Stripe’s research on AI pricing shows that 56% of successful AI companies use hybrid models (subscription + usage) rather than pure pay-per-token billing. Users want predictability. Bundling LLM costs into your subscription provides that.

The alternative is Bring Your Own Key (BYOK). It adds friction:

  • Users create accounts with OpenAI, Anthropic, or others
  • They obtain and manage API keys
  • They handle rate limits and separate billing
  • They debug issues across your product AND their provider

Metronome’s field report found that “end users avoid AI features due to uncertainty about charge impact.” One company reported usage declined because admins “didn’t trust they’d stay in budget.”

BYOK shifts cost to users but also shifts complexity. For many products, absorbing LLM costs delivers a better experience.
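To see why absorbing LLM costs can still work economically, here is a back-of-envelope margin calculation. The token volume and prices below are illustrative assumptions, not vendor quotes; substitute your provider's actual rates:

```csharp
// Illustrative assumptions only — adjust to your provider's real pricing.
double pricePerMillionTokens = 5.00;      // assumed blended $ per 1M tokens
double tokensPerUserPerMonth = 2_000_000; // assumed monthly usage per active user
double subscriptionPerUser = 30.00;       // your monthly price per seat

// Cost you absorb per user, and what remains of the subscription.
double llmCostPerUser = tokensPerUserPerMonth / 1_000_000 * pricePerMillionTokens;
double grossMarginPerUser = subscriptionPerUser - llmCostPerUser;

Console.WriteLine($"LLM cost/user: ${llmCostPerUser:F2}");    // $10.00
Console.WriteLine($"Margin/user:   ${grossMarginPerUser:F2}"); // $20.00
```

Under these assumptions the bundled price comfortably covers usage, and the user sees one predictable number instead of a metered bill.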

Microsoft Agent Framework

This is where Microsoft Agent Framework fits. It is the unified .NET SDK for building these specialized agentic services.

Core Concepts

ChatClientAgent is the fundamental building block:

using Microsoft.Agents.AI;

var agent = new ChatClientAgent(
    chatClient,
    name: "InventoryAssistant",
    instructions: """
        You help users manage their inventory. You can search items,
        check stock levels, and place reorders when supplies run low.
        """
);
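Once constructed, the agent is invoked like any other service call. This sketch continues the example above (reusing the `agent` variable); the prompt text is made up for illustration:

```csharp
// Send a user message; the framework forwards the instructions,
// conversation state, and message to the underlying IChatClient.
AgentRunResponse response = await agent.RunAsync(
    "How many units of SKU-1042 are left in stock?");

Console.WriteLine(response.Text);
```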

WorkflowBuilder lets you chain specialized agents:

var triage = new ChatClientAgent(chatClient,
    name: "Triage",
    instructions: "Classify the customer request and determine urgency.");

var resolver = new ChatClientAgent(chatClient,
    name: "Resolver",
    instructions: "Handle the request based on triage classification.");

var workflow = new WorkflowBuilder(triage)
    .AddEdge(triage, resolver)
    .Build();

Tools with AIFunctionFactory give agents capabilities:

var agent = new ChatClientAgent(
    chatClient,
    new ChatClientAgentOptions
    {
        Name = "OrderAgent",
        Instructions = "You can check orders and process refunds.",
        ChatOptions = new ChatOptions
        {
            Tools = [
                AIFunctionFactory.Create(CheckOrderStatus),
                AIFunctionFactory.Create(ProcessRefund)
            ]
        }
    }
);

// [Description] comes from System.ComponentModel and tells the model what the tool does.
[Description("Check the status of a customer order")]
static async Task<OrderStatus> CheckOrderStatus(string orderId)
{
    // Your order lookup logic (database query, service call, etc.).
    // orderStore is a placeholder for your own data access layer.
    var order = await orderStore.GetOrderAsync(orderId);
    return order.Status;
}

Enterprise Features

Observability with built-in OpenTelemetry:

builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .AddSource("Microsoft.Agents.AI")
        .AddConsoleExporter());

Every tool call, every decision, every LLM invocation is traced.

Security through Entra ID integration, RBAC, and On-Behalf-Of flows. Agents operate with user permissions, not service account privileges.

Compliance features include human-in-the-loop approval, content safety, and audit logging.

Provider Agnostic

Works with any IChatClient:

// Azure OpenAI
var chatClient = new AzureOpenAIClient(...)
    .GetChatClient("gpt-4o")
    .AsIChatClient();

// OpenAI direct
var chatClient = new OpenAIClient(...)
    .GetChatClient("gpt-4o")
    .AsIChatClient();

// Local with Ollama (endpoint first, then model name)
var chatClient = new OllamaChatClient(
    new Uri("http://localhost:11434"), "llama3.2");

Switch providers by configuration, not code.
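A minimal factory sketch of what "switch by configuration" can look like. The configuration keys (`AI:Provider`, `AI:Endpoint`, `AI:Model`) and the local Ollama endpoint are assumptions for illustration:

```csharp
// Pick the provider from configuration; every branch yields an IChatClient,
// so the agent code that consumes it never changes.
IChatClient CreateChatClient(IConfiguration config) =>
    config["AI:Provider"] switch
    {
        "azure" => new AzureOpenAIClient(
                new Uri(config["AI:Endpoint"]!),
                new DefaultAzureCredential())
            .GetChatClient(config["AI:Model"]!)
            .AsIChatClient(),
        "ollama" => new OllamaChatClient(
            new Uri("http://localhost:11434"), config["AI:Model"]),
        _ => throw new InvalidOperationException("Unknown AI provider")
    };
```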

When to Use Each Approach

  • MCP Server: developer-focused products where users expect to integrate with their existing AI tools
  • Custom Agent: products where you want to control the experience and reduce user friction
  • Both: MCP for power users, a custom agent for mainstream adoption
Microsoft’s own guidance in the Azure Cloud Adoption Framework recommends evaluating whether a SaaS agent meets your needs. If not, build custom using Agent Framework, Azure AI Foundry, or Copilot Studio.

The Defensibility Question

Bessemer Venture Partners predicts that vertical AI market capitalization will be at least 10x the size of legacy vertical SaaS. The key insight: the moat is not the LLM. It is your data and domain expertise.

Generic API access provides no defensibility. Anyone can rebuild your functionality. But custom agents that deeply integrate with your product:

  • Create data flywheels from every interaction
  • Build switching costs through workflow integration
  • Compound competitive advantage over time

As OpenAI’s product leaders note: “Foundation models are becoming commodities. What was groundbreaking in 2022 is increasingly standard in 2025.”

The differentiation is in what you build on top.

Getting Started

Install the NuGet Packages

dotnet add package Microsoft.Agents.AI --prerelease
dotnet add package Microsoft.Agents.AI.OpenAI --prerelease

Configure Your Environment

Follow our Setting Up the Agent Environment guide for Ollama (free, local), OpenAI, Azure OpenAI, or OpenRouter.

Try the Patterns

Start with Pattern 1: Prompt Chaining to see multi-agent workflows in action.
