Setting Up the Agent Environment
Set up your LLM provider to run the DotNetAgents pattern examples. Choose from Azure OpenAI (the default), Ollama, GitHub Models, OpenAI, or OpenRouter. For a free local option, try Ollama.
Prerequisites
- .NET 10.0 SDK or later - download from dotnet.microsoft.com
- Git - to clone the patterns repository
- An LLM provider - see options below
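A quick way to confirm the SDK prerequisite is met before going further. This is an optional sketch; `sdk_version` is just an illustrative variable, not something the patterns use:

```shell
# Confirm the .NET SDK is installed and report its version
if command -v dotnet >/dev/null 2>&1; then
  sdk_version=$(dotnet --version)
  echo "Found .NET SDK $sdk_version"
else
  sdk_version=""
  echo "dotnet not found - install the .NET SDK first"
fi
```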
Choose Your LLM Provider
Azure OpenAI
Pay-as-you-go. Enterprise deployments with Azure security and compliance. This is the default provider.
Ollama
Free. Run open-source LLMs locally. No API keys, no costs.
GitHub Models
Free tier. Use your GitHub account to access AI models. Quick setup with gh auth.
OpenAI
Pay-as-you-go. Direct access to OpenAI's latest models like GPT-4o.
OpenRouter
Pay-as-you-go. Access 100+ models from multiple providers with one API key.
Option 1: Azure OpenAI
Default | Enterprise
Azure OpenAI is the default provider, suited to enterprise deployments that need Azure compliance and security features.
Prerequisites
- Azure subscription
- Azure OpenAI resource with a deployed model
- API key from your Azure OpenAI resource
Set Environment Variables
PowerShell:
$env:AZURE_OPENAI_API_KEY = "your-azure-api-key"
$env:AZURE_OPENAI_ENDPOINT = "https://your-resource.openai.azure.com/"
Bash:
export AZURE_OPENAI_API_KEY=your-azure-api-key
export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
Or create a .env file in the patterns directory:
AZURE_OPENAI_API_KEY = "your-azure-api-key"
AZURE_OPENAI_ENDPOINT = "https://your-resource.openai.azure.com/"
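Before running a pattern, it can help to confirm both variables are actually visible to your shell. A minimal bash sketch (the values below are the same placeholders as above; `${!var}` is bash indirect expansion):

```shell
# Placeholder values - substitute your real key and endpoint
export AZURE_OPENAI_API_KEY=your-azure-api-key
export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
# Both variables must be non-empty before a pattern can authenticate
for var in AZURE_OPENAI_API_KEY AZURE_OPENAI_ENDPOINT; do
  if [ -n "${!var}" ]; then echo "$var is set"; else echo "$var is MISSING"; fi
done
```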
Run a Pattern
# Provider selection happens in code; credentials come from the environment variables above
cd patterns/01-prompt-chaining/src/PromptChaining
dotnet run
Option 2: Ollama
Free | Local
Ollama lets you run open-source LLMs locally on your machine. No API keys, no costs, works offline.
Step 1: Install Ollama
Download and install from ollama.com
Step 2: Start Ollama
Windows/Mac: Ollama runs automatically after installation. Look for the llama icon in your system tray.
Linux: Start the Ollama server:
ollama serve
Step 3: Pull a Model
ollama pull llama3.2
For smaller machines, try phi3 or llama3.2:1b.
Step 4: Set Environment Variables (Optional)
Ollama uses http://localhost:11434 by default. Only set this if using a custom endpoint:
PowerShell:
$env:OLLAMA_ENDPOINT = "http://localhost:11434"
Bash:
export OLLAMA_ENDPOINT=http://localhost:11434
Or create a .env file in the patterns directory:
OLLAMA_ENDPOINT = "http://localhost:11434" # optional
Step 5: Run a Pattern
cd patterns/01-prompt-chaining/src/PromptChaining
dotnet run
Option 3: GitHub Models
Free tier | Cloud
Use your GitHub account to access AI models like GPT-4o, Llama, and more. If you have the GitHub CLI installed, setup is just one command.
Step 1: Install GitHub CLI
If you don’t have it, install from cli.github.com
Step 2: Authenticate with GitHub
gh auth login
Step 3: Get Your Token
# This outputs your GitHub token
gh auth token
Step 4: Set Environment Variables
PowerShell:
$env:GITHUB_TOKEN = (gh auth token)
Bash:
export GITHUB_TOKEN=$(gh auth token)
Or create a .env file in the patterns directory:
GITHUB_TOKEN = "github_pat_..."
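To check that a usable token is actually available before running a pattern, you can combine the two gh commands above into one guarded check. A sketch; `gh_token_status` is an illustrative variable, not something the patterns read:

```shell
# Sanity check: is a GitHub token available from the gh CLI?
if command -v gh >/dev/null 2>&1 && gh_token=$(gh auth token 2>/dev/null); then
  gh_token_status=available
  echo "Token available from gh CLI"
else
  gh_token_status=missing
  echo "No gh token - run 'gh auth login' or set GITHUB_TOKEN manually"
fi
```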
Available Models
- gpt-4o - OpenAI GPT-4o
- gpt-4o-mini - Fast and affordable GPT-4o
- Llama-3.3-70B-Instruct - Meta's Llama 3.3
- Mistral-Large-2411 - Mistral Large
See github.com/marketplace/models for the full list.
Step 5: Run a Pattern
cd patterns/01-prompt-chaining/src/PromptChaining
dotnet run
Option 4: OpenAI
Pay-as-you-go
Use OpenAI’s API directly with models like GPT-4o.
Step 1: Get an API Key
Create an account at platform.openai.com and generate an API key.
Step 2: Set Environment Variables
PowerShell:
$env:OPENAI_API_KEY = "sk-..."
Bash:
export OPENAI_API_KEY=sk-...
Or create a .env file in the patterns directory:
OPENAI_API_KEY = "sk-..."
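Since OpenAI keys always start with sk-, a quick prefix check can catch copy-paste mistakes early. A sketch using a placeholder key; `key_format` is just an illustrative variable:

```shell
# Placeholder key for illustration - real keys come from platform.openai.com
export OPENAI_API_KEY=sk-your-key-here
# OpenAI keys start with "sk-"; anything else is likely a paste error
case "$OPENAI_API_KEY" in
  sk-*) key_format=ok;  echo "OPENAI_API_KEY looks well-formed" ;;
  *)    key_format=bad; echo "OPENAI_API_KEY does not start with sk-" ;;
esac
```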
Step 3: Run a Pattern
cd patterns/01-prompt-chaining/src/PromptChaining
dotnet run
Option 5: OpenRouter
Pay-as-you-go | Multi-Model
OpenRouter provides access to 100+ models from multiple providers (OpenAI, Anthropic, Google, Meta, Mistral, and more) through a single API. Great for trying different models without managing multiple API keys.
Step 1: Get an API Key
Create an account at openrouter.ai and generate an API key from the Keys page.
Step 2: Set Environment Variables
PowerShell:
$env:OPENROUTER_API_KEY = "sk-or-..."
Bash:
export OPENROUTER_API_KEY=sk-or-...
Or create a .env file in the patterns directory:
OPENROUTER_API_KEY = "sk-or-..."
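You can also confirm OpenRouter is reachable from your machine before running a pattern. A sketch assuming the public /models listing (which, at the time of writing, does not require a key); `openrouter_up` is an illustrative variable:

```shell
# Optional reachability probe for the OpenRouter API
if command -v curl >/dev/null 2>&1 && curl -sf https://openrouter.ai/api/v1/models >/dev/null; then
  openrouter_up=yes
  echo "OpenRouter API is reachable"
else
  openrouter_up=no
  echo "Could not reach OpenRouter (offline, or curl not installed)"
fi
```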
Popular Models
- openai/gpt-4o-mini - Fast and affordable GPT-4o
- anthropic/claude-sonnet-4 - Anthropic's Claude Sonnet 4
- google/gemini-2.0-flash - Google's Gemini 2.0 Flash
- meta-llama/llama-3.3-70b-instruct - Meta's Llama 3.3
See openrouter.ai/models for the full list.
Step 3: Run a Pattern
cd patterns/01-prompt-chaining/src/PromptChaining
dotnet run
Troubleshooting
Azure: Authentication failed
Check that your AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT are correct. Get your API key from the Azure portal under your Azure OpenAI resource.
Ollama: “connection refused”
Make sure Ollama is running. On Windows/Mac, check the system tray. On Linux, run ollama serve.
Ollama: Model not found
Pull the model first: ollama pull llama3.2
GitHub Models: 401 Unauthorized
Make sure you’re authenticated with gh auth login and your GITHUB_TOKEN is valid. Run gh auth token to verify.
GitHub Models: Model not found
Check the exact model name at github.com/marketplace/models. Model names are case-sensitive.
OpenAI: 401 Unauthorized
Check that your OPENAI_API_KEY is correct and has available credits.
OpenRouter: 401 Unauthorized
Check that your OPENROUTER_API_KEY is correct. Keys start with sk-or-.
OpenRouter: Model not found
Check the model name format at openrouter.ai/models. Models use provider/model-name format.
.env file not loading
Make sure your .env file is in the patterns directory or a parent directory. The file is loaded automatically when running patterns.
Next Steps
Now that your environment is set up, try the first pattern: Prompt Chaining, in patterns/01-prompt-chaining.