
Azure-Samples/foundry-hosted-langchain-demos

Internal HR Benefits Agent

A sample AI agent built with LangChain/LangGraph that helps employees with HR benefits questions. This project is designed as an azd starter template for deploying hosted AI agents to Microsoft Foundry.

What it does

The agent uses company HR documents (via Azure AI Search) and tool-calling to:

  • Answer questions about employee benefits (health insurance, dental, vision, 401k, etc.)
  • Look up enrollment deadlines and dates
  • Search the web for current information when the knowledge base doesn't have the answer
  • Run code via Code Interpreter for data analysis tasks

Architecture

The agent connects to a Foundry Toolbox MCP endpoint that provides knowledge-base retrieval, web search, and code interpreter as tools. The LangGraph agent decides when to call each tool based on the user's question.
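To make the routing idea concrete, here is an illustrative sketch in plain Python (no LangChain required) of the kind of decision the agent makes; in the real agent this choice is delegated to the model's tool-calling rather than keyword rules, and the tool names below simply mirror the Toolbox capabilities described above:

```python
# Illustrative sketch of tool routing: pick a tool based on the question.
# The real LangGraph agent lets the model decide via tool-calling; this
# keyword version only shows the shape of the decision.

def route_question(question: str) -> str:
    """Return the name of the tool the agent would reach for."""
    q = question.lower()
    if any(kw in q for kw in ("401k", "dental", "vision", "enroll", "benefit")):
        return "knowledge_base_search"   # grounded answers from HR docs
    if any(kw in q for kw in ("current", "latest", "news")):
        return "web_search"              # fall back to the web
    if any(kw in q for kw in ("calculate", "plot", "analyze")):
        return "code_interpreter"        # run code for data analysis
    return "knowledge_base_search"       # default to the knowledge base

print(route_question("When do I need to enroll in dental?"))
# → knowledge_base_search
```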

The staged demos show the progression from local to hosted:

| Stage | Script | What it adds |
|-------|--------|--------------|
| 0 | `agents/stage0_local_model.py` | Fully local agent with Ollama |
| 1 | `agents/stage1_foundry_model.py` | Azure OpenAI / Foundry model deployment |
| 2 | `agents/stage2_foundry_iq.py` | Foundry IQ grounding via Azure AI Search |
| 3 | `agents/stage3_foundry_toolbox.py` | Foundry Toolbox (KB + web search + code interpreter) |
| 4 | `agents/stage4_foundry_hosted.py` | Hosted agent with Responses protocol |

Prerequisites

Optional for Stage 0: a local Ollama installation (only needed to run the fully local demo).

Quick start

Deploy to Azure

```shell
azd auth login
azd up
```

Region: The template restricts deployment to regions that support all features (Responses API, evaluations, red teaming): eastus2, francecentral, northcentralus, swedencentral.

Run locally

Install dependencies

```shell
uv sync
```

Run the hosted agent locally

Start the local hosted-agent server:

```shell
azd ai agent run
```

Invoke the agent from another terminal:

```shell
azd ai agent invoke --local "What benefits are there, and when do I need to enroll by?"
```
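Under the hood, a hosted agent speaks the Responses protocol, so an invocation ultimately arrives as a JSON request. The sketch below builds a minimal Responses-style body for the question above; the exact field set accepted by this repo's server is an assumption here, not taken from its code:

```python
# Hedged sketch: construct a minimal Responses-style request body.
# The "input" message-list shape follows the general Responses protocol;
# any additional fields the hosted server expects are not shown.
import json

def build_responses_request(question: str) -> str:
    payload = {
        "input": [
            {"role": "user", "content": question},
        ],
    }
    return json.dumps(payload)

body = build_responses_request(
    "What benefits are there, and when do I need to enroll by?"
)
print(body)
```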

Run the staged demos

The staged scripts can be run independently for teaching purposes:

```shell
uv run python agents/stage0_local_model.py
uv run python agents/stage1_foundry_model.py
uv run python agents/stage2_foundry_iq.py
uv run python agents/stage3_foundry_toolbox.py
```

Or call a deployed agent via the SDK:

```shell
uv run python agents/call_foundry_hosted.py "What PerksPlus benefits are there?"
```

Workflows

The workflows/ directory is deployed as a separate Foundry-hosted service and demonstrates LangGraph workflows:

| Stage | Script | What it demonstrates |
|-------|--------|----------------------|
| 1 | `workflows/stage1_simple_nodes.py` | Pure data-transformation pipeline (no LLM) |
| 2 | `workflows/stage2_agent_nodes.py` | Workflow backed by a Foundry model |
| 3 | `workflows/stage3_foundry_hosted_as_agent.py` | Hosted workflow entry point |
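The stage 1 pattern — a pipeline of nodes with no LLM — can be sketched in plain Python: each node is a function that takes the shared state dict and returns an updated copy. This mirrors the LangGraph node/state idea without requiring `langgraph` itself, and is illustrative rather than a copy of `workflows/stage1_simple_nodes.py`:

```python
# Minimal node-based pipeline sketch: plain functions chained over a
# shared state dict, in the spirit of a LangGraph workflow. No LLM involved.

def normalize(state: dict) -> dict:
    return {**state, "text": state["text"].strip().lower()}

def tokenize(state: dict) -> dict:
    return {**state, "tokens": state["text"].split()}

def count(state: dict) -> dict:
    return {**state, "word_count": len(state["tokens"])}

def run_pipeline(state: dict) -> dict:
    # Run each node in order, threading the state through.
    for node in (normalize, tokenize, count):
        state = node(state)
    return state

result = run_pipeline({"text": "  Dental enrollment closes in November  "})
print(result["word_count"])  # → 5
```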

Evaluation scripts

Scripts for quality evaluation, red teaming, and scheduled runs are in scripts/:

| Script | Description |
|--------|-------------|
| `scripts/quality_eval.py` | Run quality evaluation (task adherence, groundedness, relevance) |
| `scripts/red_team_scan.py` | Run a one-time red team scan with attack strategies |
| `scripts/scheduled_eval.py` | Set up a daily quality evaluation schedule |
| `scripts/scheduled_red_team.py` | Set up a daily red team schedule |

Run them with:

```shell
uv run scripts/quality_eval.py
uv run scripts/red_team_scan.py
```

Note: Red teaming requires a supported region (East US 2, Sweden Central, etc.). See evaluation region support.

Debug with azd

After deploying, use these commands to inspect and troubleshoot your hosted agent:

```shell
# View container status, health, and error details
azd ai agent show

# Fetch recent logs
azd ai agent monitor

# Stream logs in real time
azd ai agent monitor -f
```

Observability

The hosted agent server exports its own HTTP-layer traces (request/response timing) to Application Insights automatically when APPLICATIONINSIGHTS_CONNECTION_STRING is set.

To capture sensitive data in traces (tool call arguments, prompts, responses), set enable_content_recording=True in the enable_auto_tracing() call. This is useful for debugging but should be disabled in production.

To query traces in Application Insights:

```kusto
dependencies
| where timestamp > ago(1h)
| where customDimensions has "gen_ai.operation.name"
| extend opName = tostring(customDimensions["gen_ai.operation.name"])
| extend toolName = tostring(customDimensions["gen_ai.tool.name"])
| extend toolArgs = tostring(customDimensions["gen_ai.tool.call.arguments"])
| project timestamp, name, opName, toolName, toolArgs
| order by timestamp desc
```

Environment variables

| Variable | Required | Description |
|----------|----------|-------------|
| `FOUNDRY_PROJECT_ENDPOINT` | Yes | Foundry project endpoint |
| `AZURE_AI_MODEL_DEPLOYMENT_NAME` | Yes | Model deployment name (e.g., `gpt-5.2`) |
| `AZURE_OPENAI_ENDPOINT` | Stages only | Azure OpenAI endpoint for local stage scripts |
| `AZURE_AI_SEARCH_SERVICE_ENDPOINT` | Stage 2 only | Azure AI Search endpoint |
| `AZURE_AI_SEARCH_KNOWLEDGE_BASE_NAME` | Stage 2 only | Knowledge base name (default: `zava-company-kb`) |
| `CUSTOM_FOUNDRY_AGENT_TOOLBOX_NAME` | No | Toolbox name (default: `hr-agent-tools`) |
| `APPLICATIONINSIGHTS_CONNECTION_STRING` | No | App Insights connection string for tracing |
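Since the two required variables must be present before the agent starts, a fail-fast check is useful. The helper below is an illustrative sketch, not part of this repo; only the variable names come from the table above:

```python
# Hedged sketch: fail fast on missing required settings before starting
# the agent. `missing_required` is illustrative, not a repo utility.
import os

REQUIRED = ("FOUNDRY_PROJECT_ENDPOINT", "AZURE_AI_MODEL_DEPLOYMENT_NAME")

def missing_required(env=os.environ) -> list:
    """Return the required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

# With only one of the two required variables set, the other is reported.
print(missing_required({"FOUNDRY_PROJECT_ENDPOINT": "https://example"}))
# → ['AZURE_AI_MODEL_DEPLOYMENT_NAME']
```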
