
Reasoning trace is not being returned when using an AzureChatOpenAI model from Microsoft Foundry #34439

@jackirvine97

Description

Checked other resources

  • This is a bug, not a usage question.
  • I added a clear and descriptive title that summarizes this issue.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
  • This is not related to the langchain-community package.
  • I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.

Package (Required)

  • langchain-openai

Example Code (Python)

from langchain_openai import AzureChatOpenAI

deployment_name = "gpt-5-nano"
model = AzureChatOpenAI(
    azure_endpoint=config.azure_ai_endpoint,
    api_key=config.azure_ai_api_key,
    azure_deployment=deployment_name,
    api_version="2025-03-01-preview",
    streaming=True,
    reasoning={"effort": "medium", "summary": "detailed"},
)

# Taken directly from https://docs.langchain.com/oss/python/langchain/models#reasoning
for chunk in model.stream("what is capital of France?"):
    reasoning_steps = [r for r in chunk.content_blocks if r["type"] == "reasoning"]
    if reasoning_steps:
        print(reasoning_steps)

Output

Actual output: nothing is printed; no reasoning content blocks are returned.

Expected output: the model's reasoning blocks should be printed as they stream.
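For context, the loop in the repro filters standard content blocks by `type`. A minimal stand-alone sketch of that filter over plain dicts (the block shapes here are illustrative assumptions modeled on LangChain's standard content-block format, not actual model output):

```python
# Stand-alone sketch of the content-block filtering used in the repro.
# The dicts below are assumed, illustrative block shapes, not real
# output from gpt-5-nano.
chunks = [
    [{"type": "reasoning", "reasoning": "Recall that Paris is France's capital."}],
    [{"type": "text", "text": "The capital of France is Paris."}],
]

for content_blocks in chunks:
    # Same filter as the repro: keep only reasoning blocks.
    reasoning_steps = [b for b in content_blocks if b["type"] == "reasoning"]
    if reasoning_steps:
        print(reasoning_steps)
```

With the bug, the equivalent filter over the real streamed chunks never matches, because no `reasoning` blocks are present in `chunk.content_blocks`.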

Description

  • I want to print the reasoning trace from my model.
  • The model is gpt-5-nano, deployed on Microsoft Foundry.
  • However, I cannot get the reasoning trace to output.
  • I have confirmed the model itself works correctly: running this exact scenario directly against the Responses API returns the reasoning trace successfully.
  • This leads me to think the issue is with either the docs or AzureChatOpenAI.

System Info

System Information

OS: Darwin
OS Version: Darwin Kernel Version 22.1.0: Sun Oct 9 20:14:30 PDT 2022; root:xnu-8792.41.9~2/RELEASE_ARM64_T8103
Python Version: 3.12.12 (main, Dec 13 2025, 10:24:45) [Clang 14.0.3 (clang-1403.0.22.14.1)]

Package Information

langchain_core: 1.2.0
langchain: 1.1.3
langsmith: 0.4.59
langchain_azure_ai: 0.1.8
langchain_openai: 0.3.35
langchain_text_splitters: 1.0.0
langgraph_sdk: 0.3.0

Optional packages not installed

langserve

Other Dependencies

aiohttp: 3.13.2
azure-ai-agents: 1.2.0b6
azure-ai-inference: 1.0.0b9
azure-ai-projects: 1.1.0b4
azure-core: 1.36.0
azure-cosmos: 4.15.0b1
azure-identity: 1.25.1
azure-search-documents: 11.6.0
httpx: 0.28.1
jsonpatch: 1.33
langgraph: 1.0.5
numpy: 2.0.2
openai: 2.8.1
openai-agents: 0.6.1
opentelemetry-api: 1.38.0
orjson: 3.11.5
packaging: 25.0
pydantic: 2.12.5
pyyaml: 6.0.3
requests: 2.32.5
requests-toolbelt: 1.0.0
tenacity: 9.1.2
tiktoken: 0.12.0
typing-extensions: 4.15.0
uuid-utils: 0.12.0
zstandard: 0.25.0

    Labels

    bug (Related to a bug, vulnerability, unexpected error with an existing feature) · external · openai (`langchain-openai` package issues & PRs)
