feat: add LLM_ENABLE_THINKING env var for thinking-mode LLMs#852

Draft
deardarlingoose wants to merge 1 commit into `main` from `feat/llm-enable-thinking`

Conversation

@deardarlingoose
Contributor

  • Adds LLM_ENABLE_THINKING setting (bool | None, default None) to control thinking-mode LLMs

    • When set, passes enable_thinking via extra_body to OpenAILike/LiteLLM
    • Fixes GLM-4.5-Air-FP8 returning content: null (all output in reasoning_content), which broke structured output parsing with LLMParseError

    Three states:

    • None (default) — don't send the param, model uses its own default
    • true — explicitly enable thinking
    • false — disable thinking (needed for GLM-4.5-Air-FP8 structured output)

Some LLMs (e.g. GLM-4.5-Air) default to thinking mode, which returns
output in the reasoning_content field instead of content, breaking
structured output parsing. This setting passes enable_thinking through
extra_body to control the behavior per deployment.

Three states: None (default, don't send the param), True, False.
