[Bug]: Gemini 3 caching doesn't work #16961

@sdip15fa

Description

What happened?

Reproduction:

  • install litellm (git) in a virtualenv
  • create config.yaml:

    model_list:
      - model_name: gemini-3-pro-preview
        litellm_params:
          model: gemini/gemini-3-pro-preview
          api_key: os.environ/GEMINI_API_KEY
    litellm_settings:
      master_key: os.environ/LITELLM_MASTER_KEY
      drop_params: true

  • run litellm --config config.yaml (listening on port 7777)
  • run ANTHROPIC_DEFAULT_SONNET_MODEL=gemini-3-pro-preview ANTHROPIC_DEFAULT_HAIKU_MODEL=gemini-3-pro-preview ANTHROPIC_BASE_URL=http://localhost:7777 ANTHROPIC_AUTH_TOKEN=$LITELLM_MASTER_KEY claude
  • send a message long enough to meet the minimum cached-content size
  • observe the error below
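The failure should be reproducible without Claude Code by sending any request through the proxy whose tool schema attaches an `enum` to a non-string parameter, alongside a `cache_control` breakpoint large enough to trigger cache creation. A minimal sketch of such a payload for the proxy's Anthropic-compatible /v1/messages endpoint (the tool name, the `integer` type for truncateMode, and the padding size are assumptions for illustration; the log only shows that truncateMode is not STRING-typed):

```python
import json

# Hypothetical minimal /v1/messages body that should trigger the same
# Gemini CreateCachedContentRequest validation error.
payload = {
    "model": "gemini-3-pro-preview",
    "max_tokens": 256,
    "tools": [{
        "name": "read_file",  # hypothetical tool name
        "description": "Read a file with optional truncation",
        "input_schema": {
            "type": "object",
            "properties": {
                # Gemini rejects this: enum is only allowed for STRING type
                "truncateMode": {"type": "integer", "enum": [0, 1, 2]},
            },
        },
    }],
    "messages": [{
        "role": "user",
        "content": [{
            "type": "text",
            "text": "x" * 20000,  # pad past the minimum cached-content size
            "cache_control": {"type": "ephemeral"},
        }],
    }],
}
body = json.dumps(payload)
```

Without the `cache_control` entry the same tools pass through fine, which points at the cached-content code path specifically.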

Relevant log output

litellm.exceptions.BadRequestError: litellm.BadRequestError: GeminiException BadRequestError - {
  "error": {
    "code": 400,
    "message": "* CreateCachedContentRequest.cached_content.tools[0].function_declarations[22].parameters.properties[truncateMode].enum: only allowed for STRING type\n* CreateCachedContentRequest.cached_content.tools[0].function_declarations[34].parameters.properties[truncateMode].enum: only allowed for STRING type\n* CreateCachedContentRequest.cached_content.tools[0].function_declarations[40].parameters.properties[truncateMode].enum: only allowed for STRING type\n",
    "status": "INVALID_ARGUMENT"
  }
}
. Received Model Group=gemini-3-pro-preview
Available Model Group Fallbacks=None LiteLLM Retried: 1 times, LiteLLM Max Retries: 2
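Per the error message, Gemini's CreateCachedContentRequest only allows `enum` on STRING-typed parameters, so one possible fix on the LiteLLM side would be to drop `enum` from any non-string property before building the cached-content request. A minimal sketch of such a sanitizer (the function name and placement are hypothetical, not LiteLLM's actual code):

```python
def strip_invalid_enums(schema: dict) -> dict:
    """Return a copy of a JSON-schema dict with `enum` removed from any
    node whose type is not "string", since Gemini's cached-content API
    only permits enums on STRING-typed parameters."""
    if not isinstance(schema, dict):
        return schema
    cleaned = dict(schema)
    if "enum" in cleaned and cleaned.get("type") != "string":
        cleaned.pop("enum")
    # Recurse into nested object properties and array items.
    for key in ("properties", "$defs"):
        if isinstance(cleaned.get(key), dict):
            cleaned[key] = {
                name: strip_invalid_enums(sub)
                for name, sub in cleaned[key].items()
            }
    if isinstance(cleaned.get("items"), dict):
        cleaned["items"] = strip_invalid_enums(cleaned["items"])
    return cleaned
```

Applied to each `function_declarations[*].parameters` before the caching call, this would leave string enums intact while removing the ones Gemini rejects (e.g. the `truncateMode` properties in the log above).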

Are you a ML Ops Team?

No

What LiteLLM version are you on?

v1.80.0

Twitter / LinkedIn details

No response
