Conversation

njbrake (Contributor) commented Sep 8, 2025

Description

Restructures the providers so that shared logic is organized into common function definitions.

PR Type

💅 Refactor

Relevant issues

Checklist

  • I have added unit tests that prove my fix/feature works
  • New and existing tests pass locally
  • Documentation was updated where necessary
  • I have read and followed the contribution guidelines

@njbrake njbrake linked an issue Sep 8, 2025 that may be closed by this pull request
codecov bot commented Sep 8, 2025

Codecov Report

❌ Patch coverage is 68.72792% with 177 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/any_llm/providers/cerebras/cerebras.py | 37.14% | 22 Missing ⚠️ |
| src/any_llm/providers/together/together.py | 41.17% | 20 Missing ⚠️ |
| src/any_llm/providers/cohere/cohere.py | 43.33% | 17 Missing ⚠️ |
| src/any_llm/providers/openai/base.py | 64.58% | 12 Missing and 5 partials ⚠️ |
| src/any_llm/providers/sagemaker/sagemaker.py | 45.16% | 17 Missing ⚠️ |
| src/any_llm/providers/azure/azure.py | 56.75% | 13 Missing and 3 partials ⚠️ |
| src/any_llm/provider.py | 60.00% | 12 Missing ⚠️ |
| src/any_llm/providers/watsonx/watsonx.py | 63.63% | 11 Missing and 1 partial ⚠️ |
| src/any_llm/providers/groq/groq.py | 65.51% | 9 Missing and 1 partial ⚠️ |
| src/any_llm/providers/voyage/voyage.py | 71.87% | 8 Missing and 1 partial ⚠️ |

... and 6 more
| Files with missing lines | Coverage Δ |
|---|---|
| src/any_llm/providers/gemini/base.py | 98.26% <100.00%> (+0.50%) ⬆️ |
| src/any_llm/providers/mistral/mistral.py | 95.89% <93.54%> (-1.94%) ⬇️ |
| src/any_llm/providers/ollama/ollama.py | 93.75% <94.11%> (+0.49%) ⬆️ |
| src/any_llm/providers/anthropic/anthropic.py | 93.33% <85.18%> (-6.67%) ⬇️ |
| src/any_llm/providers/bedrock/bedrock.py | 87.09% <86.66%> (-0.79%) ⬇️ |
| src/any_llm/providers/xai/xai.py | 83.33% <82.75%> (-0.23%) ⬇️ |
| src/any_llm/providers/huggingface/huggingface.py | 80.00% <70.37%> (-3.34%) ⬇️ |
| src/any_llm/providers/voyage/voyage.py | 81.35% <71.87%> (-8.97%) ⬇️ |
| src/any_llm/providers/groq/groq.py | 70.23% <65.51%> (+0.23%) ⬆️ |
| src/any_llm/provider.py | 82.53% <60.00%> (-3.40%) ⬇️ |

... and 7 more

... and 1 file with indirect coverage changes


njbrake (Contributor, Author) commented Sep 8, 2025

Here's what I've tested locally (I added a helper script to our scripts dir to help with this):

✅ Available Providers (25):
🔑 anthropic (env: ANTHROPIC_API_KEY)
🔑 azure (env: AZURE_API_KEY)
🔑 bedrock (env: AWS_BEARER_TOKEN_BEDROCK)
🔑 cerebras (env: CEREBRAS_API_KEY)
🔑 cohere (env: CO_API_KEY)
🔑 deepseek (env: DEEPSEEK_API_KEY)
🔑 fireworks (env: FIREWORKS_API_KEY)
🔑 gemini (env: GEMINI_API_KEY/GOOGLE_API_KEY)
🔑 groq (env: GROQ_API_KEY)
🔑 huggingface (env: HF_TOKEN)
🔑 llama (env: LLAMA_API_KEY)
🔑 llamacpp (env: LLAMA_API_KEY)
🔓 lmstudio (env: LM_STUDIO_API_KEY)
🔑 mistral (env: MISTRAL_API_KEY)
🔑 nebius (env: NEBIUS_API_KEY)
🔓 ollama (env: None)
🔑 openai (env: OPENAI_API_KEY)
🔑 openrouter (env: OPENROUTER_API_KEY)
🔑 portkey (env: PORTKEY_API_KEY)
🔑 sambanova (env: SAMBANOVA_API_KEY)
🔑 voyage (env: VOYAGE_API_KEY)
🔑 watsonx (env: WATSONX_API_KEY)
🔑 xai (env: XAI_API_KEY)

🔑 Missing API Keys (7):
🔓 sagemaker (env: None)
🔓 llamafile (env: None)
❌ azureopenai - Set AZURE_OPENAI_API_KEY
❌ databricks - Set DATABRICKS_TOKEN
❌ inception - Set INCEPTION_API_KEY
❌ moonshot - Set MOONSHOT_API_KEY
❌ perplexity - Set PERPLEXITY_API_KEY
❌ together - Set TOGETHER_API_KEY
❌ vertexai - Set GOOGLE_PROJECT_ID

@njbrake njbrake marked this pull request as ready for review September 8, 2025 20:24
Comment on lines +52 to +55
@staticmethod
def _convert_completion_response(response: "Message") -> ChatCompletion:
"""Convert Anthropic Message to OpenAI ChatCompletion format."""
return _convert_response(response)
Contributor Author:

I thought about moving all the logic from the utils.py file into this function, but I ended up keeping it like this because it seemed smoother and kept the anthropic file from getting too busy 🤷
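A minimal sketch of the delegation pattern this comment describes: the provider class keeps a thin static wrapper while the conversion logic lives in a separate utils module. The `Message`/`ChatCompletion` classes below are simplified placeholders, not the real Anthropic or OpenAI types.

```python
from dataclasses import dataclass


@dataclass
class Message:
    """Stand-in for anthropic.types.Message (illustrative only)."""
    role: str
    text: str


@dataclass
class ChatCompletion:
    """Stand-in for the OpenAI-style response model (illustrative only)."""
    role: str
    content: str


def _convert_response(response: Message) -> ChatCompletion:
    # In the real code, this mapping lives in the provider's utils.py,
    # keeping the provider class file small.
    return ChatCompletion(role=response.role, content=response.text)


class AnthropicProvider:
    @staticmethod
    def _convert_completion_response(response: Message) -> ChatCompletion:
        """Convert an Anthropic Message to OpenAI ChatCompletion format."""
        # Thin wrapper: the provider exposes a standard method name,
        # but delegates the actual work to the utils module.
        return _convert_response(response)


print(AnthropicProvider._convert_completion_response(Message("assistant", "hi")).content)
```

The upside of this split is that every provider class exposes the same `_convert_*` method names while the per-provider mapping details stay out of the main file.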

Member:

will not complain about smaller files

Member:

I know that there are some provider-specific quirks, but it would be super great if we could handle those in an overriding method that does its provider-specific work before and after a call to super().acompletion(*args, **kwargs)
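The suggestion above could look roughly like this; all names here are illustrative stand-ins, not the actual any_llm API. A provider with backend quirks overrides `acompletion`, adjusts arguments before calling `super().acompletion()`, and patches the response afterwards:

```python
import asyncio


class BaseProvider:
    async def acompletion(self, **kwargs):
        # The shared request path lives in the base class (simplified here
        # to just echo its inputs back).
        return {"model": kwargs.get("model"), "echo": kwargs.get("messages")}


class QuirkyProvider(BaseProvider):
    async def acompletion(self, **kwargs):
        # Before: rewrite a parameter this backend spells differently.
        kwargs["model"] = kwargs.pop("model_id", kwargs.get("model"))
        response = await super().acompletion(**kwargs)
        # After: patch the response into the common shape.
        response["provider"] = "quirky"
        return response


result = asyncio.run(QuirkyProvider().acompletion(model_id="m1", messages=["hi"]))
print(result["provider"], result["model"])
```

This keeps the shared control flow in one place while each provider only carries its own before/after adjustments.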

@njbrake njbrake requested review from besaleli and daavoo September 8, 2025 20:26
@njbrake
Copy link
Contributor Author

njbrake commented Sep 8, 2025

In a sense, I get that this is a bit of a YOLO: I updated all the providers and ran the integration tests, but it's definitely still possible that I messed something up. However, this at least migrates us to a working setup, and if the integration tests are all passing, that provides at least a moderate level of confidence that everything's OK.

- yield _create_openai_chunk_from_anthropic_chunk(event, kwargs.get("model", "unknown"))
+ yield self._convert_completion_chunk_response(event, model_id=kwargs.get("model", "unknown"))

async def acompletion(
Member:

If we're standardizing the API by adding more required methods, should we abstract acompletion et al. into provided (base-class) methods?

Contributor Author:

I like this idea. This PR is already a big boy, so I'll split that off into a separate PR so this change doesn't keep exploding into a bigger one.
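For reference, a rough sketch of what abstracting `acompletion` into a provided base-class method might look like (a template-method design; all names here are illustrative, not the actual any_llm internals). The base class owns the control flow, and subclasses only implement the small provider-specific hooks:

```python
import asyncio
from abc import ABC, abstractmethod


class Provider(ABC):
    async def acompletion(self, **kwargs):
        # Provided method: the shared flow lives here, once, in the base class.
        raw = await self._make_api_call(kwargs)        # provider-specific transport
        return self._convert_completion_response(raw)  # provider-specific mapping

    @abstractmethod
    async def _make_api_call(self, params): ...

    @abstractmethod
    def _convert_completion_response(self, raw): ...


class EchoProvider(Provider):
    """Toy provider that just joins the messages it receives."""

    async def _make_api_call(self, params):
        return {"text": " ".join(params.get("messages", []))}

    def _convert_completion_response(self, raw):
        return {"role": "assistant", "content": raw["text"]}


out = asyncio.run(EchoProvider().acompletion(messages=["hello", "world"]))
print(out["content"])
```

Combined with the required `_convert_*` methods this PR introduces, the base class could then drive the whole completion path and each provider file would shrink to its hooks.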



njbrake commented Sep 9, 2025

@njbrake njbrake merged commit c672fe2 into main Sep 9, 2025
10 of 11 checks passed
@njbrake njbrake deleted the 382-internal-api branch September 9, 2025 09:35

Development

Successfully merging this pull request may close these issues.

Internal API proposal

3 participants