
Commit 1480675

Standardize provider imports in documentation (#1896)
1 parent: d14fc54

132 files changed

Lines changed: 313 additions & 713 deletions
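Taken together, the diffs below replace provider-specific constructors such as `instructor.from_openai(openai.OpenAI())` with the unified `instructor.from_provider("provider/model")` call. As a minimal sketch of that mechanical rewrite for the OpenAI case (the regex and helper here are hypothetical illustrations, not the tooling actually used for this commit):

```python
import re

# Matches the old per-provider construction, with or without the
# module-qualified `openai.OpenAI()` form seen in the docs.
OLD_CLIENT = re.compile(r"client = instructor\.from_openai\(\s*(?:openai\.)?OpenAI\(\)\s*\)")
NEW_CLIENT = 'client = instructor.from_provider("openai/gpt-5-nano")'


def migrate(source: str) -> str:
    """Rewrite the old client construction to the unified from_provider() call."""
    return OLD_CLIENT.sub(NEW_CLIENT, source)


print(migrate("client = instructor.from_openai(openai.OpenAI())"))
# client = instructor.from_provider("openai/gpt-5-nano")
```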


CLAUDE.md

Lines changed: 7 additions & 0 deletions
@@ -165,6 +165,13 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
 - **Configuration**: Uses `pyproject.toml` settings for type checking
 - Run `uv run ty check` before committing - aim for zero errors
 
+### Code Quality Checks Before Committing
+Always run these checks before committing code:
+1. **Ruff linting**: `uv run ruff check .` - Fix all errors
+2. **Ruff formatting**: `uv run ruff format .` - Apply consistent formatting
+3. **Type checking**: `uv run ty check` - Aim for zero type errors
+4. **Tests**: Run relevant tests to ensure changes don't break functionality
+
 ### Type Patterns
 - **Bounded TypeVars**: Use `T = TypeVar("T", bound=Union[BaseModel, ...])` for constraints
 - **Version Compatibility**: Handle Python 3.9 vs 3.10+ typing differences explicitly

docs/architecture.md

Lines changed: 2 additions & 2 deletions
@@ -61,7 +61,7 @@ class User(BaseModel):
     name: str
     age: int
 
-client = instructor.from_openai(openai.OpenAI())
+client = instructor.from_provider("openai/gpt-5-nano")
 
 model = client.chat.completions.create(
     model="gpt-4o-mini",
@@ -87,7 +87,7 @@ class User(BaseModel):
     age: int
 
 async def main():
-    aclient = instructor.from_openai(openai.AsyncOpenAI())
+    aclient = instructor.from_provider("openai/gpt-5-nano", async_client=True)
     model = await aclient.chat.completions.create(
         model="gpt-4o-mini",
         messages=[{"role": "user", "content": "{\"name\": \"Ada\", \"age\": 37}"}],

docs/blog/posts/announcing-gemini-tool-calling-support.md

Lines changed: 2 additions & 5 deletions
@@ -64,9 +64,7 @@ class User(BaseModel):
     age: int
 
 
-client = instructor.from_gemini(
-    client=genai.GenerativeModel(
-        model_name="models/gemini-1.5-flash-latest", # (1)!
+client = instructor.from_provider("google/gemini-2.5-flash")
     )
 )
 
@@ -104,8 +102,7 @@ class User(BaseModel):
     age: int
 
 
-client = instructor.from_vertexai(
-    client=GenerativeModel("gemini-1.5-pro-preview-0409"), # (1)!
+client = instructor.from_provider("google/gemini-2.5-flash", vertexai=True), # (1)!
 )
 
 
docs/blog/posts/announcing-unified-provider-interface.md

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ The `from_provider()` function is designed to streamline several common workflow
 
 ## How it Works: A Look Under the Hood
 
-Internally, `from_provider()` (located in `instructor/auto_client.py`) parses the model string (e.g., `"openai/gpt-4o-mini"`) to identify the provider and model name. It then uses conditional logic to import the correct libraries, instantiate the client, and apply the appropriate Instructor patch. For instance, the conceptual handling for an OpenAI client would involve importing the `openai` SDK and `instructor.from_openai`.
+Internally, `from_provider()` (located in `instructor/auto_client.py`) parses the model string (e.g., `"openai/gpt-5-nano"`) to identify the provider and model name. It then uses conditional logic to import the correct libraries, instantiate the client, and apply the appropriate Instructor patch. For instance, the conceptual handling for an OpenAI client would involve importing the `openai` SDK and `instructor.from_openai`.
 
 ```python
 # Conceptual illustration of internal logic for OpenAI:
docs/blog/posts/best_framework.md

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ class User(BaseModel):
     age: int
 
 
-client = instructor.from_openai(openai.OpenAI())
+client = instructor.from_provider("openai/gpt-5-nano")
 
 user = client.chat.completions.create(
     model="gpt-3.5-turbo",

docs/blog/posts/caching.md

Lines changed: 3 additions & 9 deletions
@@ -109,11 +109,9 @@ Let's first consider our canonical example, using the `OpenAI` Python client to
 
 ```python
 import instructor
-from openai import OpenAI
 from pydantic import BaseModel
-
 # Enables `response_model`
-client = instructor.from_openai(OpenAI())
+client = instructor.from_provider("openai/gpt-5-nano")
 
 
 class UserDetail(BaseModel):
@@ -336,10 +334,8 @@ import inspect
 import instructor
 import diskcache
 
-from openai import OpenAI
 from pydantic import BaseModel
-
-client = instructor.from_openai(OpenAI())
+client = instructor.from_provider("openai/gpt-5-nano")
 cache = diskcache.Cache('./my_cache_directory')
 
 
@@ -503,9 +499,7 @@ import inspect
 import instructor
 
 from pydantic import BaseModel
-from openai import OpenAI
-
-client = instructor.from_openai(OpenAI())
+client = instructor.from_provider("openai/gpt-5-nano")
 cache = redis.Redis("localhost")
 
 

docs/blog/posts/chain-of-density.md

Lines changed: 2 additions & 6 deletions
@@ -287,10 +287,8 @@ def min_entity_density(cls, v: str):
 Now that we have our models and the rough flow figured out, let's implement a function to summarize a piece of text using `Chain Of Density` summarization.
 
 ```python hl_lines="4 9-24 38-68"
-from openai import OpenAI
 import instructor
-
-client = instructor.from_openai(OpenAI()) #(1)!
+client = instructor.from_provider("openai/gpt-5-nano") #(1)!
 
 def summarize_article(article: str, summary_steps: int = 3):
     summary_chain = []
@@ -399,9 +397,7 @@ import csv
 import logging
 import instructor
 from pydantic import BaseModel
-from openai import OpenAI
-
-client = instructor.from_openai(OpenAI()) # (2)!
+client = instructor.from_provider("openai/gpt-5-nano") # (2)!
 
 logging.basicConfig(level=logging.INFO) #(3)!
 
docs/blog/posts/chat-with-your-pdf-with-gemini.md

Lines changed: 1 addition & 5 deletions
@@ -49,11 +49,7 @@ from pydantic import BaseModel
 import time
 
 # Initialize the client
-client = instructor.from_gemini(
-    client=genai.GenerativeModel(
-        model_name="models/gemini-1.5-flash-latest",
-    )
-)
+client = instructor.from_provider("google/gemini-2.5-flash")
 
 
 # Define your output structure

docs/blog/posts/citations.md

Lines changed: 1 addition & 3 deletions
@@ -33,11 +33,9 @@ In this example, we use the `Statements` class to verify if a given substring qu
 
 ```python
 from typing import List
-from openai import OpenAI
 from pydantic import BaseModel, ValidationInfo, field_validator
 import instructor
-
-client = instructor.from_openai(OpenAI())
+client = instructor.from_provider("openai/gpt-5-nano")
 
 
 class Statements(BaseModel):

docs/blog/posts/extracting-model-metadata.md

Lines changed: 1 addition & 1 deletion
@@ -170,7 +170,7 @@ With our inbuilt support for `jinja` formatting using the `context` keyword that
 import openai
 import instructor
 
-client = instructor.from_openai(openai.OpenAI())
+client = instructor.from_provider("openai/gpt-5-nano")
 
 resp = client.chat.completions.create(
     model="gpt-4o",
