Commit 8b36fb7

Merge pull request #2 from koaning/tests
moar
2 parents f48e738 + cb58c81 commit 8b36fb7

File tree

4 files changed: +687, -3 lines changed


README.md

Lines changed: 122 additions & 2 deletions
@@ -1,2 +1,122 @@
-# smartfunc
-Turn docstrings into LLM-functions

<img src="imgs/logo.png" width="125" height="125" align="right" />

### smartfunc

> Turn docstrings into LLM-functions

## Installation

```bash
uv pip install smartfunc
```

## What is this?

Here is a nice example of what is possible with this library:

```python
from smartfunc import backend

@backend("gpt-4")
def generate_summary(text: str):
    """Generate a summary of the following text: {{ text }}"""
    pass
```

The `generate_summary` function will now return a string with the summary of the text.
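
For instance, calling it might look like this (assuming an API key for the chosen backend is configured, e.g. via the `llm` CLI or a `.env` file):

```python
summary = generate_summary("The quick brown fox jumps over the lazy dog.")
print(summary)  # a plain string containing the summary
```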

## How does it work?

This library wraps around the [llm library](https://llm.datasette.io/en/stable/index.html) made by Simon Willison. The docstring is parsed and turned into a Jinja2 template, which we inject with variables to generate a prompt at runtime. We then use the backend given by the decorator to run the prompt and return the result.
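
To make that concrete, here is a minimal sketch of the idea (not smartfunc's actual internals; `render_prompt` and `run_prompt` are hypothetical helpers for illustration):

```python
import inspect

import jinja2
import llm


def render_prompt(func, *args, **kwargs):
    # Bind the call's arguments to the function signature, then use
    # them to render the docstring as a Jinja2 template.
    bound = inspect.signature(func).bind(*args, **kwargs)
    return jinja2.Template(inspect.getdoc(func)).render(**bound.arguments)


def run_prompt(model_name, func, *args, **kwargs):
    # Resolve the backend by name and run the rendered prompt.
    prompt = render_prompt(func, *args, **kwargs)
    return llm.get_model(model_name).prompt(prompt).text()
```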

The `llm` library is minimalistic and, while it does not support every feature out there, it does offer a solid foundation to build on. This library is mainly meant as a method to add some syntactic sugar on top. We do get a few nice benefits from the `llm` library though:

- The `llm` library is well maintained and has a large community
- An [ecosystem of backends](https://llm.datasette.io/en/stable/plugins/directory.html) for different LLM providers
- Many of the vendors have `async` support, which allows us to do microbatching (see the sketch in the Async section below)
- Many of the vendors have schema support, which allows us to use Pydantic models to define the response
- You can use `.env` files to store your API keys

## Extra features

### Schemas

The following snippet shows how you might create a reusable backend decorator that uses a system prompt. Also notice how we're able to use a Pydantic model to define the response.

```python
from smartfunc import backend
from pydantic import BaseModel
from dotenv import load_dotenv

load_dotenv(".env")

class Summary(BaseModel):
    summary: str
    pros: list[str]
    cons: list[str]

llmify = backend("gpt-4o-mini", system="You are a helpful assistant.", temperature=0.5)

@llmify
def generate_poke_desc(text: str) -> Summary:
    """Describe the following pokemon: {{ text }}"""
    pass

print(generate_poke_desc("pikachu"))
```

This is the result that we got back:

```python
{
    'summary': 'Pikachu is a small, electric-type Pokémon known for its adorable appearance and strong electrical abilities. It is recognized as the mascot of the Pokémon franchise, with distinctive features and a cheerful personality.',
    'pros': [
        'Iconic and recognizable worldwide',
        'Strong electric attacks like Thunderbolt',
        'Has a cute and friendly appearance',
        'Evolves into Raichu with a Thunder Stone',
        'Popular choice in Pokémon merchandise and media'
    ],
    'cons': [
        'Not very strong in higher-level battles',
        'Weak against ground-type moves',
        'Limited to electric-type attacks unless learned through TMs',
        'Can be overshadowed by other powerful Pokémon in competitive play'
    ],
}
```

Not every backend supports schemas, but you will get a helpful error message if that is the case.
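
If you want to check support up front, recent versions of the `llm` library expose a `supports_schema` flag on model objects (hedged: whether this attribute is available depends on your `llm` version):

```python
import llm

model = llm.get_model("gpt-4o-mini")
print(model.supports_schema)  # True if this backend accepts a response schema
```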

> [!NOTE]
> You might look at this example and wonder if you might be better off using [instructor](https://python.useinstructor.com/). After all, that library has more support for validation of parameters and even has some utilities for multi-turn conversations. All of this is true, but instructor requires you to learn a fair bit more about each individual backend. If you want to use Claude instead of OpenAI, you will need to load in a different library. Here, you just need to make sure the `llm` plugin is installed and you're good to go.

### Async

The library also supports async functions. This is useful if you want to do microbatching or if you want to use the async backends from the `llm` library.

```python
import asyncio
from smartfunc import async_backend
from pydantic import BaseModel
from dotenv import load_dotenv

load_dotenv(".env")


class Summary(BaseModel):
    summary: str
    pros: list[str]
    cons: list[str]


@async_backend("gpt-4o-mini")
async def generate_poke_desc(text: str) -> Summary:
    """Describe the following pokemon: {{ text }}"""
    pass

resp = asyncio.run(generate_poke_desc("pikachu"))
print(resp)
```
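
Because the decorated function is now a coroutine, several prompts can be awaited concurrently. A minimal microbatching sketch, reusing `generate_poke_desc` from the snippet above (`describe_many` is a hypothetical helper):

```python
async def describe_many(names: list[str]) -> list[Summary]:
    # Fire off all prompts at once and wait for every response.
    return await asyncio.gather(*(generate_poke_desc(name) for name in names))

resps = asyncio.run(describe_many(["pikachu", "bulbasaur", "charmander"]))
for resp in resps:
    print(resp)
```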

imgs/logo.png

19.3 KB

smartfunc/__init__.py

Lines changed: 10 additions & 1 deletion
@@ -180,4 +180,13 @@ async def wrapper(*args, **kwargs):

     async def run(self, func, *args, **kwargs):
         new_func = self(func)
-        return new_func(*args, **kwargs)
+        return new_func(*args, **kwargs)
+
+
+def get_backend_models():
+    for model in llm.get_models():
+        print(model.model_id)
+
+def get_async_backend_models():
+    for model in llm.get_async_models():
+        print(model.model_id)
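
The two new helpers list the model ids that the installed `llm` plugins expose, for the sync and async backends respectively. A hedged usage sketch (assuming the package is installed and importable as `smartfunc`):

```python
from smartfunc import get_backend_models, get_async_backend_models

get_backend_models()        # prints one model id per line, e.g. "gpt-4o-mini"
get_async_backend_models()  # same, for backends with async support
```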
