
Commit f153fd6

Merge pull request #216 from wussh/doc-improvements
docs: Add generateText example to workers-ai-provider and clarify supported methods
2 parents: f1090ec + e09da2a

3 files changed (+41 −1 lines)

.changeset/doc-improvements.md (6 additions, 0 deletions)

```diff
@@ -0,0 +1,6 @@
+---
+"workers-ai-provider": patch
+"ai-gateway-provider": patch
+---
+
+Improve documentation by adding generateText example to workers-ai-provider and clarifying supported methods in ai-gateway-provider.
```

packages/ai-gateway-provider/README.md (5 additions, 1 deletion)

```diff
@@ -162,7 +162,11 @@ const {text} = await generateText({
 
 ## Supported Methods
 
-Currently only chat completions (non-streaming) is supported.
+Currently, the following methods are supported:
+
+* **Non-streaming text generation**: Using `generateText()` from the Vercel AI SDK
+* **Chat completions**: Using `generateText()` with message-based prompts
 
 More can be added, please open an issue in the GitHub repository!
 
 ## Error Handling
```
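As an aside for readers of the "message-based prompts" bullet above (this sketch is not part of the commit): the Vercel AI SDK's `generateText()` accepts a `messages` array in place of `prompt`. The `ChatMessage` helper type, the roles, and the contents below are illustrative only, not the provider's own API.

```typescript
// Illustrative shape of a message-based prompt for generateText().
// The ChatMessage type and all role/content values are hypothetical.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

const messages: ChatMessage[] = [
  { role: "system", content: "You are a concise assistant." },
  { role: "user", content: "Summarize what an AI gateway does." },
];

// With a provider model instance in scope, these messages would be
// passed instead of `prompt`:
//   const { text } = await generateText({ model, messages });

// Small helper used here only to inspect the conversation.
function lastUserMessage(msgs: ChatMessage[]): string | undefined {
  return [...msgs].reverse().find((m) => m.role === "user")?.content;
}

console.log(lastUserMessage(messages));
```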

packages/workers-ai-provider/README.md (30 additions, 0 deletions)

````diff
@@ -78,6 +78,36 @@ const text = await streamText({
 });
 ```
 
+### Using generateText for Non-Streaming Responses
+
+If you prefer to get a complete text response rather than a stream, you can use the `generateText` function:
+
+```ts
+import { createWorkersAI } from "workers-ai-provider";
+import { generateText } from "ai";
+
+type Env = {
+  AI: Ai;
+};
+
+export default {
+  async fetch(req: Request, env: Env) {
+    const workersai = createWorkersAI({ binding: env.AI });
+
+    const { text } = await generateText({
+      model: workersai("@cf/meta/llama-3.3-70b-instruct-fp8-fast"),
+      prompt: "Write a short poem about clouds",
+    });
+
+    return new Response(JSON.stringify({ generatedText: text }), {
+      headers: {
+        "Content-Type": "application/json",
+      },
+    });
+  },
+};
+```
+
 ### Using AutoRAG
 
 The provider now supports [Cloudflare's AutoRAG](https://developers.cloudflare.com/autorag/), allowing you to prompt your AutoRAG models directly from the Vercel AI SDK. Here's how to use it in your Worker:
````
