Conversation
Deploying with

| Status | Name | Latest Commit | Preview URL | Updated (UTC) |
|---|---|---|---|---|
| ✅ Deployment successful! | ai | e9adbf1 | Commit Preview URL · Branch Preview URL | Feb 11 2026, 10:33 PM |
mujahidkay
left a comment
Looks good.
Btw, web_search and web_fetch were added so that the chatbot could go look for answers related to the user's query when they aren't available in the context of its MCP tools or system prompt.
Pull request overview
Updates the project’s AI SDK dependencies to address a production web_fetcher.web_fetch_20250910 schema error and adjusts server/UI streaming behavior to avoid emitting model reasoning.
Changes:
- Bump `ai` and `@ai-sdk/*` packages to newer major versions.
- Update API routes to precompute `convertToModelMessages(messages)` and disable `sendReasoning` in streamed responses.
- Reduce client-side logging noise by switching `console.log` → `console.debug` in chat submission effects.
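The route-level change described above can be sketched as follows. This is a hedged illustration only: `convertToModelMessages` here is a local stand-in for the AI SDK function of the same name (the real signature may differ), and `handleChat` is a hypothetical handler, not code from this PR.

```typescript
// Hypothetical sketch of the "precompute once, disable reasoning" pattern.
type UIMessage = { role: string; content: string };

// Stand-in: the real convertToModelMessages comes from the "ai" package;
// the PR awaits it, so it is modeled as async here.
async function convertToModelMessages(messages: UIMessage[]): Promise<UIMessage[]> {
  return messages.map((m) => ({ role: m.role, content: m.content }));
}

async function handleChat(messages: UIMessage[]) {
  // Precompute the model messages once, instead of converting inline
  // at each use site in the route.
  const modelMessages = await convertToModelMessages(messages);
  return {
    originalMessages: messages,
    modelMessages,
    sendReasoning: false, // reasoning is no longer streamed to the UI
  };
}
```

The point of hoisting the conversion is simply to do the work once per request rather than per stream option.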
Reviewed changes
Copilot reviewed 5 out of 6 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| package.json | Major version bumps for `ai` and `@ai-sdk/*` dependencies. |
| yarn.lock | Lockfile updates reflecting the upgraded AI SDK dependency graph. |
| components/chat.tsx | Downgrades several chat debug logs from `log` to `debug`. |
| app/api/chat/route.ts | Awaits `convertToModelMessages` once and disables reasoning streaming. |
| app/api/support/route.ts | Same: precomputes model messages and disables reasoning streaming. |
| app/api/ymax/route.ts | Same: precomputes model messages and disables reasoning streaming. |
```json
"@ai-sdk/mcp": "^1.0.20",
"@ai-sdk/openai": "^3.0.27",
"@ai-sdk/react": "^3.0.83",
"@ai-sdk/xai": "^3.0.53",
```
Upgrading to @ai-sdk/react@^3 introduces a stricter React peer range (see yarn.lock: react: ^18 || ~19.0.1 || ~19.1.2 || ^19.2.1). The repo currently depends on react/react-dom ^19.1.0 and yarn.lock resolves react@19.2.0, which does not satisfy ^19.2.1. This can lead to peer dependency errors/warnings (and potentially PnP runtime issues). Consider pinning react/react-dom to a compatible version (e.g., >=19.2.1 or ~19.1.2) or adjusting the @ai-sdk/react version to one that supports the resolved React version.
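One way to satisfy the stricter peer range would be to pin `react`/`react-dom` in package.json. This is a hedged sketch only; the exact version to pin depends on which range `@ai-sdk/react` ultimately resolves against:

```json
{
  "dependencies": {
    "react": "~19.1.2",
    "react-dom": "~19.1.2"
  }
}
```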
```diff
 return result.toUIMessageStreamResponse({
   originalMessages: messages,
-  sendReasoning: true, // Enable streaming of reasoning/thinking content
+  sendReasoning: false,
   headers: {
```
sendReasoning is disabled here, but this route still enables provider “thinking” via providerOptions (Google thinkingBudget / Anthropic thinking enabled). If the intent is to fully disable reasoning (and reduce cost/latency), consider disabling or gating those thinking configs as well, otherwise the model may still generate hidden reasoning tokens.
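Gating the thinking configs could look roughly like this. A hedged sketch only: the `google`/`anthropic` option shapes below are assumptions based on this review comment, not verified against the AI SDK docs, and `buildProviderOptions` is a hypothetical helper, not code from this repo.

```typescript
// Hypothetical helper: build providerOptions with provider "thinking" gated.
// The option shapes below are assumptions, not verified AI SDK signatures.
function buildProviderOptions(enableReasoning: boolean) {
  if (!enableReasoning) {
    // Fully off: no hidden reasoning tokens are generated or billed.
    return {
      google: { thinkingConfig: { thinkingBudget: 0 } },
      anthropic: { thinking: { type: "disabled" as const } },
    };
  }
  return {
    google: { thinkingConfig: { thinkingBudget: 8192 } },
    anthropic: { thinking: { type: "enabled" as const, budgetTokens: 8192 } },
  };
}
```

Pairing this with `sendReasoning: false` would make the two settings consistent: the model would neither generate reasoning tokens nor attempt to stream them.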
```diff
 return result.toUIMessageStreamResponse({
   originalMessages: messages,
-  sendReasoning: true, // Enable streaming of reasoning/thinking content
+  sendReasoning: false,
   headers: {
```
sendReasoning is disabled here, but this route still enables provider “thinking” via providerOptions (Google thinkingBudget / Anthropic thinking enabled). If the intent is to fully disable reasoning (and reduce cost/latency), consider disabling or gating those thinking configs as well, otherwise the model may still generate hidden reasoning tokens.
```diff
 return result.toUIMessageStreamResponse({
   originalMessages: messages,
-  sendReasoning: true, // Enable streaming of reasoning/thinking content
+  sendReasoning: false,
   headers: {
```
sendReasoning is disabled here, but this route still enables provider “thinking” via providerOptions (Google thinkingBudget / Anthropic thinking enabled). If the intent is to fully disable reasoning (and reduce cost/latency), consider disabling or gating those thinking configs as well, otherwise the model may still generate hidden reasoning tokens.
https://chat.agoric.net/ started erroring.
One fix that worked was to disable the web fetch tool, but I don't know whether that tool is needed for some functionality.
What also worked was upgrading all the "ai" deps. That's better long-term value, so that's what this PR has.
I don't know if it's due to the version bump, but the chat would stream reasoning ("The user is…") before the UI collapsed it. The use cases for this app don't need reasoning, so this PR also disables that.
UPDATE: the backend error may have been transient. Still, the changes here are improvements worth landing.