feat!: Support invoke with structured output in LangChain provider #970
Conversation
This PR depends on #969 being merged first.
🤖 I have created a release *beep* *boop*

---

<details><summary>server-sdk-ai: 0.14.0</summary>

## [0.14.0](server-sdk-ai-v0.13.0...server-sdk-ai-v0.14.0) (2025-11-06)

### ⚠ BREAKING CHANGES

* Removed deprecated Vercel methods ([#983](#983))
* Add support for real time judge evals ([#969](#969))
* AI Config defaults require the "enabled" attribute
* Renamed LDAIAgentConfig to LDAIAgentConfigRequest for clarity
* Renamed LDAIAgent to LDAIAgentConfig *note the previous use of this name
* Renamed LDAIAgentDefault to LDAIAgentConfigDefault for clarity
* Renamed LDAIDefaults to LDAICompletionConfigDefault for clarity

### Features

* Add support for real time judge evals ([#969](#969)) ([6ecd9ab](6ecd9ab))
* Added createJudge method ([6ecd9ab](6ecd9ab))
* Added judgeConfig method to AI SDK to retrieve an AI Judge Config ([6ecd9ab](6ecd9ab))
* Added trackEvalScores method to config tracker ([6ecd9ab](6ecd9ab))
* Chat will evaluate responses with configured judges ([6ecd9ab](6ecd9ab))
* Include AI SDK version in tracking information ([#985](#985)) ([ef90564](ef90564))
* Removed deprecated Vercel methods ([#983](#983)) ([960a499](960a499))

### Bug Fixes

* AI Config defaults require the "enabled" attribute ([6ecd9ab](6ecd9ab))
* Renamed LDAIAgent to LDAIAgentConfig *note the previous use of this name ([6ecd9ab](6ecd9ab))
* Renamed LDAIAgentConfig to LDAIAgentConfigRequest for clarity ([6ecd9ab](6ecd9ab))
* Renamed LDAIAgentDefault to LDAIAgentConfigDefault for clarity ([6ecd9ab](6ecd9ab))
* Renamed LDAIDefaults to LDAICompletionConfigDefault for clarity ([6ecd9ab](6ecd9ab))
</details>

<details><summary>server-sdk-ai-langchain: 0.3.0</summary>

## [0.3.0](server-sdk-ai-langchain-v0.2.0...server-sdk-ai-langchain-v0.3.0) (2025-11-06)

### ⚠ BREAKING CHANGES

* Support invoke with structured output in LangChain provider ([#970](#970))

### Features

* Support invoke with structured output in LangChain provider ([#970](#970)) ([0427908](0427908))

### Dependencies

* The following workspace dependencies were updated
  * devDependencies
    * @launchdarkly/server-sdk-ai bumped from ^0.13.0 to ^0.14.0
  * peerDependencies
    * @launchdarkly/server-sdk-ai bumped from ^0.12.2 to ^0.14.0
</details>

<details><summary>server-sdk-ai-openai: 0.3.0</summary>

## [0.3.0](server-sdk-ai-openai-v0.2.0...server-sdk-ai-openai-v0.3.0) (2025-11-06)

### ⚠ BREAKING CHANGES

* Support invoke with structured output in OpenAI provider ([#980](#980))

### Features

* Support invoke with structured output in OpenAI provider ([#980](#980)) ([515dbdf](515dbdf))

### Dependencies

* The following workspace dependencies were updated
  * devDependencies
    * @launchdarkly/server-sdk-ai bumped from ^0.13.0 to ^0.14.0
  * peerDependencies
    * @launchdarkly/server-sdk-ai bumped from ^0.12.2 to ^0.14.0
</details>

<details><summary>server-sdk-ai-vercel: 0.3.0</summary>

## [0.3.0](server-sdk-ai-vercel-v0.2.0...server-sdk-ai-vercel-v0.3.0) (2025-11-06)

### ⚠ BREAKING CHANGES

* Support invoke with structured output in VercelAI provider ([#981](#981))

### Features

* Support invoke with structured output in VercelAI provider ([#981](#981)) ([d0cb41d](d0cb41d))

### Dependencies

* The following workspace dependencies were updated
  * devDependencies
    * @launchdarkly/server-sdk-ai bumped from ^0.13.0 to ^0.14.0
  * peerDependencies
    * @launchdarkly/server-sdk-ai bumped from ^0.12.2 to ^0.14.0
</details>

---

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

> [!NOTE]
> Release server-ai 0.14.0 (judge evals, breaking renames/removals) and update LangChain/OpenAI/Vercel providers to 0.3.0 with structured output; refresh examples and manifests to new versions.
>
> - **SDK (`packages/sdk/server-ai`) — `0.14.0`**
>   - Adds real-time judge evaluations and related APIs (`createJudge`, `judgeConfig`, `trackEvalScores`); includes SDK version in tracking.
>   - Breaking: removes deprecated Vercel methods; requires `enabled` in AI Config defaults; renames several AI config types.
> - **AI Providers — `0.3.0`**
>   - `@launchdarkly/server-sdk-ai-langchain`, `-openai`, `-vercel`: add structured output support for `invoke` (breaking changes).
>   - Bump peer/dev dependency on `@launchdarkly/server-sdk-ai` to `^0.14.0`.
> - **Examples**
>   - Update example apps to use `@launchdarkly/[email protected]` and provider packages `^0.3.0`.
> - **Release metadata**
>   - Update `.release-please-manifest.json` with new versions.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 00cd808. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>

---

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: jsonbailey <[email protected]>
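For orientation on the headline change, here is a minimal standalone sketch of LangChain's `withStructuredOutput`, the mechanism the provider-level structured invoke builds on (see the note below). The model name, schema, and prompt are illustrative only, not part of this PR.

```typescript
// Standalone LangChain example of structured output via withStructuredOutput.
// Requires OPENAI_API_KEY; the model name and schema are illustrative.
import { ChatOpenAI } from '@langchain/openai';
import { z } from 'zod';

// Zod schema describing the shape we want the model to return.
const ReviewSummary = z.object({
  sentiment: z.enum(['positive', 'neutral', 'negative']),
  summary: z.string(),
});

async function main() {
  const model = new ChatOpenAI({ model: 'gpt-4o-mini' });
  // withStructuredOutput returns a runnable whose invoke() resolves to the
  // parsed object rather than a raw chat message.
  const structured = model.withStructuredOutput(ReviewSummary);
  const result = await structured.invoke('Summarize: "Great docs, but slow support."');
  console.log(result.sentiment, result.summary);
}

main().catch((err) => {
  console.error(err);
});
```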
> [!NOTE]
> Introduces `invokeStructuredModel` for structured outputs and wraps `invokeModel` with try/catch to return consistent failure responses with logging; updates tests accordingly.
>
> - Provider (`src/LangChainProvider.ts`):
>   - `invokeStructuredModel(messages, responseStructure)` using `withStructuredOutput(...)`, returning `{ data, rawResponse, metrics }` and handling errors with warnings and failure metrics.
>   - `invokeModel` in try/catch; on failure, log warning and return empty assistant message with `success=false`.
>   - `StructuredResponse` type.
>   - `createLangChainModel` (spread `parameters` before `modelProvider`).
> - Tests (`__tests__/LangChainProvider.test.ts`):
>   - `invokeModel` error path (logs and failure response).
>   - `invokeStructuredModel` success and error behaviors.
>
> Written by Cursor Bugbot for commit 7560025. This will update automatically on new commits. Configure here.
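The following is a rough sketch of the `invokeStructuredModel` flow the note describes, assuming a LangChain `BaseChatModel` and a JSON-schema-style `responseStructure`. The message mapping and the `{ data, rawResponse, metrics }` shape here are illustrative stand-ins, not the SDK's exact types.

```typescript
// Hedged sketch of a structured invoke with consistent error handling.
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import { HumanMessage, SystemMessage } from '@langchain/core/messages';

interface SimpleMessage {
  role: 'system' | 'user';
  content: string;
}

async function invokeStructuredModelSketch(
  model: BaseChatModel,
  messages: SimpleMessage[],
  responseStructure: Record<string, unknown>, // e.g. a JSON Schema object
) {
  try {
    // withStructuredOutput(...) constrains the chat model to the schema so
    // that invoke() resolves to the parsed object.
    const structuredModel = model.withStructuredOutput(responseStructure);
    const data = await structuredModel.invoke(
      messages.map((m) =>
        m.role === 'system' ? new SystemMessage(m.content) : new HumanMessage(m.content),
      ),
    );
    return { data, rawResponse: JSON.stringify(data), metrics: { success: true } };
  } catch (err) {
    // Mirror the described error handling: warn and return a failure result
    // instead of throwing.
    console.warn('invokeStructuredModel failed:', err);
    return { data: undefined, rawResponse: '', metrics: { success: false } };
  }
}
```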
src/LangChainProvider.ts):invokeStructuredModel(messages, responseStructure)usingwithStructuredOutput(...), returning{ data, rawResponse, metrics }and handling errors with warnings and failure metrics.invokeModelin try/catch; on failure, log warning and return empty assistant message withsuccess=false.StructuredResponsetype.createLangChainModel(spreadparametersbeforemodelProvider).__tests__/LangChainProvider.test.ts):invokeModelerror path (logs and failure response).invokeStructuredModelsuccess and error behaviors.Written by Cursor Bugbot for commit 7560025. This will update automatically on new commits. Configure here.