chore(deps): update dependency @mastra/core to ^0.18.0 #485
Conversation
Edited/Blocked notification: Renovate will not automatically rebase this PR, because it does not recognize the last commit author and assumes somebody else may have edited the PR. You can manually request a rebase by ticking the rebase/retry checkbox above.
jirispilka left a comment:
I've tested it and it works.
If the tests pass, we can merge.
Coming soon: The Renovate bot (GitHub App) will be renamed to Mend. PRs from Renovate will soon appear from 'Mend'.
This PR contains the following updates:
@mastra/core: `^0.4.4` -> `^0.18.0`

Release Notes
mastra-ai/mastra (@mastra/core)
v0.18.0
Minor Changes
Allow agent instructions to accept SystemMessage types (#7987)
Agents can now use rich instruction formats beyond simple strings.
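As a hedged sketch of what this enables (the exact `SystemMessage` union in `@mastra/core` may differ, and the helper below is hypothetical, not part of the library):

```typescript
// Hedged sketch: the real SystemMessage type in @mastra/core may differ.
// Illustrates instruction formats richer than a plain string.
type SystemMessage =
  | string
  | { role: "system"; content: string }
  | Array<string | { role: "system"; content: string }>;

// Hypothetical helper: flatten any accepted format into one prompt string.
function toInstructionText(instructions: SystemMessage): string {
  if (typeof instructions === "string") return instructions;
  if (Array.isArray(instructions)) {
    return instructions.map((m) => toInstructionText(m)).join("\n");
  }
  return instructions.content;
}
```

Any of the three shapes would resolve to the same flat instruction text.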
Patch Changes
Agent type fixes (#8072)
Fixes for `getStepResult` in workflow steps (#8065)
fix: result object type inference when using structuredOutput and unify output/structuredOutput types with single OUTPUT generic (#7969)
feat: implement trace scoring with batch processing capabilities (#8033)
Fix selection of agent method based on model version (#8001)
show the tool-output stream in the playground for streamVNext (#7983)
Add scorer type, for automatic type inference when creating scorers for agents (#8032)
Get rid of swr one for all (#7931)
Fix PostgreSQL vector index recreation issue and add optional index configuration (#8020)
Fix navigating between scores and entity types (#8129)
Delayed streamVNext breaking change notice by 1 week (#8121)
Tool hitl (#8084)
Updated dependencies [`b61b8e0`]

v0.17.1
Patch Changes
Refactor agent.#execute fn workflow to make code easier to follow. (#7964)
fix workflow resuming issue in the playground (#7988)
feat: Add system option support to VNext methods (#7925)
v0.17.0
Minor Changes
Remove original AgentNetwork (#7919)
Fully deprecated createRun (now throws an error) in favour of createRunAsync (#7897)
Improved workspace dependency resolution during development and builds. This makes the build process more reliable when working with monorepos and workspace packages, reducing potential bundling errors and improving development experience. (#7619)
Patch Changes
dependencies updates (#7861): `hono@^4.9.7` (from `^4.9.6`, in dependencies)
Updated SensitiveDataFilter to be less greedy in its redacting (#7840)
clean up console logs in monorepo (#7926)
Update dependencies ai-v5 and @ai-sdk/provider-utils-v5 to latest (#7884)
Added the ability to hide internal ai tracing spans (enabled by default) (#7764)
"refactored ai tracing to commonize types" (#7744)
Register server cache in Mastra (#7946)
feat: add requiresAuth option for custom API routes (#7703)
Added a new `requiresAuth` option to the `ApiRoute` type that allows users to explicitly control authentication requirements for custom endpoints. Custom routes require authentication by default (`requiresAuth: true`); set `requiresAuth: false` to make a route publicly accessible without authentication.
This addresses issue #7674 where custom endpoints were not being protected by the authentication system.
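Example usage might look like the following sketch. Only `requiresAuth` is confirmed by the release notes above; the other `ApiRoute` fields are assumptions, and the real type in `@mastra/core` may differ:

```typescript
// Hedged sketch of an ApiRoute-shaped descriptor; field names other than
// requiresAuth are illustrative, not the actual @mastra/core type.
type ApiRoute = {
  path: string;
  method: "GET" | "POST";
  requiresAuth?: boolean; // custom routes default to requiring auth
  handler: () => string;
};

// A publicly accessible health-check route: no authentication required.
const healthRoute: ApiRoute = {
  path: "/health",
  method: "GET",
  requiresAuth: false,
  handler: () => "ok",
};
```

A route omitting `requiresAuth` would stay behind the authentication system.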
Resumable streams (#7949)
Only log stream/generate deprecation warning once (#7905)
Add support for running the Mastra dev server over HTTPS for local development. (#7871)
Add a `--https` flag for `mastra dev`. This automatically creates a local key and certificate for you. Alternatively, you can provide your own key and cert through `server.https`.
refactored handling of internal ai spans to be more intelligent (#7876)
Improve error message when using V1 model with streamVNext (#7948)
prevent out-of-order span errors in ai-tracing DefaultExporter (#7895)
move ToolExecutionOptions and ToolCallOptions to a union type (ToolInvocationOptions) for use in createTool, Tool, and ToolAction (#7914)
avoid refetching on error when resolving a workflow in cloud (#7842)
fix scorers table link full row (#7915)
fix(core): handle JSON code blocks in structured output streaming (#7864)
Postgresql Storage Query Index Performance: Adds index operations and automatic indexing for Postgresql (#7757)
adjust the way we display scorers in agent metadata (#7910)
fix: support destructuring of streamVNext return values (#7920)
Fix VNext generate/stream usage tokens. They used to be undefined, now we are receiving the proper values. (#7901)
Add model fallbacks (#7126)
Add resource id to workflow run snapshots (#7740)
Fixes assistant message ids when using toUIMessageStream, preserves the original messageId rather than creating a new id for this message. (#7783)
Fixes multiple issues with stopWhen and step results. (#7862)
fix error message when fetching observability things (#7956)
Network stream class when calling agent.network() (#7763)
fix workflows runs fetching and displaying (#7852)
fix empty state for scorers on agent page (#7846)
Remove extraneous console.log (#7916)
Deprecate "output" in generate and stream VNext in favour of structuredOutput. When structuredOutput is used in tandem with maxSteps = 1, the structuredOutput processor won't run, it'll generate the output using the main agent, similar to how "output" used to work. (#7750)
Fix switch in prompt-injection (#7951)
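One entry above, #7864, fixes handling of JSON code blocks in structured output streaming. As an illustration of the general problem (a simplified, hypothetical sketch, not the actual @mastra/core logic), model output wrapped in a Markdown code fence must be unwrapped before parsing:

```typescript
// Simplified, hypothetical sketch (not the actual @mastra/core code):
// strip a Markdown code fence around JSON before parsing it.
const FENCE = "`".repeat(3); // three backticks, built dynamically

function extractJson(text: string): unknown {
  const pattern = new RegExp(FENCE + "(?:json)?\\s*([\\s\\S]*?)\\s*" + FENCE);
  const match = text.match(pattern);
  const body = match ? match[1] : text; // fall back to raw text if unfenced
  return JSON.parse(body);
}
```

Both fenced and plain JSON strings parse to the same object this way.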
v0.16.3Compare Source
Patch Changes
dependencies updates: (#7545)
`hono@^4.9.6` (from `^4.8.12`, in dependencies)
Delayed deprecation notice for streamVNext() replacing stream() until Sept 23rd (#7739)
Fix onFinish callback in VNext functions to properly resolve the result (#7733)
support JSONSchema7 output option with generateVNext, streamVNext (#7630)
various improvements to input & output data on ai spans (#7636)
cleanup (#7736)
add network method (#7704)
Fix memory not being affected by agent output processors (#7087). Output processors now correctly modify messages before they are saved to memory storage. The fix ensures that any transformations applied by output processors (like redacting sensitive information) are properly propagated to the memory system. (#7647)
Fix agent structuredOutput option types (#7668)
"added output to agent spans in ai-tracing" (#7717)
Ensure system messages are persisted in processedList (#7715)
AN Merge pt 1 (#7702)
Custom metadata for traces can now be set when starting agents or workflows (#7689)
Workflow & Agent executions now return traceId. (#7663)
fixed bugs in observability config parsing (#7669)
fix playground UI issue about dynamic workflow exec in agent thread (#7665)
Updated dependencies [`779d469`]

v0.16.2
Patch Changes
v0.16.1
Patch Changes
Fixed ai tracing for workflows nested directly in agents (#7599)
Fixed provider defined tools for stream/generate vnext (#7642)
Made tracing context optional on tool execute() (#7532)
Fixed ai tracing context propagation in tool calls (#7531)
Call getMemoryMessages even during first turn in a thread when semantic recall scope is resource (#7529)
add usage and total usage to streamVNext onFinish callback (#7598)
Add prepareStep to generate/stream VNext options. (#7646)
Change to createRunAsync (#7632)
Fix type in workflow (#7519)
Execute tool calls in parallel in generate/stream VNext methods (#7524)
Allow streamVNext and generateVNext to use structuredOutputs from the MastraClient (#7597)
Use workflow streamVNext in playground (#7575)
Revert "feat(mcp): add createMCPTool helper for proper execute types" (#7513)
Fix InvalidDataContentError when using image messages with AI SDK (#7542)
Resolves an issue where passing image content in messages would throw an InvalidDataContentError. The fix properly handles multi-part content arrays containing both text and image parts when converting between Mastra and AI SDK message formats.
Flatten loop config in stream options and pass to loop options (#7643)
Pass mastra instance into MCP Server tools (#7520)
Fix image input handling for Google Gemini models in AI SDK V5 (#7490)
Resolves issue #7362 where Gemini threw `AI_InvalidDataContentError` when receiving URLs in image parts. The fix properly handles V3 message file parts that contain both URL and data fields, ensuring URLs are passed as URLs rather than being incorrectly treated as base64 data.
Vnext output schema injection (#6990)
removed duplicate 'float' switch case (#7516)
Fix issue with response message id consistency between stream/generate response and the message ids saved in the DB. Also fixed the custom generatorId implementation to work with this. (#7606)
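For illustration, the parallel tool-call change above (#7524) amounts to awaiting tool executions together rather than one at a time. This is a simplified, hypothetical sketch with illustrative types, not the actual implementation:

```typescript
// Hedged sketch: run all tool calls concurrently via Promise.all
// instead of awaiting each one sequentially. Types are illustrative.
type ToolCall = { name: string; execute: () => Promise<string> };

async function runToolCalls(calls: ToolCall[]): Promise<string[]> {
  // Every execute() starts immediately; results keep the input order.
  return Promise.all(calls.map((call) => call.execute()));
}
```

With slow tools, total latency approaches the slowest call rather than the sum of all calls.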
v0.16.0
Minor Changes
a01cf14: Add workflow graph in agent (workflow as tool in agent)

Patch Changes
8fbf79e: Fix `this` to be not set when workflow is a step
fd83526: Stream agent events with workflow.streamVNext()
d0b90ab: Fix output processors to run before saving messages to memory
6f5eb7a: Throw if an empty or whitespace-only threadId is passed when getting messages
a9e50ee: Allow both workflow stream message formats for now
5397eb4: Add public URL support when adding files in Multi Modal
c9f4e4a: Pass tracing context to scorer run
0acbc80: Add InferUITools and related type helpers for AI SDK compatibility. Adds new type utility functions to help with type inference when using Mastra tools with the AI SDK's UI components: `InferUITools` infers input/output types for a collection of tools; `InferUITool` infers input/output types for a single tool. These type helpers allow developers to easily integrate Mastra tools with AI SDK UI components like `useChat` by providing proper type inference for tool inputs and outputs.

v0.15.3
Patch Changes
ab48c97: dependencies updates: `zod-to-json-schema@^3.24.6` (from `^3.24.5`, in dependencies)
85ef90b: Return nested workflow steps information in getWorkflowRunExecutionResult
aedbbfa: Fixed wrapping of models with AI Tracing when used with structured output.
ff89505: Add deprecation warnings and add legacy routes
637f323: Fix issue with some compilers and calling zod v4's toJSONSchema function
de3cbc6: Update the `package.json` file to include additional fields like `repository`, `homepage` or `files`.
c19bcf7: stopped recording event spans for llm_chunks in ai-observability
4474d04: fix: do not pass tracing context to score run
183dc95: Added a fix to prevent filtering out injected initial default user messages. Related to issue 7231
a1111e2: Fixes #7254 where the onFinish callback wasn't returning assistant messages when using format: 'aisdk' in streamVNext. The messageList was being updated with response messages but these weren't being passed to the user's onFinish callback.
b42a961: New createMCPTool helper for correct types for MCP Server tools
61debef: Fix - add missing tool options to createTool
9beaeff: Create new `@mastra/ai-sdk` package to better support `useChat()`
29de0e1: MastraEmbeddingModel and ts hack
f643c65: Support file download
00c74e7: Added a DefaultExporter for AI Tracing.
fef7375: Fix tool validation when schema uses context or inputData reserved keys
e3d8fea: Support Inngest flow control features for Mastra Inngest workflows
45e4d39: Try fixing the `Attempted import error: 'z'.'toJSONSchema' is not exported from 'zod'` error by tricking the compiler
9eee594: Fix passing providerOptions through in streamVNext, enabling reasoning-delta chunks to be received.
7149d8d: Add tripwire chunk to streamVNext full stream
822c2e8: Fix custom output (tool-output) in ai-sdk stream output
979912c: Updated langfuse exporter to handle Event spans
7dcf4c0: Ensure original stacktrace is preserved during workflow runs
4106a58: Fix image handling for Google Gemini and other providers when using streamVNext (fixes #7362)
ad78bfc: pipes tracingContext through all ai items: agents, workflows, tools, processors, scorers, etc.
0302f50: Some LLM providers (openrouter for example) add response-metadata chunks after each text-delta. This resulted in us flushing text deltas into parts after each delta, so our output messages (with streamVNext) would have a separate text part for each text delta instead of one text part for the combined deltas.
6ac697e: improveEmbeddingModelStuff
74db265: Adds handling for event-type spans to the default ai observability exporter
0ce418a: upgrade ai v5 versions to latest for core and memory
af90672: Add maxSteps
8387952: Register scorers on mastra instance to override per agent generate call
7f3b8da: Automatically pipe writer to workflows as a tool. Also changed start, finish, step-output events to be workflow-start, workflow-finish and workflow-step-output
905352b: Support AISDK models for runExperiment
599d04c: follow up fix for scorers
56041d0: Don't set supportsStructuredOutputs for every v2 model
3412597: Pass provider options
5eca5d2: Fixed wrapped mastra class inside workflow steps.
f2cda47: Fixed issue where multiple split messages were created with identical content instead of properly distributing different parts of the original message.
5de1555: Fixed tracingContext on tool executions in AI tracing
cfd377a: fix default stream options onFinish being overridden
1ed5a3e: Support workflows for run experiments
Updated dependencies [`ab48c97`, `637f323`, `de3cbc6`, `45e4d39`]

v0.15.2
Patch Changes
Updated dependencies [`c6113ed`]

v0.15.1
v0.15.0
Minor Changes
1191ce9 Thanks @wardpeet! - Bump zod peerdep to 3.25.0 to support both v3/v4

Patch Changes
#6938 0778757 Thanks @dane-ai-mastra! - dependencies updates: `@opentelemetry/auto-instrumentations-node@^0.62.1` (from `^0.62.0`, in dependencies)
#6997 943a7f3 Thanks @wardpeet! - Bundle/mastra speed improvements
#6933 bf504a8 Thanks @NikAiyer! - Add util functions for workflow server handlers and made processor process function async
#6954 be49354 Thanks @YujohnNattrass! - Add db schema and base storage apis for AI Tracing
#6957 d591ab3 Thanks @YujohnNattrass! - Implement Tracing API for inmemory(mock) storage
#6923 ba82abe Thanks @rase-! - Event based execution engine
#6971 727f7e5 Thanks @epinzur! - updated ai tracing in workflows
#6949 e6f5046 Thanks @CalebBarnes! - stream/generate vnext: simplify internal output schema handling, improve types and typescript generics, and add jsdoc comments
#6993 82d9f64 Thanks @wardpeet! - Improve types and fix linting issues
#7020 2e58325 Thanks @YujohnNattrass! - Add column to ai spans table to tell if it's an event
#7011 4189486 Thanks @epinzur! - Wrapped mastra objects in workflow steps to automatically pass on tracing context
#6942 ca8ec2f Thanks @wardpeet! - Add zod as peerdeps for all packages
#6943 9613558 Thanks @taofeeq-deru! - Persist to snapshot when step starts
Updated dependencies [`da58ccc`, `94e9f54`, `1191ce9`, `a93f3ba`]

v0.14.1
Patch Changes
#6919 6e7e120 Thanks @dane-ai-mastra! - dependencies updates: `@ai-sdk/provider-utils-v5@npm:@ai-sdk/[email protected]` (from `npm:@ai-sdk/[email protected]`, in dependencies), `ai@^4.3.19` (from `^4.3.16`, in dependencies), `ai-v5@npm:[email protected]` (from `npm:[email protected]`, in dependencies)
#6864 0f00e17 Thanks @TylerBarnes! - Added a convertMessages(from).to("Mastra.V2" | "AIV*") util for operating on DB messages directly
#6927 217cd7a Thanks @DanielSLew! - Fix output processors to match new stream types.
#6700 a5a23d9 Thanks @gpanakkal! - Add `getMessagesById` method to `MastraStorage` adapters

v0.14.0
Minor Changes
3b5fec7: Added AIV5 support to internal MessageList, precursor to full AIV5 support in latest Mastra

Patch Changes
227c7e6: replace console.log with logger.debug in inmemory operations
12cae67: fix: add threadId and resourceId to scorers
fd3a3eb: Add runExperiments to run scorers in a test suite or in CI
6faaee5: Reworks agent Processor API to include output processors. Adds structuredOutput property in agent.streamVNext and agent.generate to replace experimental_output. Move imports for processors to @mastra/core/processors. Adds 6 new output processors: BatchParts, StructuredOutputProcessor, TokenLimiter, SystemPromptScrubber, ModerationProcessor, PiiDetectorProcessor.
4232b14: Fix provider metadata preservation during V5 message conversions. Provider metadata (providerMetadata and callProviderMetadata) is now properly preserved when converting messages between AI SDK V5 and internal V2 formats. This ensures provider-specific information isn't lost during message transformations.
a89de7e: Adding a new agentic loop and streaming workflow system while working towards AI SDK v5 support.
5a37d0c: Fix dev server bug related to p-map imports
4bde0cb: Allow renaming .map functions in workflows
cf4f357: When using the Cloudflare deployer you might see a `[duplicate-case]` warning. The internal cause for this was fixed.
ad888a2: Stream vnext agent-network
481751d: Tests `mitt.off` event handler removal
2454423: Agentic loop and streaming workflow: generateVNext and streamVNext
194e395: exclude _wrapToolsWithAITracing from agent trace
a722c0b: Added a patch to filter out system messages that were stored in the db via an old memory bug that was patched long ago (see issue 6689). Users upgrading from the old version that still had the bug would see errors when the memory messages were retrieved from the db
c30bca8: Fix do while resume-suspend in simple workflow losing data
a8f129d: initial addition of experimental ai observability tracing features.

v0.13.2
Patch Changes
d5330bf: Allow agent model to be updated after the agent is created
2e74797: Fix tool arguments being lost when tool-result messages arrive separately from tool-call messages or when messages are restored from database. Tool invocations now correctly preserve their arguments in all scenarios.
8388649: Allow array of messages in vnext agent network
a239d41: Updated A2A syntax to v0.3.0
dd94a26: Don't rely on the full language model for schema compat
3ba6772: MastraModelInput
b5cf2a3: make system message always available during agent calls
2fff911: Fix vnext working memory tool schema when model is incompatible with schema
b32c50d: Filter scores by source
63449d0: Change the function signatures of `bundle`, `lint`, and internally `getToolsInputOptions` to expand the `toolsPaths` TypeScript type from `string[]` to `(string | string[])[]`.
121a3f8: Fixed an issue where telemetry logs were displaying promise statuses when `agent.stream` is called
ec510e7: Tool input validation now returns errors as tool results instead of throwing, allowing agents to understand validation failures and retry with corrected parameters.
Updated dependencies [`dd94a26`, `2fff911`, `ae2eb63`]

v0.13.1
Patch Changes
cd0042e: Fix tool call history not being accessible in agent conversationsWhen converting v2 messages (with combined tool calls and text) to v1 format for memory storage, split messages were all keeping the same ID. This caused later messages to replace earlier ones when added back to MessageList, losing tool history.
The fix adds ID deduplication by appending `__split-N` suffixes to split messages and prevents double-suffixing when messages are re-converted between formats.

v0.13.0
Minor Changes
ea0c5f2: Update scorer api

Patch Changes
cb36de0: dependencies updates: `hono@^4.8.11` (from `^4.8.9`, in dependencies)
d0496e6: dependencies updates: `hono@^4.8.12` (from `^4.8.11`, in dependencies)
a82b851: Exclude getVoice, getScorers from agent trace
41a0a0e: fixed a minor bug where ID generator wasn't being properly bound to instances of MessageList
2871020: update safelyParseJSON to check for value of param when handling parse
94f4812: lazy initialize Run's `AbortController`
e202b82: Add getThreadsByResourceIdPaginated to the Memory Class
e00f6a0: Fixed an issue where converting from v2->v1 messages would not properly split text and tool call parts into multiple messages
4a406ec: fixes TypeScript declaration file imports to ensure proper ESM compatibility
b0e43c1: Fixed an issue where branching workflow steps maintained "suspended" status even after they've been successfully resumed and executed.
5d377e5: Fix tracing of runtimeContext values
1fb812e: Fixed a bug in parallel workflow execution where resuming only one of multiple suspended parallel steps incorrectly completed the entire parallel block. The fix ensures proper execution and state management when resuming from suspension in parallel workflows.
35c5798: Add support for transpilePackages option
Updated dependencies [`4a406ec`]

v0.12.1
Patch Changes
33dcb07: dependencies updates: `@opentelemetry/auto-instrumentations-node@^0.62.0` (from `^0.59.0`), `@opentelemetry/exporter-trace-otlp-grpc@^0.203.0` (from `^0.201.1`), `@opentelemetry/exporter-trace-otlp-http@^0.203.0` (from `^0.201.1`), `@opentelemetry/otlp-exporter-base@^0.203.0` (from `^0.201.1`), `@opentelemetry/otlp-transformer@^0.203.0` (from `^0.201.1`), `@opentelemetry/sdk-node@^0.203.0` (from `^0.201.1`), `@opentelemetry/semantic-conventions@^1.36.0` (from `^1.34.0`), all in dependencies
d0d9500: Fixed an issue where AWS Bedrock is expecting a user message at the beginning of the message list
d30b1a0: Remove js-tiktoken as it's unused
bff87f7: fix issue where v1 messages from db wouldn't properly show tool calls in llm context window from history
b4a8df0: Fixed an issue where memory instances were not being registered with Mastra and custom ID generators weren't being used

v0.12.0
Minor Changes
2ecf658: Added the option to provide a custom ID generator when creating an instance of Mastra. If the generator is not provided, a fallback of using UUID is used to generate IDs instead.

Patch Changes
510e2c8: dependencies updates: `radash@^12.1.1` (from `^12.1.0`, in dependencies)
2f72fb2: dependencies updates: `xstate@^5.20.1` (from `^5.19.4`, in dependencies)
27cc97a: dependencies updates: `hono@^4.8.9` (from `^4.8.4`, in dependencies)
3f89307: improve registerApiRoute validation
9eda7d4: Move createMockModel to test scope. This prevents test dependencies from leaking into production code.
9d49408: Fix conditional branch execution after nested workflow resume. Now conditional branches properly re-evaluate their conditions during resume, ensuring only the correct branches execute.
41daa63: Threads are no longer created until message generation is complete to avoid leaving orphaned empty threads in storage on failure
ad0a58b: Enhancements for premade input processors
254a36b: Expose mastra instance on dynamic agent arguments
7a7754f: Fast follow scorers fixing input types, improve llm scorer reliability, fix ui to display scores that are 0
fc92d80: fix: GenerateReturn type
e0f73c6: Make input optional for scorer run
0b89602: Fix workflow feedback loop crashes by preventing resume data reuse. Fixes an issue where workflows with loops (dountil/dowhile) containing suspended steps would incorrectly reuse resume data across iterations. This caused human-in-the-loop workflows to crash or skip suspend points after resuming.
The fix ensures resume data is cleared after a step completes (non-suspended status), allowing subsequent loop iterations to properly suspend for new input.
Fixes #6014
4d37822: Fix workflow input property preservation after resume from snapshot. Ensure that when resuming a workflow from a snapshot, the input property is correctly set from the snapshot's context input rather than from resume data. This prevents the loss of original workflow input data during suspend/resume cycles.
23a6a7c: improve error message for missing memory ids
cda801d: Added the ability to pass in metadata for UIMessage and MastraMessageV2 in client-js and agent.stream/generate
a77c823: include PATCH method in default CORS configuration
ff9c125: enhance thread retrieval with sorting options in libsql and pg
09bca64: Log warning when telemetry is enabled but not loaded
b8efbb9: feat: add flexible deleteMessages method to memory API. Adds a `memory.deleteMessages(input)` method that accepts multiple input types: `deleteMessages('msg-123')`, `deleteMessages(['msg-1', 'msg-2'])`, `deleteMessages({ id: 'msg-123' })`, `deleteMessages([{ id: 'msg-1' }, { id: 'msg-2' }])`. Also exposed via `POST /api/memory/messages/delete`, and `thread.deleteMessages()` accepts all input types.
71466e7: Adds traceId and resourceId to telemetry spans for agent invocations
0c99fbe: [Feature] Add ability to include input processors to Agent primitive in order to add guardrails to incoming messages.

v0.11.1
Patch Changes
f248d53: Adding getMessagesPaginated to the server, deployer, and client-js
2affc57: Fix output type of network loop
66e13e3: Add methods to fetch workflow/agent by its true id
edd9482: Fix "workflow run was not suspended" error when attempting to resume a workflow with consecutive nested workflows.
18344d7: Code and llm scorers
9d372c2: Fix streamVNext error handling
40c2525: Fix agent.generate error with experimental_output and clientTool
e473f27: Implement off the shelf Scorers
032cb66: ClientJS
703ac71: scores schema
a723d69: Pass workflowId through
7827943: Handle streaming large data
5889a31: Export storage domain types
bf1e7e7: Configure agent memory using runtimeContext
65e3395: Add Scores playground-ui and add scorer hooks
4933192: Update Message List to ensure correct order of message parts
d1c77a4: Scorer interface
bea9dd1: Refactor Agent class to consolidate LLM generate and stream methods and improve type safety. This includes extracting common logic into prepareLLMOptions(), enhancing type definitions, and fixing test annotations.
dcd4802: scores mastra server
cbddd18: Remove erroneous reassignment of `Mastra.prototype.#vectors`
7ba91fa: Add scorer abstract methods for base storage

v0.11.0
Patch Changes
f248d53: Adding getMessagesPaginated to the server, deployer, and client-js
2affc57: Fix output type of network loop
66e13e3: Add methods to fetch workflow/agent by its true id
edd9482: Fix "workflow run was not suspended" error when attempting to resume a workflow with consecutive nested workflows.
18344d7: Code and llm scorers
9d372c2: Fix streamVNext error handling
40c2525: Fix agent.generate error with experimental_output and clientTool
e473f27: Implement off the shelf Scorers

Configuration
📅 Schedule: Branch creation - "every weekday" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.