
feat: Add endpoints and schemas to ElevenLabs OpenAPI (openapi.yaml) #169

Merged

HavenDV merged 1 commit into main from bot/update-openapi_202508221827 on Aug 22, 2025
Conversation


@HavenDV HavenDV commented Aug 22, 2025

Summary by CodeRabbit

  • New Features
    • Agent testing: create/manage tests, view summaries, resubmit, and simulate conversations (including streaming); list conversations, download audio, and submit feedback.
    • Knowledge base: add/update documents, view content/chunks, compute/view/delete RAG indexes, see dependent agents, and track size.
    • Agent/tool management: list/update, duplicate agents, configure widgets, share links, set avatars, and estimate LLM usage.
    • Calling: batch calling and outbound SIP support.
    • MCP servers: manage servers, tools, and approval policies.
    • Evaluations/media: expanded assessment, speech alignment, and safety analysis.


coderabbitai bot commented Aug 22, 2025

Walkthrough

Adds numerous REST endpoints and schema models to ElevenLabs OpenAPI for agent testing, conversations, knowledge base with RAG indexing, tools/agents management, MCP servers, batch calling/SIP, and various evaluation/metadata models. Changes are confined to schema and endpoint additions in src/libs/ElevenLabs/openapi.yaml.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Agent Testing APIs**<br>`src/libs/ElevenLabs/openapi.yaml` | Adds endpoints to create, retrieve, update, and delete agent tests; fetch summaries; and manage test invocations (get, resubmit). Introduces models such as `UnitTestRunResponseModel`, `UnitTestSummaryResponseModel`, and evaluation-related schemas. |
| **Conversations**<br>`src/libs/ElevenLabs/openapi.yaml` | Adds list/get/delete conversations, fetch audio, and submit feedback. |
| **Knowledge Base & RAG**<br>`src/libs/ElevenLabs/openapi.yaml` | Adds CRUD for KB documents, content/chunk retrieval, dependent agents, and RAG index compute/get/delete plus overview. Extends schemas for document metadata, language, and indexing status. |
| **Agents & Tools**<br>`src/libs/ElevenLabs/openapi.yaml` | Adds list/get/patch/delete tools; agents CRUD, widget config, share link, avatar set, KB size, LLM usage calculation, duplicate, and simulate conversation (sync/stream). |
| **MCP Servers & Approvals**<br>`src/libs/ElevenLabs/openapi.yaml` | Adds MCP server CRUD/listing, tools listing, approval policy updates, and tool approval management. Includes transport/config models. |
| **Batch Calling & SIP**<br>`src/libs/ElevenLabs/openapi.yaml` | Adds scheduling, listing, canceling, and retrying batch calls; outbound SIP trunk call handling. |
| **Projects/Snapshots/Configs**<br>`src/libs/ElevenLabs/openapi.yaml` | Introduces/extends project, chapter, snapshot, and widget/conversation UI configuration schemas; permissions, annotations, and policy models. |
| **Media/Voice/Transcription Models**<br>`src/libs/ElevenLabs/openapi.yaml` | Adds models for media references, transcript alignments, voice settings, speaker separation, and related processing metadata. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Client
  participant AgentTestingAPI as Agent Testing API
  participant EvalEngine as Evaluation Engine
  participant Store as Test Store

  rect rgb(235, 245, 255)
  note over Client,AgentTestingAPI: Create & manage tests (new)
  Client->>AgentTestingAPI: POST /v1/convai/agent-testing/create
  AgentTestingAPI->>Store: Persist Test Definition
  Store-->>AgentTestingAPI: Test ID
  AgentTestingAPI-->>Client: UnitTestRunResponseModel (id,status)
  end

  rect rgb(245, 235, 255)
  note over Client,EvalEngine: Invocation & evaluation (new)
  Client->>AgentTestingAPI: GET /v1/convai/test-invocations/{id}
  AgentTestingAPI->>EvalEngine: Fetch Results/Evaluations
  EvalEngine-->>AgentTestingAPI: Metrics, rationales
  AgentTestingAPI-->>Client: Summary/Details
  end

  opt Update/Delete
    Client->>AgentTestingAPI: PUT/DELETE /v1/convai/agent-testing/{test_id}
    AgentTestingAPI->>Store: Update/Remove
    Store-->>AgentTestingAPI: Ack
    AgentTestingAPI-->>Client: 200/204
  end
```
```mermaid
sequenceDiagram
  autonumber
  participant Client
  participant KBAPI as Knowledge Base API
  participant Indexer as RAG Indexer
  participant Store as KB Store

  Client->>KBAPI: POST /v1/convai/knowledge-base
  KBAPI->>Store: Save Document
  Store-->>KBAPI: Doc ID
  KBAPI-->>Client: Document metadata

  rect rgb(235, 255, 240)
  note over Client,Indexer: RAG index operations (new)
  Client->>KBAPI: POST /knowledge-base/{doc_id}/rag-index
  KBAPI->>Indexer: Start Index Build
  Indexer-->>KBAPI: Status (queued/running)
  KBAPI-->>Client: Index status
  Client->>KBAPI: GET /knowledge-base/{doc_id}/rag-index
  KBAPI->>Indexer: Query Status
  Indexer-->>KBAPI: Status/details
  KBAPI-->>Client: Index info
  end
```
```mermaid
sequenceDiagram
  autonumber
  participant Client
  participant AgentsAPI as Agents API
  participant Simulator as Conversation Simulator

  Client->>AgentsAPI: POST /agents/{agent_id}/simulate-conversation
  AgentsAPI->>Simulator: Run Simulation
  Simulator-->>AgentsAPI: Transcript, metrics
  AgentsAPI-->>Client: Simulation result

  alt Stream
    Client->>AgentsAPI: POST /agents/{agent_id}/simulate-conversation/stream
    AgentsAPI-->>Client: Event stream (tokens/events)
  end
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60–90 minutes

Poem

I thump my paw—new paths appear,
Tests and talks now crystal clear.
RAG burrows deep to fetch the lore,
Agents practice, learn, and score.
SIPs and tools align in rows—
My whiskers twitch where schema grows.
Hippity hop, to prod it goes! 🐇✨

@HavenDV HavenDV merged commit dfc2147 into main Aug 22, 2025
2 of 4 checks passed
@HavenDV HavenDV deleted the bot/update-openapi_202508221827 branch August 22, 2025 18:28
@coderabbitai coderabbitai bot changed the title from feat:@coderabbitai to feat: Add endpoints and schemas to ElevenLabs OpenAPI (openapi.yaml) on Aug 22, 2025
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

🧹 Nitpick comments (11)
src/libs/ElevenLabs/openapi.yaml (11)

6850-6882: Resubmit endpoint likely asynchronous; return 202 and drop empty body

Resubmission suggests an async re-queue. Prefer 202 Accepted with no response body for clarity.

```diff
-        '200':
-          description: Successful Response
-          content:
-            application/json:
-              schema: { }
+        '202':
+          description: Accepted
```

10429-10439: AdhocAgentConfigOverrideForTestRequestModel: verify required fields

Both conversation_config and platform_settings are required. Is overriding platform settings always necessary for tests? If not, consider making it optional to reduce payload size.

```diff
-      required:
-        - conversation_config
-        - platform_settings
+      required:
+        - conversation_config
```

10569-10583: Add constraints/examples to failure example text

Optional, but adding minLength/maxLength and an example improves validation and SDK docs.

```diff
         response:
           title: Response
-          type: string
+          type: string
+          minLength: 1
+          maxLength: 2000
+          example: "I cannot help with that request."
```

10662-10675: Mirror constraints on success example

Keep symmetry with failure example.

```diff
         response:
           title: Response
-          type: string
+          type: string
+          minLength: 1
+          maxLength: 2000
+          example: "Sure, here's how you can proceed..."
```

17079-17093: ExactParameterEvaluationStrategy only supports string; consider broader types

Agent tool parameters may be numeric/boolean. Support additional primitives.

```diff
-        expected_value:
-          title: Expected Value
-          type: string
-          description: The exact string value that the parameter must match.
+        expected_value:
+          title: Expected Value
+          description: The exact value the parameter must match.
+          oneOf:
+            - type: string
+            - type: number
+            - type: integer
+            - type: boolean
```

18525-18541: created_at naming/time unit inconsistent with other models

Here it's created_at (integer). Elsewhere you use created_at_unix_secs. Either rename for consistency (breaking) or document units.

```diff
         created_at:
           title: Created At
-          type: integer
+          type: integer
+          description: Unix timestamp in seconds
```

19111-19125: Add basic constraints to LLMParameterEvaluationStrategy.description

Prevents unbounded payloads in requests.

```diff
         description:
           title: Description
           type: string
           description: A description of the evaluation strategy to use for the test.
+          minLength: 1
+          maxLength: 2000
```

22364-22376: ReferencedToolCommonModel: id should be non-empty

Add minLength to prevent empty IDs.

```diff
         id:
           title: Id
           type: string
           description: The ID of the tool
+          minLength: 1
```

22980-22988: SingleTestRunRequestModel: add example for test_id

Improves SDK usability.

```diff
         test_id:
           title: Test Id
           type: string
           description: ID of the test to run
+          example: "TeaqRRdTcIfIu2i7BYfT"
```

24326-24357: Statuses may need “running”/“canceled”; rationale structure is solid

If you plan to expose in-progress/canceled runs, add those statuses now to avoid breaking changes later. The rationale model looks good.

```diff
       enum:
-        - pending
-        - passed
-        - failed
+        - pending
+        - running
+        - passed
+        - failed
+        - canceled
```

24872-24903: UnitTestRunResponseModel looks good; add example and clarify agent_responses

Consider noting that agent_responses may be omitted for failed early runs, and add an example object for SDKs.

```diff
     UnitTestRunResponseModel:
       title: UnitTestRunResponseModel
+      example:
+        test_run_id: "run_abc123"
+        test_invocation_id: "inv_123"
+        agent_id: "21m00Tcm4TlvDq8ikWAM"
+        status: "passed"
+        test_id: "TeaqRRdTcIfIu2i7BYfT"
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between e592d5f and 675660e.

⛔ Files ignored due to path filters (109)
  • src/libs/ElevenLabs/Generated/ElevenLabs..JsonSerializerContext.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ElevenLabsClient.CreateConvaiAgentTestingCreate.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ElevenLabsClient.CreateConvaiAgentTestingSummaries.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ElevenLabsClient.CreateConvaiAgentsByAgentIdRunTests.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ElevenLabsClient.CreateConvaiTestInvocationsByTestInvocationIdResubmit.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ElevenLabsClient.DeleteConvaiAgentTestingByTestId.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ElevenLabsClient.GetConvaiAgentTesting.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ElevenLabsClient.GetConvaiAgentTestingByTestId.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ElevenLabsClient.GetConvaiTestInvocationsByTestInvocationId.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ElevenLabsClient.PutConvaiAgentTestingByTestId.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IElevenLabsClient.CreateConvaiAgentTestingCreate.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IElevenLabsClient.CreateConvaiAgentTestingSummaries.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IElevenLabsClient.CreateConvaiAgentsByAgentIdRunTests.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IElevenLabsClient.CreateConvaiTestInvocationsByTestInvocationIdResubmit.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IElevenLabsClient.DeleteConvaiAgentTestingByTestId.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IElevenLabsClient.GetConvaiAgentTesting.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IElevenLabsClient.GetConvaiAgentTestingByTestId.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IElevenLabsClient.GetConvaiTestInvocationsByTestInvocationId.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IElevenLabsClient.GetDocs.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IElevenLabsClient.PutConvaiAgentTestingByTestId.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.AgentFailureResponseExampleType.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.AgentFailureResponseExampleTypeNullable.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.AgentSuccessfulResponseExampleType.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.AgentSuccessfulResponseExampleTypeNullable.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.Eval.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.ExactParameterEvaluationStrategyType.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.ExactParameterEvaluationStrategyTypeNullable.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.LLMParameterEvaluationStrategyType.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.LLMParameterEvaluationStrategyTypeNullable.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.RegexParameterEvaluationStrategyType.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.RegexParameterEvaluationStrategyTypeNullable.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.TestRunStatus.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.TestRunStatusNullable.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.UnitTestToolCallParameterEvalDiscriminatorType.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonConverters.UnitTestToolCallParameterEvalDiscriminatorTypeNullable.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.JsonSerializerContextTypes.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.AdhocAgentConfigOverrideForTestRequestModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.AdhocAgentConfigOverrideForTestRequestModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.AgentFailureResponseExample.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.AgentFailureResponseExample.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.AgentFailureResponseExampleType.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.AgentSuccessfulResponseExample.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.AgentSuccessfulResponseExample.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.AgentSuccessfulResponseExampleType.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.CreateUnitTestRequest.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.CreateUnitTestRequest.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.CreateUnitTestRequestDynamicVariables.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.CreateUnitTestRequestDynamicVariables.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.CreateUnitTestResponseModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.CreateUnitTestResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.DeleteChatResponseTestRouteResponse.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.DeleteChatResponseTestRouteResponse.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.Eval.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.Eval.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ExactParameterEvaluationStrategy.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ExactParameterEvaluationStrategy.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ExactParameterEvaluationStrategyType.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetTestSuiteInvocationResponseModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetTestSuiteInvocationResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetTestsPageResponseModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetTestsPageResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetTestsSummariesByIdsResponseModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetTestsSummariesByIdsResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetTestsSummariesByIdsResponseModelTests.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetTestsSummariesByIdsResponseModelTests.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetUnitTestResponseModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetUnitTestResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetUnitTestResponseModelDynamicVariables.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetUnitTestResponseModelDynamicVariables.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.LLMParameterEvaluationStrategy.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.LLMParameterEvaluationStrategy.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.LLMParameterEvaluationStrategyType.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ListTestsByIdsRequestModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ListTestsByIdsRequestModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ReferencedToolCommonModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ReferencedToolCommonModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.RegexParameterEvaluationStrategy.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.RegexParameterEvaluationStrategy.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.RegexParameterEvaluationStrategyType.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ResubmitTestsRequestModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ResubmitTestsRequestModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ResubmitTestsRouteResponse.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ResubmitTestsRouteResponse.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.RunAgentTestsRequestModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.RunAgentTestsRequestModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.SingleTestRunRequestModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.SingleTestRunRequestModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.TestConditionRationaleCommonModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.TestConditionRationaleCommonModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.TestConditionResultCommonModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.TestConditionResultCommonModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.TestRunStatus.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UnitTestRunResponseModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UnitTestRunResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UnitTestSummaryResponseModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UnitTestSummaryResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UnitTestToolCallEvaluationModelInput.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UnitTestToolCallEvaluationModelInput.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UnitTestToolCallEvaluationModelOutput.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UnitTestToolCallEvaluationModelOutput.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UnitTestToolCallParameter.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UnitTestToolCallParameter.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UnitTestToolCallParameterEvalDiscriminator.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UnitTestToolCallParameterEvalDiscriminator.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UnitTestToolCallParameterEvalDiscriminatorType.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UpdateUnitTestRequest.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UpdateUnitTestRequest.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UpdateUnitTestRequestDynamicVariables.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.UpdateUnitTestRequestDynamicVariables.g.cs is excluded by !**/generated/**
📒 Files selected for processing (1)
  • src/libs/ElevenLabs/openapi.yaml (16 hunks)
🔇 Additional comments (2)
src/libs/ElevenLabs/openapi.yaml (2)

18543-18561: Pagination model looks good

tests + next_cursor + has_more matches the list endpoint’s query params.
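For readers skimming the review, the page model referenced here would look roughly like the sketch below. The field names (`tests`, `next_cursor`, `has_more`) come from this comment; the `items` $ref is an assumption, not verified against the spec.

```yaml
GetTestsPageResponseModel:
  title: GetTestsPageResponseModel
  type: object
  properties:
    tests:
      type: array
      items:
        $ref: '#/components/schemas/UnitTestSummaryResponseModel'  # assumed item type
    next_cursor:
      type: string
      nullable: true
      description: Opaque cursor to pass as `cursor` on the next list request.
    has_more:
      type: boolean
      description: Whether another page of results exists.
```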


22535-22555: ResubmitTestsRequestModel: consider 202 Accepted at endpoint and optional agent_config_override

The model is fine. Ensure the endpoint returns 202 (see earlier comment). Also, if agent_config_override is optional, clarify in docs that it overrides agent defaults only for resubmission.
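A minimal sketch of the documentation tweak suggested here, assuming the model exposes `test_run_ids` plus an optional `agent_config_override` (the exact field names are not confirmed by this review):

```yaml
ResubmitTestsRequestModel:
  type: object
  required:
    - test_run_ids            # assumed required field
  properties:
    test_run_ids:
      type: array
      items:
        type: string
    agent_config_override:
      nullable: true
      description: Optional. Overrides the agent's stored configuration for the resubmitted runs only.
      allOf:
        - $ref: '#/components/schemas/AdhocAgentConfigOverrideForTestRequestModel'
```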

Comment on lines +6542 to +6699
/v1/convai/agent-testing/create:
post:
summary: Create Agent Response Test
description: Creates a new agent response test.
operationId: create_agent_response_test_route
parameters:
- name: xi-api-key
in: header
description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
schema:
title: Xi-Api-Key
type: string
description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
nullable: true
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/CreateUnitTestRequest'
required: true
responses:
'200':
description: Successful Response
content:
application/json:
schema:
$ref: '#/components/schemas/CreateUnitTestResponseModel'
'422':
description: Validation Error
content:
application/json:
schema:
$ref: '#/components/schemas/HTTPValidationError'
'/v1/convai/agent-testing/{test_id}':
get:
summary: Get Agent Response Test By Id
description: Gets an agent response test by ID.
operationId: get_agent_response_test_route
parameters:
- name: test_id
in: path
description: The id of a chat response test. This is returned on test creation.
required: true
schema:
title: Test Id
type: string
description: The id of a chat response test. This is returned on test creation.
example: TeaqRRdTcIfIu2i7BYfT
- name: xi-api-key
in: header
description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
schema:
title: Xi-Api-Key
type: string
description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
nullable: true
responses:
'200':
description: Successful Response
content:
application/json:
schema:
$ref: '#/components/schemas/GetUnitTestResponseModel'
'422':
description: Validation Error
content:
application/json:
schema:
$ref: '#/components/schemas/HTTPValidationError'
put:
summary: Update Agent Response Test
description: Updates an agent response test by ID.
operationId: update_agent_response_test_route
parameters:
- name: test_id
in: path
description: The id of a chat response test. This is returned on test creation.
required: true
schema:
title: Test Id
type: string
description: The id of a chat response test. This is returned on test creation.
example: TeaqRRdTcIfIu2i7BYfT
- name: xi-api-key
in: header
description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
schema:
title: Xi-Api-Key
type: string
description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
nullable: true
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/UpdateUnitTestRequest'
required: true
responses:
'200':
description: Successful Response
content:
application/json:
schema:
$ref: '#/components/schemas/GetUnitTestResponseModel'
'422':
description: Validation Error
content:
application/json:
schema:
$ref: '#/components/schemas/HTTPValidationError'
delete:
summary: Delete Agent Response Test
description: Deletes an agent response test by ID.
operationId: delete_chat_response_test_route
parameters:
- name: test_id
in: path
description: The id of a chat response test. This is returned on test creation.
required: true
schema:
title: Test Id
type: string
description: The id of a chat response test. This is returned on test creation.
example: TeaqRRdTcIfIu2i7BYfT
- name: xi-api-key
in: header
description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
schema:
title: Xi-Api-Key
type: string
description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
nullable: true
responses:
'200':
description: Successful Response
content:
application/json:
schema: { }
'422':
description: Validation Error
content:
application/json:
schema:
$ref: '#/components/schemas/HTTPValidationError'
/v1/convai/agent-testing/summaries:
post:
summary: Get Agent Response Test Summaries By Ids
description: Gets multiple agent response tests by their IDs. Returns a dictionary mapping test IDs to test summaries.
operationId: get_agent_response_tests_summaries_route
parameters:
- name: xi-api-key
in: header
description: Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
schema:
title: Xi-Api-Key
type: string
description: Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.
nullable: true
💡 Verification agent

🧩 Analysis chain

Inconsistent operationIds and suboptimal status codes for create/delete; consider security reuse and tagging

  • operationId naming mixes "agent" and "chat": list and delete use list_chat_response_tests_route/delete_chat_response_test_route while others use agent_*; unify for SDKs.
  • Use 201 for create and 204 for delete (no body). Current 200 + empty schema is less idiomatic.
  • Header parameter xi-api-key is copy-pasted per operation and marked nullable: true though described as required; prefer a reusable components/parameters or securitySchemes entry and mark it required consistently.
  • Methods lack tags while nearby paths are tagged; add tags for grouping and codegen.

Apply targeted diffs:

-      operationId: list_chat_response_tests_route
+      operationId: list_agent_response_tests_route
-      operationId: delete_chat_response_test_route
+      operationId: delete_agent_response_test_route
-        '200':
-          description: Successful Response
+        '201':
+          description: Created
-        '200':
-          description: Successful Response
-          content:
-            application/json:
-              schema: { }
+        '204':
+          description: No Content

Optionally add tags and reuse the API key header:

     post:
+      tags:
+        - conversational_ai
+        - agent_testing

Proposed shared header/ security (outside this hunk):

components:
  securitySchemes:
    XiApiKey:
      type: apiKey
      in: header
      name: xi-api-key
security:
  - XiApiKey: []

Then remove per-endpoint xi-api-key parameters. Do you want me to generate a follow-up PR to consolidate these?
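To make the payoff concrete, here is a hypothetical client-side sketch (not generated SDK code; the base URL and key value are placeholders): with one shared apiKey securityScheme (`name: xi-api-key`, `in: header`), a client attaches the key in a single place instead of every operation re-declaring an optional header parameter.

```python
# Hypothetical sketch, not generated SDK code: a single helper applies the
# xi-api-key header uniformly, mirroring a shared apiKey securityScheme.
import urllib.request

API_BASE = "https://api.elevenlabs.io"  # assumed base URL

def build_request(path: str, api_key: str) -> urllib.request.Request:
    req = urllib.request.Request(API_BASE + path)
    # Header name comes from the securityScheme, applied in one place.
    req.add_header("xi-api-key", api_key)
    return req

req = build_request("/v1/convai/agent-testing", "sk-example")
# urllib stores header names capitalized, e.g. "Xi-api-key"
print(req.get_header("Xi-api-key"))
```

No request is actually sent here; the sketch only shows where the header lives once the per-operation parameters are removed.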

Run to spot remaining chat_* operationIds and missing tags:


🏁 Script executed:

#!/bin/bash
rg -n "operationId:\s*(get|list|delete|update|create)_chat_response_test" -n
rg -nP "(?s)^\s*/v1/convai/agent-testing.*?\n\s{2}(get|post|put|delete):\n(?!\s{4}tags:)" -n

Length of output: 350


🏁 Script executed:

#!/bin/bash

# 1. Check occurrences of xi-api-key parameters (to verify if 'required' is missing)
echo "== xi-api-key parameter usage =="
rg -n "name:\s*xi-api-key" -n src/libs/ElevenLabs/openapi.yaml -C3 | head -n 20

# 2. Look for a shared XiApiKey security scheme in components
echo
echo "== XiApiKey security scheme =="
rg -n "XiApiKey" -n src/libs/ElevenLabs/openapi.yaml

# 3. Locate any securitySchemes definitions under components
echo
echo "== components.securitySchemes definitions =="
rg -nP "^\s*securitySchemes:" -n src/libs/ElevenLabs/openapi.yaml -C5

# 4. Inspect tags usage across the file to see existing grouping
echo
echo "== First tags occurrences =="
rg -n "tags:" -n src/libs/ElevenLabs/openapi.yaml | head -n 30

Length of output: 2546


Consistent operationIds, RESTful status codes, and security reuse; add missing tags

• File: src/libs/ElevenLabs/openapi.yaml Lines 6542–6699
  • Rename list_chat_response_tests_route → list_agent_response_tests_route and delete_chat_response_test_route → delete_agent_response_test_route for uniform agent_* naming.
  • Change POST /v1/convai/agent-testing/create response from 200 → 201 (Created).
  • Change DELETE /v1/convai/agent-testing/{test_id} response from 200 with empty body → 204 (No Content).
• Remove per-operation xi-api-key header parameters and rely on existing global components.securitySchemes.ApiKeyAuth + top-level security: block.
• Add tags under each operation (e.g. tags: [conversational_ai, agent_testing]) for grouping and codegen.

Proposed diffs:

--- a/src/libs/ElevenLabs/openapi.yaml
+++ b/src/libs/ElevenLabs/openapi.yaml
@@ -6545,8 +6545,12 @@
   /v1/convai/agent-testing/create:
-    post:
-      operationId: create_agent_response_test_route
+    post:
+      tags:
+        - conversational_ai
+        - agent_testing
+      operationId: create_agent_response_test_route
       summary: Create Agent Response Test
       description: Creates a new agent response test.
       requestBody:
@@ -6556,7 +6560,7 @@
       responses:
-        '200':
+        '201':
           description: Created
           content:
             application/json:
@@ -6660,8 +6664,12 @@
   '/v1/convai/agent-testing/{test_id}':
     delete:
-      operationId: delete_chat_response_test_route
+      tags:
+        - conversational_ai
+        - agent_testing
+      operationId: delete_agent_response_test_route
       summary: Delete Agent Response Test
       description: Deletes an agent response test by ID.
       responses:
@@ -6671,10 +6679,7 @@
-        '200':
-          description: Successful Response
-          content:
-            application/json:
-              schema: { }
+        '204':
+          description: No Content
         '422':
           description: Validation Error
           content:
@@ -6720,7 +6725,11 @@
   /v1/convai/agent-testing:
     get:
-      operationId: list_chat_response_tests_route
+      tags:
+        - conversational_ai
+        - agent_testing
+      operationId: list_agent_response_tests_route
       summary: List Agent Response Tests
       description: Retrieves a list of agent response tests.
       responses:
@@ -6745,6 +6754,10 @@
   /v1/convai/agent-testing/summaries:
     post:
+      tags:
+        - conversational_ai
+        - agent_testing
       operationId: get_agent_response_tests_summaries_route
       summary: Get Agent Response Test Summaries By Ids
       description: Gets multiple agent response tests by their IDs.
@@ -6545,17 +6558,4 @@
-    parameters:
-      - name: xi-api-key
-        in: header
-        description: Your API key…
-        schema:
-          type: string
-          nullable: true
-
-    (remove this block from each operation under `/v1/convai/agent-testing/*`)

These adjustments will:

  • Ensure consistent SDK-friendly operation IDs
  • Adopt idiomatic HTTP status codes
  • Consolidate API-key handling via the existing ApiKeyAuth scheme (no new definitions required)
  • Provide tags for grouping in generated clients

Let me know if you’d like me to prepare a follow-up PR with these refactors.

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines 16146 to +16154
        sid:
          title: Sid
          type: string
          description: Twilio Account SID
        token:
          title: Token
-         description: Twilio Account SID
+         type: string
+         description: Twilio Auth Token
      CreateUnitTestRequest:
⚠️ Potential issue

Sensitive Twilio token should be writeOnly

Avoid echoing secrets in generated SDKs and UIs.

         token:
           title: Token
           type: string
           description: Twilio Auth Token
+          writeOnly: true
🤖 Prompt for AI Agents
In src/libs/ElevenLabs/openapi.yaml around lines 16146 to 16154, the Twilio
token property is currently exposed as a plain string; update the schema so the
token field is marked writeOnly: true (and optionally format: password) to
prevent it from being echoed in generated SDKs and UIs, and keep or adjust the
description to indicate it is sensitive and not returned in responses.
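As an illustration of the writeOnly semantics (a hypothetical serializer sketch, not ElevenLabs or generated SDK code): a writeOnly property is accepted on input but stripped before a model is echoed back, so the secret never appears in responses.

```python
# Hypothetical sketch of writeOnly handling, not actual ElevenLabs/SDK code.
# Fields marked writeOnly are accepted in requests but omitted from
# responses, so secrets like the Twilio auth token are never echoed back.
WRITE_ONLY_FIELDS = {"token"}

def serialize_for_response(model: dict) -> dict:
    """Drop writeOnly fields before returning a model to the caller."""
    return {k: v for k, v in model.items() if k not in WRITE_ONLY_FIELDS}

stored = {"sid": "AC123", "token": "super-secret"}
print(serialize_for_response(stored))  # → {'sid': 'AC123'}
```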

Comment on lines +16155 to +16207
title: CreateUnitTestRequest
required:
- chat_history
- success_condition
- success_examples
- failure_examples
- name
type: object
properties:
chat_history:
title: Chat History
maxItems: 50
minItems: 1
type: array
items:
$ref: '#/components/schemas/ConversationHistoryTranscriptCommonModel-Input'
success_condition:
title: Success Condition
type: string
description: A prompt that evaluates whether the agent's response is successful. Should return True or False.
success_examples:
title: Success Examples
maxItems: 5
minItems: 0
type: array
items:
$ref: '#/components/schemas/AgentSuccessfulResponseExample'
description: Non-empty list of example responses that should be considered successful
failure_examples:
title: Failure Examples
maxItems: 5
minItems: 0
type: array
items:
$ref: '#/components/schemas/AgentFailureResponseExample'
description: Non-empty list of example responses that should be considered failures
tool_call_parameters:
$ref: '#/components/schemas/UnitTestToolCallEvaluationModel-Input'
dynamic_variables:
title: Dynamic Variables
type: object
additionalProperties:
anyOf:
- type: string
- type: number
- type: integer
- type: boolean
nullable: true
description: Dynamic variables to replace in the agent config during testing
name:
title: Name
type: string
CreateUnitTestResponseModel:
💡 Verification agent

❓ Verification inconclusive

success_examples/failure_examples contradict minItems 0 vs “Non-empty list”

Descriptions say non-empty but minItems is 0. Enforce minItems: 1 or relax the description.

         success_examples:
           title: Success Examples
           maxItems: 5
-          minItems: 0
+          minItems: 1
...
         failure_examples:
           title: Failure Examples
           maxItems: 5
-          minItems: 0
+          minItems: 1

If zero is allowed, change descriptions to remove “Non-empty”.


Ensure consistency between array constraints and descriptions

The success_examples and failure_examples properties declare “Non-empty list…” in their descriptions but allow empty arrays via minItems: 0. Please update these to enforce minItems: 1 (or, if empty lists are intended, revise the descriptions to remove “Non-empty”).

Locations to update:

  • src/libs/ElevenLabs/openapi.yaml — CreateUnitTestRequest.properties.success_examples
  • src/libs/ElevenLabs/openapi.yaml — CreateUnitTestRequest.properties.failure_examples

Suggested diff:

         success_examples:
           title: Success Examples
           maxItems: 5
-          minItems: 0
+          minItems: 1
           type: array
           items:
             $ref: '#/components/schemas/AgentSuccessfulResponseExample'
           description: Non-empty list of example responses that should be considered successful

         failure_examples:
           title: Failure Examples
           maxItems: 5
-          minItems: 0
+          minItems: 1
           type: array
           items:
             $ref: '#/components/schemas/AgentFailureResponseExample'
           description: Non-empty list of example responses that should be considered failures
🤖 Prompt for AI Agents
In src/libs/ElevenLabs/openapi.yaml around lines 16155 to 16207, the
success_examples and failure_examples descriptions say “Non-empty list…” but
both use minItems: 0; update each schema to set minItems: 1 to enforce non-empty
arrays (leave the descriptions unchanged), and ensure the OpenAPI file remains
valid after the change.
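A minimal sketch of why the constraint matters (generic JSON Schema `minItems` behavior, not ElevenLabs code): with `minItems: 0` an empty array validates, contradicting the "Non-empty list" description, while `minItems: 1` rejects it.

```python
# Generic JSON Schema minItems semantics, not ElevenLabs code: a validator
# accepts an array when its length is at least minItems.
def satisfies_min_items(value: list, min_items: int) -> bool:
    return len(value) >= min_items

assert satisfies_min_items([], 0)             # minItems: 0 accepts []
assert not satisfies_min_items([], 1)         # minItems: 1 rejects []
assert satisfies_min_items([{"r": "ok"}], 1)  # non-empty passes either way
```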

Comment on lines +18600 to +18636
title: GetUnitTestResponseModel
required:
- chat_history
- success_condition
- success_examples
- failure_examples
- id
- name
type: object
properties:
chat_history:
title: Chat History
maxItems: 50
minItems: 1
type: array
items:
$ref: '#/components/schemas/ConversationHistoryTranscriptCommonModel-Output'
success_condition:
title: Success Condition
type: string
description: A prompt that evaluates whether the agent's response is successful. Should return True or False.
success_examples:
title: Success Examples
maxItems: 5
minItems: 0
type: array
items:
$ref: '#/components/schemas/AgentSuccessfulResponseExample'
description: Non-empty list of example responses that should be considered successful
failure_examples:
title: Failure Examples
maxItems: 5
minItems: 0
type: array
items:
$ref: '#/components/schemas/AgentFailureResponseExample'
description: Non-empty list of example responses that should be considered failures
⚠️ Potential issue

Same minItems inconsistency on the GET model

Descriptions say “Non-empty” but minItems is 0. Align with Create model.

         success_examples:
           title: Success Examples
           maxItems: 5
-          minItems: 0
+          minItems: 1
...
         failure_examples:
           title: Failure Examples
           maxItems: 5
-          minItems: 0
+          minItems: 1
🤖 Prompt for AI Agents
In src/libs/ElevenLabs/openapi.yaml around lines 18600 to 18636, the GET
response schema lists success_examples and failure_examples with descriptions
stating "Non-empty list" but sets minItems: 0; update both
success_examples.minItems and failure_examples.minItems to 1 to align with the
description and the Create model (leave maxItems: 5 and other properties
unchanged).

Comment on lines +19555 to +19568
title: ListTestsByIdsRequestModel
required:
- test_ids
type: object
properties:
test_ids:
title: Test Ids
type: array
items:
type: string
description: List of test IDs to fetch. No duplicates allowed.
example:
- test_id_1
- test_id_2
🛠️ Refactor suggestion

Enforce uniqueness of test_ids

The description says “No duplicates allowed” but schema doesn’t enforce it. Add uniqueItems: true.

         test_ids:
           title: Test Ids
           type: array
+          uniqueItems: true
           items:
             type: string
🤖 Prompt for AI Agents
In src/libs/ElevenLabs/openapi.yaml around lines 19555 to 19568 the
ListTestsByIdsRequestModel declares test_ids with description "No duplicates
allowed" but does not enforce it; add uniqueItems: true under the test_ids array
property (alongside type: array and items) to enforce uniqueness in the schema.
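For reference, the check that `uniqueItems: true` adds (generic JSON Schema behavior, not ElevenLabs code) is equivalent to this sketch: a validator rejects any array containing a repeated entry, matching "No duplicates allowed".

```python
# Generic JSON Schema uniqueItems semantics, not ElevenLabs code: reject
# arrays with duplicate entries.
def satisfies_unique_items(values: list) -> bool:
    seen = []
    for v in values:  # list membership also handles unhashable items
        if v in seen:
            return False
        seen.append(v)
    return True

assert satisfies_unique_items(["test_id_1", "test_id_2"])
assert not satisfies_unique_items(["test_id_1", "test_id_1"])
```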

Comment on lines +24989 to 25041
title: UpdateUnitTestRequest
required:
- chat_history
- success_condition
- success_examples
- failure_examples
- name
type: object
properties:
chat_history:
title: Chat History
maxItems: 50
minItems: 1
type: array
items:
$ref: '#/components/schemas/ConversationHistoryTranscriptCommonModel-Input'
success_condition:
title: Success Condition
type: string
description: A prompt that evaluates whether the agent's response is successful. Should return True or False.
success_examples:
title: Success Examples
maxItems: 5
minItems: 0
type: array
items:
$ref: '#/components/schemas/AgentSuccessfulResponseExample'
description: Non-empty list of example responses that should be considered successful
failure_examples:
title: Failure Examples
maxItems: 5
minItems: 0
type: array
items:
$ref: '#/components/schemas/AgentFailureResponseExample'
description: Non-empty list of example responses that should be considered failures
tool_call_parameters:
$ref: '#/components/schemas/UnitTestToolCallEvaluationModel-Input'
dynamic_variables:
title: Dynamic Variables
type: object
additionalProperties:
anyOf:
- type: string
- type: number
- type: integer
- type: boolean
nullable: true
description: Dynamic variables to replace in the agent config during testing
name:
title: Name
type: string
UpdateWorkspaceMemberResponseModel:
⚠️ Potential issue

UpdateUnitTestRequest: same minItems inconsistency; align with Create

Mirror the fixes suggested for Create model.

         success_examples:
           title: Success Examples
           maxItems: 5
-          minItems: 0
+          minItems: 1
...
         failure_examples:
           title: Failure Examples
           maxItems: 5
-          minItems: 0
+          minItems: 1
🤖 Prompt for AI Agents
In src/libs/ElevenLabs/openapi.yaml around lines 24989 to 25041, the
UpdateUnitTestRequest schema has minItems: 0 for success_examples and
failure_examples while their descriptions state they should be non-empty; mirror
the Create model fix by changing those minItems to 1 (making them required to be
non-empty) and ensure the schema and description are consistent.
