feat: Add endpoints and schemas to ElevenLabs OpenAPI (openapi.yaml) #169
**Walkthrough**

Adds numerous REST endpoints and schema models to the ElevenLabs OpenAPI for agent testing, conversations, knowledge base with RAG indexing, tools/agents management, MCP servers, batch calling/SIP, and various evaluation/metadata models. Changes are confined to schema and endpoint additions in src/libs/ElevenLabs/openapi.yaml.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant AgentTestingAPI as Agent Testing API
    participant EvalEngine as Evaluation Engine
    participant Store as Test Store
    rect rgb(235, 245, 255)
        note over Client,AgentTestingAPI: Create & manage tests (new)
        Client->>AgentTestingAPI: POST /v1/convai/agent-testing/create
        AgentTestingAPI->>Store: Persist Test Definition
        Store-->>AgentTestingAPI: Test ID
        AgentTestingAPI-->>Client: UnitTestRunResponseModel (id, status)
    end
    rect rgb(245, 235, 255)
        note over Client,EvalEngine: Invocation & evaluation (new)
        Client->>AgentTestingAPI: GET /v1/convai/test-invocations/{id}
        AgentTestingAPI->>EvalEngine: Fetch Results/Evaluations
        EvalEngine-->>AgentTestingAPI: Metrics, rationales
        AgentTestingAPI-->>Client: Summary/Details
    end
    opt Update/Delete
        Client->>AgentTestingAPI: PUT/DELETE /v1/convai/agent-testing/{test_id}
        AgentTestingAPI->>Store: Update/Remove
        Store-->>AgentTestingAPI: Ack
        AgentTestingAPI-->>Client: 200/204
    end
```

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant KBAPI as Knowledge Base API
    participant Indexer as RAG Indexer
    participant Store as KB Store
    Client->>KBAPI: POST /v1/convai/knowledge-base
    KBAPI->>Store: Save Document
    Store-->>KBAPI: Doc ID
    KBAPI-->>Client: Document metadata
    rect rgb(235, 255, 240)
        note over Client,Indexer: RAG index operations (new)
        Client->>KBAPI: POST /knowledge-base/{doc_id}/rag-index
        KBAPI->>Indexer: Start Index Build
        Indexer-->>KBAPI: Status (queued/running)
        KBAPI-->>Client: Index status
        Client->>KBAPI: GET /knowledge-base/{doc_id}/rag-index
        KBAPI->>Indexer: Query Status
        Indexer-->>KBAPI: Status/details
        KBAPI-->>Client: Index info
    end
```

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant AgentsAPI as Agents API
    participant Simulator as Conversation Simulator
    Client->>AgentsAPI: POST /agents/{agent_id}/simulate-conversation
    AgentsAPI->>Simulator: Run Simulation
    Simulator-->>AgentsAPI: Transcript, metrics
    AgentsAPI-->>Client: Simulation result
    alt Stream
        Client->>AgentsAPI: POST /agents/{agent_id}/simulate-conversation/stream
        AgentsAPI-->>Client: Event stream (tokens/events)
    end
```
**Estimated code review effort:** 🎯 4 (Complex) | ⏱️ ~60–90 minutes
Actionable comments posted: 6
🧹 Nitpick comments (11)
src/libs/ElevenLabs/openapi.yaml (11)
**6850-6882: Resubmit endpoint likely asynchronous; return 202 and drop empty body**

Resubmission suggests an async re-queue. Prefer 202 Accepted with no response body for clarity.

```diff
-        '200':
-          description: Successful Response
-          content:
-            application/json:
-              schema: { }
+        '202':
+          description: Accepted
```
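If resubmission does become 202 Accepted, clients would poll `GET /v1/convai/test-invocations/{id}` until the run settles. A minimal polling sketch in Python; the `fetch_status` callable, statuses, and timeout values are illustrative, not part of the generated SDK:

```python
import time

TERMINAL_STATUSES = {"passed", "failed"}  # assumed terminal states

def wait_for_invocation(fetch_status, invocation_id, interval=1.0, timeout=60.0):
    """Poll fetch_status(invocation_id) until a terminal status or timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(invocation_id)
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(interval)
    raise TimeoutError(f"invocation {invocation_id} did not settle in {timeout}s")

# Stubbed status source standing in for the real API call:
states = iter(["pending", "pending", "passed"])
assert wait_for_invocation(lambda _id: next(states), "inv_123", interval=0) == "passed"
```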
**10429-10439: AdhocAgentConfigOverrideForTestRequestModel: verify required fields**

Both conversation_config and platform_settings are required. Is overriding platform settings always necessary for tests? If not, consider making it optional to reduce payload size.

```diff
-      required:
-        - conversation_config
-        - platform_settings
+      required:
+        - conversation_config
```
**10569-10583: Add constraints/examples to failure example text**

Optional, but adding minLength/maxLength and an example improves validation and SDK docs.

```diff
         response:
           title: Response
-          type: string
+          type: string
+          minLength: 1
+          maxLength: 2000
+          example: "I cannot help with that request."
```
**10662-10675: Mirror constraints on success example**

Keep symmetry with the failure example.

```diff
         response:
           title: Response
-          type: string
+          type: string
+          minLength: 1
+          maxLength: 2000
+          example: "Sure, here's how you can proceed..."
```
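A quick sanity check of the suggested bounds; the 1–2000 range is an assumption carried over from the diffs above, not a value from the spec:

```python
def within_bounds(response: str, min_len: int = 1, max_len: int = 2000) -> bool:
    """Mirror of the suggested minLength/maxLength constraints."""
    return min_len <= len(response) <= max_len

assert within_bounds("Sure, here's how you can proceed...")
assert not within_bounds("")             # violates minLength: 1
assert not within_bounds("x" * 2001)     # violates maxLength: 2000
```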
**17079-17093: ExactParameterEvaluationStrategy only supports string; consider broader types**

Agent tool parameters may be numeric/boolean. Support additional primitives.

```diff
-        expected_value:
-          title: Expected Value
-          type: string
-          description: The exact string value that the parameter must match.
+        expected_value:
+          title: Expected Value
+          description: The exact value the parameter must match.
+          oneOf:
+            - type: string
+            - type: number
+            - type: integer
+            - type: boolean
```
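What the broader type set buys in practice: exact-match evaluation no longer has to stringify numbers and booleans. A sketch of the comparison semantics (the helper name is illustrative; note that Python needs an explicit bool check because `True == 1`):

```python
def exact_match(expected, actual) -> bool:
    """Return True when a tool-call parameter exactly equals the expectation."""
    # bool is an int subclass in Python, so compare bool-ness explicitly to
    # avoid True == 1 counting as a match.
    if isinstance(expected, bool) != isinstance(actual, bool):
        return False
    return expected == actual

assert exact_match("us-east-1", "us-east-1")
assert exact_match(42, 42)
assert not exact_match(True, 1)   # boolean vs integer must not conflate
assert not exact_match("42", 42)  # string "42" is not the number 42
```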
**18525-18541: created_at naming/time unit inconsistent with other models**

Here it's created_at (integer). Elsewhere you use created_at_unix_secs. Either rename for consistency (breaking) or document the units.

```diff
         created_at:
           title: Created At
-          type: integer
+          type: integer
+          description: Unix timestamp in seconds
```
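Why documenting the unit matters: the same integer read as seconds versus milliseconds lands half a century apart. A quick illustration (the sample timestamp is made up):

```python
from datetime import datetime, timezone

created_at = 1700000000  # value as the API might return it (assumed seconds)

read_as_seconds = datetime.fromtimestamp(created_at, tz=timezone.utc)
read_as_millis = datetime.fromtimestamp(created_at / 1000, tz=timezone.utc)

assert read_as_seconds.year == 2023  # plausible creation time
assert read_as_millis.year == 1970   # wildly wrong if the unit is misread
```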
**19111-19125: Add basic constraints to LLMParameterEvaluationStrategy.description**

Prevents unbounded payloads in requests.

```diff
         description:
           title: Description
           type: string
           description: A description of the evaluation strategy to use for the test.
+          minLength: 1
+          maxLength: 2000
```
**22364-22376: ReferencedToolCommonModel: id should be non-empty**

Add minLength to prevent empty IDs.

```diff
         id:
           title: Id
           type: string
           description: The ID of the tool
+          minLength: 1
```
**22980-22988: SingleTestRunRequestModel: add example for test_id**

Improves SDK usability.

```diff
         test_id:
           title: Test Id
           type: string
           description: ID of the test to run
+          example: "TeaqRRdTcIfIu2i7BYfT"
```
**24326-24357: Statuses may need "running"/"canceled"; rationale structure is solid**

If you plan to expose in-progress/canceled runs, add those statuses now to avoid breaking changes later. The rationale model looks good.

```diff
       enum:
-        - pending
-        - passed
-        - failed
+        - pending
+        - running
+        - passed
+        - failed
+        - canceled
```
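One way to see why adding the statuses now is cheap: the enum defines a small lifecycle, and the new states only extend it without touching existing terminal states. A hypothetical transition table — the transitions are assumptions for illustration, not specified anywhere:

```python
# Assumed lifecycle if "running" and "canceled" join the existing statuses.
ALLOWED_TRANSITIONS = {
    "pending": {"running", "canceled"},
    "running": {"passed", "failed", "canceled"},
    "passed": set(),    # terminal
    "failed": set(),    # terminal
    "canceled": set(),  # terminal
}

def can_transition(src: str, dst: str) -> bool:
    return dst in ALLOWED_TRANSITIONS.get(src, set())

assert can_transition("pending", "running")
assert can_transition("running", "failed")
assert not can_transition("passed", "running")  # terminal states never restart
```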
**24872-24903: UnitTestRunResponseModel looks good; add example and clarify agent_responses**

Consider noting that agent_responses may be omitted for failed early runs, and add an example object for SDKs.

```diff
     UnitTestRunResponseModel:
       title: UnitTestRunResponseModel
+      example:
+        test_run_id: "run_abc123"
+        test_invocation_id: "inv_123"
+        agent_id: "21m00Tcm4TlvDq8ikWAM"
+        status: "passed"
+        test_id: "TeaqRRdTcIfIu2i7BYfT"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (109)

All 109 ignored files are generated sources under `src/libs/ElevenLabs/Generated/` — client methods, interfaces, JSON converters, and model classes for the new agent-testing endpoints and schemas — each excluded by the `!**/generated/**` path filter.
📒 Files selected for processing (1)
src/libs/ElevenLabs/openapi.yaml (16 hunks)
🔇 Additional comments (2)
src/libs/ElevenLabs/openapi.yaml (2)
**18543-18561: Pagination model looks good** — tests + next_cursor + has_more matches the list endpoint's query params.
**22535-22555: ResubmitTestsRequestModel: consider 202 Accepted at the endpoint and optional agent_config_override** — The model is fine. Ensure the endpoint returns 202 (see the earlier comment). Also, if agent_config_override is optional, clarify in the docs that it overrides agent defaults only for resubmission.
```yaml
  /v1/convai/agent-testing/create:
    post:
      summary: Create Agent Response Test
      description: Creates a new agent response test.
      operationId: create_agent_response_test_route
      parameters:
        - name: xi-api-key
          in: header
          description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
          schema:
            title: Xi-Api-Key
            type: string
            description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
            nullable: true
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CreateUnitTestRequest'
        required: true
      responses:
        '200':
          description: Successful Response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/CreateUnitTestResponseModel'
        '422':
          description: Validation Error
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/HTTPValidationError'
  '/v1/convai/agent-testing/{test_id}':
    get:
      summary: Get Agent Response Test By Id
      description: Gets an agent response test by ID.
      operationId: get_agent_response_test_route
      parameters:
        - name: test_id
          in: path
          description: The id of a chat response test. This is returned on test creation.
          required: true
          schema:
            title: Test Id
            type: string
            description: The id of a chat response test. This is returned on test creation.
            example: TeaqRRdTcIfIu2i7BYfT
        - name: xi-api-key
          in: header
          description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
          schema:
            title: Xi-Api-Key
            type: string
            description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
            nullable: true
      responses:
        '200':
          description: Successful Response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/GetUnitTestResponseModel'
        '422':
          description: Validation Error
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/HTTPValidationError'
    put:
      summary: Update Agent Response Test
      description: Updates an agent response test by ID.
      operationId: update_agent_response_test_route
      parameters:
        - name: test_id
          in: path
          description: The id of a chat response test. This is returned on test creation.
          required: true
          schema:
            title: Test Id
            type: string
            description: The id of a chat response test. This is returned on test creation.
            example: TeaqRRdTcIfIu2i7BYfT
        - name: xi-api-key
          in: header
          description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
          schema:
            title: Xi-Api-Key
            type: string
            description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
            nullable: true
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/UpdateUnitTestRequest'
        required: true
      responses:
        '200':
          description: Successful Response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/GetUnitTestResponseModel'
        '422':
          description: Validation Error
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/HTTPValidationError'
    delete:
      summary: Delete Agent Response Test
      description: Deletes an agent response test by ID.
      operationId: delete_chat_response_test_route
      parameters:
        - name: test_id
          in: path
          description: The id of a chat response test. This is returned on test creation.
          required: true
          schema:
            title: Test Id
            type: string
            description: The id of a chat response test. This is returned on test creation.
            example: TeaqRRdTcIfIu2i7BYfT
        - name: xi-api-key
          in: header
          description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
          schema:
            title: Xi-Api-Key
            type: string
            description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
            nullable: true
      responses:
        '200':
          description: Successful Response
          content:
            application/json:
              schema: { }
        '422':
          description: Validation Error
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/HTTPValidationError'
  /v1/convai/agent-testing/summaries:
    post:
      summary: Get Agent Response Test Summaries By Ids
      description: Gets multiple agent response tests by their IDs. Returns a dictionary mapping test IDs to test summaries.
      operationId: get_agent_response_tests_summaries_route
      parameters:
        - name: xi-api-key
          in: header
          description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
          schema:
            title: Xi-Api-Key
            type: string
            description: Your API key. This is required by most endpoints to access our API programatically. You can view your xi-api-key using the 'Profile' tab on the website.
            nullable: true
```
💡 Verification agent
🧩 Analysis chain
Inconsistent operationIds and suboptimal status codes for create/delete; consider security reuse and tagging
- operationId naming mixes "agent" and "chat": list and delete use list_chat_response_tests_route/delete_chat_response_test_route while others use agent_*; unify for SDKs.
- Use 201 for create and 204 for delete (no body). Current 200 + empty schema is less idiomatic.
- Header parameter xi-api-key is copy-pasted per operation and marked nullable: true though described as required; prefer a reusable components/parameters or securitySchemes entry and mark it required consistently.
- Methods lack tags while nearby paths are tagged; add tags for grouping and codegen.
Apply targeted diffs:

```diff
-      operationId: list_chat_response_tests_route
+      operationId: list_agent_response_tests_route
```

```diff
-      operationId: delete_chat_response_test_route
+      operationId: delete_agent_response_test_route
```

```diff
-        '200':
-          description: Successful Response
+        '201':
+          description: Created
```

```diff
-        '200':
-          description: Successful Response
-          content:
-            application/json:
-              schema: { }
+        '204':
+          description: No Content
```

Optionally add tags and reuse the API key header:

```diff
     post:
+      tags:
+        - conversational_ai
+        - agent_testing
```

Proposed shared header/security (outside this hunk):

```yaml
components:
  securitySchemes:
    XiApiKey:
      type: apiKey
      in: header
      name: xi-api-key
security:
  - XiApiKey: []
```

Then remove the per-endpoint xi-api-key parameters. Do you want me to generate a follow-up PR to consolidate these?
Run to spot remaining chat_* operationIds and missing tags:

🏁 Script executed:

```bash
#!/bin/bash
rg -n "operationId:\s*(get|list|delete|update|create)_chat_response_test"
rg -nP "(?s)^\s*/v1/convai/agent-testing.*?\n\s{2}(get|post|put|delete):\n(?!\s{4}tags:)"
```

Length of output: 350
🏁 Script executed:

```bash
#!/bin/bash
# 1. Check occurrences of xi-api-key parameters (to verify if 'required' is missing)
echo "== xi-api-key parameter usage =="
rg -n "name:\s*xi-api-key" src/libs/ElevenLabs/openapi.yaml -C3 | head -n 20

# 2. Look for a shared XiApiKey security scheme in components
echo
echo "== XiApiKey security scheme =="
rg -n "XiApiKey" src/libs/ElevenLabs/openapi.yaml

# 3. Locate any securitySchemes definitions under components
echo
echo "== components.securitySchemes definitions =="
rg -nP "^\s*securitySchemes:" src/libs/ElevenLabs/openapi.yaml -C5

# 4. Inspect tags usage across the file to see existing grouping
echo
echo "== First tags occurrences =="
rg -n "tags:" src/libs/ElevenLabs/openapi.yaml | head -n 30
```

Length of output: 2546
Consistent operationIds, RESTful status codes, and security reuse; add missing tags
• File: src/libs/ElevenLabs/openapi.yaml Lines 6542–6699
• Rename list_chat_response_tests_route → list_agent_response_tests_route and delete_chat_response_test_route → delete_agent_response_test_route for uniform agent_* naming.
• Change POST /v1/convai/agent-testing/create response from 200 → 201 (Created).
• Change DELETE /v1/convai/agent-testing/{test_id} response from 200 with empty body → 204 (No Content).
• Remove per-operation xi-api-key header parameters and rely on existing global components.securitySchemes.ApiKeyAuth + top-level security: block.
• Add tags under each operation (e.g. tags: [conversational_ai, agent_testing]) for grouping and codegen.
Proposed diffs:

```diff
--- a/src/libs/ElevenLabs/openapi.yaml
+++ b/src/libs/ElevenLabs/openapi.yaml
@@ -6545,8 +6545,12 @@
   /v1/convai/agent-testing/create:
-    post:
-      operationId: create_agent_response_test_route
+    post:
+      tags:
+        - conversational_ai
+        - agent_testing
+      operationId: create_agent_response_test_route
       summary: Create Agent Response Test
       description: Creates a new agent response test.
       requestBody:
@@ -6556,7 +6560,7 @@
       responses:
-        '200':
+        '201':
           description: Created
           content:
             application/json:
@@ -6660,8 +6664,12 @@
   '/v1/convai/agent-testing/{test_id}':
     delete:
-      operationId: delete_chat_response_test_route
+      tags:
+        - conversational_ai
+        - agent_testing
+      operationId: delete_agent_response_test_route
       summary: Delete Agent Response Test
       description: Deletes an agent response test by ID.
       responses:
@@ -6671,10 +6679,7 @@
-        '200':
-          description: Successful Response
-          content:
-            application/json:
-              schema: { }
+        '204':
+          description: No Content
         '422':
           description: Validation Error
           content:
@@ -6720,7 +6725,11 @@
   /v1/convai/agent-testing:
     get:
-      operationId: list_chat_response_tests_route
+      tags:
+        - conversational_ai
+        - agent_testing
+      operationId: list_agent_response_tests_route
       summary: List Agent Response Tests
       description: Retrieves a list of agent response tests.
       responses:
@@ -6745,6 +6754,10 @@
   /v1/convai/agent-testing/summaries:
     post:
+      tags:
+        - conversational_ai
+        - agent_testing
       operationId: get_agent_response_tests_summaries_route
       summary: Get Agent Response Test Summaries By Ids
       description: Gets multiple agent response tests by their IDs.
@@ -6545,17 +6558,4 @@
-      parameters:
-        - name: xi-api-key
-          in: header
-          description: Your API key…
-          schema:
-            type: string
-            nullable: true
-          …
```

(remove the xi-api-key block from each operation under `/v1/convai/agent-testing/*`)

These adjustments will:
- Ensure consistent SDK-friendly operation IDs
- Adopt idiomatic HTTP status codes
- Consolidate API-key handling via the existing `ApiKeyAuth` scheme (no new definitions required)
- Provide tags for grouping in generated clients
Let me know if you’d like me to prepare a follow-up PR with these refactors.
Committable suggestion skipped: line range outside the PR's diff.
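The operation-ID and tag conventions above can be checked mechanically before committing the refactor. A minimal sketch in plain Python over an already-parsed `paths` mapping — the helper name and the snake_case rule are illustrative, not part of any existing tooling:

```python
import re

def lint_operations(paths: dict) -> list[str]:
    """Flag operations that are missing tags or use non-snake_case operationIds."""
    snake = re.compile(r"^[a-z][a-z0-9_]*$")
    problems = []
    for path, ops in paths.items():
        for method, op in ops.items():
            if method not in {"get", "post", "put", "patch", "delete"}:
                continue  # skip path-level keys such as `parameters`
            if not op.get("tags"):
                problems.append(f"{method.upper()} {path}: missing tags")
            op_id = op.get("operationId", "")
            if not snake.match(op_id):
                problems.append(f"{method.upper()} {path}: operationId {op_id!r} is not snake_case")
    return problems
```

Running this against the `paths` section loaded from openapi.yaml (e.g. via PyYAML) would list every operation the suggested refactor needs to touch.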
        sid:
          title: Sid
          type: string
          description: Twilio Account SID
        token:
          title: Token
          type: string
          description: Twilio Auth Token
      CreateUnitTestRequest:
Sensitive Twilio token should be writeOnly
Avoid echoing secrets in generated SDKs and UIs.
        token:
          title: Token
          type: string
          description: Twilio Auth Token
+         writeOnly: true

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
        sid:
          title: Sid
          type: string
          description: Twilio Account SID
        token:
          title: Token
          type: string
          description: Twilio Auth Token
+         writeOnly: true
      CreateUnitTestRequest:
🤖 Prompt for AI Agents
In src/libs/ElevenLabs/openapi.yaml around lines 16146 to 16154, the Twilio
token property is currently exposed as a plain string; update the schema so the
token field is marked writeOnly: true (and optionally format: password) to
prevent it from being echoed in generated SDKs and UIs, and keep or adjust the
description to indicate it is sensitive and not returned in responses.
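To illustrate why `writeOnly: true` matters: a spec-aware server or SDK serializer drops such properties before returning a response, so the secret is accepted on input but never echoed. A hypothetical sketch of that filtering step (the helper name is illustrative):

```python
def strip_write_only(payload: dict, schema: dict) -> dict:
    """Return a copy of payload without properties the schema marks writeOnly."""
    props = schema.get("properties", {})
    return {key: value for key, value in payload.items()
            if not props.get(key, {}).get("writeOnly", False)}
```

With the suggested schema change, `token` would be removed from any response body while `sid` passes through unchanged.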
      title: CreateUnitTestRequest
      required:
        - chat_history
        - success_condition
        - success_examples
        - failure_examples
        - name
      type: object
      properties:
        chat_history:
          title: Chat History
          maxItems: 50
          minItems: 1
          type: array
          items:
            $ref: '#/components/schemas/ConversationHistoryTranscriptCommonModel-Input'
        success_condition:
          title: Success Condition
          type: string
          description: A prompt that evaluates whether the agent's response is successful. Should return True or False.
        success_examples:
          title: Success Examples
          maxItems: 5
          minItems: 0
          type: array
          items:
            $ref: '#/components/schemas/AgentSuccessfulResponseExample'
          description: Non-empty list of example responses that should be considered successful
        failure_examples:
          title: Failure Examples
          maxItems: 5
          minItems: 0
          type: array
          items:
            $ref: '#/components/schemas/AgentFailureResponseExample'
          description: Non-empty list of example responses that should be considered failures
        tool_call_parameters:
          $ref: '#/components/schemas/UnitTestToolCallEvaluationModel-Input'
        dynamic_variables:
          title: Dynamic Variables
          type: object
          additionalProperties:
            anyOf:
              - type: string
              - type: number
              - type: integer
              - type: boolean
            nullable: true
          description: Dynamic variables to replace in the agent config during testing
        name:
          title: Name
          type: string
      CreateUnitTestResponseModel:
💡 Verification agent
❓ Verification inconclusive
success_examples/failure_examples contradict minItems 0 vs “Non-empty list”

Descriptions say non-empty but minItems is 0. Enforce minItems: 1 or relax the description.

        success_examples:
          title: Success Examples
          maxItems: 5
-         minItems: 0
+         minItems: 1
        ...
        failure_examples:
          title: Failure Examples
          maxItems: 5
-         minItems: 0
+         minItems: 1

If zero is allowed, change descriptions to remove “Non-empty”.
Ensure consistency between array constraints and descriptions
The success_examples and failure_examples properties declare “Non-empty list…” in their descriptions but allow empty arrays via minItems: 0. Please update these to enforce minItems: 1 (or, if empty lists are intended, revise the descriptions to remove “Non-empty”).
Locations to update:
- src/libs/ElevenLabs/openapi.yaml — CreateUnitTestRequest.properties.success_examples
- src/libs/ElevenLabs/openapi.yaml — CreateUnitTestRequest.properties.failure_examples
Suggested diff:
success_examples:
title: Success Examples
maxItems: 5
- minItems: 0
+ minItems: 1
type: array
items:
$ref: '#/components/schemas/AgentSuccessfulResponseExample'
description: Non-empty list of example responses that should be considered successful
failure_examples:
title: Failure Examples
maxItems: 5
- minItems: 0
+ minItems: 1
type: array
items:
$ref: '#/components/schemas/AgentFailureResponseExample'
  description: Non-empty list of example responses that should be considered failures

📝 Committable suggestion
      title: CreateUnitTestRequest
      required:
        - chat_history
        - success_condition
        - success_examples
        - failure_examples
        - name
      type: object
      properties:
        chat_history:
          title: Chat History
          maxItems: 50
          minItems: 1
          type: array
          items:
            $ref: '#/components/schemas/ConversationHistoryTranscriptCommonModel-Input'
        success_condition:
          title: Success Condition
          type: string
          description: A prompt that evaluates whether the agent's response is successful. Should return True or False.
        success_examples:
          title: Success Examples
          maxItems: 5
          minItems: 1
          type: array
          items:
            $ref: '#/components/schemas/AgentSuccessfulResponseExample'
          description: Non-empty list of example responses that should be considered successful
        failure_examples:
          title: Failure Examples
          maxItems: 5
          minItems: 1
          type: array
          items:
            $ref: '#/components/schemas/AgentFailureResponseExample'
          description: Non-empty list of example responses that should be considered failures
        tool_call_parameters:
          $ref: '#/components/schemas/UnitTestToolCallEvaluationModel-Input'
        dynamic_variables:
          title: Dynamic Variables
          type: object
          additionalProperties:
            anyOf:
              - type: string
              - type: number
              - type: integer
              - type: boolean
            nullable: true
          description: Dynamic variables to replace in the agent config during testing
        name:
          title: Name
          type: string
      CreateUnitTestResponseModel:
🤖 Prompt for AI Agents
In src/libs/ElevenLabs/openapi.yaml around lines 16155 to 16207, the
success_examples and failure_examples descriptions say “Non-empty list…” but
both use minItems: 0; update each schema to set minItems: 1 to enforce non-empty
arrays (leave the descriptions unchanged), and ensure the OpenAPI file remains
valid after the change.
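Mismatches like this are easy to lint for across the whole spec before they reach generated SDKs. A rough sketch that walks an already-parsed spec and reports array schemas whose description claims "Non-empty" while `minItems` still permits an empty list — the function name and heuristic are illustrative, not existing tooling:

```python
def find_min_items_mismatches(schema, path=""):
    """Recursively collect paths of array schemas whose description says
    'Non-empty' but whose minItems (defaulting to 0) still allows []."""
    hits = []
    if isinstance(schema, dict):
        if (schema.get("type") == "array"
                and "non-empty" in schema.get("description", "").lower()
                and schema.get("minItems", 0) == 0):
            hits.append(path)
        for key, value in schema.items():
            hits.extend(find_min_items_mismatches(value, f"{path}/{key}"))
    elif isinstance(schema, list):
        for i, value in enumerate(schema):
            hits.extend(find_min_items_mismatches(value, f"{path}/{i}"))
    return hits
```

Pointed at `components.schemas` from openapi.yaml, this would surface all four offending properties (Create, Get, and Update models) in one pass.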
      title: GetUnitTestResponseModel
      required:
        - chat_history
        - success_condition
        - success_examples
        - failure_examples
        - id
        - name
      type: object
      properties:
        chat_history:
          title: Chat History
          maxItems: 50
          minItems: 1
          type: array
          items:
            $ref: '#/components/schemas/ConversationHistoryTranscriptCommonModel-Output'
        success_condition:
          title: Success Condition
          type: string
          description: A prompt that evaluates whether the agent's response is successful. Should return True or False.
        success_examples:
          title: Success Examples
          maxItems: 5
          minItems: 0
          type: array
          items:
            $ref: '#/components/schemas/AgentSuccessfulResponseExample'
          description: Non-empty list of example responses that should be considered successful
        failure_examples:
          title: Failure Examples
          maxItems: 5
          minItems: 0
          type: array
          items:
            $ref: '#/components/schemas/AgentFailureResponseExample'
          description: Non-empty list of example responses that should be considered failures
Same minItems inconsistency on the GET model

Descriptions say “Non-empty” but minItems is 0. Align with Create model.

        success_examples:
          title: Success Examples
          maxItems: 5
-         minItems: 0
+         minItems: 1
        ...
        failure_examples:
          title: Failure Examples
          maxItems: 5
-         minItems: 0
+         minItems: 1

📝 Committable suggestion
      title: GetUnitTestResponseModel
      required:
        - chat_history
        - success_condition
        - success_examples
        - failure_examples
        - id
        - name
      type: object
      properties:
        chat_history:
          title: Chat History
          maxItems: 50
          minItems: 1
          type: array
          items:
            $ref: '#/components/schemas/ConversationHistoryTranscriptCommonModel-Output'
        success_condition:
          title: Success Condition
          type: string
          description: A prompt that evaluates whether the agent's response is successful. Should return True or False.
        success_examples:
          title: Success Examples
          maxItems: 5
          minItems: 1
          type: array
          items:
            $ref: '#/components/schemas/AgentSuccessfulResponseExample'
          description: Non-empty list of example responses that should be considered successful
        failure_examples:
          title: Failure Examples
          maxItems: 5
          minItems: 1
          type: array
          items:
            $ref: '#/components/schemas/AgentFailureResponseExample'
          description: Non-empty list of example responses that should be considered failures
🤖 Prompt for AI Agents
In src/libs/ElevenLabs/openapi.yaml around lines 18600 to 18636, the GET
response schema lists success_examples and failure_examples with descriptions
stating "Non-empty list" but sets minItems: 0; update both
success_examples.minItems and failure_examples.minItems to 1 to align with the
description and the Create model (leave maxItems: 5 and other properties
unchanged).
      title: ListTestsByIdsRequestModel
      required:
        - test_ids
      type: object
      properties:
        test_ids:
          title: Test Ids
          type: array
          items:
            type: string
          description: List of test IDs to fetch. No duplicates allowed.
          example:
            - test_id_1
            - test_id_2
🛠️ Refactor suggestion
Enforce uniqueness of test_ids
The description says “No duplicates allowed” but schema doesn’t enforce it. Add uniqueItems: true.
        test_ids:
          title: Test Ids
          type: array
+         uniqueItems: true
          items:
            type: string

📝 Committable suggestion
      title: ListTestsByIdsRequestModel
      required:
        - test_ids
      type: object
      properties:
        test_ids:
          title: Test Ids
          type: array
          uniqueItems: true
          items:
            type: string
          description: List of test IDs to fetch. No duplicates allowed.
          example:
            - test_id_1
            - test_id_2
🤖 Prompt for AI Agents
In src/libs/ElevenLabs/openapi.yaml around lines 19555 to 19568 the
ListTestsByIdsRequestModel declares test_ids with description "No duplicates
allowed" but does not enforce it; add uniqueItems: true under the test_ids array
property (alongside type: array and items) to enforce uniqueness in the schema.
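For context, `uniqueItems: true` is what makes validators actually reject duplicates; without it the constraint lives only in the description. A small sketch of the check a validator performs for this keyword (the helper name is illustrative):

```python
def validate_unique_items(items: list) -> list:
    """Mirror JSON Schema `uniqueItems: true`: reject the list if any element repeats."""
    seen = set()
    for item in items:
        if item in seen:
            raise ValueError(f"duplicate item: {item!r}")
        seen.add(item)
    return items
```

With the keyword in place, a request like `["test_id_1", "test_id_1"]` fails schema validation instead of silently reaching the handler.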
      title: UpdateUnitTestRequest
      required:
        - chat_history
        - success_condition
        - success_examples
        - failure_examples
        - name
      type: object
      properties:
        chat_history:
          title: Chat History
          maxItems: 50
          minItems: 1
          type: array
          items:
            $ref: '#/components/schemas/ConversationHistoryTranscriptCommonModel-Input'
        success_condition:
          title: Success Condition
          type: string
          description: A prompt that evaluates whether the agent's response is successful. Should return True or False.
        success_examples:
          title: Success Examples
          maxItems: 5
          minItems: 0
          type: array
          items:
            $ref: '#/components/schemas/AgentSuccessfulResponseExample'
          description: Non-empty list of example responses that should be considered successful
        failure_examples:
          title: Failure Examples
          maxItems: 5
          minItems: 0
          type: array
          items:
            $ref: '#/components/schemas/AgentFailureResponseExample'
          description: Non-empty list of example responses that should be considered failures
        tool_call_parameters:
          $ref: '#/components/schemas/UnitTestToolCallEvaluationModel-Input'
        dynamic_variables:
          title: Dynamic Variables
          type: object
          additionalProperties:
            anyOf:
              - type: string
              - type: number
              - type: integer
              - type: boolean
            nullable: true
          description: Dynamic variables to replace in the agent config during testing
        name:
          title: Name
          type: string
      UpdateWorkspaceMemberResponseModel:
UpdateUnitTestRequest: same minItems inconsistency; align with Create
Mirror the fixes suggested for Create model.
        success_examples:
          title: Success Examples
          maxItems: 5
-         minItems: 0
+         minItems: 1
        ...
        failure_examples:
          title: Failure Examples
          maxItems: 5
-         minItems: 0
+         minItems: 1

📝 Committable suggestion
      title: UpdateUnitTestRequest
      required:
        - chat_history
        - success_condition
        - success_examples
        - failure_examples
        - name
      type: object
      properties:
        chat_history:
          title: Chat History
          maxItems: 50
          minItems: 1
          type: array
          items:
            $ref: '#/components/schemas/ConversationHistoryTranscriptCommonModel-Input'
        success_condition:
          title: Success Condition
          type: string
          description: A prompt that evaluates whether the agent's response is successful. Should return True or False.
        success_examples:
          title: Success Examples
          maxItems: 5
          minItems: 1
          type: array
          items:
            $ref: '#/components/schemas/AgentSuccessfulResponseExample'
          description: Non-empty list of example responses that should be considered successful
        failure_examples:
          title: Failure Examples
          maxItems: 5
          minItems: 1
          type: array
          items:
            $ref: '#/components/schemas/AgentFailureResponseExample'
          description: Non-empty list of example responses that should be considered failures
        tool_call_parameters:
          $ref: '#/components/schemas/UnitTestToolCallEvaluationModel-Input'
        dynamic_variables:
          title: Dynamic Variables
          type: object
          additionalProperties:
            anyOf:
              - type: string
              - type: number
              - type: integer
              - type: boolean
            nullable: true
          description: Dynamic variables to replace in the agent config during testing
        name:
          title: Name
          type: string
      UpdateWorkspaceMemberResponseModel:
🤖 Prompt for AI Agents
In src/libs/ElevenLabs/openapi.yaml around lines 24989 to 25041, the
UpdateUnitTestRequest schema has minItems: 0 for success_examples and
failure_examples while their descriptions state they should be non-empty; mirror
the Create model fix by changing those minItems to 1 (making them required to be
non-empty) and ensure the schema and description are consistent.
Summary by CodeRabbit