
feat: Add endpoints for voices filtering and dictionary creation from rules #65

Merged
HavenDV merged 1 commit into main from bot/update-openapi_202503261520
Mar 26, 2025

Conversation


@HavenDV (Contributor) commented Mar 26, 2025

Summary by CodeRabbit

  • New Features
    • Added an API endpoint that enables browsing available voices with advanced filtering, pagination, and sorting options.
    • Introduced an endpoint for creating a new pronunciation dictionary from defined rules.


coderabbitai bot commented Mar 26, 2025

Walkthrough

This changeset introduces two new endpoints into the ElevenLabs API. The /v2/voices endpoint supports searching, filtering, and pagination of available voices using various parameters. Additionally, the /v1/pronunciation-dictionaries/add-from-rules endpoint enables users to create new pronunciation dictionaries from specified rules. Corresponding methods have been added to the VoicesClient and PronunciationDictionaryClient classes to interface with these endpoints.

Changes

File: src/libs/ElevenLabs/openapi.yaml
  • Added /v2/voices endpoint with parameters for search, filter, and pagination.
  • Added /v1/pronunciation-dictionaries/add-from-rules endpoint for dictionary creation from rules.
  • Introduced get_voices_v2 in VoicesClient and add_pronunciation_dictionary_from_rules in PronunciationDictionaryClient.
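As a rough illustration of how a client might call the new GET /v2/voices endpoint, here is a minimal sketch that builds the request URL; the host and parameter names (search, page_size, sort, sort_direction, next_page_token) are assumptions based on the summary above, not the generated SDK's actual signatures.

```python
from urllib.parse import urlencode

BASE_URL = "https://api.elevenlabs.io"  # assumed host


def build_voices_v2_url(search=None, page_size=10, sort=None,
                        sort_direction=None, next_page_token=None):
    """Build a request URL for /v2/voices, dropping unset parameters."""
    params = {
        "search": search,
        "page_size": page_size,
        "sort": sort,
        "sort_direction": sort_direction,
        "next_page_token": next_page_token,
    }
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{BASE_URL}/v2/voices?{query}"
```

Filtering out `None` values keeps optional query parameters off the wire entirely instead of sending empty strings.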

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant VoicesClient
  participant ElevenLabsAPI
  Client->>VoicesClient: get_voices_v2(params)
  VoicesClient->>ElevenLabsAPI: Request /v2/voices with parameters
  ElevenLabsAPI-->>VoicesClient: Return voices list (paginated)
  VoicesClient-->>Client: Return voices data
sequenceDiagram
  participant Client
  participant PronunciationDictionaryClient
  participant ElevenLabsAPI
  Client->>PronunciationDictionaryClient: add_pronunciation_dictionary_from_rules(rules, name, ...)
  PronunciationDictionaryClient->>ElevenLabsAPI: Request /v1/pronunciation-dictionaries/add-from-rules with data
  ElevenLabsAPI-->>PronunciationDictionaryClient: Return dictionary creation status
  PronunciationDictionaryClient-->>Client: Return operation result


Poem

I'm a happy rabbit on a coding spree,
Hopping through endpoints so merrily.
Voices echo with a rhythmic chime,
And dictionaries grow, rule by rule in time.
With swift leaps in logic and code so neat,
I cherish these changes with a joyful beat!
🐇💻



@HavenDV merged commit 495a833 into main Mar 26, 2025
2 of 4 checks passed
@HavenDV deleted the bot/update-openapi_202503261520 branch March 26, 2025 15:21
@coderabbitai bot changed the title from "feat:@coderabbitai" to "feat: Add endpoints for voices filtering and dictionary creation from rules" Mar 26, 2025

@coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (8)
src/libs/ElevenLabs/openapi.yaml (8)

1281-1391: New /v2/voices Endpoint Addition
The newly added GET endpoint for /v2/voices is well structured—it offers comprehensive query parameters for pagination, searching, filtering, and sorting. One minor point: the operationId is defined as Get_voices_v2_v2_voices_get; the repeated v2 might be accidental. Please verify that this naming is intentional.


5001-5027: Sorting Parameters for Dictionary Retrieval
The query parameters for sort and sort_direction are clearly defined. However, the sort_direction parameter has a default value (DESCENDING) that’s in uppercase while the example uses lowercase (descending). For consistency and to avoid potential case-sensitivity issues, consider aligning these values.
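Until the default/example casing is aligned in the spec, a defensive client can normalize the value before sending it; the accepted values below are an assumption drawn from this review comment, not the published enum.

```python
# Assumed accepted values, per the review comment on sort_direction.
ALLOWED_SORT_DIRECTIONS = {"ASCENDING", "DESCENDING"}


def normalize_sort_direction(value: str) -> str:
    """Uppercase and validate a sort_direction value before sending it."""
    upper = value.strip().upper()
    if upper not in ALLOWED_SORT_DIRECTIONS:
        raise ValueError(f"unsupported sort_direction: {value!r}")
    return upper
```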


7240-7249: Example Update in Dictionary Model
The schema example now includes the version_rules_num value. Please verify that the example is realistic and adheres to the expected data format for clients.


7782-7812: Enhanced Request Schema for Adding a Pronunciation Dictionary
The request body now requires both rules and name, with an expanded description for the rules field and an example demonstrating both alias and phoneme rules. Consider verifying that the multiline example is correctly processed by client tools as a JSON array rather than a formatted string.
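To make the shape concrete, here is a hypothetical JSON payload for POST /v1/pronunciation-dictionaries/add-from-rules with one alias rule and one phoneme rule; the rule field names (string_to_replace, alias, phoneme, alphabet) follow common pronunciation-dictionary conventions and are assumptions, not quoted from the schema.

```python
import json

# Illustrative request body; field names are assumptions.
payload = {
    "name": "demo-dictionary",
    "rules": [
        {"type": "alias", "string_to_replace": "ASAP",
         "alias": "as soon as possible"},
        {"type": "phoneme", "string_to_replace": "tomato",
         "phoneme": "/t\u0259\u02c8me\u026ato\u028a/", "alphabet": "ipa"},
    ],
}
body = json.dumps(payload)  # rules serialize as a JSON array, not a string
```

Serializing with json.dumps avoids the multiline-string pitfall the comment above warns about: the rules field stays a JSON array rather than a formatted string.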


14255-14277: Project Snapshot Response Model Update
The model now marks fields like audio_upload and zip_upload as deprecated. It would be beneficial to include explicit deprecation guidance in your documentation to help clients transition away from these fields.


14300-14317: Project Snapshots Response Model Enhancement
Similar to the previous hunk, the ProjectSnapshotsResponseModel continues to mark deprecated fields. Reinforcing deprecation notices in documentation is recommended.


14328-14336: PromptAgent Input Schema Revision
There appears to be a subtle formatting/indentation boundary between an example for project snapshots and the start of the PromptAgent-Input schema. Please verify that the YAML indentation accurately separates these sections to avoid parsing issues.


14873-14891: SIPTrunkCredentials Schema Update
The SIPTrunkCredentials schema now requires both username and password and includes descriptive metadata. Consider marking these fields with a writeOnly flag for enhanced security if they are not meant to be returned in responses.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a7388c4 and c2cf136.

⛔ Files ignored due to path filters (97)
  • src/libs/ElevenLabs/Generated/ElevenLabs.ConversationalAIClient.CreateConvaiKnowledgeBaseByDocumentationIdRagIndex.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ConversationalAIClient.CreateConvaiPhoneNumbersCreate.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ConversationalAIClient.CreateConvaiTwilioOutboundCall.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.DubbingClient.CreateDubbing.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IConversationalAIClient.CreateConvaiKnowledgeBaseByDocumentationIdRagIndex.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IConversationalAIClient.CreateConvaiPhoneNumbersCreate.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IConversationalAIClient.CreateConvaiTwilioOutboundCall.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IDubbingClient.CreateDubbing.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IPronunciationDictionaryClient.CreatePronunciationDictionariesAddFromRules.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IPronunciationDictionaryClient.GetPronunciationDictionaries.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ITextToSpeechClient.CreateTextToSpeechByVoiceId.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ITextToSpeechClient.CreateTextToSpeechByVoiceIdStream.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ITextToSpeechClient.CreateTextToSpeechByVoiceIdStreamWithTimestamps.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.ITextToSpeechClient.CreateTextToSpeechByVoiceIdWithTimestamps.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IVoicesClient.GetV2Voices.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IWorkspaceClient.CreateWorkspaceResourcesByResourceIdShare.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.IWorkspaceClient.CreateWorkspaceResourcesByResourceIdUnshare.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.AddPronunciationDictionaryResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.AddPronunciationDictionaryRulesResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPost.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPost.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyDubAVideoOrAnAudioFileV1DubbingPost.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyHandleAnOutboundCallViaTwilioV1ConvaiTwilioOutboundCallPost.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyHandleAnOutboundCallViaTwilioV1ConvaiTwilioOutboundCallPost.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyShareWorkspaceResourceV1WorkspaceResourcesResourceIdSharePost.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyTextToSpeechStreamingV1TextToSpeechVoiceIdStreamPost.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyTextToSpeechStreamingV1TextToSpeechVoiceIdStreamPostApplyTextNormalization.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyTextToSpeechStreamingWithTimestampsV1TextToSpeechVoiceIdStreamWithTimestampsPost.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyTextToSpeechStreamingWithTimestampsV1TextToSpeechVoiceIdStreamWithTimestampsPostApplyTextNormalization.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyTextToSpeechV1TextToSpeechVoiceIdPost.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyTextToSpeechV1TextToSpeechVoiceIdPostApplyTextNormalization.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyTextToSpeechWithTimestampsV1TextToSpeechVoiceIdWithTimestampsPost.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyTextToSpeechWithTimestampsV1TextToSpeechVoiceIdWithTimestampsPostApplyTextNormalization.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.BodyUnshareWorkspaceResourceV1WorkspaceResourcesResourceIdUnsharePost.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ConversationInitiationClientDataRequestInput.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ConversationInitiationClientDataRequestInput.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ConversationInitiationClientDataRequestInputCustomLlmExtraBody.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ConversationInitiationClientDataRequestInputCustomLlmExtraBody.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ConversationInitiationClientDataRequestInputDynamicVariables.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ConversationInitiationClientDataRequestInputDynamicVariables.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ConversationInitiationClientDataRequestOutput.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ConversationInitiationClientDataRequestOutput.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ConversationInitiationClientDataRequestOutputCustomLlmExtraBody.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ConversationInitiationClientDataRequestOutputCustomLlmExtraBody.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ConversationInitiationClientDataRequestOutputDynamicVariables.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ConversationInitiationClientDataRequestOutputDynamicVariables.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.CreateSIPTrunkPhoneNumberRequest.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.CreateSIPTrunkPhoneNumberRequest.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.CreateTwilioPhoneNumberRequest.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.CreateTwilioPhoneNumberRequest.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.DependentPhoneNumberIdentifier.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.EmbeddingModelEnum.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetConversationResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetPhoneNumberResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetPronunciationDictionariesV1PronunciationDictionariesGetSort.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetPronunciationDictionaryMetadataResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetVoicesV2ResponseModel.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.GetVoicesV2ResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ProjectSnapshotExtendedResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ProjectSnapshotExtendedResponseModelAudioUpload.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ProjectSnapshotExtendedResponseModelAudioUpload.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ProjectSnapshotExtendedResponseModelZipUpload.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ProjectSnapshotExtendedResponseModelZipUpload.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ProjectSnapshotResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ProjectSnapshotResponseModelAudioUpload.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ProjectSnapshotResponseModelAudioUpload.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ProjectSnapshotResponseModelZipUpload.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ProjectSnapshotResponseModelZipUpload.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ProjectSnapshotUploadResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.ProjectSnapshotUploadResponseModelStatus.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.PromptAgentInput.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.PromptAgentOutput.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.PronunciationDictionaryVersionLocatorDBModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.PronunciationDictionaryVersionResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.PydanticPronunciationDictionaryVersionLocator.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.RemovePronunciationDictionaryRulesResponseModel.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.SIPTrunkCredentials.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.SIPTrunkCredentials.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.TelephonyProvider.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.TwilioOutboundCallResponse.Json.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.Models.TwilioOutboundCallResponse.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.PronunciationDictionaryClient.CreatePronunciationDictionariesAddFromRules.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.PronunciationDictionaryClient.GetPronunciationDictionaries.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.TextToSpeechClient.CreateTextToSpeechByVoiceId.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.TextToSpeechClient.CreateTextToSpeechByVoiceIdStream.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.TextToSpeechClient.CreateTextToSpeechByVoiceIdStreamWithTimestamps.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.TextToSpeechClient.CreateTextToSpeechByVoiceIdWithTimestamps.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.VoicesClient.GetV2Voices.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.WorkspaceClient.CreateWorkspaceResourcesByResourceIdShare.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/ElevenLabs.WorkspaceClient.CreateWorkspaceResourcesByResourceIdUnshare.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/JsonConverters.BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/JsonConverters.BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccessNullable.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/JsonConverters.GetPronunciationDictionariesV1PronunciationDictionariesGetSort.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/JsonConverters.GetPronunciationDictionariesV1PronunciationDictionariesGetSortNullable.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/JsonSerializerContext.g.cs is excluded by !**/generated/**
  • src/libs/ElevenLabs/Generated/JsonSerializerContextTypes.g.cs is excluded by !**/generated/**
📒 Files selected for processing (1)
  • src/libs/ElevenLabs/openapi.yaml (44 hunks)
🔇 Additional comments (32)
src/libs/ElevenLabs/openapi.yaml (32)

4751-4791: New Pronunciation Dictionary (Add-From-Rules) Endpoint
This hunk introduces the /v1/pronunciation-dictionaries/add-from-rules POST endpoint with clear request body and response definitions. Note that just before this hunk, the SDK method name was set as add_from_file—please ensure that the new add_from_rules identifier does not conflict with any legacy behavior and that clients are updated accordingly.


5361-5367: Share Workspace Resource Endpoint Update
The updated endpoint for sharing a workspace resource now includes a detailed description that clearly explains its behavior. No issues noted here.


5403-5409: Unshare Workspace Resource Endpoint Update
The endpoint’s description now clearly outlines how permissions are removed from a resource, including the restriction on removing permissions from the resource’s creator. This change is clear and well documented.


5557-5597: Twilio Outbound Call Handling for Conversational AI
The new POST endpoint /v1/convai/twilio/outbound_call is defined with the necessary tags, summary, and parameter details. The request body references an external schema, so please verify that the referenced schema (Body_Handle_an_outbound_call_via_Twilio_v1_convai_twilio_outbound_call_post) is valid and up to date.


6225-6231: Import Phone Number Endpoint
The endpoint for importing a phone number is introduced with a concise summary and description. Ensure that the parameter names (such as the header xi-api-key) maintain consistency with other endpoints in the API.


6240-6249: Phone Request Schema Enhancement
The new Phone Request schema now accommodates two variants via anyOf—one for a Twilio phone number request and another for a SIP trunk phone number request. This flexible design is well executed.


7185-7191: Enhanced Dictionary Model: Inclusion of version_rules_num
The updated model now requires a version_rules_num property. Please confirm that all backend and client components are updated to supply and process this new field without breaking backward compatibility.
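One way a consumer can stay tolerant during the transition is to treat version_rules_num as optional when parsing older responses; this is a client-side sketch, not the generated SDK's behavior, and the field access pattern is an assumption.

```python
def parse_dictionary(model: dict) -> dict:
    """Parse a dictionary response, tolerating a missing version_rules_num."""
    return {
        "id": model["id"],
        # Older responses may omit the newly required field; default to None
        # rather than raising a KeyError.
        "version_rules_num": model.get("version_rules_num"),
    }
```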


7208-7217: Pronunciation Dictionary Model Enhancement – Additional Property
The addition of the version_rules_num field—complete with a title and description—improves the model’s completeness.


7229-7235: Reinforcing Required Fields in the Dictionary Model
The requirement for version_rules_num is now explicitly added to the model’s required list. This reinforces data consistency, assuming all producers of this data can supply the field.


8796-8819: File Upload Properties for Audio Processing
New properties for file uploads—such as csv_file, foreground_audio_file, and background_audio_file—are introduced with appropriate descriptions and the binary format. These additions look solid.


8879-8890: Dubbing Mode Specification in Dubbing Endpoint
The addition of the mode field, which indicates whether dubbing runs in automatic or manual mode, is clear and includes a sensible default.


9083-9105: Twilio Outbound Call Request Body Model
The new body model for handling outbound calls includes critical fields (agent_id, agent_phone_number_id, and to_number) and a reference to additional conversation initiation data. Please verify that the schema referenced by ConversationInitiationClientDataRequest-Input is properly defined elsewhere in the document.


9225-9233: Sound Generation Request Update – Workspace API Key
The addition of the workspace_api_key_id field to the request parameters is consistent with the strategy of allowing workspace-scoped API interactions.


9808-9816: Pronunciation Dictionary Update Request – Workspace API Key
The update adds the workspace_api_key_id field in the context of updating pronunciation dictionaries within a project. This aligns with similar changes elsewhere in the API.


10862-10887: Conversation Initiation Client Data Schemas
New schemas for ConversationInitiationClientDataRequest-Input and -Output are introduced with flexible definitions for dynamic variables. Ensure that the use of anyOf for the dynamic_variables field is fully supported by your OpenAPI tooling and correctly interpreted by client generators.


11071-11127: Phone and Pronunciation Dictionary Models Update
This block standardizes several models for phone number creation and pronunciation dictionary creation. The update, including required properties and clear descriptions for each field, is well executed.


11669-11675: New Enum Value for Evaluation Settings
The new enum value multilingual_e5_large_instruct is added alongside existing values. Please verify that your backend supports this new value and that the change is reflected throughout any dependent systems.


12115-12121: Conversations Page Response Enhancement
The reference to ConversationInitiationClientDataRequest-Output in the GetConversationsPageResponseModel is a useful addition for conveying extra client data via conversations.


12496-12504: Enhanced Required Fields in Dictionary Metadata
New required fields in the pronunciation dictionary model, such as latest_version_rules_num and archived_time_unix, have been added. It is important to ensure that API consumers are notified of these additional requirements.


12511-12520: Updated Dictionary Metadata – Latest Version Details
The schema clearly documents the latest version ID and rules number. This consistency helps maintain clarity across different parts of the API.


12527-12535: Archived Time Field Addition
The archived_time_unix field has been added to the dictionary model (with nullable: true). This change appears well justified for handling dictionaries that haven’t been archived.


12710-12736: Pagination Info in GetVoicesV2ResponseModel
The inclusion of next_page_token in the response model enhances pagination capabilities. Its definition (including nullable: true) is clear.
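The nullable token implies the usual cursor-pagination loop: keep requesting with the returned token until it comes back null. The sketch below stubs out the API call; only the next_page_token field name comes from the response model, everything else is hypothetical.

```python
def fetch_page(token=None, _pages={None: (["a", "b"], "t1"),
                                   "t1": (["c"], None)}):
    """Stub standing in for a real GET /v2/voices call."""
    voices, next_token = _pages[token]
    return {"voices": voices, "next_page_token": next_token}


def list_all_voices():
    """Walk all pages by following next_page_token until it is null."""
    voices, token = [], None
    while True:
        page = fetch_page(token)
        voices.extend(page["voices"])
        token = page["next_page_token"]
        if token is None:  # nullable token signals the last page
            return voices
```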


14376-14386: PromptAgent Output Schema Update
The addition of the ignore_default_personality flag—with a default value of false and marked as nullable—enhances control over personality settings. This update is clear and useful.


14427-14437: PromptAgent Override Schema Consistency
The modifications in the PromptAgentOverride schema mirror those in the PromptAgent output schema by adding the ignore_default_personality flag. This consistency is appreciated.


14549-14557: Pronunciation Dictionary Version Response Model Enhancement
Including version_rules_num as a required field—and allowing version_id to be nullable—improves model clarity. Ensure that clients are updated to accommodate this change.


14565-14573: Version Details in Dictionary Response
The definitions for version_id and version_rules_num are now clearly provided. This update should help maintain consistency in version tracking across the API.


14583-14590: Locator Model Update for Pronunciation Dictionaries
The PydanticPronunciationDictionaryVersionLocator now marks archived_time_unix as nullable, which is appropriate for dictionaries that have not yet been archived.


14600-14605: Query Parameters JSON Schema Adjustment
Allowing the version_id to be nullable in the QueryParamsJsonSchema provides greater flexibility when referencing documents. This enhancement is well executed.


14778-14784: Including version_rules_num in Model Requirements
The object now explicitly requires version_rules_num alongside other identifiers. This detail reinforces the versioning model and improves consistency.


14789-14798: Enhanced Property Definitions and Example Values
The updated definitions for version_id and version_rules_num—along with the accompanying example—are clear. Ensure that the example values align with real-world responses from the service.


15884-15889: TelephonyProvider Enum Expansion
The TelephonyProvider enum now includes sip_trunk alongside twilio, which appropriately reflects the API’s expanded provider support.


15902-15924: TwilioOutboundCallResponse and URLAvatar Models
The new TwilioOutboundCallResponse model is detailed—with required fields like success, message, and callSid—and the callSid is marked as nullable. Additionally, the URLAvatar model is defined appropriately. Please verify that these definitions match the actual responses from Twilio’s API.

Comment on lines 9578 to 9584
- on
- off
type: string
description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' model.'
description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' or ''eleven_flash_v2_5'' models.'
default: auto
example: 'true'
Body_Text_to_speech_v1_text_to_speech__voice_id__post:

⚠️ Potential issue

Text Normalization in TTS
Similar to the previous hunk, the example here uses "true" even though the allowed values are strings ('auto', 'on', 'off'). This needs to be corrected to avoid confusion among API consumers.
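A hedged sketch of the corrected fragment (the parameter name `apply_text_normalization` is assumed, since it is not shown in the quoted hunk). Note that `on` and `off` should stay quoted: bare `on`/`off` are coerced to booleans by YAML 1.1 parsers.

```yaml
apply_text_normalization:   # parameter name assumed
  type: string
  enum:
    - auto
    - 'on'    # quoted so YAML does not coerce it to boolean true
    - 'off'   # quoted so YAML does not coerce it to boolean false
  default: auto
  example: auto   # one of the allowed values, replacing 'true'
```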

Comment on lines 9755 to 9761
- on
- off
type: string
description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' model.'
description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' or ''eleven_flash_v2_5'' models.'
default: auto
example: 'true'
Body_Transcribes_segments_v1_dubbing_resource__dubbing_id__transcribe_post:

⚠️ Potential issue

Text Normalization in Dubbing Transcription
The same inconsistency persists—the example provided is "true" rather than a valid string option such as "auto". This discrepancy should be addressed.

Comment on lines 9488 to 9494
- on
- off
type: string
description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' model.'
description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' or ''eleven_flash_v2_5'' models.'
default: auto
example: 'true'
Body_Text_to_speech_streaming_with_timestamps_v1_text_to_speech__voice_id__stream_with_timestamps_post:

⚠️ Potential issue

Text Normalization in TTS Streaming
The description for text normalization clearly states that the valid values are 'auto', 'on', or 'off'. However, the example provided is "true", which is inconsistent with these options. Please update the example to use one of the allowed values (e.g., "auto").

Comment on lines 9668 to 9674
- on
- off
type: string
description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' model.'
description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' or ''eleven_flash_v2_5'' models.'
default: auto
example: 'true'
Body_Text_to_speech_with_timestamps_v1_text_to_speech__voice_id__with_timestamps_post:

⚠️ Potential issue

Text Normalization in TTS with Timestamps
Again, the example value is "true" while the parameter expects one of 'auto', 'on', or 'off'. Please align the example value with the documented allowed values.
