feat: Add endpoints for voices filtering and dictionary creation from rules #65
Walkthrough
This changeset introduces two new endpoints into the ElevenLabs API: GET `/v2/voices` for filtered, paginated voice listing, and POST `/v1/pronunciation-dictionaries/add-from-rules` for creating a pronunciation dictionary from explicit rules.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant VoicesClient
    participant ElevenLabsAPI
    Client->>VoicesClient: get_voices_v2(params)
    VoicesClient->>ElevenLabsAPI: Request /v2/voices with parameters
    ElevenLabsAPI-->>VoicesClient: Return voices list (paginated)
    VoicesClient-->>Client: Return voices data
```

```mermaid
sequenceDiagram
    participant Client
    participant PronunciationDictionaryClient
    participant ElevenLabsAPI
    Client->>PronunciationDictionaryClient: add_pronunciation_dictionary_from_rules(rules, name, ...)
    PronunciationDictionaryClient->>ElevenLabsAPI: Request /v1/pronunciation-dictionaries/add-from-rules with data
    ElevenLabsAPI-->>PronunciationDictionaryClient: Return dictionary creation status
    PronunciationDictionaryClient-->>Client: Return operation result
```
Actionable comments posted: 4
🧹 Nitpick comments (8)
src/libs/ElevenLabs/openapi.yaml (8)
1281-1391: New `/v2/voices` Endpoint Addition
The newly added GET endpoint for `/v2/voices` is well structured; it offers comprehensive query parameters for pagination, searching, filtering, and sorting. One minor point: the `operationId` is defined as `Get_voices_v2_v2_voices_get`; the repeated `v2` might be accidental. Please verify that this naming is intentional.
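To make the new listing endpoint concrete, here is a minimal sketch of building a `/v2/voices` request URL. The parameter names (`page_size`, `search`, `sort`, `sort_direction`) are assumptions based on the review's description of pagination, searching, and sorting, not confirmed spec names:

```python
from urllib.parse import urlencode, urlsplit, parse_qs

BASE_URL = "https://api.elevenlabs.io"

def build_voices_v2_url(**params):
    """Build a GET /v2/voices URL, dropping parameters left as None."""
    query = {k: v for k, v in params.items() if v is not None}
    return f"{BASE_URL}/v2/voices" + (f"?{urlencode(query)}" if query else "")

url = build_voices_v2_url(page_size=10, search="narration",
                          sort="name", sort_direction="asc")
```

Unset parameters are dropped so the server's defaults apply rather than sending empty strings.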
5001-5027: Sorting Parameters for Dictionary Retrieval
The query parameters for `sort` and `sort_direction` are clearly defined. However, the `sort_direction` parameter has a default value (`DESCENDING`) that's in uppercase while the example uses lowercase (`descending`). For consistency and to avoid potential case-sensitivity issues, consider aligning these values.
7240-7249: Example Update in Dictionary Model
The schema example now includes the `version_rules_num` value. Please verify that the example is realistic and adheres to the expected data format for clients.
7782-7812: Enhanced Request Schema for Adding a Pronunciation Dictionary
The request body now requires both `rules` and `name`, with an expanded description for the `rules` field and an example demonstrating both alias and phoneme rules. Consider verifying that the multiline example is correctly processed by client tools as a JSON array rather than a formatted string.
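As a sanity check that `rules` travels as a JSON array rather than a pre-formatted string, a minimal request body might be serialized like this. The per-rule field names (`string_to_replace`, `type`, `alias`, `phoneme`, `alphabet`) follow the alias/phoneme rule shapes the example describes, but treat them as assumptions rather than confirmed spec names:

```python
import json

body = {
    "name": "my-dictionary",
    "rules": [
        # Alias rule: replace one spelling with another.
        {"string_to_replace": "UN", "type": "alias", "alias": "United Nations"},
        # Phoneme rule: force a specific pronunciation.
        {"string_to_replace": "tomato", "type": "phoneme",
         "phoneme": "/təˈmeɪtoʊ/", "alphabet": "ipa"},
    ],
}

payload = json.dumps(body)
```

Round-tripping the payload confirms `rules` deserializes as a list of objects, which is what client generators should expect.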
14255-14277: Project Snapshot Response Model Update
The model now marks fields like `audio_upload` and `zip_upload` as deprecated. It would be beneficial to include explicit deprecation guidance in your documentation to help clients transition away from these fields.
14300-14317: Project Snapshots Response Model Enhancement
Similar to the previous hunk, the `ProjectSnapshotsResponseModel` continues to mark deprecated fields. Reinforcing deprecation notices in documentation is recommended.
14328-14336: PromptAgent Input Schema Revision
There appears to be a subtle formatting/indentation boundary between an example for project snapshots and the start of the `PromptAgent-Input` schema. Please verify that the YAML indentation accurately separates these sections to avoid parsing issues.
14873-14891: SIPTrunkCredentials Schema Update
The `SIPTrunkCredentials` schema now requires both `username` and `password` and includes descriptive metadata. Consider marking these fields with a `writeOnly` flag for enhanced security if they are not meant to be returned in responses.
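If the `writeOnly` suggestion is adopted, the schema change might look like the following sketch (a hypothetical fragment for illustration, not the actual spec text):

```yaml
SIPTrunkCredentials:
  type: object
  required:
    - username
    - password
  properties:
    username:
      type: string
      description: SIP trunk username.
      writeOnly: true   # never echoed back in responses
    password:
      type: string
      description: SIP trunk password.
      writeOnly: true
```

`writeOnly: true` tells tooling the field may appear in requests but must not be serialized into responses.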
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (97)
All 97 excluded files are generated C# clients and models under `src/libs/ElevenLabs/Generated/` (`*.g.cs`), each excluded by the `!**/generated/**` path filter.
📒 Files selected for processing (1)
src/libs/ElevenLabs/openapi.yaml (44 hunks)
🔇 Additional comments (32)
src/libs/ElevenLabs/openapi.yaml (32)
4751-4791: New Pronunciation Dictionary (Add-From-Rules) Endpoint
This hunk introduces the `/v1/pronunciation-dictionaries/add-from-rules` POST endpoint with clear request body and response definitions. Note that just before this hunk, the SDK method name was set as `add_from_file`; please ensure that the new `add_from_rules` identifier does not conflict with any legacy behavior and that clients are updated accordingly.
5361-5367: Share Workspace Resource Endpoint Update
The updated endpoint for sharing a workspace resource now includes a detailed description that clearly explains its behavior. No issues noted here.
5403-5409: Unshare Workspace Resource Endpoint Update
The endpoint’s description now clearly outlines how permissions are removed from a resource, including the restriction on removing permissions from the resource’s creator. This change is clear and well documented.
5557-5597: Twilio Outbound Call Handling for Conversational AI
The new POST endpoint `/v1/convai/twilio/outbound_call` is defined with the necessary tags, summary, and parameter details. The request body references an external schema, so please verify that the referenced schema (`Body_Handle_an_outbound_call_via_Twilio_v1_convai_twilio_outbound_call_post`) is valid and up to date.
6225-6231: Import Phone Number Endpoint
The endpoint for importing a phone number is introduced with a concise summary and description. Ensure that the parameter names (such as the header `xi-api-key`) maintain consistency with other endpoints in the API.
6240-6249: Phone Request Schema Enhancement
The new `Phone Request` schema now accommodates two variants via `anyOf`: one for a Twilio phone number request and another for a SIP trunk phone number request. This flexible design is well executed.
7185-7191: Enhanced Dictionary Model: Inclusion of `version_rules_num`
The updated model now requires a `version_rules_num` property. Please confirm that all backend and client components are updated to supply and process this new field without breaking backward compatibility.
7208-7217: Pronunciation Dictionary Model Enhancement – Additional Property
The addition of the `version_rules_num` field, complete with a title and description, improves the model's completeness.
7229-7235: Reinforcing Required Fields in the Dictionary Model
The requirement for `version_rules_num` is now explicitly added to the model's required list. This reinforces data consistency, assuming all producers of this data can supply the field.
8796-8819: File Upload Properties for Audio Processing
New properties for file uploads, such as `csv_file`, `foreground_audio_file`, and `background_audio_file`, are introduced with appropriate descriptions and the binary format. These additions look solid.
8879-8890: Dubbing Mode Specification in Dubbing Endpoint
The addition of the `mode` field, which indicates whether dubbing runs in automatic or manual mode, is clear and includes a sensible default.
9083-9105: Twilio Outbound Call Request Body Model
The new body model for handling outbound calls includes critical fields (`agent_id`, `agent_phone_number_id`, and `to_number`) and a reference to additional conversation initiation data. Please verify that the schema referenced by `ConversationInitiationClientDataRequest-Input` is properly defined elsewhere in the document.
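A quick sketch of assembling the request body described here, using only the field names the model lists (`agent_id`, `agent_phone_number_id`, `to_number`); the example ID values are made up:

```python
import json

REQUIRED_FIELDS = ("agent_id", "agent_phone_number_id", "to_number")

def make_outbound_call_body(agent_id, agent_phone_number_id, to_number,
                            conversation_initiation_client_data=None):
    """Assemble a POST /v1/convai/twilio/outbound_call body dict."""
    body = {
        "agent_id": agent_id,
        "agent_phone_number_id": agent_phone_number_id,
        "to_number": to_number,
    }
    # The extra conversation initiation data is optional per the review.
    if conversation_initiation_client_data is not None:
        body["conversation_initiation_client_data"] = conversation_initiation_client_data
    return body

body = make_outbound_call_body("agent_123", "phnum_456", "+15555550123")
payload = json.dumps(body)
```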
9225-9233: Sound Generation Request Update – Workspace API Key
The addition of the `workspace_api_key_id` field to the request parameters is consistent with the strategy of allowing workspace-scoped API interactions.
9808-9816: Pronunciation Dictionary Update Request – Workspace API Key
The update adds the `workspace_api_key_id` field in the context of updating pronunciation dictionaries within a project. This aligns with similar changes elsewhere in the API.
10862-10887: Conversation Initiation Client Data Schemas
New schemas for `ConversationInitiationClientDataRequest-Input` and `-Output` are introduced with flexible definitions for dynamic variables. Ensure that the use of `anyOf` for the `dynamic_variables` field is fully supported by your OpenAPI tooling and correctly interpreted by client generators.
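Since `dynamic_variables` is typed with `anyOf`, a defensive client-side check like the sketch below can catch values a generator might mishandle. The accepted primitive types (string/number/boolean) are an assumption about what the `anyOf` covers:

```python
ALLOWED_TYPES = (str, int, float, bool)

def check_dynamic_variables(dynamic_variables):
    """Verify every dynamic variable is a primitive the anyOf plausibly allows."""
    for name, value in dynamic_variables.items():
        if not isinstance(value, ALLOWED_TYPES):
            raise TypeError(
                f"dynamic variable {name!r} has unsupported type "
                f"{type(value).__name__}")
    return dynamic_variables

checked = check_dynamic_variables({"user_name": "Ada", "retries": 3, "is_vip": True})
```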
11071-11127: Phone and Pronunciation Dictionary Models Update
This block standardizes several models for phone number creation and pronunciation dictionary creation. The update, including required properties and clear descriptions for each field, is well executed.
11669-11675: New Enum Value for Evaluation Settings
The new enum value `multilingual_e5_large_instruct` is added alongside existing values. Please verify that your backend supports this new value and that the change is reflected throughout any dependent systems.
12115-12121: Conversations Page Response Enhancement
The reference to `ConversationInitiationClientDataRequest-Output` in the `GetConversationsPageResponseModel` is a useful addition for conveying extra client data via conversations.
12496-12504: Enhanced Required Fields in Dictionary Metadata
New required fields in the pronunciation dictionary model, such as `latest_version_rules_num` and `archived_time_unix`, have been added. It is important to ensure that API consumers are notified of these additional requirements.
12511-12520: Updated Dictionary Metadata – Latest Version Details
The schema clearly documents the latest version ID and rules number. This consistency helps maintain clarity across different parts of the API.
12527-12535: Archived Time Field Addition
The `archived_time_unix` field has been added to the dictionary model (with `nullable: true`). This change appears well justified for handling dictionaries that haven't been archived.
12710-12736: Pagination Info in GetVoicesV2ResponseModel
The inclusion of `next_page_token` in the response model enhances pagination capabilities. Its definition (including `nullable: true`) is clear.
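With `next_page_token` in `GetVoicesV2ResponseModel`, client code can page through results until the token comes back null. A sketch under that assumption (the response dict shape is taken from the model name; real SDK method names will differ):

```python
def iter_all_voices(fetch_page):
    """Yield every voice across pages.

    `fetch_page(page_token)` must return a dict shaped like
    GetVoicesV2ResponseModel: {"voices": [...], "next_page_token": str | None}.
    """
    token = None
    while True:
        page = fetch_page(token)
        yield from page["voices"]
        token = page.get("next_page_token")
        if token is None:
            break

# A fake, in-memory pager standing in for the real HTTP call.
pages = {
    None: {"voices": ["v1", "v2"], "next_page_token": "t2"},
    "t2": {"voices": ["v3"], "next_page_token": None},
}
voices = list(iter_all_voices(lambda token: pages[token]))
```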
14376-14386: PromptAgent Output Schema Update
The addition of the `ignore_default_personality` flag, with a default value of `false` and marked as nullable, enhances control over personality settings. This update is clear and useful.
14427-14437: PromptAgent Override Schema Consistency
The modifications in the `PromptAgentOverride` schema mirror those in the PromptAgent output schema by adding the `ignore_default_personality` flag. This consistency is appreciated.
14549-14557: Pronunciation Dictionary Version Response Model Enhancement
Including `version_rules_num` as a required field, and allowing `version_id` to be nullable, improves model clarity. Ensure that clients are updated to accommodate this change.
14565-14573: Version Details in Dictionary Response
The definitions for `version_id` and `version_rules_num` are now clearly provided. This update should help maintain consistency in version tracking across the API.
14583-14590: Locator Model Update for Pronunciation Dictionaries
The `PydanticPronunciationDictionaryVersionLocator` now marks `archived_time_unix` as nullable, which is appropriate for dictionaries that have not yet been archived.
14600-14605: Query Parameters JSON Schema Adjustment
Allowing the `version_id` to be nullable in the `QueryParamsJsonSchema` provides greater flexibility when referencing documents. This enhancement is well executed.
14778-14784: Including `version_rules_num` in Model Requirements
The object now explicitly requires `version_rules_num` alongside other identifiers. This detail reinforces the versioning model and improves consistency.
14789-14798: Enhanced Property Definitions and Example Values
The updated definitions for `version_id` and `version_rules_num`, along with the accompanying example, are clear. Ensure that the example values align with real-world responses from the service.
15884-15889: TelephonyProvider Enum Expansion
The `TelephonyProvider` enum now includes `sip_trunk` alongside `twilio`, which appropriately reflects the API's expanded provider support.
15902-15924: TwilioOutboundCallResponse and URLAvatar Models
The new `TwilioOutboundCallResponse` model is detailed, with required fields like `success`, `message`, and `callSid`, and `callSid` is marked as nullable. Additionally, the `URLAvatar` model is defined appropriately. Please verify that these definitions match the actual responses from Twilio's API.
```diff
       - on
       - off
       type: string
-      description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' model.'
+      description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' or ''eleven_flash_v2_5'' models.'
       default: auto
       example: 'true'
   Body_Text_to_speech_v1_text_to_speech__voice_id__post:
```
Text Normalization in TTS
Similar to the previous hunk, the example here uses "true" even though the allowed values are strings ('auto', 'on', 'off'). This needs to be corrected to avoid confusion among API consumers.
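A small guard like this sketch would have flagged the bad example value; the allowed set comes straight from the parameter description above:

```python
ALLOWED_NORMALIZATION_MODES = {"auto", "on", "off"}

def validate_text_normalization(value):
    """Reject apply_text_normalization values outside the documented enum."""
    if value not in ALLOWED_NORMALIZATION_MODES:
        raise ValueError(
            f"apply_text_normalization must be one of "
            f"{sorted(ALLOWED_NORMALIZATION_MODES)}, got {value!r}")
    return value

assert validate_text_normalization("auto") == "auto"
```

Running it against the spec's current example (`'true'`) raises immediately, which is exactly the inconsistency these comments point out.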
```diff
       - on
       - off
       type: string
-      description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' model.'
+      description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' or ''eleven_flash_v2_5'' models.'
       default: auto
       example: 'true'
   Body_Transcribes_segments_v1_dubbing_resource__dubbing_id__transcribe_post:
```
Text Normalization in Dubbing Transcription
The same inconsistency persists—the example provided is "true" rather than a valid string option such as "auto". This discrepancy should be addressed.
```diff
       - on
       - off
       type: string
-      description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' model.'
+      description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' or ''eleven_flash_v2_5'' models.'
       default: auto
       example: 'true'
   Body_Text_to_speech_streaming_with_timestamps_v1_text_to_speech__voice_id__stream_with_timestamps_post:
```
Text Normalization in TTS Streaming
The description for text normalization clearly states that the valid values are 'auto', 'on', or 'off'. However, the example provided is "true", which is inconsistent with these options. Please update the example to use one of the allowed values (e.g., "auto").
```diff
       - on
       - off
       type: string
-      description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' model.'
+      description: 'This parameter controls text normalization with three modes: ''auto'', ''on'', and ''off''. When set to ''auto'', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With ''on'', text normalization will always be applied, while with ''off'', it will be skipped. Cannot be turned on for ''eleven_turbo_v2_5'' or ''eleven_flash_v2_5'' models.'
       default: auto
       example: 'true'
   Body_Text_to_speech_with_timestamps_v1_text_to_speech__voice_id__with_timestamps_post:
```
Text Normalization in TTS with Timestamps
Again, the example value is "true" while the parameter expects one of 'auto', 'on', or 'off'. Please align the example value with the documented allowed values.