Improve TTSService handling of long LLM token outputs #3057
base: main
Conversation
@mattieruth including you since you're working in this area of code. I don't think this impacts your PR, but sharing in case it does.
    return None

async def aggregate(self, text: str) -> Optional[str]:
I am wondering if this could be considered a breaking change for anyone using it. We're keeping the API the same but changing the behavior, and now, to extract the full text, they would need to use it together with `flush_next_sentence()`.
So I am not sure if it would be better if we introduced a new method instead. What do you think?
I don't think this changes any functionality. `aggregate()` works consistently between versions, I think.
Previously, it would:
- Return either `None` if no complete sentence was found, or the first sentence once an end of sentence was found.

The same is true today, and it uses the same logic. I extracted the logic to avoid duplication; otherwise, it could have remained as-is without change.
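For concreteness, here is a minimal sketch of that contract. This is not the actual pipecat source; the end-of-sentence regex and class internals are simplified assumptions for illustration.

```python
import re
from typing import Optional

# Simplified stand-in for end-of-sentence detection; the real library
# logic is more careful (abbreviations, numbers, ellipses, etc.).
_END_OF_SENTENCE = re.compile(r"[.!?](?=\s|$)")


class AggregatorSketch:
    """Illustrates the aggregate() behavior discussed above."""

    def __init__(self) -> None:
        self._text = ""  # buffered text not yet emitted as a sentence

    @property
    def text(self) -> str:
        # Whatever has been buffered but not yet returned as a sentence.
        return self._text

    async def aggregate(self, text: str) -> Optional[str]:
        # Buffer the incoming tokens, then return the first complete
        # sentence if one exists; otherwise return None and keep buffering.
        self._text += text
        match = _END_OF_SENTENCE.search(self._text)
        if match is None:
            return None
        sentence = self._text[: match.end()].strip()
        self._text = self._text[match.end():]
        return sentence
```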
Yeah, I think you're right. Somehow I got confused the first time I looked, but it makes sense now.
And in both versions we would still need to get any remaining text using `self._text_aggregator.text` at the end.
Cool, makes sense.
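A hedged sketch of that pre-PR end-of-response drain, reusing the `AggregatorSketch` above; the `drain_remaining` name and `push_to_tts` callable are illustrative assumptions, not the actual service code.

```python
# At the end of an LLM response, anything still buffered must be pulled
# out explicitly via the aggregator's `text` property.
async def drain_remaining(aggregator: AggregatorSketch, push_to_tts) -> None:
    leftover = aggregator.text.strip()
    if leftover:
        # Pre-PR behavior: the whole remainder goes out as one chunk,
        # however many sentences it happens to contain.
        await push_to_tts(leftover)
```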
Exactly.
Because of Mattie's bot output work, I'm going to have to hold off on merging this. We can stitch in this change once she wraps her work up. Thanks for confirming the approach 🙏
Force-pushed from 34929b9 to 40ed9a7
filipi87 left a comment
Nice improvement. 🚀🔥
The motivation for this change was discovering that Google Gemini outputs long chunks, sometimes containing multiple sentences. The `SimpleTextAggregator` extracts only the first sentence and buffers the remainder of the text. When the `LLMFullResponseEndFrame` was received, all remaining buffered text was pushed to TTS as one large chunk, so potentially many sentences were sent to TTS at once.

The issue is that interruptions capture the last complete sentence; for long outputs with many buffered sentences, many sentences could be missed during an interruption, which could cause the bot to repeat itself or continue speaking after being interrupted.

This PR adds a `flush_next_sentence()` method to the `SimpleTextAggregator`, which is used when the `LLMFullResponseEndFrame` is received. Instead of sending all remaining text as one chunk, the buffered sentences are now flushed individually, providing better interruption points throughout the response.

DON'T MERGE YET. WAIT UNTIL #2899 LANDS.
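For illustration only, here is a hedged sketch of the new end-of-response flow. `flush_next_sentence()` is the method this PR adds, but its exact signature and the handler shape here (`on_llm_response_end`, `push_to_tts`) are assumptions, not the actual TTSService code.

```python
# Sketch: on LLMFullResponseEndFrame, flush the buffered text one
# sentence at a time instead of pushing it as a single chunk, so each
# sentence becomes its own TTS request.
async def on_llm_response_end(aggregator, push_to_tts) -> None:
    while True:
        sentence = await aggregator.flush_next_sentence()  # assumed async
        if sentence is None:  # buffer drained
            break
        await push_to_tts(sentence)
```

With per-sentence flushes, an interruption loses at most the sentence currently being spoken rather than the whole buffered tail of the response.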