Replies: 1 comment
Been thinking about this too. It's tricky because natural language doesn't give you clear signals about complexity. A prompt might look simple but trigger a long chain of agent calls, data pulls, or analysis behind the scenes.

One approach that makes sense is starting every request as if it will be fast, then having the first-hop agent do a quick assessment. If it looks like the task will fan out or take time, you switch into async mode early: issue a task ID, register a push channel, and move on.

We're working on something related right now. Once a task goes async, you need a trusted way to anchor that request, track what was delegated, and make sure the final result came from an authorized source. When multiple agents or services are involved, that delivery path has to be verifiable.

Feels like the right building blocks are there. We just need to formalize how systems shift from conversational flows to transactional workflows when the task calls for it.
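A minimal sketch of that escalation flow, in Python. Everything here is an illustrative assumption, not part of any specific protocol: `assess_complexity` stands in for whatever first-hop heuristic or classifier you'd actually use, and `PUSH_CHANNELS` stands in for a real push-channel registry (webhooks, SSE, etc.).

```python
import asyncio
import uuid

FAST_BUDGET_SECONDS = 2.0  # assumed threshold for staying synchronous
PUSH_CHANNELS = {}         # task_id -> push callback (a webhook in practice)

def assess_complexity(prompt: str) -> float:
    """First-hop heuristic: estimate expected duration in seconds.

    A real system might use an LLM call or a learned classifier here;
    this keyword count is only an illustrative placeholder.
    """
    fan_out_markers = ("trend", "broken down by", "compare", "factoring in")
    hits = sum(marker in prompt.lower() for marker in fan_out_markers)
    return 0.5 + 5.0 * hits

async def answer_quickly(prompt: str) -> str:
    return f"quick answer to: {prompt}"  # placeholder lookup

async def run_long_task(task_id: str, prompt: str) -> None:
    await asyncio.sleep(0)  # stands in for fan-out across agents/data pulls
    PUSH_CHANNELS[task_id](f"done: {prompt[:30]}")  # push the final result

async def handle_request(prompt: str, push_channel) -> dict:
    """Start every request as if it will be fast; escalate to async early."""
    if assess_complexity(prompt) <= FAST_BUDGET_SECONDS:
        return {"mode": "sync", "result": await answer_quickly(prompt)}
    task_id = str(uuid.uuid4())            # anchor for the delegated work
    PUSH_CHANNELS[task_id] = push_channel  # register the push channel
    asyncio.create_task(run_long_task(task_id, prompt))
    return {"mode": "async", "task_id": task_id}
```

The point of the shape is that the caller always gets an immediate reply; the only question is whether that reply is the answer or a task ID to wait on.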
I am trying to figure out an approach for push notifications.
How can we predict when an arbitrary, natural language request will trigger a long-running task rather than a quick response?
Example Scenarios:
A simple request like "What is the population of New York City?" could be answered quickly through a lookup. This can happen on the same open connection, as a standard request/response.
A more complex request like "What is the historical trend of global carbon emissions in New York over the past 100 years, broken down by Bronx, Brooklyn, Manhattan, Queens, and Staten Island, while factoring in economic growth rates and average weather conditions?" would require a much longer process to compute, pulling data from multiple sources and potentially performing advanced analysis. It would likely involve more than one agent.
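One way to handle both scenarios from the caller's side is to not predict at all, and instead let the server decide per request. Here is a hedged client-side sketch: `send_request` is an assumed transport function that returns either `{"mode": "sync", "result": ...}` or `{"mode": "async", "task_id": ...}`, and the per-task queue stands in for a real webhook or SSE push channel.

```python
import queue

def request_with_push_fallback(send_request, prompt, timeout=30.0):
    """Client-side sketch: treat every call as sync until the server
    says otherwise, then block on a push channel for the final result.

    Pushes are assumed to arrive as (task_id, result) pairs so the
    client can verify the delivery came from the task it delegated.
    """
    inbox = queue.Queue()
    reply = send_request(prompt, push=inbox.put)
    if reply["mode"] == "sync":
        return reply["result"]
    # Server escalated: wait for the pushed result, keyed by task_id.
    task_id, result = inbox.get(timeout=timeout)
    assert task_id == reply["task_id"]  # verify the delivery path
    return result
```

The `task_id` check is the smallest version of the "verifiable delivery path" concern: the client only accepts a result that is anchored to the task it actually delegated.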