Add AgentHandoff Chat Item #808
Conversation
simllll left a comment
What's the idea of adding it to the chat context and then filtering it out again? Just curious about the reason behind this feature 🤔
toubatbrian replied:

Hey @simllll, those events are recorded and sent to observability for traces. They're filtered out before being passed to the LLM, since the LLM does not support a "handoff" message type.
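A minimal sketch of that filtering step, assuming a ChatItem union with an 'agent_handoff' variant; the shapes and names here are illustrative, not the PR's exact types:

// Illustrative item shapes; the PR's actual ChatItem union may differ.
type ChatItem =
  | { type: 'message'; role: 'user' | 'assistant' | 'system'; content: string }
  | { type: 'agent_handoff'; oldAgentId?: string; newAgentId: string };

// Handoff items stay in the chat context for tracing/export, but are
// dropped before building the provider request, since LLM APIs have no
// "handoff" message type.
function toLLMMessages(items: ChatItem[]): ChatItem[] {
  return items.filter((item) => item.type !== 'agent_handoff');
}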
export interface AgentOptions<UserData> {
  id?: string;
  instructions: string;
why is this optional?
This also follows the Python agents framework:
class Agent:
    def __init__(
        self,
        *,
        instructions: str,
        id: str | None = None,
        chat_ctx: NotGivenOr[llm.ChatContext | None] = NOT_GIVEN,
        tools: list[llm.FunctionTool | llm.RawFunctionTool] | None = None,
        turn_detection: NotGivenOr[TurnDetectionMode | None] = NOT_GIVEN,
        stt: NotGivenOr[stt.STT | STTModels | str | None] = NOT_GIVEN,
        vad: NotGivenOr[vad.VAD | None] = NOT_GIVEN,
        llm: NotGivenOr[llm.LLM | llm.RealtimeModel | LLMModels | str | None] = NOT_GIVEN,
        tts: NotGivenOr[tts.TTS | TTSModels | str | None] = NOT_GIVEN,
        mcp_servers: NotGivenOr[list[mcp.MCPServer] | None] = NOT_GIVEN,
        allow_interruptions: NotGivenOr[bool] = NOT_GIVEN,
        min_consecutive_speech_delay: NotGivenOr[float] = NOT_GIVEN,
        use_tts_aligned_transcript: NotGivenOr[bool] = NOT_GIVEN,
        min_endpointing_delay: NotGivenOr[float] = NOT_GIVEN,
        max_endpointing_delay: NotGivenOr[float] = NOT_GIVEN,
    ) -> None:
        tools = tools or []
        if type(self) is Agent:
            self._id = "default_agent"
        else:
            self._id = id or misc.camel_to_snake_case(type(self).__name__)
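For reference, a minimal sketch of how the same defaulting could look on the TypeScript side; the helper name and constructor shape are illustrative, not the PR's actual implementation:

// Illustrative only: mirrors the Python id-defaulting above.
function camelToSnakeCase(name: string): string {
  return name.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toLowerCase();
}

class Agent {
  private _id: string;

  constructor(opts: { id?: string; instructions: string }) {
    // The base class gets a fixed id; subclasses fall back to a
    // snake_case version of their class name when no id is given,
    // which is why `id` can stay optional.
    this._id =
      this.constructor === Agent
        ? 'default_agent'
        : (opts.id ?? camelToSnakeCase(this.constructor.name));
  }
}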
@toubatbrian I see, thanks for the insights and the quick reply. My thought: if we're talking about OpenTelemetry support or similar, is it really necessary to put it in the chat context at all? Would it be enough to "fake" this call, or to find some other way to make it traceable? (E.g., if we observe a special function for tracing, we could consider a flag or a second function just for logging purposes.)
Hey @simllll, I see your point, and it totally makes sense! The main goal is to achieve as much parity as possible with the Python agents framework, and adding the handoff object to the chat context is what's currently implemented on the Python side. The other reason for tying the handoff to the chat context is that we'll add support for exporting the context as JSON once a session finishes, so developers can run things like evals on it. Having the agent handoff info as part of the chat context is useful for that case.
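A rough sketch of that export-for-eval idea; the helper names and item shape are assumptions, not the released agents-js API:

import { writeFileSync } from 'node:fs';

// Minimal item shape for illustration; the PR's ChatItem union is richer.
type ChatItem = { type: string } & Record<string, unknown>;

// Hypothetical helper: dump the full context once a session ends,
// handoff items included, so an eval script can see exactly where each
// agent took over.
function exportChatContext(items: readonly ChatItem[], path: string): void {
  writeFileSync(path, JSON.stringify(items, null, 2));
}

// Example: count handoffs in an exported session.
function countHandoffs(items: readonly ChatItem[]): number {
  return items.filter((item) => item.type === 'agent_handoff').length;
}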
Implement AgentHandoffItem into chat context