fix(openai_compat): accept object tool call arguments #1292
Conversation
nikolasdehor
left a comment
Solid fix for handling providers that return tool call arguments as JSON objects instead of JSON strings.
Review notes:
- `decodeToolCallArguments` -- the function handles all three cases correctly:
  - null/empty: returns an empty map
  - string (standard OpenAI format): unmarshals the string as JSON
  - object (non-standard but valid): uses the map directly
  - other types: logs and falls back to the raw string
- `json.RawMessage` for the `Arguments` field -- this is the correct type to use when the wire format can be either a string or an object. The previous `string` type would fail to unmarshal object-form arguments.
- Graceful fallback -- malformed or unsupported argument types still produce a result with a `"raw"` key, preserving debuggability. This matches the previous behavior.
- Test coverage -- the new test `TestProviderChat_ParsesToolCallsWithObjectArguments` creates a realistic server response with object-form arguments and verifies that both string and boolean argument values are preserved.
- Minor: the `bytes.TrimSpace` call on `raw` before checking for `"null"` is good defensive coding -- some providers may return `" null "` with whitespace.
LGTM.
What is missing for this PR to get merged? Would be great to get it working again :)
yinwm
left a comment
Reviewed and compared with #1379. This PR is more complete:
- Preserves the original arguments string via `ToolCall.Function.Arguments`
- Adds the `Type` field for completeness
- Better test coverage, including `Function` field validation

The other PR (#1379) loses the raw arguments string, which may be needed downstream.
LGTM 👍
Summary
Fixes #1287.
Testing