
[Python SDK] execute_tool_call_loop crashes when LLM tool call omits function.arguments #353

@santoshkumarradha

Description

Summary

When an LLM returns a tool call object whose function field has no arguments key (i.e. just {"name": "..."}), agentfield.tool_calling.execute_tool_call_loop raises an AttributeError instead of treating the missing arguments as a recoverable parsing error and continuing the loop.

This is a real-world failure mode — frontier LLMs occasionally emit malformed tool calls under load, and the SDK should report the error back to the model so it can retry, not crash the whole reasoner.

Where

  • File: sdk/python/agentfield/tool_calling.py
  • Function: execute_tool_call_loop
  • Discovered by: sdk/python/tests/test_tool_calling_error_paths.py::test_malformed_tool_call_missing_arguments_is_reported_and_loop_continues (currently skipped with pytest.skip("source bug: ..."))

Reproduction

The skipped test in PR #352 reproduces it directly. Minimal repro: feed execute_tool_call_loop an LLM response whose tool call looks like:

{
    "id": "tc_missing",
    "type": "function",
    "function": {"name": "utility.echo"}  # NOTE: no "arguments" key
}

The loop accesses tool_call.function.arguments without a guard and raises AttributeError.
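The unguarded access can be simulated in isolation. This is a minimal stand-in, assuming the SDK models tool calls as attribute-style objects; the `SimpleNamespace` stand-in is illustrative, not the SDK's actual class.

```python
from types import SimpleNamespace

# Illustrative stand-in for the malformed tool call from the repro above:
# the "function" object carries a name but no "arguments" attribute.
tool_call = SimpleNamespace(
    id="tc_missing",
    type="function",
    function=SimpleNamespace(name="utility.echo"),
)

try:
    raw_args = tool_call.function.arguments  # unguarded access, as in the loop
except AttributeError as exc:
    print(f"AttributeError: {exc}")
```

Any attribute-style model without the key (a pydantic model with an optional field left unset behaves similarly) reproduces the same crash.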

Expected behavior

The loop should:

  1. Catch the missing-arguments case (treat it the same as malformed JSON arguments)
  2. Append a tool-role message back to the conversation describing the parse error
  3. Continue to the next turn so the LLM can retry or pivot
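The three steps above could be sketched as a small parsing helper plus an error branch in the loop. Everything here is a hypothetical shape, not the SDK's actual API: `parse_tool_call_arguments` and the message dict layout are assumptions for illustration.

```python
import json
from types import SimpleNamespace


def parse_tool_call_arguments(tool_call):
    """Return (args_dict, error_message) -- illustrative helper, not the SDK's API.

    A missing "arguments" attribute is treated the same as malformed JSON:
    both yield an error string instead of raising.
    """
    raw = getattr(tool_call.function, "arguments", None)
    if raw is None:
        return None, (
            f"Tool call '{tool_call.function.name}' omitted the required "
            "'arguments' field; re-emit the call with JSON arguments."
        )
    try:
        return json.loads(raw), None
    except (TypeError, json.JSONDecodeError) as exc:
        return None, f"Could not parse tool call arguments as JSON: {exc}"


# Inside the loop body, the error branch would append a tool-role message
# and move on to the next turn instead of raising (stand-in objects below):
messages = []
tool_call = SimpleNamespace(
    id="tc_missing",
    type="function",
    function=SimpleNamespace(name="utility.echo"),
)
args, error = parse_tool_call_arguments(tool_call)
if error is not None:
    # Report the parse failure back to the model so it can retry or pivot.
    messages.append(
        {"role": "tool", "tool_call_id": tool_call.id, "content": error}
    )
```

Routing both the missing-key and bad-JSON cases through one helper keeps the loop body to a single error branch, which matches the acceptance criteria below.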

Acceptance criteria

  • execute_tool_call_loop no longer raises on tool calls missing the arguments field
  • An error message describing the missing arguments is appended to messages as a tool role entry
  • The loop continues for additional turns up to max_turns
  • The skipped test in test_tool_calling_error_paths.py::test_malformed_tool_call_missing_arguments_is_reported_and_loop_continues is unskipped and passes

Discovered via

PR #352 (test coverage improvements). One of 5 source bugs surfaced while writing failure-mode tests for the Python SDK.

Metadata


    Labels

      • ai-friendly: Well-documented task suitable for AI-assisted development
      • area:ai: AI/LLM integration
      • bug: Something isn't working
      • good first issue: Good for newcomers
      • help wanted: Extra attention is needed
      • sdk:python: Python SDK related
      • tests: Unit test improvements and coverage
