fix: return fetched content to LLM in web_fetch tool (#833)
Conversation
WebFetchTool.Execute was setting ForLLM to a summary string
("Fetched N bytes from URL ...") instead of the actual extracted
text. This meant the LLM never saw the page content and could not
answer questions based on fetched web pages.
Return the extracted text in ForLLM so the model can use it.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pull request overview
Fixes web_fetch so the LLM receives the actual extracted page content (instead of a metadata-only summary), restoring the tool’s usefulness for answering questions based on fetched URLs.
Changes:
- Update `WebFetchTool.Execute` to return extracted `text` in `ToolResult.ForLLM`.
- Keep `ToolResult.ForUser` as the full JSON payload (url/status/extractor/truncated/length/text).
pkg/tools/web.go (outdated)
```diff
 return &ToolResult{
-	ForLLM: fmt.Sprintf(
-		"Fetched %d bytes from %s (extractor: %s, truncated: %v)",
-		len(text),
-		urlStr,
-		extractor,
-		truncated,
-	),
+	ForLLM:  text,
 	ForUser: string(resultJSON),
```
This change will break existing unit tests that assert WebFetchTool.ForLLM contains a summary string (see pkg/tools/web_test.go:40-43). Update the test expectations (and any other callers) to validate that ForLLM now contains the extracted text content instead of metadata, or include the needed metadata in ForLLM if the summary is still required by contract.
great bug you found! but instead of replacing ForLLM with text I would do something like this:

```go
return &ToolResult{
	ForLLM: string(resultJSON),
	ForUser: fmt.Sprintf(
		"Fetched %d bytes from %s (extractor: %s, truncated: %v)",
		len(text),
		urlStr,
		extractor,
		truncated,
	),
}
```

there are also unit tests to fix, but nice PR!
Accept suggestion from afjcjsbx: the LLM should receive the full JSON result (including extracted text) while the user sees a short summary. Update tests to match the new field assignment. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Thanks @afjcjsbx, good call! Applied your suggestion: ForLLM now gets the full JSON result and ForUser gets the summary. Also updated the tests to match.
LGTM! |
@lxowalle PTAL |
@xiaket PTAL |
xiaket left a comment
LGTM, thanks for the fix!
@lucamartinetti Good catch on the web_fetch bug. Returning the summary metadata string instead of the actual fetched content to the LLM would make the tool basically useless for real web lookups. Subtle regression from the `ToolResult` refactor. We have a PicoClaw Dev Group on Discord where contributors chat and collaborate. If you're interested, send an email to
* fix: return fetched content to LLM in web_fetch tool
WebFetchTool.Execute was setting ForLLM to a summary string
("Fetched N bytes from URL ...") instead of the actual extracted
text. This meant the LLM never saw the page content and could not
answer questions based on fetched web pages.
Return the extracted text in ForLLM so the model can use it.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: put full JSON result in ForLLM, summary in ForUser
Accept suggestion from afjcjsbx: the LLM should receive the full JSON
result (including extracted text) while the user sees a short summary.
Update tests to match the new field assignment.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Summary
`WebFetchTool.Execute` was setting `ForLLM` to a summary string ("Fetched N bytes from URL (extractor: ..., truncated: ...)") instead of the actual extracted text. The regression came from the refactor that changed the return type from `(string, error)` to `*ToolResult`: the old code returned the full content as a single string, but the split into `ForLLM`/`ForUser` accidentally hid the content from the LLM.

Fix
Return the extracted `text` in `ForLLM` (the field the LLM actually reads) instead of the metadata summary. `ForUser` continues to return the full JSON result.

Test plan
- `web_fetch` now returns page content the LLM can use (tested with weather data queries)

🤖 Generated with Claude Code
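The regression described above can be sketched in miniature. Everything here (type, function names, signatures) is a hypothetical reconstruction based on this PR, not PicoClaw's actual code:

```go
package main

import "fmt"

// Stand-in for the project's ToolResult; field names taken from this PR.
type ToolResult struct {
	ForLLM  string // content the model reads
	ForUser string // content shown to the human
}

// Before the refactor: the extracted text was the whole result.
func executeOld(text string) string { return text }

// After the refactor (buggy): the LLM-facing field got only metadata,
// so the model never saw the page content.
func executeBuggy(text, resultJSON string) *ToolResult {
	return &ToolResult{
		ForLLM:  fmt.Sprintf("Fetched %d bytes", len(text)),
		ForUser: resultJSON,
	}
}

// Fixed: the LLM-facing field carries the extracted text.
func executeFixed(text, resultJSON string) *ToolResult {
	return &ToolResult{ForLLM: text, ForUser: resultJSON}
}

func main() {
	text := "page content"
	fmt.Println(executeOld(text) == executeFixed(text, "{}").ForLLM) // true
	fmt.Println(executeBuggy(text, "{}").ForLLM)                     // metadata summary, not the content
}
```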