feat: search through webfetch tool output (when result exceeds context limits -> blocks conversation) #3869
Adds context limit checking to the webfetch tool to prevent exceeding the model's context window. Large webfetch outputs are truncated when they would exceed the available context space for the chosen model.
This uses the same overflow detection pattern as `SessionCompaction.isOverflow()`.
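For context, here is a minimal sketch of this kind of overflow check and truncation. It assumes a rough character-based token estimate, and the names (`estimateTokens`, `fitToContext`, `contextLimit`, `usedTokens`) are illustrative, not the PR's actual code:

```ts
// Sketch only: estimate the token count of the fetched text and truncate it
// to fit the remaining context window of the chosen model.

function estimateTokens(text: string): number {
  // Rough heuristic: ~4 characters per token.
  return Math.ceil(text.length / 4);
}

function fitToContext(
  output: string,
  contextLimit: number, // model's context window, in tokens (assumed known)
  usedTokens: number,   // tokens already consumed by the conversation
): string {
  const available = Math.max(contextLimit - usedTokens, 0);
  if (estimateTokens(output) <= available) return output;

  // Cut the output down to roughly the available budget and flag the cut.
  const maxChars = available * 4;
  return output.slice(0, maxChars) + "\n\n[output truncated: exceeded context limit]";
}
```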
Some docs, like the Next.js docs or the shadcn docs for certain components, are very large. When webfetch is called on them, the output exceeds the model's context limit and the conversation can't continue.

Now there is a `search` property which the model can use when it only needs specific content from the docs.

Example: the shadcn sidebar documentation is very detailed and long (~50k tokens). The model can now search for "SideMenuSub" directly in the `webfetch` call, and only that specific section is returned, which is just 200-500 tokens.

I've only been using opencode for a short time, but when it stops after hitting large docs or websites I am not able to continue or even compact the conversation. So this fixes that for me with no performance loss.
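For illustration, a minimal sketch of how a `search` parameter like this could filter fetched content, assuming markdown-style headings in the fetched page; the function name `searchSections` and the splitting heuristic are assumptions, not the PR's actual implementation:

```ts
// Sketch only: split the fetched document into heading-delimited sections and
// return only those whose text contains the search term.

function searchSections(content: string, search: string): string {
  const sections = content.split(/\n(?=#{1,6}\s)/); // split on markdown headings
  const needle = search.toLowerCase();
  const matches = sections.filter((section) => section.toLowerCase().includes(needle));
  return matches.length > 0
    ? matches.join("\n\n")
    : `No sections matching "${search}" were found.`;
}

// Usage: instead of returning ~50k tokens of the shadcn sidebar docs,
// searchSections(docs, "SideMenuSub") returns only the matching section(s).
```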