Description
When opencode is used in a workspace that contains large non-code files (media files, datasets, ML models, etc.), the snapshot system's `git add .` can run indefinitely, consuming excessive CPU and memory.
The snapshot system maintains a separate git repo to track worktree state. It runs `git add .` against the full worktree on every agentic step (via the `add` effect in `snapshot/index.ts`). When the worktree contains large directories with binary/media files, this operation never completes in a reasonable time.
Steps to reproduce
- Open opencode in a workspace containing a large directory (e.g. 500MB+ of media files, datasets, or ML model weights)
- Start a session and interact with the assistant
- Observe `git add .` and `git pack-objects` processes consuming high CPU/memory indefinitely
Observed behavior
- `git add .` ran for 3+ hours at ~90% CPU
- The subsequent hourly `git gc` (snapshot cleanup) consumed ~3.7GB of RAM trying to repack the bloated snapshot repo
- The system became sluggish and required manual process killing
Expected behavior
The snapshot system should handle large worktrees gracefully — either by timing out and excluding large/binary files, or by respecting size limits.
Environment
- OS: Linux (Ubuntu)
- opencode version: v1.2.27