This repository defines a local, AI-agnostic harness so different AI TUIs (Claude, Codex, Gemini, etc.) can operate against the same system:
- shared tools
- agent definitions
- shared memory vault
- system prompts/contracts
- infrastructure hooks/scripts
- `seed/bin/aih`: Harness CLI source for initializing and validating shared state.
- `seed/bin/ralph`: Loop runner for one-task-per-run automation across supported CLIs.
- `seed/tools/bin/vault`: Canonical memory vault CLI (file-native markdown+YAML notes).
- `seed/systems/prompts/ralph.md`: Canonical Ralph loop prompt synced to `~/.ai/systems/prompts/ralph.md`.
- `setup.sh`: Root bootstrap orchestrator for new-system provisioning.
- `scripts/setup/*.sh`: Per-domain installers (`plugins.sh`, `agents.sh`, etc.).
- `seed/`: Repository seed bundle copied into `$AI_HOME` during setup.
- `docs/architecture.md`: Canonical architecture and integration contract.
Bootstrap contract:
- `~/.ai/cortex.md` is loaded first at session start.
- Contracts are loaded lazily from `~/.ai/systems/contracts/*.md` as needed.
Do this in order:
- Clone this repo and `cd` into it.
- Generate a dedicated SSH key for sandbox use (do not reuse your normal SSH key):

  ```bash
  ssh-keygen -t ed25519 -f ~/.ssh/ai_ed25519 -C "ai-sandbox" -N ""
  ```

- Build the image:

  ```bash
  docker build --build-arg GITCONFIG="$(base64 -i ~/.gitconfig)" -t sandbox .
  ```

- Copy the `sandbox()` function from Run Container into your shell startup file (`~/.bashrc`, `~/.zshrc`, or `~/.profile`).
- Source your startup file (for example, `source ~/.zshrc`).
- In your terminal, navigate to the project you want mounted into `/workspace`.
- Run:

  ```bash
  sandbox
  ```
Do not run `./setup.sh` manually for normal use. The container entrypoint runs the required setup automatically when `sandbox` starts.
The harness uses a two-layer memory system:
- Vault (`~/.ai/memory`) — canonical, file-native markdown notes with YAML frontmatter. Source of truth.
- engram (`~/.ai/memory/engram`) — supplemental memory layer for session summaries and full-text search across past sessions.
vault handles durable structured records. engram adds searchable session memory with MCP integration so agents can save and retrieve context automatically.
engram is wired as an MCP server in all supported clients (Claude, OpenCode, Gemini, Codex) and provides 13 tools including mem_save, mem_search, mem_context, and mem_session_summary.
| Variable | Default | Description |
|---|---|---|
| `ENGRAM_DATA_DIR` | `~/.ai/memory/engram` | engram SQLite database location |
`ENGRAM_DATA_DIR` is set automatically by the container entrypoint. The path is under `~/.ai/memory`, which is persisted by the volume mount.
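The entrypoint's convention can be approximated with a default-with-fallback export (a sketch of the pattern, not the entrypoint's exact code):

```shell
# Keep an existing ENGRAM_DATA_DIR if the caller set one; otherwise fall back
# to the documented default under ~/.ai/memory
export ENGRAM_DATA_DIR="${ENGRAM_DATA_DIR:-$HOME/.ai/memory/engram}"
mkdir -p "$ENGRAM_DATA_DIR"
```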
Search memories from the CLI:
```bash
engram search "bootstrap contract"
```

Browse all memories interactively:

```bash
engram tui
```

The MCP server is started automatically by each AI client when needed. No manual initialization is required.
Project documentation is expected to live in a separate Obsidian vault repo.
Set `DOC_VAULT_ROOT` to your local documentation vault path:

```bash
export DOC_VAULT_ROOT="/path/to/your/documentation/vault"
```

`DOC_VAULT_ROOT` is the global documentation vault location. It is separate from the project repo you are actively working in.
Project documentation folder names are determined by running `docslug` in the working project repo. Documentation generation should fail if either `DOC_VAULT_ROOT` is unset or `docslug` is unavailable.
Add that export to your shell startup file (`~/.bashrc`, `~/.zshrc`, or `~/.profile`) so tooling can consistently find the shared docs vault.
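The fail-fast rule above can be sketched as a small guard function (a hypothetical helper; only `DOC_VAULT_ROOT` and `docslug` come from this repo's contract):

```shell
# require_doc_vault: fail loudly unless DOC_VAULT_ROOT is set and docslug is on PATH
require_doc_vault() {
  if [ -z "${DOC_VAULT_ROOT:-}" ]; then
    echo "error: DOC_VAULT_ROOT is not set" >&2
    return 1
  fi
  if ! command -v docslug >/dev/null 2>&1; then
    echo "error: docslug not found on PATH" >&2
    return 1
  fi
}
```

A docs-generation script would call `require_doc_vault` before writing anything under `$DOC_VAULT_ROOT/$(docslug)`.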
- Docker installed and available in your shell
- New SSH keys created specifically for AI use. See the SSH Keys section.
- Optional: `base64` support if passing `GITCONFIG` at build time
```bash
docker build --build-arg GITCONFIG="$(base64 -i ~/.gitconfig)" -t sandbox .
```

Notes:

- `TZ` controls container timezone.
- `GITCONFIG` is optional; omit it if you do not want to inject git config.
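Note that `base64 -i FILE` is the BSD/macOS invocation; GNU coreutils `base64` reads stdin and wraps output at 76 columns, which would corrupt the build arg. A portable helper for producing the single-line value (a sketch):

```shell
# b64_oneline FILE: emit FILE as one unwrapped base64 line on both GNU and BSD systems
b64_oneline() {
  base64 < "$1" | tr -d '\n'
}
```

Then build with `docker build --build-arg GITCONFIG="$(b64_oneline ~/.gitconfig)" -t sandbox .`.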
Add this helper to `~/.bashrc`, `~/.zshrc`, or `~/.profile`:

```bash
sandbox() {
  docker network create sandbox >/dev/null 2>&1 || true
  # Optional: to persist memories onto your hard drive, create the host folder
  # first, then add this mount to the flags below:
  #   -v "$HOME/.ai/memory:/home/ai/.ai/memory" \
  docker run --rm -it \
    --network sandbox \
    -v "sandbox_home:/home/ai" \
    -v "sandbox_ai:/home/ai/.ai" \
    -v "sandbox_cache:/home/ai/.cache" \
    -v "sandbox_claude:/home/ai/.claude" \
    -v "sandbox_codex:/home/ai/.codex" \
    -v "sandbox_config:/home/ai/.config" \
    -v "sandbox_cursor:/home/ai/.cursor" \
    -v "sandbox_gemini:/home/ai/.gemini" \
    -v "sandbox_local:/home/ai/.local" \
    -v "$HOME/.ssh/ai_ed25519:/home/ai/.ssh/id_ed25519:ro" \
    -v "$HOME/.ssh/ai_ed25519.pub:/home/ai/.ssh/id_ed25519.pub:ro" \
    -v "$HOME/.ssh/known_hosts:/home/ai/.ssh/known_hosts:ro" \
    -v "$(pwd):/workspace" \
    --workdir /workspace \
    sandbox \
    /bin/bash
}
```

On startup, the container entrypoint runs harness setup and client config merge:

- `setup.sh` (default components)
- `setup.sh clients`
The default CLI functions configured in the image use permissive flags (--yolo,
--dangerously-*). Treat this container as an untrusted automation environment,
not a hardened production boundary.
These are the shell functions used to launch the various TUIs:

```bash
agent() { TID="$(openssl rand -hex 24)" PROVIDER=cursor command agent --yolo "$@"; }
cursor() { TID="$(openssl rand -hex 24)" PROVIDER=cursor command agent --yolo "$@"; }
claude() { TID="$(openssl rand -hex 24)" PROVIDER=claude command claude --dangerously-skip-permissions "$@"; }
codex() { TID="$(openssl rand -hex 24)" PROVIDER=codex command codex --dangerously-bypass-approvals-and-sandbox "$@"; }
gemini() { TID="$(openssl rand -hex 24)" PROVIDER=gemini command gemini --yolo "$@"; }
kilo() { auth_file="$HOME/.local/share/kilo/auth.json"; mkdir -p "$(dirname "$auth_file")"; if [ -f "$auth_file" ] && grep -q APIKEY "$auth_file"; then [ -z "${MINIMAX_API_KEY:-}" ] && read -s -p "MiniMax API key: " MINIMAX_API_KEY && echo; sed -i "s/APIKEY/$MINIMAX_API_KEY/g" "$auth_file"; fi; TID="$(openssl rand -hex 24)" PROVIDER=kilo command kilo "$@"; }
opencode() { auth_file="$HOME/.local/share/opencode/auth.json"; mkdir -p "$(dirname "$auth_file")"; if [ -f "$auth_file" ] && grep -q APIKEY "$auth_file"; then [ -z "${MINIMAX_API_KEY:-}" ] && read -s -p "MiniMax API key: " MINIMAX_API_KEY && echo; sed -i "s/APIKEY/$MINIMAX_API_KEY/g" "$auth_file"; fi; TID="$(openssl rand -hex 24)" PROVIDER=opencode command opencode "$@"; }
```

`ralph` runs a prompt-driven execution loop for claude, opencode, or kilo. It is installed to `~/.ai/bin/ralph` and reads the default prompt from `~/.ai/systems/prompts/ralph.md`.
Required workspace files (must already exist before running ralph):
- `tasks.md`
- `status.md`
Default behavior:
- One task per iteration
- Stops on `Status: done` or `Status: blocked`
- Sets `Status: done` automatically when no unchecked tasks remain
- Per-iteration timeout supported (`--iteration-timeout`, default 600s)
CLI defaults:
- Claude: model `haiku`, output format `json`
- OpenCode: model `minimax/MiniMax-M2.5`
- Kilo: model `minimax/MiniMax-M2.5`
Examples:

```bash
ralph --cli claude --max-iterations 5 --iteration-timeout 30
ralph --cli opencode --max-iterations 5 --iteration-timeout 30
ralph --cli kilo --max-iterations 5 --iteration-timeout 30 -- --autosandbox
```

The `sandbox` command:

- Creates a Docker network named `sandbox` (if it doesn't exist)
- Mounts your SSH keys as read-only volumes
- Shares your current working directory as `/workspace`
- Provides persistent volumes for each AI tool's configuration
- Runs as a non-root user
Once inside, you have access to:
- AI Tools: `claude`, `codex`, `cursor`, `gemini`, `kilo`, `opencode` (with pre-configured start functions)
- Memory: `engram` (persistent session memory, MCP server, TUI — `engram tui`, `engram search`)
- Package Managers: `npm`, `pnpm`, `bun`
- Development Tools: `git`, `gh` (GitHub CLI), `glab` (GitLab CLI), `python3`, `vim`, `jq`, `ripgrep`
- Browser Automation: `agent-browser` (Playwright-backed browser CLI for AI agents)
- Worktree Management: `wt` (git worktree helper)
- Workspace: Your current directory mounted at `/workspace`
Replays your commit history as an animated TUI. Defaults to a screensaver-style random commit replay.
```bash
# Replay last 10 commits oldest-first
gitlogue --commit HEAD~10..HEAD --order asc

# Filter by author, loop continuously
gitlogue --author "alice" --loop

# View staged or unstaged changes
gitlogue diff
gitlogue diff --unstaged

# Date range with a theme
gitlogue --after "1 week ago" --theme dracula

# Speed up the replay (default is slow; lower ms = faster)
gitlogue --speed 10
```

Opens a TUI that watches a directory and shows colored diffs as files are written. Useful for watching what the AI is editing in real time.
```bash
# Watch current directory (non-recursive)
diffwatch

# Watch entire project recursively
diffwatch -p /workspace -r

# Watch a specific path recursively
diffwatch -p /workspace/src -r
```

Press `q` or `Ctrl+C` to exit.
Streams a new git status line every time the repo changes, event-driven via inotify (no polling). Also powers the Claude Code status bar GIT column automatically.
```bash
# Continuous watch with formatted output
gsw --format '{branch} +{staged} ~{modified} ?{untracked} ⇡{ahead}⇣{behind}'

# One-shot (current status, then exit)
gsw --once --format '{branch} +{staged} ~{modified} ?{untracked}'

# Include stash count and repo state (merge/rebase/etc.)
gsw --format '{branch} +{staged} ~{modified} ?{untracked} stash:{stash} [{state}]'
```

Format placeholders: `{branch}`, `{staged}`, `{modified}`, `{untracked}`, `{conflicted}`, `{ahead}`, `{behind}`, `{stash}`, `{state}`
Creates N isolated copies of the current repo as git worktrees, each on its own branch, for running multiple agents against the same codebase simultaneously without conflicts.
Folders and branches are named `{short-sha}-v{n}`, scoped to the current HEAD, so repeated runs never clobber each other.
When to use: spin up multiple worktrees when you want to run several AI agents in parallel on competing implementations of the same feature, then compare and cherry-pick the best result.
```bash
# Create 3 worktrees from current HEAD (e.g. abc1234-v1, abc1234-v2, abc1234-v3)
wt 3

# Each worktree is an independent branch you can work in
cd .worktree/abc1234-v1
claude   # agent 1 works here

cd .worktree/abc1234-v2
claude   # agent 2 works here independently

# Remove all worktrees when done
wt --rm
```

Worktrees are created under `.worktree/`, which is automatically added to `.gitignore`.
The setup expects SSH keys at:
- `~/.ssh/ai_ed25519` (private key, read-only)
- `~/.ssh/ai_ed25519.pub` (public key, read-only)
- `~/.ssh/known_hosts` (known hosts, read-only)
```bash
ssh-keygen -t ed25519 -f ~/.ssh/ai_ed25519 -C "ai-sandbox" -N ""
```

This creates a dedicated key pair specifically for the sandbox.
You must add the new public key to your version control system (GitHub, GitLab, Gitea, etc.) before using the sandbox:
GitHub:
- Copy your public key: `cat ~/.ssh/ai_ed25519.pub`
- Go to Settings → SSH and GPG keys → New SSH key
- Paste the key and save
GitLab:
- Copy your public key: `cat ~/.ssh/ai_ed25519.pub`
- Go to Preferences → SSH Keys
- Paste the key and save
Other VCS:
- Consult your platform's documentation for adding SSH keys to your account
- The public key (`~/.ssh/ai_ed25519.pub`) is what you add to your VCS profile
This allows the sandbox to authenticate with your repositories without exposing your main SSH keys.
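Before first use you can confirm the dedicated key pair is in place (a hypothetical helper, not part of the repo):

```shell
# sandbox_key_ready: succeed only when both halves of the dedicated key pair exist
sandbox_key_ready() {
  [ -f "$HOME/.ssh/ai_ed25519" ] && [ -f "$HOME/.ssh/ai_ed25519.pub" ]
}
```

With the keys in place, `ssh -i ~/.ssh/ai_ed25519 -o IdentitiesOnly=yes -T git@github.com` (GitHub shown as an example host) verifies the platform accepts the key before you start the sandbox.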
- Isolation: If the container is compromised, only the sandbox-specific key is exposed
- Security: Your main SSH keys remain safe on your host system
- Control: You can easily revoke the sandbox key without affecting other systems
- Auditability: You can track sandbox activity separately in your VCS logs
If you need to use a different key file name, update the key paths in the `sandbox()` function.
The `ai` user is configured with sudo privileges. The Dockerfile sets a specific password hash for the `ai` user account.
You can change this to your own password:

1. Generate a SHA-512 password hash in `crypt` format (the `$6$` prefix; the hash currently shipped in the Dockerfile uses the yescrypt `$y$` format, and either form is accepted):

   ```bash
   python3 -c "import crypt; print(crypt.crypt('your_password', crypt.METHOD_SHA512))"
   ```

2. Replace the hash in the Dockerfile (line 45) with the hash you just generated: the old hash starts at the first `$` and runs to the last `:`.

3. Rebuild the image with your new hash.
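Python's `crypt` module was removed in Python 3.13 (PEP 594). If it is unavailable, `openssl passwd -6` produces an equivalent SHA-512 crypt hash:

```shell
# Emit a SHA-512 crypt hash (the $6$... form) for the given password
openssl passwd -6 'your_password'
```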