Go CLI for AI quota tracking. Polls 8 providers → SQLite → Material Design 3 dashboard.
Background daemon (<50MB RAM) tracking: Anthropic, Synthetic, Z.ai, Copilot, Codex, MiniMax, Antigravity, Gemini.
main.go # CLI entry, daemon lifecycle
internal/
├── api/ # HTTP clients + types per provider
│ └── {provider}_client.go, {provider}_types.go
├── store/ # SQLite persistence per provider
│ └── store.go (schema), {provider}_store.go
├── tracker/ # Poll orchestration per provider
├── agent/ # Background polling agents
├── web/ # Dashboard server
│ ├── handlers.go # API endpoints
│ ├── static/ # Embedded JS/CSS (embed.FS)
│ └── templates/ # HTML templates
├── config/ # Config + container detection
└── notify/ # Email + push notifications
- TDD-first: Test → fail → implement → pass
- RAM-bounded: 40MB limit, single SQLite conn, lean HTTP
- Single binary: all assets via embed.FS
Always use app.sh for build and test - never run go build or go test directly.
./app.sh --build # Build production binary
./app.sh --test # Run all tests with race detection and coverage
./app.sh --smoke # Quick validation: vet + build check + short tests
go test -race ./... && go vet ./... # Pre-commit (mandatory)

| Rule | Reason |
|---|---|
| Never commit .env, .db, binaries | Security |
| Never log API keys | Security |
| Parameterized SQL only | Injection prevention |
| context.Context always | Leak prevention |
| -race before commit | Data race detection |
| subtle.ConstantTimeCompare for creds | Timing attacks |
| Bounded queries (cycles≤200, insights≤50) | Memory caps |
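Two of these rules in code form - a sketch only; the `cycles` table and its column names here are illustrative, not the actual schema:

```go
package main

import (
	"crypto/subtle"
	"database/sql"
	"fmt"
)

const maxCycles = 200 // bounded-query cap (cycles≤200)

// recentCycles: parameterized placeholders only (never string-concatenated
// SQL), plus a LIMIT so one provider cannot blow the memory budget.
// Table/column names are hypothetical.
func recentCycles(db *sql.DB, provider string) (*sql.Rows, error) {
	return db.Query(
		"SELECT started_at, used FROM cycles WHERE provider = ? ORDER BY started_at DESC LIMIT ?",
		provider, maxCycles,
	)
}

// credsEqual compares credentials in constant time so response timing
// leaks nothing about how many leading bytes matched.
func credsEqual(got, want string) bool {
	return subtle.ConstantTimeCompare([]byte(got), []byte(want)) == 1
}

func main() {
	fmt.Println(credsEqual("hunter2", "hunter2"), credsEqual("hunter2", "hunter3"))
	// true false
}
```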
Adding a provider:
- internal/api/{provider}_client.go + {provider}_types.go
- internal/store/{provider}_store.go
- internal/tracker/{provider}_tracker.go
- internal/agent/{provider}_agent.go
- Add to internal/web/handlers.go endpoints
- Update dashboard JS in internal/web/static/app.js
API Docs: See docs/ for provider-specific setup (COPILOT_SETUP.md, CODEX_SETUP.md, ANTIGRAVITY_SETUP.md, GEMINI_SETUP.md)
Containers: IsDockerEnvironment() in config.go detects Docker/K8s. Containers run foreground only.
Release: ./app.sh --release → cross-compile 5 platforms → include all binaries in GitHub release.
Anthropic Rate Limit Bypass: Anthropic's usage API has aggressive rate limits (~5 requests per token, then 429 for ~5 min). onWatch bypasses this by refreshing the OAuth token when rate limited - each new access token gets a fresh rate limit window. Implementation details:
- internal/agent/anthropic_agent.go: Detects 429, calls RefreshAnthropicToken, saves new tokens, retries
- internal/api/anthropic_oauth.go: OAuth token refresh endpoint (console.anthropic.com/v1/oauth/token)
- internal/api/anthropic_token_unix.go: Writes to macOS Keychain + file for persistence
- internal/api/anthropic_token_windows.go: Writes to credentials file
- Refresh tokens are one-time use (OAuth rotation) - MUST save new refresh token after each refresh
- See: issue #16, anthropics/claude-code#31021
- Use a hyphen (-) instead of an em dash in all text