# Bernstein

Orchestrate any AI coding agent. Any model. One command.

*Bernstein in action — parallel AI agents orchestrated in real time*


Documentation · Getting Started · Glossary · Limitations

## Wall of fame

> "lol, good luck, keep vibecoding shit that you have no idea about xD" — PeaceFirePL, Reddit


Bernstein takes a goal, breaks it into tasks, assigns them to AI coding agents running in parallel, verifies the output, and merges the results. You come back to working code, passing tests, and a clean git history.

No framework to learn. No vendor lock-in. Agents are interchangeable workers — swap any agent, any model, any provider. The orchestrator itself is deterministic Python code. Zero LLM tokens on scheduling.

```shell
pip install bernstein
bernstein -g "Add JWT auth with refresh tokens, tests, and API docs"
```

Also available via `pipx`, `uv tool install`, `brew`, `dnf copr`, and `npx bernstein-orchestrator`. See install options.

## Supported agents

Bernstein auto-discovers installed CLI agents. Mix them in the same run — cheap local models for boilerplate, heavy cloud models for architecture.

| Agent | Models | Install |
|---|---|---|
| Claude Code | opus 4.6, sonnet 4.6, haiku 4.5 | `npm install -g @anthropic-ai/claude-code` |
| Codex CLI | gpt-5.4, gpt-5.4-mini | `npm install -g @openai/codex` |
| Gemini CLI | gemini-3.1-pro, gemini-3-flash | `npm install -g @google/gemini-cli` |
| Cursor | sonnet 4.6, opus 4.6, gpt-5.4 | Cursor app |
| Aider | Any OpenAI/Anthropic-compatible | `pip install aider-chat` |
| Ollama + Aider | Local models (offline) | `brew install ollama` |
| Amp, Cody, Continue.dev, Goose, Kilo, Kiro, OpenCode, Qwen, Roo Code, Tabby | Various | See docs |
| Generic | Any CLI with `--prompt` | Built-in |

Any adapter also works as the internal scheduler LLM — run the entire stack without any specific provider:

```yaml
internal_llm_provider: gemini            # or qwen, ollama, codex, goose, ...
internal_llm_model: gemini-3.1-pro-preview
```

> **Tip**
>
> Run `bernstein --headless` for CI pipelines — no TUI, structured JSON output, non-zero exit on failure.
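In a CI pipeline that might look like the sketch below. The workflow keys are standard GitHub Actions syntax; only the `bernstein --headless` invocation comes from this README, and the redirect to `result.json` is an illustrative way to capture the structured output, not a documented convention.

```yaml
# .github/workflows/bernstein.yml — illustrative sketch
name: bernstein
on: [push]
jobs:
  orchestrate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install bernstein
      # Non-zero exit fails the job; result.json holds the structured output.
      - run: bernstein --headless -g "Fix failing tests" > result.json
```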

## Quick start

```shell
cd your-project
bernstein init                    # creates .sdd/ workspace + bernstein.yaml
bernstein -g "Add rate limiting"  # agents spawn, work in parallel, verify, exit
bernstein live                    # watch progress in the TUI dashboard
bernstein stop                    # graceful shutdown with drain
```

For multi-stage projects, define a YAML plan:

```shell
bernstein run plan.yaml           # skips LLM planning, goes straight to execution
bernstein run --dry-run plan.yaml # preview tasks and estimated cost
```
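A plan pairs tasks with roles, owned files, and completion signals. The sketch below is illustrative only — the field names are assumptions, not the documented schema; check the docs for the real format.

```yaml
# plan.yaml — illustrative sketch, not the documented schema
goal: "Add rate limiting"
tasks:
  - id: middleware
    role: backend
    files: [src/middleware/rate_limit.py]
    done_when: "pytest tests/test_rate_limit.py passes"
  - id: docs
    role: writer
    depends_on: [middleware]
    files: [docs/rate-limiting.md]
```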

## How it works

1. **Decompose** — the manager breaks your goal into tasks with roles, owned files, and completion signals.
2. **Spawn** — agents start in isolated git worktrees, one per task; the main branch stays clean.
3. **Verify** — the janitor checks concrete signals: tests pass, files exist, lint is clean, types check.
4. **Merge** — verified work lands in main; failed tasks are retried or routed to a different model.

The orchestrator is a Python scheduler, not an LLM. Scheduling decisions are deterministic, auditable, and reproducible.
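To make "deterministic, not an LLM" concrete, here is a minimal sketch of dependency-aware wave scheduling — plain data structures, reproducible ordering, zero model calls. This is an illustration of the idea, not Bernstein's actual scheduler code.

```python
def schedule(tasks: dict[str, set[str]]) -> list[list[str]]:
    """Deterministic wave scheduling. `tasks` maps task name -> set of
    dependencies. Each wave holds every task whose dependencies are done,
    sorted by name so the order is reproducible run after run."""
    done: set[str] = set()
    waves: list[list[str]] = []
    remaining = dict(tasks)
    while remaining:
        ready = sorted(t for t, deps in remaining.items() if deps <= done)
        if not ready:
            raise ValueError("dependency cycle")
        waves.append(ready)
        done.update(ready)
        for t in ready:
            del remaining[t]
    return waves

print(schedule({"auth": set(), "tests": {"auth"},
                "docs": {"auth"}, "merge": {"tests", "docs"}}))
# [['auth'], ['docs', 'tests'], ['merge']]
```

Because scheduling is pure data transformation, the same plan always yields the same execution order, which is what makes runs auditable.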

## Capabilities

**Core orchestration** — parallel execution, git worktree isolation, janitor verification, quality gates (lint + types + PII scan), cross-model code review, circuit breaker for misbehaving agents, token growth monitoring with auto-intervention.

**Intelligence** — contextual bandit router learns optimal model/effort pairs over time. Knowledge graph for codebase impact analysis. Semantic caching saves tokens on repeated patterns. Cost anomaly detection with Z-score flagging.
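Z-score flagging is a standard outlier test: flag a cost that sits more than a few standard deviations from the historical mean. A minimal sketch (not Bernstein's internal implementation):

```python
from statistics import mean, stdev

def is_cost_anomaly(history: list[float], new_cost: float,
                    threshold: float = 3.0) -> bool:
    """Flag new_cost if its Z-score against the history exceeds threshold."""
    if len(history) < 2:
        return False  # not enough data to estimate variance
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_cost != mu
    return abs(new_cost - mu) / sigma > threshold

# Typical per-task costs in dollars; a $5.00 task stands out immediately.
print(is_cost_anomaly([0.12, 0.10, 0.15, 0.11, 0.13], 5.00))  # True
print(is_cost_anomaly([0.12, 0.10, 0.15, 0.11, 0.13], 0.13))  # False
```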

**Enterprise** — HMAC-chained tamper-evident audit logs. Policy limits with fail-open defaults and multi-tenant isolation. PII output gating. OAuth 2.0 PKCE. SSO/SAML/OIDC auth. WAL crash recovery — no silent data loss.
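An HMAC chain makes a log tamper-evident by having each entry's MAC cover the previous entry's MAC, so editing any entry breaks every link after it. A minimal sketch of the technique, assuming a shared secret key — not Bernstein's actual log format:

```python
import hashlib
import hmac
import json

def append_entry(log: list[dict], key: bytes, event: dict) -> None:
    """Append an event whose MAC covers the event plus the previous MAC."""
    prev = log[-1]["mac"] if log else "genesis"
    payload = json.dumps(event, sort_keys=True) + prev
    mac = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"event": event, "mac": mac})

def verify_chain(log: list[dict], key: bytes) -> bool:
    """Recompute every MAC in order; any mismatch means tampering."""
    prev = "genesis"
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True) + prev
        expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev = entry["mac"]
    return True

key = b"audit-secret"
log: list[dict] = []
append_entry(log, key, {"task": "t1", "status": "merged"})
append_entry(log, key, {"task": "t2", "status": "failed"})
print(verify_chain(log, key))         # True
log[0]["event"]["status"] = "failed"  # tamper with the first entry
print(verify_chain(log, key))         # False
```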

**Observability** — Prometheus `/metrics`, OTel exporter presets, Grafana dashboards. Per-model cost tracking (`bernstein cost`). Terminal TUI and web dashboard. Agent process visibility in `ps`.

**Ecosystem** — MCP server mode, A2A protocol support, GitHub App integration, pluggy-based plugin system, multi-repo workspaces, cluster mode for distributed execution, self-evolution via `--evolve`.

Full feature matrix: FEATURE_MATRIX.md

## How it compares

| Feature | Bernstein | CrewAI | AutoGen | LangGraph |
|---|---|---|---|---|
| Orchestrator | Deterministic code | LLM-driven | LLM-driven | Graph + LLM |
| Works with | Any CLI agent (29 adapters) | Python SDK classes | Python agents | LangChain nodes |
| Git isolation | Worktrees per agent | No | No | No |
| Verification | Janitor + quality gates | No | No | Conditional edges |
| Cost tracking | Built-in | No | No | No |
| State model | File-based (`.sdd/`) | In-memory | In-memory | Checkpointer |
| Self-evolution | Built-in | No | No | No |
| Declarative plans (YAML) | Yes | Partial | No | Yes |
| Model routing per task | Yes | No | No | Manual |
| MCP support | Yes | No | No | No |
| Agent-to-agent chat | No | Yes | Yes | No |
| Web UI | No | Yes | Yes | Partial |
| Cloud hosted option | No | Yes | No | Yes |
| Built-in RAG/retrieval | No | Yes | Yes | Yes |

Last verified: 2026-04-14. See full comparison pages for detailed feature matrices.

## Monitoring

```shell
bernstein live       # TUI dashboard
bernstein dashboard  # web dashboard
bernstein status     # task summary
bernstein ps         # running agents
bernstein cost       # spend by model/task
bernstein doctor     # pre-flight checks
bernstein recap      # post-run summary
bernstein trace <ID> # agent decision trace
bernstein run-changelog --hours 48  # changelog from agent-produced diffs
bernstein explain <cmd>  # detailed help with examples
bernstein dry-run    # preview tasks without executing
bernstein dep-impact # API breakage + downstream caller impact
bernstein aliases    # show command shortcuts
bernstein config-path    # show config file locations
bernstein init-wizard    # interactive project setup
bernstein debug-bundle   # collect logs, config, and state for bug reports
bernstein fingerprint build --corpus-dir ~/oss-corpus  # build local similarity index
bernstein fingerprint check src/foo.py                 # check generated code against the index
```

## Install

| Method | Command |
|---|---|
| pip | `pip install bernstein` |
| pipx | `pipx install bernstein` |
| uv | `uv tool install bernstein` |
| Homebrew | `brew tap chernistry/bernstein && brew install bernstein` |
| Fedora / RHEL | `sudo dnf copr enable alexchernysh/bernstein && sudo dnf install bernstein` |
| npm (wrapper) | `npx bernstein-orchestrator` |

Editor extensions: VS Marketplace · Open VSX

## Contributing

PRs welcome. See CONTRIBUTING.md for setup and code style.

## Support

If Bernstein saves you time: GitHub Sponsors · Open Collective

## License

Apache License 2.0


"To achieve great things, two things are needed: a plan and not quite enough time." — Leonard Bernstein