Orenda is an open-source, YAML-driven agent framework for continuously scanning collaboration workspaces (e.g., Slack public channels) and sending soft, reflective messages to users. Instead of micromanaging, Orenda encourages self-correction and non-invasive orchestration: the system detects issues (like unanswered questions or risky tone) and nudges people to reflect in their own context and language.
The name Orenda comes from the Iroquois concept of a spiritual force that empowers change. This framework is a "mirror" that helps teams notice, reflect, and improve on their own.
- Continuously scans channels (e.g., Slack public channels).
- Runs Detectors (LLM or rule-based) that spot conditions (unanswered questions, tone risks, confusion/blockers, etc.).
- Triggers Reflectors that deliver gentle messages at the right time (immediately, later in the morning, next Monday).
- Supports refire and escalation logic for persistent issues, without becoming intrusive.
- Is fully declarative via YAML: define logic as nodes, link prompts by file path, schedule everything, and modularize with includes.
- Ships with JSON Schemas for validation and a CLI for lint/compile/run.
Orenda is a self-management amplifier. It promotes:
- Autonomy: People decide if/how to act after reflection.
- Clarity: Detectors are explicit, finite, and auditable.
- Respect: Reflectors are phrased in the user's language and tone, with actionable options.
- Safety: PII redaction and clear audit trails.
Orenda implements a sophisticated execution engine with the following components:
┌───────────────┐     ┌────────────────┐     ┌─────────────────┐
│   Scheduler   │────▶│   Dispatcher   │────▶│   NodeRuntime   │
│  (Cron Jobs)  │     │  (Orchestra.)  │     │ (Executes LLM)  │
└───────────────┘     └────────────────┘     └─────────────────┘
                               │                      │
                               ▼                      ▼
                      ┌────────────────┐     ┌─────────────────┐
                      │     Router     │     │EscalationHandler│
                      │  (Next Nodes)  │     │& DeliveryService│
                      └────────────────┘     └─────────────────┘
                               │                      │
                               ▼                      ▼
                      ┌────────────────┐     ┌─────────────────┐
                      │  StateManager  │     │  SlackService   │
                      │  (PostgreSQL)  │     │ (Messaging API) │
                      └────────────────┘     └─────────────────┘
🎯 NodeRuntime - Executes individual detector/reflector nodes
- Runs LLM prompts with context injection
- Handles binary and categorical detection modes
- Manages escalation and refire conditions
- Stores execution results for audit trails
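The binary/categorical split above can be sketched roughly as follows; the names (`DetectorResult`, `parseDetectorOutput`) are illustrative, not Orenda's actual API:

```typescript
// Illustrative sketch only: normalize a raw LLM answer into a typed result.
type DetectorResult =
  | { kind: "binary"; value: boolean }
  | { kind: "categorical"; value: string };

function parseDetectorOutput(
  mode: "binary" | "categorical",
  raw: string,
  categories: string[] = []
): DetectorResult {
  const text = raw.trim().toLowerCase();
  if (mode === "binary") {
    // Accept only an explicit true/false answer from the model.
    if (text === "true") return { kind: "binary", value: true };
    if (text === "false") return { kind: "binary", value: false };
    throw new Error(`unparseable binary output: ${raw}`);
  }
  // Categorical mode: the output must be one of the finite set declared in YAML.
  if (categories.includes(text)) return { kind: "categorical", value: text };
  throw new Error(`unknown category: ${raw}`);
}
```

Rejecting anything outside the declared set is what keeps detector outputs finite and auditable.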
🧭 Router - Handles graph traversal and next node resolution
- Compiles YAML trees into optimized runtime format
- Resolves next nodes based on detector results
- Prevents infinite loops with cycle detection
- Validates tree structure and dependencies
📦 Dispatcher - Orchestrates execution across multiple threads
- Queue management with priority and retry logic
- Concurrent execution with configurable limits
- Integration with all core services
- Run tracking and progress monitoring
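The "concurrent execution with configurable limits" idea might be implemented along these lines; this is an illustrative sketch with a hypothetical `runWithLimit` helper, not the real Dispatcher:

```typescript
// Run all tasks, but keep at most `limit` of them in flight at once.
async function runWithLimit<T>(
  tasks: (() => Promise<T>)[],
  limit: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  async function worker(): Promise<void> {
    // Each worker pulls the next unclaimed task index from the shared queue.
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  // Spawn at most `limit` workers; they drain the queue cooperatively.
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, () => worker());
  await Promise.all(workers);
  return results;
}
```

A per-thread fanout would build one task per relevant Slack thread and hand the batch to such a runner.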
⬆️ EscalationHandler - Manages escalation logic and refire states
- Multi-target escalation (users, channels, managers, on-call)
- Cooldown periods to prevent spam
- Escalation rule management and tracking
- Integration with organizational systems
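The cooldown gate can be sketched as below; the in-memory `Map` stands in for Orenda's persistent StateManager, and all names are hypothetical:

```typescript
// Suppress a repeat escalation for the same (nodeId, threadId) pair
// until the cooldown window has elapsed.
const lastEscalatedAt = new Map<string, number>();

function shouldEscalate(
  nodeId: string,
  threadId: string,
  cooldownMs: number,
  now: number = Date.now()
): boolean {
  const key = `${nodeId}:${threadId}`;
  const last = lastEscalatedAt.get(key);
  if (last !== undefined && now - last < cooldownMs) {
    return false; // still cooling down: skip to prevent spam
  }
  lastEscalatedAt.set(key, now);
  return true;
}
```

In a real deployment the timestamps would live in PostgreSQL so cooldowns survive restarts.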
💬 DeliveryService - Handles scheduled message delivery
- Scheduled delivery with retry logic
- Multiple delivery targets (thread, DM, channel)
- Persistent job storage and recovery
- Cleanup of completed jobs
⏰ Scheduler - Cron-based execution triggering
- YAML tree scheduling with cron expressions
- Auto-sync and tree recompilation
- Manual trigger capabilities
- Slack message synchronization
🔧 Supporting Infrastructure
- StateManager: PostgreSQL-based message and context storage
- SlackService: Slack API integration for messaging
- LLMRunner: OpenAI GPT integration with context injection
- Merger & RolesResolver: YAML configuration processing
- Validator: JSON Schema validation
1. Scheduler triggers trees based on cron expressions.
2. Merger processes the YAML configuration with includes & roles.
3. Router compiles the tree into an optimized execution format.
4. Dispatcher queues root detector nodes for each relevant thread.
5. NodeRuntime executes detectors with LLM integration.
6. Router determines next nodes based on results.
7. EscalationHandler manages escalations when limits are reached.
8. DeliveryService schedules reflector messages for delivery.
9. StateManager persists all execution state and results.
git clone https://github.com/your-org/orenda.git
cd orenda
# Node or Deno or Bun; choose your runtime. Example with Node:
npm install

Orenda requires PostgreSQL for storing messages, contexts, and execution state.
# Install PostgreSQL (if not already installed)
# On macOS: brew install postgresql
# On Ubuntu: sudo apt-get install postgresql postgresql-contrib
# On Windows: Download from https://www.postgresql.org/download/
# Create database and user
psql -U postgres
CREATE DATABASE orenda_dev;
CREATE USER orenda WITH ENCRYPTED PASSWORD 'your_password';
GRANT ALL PRIVILEGES ON DATABASE orenda_dev TO orenda;
\q

Update `src/config/settings.dev.json` with your database connection:
{
"connectionString": "postgresql://orenda:your_password@localhost:5432/orenda_dev",
"slack": {
"botToken": "xoxb-your-slack-bot-token"
},
"openai": {
"apiKey": "sk-your-openai-api-key"
}
}

Create or edit:
- `orenda-tree.yaml`: the one canonical tree for your app.
- `orenda-roles.yaml`: maps symbolic roles (e.g., `@teamlead`) to users.
Example minimal tree:
version: 1.5
tree:
  id: orenda_main
  name: "Orenda Daily Channel Scan"
  schedule: "0 3 * * *"   # run daily at 03:00
  include:
    - trees/detectors/unanswered.yaml
    - trees/reflectors/unanswered.yaml

Example roles:
version: 1.0
roles:
  "@teamlead": "john.doe"
  "@user": "{{thread.owner}}"

Create prompt files under `prompts/` and reference them by path in nodes.
Example prompts/reflect_unanswered.md:
You are a reflector node.
Notify the user they likely missed a reply.
Context:
- thread.url
- question_excerpt
- asker_display
Message:
You may have missed a question from {{asker_display}} in {{thread.url}}:
> {{question_excerpt}}
Please reply or acknowledge today.

# Validate YAML against schemas
orenda lint --schema schemas/orenda_tree.schema.json orenda-tree.yaml
orenda lint --schema schemas/orenda_roles.schema.json orenda-roles.yaml

# Simulate on sample data (no messages posted)
orenda run --dry --tree orenda-tree.yaml --since P1D

# Start scheduler (executes tree cron and reflector deliveries)
orenda start

Orenda's YAML is small but powerful. It describes one application tree plus modular includes, detectors and reflectors, and the timing semantics that make it humane.
- Each Orenda app exposes one canonical root file: `orenda-tree.yaml`.
- That file may include other YAML fragments to keep things modular.
- The final compiled graph is a single DAG (directed acyclic graph).
version: <string|number>
tree:
  id: <string>          # a-zA-Z0-9_.-
  name: <string>
  schedule: <cron>      # when the tree runs (e.g., "0 3 * * *")
  include:              # optional; merges external YAMLs
    - trees/detectors/unanswered.yaml
    - trees/reflectors/unanswered.yaml
  nodes:                # optional in root; often defined in includes
    - { ...node definitions... }

Notes:
- `schedule` on the root defines the main daily/periodic run (e.g., process the last day's threads).
- `include` merges external YAML files before validation and execution.
Two node types: detector and reflector.
- id: detect_unanswered
  type: detector
  mode: binary | categorical
  prompt: prompts/detect_unanswered.md   # external file path
  triggers:                              # mapping of outputs → next node
    on_true: reflect_unanswered          # for binary
    # or for categorical:
    #   risky_tone: reflect_tone
    #   neutral_only: end (implicit termination if not mapped)
  refire:                                # optional: re-detection logic
    delay: "48h"                         # OR schedule: "0 10 * * MON"
    prompt: prompts/detect_unanswered_refire.md
    limit: 2                             # max rechecks
  escalation:                            # optional
    when: limit_reached | persistent_true | repeated_offense
    next: reflect_escalate_manager
- Detectors are parallel: they run independently on the same batch.
- Mode:
  - `binary`: emits `true`/`false` → use `triggers.on_true`.
  - `categorical`: emits one of a finite set → map each category to a next node via `triggers`.
- Refire:
  - `delay` (e.g., `"15m"`, `"2h"`, `"1d"`) or `schedule` (cron).
  - `prompt` overrides the instruction for the refire phase.
  - `limit` caps rechecks to prevent loops.
- Escalation:
  - `when`:
    - `limit_reached`: fires when `refire.limit` is exhausted.
    - `persistent_true`: the issue stayed true across consecutive runs (tracked by the engine).
    - `repeated_offense`: the same user/thread triggered more than N times in a window (configurable; tracked by the engine).
  - `next`: the reflector to trigger when the escalation condition hits.
- id: reflect_unanswered
  type: reflector
  prompt: prompts/reflect_unanswered.md
  to: "@teamlead"        # optional; resolve via roles
  delivery:
    delay: "15m"         # OR schedule: "0 9 * * *"
  end: true              # terminal node
- Reflectors are terminal: every path must end in at least one reflector with `end: true`.
- Delivery:
  - `delay`: relative timing after detection (`s`/`m`/`h`/`d` suffix).
  - `schedule`: cron timing (e.g., batch to 09:00 daily).
- `to`: a role symbol (e.g., `@teamlead`), or leave empty to default to the thread owner; roles are resolved via `orenda-roles.yaml`.
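The `delay` shorthand with `s`/`m`/`h`/`d` suffixes, used by both `refire` and `delivery`, could be parsed like this (`parseDelay` is an illustrative helper, not part of Orenda's API):

```typescript
// Milliseconds per suffix unit.
const UNIT_MS: Record<string, number> = {
  s: 1_000,
  m: 60_000,
  h: 3_600_000,
  d: 86_400_000,
};

// Turn "15m", "2h", "1d", ... into a millisecond offset.
function parseDelay(spec: string): number {
  const match = /^(\d+)([smhd])$/.exec(spec.trim());
  if (!match) throw new Error(`invalid delay: ${spec}`);
  return Number(match[1]) * UNIT_MS[match[2]];
}
```

The strict regex means a typo like `"15min"` fails at load time rather than silently scheduling nothing.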
Important: Prompts are referenced by path, not embedded. This keeps YAML clean and prompts reusable.
- Use `triggers` to map detector outputs to next nodes. `outputs` is a legacy synonym and should be avoided going forward.
- `include` accepts a list of YAML fragment files.
- Included files typically expose:

  nodes:
    - id: ...
      type: detector|reflector
      ...

- Node IDs must be globally unique after merging.
- Include depth is supported (recommended depth ≤ 3).
- The root file remains the source of truth for schedule and app identity.
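The unique-ID invariant after merging can be sketched as follows (simplified node shape; `mergeNodes` is a hypothetical name, not Orenda's actual merger):

```typescript
interface NodeDef {
  id: string;
  type: "detector" | "reflector";
}

// Flatten node fragments from all includes, rejecting duplicate IDs.
function mergeNodes(fragments: NodeDef[][]): NodeDef[] {
  const seen = new Set<string>();
  const merged: NodeDef[] = [];
  for (const fragment of fragments) {
    for (const node of fragment) {
      if (seen.has(node.id)) {
        throw new Error(`duplicate node id after merge: ${node.id}`);
      }
      seen.add(node.id);
      merged.push(node);
    }
  }
  return merged;
}
```

Failing fast here keeps `triggers` references unambiguous once the graph is compiled.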
version: 1.0
roles:
  "@teamlead": "john.doe"                        # static
  "@user": "{{thread.owner}}"                    # dynamic template
  "@manager":
    primary: "{{users[thread.owner].manager}}"   # fallback chain
    fallback: "@cto"
- Role names start with `@`.
- Values can be:
  - Static usernames (platform handles).
  - Templated expressions evaluated at runtime.
  - Objects with `primary` + `fallback`.
- Reflectors use `to: "@role"`; roles are resolved at delivery time.
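Fallback-chain resolution might look roughly like this; the sketch skips `{{...}}` template evaluation, and `resolveRole` and its shapes are illustrative:

```typescript
type RoleValue = string | { primary: string; fallback: string };

// Resolve an @role symbol to a concrete username, following fallback
// chains and nested role references, with a depth cap against loops.
function resolveRole(
  role: string,
  roles: Record<string, RoleValue>,
  depth = 0
): string {
  if (depth > 5) throw new Error(`role resolution loop at ${role}`);
  const value = roles[role];
  if (value === undefined) throw new Error(`unknown role: ${role}`);
  if (typeof value === "string") {
    // A value may itself be another @role symbol (e.g., fallback: "@cto").
    return value.startsWith("@") ? resolveRole(value, roles, depth + 1) : value;
  }
  // Object form: try primary; on failure, fall back.
  try {
    return value.primary.startsWith("@")
      ? resolveRole(value.primary, roles, depth + 1)
      : value.primary;
  } catch {
    return resolveRole(value.fallback, roles, depth + 1);
  }
}
```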
- One root tree per app (`orenda-tree.yaml`).
- All includes must merge into a DAG without cycles.
- Every branch must terminate in a reflector (`end: true`).
- First runnable nodes are detectors (reflectors are terminal and do not start branches).
- `detector.mode` is `binary` or `categorical` only; categories are finite.
- `triggers` targets must reference valid node IDs.
- `delivery` in reflectors must specify `delay` or `schedule` (not both).
- `refire` in detectors must specify `delay` or `schedule` (not both); `limit ≥ 1`.
- `escalation.when` ∈ {`limit_reached`, `persistent_true`, `repeated_offense`}.
- Role symbols used in `to:` must exist in `orenda-roles.yaml`.
- Prompts are file paths (Markdown or text), not inline strings.
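Two of these checks, cycle detection and "every branch ends in a terminal reflector", can be sketched together (simplified shapes and hypothetical names, not the real validator):

```typescript
interface GraphNode {
  id: string;
  type: "detector" | "reflector";
  next: string[]; // resolved `triggers` targets; empty for terminal nodes
}

// DFS from each detector: reject cycles and dangling branches.
function validateDag(nodes: GraphNode[]): void {
  const byId = new Map(nodes.map((n) => [n.id, n] as [string, GraphNode]));
  const state = new Map<string, "visiting" | "done">();

  function visit(id: string): boolean {
    const node = byId.get(id);
    if (!node) throw new Error(`triggers target not found: ${id}`);
    if (state.get(id) === "visiting") throw new Error(`cycle through ${id}`);
    // A reflector with no outgoing edges is a valid terminal.
    if (node.type === "reflector" && node.next.length === 0) return true;
    state.set(id, "visiting");
    const ok = node.next.length > 0 && node.next.every((n) => visit(n));
    state.set(id, "done");
    if (!ok) throw new Error(`branch from ${id} does not end in a reflector`);
    return true;
  }

  for (const node of nodes) if (node.type === "detector") visit(node.id);
}
```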
orenda/
├─ README.md
├─ LICENSE
├─ .gitignore
├─ .eslintrc.json               # ESLint configuration
├─ package.json                 # npm dependencies and scripts
├─ tsconfig.json                # TypeScript compiler configuration
├─ examples/                    # example configurations and templates
│  ├─ orenda-example-tree.yaml    # example root tree configuration
│  ├─ orenda-example-roles.yaml   # example role mappings
│  ├─ detectors/
│  │  ├─ unanswered.detector.yaml
│  │  ├─ tone.detector.yaml
│  │  └─ blockers.detector.yaml
│  ├─ reflectors/
│  │  ├─ unanswered.reflector.yaml
│  │  ├─ tone.reflector.yaml
│  │  └─ escalation.reflector.yaml
│  └─ prompts/
│     ├─ detect_unanswered.prompt.md
│     ├─ detect_unanswered_refire.prompt.md
│     └─ detect_tone.prompt.md
├─ schemas/
│  ├─ orenda_tree.schema.json     # JSON Schema for tree configurations
│  ├─ orenda_roles.schema.json    # JSON Schema for role mappings
│  └─ orenda_nodes.schema.json    # JSON Schema for node definitions
├─ src/
│  ├─ cli/
│  │  └─ main.ts                # CLI entry point
│  ├─ config/
│  │  └─ settings.dev.json      # development settings
│  ├─ loader/
│  │  ├─ yaml-loader.ts         # reads tree & includes
│  │  ├─ merger.ts              # merges nodes, resolves paths
│  │  ├─ validator.ts           # JSON-schema validation + graph checks
│  │  └─ roles-resolver.ts      # resolves @roles → users
│  ├─ runtime/
│  │  ├─ scheduler.ts           # cron for tree + deliveries + refires
│  │  ├─ dispatcher.ts          # per-thread fanout, concurrency, retries
│  │  ├─ node-runtime.ts        # executes detectors/reflectors
│  │  ├─ delivery-service.ts    # posts messages (e.g., Slack)
│  │  ├─ escalation-handler.ts  # limit/persistence/offense handling
│  │  └─ state-manager.ts       # persistence for refires/escalations
│  └─ integrations/
│     ├─ llm-runner.ts          # LLM integration for prompt execution
│     └─ slack-service.ts       # Slack API integration
└─ tests/                       # test files and fixtures (currently empty)
nodes:
  - id: detect_unanswered
    type: detector
    mode: binary
    prompt: prompts/detect_unanswered.md
    triggers:
      on_true: reflect_unanswered
    refire:
      delay: "48h"
      prompt: prompts/detect_unanswered_refire.md
      limit: 2
    escalation:
      when: "limit_reached"
      next: reflect_escalate_manager

nodes:
  - id: reflect_unanswered
    type: reflector
    prompt: prompts/reflect_unanswered.md
    delivery:
      delay: "15m"
    end: true
  - id: reflect_escalate_manager
    type: reflector
    prompt: prompts/reflect_escalate_manager.md
    to: "@teamlead"
    delivery:
      schedule: "0 9 * * *"
    end: true

- PII Awareness: redact sensitive content before LLM calls where appropriate.
- Audit Trail: every node execution, reflection, and escalation is logged with timestamps and resolved recipients.
- Access Control: restrict who can modify trees/prompts.
- Cost Visibility: record tokens/costs per detector to budget LLM usage.
# Lint YAML using schemas
orenda lint --schema schemas/orenda_tree.schema.json orenda-tree.yaml
# Compile includes β flattened JSON DAG (for inspection)
orenda compile --tree orenda-tree.yaml --out build/compiled_tree.json
# Dry run (no messages posted)
orenda run --dry --tree orenda-tree.yaml --since P1D
# Start scheduler + delivery services
orenda start

MIT License
Copyright (c) 2025 ...
Permission is hereby granted, free of charge, to any person obtaining a copy
...
(See LICENSE for full text.)
- Fork, create a feature branch, and open a PR.
- Add/update unit tests and schemas.
- Keep the YAML language minimal & declarative; new features must not introduce scripting.
- Provide examples and docs for any new node properties.
- Resolution listeners (auto-cancel refires when the thread is answered).
- Multi-workspace ingestion & cross-channel deduplication.
- Local rule engine for common detectors (zero-LLM).
- Visual DAG editor for non-technical users.
- More delivery adapters (email, Teams, Discord).
Orenda is an invitation to reflect, quietly and effectively. Define your detectors, schedule your reflections, and let your teams self-optimize.