This directory contains the YAML-based configuration system for ROMA-DSPy, built with OmegaConf for configuration loading, merging, and interpolation, and Pydantic for validation.
```python
from roma_dspy.config import load_config

# Load with defaults
config = load_config()

# Load with a profile
config = load_config(profile="lightweight")

# Load with overrides
config = load_config(overrides=["agents.executor.llm.temperature=0.5"])
```

Default settings that override Pydantic defaults:
- Project metadata
- Agent configuration overrides
- Runtime settings
- Resilience parameters
Delta configurations that overlay the base for specific use cases:
- `lightweight.yaml`: Reduced resource usage, lower token limits
- `tool_enabled.yaml`: Prepared for future tool implementation
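For illustration, a delta profile only declares the keys it changes relative to the base; a sketch of what a profile like `lightweight.yaml` could contain (the specific values here are assumptions, not the shipped file):

```yaml
# Illustrative sketch of a delta profile -- values are assumptions
agents:
  executor:
    llm:
      max_tokens: 1000   # lower token limit than the base config
runtime:
  max_concurrency: 2     # reduced resource usage
```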
```yaml
agents:
  executor:
    llm:
      model: "gpt-4o-mini"
      temperature: 0.7
      max_tokens: 2000
      timeout: 30
      api_key: ${oc.env:OPENAI_API_KEY}  # Environment variable
```

```yaml
agents:
  executor:
    prediction_strategy: "chain_of_thought"
    tools: []
    enabled: true
    agent_config:        # Agent business logic parameters
      max_subtasks: 10
    strategy_config: {}  # Prediction strategy algorithm parameters
```

```yaml
runtime:
  max_concurrency: 5
  timeout: 30
  verbose: ${oc.env:ROMA_VERBOSE,false}
  cache_dir: ".cache/dspy"
```

```yaml
resilience:
  max_retries: 3
  retry_delay: 1.0
  circuit_breaker_threshold: 5
  circuit_breaker_timeout: 60
```

Later sources override earlier ones:
- Pydantic defaults (in schema classes)
- Base YAML (`defaults/config.yaml`)
- Profile YAML (`profiles/{profile}.yaml`)
- Override strings (`["key=value"]`)
- Environment variables (`ROMA_*`)
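The precedence chain can be sketched with a plain dictionary deep-merge; this is a simplified stand-in for the OmegaConf merge the system actually performs, and `deep_merge` plus the sample values are illustrative:

```python
def deep_merge(base: dict, overlay: dict) -> dict:
    """Recursively merge overlay into base; overlay values win (sketch)."""
    result = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

# Hypothetical layers, merged in precedence order: defaults < profile < override
defaults = {"agents": {"executor": {"llm": {"temperature": 0.7, "max_tokens": 2000}}}}
profile = {"agents": {"executor": {"llm": {"max_tokens": 1000}}}}
override = {"agents": {"executor": {"llm": {"temperature": 0.5}}}}

merged = deep_merge(deep_merge(defaults, profile), override)
print(merged["agents"]["executor"]["llm"])
# {'temperature': 0.5, 'max_tokens': 1000}
```

Each later layer only needs to supply the keys it changes; untouched keys fall through from earlier layers.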
- Prefix: `ROMA_`
- Nested keys: double underscore (`__`)
- Example: `ROMA_AGENTS__EXECUTOR__LLM__TEMPERATURE=0.5`
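A hypothetical helper shows how this naming convention maps to nested config keys (`env_to_dotted` is illustrative, not part of the library):

```python
def env_to_dotted(name: str, prefix: str = "ROMA_") -> str:
    """Translate a ROMA_* env var name to a dotted config key (sketch)."""
    key = name[len(prefix):] if name.startswith(prefix) else name
    # Double underscores separate nesting levels; single underscores stay in the key
    return key.lower().replace("__", ".")

print(env_to_dotted("ROMA_AGENTS__EXECUTOR__LLM__TEMPERATURE"))
# agents.executor.llm.temperature
```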
```bash
# API keys
export OPENAI_API_KEY="your-key"
export FIREWORKS_API_KEY="your-key"

# Runtime settings
export ROMA_VERBOSE=true
export ROMA_MAX_RETRIES=5
export ROMA_CACHE_DIR="/custom/cache"

# Agent settings
export ROMA_AGENTS__EXECUTOR__LLM__TEMPERATURE=0.3
```

Create `profiles/my_profile.yaml`:
```yaml
# My custom profile
agents:
  executor:
    llm:
      temperature: 0.1
    agent_config:
      max_iterations: 20
runtime:
  max_concurrency: 10
```

```python
config = load_config(profile="my_profile")
```

```yaml
# Variable interpolation
base_timeout: 30
runtime:
  timeout: ${base_timeout}

# Environment variable with default
cache_dir: ${oc.env:ROMA_CACHE_DIR,.cache/dspy}
```

The ConfigManager automatically caches loaded configurations for performance:
```python
from roma_dspy.config import ConfigManager

manager = ConfigManager()
config1 = manager.load_config()  # Loads from file
config2 = manager.load_config()  # Uses cache
manager.clear_cache()            # Clears cache
```

The system provides two-stage validation:
- OmegaConf: YAML structure and type checking
- Pydantic: Business logic validation
Example validations:
- Temperature must be between 0.0 and 2.0
- max_tokens must be between 1 and 100,000
- Tool-strategy compatibility checking
- Timeout consistency validation
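As a sketch of the kind of bounds the Pydantic layer enforces, using a plain dataclass stand-in (`LLMSettings` is illustrative, not the actual `LLMConfig` schema):

```python
from dataclasses import dataclass

@dataclass
class LLMSettings:
    """Simplified stand-in for the Pydantic LLMConfig constraints (illustrative)."""
    temperature: float = 0.7
    max_tokens: int = 2000

    def __post_init__(self):
        # Mirrors the documented bounds: 0.0 <= temperature <= 2.0
        if not 0.0 <= self.temperature <= 2.0:
            raise ValueError("temperature must be between 0.0 and 2.0")
        # and 1 <= max_tokens <= 100,000
        if not 1 <= self.max_tokens <= 100_000:
            raise ValueError("max_tokens must be between 1 and 100,000")

LLMSettings(temperature=0.5)   # within bounds, constructs fine
try:
    LLMSettings(temperature=3.0)
except ValueError as err:
    print(err)  # temperature must be between 0.0 and 2.0
```

In the real system these checks live in Pydantic validators, so invalid values are rejected at load time rather than at first use.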
```python
from roma_dspy.config import load_config
from roma_dspy.core.modules import Executor

# Load configuration
config = load_config(profile="lightweight")

# Create a module with config (MySignature: your own signature class)
executor = Executor(
    signature=MySignature,
    config=config.agents.executor
)
```

```python
from roma_dspy.core.engine.solve import RecursiveSolver

# Create solver with config
solver = RecursiveSolver(config=config)
result = solver.solve("Complex task")
```

- `defaults/config.yaml` - Base configuration overrides
- `profiles/lightweight.yaml` - Minimal resource usage
- `profiles/tool_enabled.yaml` - Tool-ready configuration
- Use profiles for different deployment environments
- Use environment variables for secrets and environment-specific settings
- Use override strings for quick testing and experimentation
- Use the base config for organization-wide defaults
- Keep `agent_config` and `strategy_config` separate for proper parameter isolation
- OmegaConf type errors: check YAML syntax and avoid passing Pydantic `Field` objects
- Validation errors: review Pydantic validators and constraints
- Missing profiles: ensure profile files exist in the `profiles/` directory
- Environment variables: use the correct naming convention with the `ROMA_` prefix
```python
from omegaconf import OmegaConf

# Enable verbose logging
config = load_config(overrides=["runtime.verbose=true"])

# Check the resolved configuration
print(OmegaConf.to_yaml(config))

# Validate specific sections
from roma_dspy.config.schemas import LLMConfig

llm_config = LLMConfig(**config.agents.executor.llm)
```