Agents making my life easier.
Core architecture components
- Foundation Model: Llama 3 as the base LLM
  - Powers all agent reasoning capabilities
  - Can run locally or via API, depending on your compute resources
- LlamaIndex Framework
  - Provides agent structuring, RAG capabilities, and tool integration
  - Handles context management and query routing
- Specialized Agents
  - Planning Agent: Coordinates workflows and breaks down tasks
  - Calendar Agent: Manages scheduling and time-based activities
  - Email Agent: Handles email drafting, summarization, and organization
  - Research Agent: Gathers information from various sources
  - Memory Agent: Retrieves past interactions and maintains context
  - Task Agent: Executes specific actions and reports results
- Shared Memory & Orchestration
  - Maintains state between agent interactions
  - Routes messages between specialized agents
  - Tracks task completion status
- Chroma Vector Database
  - Stores embeddings for documents, past interactions, and knowledge
  - Enables semantic search for relevant context
- External API Connections
  - Calendar services
  - Email providers
  - Web search capabilities
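The shared memory and orchestration layer above can be sketched in plain Python. This is a hypothetical illustration, not LlamaIndex's actual agent API: the `Orchestrator`, `SharedMemory`, and the toy agent handlers are names invented here, and real agents would call the LLM and external services instead of returning canned strings.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class SharedMemory:
    """State maintained between agent interactions (hypothetical sketch)."""
    history: List[str] = field(default_factory=list)   # past interactions
    status: Dict[str, str] = field(default_factory=dict)  # task -> status

class Orchestrator:
    """Routes tasks to specialized agents and tracks completion status."""
    def __init__(self, memory: SharedMemory):
        self.memory = memory
        self.agents: Dict[str, Callable[[str, SharedMemory], str]] = {}

    def register(self, name: str, handler: Callable[[str, SharedMemory], str]):
        self.agents[name] = handler

    def dispatch(self, agent: str, task: str) -> str:
        # Mark the task in shared state, run the agent, record the result.
        self.memory.status[task] = "running"
        result = self.agents[agent](task, self.memory)
        self.memory.history.append(f"{agent}: {result}")
        self.memory.status[task] = "done"
        return result

# Toy stand-ins for the specialized agents; real ones would use the LLM
# plus the calendar/email/search APIs listed above.
def calendar_agent(task: str, mem: SharedMemory) -> str:
    return f"scheduled: {task}"

def email_agent(task: str, mem: SharedMemory) -> str:
    return f"drafted email for: {task}"

orch = Orchestrator(SharedMemory())
orch.register("calendar", calendar_agent)
orch.register("email", email_agent)
print(orch.dispatch("calendar", "team sync Friday 10am"))
```

In a real build, `dispatch` would typically be driven by the Planning Agent's task breakdown rather than called directly.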
To run locally with Ollama:

- Install Ollama from https://ollama.com/download, or on macOS/Linux via:

  ```shell
  curl -fsSL https://ollama.com/install.sh | sh
  ```

- Pull the llama3 (or another) model:

  ```shell
  ollama pull llama3:8b
  ```

- Start the Ollama server:

  ```shell
  ollama serve
  ```
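Once `ollama serve` is running, the agents can talk to the model over Ollama's local REST API (port 11434, `/api/generate`). A minimal sketch using only the standard library; `build_payload` and `generate` are helper names invented here:

```python
import json
from urllib import request

# Ollama's local REST endpoint once `ollama serve` is running.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3:8b") -> dict:
    # "stream": False asks for a single JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama3:8b") -> str:
    """POST a prompt to the local Ollama server and return the completion."""
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = request.Request(OLLAMA_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the server from the steps above to be running):
#   print(generate("Summarize my unread email in one sentence."))
```

In the full system, LlamaIndex would wrap this call behind its Ollama LLM integration, but hitting the raw endpoint is a quick way to confirm the server and model are working.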