
Hey, I'm Mohammad

I build production-grade agentic AI systems and data pipelines. I'm currently deepening my skills in self-healing agent workflows, observability infrastructure, and human-in-the-loop systems.

Expertise and interests:

Agent Systems & Orchestration

  • Multi-agent orchestration with LangGraph/Claude Agents SDK for complex workflows: routing between specialized agents, implementing feedback loops, and managing context across agent interactions (a minimal routing sketch follows this list)
  • Self-healing agent pipelines with automated error recovery, retry logic, and fallback strategies
  • Human-in-the-loop (HITL) workflows
  • Agent communication protocols including Model Context Protocol (MCP) for standardized tool integration
  • Real-time monitoring and alerting for LLM operations with latency, cost, and error rate tracking
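
To give a concrete flavor of the orchestration pattern above, here is a toy LangGraph routing sketch. The state fields, node names, and the routing rule are illustrative placeholders, not code from any of my projects.

```python
# Toy LangGraph routing sketch: a router node dispatches each task to a
# specialized agent; the state shape and routing rule are placeholders.
from typing import Literal, TypedDict

from langgraph.graph import END, StateGraph


class AgentState(TypedDict):
    task: str
    result: str


def router(state: AgentState) -> AgentState:
    return state  # classification / context management would happen here


def route(state: AgentState) -> Literal["research", "code"]:
    # hypothetical routing rule between specialized agents
    return "code" if "implement" in state["task"] else "research"


def research_agent(state: AgentState) -> AgentState:
    return {"result": f"research notes for: {state['task']}"}


def code_agent(state: AgentState) -> AgentState:
    return {"result": f"patch draft for: {state['task']}"}


graph = StateGraph(AgentState)
graph.add_node("router", router)
graph.add_node("research", research_agent)
graph.add_node("code", code_agent)
graph.set_entry_point("router")
graph.add_conditional_edges("router", route, {"research": "research", "code": "code"})
graph.add_edge("research", END)
graph.add_edge("code", END)

app = graph.compile()
print(app.invoke({"task": "implement retry logic", "result": ""}))
```

Self-healing behavior builds on the same structure: a conditional edge after each agent can route failures to a fallback or retry node instead of END.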

Scalable AI Infrastructure

  • Batch inference pipelines for multimodal document understanding at scale
  • Hybrid search systems combining semantic and lexical retrieval with OpenSearch
  • Production FastAPI services with async patterns for high-concurrency LLM operations (see the sketch after this list)
  • Cost-aware architecture with caching, prompt optimization, and intelligent model routing
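
A sketch of the async pattern I mean, where the endpoint name, concurrency limit, and call_llm placeholder are assumptions for illustration:

```python
# Illustrative FastAPI endpoint: a semaphore bounds concurrent LLM calls and a
# simple retry loop with exponential backoff handles transient upstream failures.
import asyncio

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
llm_slots = asyncio.Semaphore(8)  # cap concurrent upstream LLM calls


class Query(BaseModel):
    prompt: str


async def call_llm(prompt: str) -> str:
    # placeholder for the real provider call (e.g. Bedrock or OpenAI)
    await asyncio.sleep(0.05)
    return f"answer to: {prompt}"


@app.post("/generate")
async def generate(query: Query) -> dict:
    async with llm_slots:  # backpressure under high concurrency
        for attempt in range(3):  # naive retry with exponential backoff
            try:
                return {"result": await call_llm(query.prompt)}
            except Exception:
                await asyncio.sleep(2**attempt)
    raise HTTPException(status_code=503, detail="upstream LLM unavailable")
```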

Stack: Python, FastAPI, LangGraph, Claude/OpenAI Agents SDK, AWS (Bedrock, EKS, SageMaker, Step Functions, DynamoDB), OpenSearch, Redis, observability tools


📍 Berlin, Germany | 🌐 Open to collaborations


Differentiators

Reliability Engineering for AI: Design agent systems with self-healing capabilities, automated recovery workflows, and comprehensive observability—no black-box deployments

Evaluation-Driven Development: Every AI feature ships with quantitative evals, regression tests, and continuous monitoring—deployed systems improve over time through structured feedback
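
A toy example of what such an eval gate looks like in practice; the golden set, scoring rule, and threshold below are made up for illustration:

```python
# Toy regression-style eval: score the system against a small golden set and
# fail CI when average quality drops below a tracked threshold.
GOLDEN_SET = [
    {"prompt": "Summarize: the meeting moved to Friday.", "expected": "friday"},
    {"prompt": "Summarize: invoice 42 is overdue.", "expected": "overdue"},
]


def generate(prompt: str) -> str:
    # placeholder for the deployed model or agent under test
    return prompt.lower()


def keyword_score(output: str, expected: str) -> float:
    return 1.0 if expected in output else 0.0


def test_summary_quality_does_not_regress():
    scores = [
        keyword_score(generate(case["prompt"]), case["expected"])
        for case in GOLDEN_SET
    ]
    assert sum(scores) / len(scores) >= 0.9  # threshold tracked over releases
```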


Let's Connect

Building agentic systems that need production-grade reliability, observability, and human oversight? Let's discuss architecture for systems that don't just work on demo day—they scale, self-heal, and improve continuously.
