A Model Context Protocol (MCP) server that provides tools for interacting with Ollama models. This server enables AI assistants to list, chat with, generate responses from, and manage Ollama models through a standardized protocol.
- Model Management: List, pull, and delete Ollama models
- Chat Interface: Multi-turn conversations with models
- Text Generation: Single-prompt text generation
- Dual Transport: Stdio (local) and HTTP (remote) support
- Railway Ready: Pre-configured for Railway deployment
- Type Safe: Full TypeScript implementation with strict typing
- Node.js 18+
- Ollama installed and running locally
- For Railway deployment: Railway CLI
- Clone and install dependencies:

  ```bash
  git clone <repository-url>
  cd ollama-mcp
  npm install
  ```
- Build the project:

  ```bash
  npm run build
  ```
- Start the server:

  ```bash
  npm start
  ```
Add this to your Cursor MCP configuration (`~/.cursor/mcp/config.json`):

```json
{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["/path/to/ollama-mcp/dist/main.js"]
    }
  }
}
```

Quick setup:

```bash
curl -sSL https://raw.githubusercontent.com/your-repo/ollama-mcp/main/config/mcp.config.json -o ~/.cursor/mcp/config.json
```

The project is structured for maximum readability and maintainability:
```text
src/
├── main.ts            # Main entry point
├── config/            # Configuration management
├── server/            # Core MCP server
├── tools/             # MCP tool implementations
├── transports/        # Communication transports
└── ollama-client.ts   # Ollama API client
docs/                  # Comprehensive documentation
config/                # Configuration files
scripts/               # Deployment scripts
```
See ARCHITECTURE.md for detailed architecture documentation.
| Variable | Description | Default |
|---|---|---|
| `MCP_TRANSPORT` | Transport type (`stdio` or `http`) | `stdio` |
| `OLLAMA_BASE_URL` | Ollama API base URL | `http://localhost:11434` |
| `MCP_HTTP_HOST` | HTTP server host (HTTP mode) | `0.0.0.0` |
| `MCP_HTTP_PORT` | HTTP server port (HTTP mode) | `8080` |
| `MCP_HTTP_ALLOWED_ORIGINS` | CORS allowed origins (HTTP mode) | None |
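As a rough sketch of how these variables map to runtime settings, the helper below resolves each one with the defaults from the table. It is illustrative only; the server's actual config module (`src/config/`) may differ in names and shape.

```typescript
// Illustrative sketch: resolve the environment variables documented above
// into a typed config object, applying the table's defaults.
interface ServerConfig {
  transport: "stdio" | "http";
  ollamaBaseUrl: string;
  httpHost: string;
  httpPort: number;
  allowedOrigins: string[];
}

function loadConfig(env: Record<string, string | undefined>): ServerConfig {
  return {
    // Anything other than "http" falls back to the stdio default.
    transport: env.MCP_TRANSPORT === "http" ? "http" : "stdio",
    ollamaBaseUrl: env.OLLAMA_BASE_URL ?? "http://localhost:11434",
    httpHost: env.MCP_HTTP_HOST ?? "0.0.0.0",
    httpPort: Number(env.MCP_HTTP_PORT ?? 8080),
    // Assumed comma-separated; empty/unset means no origins allowed.
    allowedOrigins: (env.MCP_HTTP_ALLOWED_ORIGINS ?? "")
      .split(",")
      .filter(Boolean),
  };
}
```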
Perfect for local development and direct integration:

```bash
npm start
```

Ideal for remote deployment and web-based clients:

```bash
MCP_TRANSPORT=http npm start
```

- Install Railway CLI:

  ```bash
  npm install -g @railway/cli
  railway login
  ```
- Deploy:

  ```bash
  railway up
  ```
- Add models (optional):

  ```bash
  railway shell
  # Follow instructions in docs/RAILWAY_MODELS_SETUP.md
  ```
The Railway deployment automatically uses HTTP transport and exposes:

- MCP Endpoint: `https://your-app.railway.app/mcp`
- Health Check: `https://your-app.railway.app/healthz`
```bash
# Build the image
npm run docker:build

# Run locally
npm run docker:run

# Deploy to Railway
railway up
```

The server provides 5 MCP tools for Ollama interaction:

- `ollama_list_models` - List available models
- `ollama_chat` - Multi-turn conversations
- `ollama_generate` - Single-prompt generation
- `ollama_pull_model` - Download models
- `ollama_delete_model` - Remove models
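MCP clients invoke these tools via JSON-RPC 2.0 `tools/call` requests. The sketch below builds such a request for `ollama_chat`; the `model`/`messages` argument shape is an assumption based on typical chat interfaces, so treat API.md as the authoritative schema.

```typescript
// Sketch of a JSON-RPC 2.0 tools/call request targeting ollama_chat.
// The argument field names are illustrative; see API.md for the real schema.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildChatRequest(
  id: number,
  model: string,
  userMessage: string
): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: {
      name: "ollama_chat",
      arguments: {
        model,
        // Multi-turn conversations append further role/content entries here.
        messages: [{ role: "user", content: userMessage }],
      },
    },
  };
}
```

In HTTP mode a client would POST this body to the `/mcp` endpoint; in stdio mode it is written to the server's stdin.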
See API.md for detailed API documentation.
```bash
# Test stdio transport
npm start

# Test HTTP transport
MCP_TRANSPORT=http npm start

# Test health check (HTTP mode)
curl http://localhost:8080/healthz
```

```bash
# List available models
ollama list

# Test a model
ollama run llama2 "Hello, how are you?"
```

- Architecture - Detailed system architecture
- API Reference - Complete API documentation
- Railway Setup - Model deployment guide
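The `ollama run` smoke test above exercises the same generation path the server's Ollama client wraps: Ollama's `POST /api/generate` endpoint. A minimal sketch of assembling that request (field names follow Ollama's documented generate API; the helper itself is illustrative):

```typescript
// Sketch: build the URL and body for Ollama's /api/generate endpoint,
// as used for single-prompt generation. Helper name is illustrative.
interface GenerateRequest {
  model: string;
  prompt: string;
  stream: boolean; // false => one JSON response instead of a token stream
}

function buildGenerateRequest(baseUrl: string, model: string, prompt: string) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/api/generate`,
    body: { model, prompt, stream: false } as GenerateRequest,
  };
}
```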
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
MIT License - see LICENSE for details.
"Cannot find module" errors:

```bash
npm install
npm run build
```

Ollama connection issues:

```bash
# Check if Ollama is running
ollama list

# Check Ollama service
ollama serve
```

Railway deployment issues:
```bash
# Check Railway logs
railway logs

# Verify environment variables
railway variables
```

- Check the documentation
- Review troubleshooting guide
- Open an issue on GitHub
Built with ❤️ for the AI community