Merged
16 changes: 12 additions & 4 deletions README.md
@@ -54,18 +54,26 @@ While the OpenAI API has become the de facto standard for LLM provider interfaces
- **[Framework-specific solutions](https://github.com/agno-agi/agno/tree/main/libs/agno/agno/models)**: Some agent frameworks either depend on LiteLLM or implement their own provider integrations, creating fragmentation
- **[Proxy-only solutions](https://openrouter.ai/)**: Solutions like [OpenRouter](https://openrouter.ai/) and [Portkey](https://github.com/Portkey-AI/portkey-python-sdk) require a hosted proxy to sit between your code and the LLM provider.

## Demos

Try `any-llm` in action with our interactive demos:

### 💬 Chat Demo
**[📂 Run the Chat Demo](./demos/chat/README.md)**

An interactive chat interface showcasing streaming completions and provider switching:
- Real-time streaming responses with character-by-character display
- Support for multiple LLM providers with easy switching
- Collapsible "thinking" content display for supported models
- Clean chat interface with auto-scrolling

### 🔍 Model Finder Demo
**[📂 Run the Model Finder Demo](./demos/finder/README.md)**

A model discovery tool that helps you find AI models across different providers:
- Search and filter models across all your configured providers
- Provider status dashboard showing which APIs you have configured

## Quickstart

### Requirements
70 changes: 70 additions & 0 deletions demos/finder/.gitignore
@@ -0,0 +1,70 @@
# Dependencies
node_modules/
backend/__pycache__/
backend/.venv/

# Production builds
frontend/build/
backend/dist/

# Environment variables
.env
.env.local
.env.development.local
.env.test.local
.env.production.local

# Logs
npm-debug.log*
yarn-debug.log*
yarn-error.log*
*.log

# Runtime data
pids
*.pid
*.seed
*.pid.lock

# Coverage directory used by tools like istanbul
coverage/

# nyc test coverage
.nyc_output

# IDE
.vscode/
.idea/
*.swp
*.swo

# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db

# Python
*.pyc
*.pyo
*.pyd
__pycache__/
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
100 changes: 100 additions & 0 deletions demos/finder/README.md
@@ -0,0 +1,100 @@
# any-llm Model Finder

A demo application that searches for AI models across different providers so you can see where specific models are available. It shows which providers you have configured with API keys and lets you search for models across all of them.

![Model Finder Demo](./assets/model_finder_demo.gif)

## Features

- **Provider Status Dashboard**: See which providers you have API keys configured for
- **Model Search**: Search for specific models across all configured providers
- **Browse All Models**: View all available models from your configured providers
- **Real-time Results**: Get instant feedback on model availability
- **Provider Error Reporting**: See which providers have issues and why

## Setup

### Backend (FastAPI)

1. Navigate to the backend directory:
```bash
cd backend
```

2. Install dependencies with uv:
```bash
uv sync
```

3. Set up provider environment variables for the providers you want to search:
```bash
# Example API keys: export the ones you want to use, or rely on values already set in your shell
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export GOOGLE_API_KEY="your-google-api-key"
export MISTRAL_API_KEY="your-mistral-api-key"
export GROQ_API_KEY="your-groq-api-key"
# ... add other provider API keys as needed
```

The application will automatically detect which providers you have configured and only search those providers. See the [any-llm providers documentation](https://mozilla-ai.github.io/any-llm/providers/) to understand what environment variables are expected for each provider.
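
This environment-based detection can be pictured as a simple lookup (a minimal sketch only — the provider-to-variable mapping below is a hypothetical illustration, and the backend's actual detection logic may differ; the authoritative variable names are in the any-llm providers documentation linked above):

```python
import os

# Hypothetical mapping: provider name -> API-key environment variable.
# Consult the any-llm providers docs for the real variable names.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
    "mistral": "MISTRAL_API_KEY",
    "groq": "GROQ_API_KEY",
}

def configured_providers(env=os.environ) -> list[str]:
    """Return the providers whose API-key variable is set and non-empty."""
    return [name for name, var in PROVIDER_ENV_VARS.items() if env.get(var)]
```

Because the variables are read from the server process's environment, this is also why the backend must be restarted after you export a new key.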

4. Run the server:
```bash
uv run python main.py
```

The API will be available at `http://localhost:8000`.

### Frontend (React)

1. Navigate to the frontend directory:
```bash
cd frontend
```

2. Install dependencies:
```bash
npm install
```

3. Start the development server:
```bash
npm start
```

The frontend will be available at `http://localhost:3000`.

## Usage

1. **Check Provider Status**: The sidebar shows all available providers and indicates which ones you have configured with API keys
2. **Search for Models**: Enter a search term (like "gpt-4", "claude", "llama") to find models matching that pattern
3. **Browse All Models**: Click "Browse All Models" to see every model available from your configured providers
4. **View Results**: Models are displayed with their provider, name, and additional metadata when available
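
The pattern matching behind model search can be sketched as a case-insensitive substring filter (an illustrative sketch only — the backend's actual matching logic may differ; the `"name"` and `"provider"` keys mirror how results are displayed but are assumptions about the result shape):

```python
def match_models(models: list[dict], query: str) -> list[dict]:
    """Return model entries whose name contains the query, ignoring case."""
    q = query.lower()
    return [m for m in models if q in m["name"].lower()]
```

Under this kind of matching, a broad term like "gpt" matches both "gpt-4" and "gpt-4-turbo", which is why broader search terms return more results.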

## API Endpoints

The backend provides these endpoints:

- `GET /provider-status` - Get the status of all providers (API key configured, supports list_models, etc.)
- `POST /search-models` - Search for models matching a query across configured providers
- `GET /all-models` - Get all models from all configured providers
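
With the backend running, these endpoints can be exercised from a small standard-library client; a minimal sketch (the `{"query": ...}` request body is an assumption — check the backend's request model in `main.py` for the exact field name):

```python
import json
from urllib import request

BASE_URL = "http://localhost:8000"  # default address of the FastAPI backend

def get_json(path: str) -> dict:
    """GET a backend endpoint (e.g. /provider-status) and parse the JSON body."""
    with request.urlopen(f"{BASE_URL}{path}") as resp:
        return json.load(resp)

def search_models(query: str) -> dict:
    """POST a search query to /search-models and parse the JSON body."""
    payload = json.dumps({"query": query}).encode("utf-8")
    req = request.Request(
        f"{BASE_URL}/search-models",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `get_json("/provider-status")` would return the same provider dashboard data the sidebar renders.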

## Troubleshooting

### No providers configured
If you see "No providers are configured with API keys":
1. Make sure you've set the required environment variables
2. Restart the backend server after setting environment variables
3. Check the provider status panel to see which specific environment variables are needed

### Provider errors
The application will show provider-specific errors in the results. Common issues:
- **API key not configured**: Set the required environment variable
- **Missing packages**: Install additional dependencies with `pip install 'any-llm-sdk[provider_name]'`
- **API errors**: Check your API key validity and provider service status

### Search returns no results
- Try broader search terms (e.g., "gpt" instead of "gpt-4-turbo-specific-version")
- Check that you have providers configured that actually offer the models you're searching for
- Some providers may have rate limiting - wait a moment and try again
Binary file added demos/finder/assets/model_finder_demo.gif