39 changes: 39 additions & 0 deletions apps/kilocode-docs/docs/providers/cerebras.md
@@ -0,0 +1,39 @@
---
sidebar_label: Cerebras
---

# Using Cerebras With Kilo Code

Cerebras is known for ultra-fast AI inference powered by its CS-3 system, built around one of the largest AI accelerator chips in the world. The platform delivers exceptional inference speeds for large language models, making it well suited to interactive development workflows.

**Website:** [https://cerebras.ai/](https://cerebras.ai/)

## Getting an API Key

1. **Sign Up/Sign In:** Go to the [Cerebras Cloud Platform](https://cloud.cerebras.ai/). Create an account or sign in.
2. **Navigate to API Keys:** Access the API Keys section in your account dashboard.
3. **Create a Key:** Click to generate a new API key. Give it a descriptive name (e.g., "Kilo Code").
4. **Copy the Key:** **Important:** Copy the API key _immediately_. Store it securely.

## Supported Models

Kilo Code supports the following Cerebras models:

- `gpt-oss-120b` (Default) – High-performance open-source model optimized for fast inference
- `zai-glm-4.6` – Advanced GLM model with enhanced reasoning capabilities

Refer to the [Cerebras documentation](https://docs.cerebras.ai/) for detailed information on model capabilities and performance characteristics.
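Beyond using these models through Kilo Code, you can call them directly. As a hedged sketch (assuming Cerebras exposes an OpenAI-style chat completions endpoint, which you should verify against their docs), a request body for the default model can be built like this:

```python
import json
import os

# Assumed endpoint; confirm the exact URL in the Cerebras documentation.
API_URL = "https://api.cerebras.ai/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-oss-120b") -> dict:
    """Build an OpenAI-style chat completion payload for a Cerebras model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Explain recursion in one sentence.")
# The API key would be sent as a Bearer token in the Authorization header.
print(json.dumps(payload, indent=2))
```

You would POST this payload with your API key in an `Authorization: Bearer <key>` header; the endpoint URL and header convention above are assumptions based on common OpenAI-compatible APIs.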

## Configuration in Kilo Code

1. **Open Kilo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Kilo Code panel.
2. **Select Provider:** Choose "Cerebras" from the "API Provider" dropdown.
3. **Enter API Key:** Paste your Cerebras API key into the "Cerebras API Key" field.
4. **Select Model:** Choose your desired model from the "Model" dropdown.

## Tips and Notes

- **Inference Speed:** Cerebras models deliver some of the fastest inference speeds available, reducing wait times during development.
- **Open Source Models:** Many Cerebras models are based on open-source architectures, optimized for their custom hardware.
- **Cost Efficiency:** Fast inference can lead to better cost efficiency for interactive use cases.
- **Pricing:** Refer to the Cerebras platform for current pricing information and available plans.
35 changes: 35 additions & 0 deletions apps/kilocode-docs/docs/providers/inception.md
@@ -0,0 +1,35 @@
---
sidebar_label: Inception
---

# Using Inception With Kilo Code

Inception provides access to cutting-edge AI models with a focus on performance and reliability. Their infrastructure is designed for enterprise-grade applications requiring consistent, high-quality outputs.

**Website:** [https://inception.ai](https://inception.ai)

## Getting an API Key

1. **Sign Up/Sign In:** Go to the [Inception Platform](https://platform.inception.ai). Create an account or sign in.
2. **Navigate to API Keys:** Access the API Keys section in your account settings.
3. **Create a Key:** Click "Create new API key". Give your key a descriptive name (e.g., "Kilo Code").
4. **Copy the Key:** **Important:** Copy the API key _immediately_. You will not be able to see it again. Store it securely.
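A common way to store the key securely is in an environment variable rather than hardcoding it. The sketch below is illustrative only: the variable name `INCEPTION_API_KEY` is an assumption, not an Inception convention.

```python
import os

def load_api_key(env_var: str = "INCEPTION_API_KEY") -> str:
    """Read an API key from the environment, failing loudly if it is unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before configuring the provider")
    return key

# Placeholder value so the sketch runs standalone; in practice the key
# would be exported in your shell profile or a secrets manager.
os.environ.setdefault("INCEPTION_API_KEY", "demo-key")
print(load_api_key())
```

Failing loudly on a missing key makes misconfiguration obvious at startup instead of surfacing later as an opaque authentication error.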

## Supported Models

Kilo Code supports Inception's available models. Model selection and capabilities may vary based on your account tier.

Refer to the [Inception documentation](https://docs.inception.ai) for the most up-to-date list of supported models and their specific capabilities.

## Configuration in Kilo Code

1. **Open Kilo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Kilo Code panel.
2. **Select Provider:** Choose "Inception" from the "API Provider" dropdown.
3. **Enter API Key:** Paste your Inception API key into the "Inception API Key" field.
4. **Select Model:** Choose your desired model from the "Model" dropdown.

## Tips and Notes

- **Enterprise Focus:** Inception is designed for production-grade AI applications with emphasis on reliability and consistency.
- **Pricing:** Refer to the Inception platform for current pricing details and available subscription options.
- **Support:** Enterprise customers have access to dedicated support channels for technical assistance.
41 changes: 41 additions & 0 deletions apps/kilocode-docs/docs/providers/moonshot.md
@@ -0,0 +1,41 @@
---
sidebar_label: Moonshot.ai
---

# Using Moonshot.ai With Kilo Code

Moonshot.ai is a Chinese AI company known for its **Kimi** models, which feature ultra-long context windows (up to 200K tokens) and advanced reasoning capabilities. Its K2-Thinking model delivers extended thinking and problem-solving abilities.

**Website:** [https://www.moonshot.cn/](https://www.moonshot.cn/)

## Getting an API Key

1. **Sign Up/Sign In:** Go to the [Moonshot.ai Platform](https://platform.moonshot.cn/). Create an account or sign in.
2. **Navigate to API Keys:** Access the API Keys section in your account dashboard.
3. **Create a Key:** Click to generate a new API key. Give it a descriptive name (e.g., "Kilo Code").
4. **Copy the Key:** **Important:** Copy the API key _immediately_. Store it securely.

## Supported Models

Kilo Code supports the following Moonshot.ai models:

- `moonshot-v1-8k` – General-purpose model with 8K context
- `moonshot-v1-32k` – Extended context model with 32K tokens
- `moonshot-v1-128k` – Long-context model with 128K tokens
- `kimi-k2-thinking` – Advanced reasoning model with extended thinking capabilities

Refer to the [Moonshot.ai documentation](https://platform.moonshot.cn/docs) for detailed information on each model's capabilities and pricing.
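Since the `moonshot-v1-*` models differ mainly in context length, one practical pattern is to pick the smallest model whose window fits your prompt. The mapping below mirrors the list above but is illustrative; treat the exact token limits as assumptions to verify against the Moonshot.ai docs.

```python
# Context windows inferred from the model names above (illustrative).
MODEL_CONTEXT = {
    "moonshot-v1-8k": 8_000,
    "moonshot-v1-32k": 32_000,
    "moonshot-v1-128k": 128_000,
}

def pick_model(required_tokens: int) -> str:
    """Return the cheapest-by-context model whose window fits the prompt."""
    for name, limit in sorted(MODEL_CONTEXT.items(), key=lambda kv: kv[1]):
        if required_tokens <= limit:
            return name
    raise ValueError(f"{required_tokens} tokens exceeds all context windows")

print(pick_model(20_000))
```

Smaller-context variants are typically cheaper per token, so routing short prompts away from the 128K model can reduce cost without changing output quality.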

## Configuration in Kilo Code

1. **Open Kilo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Kilo Code panel.
2. **Select Provider:** Choose "Moonshot.ai" from the "API Provider" dropdown.
3. **Enter API Key:** Paste your Moonshot.ai API key into the "Moonshot.ai API Key" field.
4. **Select Model:** Choose your desired model from the "Model" dropdown.

## Tips and Notes

- **Ultra-Long Context:** Kimi models excel at handling large codebases and complex projects with their extended context windows.
- **Reasoning Capabilities:** The K2-Thinking variant provides enhanced problem-solving through extended reasoning chains.
- **Language Support:** Kimi models have strong support for both English and Chinese languages.
- **Pricing:** Refer to the Moonshot.ai platform for current pricing information on different models.
11 changes: 7 additions & 4 deletions apps/kilocode-docs/sidebars.ts
@@ -53,28 +53,31 @@ const sidebars: SidebarsConfig = {
 items: [
 "providers/anthropic",
 "providers/bedrock",
+"providers/cerebras", // kilocode_change
 "providers/chutes-ai",
 "providers/claude-code",
 "providers/deepseek",
 "providers/fireworks",
-"providers/synthetic", // kilocode_change
-"providers/vertex",
-"providers/glama",
 "providers/gemini",
+"providers/glama",
 "providers/groq",
 "providers/human-relay",
+"providers/inception", // kilocode_change
 "providers/lmstudio",
-"providers/minimax",
+"providers/minimax", // kilocode_change (M2 model update)
 "providers/mistral",
+"providers/moonshot", // kilocode_change
 "providers/ollama",
 "providers/openai",
 "providers/openai-compatible",
 "providers/openrouter",
 "providers/ovhcloud", // kilocode_change
 "providers/requesty",
+"providers/synthetic", // kilocode_change
 "providers/unbound",
 "providers/v0",
 "providers/vercel-ai-gateway",
+"providers/vertex",
 "providers/virtual-quota-fallback",
 "providers/vscode-lm",
 "providers/xai",