diff --git a/docs/src/pages/docs/llama-cpp-server.mdx b/docs/src/pages/docs/llama-cpp-server.mdx
index d89f481397..3a3d24c46b 100644
--- a/docs/src/pages/docs/llama-cpp-server.mdx
+++ b/docs/src/pages/docs/llama-cpp-server.mdx
@@ -24,7 +24,7 @@
 import { Settings } from 'lucide-react'
 
 `llama.cpp` is the core **inference engine** Jan uses to run AI models locally on your computer. This section covers the settings for the engine itself, which control *how* a model processes information on your hardware.
 
-Looking for API server settings (like port, host, CORS)? They have been moved to the dedicated [**Local API Server**](/docs/local-server/api-server) page.
+Looking for API server settings (like port, host, CORS)? They have been moved to the dedicated [**Local API Server**](/docs/api-server) page.
 
 ## Accessing Engine Settings
diff --git a/docs/src/pages/docs/server-settings.mdx b/docs/src/pages/docs/server-settings.mdx
index 80d2cc0b2b..b352293e53 100644
--- a/docs/src/pages/docs/server-settings.mdx
+++ b/docs/src/pages/docs/server-settings.mdx
@@ -174,7 +174,7 @@
 This includes configuration for:
 - CORS (Cross-Origin Resource Sharing)
 - Verbose Logging
 
-[**Go to Local API Server Settings →**](/docs/local-server/api-server)
+[**Go to Local API Server Settings →**](/docs/api-server)
 
 ## Emergency Options