Setting up a remote connection with Ollama and VS Code = Error Code #8857
Firedogs2x asked this question in Help (unanswered).
I have Ollama running on my desktop, while VS Code is on my laptop. Both are hardwired to my local network.
Both computers are running Windows 10.
VS Code version: 1.106.2
Continue: 1.2.11
Ollama: 0.13
Through testing I am able to verify that my laptop can see Ollama.
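For reference, a check along these lines from the laptop (curl.exe ships with Windows 10; the address is redacted here as elsewhere in this post) lists the installed models and confirms the server responds:

    curl.exe http://192.xxx.xxx.x:11434/api/tags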
I have tried to follow the Continue documentation and have also searched the web, but I cannot find a detailed how-to for setting up a remote connection.
I am using the local config.yaml:
name: Local Config
version: 1.0.0
schema: v1
models:
  - provider: ollama
    model: llama3.1:8b
    roles:
      - chat
  - provider: ollama
    model: qwen2.5-coder:1.5b-base
    roles:
      - autocomplete
  - provider: ollama
    model: nomic-embed-text:latest
    roles:
      - embed
  - provider: ollama
    model: GPT-OSS:120B
    apiBase: http://192.xxx.xxx.x:11434/v1
    apiKey: dummy
    roles:
      - chat
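For completeness, this is the shape of a direct request to Ollama's native chat endpoint from the laptop (run from a Command Prompt; the one-message payload is just an example):

    curl.exe http://192.xxx.xxx.x:11434/api/chat -d "{\"model\": \"GPT-OSS:120B\", \"messages\": [{\"role\": \"user\", \"content\": \"hello\"}], \"stream\": false}"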
When I attempt to use the agent with GPT-OSS:120B I get the following error:
request to http://192.xxx.xxx.x:11434/v1/api/chat failed, reason: read ECONNRESET
Note that the failing URL is my apiBase with /api/chat appended, so the request ends up at /v1/api/chat. I also tried without the apiKey; no change. I am at a loss as to how I should proceed.
If anybody has any ideas, I would appreciate the help.
Thank you.