❓ Question
I am currently using Giskard, specifically the RAGET toolkit, to evaluate our chatbot. By default, Giskard uses OpenAI's GPT-4 to judge our model's output. However, I would like to replace GPT-4 with an open-source LLM-as-a-judge served locally through Ollama. I have already set up the Ollama client with the code below (the one from the Giskard documentation):
```python
import giskard

# Default api_base for a local Ollama server
api_base = "http://localhost:11434"

# Route Giskard's LLM and embedding calls to local Ollama models
giskard.llm.set_llm_model("ollama/llama3.1", disable_structured_output=True, api_base=api_base)
giskard.llm.set_embedding_model("ollama/nomic-embed-text", api_base=api_base)
```
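As a sanity check, I confirmed the Ollama endpoint responds before wiring it into Giskard. This is just a minimal sketch, assuming Giskard routes the `ollama/...` model strings through LiteLLM (which the prefix convention suggests); the model names are the ones from my setup above:

```python
from litellm import completion, embedding

api_base = "http://localhost:11434"

# One-off generation against the local Ollama server
resp = completion(
    model="ollama/llama3.1",
    api_base=api_base,
    messages=[{"role": "user", "content": "Reply with the single word: pong"}],
)
print(resp.choices[0].message.content)

# Same check for the embedding model
emb = embedding(model="ollama/nomic-embed-text", api_base=api_base, input=["hello"])
print(len(emb.data[0]["embedding"]))  # expect the model's embedding dimension
```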
Additionally, for confidentiality reasons, I want every LLM API call that would normally go to a remote provider to go to a local Ollama model instead. Since I have set up the Ollama client locally (as shown above), I would like to know whether this setup replaces all external LLM API calls with local ones, everywhere Giskard relies on an LLM.
Below are my questions:
- Once the Ollama client is set up, does it automatically replace OpenAI GPT-4 as the LLM-as-a-judge, or is there additional configuration required?
- Will the Ollama client setup replace all external API calls and use the local LLM instead? If not, are there additional configurations needed to ensure only local LLMs are used for all relevant tasks?
I know the answer to the second question will also address the first one, but I would still like to ask the first one specifically 😄
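In case it is useful context: my current plan to verify that nothing reaches a remote provider is to remove the provider API keys from the environment before running RAGET, so any component that still defaults to OpenAI fails loudly instead of silently calling out. A minimal sketch of what I mean (the environment-variable names are the standard OpenAI ones; whether this catches every code path in Giskard is exactly what I am unsure about):

```python
import os
import giskard

# Remove remote-provider credentials so any leftover OpenAI call
# raises an authentication error instead of silently going out.
for var in ("OPENAI_API_KEY", "OPENAI_API_BASE"):
    os.environ.pop(var, None)

api_base = "http://localhost:11434"
giskard.llm.set_llm_model("ollama/llama3.1", disable_structured_output=True, api_base=api_base)
giskard.llm.set_embedding_model("ollama/nomic-embed-text", api_base=api_base)

# ...then run the RAGET evaluation as usual and watch for auth errors.
```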