peterruler/tools-chat
Llama 3.1:8B Chainlit Chat

An intelligent chat assistant based on the Llama 3.1:8B model, with web search and Python code execution.

Features

  • Local LLM: Uses the llama3.1:8b model via Ollama
  • Web Search: Integrated DuckDuckGo search for up-to-date information
  • Python REPL: Direct Python code execution for calculations and analyses
  • User-Friendly Interface: Chainlit-based web interface

Prerequisites

Before you begin, ensure that Ollama is installed and running.

  1. Install Ollama: Visit ollama.com and download the version for your operating system.

  2. Download the Llama 3.1:8B Model: Open your terminal and run the following command to download the specific model:

    ollama pull llama3.1:8b

    Ensure the Ollama service is running in the background before starting the application.
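You can also verify from Python that the Ollama service is reachable and the model is pulled. This is a minimal sketch (not part of the repository) that queries Ollama's `/api/tags` endpoint using only the standard library:

```python
import json
import urllib.error
import urllib.request


def ollama_has_model(name: str, host: str = "http://localhost:11434") -> bool:
    """Return True if the Ollama service is reachable and `name` is pulled."""
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=3) as resp:
            tags = json.load(resp)
    except (urllib.error.URLError, OSError):
        return False  # service not running or unreachable
    # Ollama reports pulled models under the "models" key of /api/tags
    return any(m.get("name", "").startswith(name) for m in tags.get("models", []))


print(ollama_has_model("llama3.1:8b"))
```

If this prints `False`, start Ollama and re-run `ollama pull llama3.1:8b` before launching the app.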

Installation

You can install the project dependencies using either Conda or pip.

Option 1: Using Conda

  1. Create a new Conda environment (recommended):

    conda create -n llama-chat python=3.11  # You can choose a different Python version if desired
    conda activate llama-chat
  2. Install dependencies: Use the provided installation script:

    chmod +x install.sh
    ./install.sh

    Or manually install with:

    pip install chainlit langchain langchain-community ollama ddgs

Option 2: Using pip

  1. Create a virtual environment (recommended):

    python -m venv venv
    source venv/bin/activate  # On macOS/Linux
    # venv\Scripts\activate  # On Windows
  2. Install dependencies: Install directly from requirements.txt:

    pip install -r requirements.txt

    Or install the main dependencies manually:

    pip install chainlit langchain langchain-community ollama ddgs

Running the Application

Quick Start

  1. Ensure your (Conda or virtual) environment is activated.
  2. Ensure the Ollama service is running and the llama3.1:8b model is available.
  3. Navigate to the project directory in your terminal.
  4. Use the provided run script:
    chmod +x run.sh
    ./run.sh

Manual Start

Alternatively, you can start the application manually:

chainlit run app.py -w

The -w flag enables automatic reloading on code changes.

Features in Detail

Available Tools

The chat assistant comes with several built-in tools:

  • Web Search: Ask questions about current events or get up-to-date information from the internet

    • Example: "What's the latest news about artificial intelligence?"
  • Python REPL: Perform calculations, data analysis, or execute Python code

    • Example: "Calculate the compound interest for $1000 at 5% for 10 years"
    • Example: "Create a simple plot showing the Fibonacci sequence"
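For instance, the Python REPL tool would answer the compound-interest example above with code along these lines (a sketch of what the tool might execute, not output captured from the actual assistant):

```python
# Compound interest: A = P * (1 + r) ** n
principal = 1000.0   # initial investment in dollars
rate = 0.05          # 5% annual interest
years = 10

amount = principal * (1 + rate) ** years
interest = amount - principal
print(f"Final amount: ${amount:.2f}, interest earned: ${interest:.2f}")
# → Final amount: $1628.89, interest earned: $628.89
```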

Usage Examples

  1. General Questions: The AI can answer general knowledge questions using its training data
  2. Current Information: Use web search for recent events, stock prices, weather, etc.
  3. Mathematical Problems: Leverage Python for complex calculations, statistics, or visualizations
  4. Code Help: Get assistance with programming problems and see code executed in real-time

Accessing the Chat in the Browser

After starting the application, you will see output in the terminal similar to this:

Your app is available at http://localhost:8000

Open your web browser and navigate to the displayed address (defaults to http://localhost:8000, but may vary if the port is already in use). You should now see the Llama 3.1 chat interface with tool capabilities.

Troubleshooting

  • Ollama Connection Issues: Ensure Ollama is running and the llama3.1:8b model is pulled
  • Port Already in Use: Chainlit will automatically find an available port if 8000 is occupied
  • Python Tool Errors: Make sure your environment has the necessary Python packages for any code you're trying to execute
  • Web Search Issues: Check your internet connection if web search functionality isn't working
  • Session Not Found Errors: These WebSocket errors are usually harmless. If they persist:
    • Refresh the browser page
    • Restart the application with ./run.sh
    • Check that no other processes are using the same port

Test Examples

Basic Test Query

Search for the birth years of Niels Bohr and Marilyn Monroe using web search, then calculate the age difference between their birth years using the Python REPL and output the result in the Chainlit chat.
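The expected flow for this query: the web search returns the birth years (1885 for Niels Bohr, 1926 for Marilyn Monroe), and the Python REPL then computes the difference, roughly like this:

```python
# Birth years as the web-search tool should report them
bohr_birth = 1885     # Niels Bohr
monroe_birth = 1926   # Marilyn Monroe

age_difference = monroe_birth - bohr_birth
print(f"Age difference: {age_difference} years")  # → Age difference: 41 years
```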

Advanced Test Queries

1. Find the current stock price of Apple and Tesla, then calculate which one has grown more in the last year.

2. Search for the latest developments in AI and summarize the top 3 most important news items.

3. Calculate the compound interest for an investment of $10,000 at 7% annual interest for 20 years, and create a simple visualization showing the growth over time.
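The third query combines numeric work with a visualization. A sketch of the yearly-growth computation the Python tool should produce (here with a plain-text bar chart instead of a plotting library, so it runs anywhere):

```python
principal = 10_000.0  # initial investment in dollars
rate = 0.07           # 7% annual interest
years = 20

# Balance at the end of each year, including year 0 (the initial deposit)
balances = [principal * (1 + rate) ** n for n in range(years + 1)]
for n, balance in enumerate(balances):
    # Simple text "visualization": one '#' per $2,000 of balance
    print(f"Year {n:2d}: ${balance:>10,.2f} {'#' * int(balance / 2000)}")

final = balances[-1]
print(f"Final amount after {years} years: ${final:,.2f}")  # final ≈ $38,696.84
```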

