A super lightweight, local chatbot project designed for desktop/laptop use — ideal for minimal setups and quick testing.
🛠️ This is my first chatbot project, but it has been refined and optimized to be shared here publicly for others to explore and learn from.
- 💬 Local LLM chatbot powered by Ollama
- 🧠 Custom memory via `memory.json` (see the sketch below)
- 📝 Chat and journal history saved in `chat_history.json`
- ⚡ Super lightweight, clean architecture (accessible from other devices over the local Wi-Fi network)
- 📦 Uses models directly pulled from Ollama (no LM Studio required)
- 🖥️ Optimized for laptops and desktops (offline-capable)
- 🔧 Built for simplicity and extensibility
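As a rough illustration of the memory feature above, a JSON-backed memory can be loaded and updated with a few lines of Python. The schema shown (a `facts` list) and the helper names are hypothetical; the project's real logic lives in `functions/memory_func.py`:

```python
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")

def load_memory() -> dict:
    """Load persistent memory, starting fresh if the file doesn't exist yet."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text(encoding="utf-8"))
    return {"facts": []}  # hypothetical schema, not necessarily the project's

def remember(fact: str) -> None:
    """Append a new fact and persist memory back to disk."""
    memory = load_memory()
    memory["facts"].append(fact)
    MEMORY_FILE.write_text(json.dumps(memory, indent=2), encoding="utf-8")

remember("User prefers short answers")
print(load_memory())
```

The same pattern works for `chat_history.json`, with a list of message objects instead of facts.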
Here’s a look at the chatbot running locally:
💡 This chatbot was developed and tested on the following specs:
- 🔹 Intel Core i5-13420H
- 🔹 RTX 4050 Laptop GPU
- 🔹 16GB RAM
- 🔹 Windows 11
Minimum recommended:
- ✅ Quad-core CPU
- ✅ 8–16GB RAM
- ✅ Optional: Discrete GPU (for faster performance)
- ✅ Smaller quantized models (like Phi-2) work well on modest hardware
```
very-very-light-chatbot/
├── app.py                  # Main Flask app
├── brain.py                # Core chatbot flow and logic
├── functions/              # Modular helper functions
│   ├── __init__.py
│   ├── history_func.py     # Manages conversation history
│   ├── journal_func.py     # Handles journaling features
│   ├── memory_func.py      # Memory system logic
│   ├── model_runner.py     # Interacts with the LLM via Ollama (sketch below)
│   └── prompt.py           # Prompt creation/injection
├── templates/
│   ├── index.html          # Main chat UI
│   └── history.html        # Journal/history viewer
├── static/
│   ├── style.css           # Basic CSS styling
│   └── script.js           # Frontend behavior
├── memory.json             # Persistent chatbot memory
├── chat_history.json       # Stored conversation/journal logs
└── demo.gif                # Optional: local UI preview
```
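For reference, `model_runner.py` is the piece that talks to Ollama. One minimal way to do that is through Ollama's local REST API (`POST http://localhost:11434/api/generate`); the sketch below illustrates the idea and is not the project's actual code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def run_model(prompt: str, model: str = "phi:2") -> str:
    """Send a prompt to the local Ollama server and return the full reply."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

print(run_model("Say hello in one sentence."))
```

Sticking to the standard library keeps with the project's single dependency (Flask); swapping in `requests` or the `ollama` Python package works just as well.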
This project skips the LM Studio step and directly uses Ollama to pull and run the model:
Download and install Ollama from https://ollama.com, then pull the model:

```bash
ollama pull phi:2
```

🧠 This project uses the Phi-2 model for its balance of speed and performance. Feel free to swap in other models (like `mistral`, `llama3`, or others).

```bash
ollama run phi:2
```

Your local model is now running and ready for chat.
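If you want to double-check which models are installed before starting the app, Ollama's local API can list them (`GET /api/tags`):

```python
import json
import urllib.request

# Ask the local Ollama server which models are currently installed.
with urllib.request.urlopen("http://localhost:11434/api/tags") as response:
    models = json.loads(response.read())["models"]

print([m["name"] for m in models])
```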
```bash
git clone https://github.com/yourusername/very-very-light-chatbot.git
cd very-very-light-chatbot
```

```bash
python -m venv venv
venv\Scripts\activate       # On Windows
# OR
source venv/bin/activate    # On Mac/Linux
```

```bash
pip install flask
```

Make sure Ollama is running your model (phi:2 or another):
```bash
ollama run phi:2
```

```bash
python app.py
```

Go to: http://localhost:5000 (Flask's default port; the exact address is printed in the terminal when the app starts).
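For orientation, a minimal Flask entry point in this shape might look like the sketch below; the `/chat` route name and `get_response` helper are hypothetical stand-ins for the real wiring through `brain.py`:

```python
from flask import Flask, render_template, request, jsonify

app = Flask(__name__)

def get_response(message: str) -> str:
    """Hypothetical stand-in for the chatbot logic in brain.py."""
    return f"You said: {message}"

@app.route("/")
def index():
    # Serves templates/index.html, the main chat UI.
    return render_template("index.html")

@app.route("/chat", methods=["POST"])
def chat():
    # Receive the user's message as JSON and return the bot's reply.
    message = request.get_json().get("message", "")
    return jsonify({"reply": get_response(message)})

if __name__ == "__main__":
    # host="0.0.0.0" lets other devices on the same Wi-Fi reach the app.
    app.run(host="0.0.0.0", port=5000, debug=True)
```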
To verify your environment:

```bash
python --version
flask --version
ollama --version
```

Built with ❤️ by Pranziss/yubedaoneineed
This is my first public chatbot project — feel free to fork, star ⭐, or reach out with feedback or ideas!
