
Perplexity 2.0

A small experimental assistant that combines a Next.js frontend with a FastAPI backend using LangChain / LangGraph and OpenAI integrations.

Repository structure

  • client/ — Next.js (app router) frontend. Key files:
    • client/package.json — project scripts and dependencies.
    • client/src/app/ — Next.js pages and layout.
    • client/src/components/ — UI components like Header.tsx, InputBar.tsx, MessageArea.tsx.
  • server/ — Python FastAPI backend and related code.
    • server/app.py — FastAPI application (SSE /chat_stream/{message} endpoint).
    • server/requirements.txt — Python dependencies.
    • server/Dockerfile — Docker image for the server.
    • server/app.ipynb — notebook with exploration / demos.

Quick start (local, macOS / zsh)

Prerequisites:

  • Node.js (v18+ recommended) and npm (or pnpm/yarn)
  • Python 3.12 (the version used in the server Dockerfile)
  • pip (a virtual environment via the built-in venv module is recommended)

Frontend (client):

  1. Install dependencies and run the dev server:

```bash
cd client
npm install
npm run dev
```

  2. Open the app at http://localhost:3000 (the default Next.js port).

Backend (server):

  1. Create a virtual environment and install dependencies:

```bash
cd server
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```

  2. Run the FastAPI server with Uvicorn (development):

```bash
uvicorn app:app --reload --host 127.0.0.1 --port 8000
```

  3. The server exposes a Server-Sent Events (SSE) endpoint used by the frontend:
  • GET /chat_stream/{message} — streams chat responses and search events. Example:

```bash
curl "http://127.0.0.1:8000/chat_stream/hello"
```
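A Python client consuming this stream could group the raw SSE lines into events along these lines. The event names and JSON payloads below are assumptions for illustration; check server/app.py for the actual format the backend emits:

```python
def parse_sse_lines(lines):
    """Group raw SSE lines into (event, data) pairs.

    SSE frames are `event:`/`data:` lines terminated by a blank line.
    """
    event, data = "message", []
    for line in lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "":  # a blank line terminates one event
            if data:
                yield event, "\n".join(data)
            event, data = "message", []

# Example with a captured stream (payloads are hypothetical):
sample = [
    "event: content",
    'data: {"content": "Hel"}',
    "",
    "event: content",
    'data: {"content": "lo"}',
    "",
]
events = list(parse_sse_lines(sample))
```

In a real client you would feed this the line iterator of a streaming HTTP response instead of a hard-coded list.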

Docker (server):

  1. Build the Docker image from the server folder:

```bash
docker build -t perplexity-server ./server
```

  2. Run the container and map port 8000:

```bash
docker run --rm -p 8000:8000 perplexity-server
```

Environment variables

  • The server loads .env variables via python-dotenv (server/app.py calls load_dotenv()).
  • Provide keys such as OPENAI_API_KEY in server/.env when required; the Tavily search tool likely needs its own API key as well.
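A minimal sketch of failing fast when a key is missing. `require_env` is a hypothetical helper, not part of the server code; in the real app, python-dotenv loads server/.env before anything reads the environment:

```python
import os

def require_env(name: str) -> str:
    """Hypothetical helper: read an environment variable or fail loudly."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Demo with a placeholder value; real code would rely on load_dotenv()
# having populated the environment from server/.env first.
os.environ["EXAMPLE_API_KEY"] = "sk-placeholder"
key = require_env("EXAMPLE_API_KEY")
```

Failing at startup with a clear message beats a cryptic authentication error deep inside an LLM call.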

Notes & implementation details

  • The backend uses langgraph, langchain, and langchain_openai to orchestrate LLM calls and tools (see server/app.py).
  • A search tool (TavilySearchResults) is wired into the graph and may trigger search events; search results are streamed to the frontend as SSE events.
  • CORS is enabled (allow_origins=["*"]) in server/app.py to allow the Next.js frontend to connect during development.
  • The frontend is built with Next.js (App Router), Tailwind-related dev dependencies are listed in client/package.json.
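Since the backend streams results to the frontend over SSE, the wire framing can be sketched as below. The event names are assumptions for illustration; the real names are defined in server/app.py:

```python
import json

def sse_event(event: str, payload: dict) -> str:
    """Format one Server-Sent Event frame: event line, data line, blank line."""
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"

# Hypothetical frames the server might emit: search results, then content.
frames = [
    sse_event("search_results", {"urls": ["https://example.com"]}),
    sse_event("content", {"content": "Hello"}),
]
```

In FastAPI, a generator yielding strings like these is typically wrapped in a streaming response with the text/event-stream media type.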

Troubleshooting

  • SSL certificate issues when using search tools: see server/README.md, which suggests installing certifi and, on macOS, running Install Certificates.command if needed.
  • If you see dependency conflicts in Python, try recreating the virtual environment and installing requirements.txt again.

Development tips

  • Use npm run dev in client/ and uvicorn app:app --reload in server/ for hot reload during development.
  • To test the SSE endpoint locally, use curl or another SSE-capable client; pass checkpoint_id (likely as a query parameter) to continue an existing conversation.

Contributing

  • Feel free to open issues or PRs. Add tests or example clients for the SSE behavior if you work on the backend.

License

  • No license file is included in the repository; add one if you intend to make this project public under a specific license.

Possible next steps

  • Add examples showing how the client connects to the SSE endpoint.
  • Add a minimal .env.example with recommended environment variables.
