This repository contains solutions for the Norwegian AI Championship 2025 multi-task competition, featuring three exciting AI challenges:
- Emergency Healthcare RAG - Medical statement verification using Retrieval-Augmented Generation
- Tumor Segmentation - Medical image segmentation for tumor detection
- Race Car Control - AI-powered autonomous race car control
 
- What is the fastest experiment I can run right now to learn the most?
 - What's the simplest model that gets me 70% there?
 - Where could I be overfitting without noticing?
 - Can I visualize the error? (Wrong classification, wrong mask, off-center detection?)
 - Am I using all available metadata (e.g., timestamps, IDs, contextual hints)?
 
- Git: Ensure that Git is installed on your machine
- Python 3.11: Required for the project
- UV: Used for managing Python environments
- Docker (optional): For DevContainer development
 
1. Clone the repository:
   ```bash
   git clone https://github.com/CogitoNTNU/norwegian-ai-championship-2025.git
   cd norwegian-ai-championship-2025
   ```
2. Set up environment variables:
   ```bash
   cp .env.example .env
   # Edit .env with your competition token and other settings
   ```
3. Install pre-commit hooks (optional, for development):
   ```bash
   cd rag  # or segmentation/ or race-car/
   uv run pre-commit install
   ```
 
Each task is now organized as an independent project with its own dependencies and configuration:
```
norwegian-ai-championship-2025/
├── rag/                         # Emergency Healthcare RAG
│   ├── api.py                   # FastAPI application
│   ├── model.py                 # BM25s RAG model implementation
│   ├── validate.py              # Competition validation
│   ├── example.py               # Example/starter code
│   ├── utils.py                 # Utility functions
│   ├── pyproject.toml           # Task dependencies & config
│   ├── uv.lock                  # Dependency lock file
│   ├── data/                    # RAG-specific data
│   ├── cache/                   # Model cache
│   ├── results/                 # Evaluation results
│   └── rag-evaluation/          # Evaluation framework
├── segmentation/                # Tumor Segmentation
│   ├── api.py                   # FastAPI application
│   ├── example.py               # Prediction functions
│   ├── validate.py              # Competition validation
│   ├── dtos.py                  # Data transfer objects
│   ├── utils.py                 # Utility functions
│   ├── tumor_dataset.py         # Dataset handling
│   ├── pyproject.toml           # Task dependencies & config
│   ├── uv.lock                  # Dependency lock file
│   ├── utilities/               # Task-specific utilities
│   └── docs/                    # Documentation
├── race-car/                    # Race Car Control
│   ├── api.py                   # FastAPI application
│   ├── example.py               # Prediction functions
│   ├── validate.py              # Competition validation
│   ├── dtos.py                  # Data transfer objects
│   ├── test_endpoint.py         # Endpoint testing
│   ├── pyproject.toml           # Task dependencies & config
│   ├── uv.lock                  # Dependency lock file
│   ├── src/                     # Game engine
│   └── public/                  # Static assets
├── data/                        # Shared data resources
├── DM-i-AI-2025/                # Reference implementations
├── docs/                        # Project documentation
├── experiments/                 # Experimental code
├── .env.example                 # Environment variables template
├── .pre-commit-config.yaml      # Pre-commit hooks configuration
└── README.md                    # This file
```
Each task is completely independent. Navigate to the task folder and run:
```bash
cd rag/
uv sync    # Install dependencies
uv run api # Start server on port 8000
```
Features:
- BM25s-powered retrieval system
 - Medical statement classification
 - Topic identification (115+ topics)
 - Mistral 7B-Instruct integration
 - Auto port cleanup and logging
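As a rough picture of what the BM25s retrieval step does, here is a minimal pure-Python BM25 (Okapi) scorer. This is an illustration only, not the `bm25s` library actually used in `model.py`:

```python
import math
from collections import Counter

def bm25_scores(query: str, corpus: list[str], k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each document in `corpus` against `query` with BM25 (Okapi)."""
    docs = [doc.lower().split() for doc in corpus]
    avgdl = sum(len(d) for d in docs) / len(docs)  # average document length
    n = len(docs)
    scores = []
    for doc in docs:
        tf = Counter(doc)
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for d in docs if term in d)  # document frequency
            if df == 0:
                continue
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)  # smoothed, always positive
            freq = tf[term]
            score += idf * freq * (k1 + 1) / (freq + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

# Toy medical corpus standing in for the real chapter snippets
corpus = [
    "aspirin inhibits platelet aggregation in acute myocardial infarction",
    "insulin regulates blood glucose in diabetes",
]
scores = bm25_scores("aspirin heart attack platelet", corpus)
best = scores.index(max(scores))  # index of the best-matching document
```

In the real pipeline, the top-ranked passages would then be handed to the Mistral 7B-Instruct model for statement classification.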
 
```bash
cd segmentation/
uv sync    # Install dependencies
uv run api # Start server on port 9051
```
Features:
- Medical image processing
 - Tumor detection and segmentation
 - Base64 image handling
 - PyTorch/scikit-learn support
 - Auto port cleanup and logging
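The segmentation predict payload carries the image as a base64 string in an `img` field. A stdlib-only round-trip sketch for building and unpacking that payload (the field name comes from the API example; everything else here is illustrative):

```python
import base64
import json

def encode_image(raw_bytes: bytes) -> str:
    """Pack raw image bytes into the base64 string the predict payload expects."""
    return base64.b64encode(raw_bytes).decode("ascii")

def decode_image(payload: dict) -> bytes:
    """Recover raw image bytes from an incoming {"img": "..."} payload."""
    return base64.b64decode(payload["img"])

# Round-trip example with dummy bytes standing in for a real scan file
raw = b"\x89PNG fake image bytes"
body = json.dumps({"img": encode_image(raw)})
assert decode_image(json.loads(body)) == raw
```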
 
```bash
cd race-car/
uv sync    # Install dependencies
uv run api # Start server on port 9052
```
Features:
- Real-time game state processing
 - Action prediction (ACCELERATE, STEER_LEFT, etc.)
 - Pygame-based simulation
 - Sensor data integration
 - Auto port cleanup and logging
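The predict endpoint maps a game state to one of the discrete actions. A toy rule-based baseline for illustration: the `sensor_left`/`sensor_right` fields and the `STEER_RIGHT`/`DECELERATE` action names are assumptions (only `ACCELERATE`, `STEER_LEFT`, and `did_crash` appear in this README); `dtos.py` defines the real schema.

```python
def predict_action(state: dict) -> str:
    """Toy heuristic: slow down after a crash, steer away from the nearer
    obstacle, otherwise accelerate. Sensor fields are hypothetical."""
    if state.get("did_crash"):
        return "DECELERATE"
    left = state.get("sensor_left", 1.0)    # normalized distance, 0 = touching
    right = state.get("sensor_right", 1.0)
    if left < 0.2:
        return "STEER_RIGHT"
    if right < 0.2:
        return "STEER_LEFT"
    return "ACCELERATE"
```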
 
Each task follows the same pattern:
- `GET /` - Service information and status
- `GET /api` - API details, version, and uptime
- `POST /predict` - Main prediction endpoint
```bash
# Emergency Healthcare RAG
curl -X POST "http://localhost:8000/predict" \
     -H "Content-Type: application/json" \
     -d '{"statement": "Aspirin is used to treat heart attacks"}'

# Tumor Segmentation
curl -X POST "http://localhost:9051/predict" \
     -H "Content-Type: application/json" \
     -d '{"img": "base64_encoded_image_data"}'

# Race Car Control
curl -X POST "http://localhost:9052/predict" \
     -H "Content-Type: application/json" \
     -d '{"did_crash": false, "elapsed_time_ms": 1000, ...}'
```
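The same smoke tests can be driven from Python with the standard library only. The ports and payloads are the ones from the curl examples; nothing is sent until you call `predict` with the matching server running:

```python
import json
import urllib.request

# Example payloads and ports from the curl commands above
ENDPOINTS = {
    "rag": (8000, {"statement": "Aspirin is used to treat heart attacks"}),
    "segmentation": (9051, {"img": "base64_encoded_image_data"}),
}

def build_request(task: str) -> urllib.request.Request:
    """Build a POST request for the task's local /predict endpoint."""
    port, payload = ENDPOINTS[task]
    return urllib.request.Request(
        f"http://localhost:{port}/predict",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def predict(task: str) -> dict:
    """Send the request; requires the corresponding server to be running."""
    with urllib.request.urlopen(build_request(task)) as resp:
        return json.load(resp)

# With the rag server up: predict("rag")
```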
1. Choose your task and navigate to the corresponding directory:
   ```bash
   cd rag/  # or segmentation/ or race-car/
   ```
2. Install dependencies:
   ```bash
   uv sync
   ```
3. Customize your prediction logic:
   - Emergency Healthcare RAG: Edit `model.py` for your RAG implementation
   - Tumor Segmentation: Edit `example.py` → `predict_tumor_segmentation()`
   - Race Car Control: Edit `example.py` → `predict_race_car_action()`
4. Test locally:
   ```bash
   uv run api  # Start with auto port cleanup and logging
   ```
   Or for development with hot reload:
   ```bash
   uv run uvicorn api:app --host 0.0.0.0 --port [PORT] --reload
   ```
5. Validate with the competition:
   ```bash
   uv run validate            # Submit validation
   uv run check-status <uuid> # Check status
   uv run validate --wait     # Submit and wait
   ```
 
Each task directory has its own validation script that connects to the competition system:
```bash
# Emergency Healthcare RAG
cd rag/ && uv run validate

# Tumor Segmentation
cd segmentation/ && uv run validate

# Race Car Control
cd race-car/ && uv run validate
```
Once your API is running locally, validate it with the competition system:
```bash
# Set your environment variables
export EVAL_API_TOKEN="your-token-here"

# For Emergency Healthcare RAG
export SERVICE_URL="http://0.0.0.0:8000"
curl https://cases.ainm.no/api/v1/usecases/emergency-healthcare-rag/validate/queue \
     -X POST --header "x-token: $EVAL_API_TOKEN" \
     --data "{\"url\": \"$SERVICE_URL/predict\"}"

# For Tumor Segmentation
export SERVICE_URL="http://0.0.0.0:9051"
curl https://cases.ainm.no/api/v1/usecases/tumor-segmentation/validate/queue \
     -X POST --header "x-token: $EVAL_API_TOKEN" \
     --data "{\"url\": \"$SERVICE_URL/predict\"}"

# For Race Car Control
export SERVICE_URL="http://0.0.0.0:9052"
curl https://cases.ainm.no/api/v1/usecases/race-car/validate/queue \
     -X POST --header "x-token: $EVAL_API_TOKEN" \
     --data "{\"url\": \"$SERVICE_URL/predict\"}"
```
For proper validation against the Norwegian AI Championship competition server, expose your local API via Pinggy tunnels:
From any task directory:
```bash
cd rag/          # or segmentation/ or race-car/
uv run api       # Starts server with auto port cleanup
```
In the same directory, follow the logs in real time:
```bash
tail -f logs/api.log
```
Expose your local server to the internet:
```bash
# For Emergency Healthcare RAG (port 8000)
ssh -p 443 -R0:localhost:8000 free.pinggy.io

# For Tumor Segmentation (port 9051)
ssh -p 443 -R0:localhost:9051 free.pinggy.io

# For Race Car Control (port 9052)
ssh -p 443 -R0:localhost:9052 free.pinggy.io
```
1. Go to https://cases.ainm.no/
2. Navigate to your task (Emergency Healthcare RAG, Tumor Segmentation, or Race Car)
3. Paste your Pinggy HTTPS URL (e.g., `https://rnxtd-....a.free.pinggy.link/predict`)
4. Enter your competition token
5. Submit the evaluation request
 
- Watch the real-time logs: `tail -f logs/api.log`
- Check the competition scoreboard for results
- Keep both the server and tunnel running during validation
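The queueing curl commands above translate directly to Python with the standard library. This sketch only builds the request (endpoint URL, `x-token` header, and payload are taken from the curl examples); nothing is sent until `queue_validation` is called:

```python
import json
import urllib.request

BASE = "https://cases.ainm.no/api/v1/usecases"

def queue_validation_request(usecase: str, service_url: str, token: str) -> urllib.request.Request:
    """Build the POST request that queues a validation run for `usecase`
    (e.g. "emergency-healthcare-rag", "tumor-segmentation", "race-car")."""
    return urllib.request.Request(
        f"{BASE}/{usecase}/validate/queue",
        data=json.dumps({"url": f"{service_url}/predict"}).encode(),
        headers={"x-token": token, "Content-Type": "application/json"},
    )

def queue_validation(usecase: str, service_url: str, token: str) -> str:
    """Send the request and return the raw response body."""
    req = queue_validation_request(usecase, service_url, token)
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Example: queue_validation("race-car", "http://0.0.0.0:9052", "your-token-here")
```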
 
Each task manages its own dependencies independently. Here's a comprehensive guide:
```bash
# Navigate to your task directory first
cd rag/  # or segmentation/ or race-car/

# Add a production dependency
uv add pandas

# Add multiple dependencies at once
uv add numpy scipy matplotlib

# Add a development dependency (for testing, linting, etc.)
uv add --dev pytest black ruff

# Add a dependency with a specific version
uv add "torch>=2.0.0"

# Add from a specific index or with extras
uv add "fastapi[standard]>=0.104.0"

# Remove a dependency
uv remove pandas

# Remove a development dependency
uv remove --dev pytest

# Install all dependencies (after cloning or when the lock file changes)
uv sync

# Install only production dependencies (skip dev dependencies)
uv sync --no-dev

# Force reinstall all dependencies
uv sync --reinstall

# Run a command in the virtual environment
uv run python main.py

# Run a script defined in pyproject.toml
uv run validate

# Run with specific arguments
uv run uvicorn api:app --host 0.0.0.0 --port 8000 --reload

# Activate the virtual environment (alternative to running individual commands)
source .venv/bin/activate

# Run pre-commit hooks
uv run pre-commit run --all-files

# Update all dependencies to the latest compatible versions
uv lock --upgrade

# Then sync to install the updated versions
uv sync

# Update a specific dependency
uv add "pandas@latest"
```
Note: All dependency changes are automatically reflected in `pyproject.toml` and `uv.lock`. You don't need to edit these files manually.
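The `uv run api` and `uv run validate` shortcuts work because each task's `pyproject.toml` registers entry points under `[project.scripts]`. A sketch of what such a section might look like (the module and function names here are assumptions; check each task's actual `pyproject.toml`):

```toml
[project]
name = "rag"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = ["fastapi[standard]>=0.104.0"]

[project.scripts]
api = "api:main"            # hypothetical entry point behind `uv run api`
validate = "validate:main"  # hypothetical entry point behind `uv run validate`
```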
- ✅ Independent Development: Work on one task without affecting others
- ✅ Isolated Dependencies: Each task has its own requirements and versions
- ✅ Simple Deployment: Just `uv sync` and run the task you need
- ✅ Clean Separation: No more shared complexity or conflicts
- ✅ Easy Submission: Each task can be submitted independently
- ✅ Faster Setup: Only install dependencies for the task you're working on
- ✅ Better Testing: Test each task in isolation
- ✅ Flexible Deployment: Deploy tasks on different servers/containers
What Changed:
- No more centralized API: Each task runs independently on its own port
- No more shared dependencies: Each task manages its own `pyproject.toml` and `uv.lock`
- Individual validation: Run validation from within each task folder
- Simplified workflow: `cd task/ && uv sync && uv run uvicorn api:app`
- Independent deployment: Each task can be deployed separately

Migration Steps:
1. Navigate to your specific task directory (`rag/`, `segmentation/`, or `race-car/`)
2. Run `uv sync` to install task-specific dependencies
3. Your existing code should work with minimal changes
4. Use the new individual APIs instead of the unified API
 
To build and preview the documentation site locally:
```bash
uv run mkdocs build
uv run mkdocs serve
```
This builds the documentation and starts a local server at http://127.0.0.1:8000/ where you can browse the docs and API reference.
- Previous Experiences and Strengths
 - Competition Guidelines
 - UV Documentation
 - FastAPI Documentation
 - Pre-commit Documentation
 - PyTorch Documentation
 
1. Choose the task you want to work on
2. Navigate to the task directory (`cd rag/`, `cd segmentation/`, or `cd race-car/`)
3. Install dependencies (`uv sync`)
4. Make your changes
5. Test locally (`uv run api` or `uv run uvicorn api:app --reload`)
6. Validate with the competition (`uv run validate`)
7. Commit and push your changes
 
If you encounter any issues:
- Check the task-specific README in each directory
- Ensure all dependencies are installed (`uv sync`)
- Verify your API is running on the correct port
- Check the logs for detailed error messages
- Refer to the DM-i-AI-2025 reference implementations