
Deye Hard Backend

FastAPI-based solar inverter management system with multi-tenant TimescaleDB storage

Python · FastAPI · TimescaleDB · Rust · Docker

Overview

Deye Hard Backend is a multi-tenant solar inverter monitoring and data collection system that provides real-time and historical power generation data. The system supports multiple inverter types through different integration methods while maintaining strict data isolation between users.

Key Features

  • Multi-Tenant Architecture: TimescaleDB with Row-Level Security (RLS) for automatic user data isolation
  • Deye Inverter Support: High-performance Rust collector using Solarman V5 protocol
  • OpenDTU Integration: HTTP API for Hoymiles microinverters via OpenDTU firmware
  • Real-Time Monitoring: Interactive web dashboard with live power data
  • Time-Series Storage: Automatic data compression and 2-year retention policy
  • Comprehensive API: RESTful API with interactive documentation
  • Modern Web UI: HTMX-powered interface with responsive design
  • Secure Authentication: JWT-based authentication with rate limiting and CSRF protection

Supported Inverter Types

Deye Inverters (Primary Support)

Deye inverters communicate via Solarman data loggers using a proprietary TCP protocol. The system includes a high-performance Rust collector that:

  • Listens on TCP port 10000 for Solarman logger connections
  • Decodes Solarman V5 protocol packets
  • Validates checksums and message integrity (see the sketch below)
  • Forwards telemetry to TimescaleDB

Documentation: See collector/README.md for detailed collector setup and protocol information.
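The frame validation mentioned above can be illustrated with a short sketch. The production decoder is the Rust collector; the layout assumed here (start byte 0xA5, header fields for length, control code and serials, payload, a single-byte checksum, end byte 0x15) follows publicly documented Solarman V5 behaviour and should be checked against collector/README.md rather than taken as authoritative:

# Minimal sketch of Solarman V5 frame validation (illustrative, not the collector's code).
def looks_like_v5_frame(frame: bytes) -> bool:
    # Assumed layout: 0xA5 start, header + payload bytes, 1-byte checksum, 0x15 end.
    if len(frame) < 13 or frame[0] != 0xA5 or frame[-1] != 0x15:
        return False
    # The checksum is assumed to be the low byte of the sum of everything between
    # the start byte and the checksum byte itself.
    return (sum(frame[1:-2]) & 0xFF) == frame[-2]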

OpenDTU Devices

OpenDTU is open-source firmware for ESP32 microcontrollers that communicates with Hoymiles microinverters. Integration is achieved through HTTP push:

  • ESP32 with OpenDTU firmware pushes data via HTTP POST
  • Endpoint: POST /api/opendtu/measurements
  • Authentication via X-API-Key header
  • Supports multiple inverters per DTU
  • Comprehensive measurement data including DC channels

Firmware: Use the forked OpenDTU repository with HTTP push feature: https://github.com/bjoernh/OpenDTU

Configuration: Configure the HTTP push target in OpenDTU's web interface to point to your backend API endpoint. See the forked repository for detailed configuration instructions.

Technology Stack

Backend

  • FastAPI 0.115+: Modern async web framework
  • Python 3.13+: Runtime environment
  • Uvicorn: ASGI server
  • Pydantic: Data validation and settings

Database

  • PostgreSQL + TimescaleDB: Time-series database extension
  • SQLAlchemy 2.x: Async ORM with declarative models
  • Alembic: Database migration management
  • Row-Level Security (RLS): Multi-tenant data isolation

Collector (Deye Support)

  • Rust: High-performance systems language
  • Tokio: Async runtime
  • Solarman V5 Protocol: Custom TCP protocol decoder
  • TCP Server: Port 10000 listener

Frontend

  • HTMX: Dynamic HTML interactions without JavaScript
  • Jinja2: Server-side templating
  • DaisyUI + Tailwind CSS: Modern UI components and styling
  • Plotly: Interactive charts and visualizations

Authentication & Security

  • fastapi-users: User management framework
  • JWT: Token-based authentication (2-day lifetime)
  • CSRF Protection: fastapi-csrf-protect middleware
  • Rate Limiting: slowapi for endpoint throttling
  • Fernet Encryption: Symmetric encryption for sensitive data
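The Fernet encryption mentioned above comes from the cryptography package. A minimal sketch of how a sensitive value would be encrypted and decrypted with it (the value shown is illustrative, not the project's actual data model):

from cryptography.fernet import Fernet

# In the backend the key comes from the ENCRYPTION_KEY setting (urlsafe base64, 32 bytes).
key = Fernet.generate_key()
fernet = Fernet(key)

token = fernet.encrypt(b"logger-serial-or-api-key")          # bytes in, opaque token out
assert fernet.decrypt(token) == b"logger-serial-or-api-key"  # raises InvalidToken on tampering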

Development Tools

  • uv: Fast Python package manager
  • Docker & Docker Compose: Containerized development
  • pytest: Async test framework

Quick Start

Prerequisites

# Docker and Docker Compose
# Install from: https://docs.docker.com/get-docker/

# uv (Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Rust (for collector development, optional)
# Install from: https://rustup.rs/

Docker Development (Recommended)

# Clone repository with submodules
git clone --recursive <repository-url>
cd solar-backend

# Start all services (backend + TimescaleDB)
docker-compose up -d

# Run database migrations
docker-compose exec backend uv run alembic upgrade head

# Access the application
# - Web UI: http://localhost:8000
# - API Documentation: http://localhost:8000/docs
# - Admin Interface: http://localhost:8000/admin

Local Development

# Install dependencies
uv sync

# Configure environment
cp .env.example .env
# Edit .env with your settings

# Set environment variable
export ENV_FILE=.env

# Start TimescaleDB (if not using Docker for everything)
docker-compose up -d db

# Run database migrations
uv run alembic upgrade head

# Start development server
uv run uvicorn solar_backend.app:app --reload

Deye Collector Setup

Building the Collector

cd collector

# Development build
cargo build

# Release build (optimized)
cargo build --release

# Run tests
cargo test

Configuration

Create collector/.env:

PORT=10000
BACKEND_URL=http://localhost:8000
RUST_LOG=info

Running the Collector

# Using cargo
cargo run --release

# Or using Docker
docker-compose -f collector/docker-compose.yml up -d

How It Works

  1. Deye Inverter sends telemetry via Solarman data logger
  2. Solarman Logger forwards data to Collector TCP port 10000
  3. Rust Collector decodes Solarman V5 protocol packets
  4. Rust Collector stores measurements in TimescaleDB

For detailed collector documentation, protocol specifications, and troubleshooting, see collector/README.md.

OpenDTU Setup

Requirements

  • ESP32 microcontroller with OpenDTU-compatible hardware
  • Modified OpenDTU firmware with HTTP push feature

Firmware Installation

A) Pre-built generic_esp32 firmware

Download firmware.http-push.bin from https://github.com/bjoernh/OpenDTU

Flash the file in OpenDTU's web interface (Settings -> Firmware Upgrade)

B) Build from Source

  1. Clone the forked repository: https://github.com/bjoernh/OpenDTU
  2. Build and flash the firmware following the repository's instructions for your ESP32 hardware
  3. Configure WiFi settings via OpenDTU's initial setup wizard

Configuration

Configure HTTP push in OpenDTU's web interface (Settings > HTTP Push Settings):

(Screenshot: OpenDTU HTTP push configuration)

Target URL: http://your-backend-host:8000/api/opendtu/measurements
Publish Interval: 60
Authentication Header Name: X-API-Key
Authentication Token: your-api-key-here

The OpenDTU device will automatically push measurement data at regular intervals.

Configuration

Required Environment Variables

Create a .env file with the following settings:

  • DATABASE_URL: TimescaleDB connection string, e.g. postgresql+asyncpg://user:pass@localhost:5432/deyehard
  • AUTH_SECRET: JWT signing secret (32+ bytes), e.g. your-secret-key-min-32-characters-long
  • ENCRYPTION_KEY: Fernet encryption key (base64), e.g. 6DLfBB4KnMuChUJZsMHWz2kJTtNRNTTtoTCCbH7CYyw=
  • BASE_URL: Public URL for email links, e.g. http://localhost:8000
  • COOKIE_SECURE: Secure flag for cookies, False (dev) / True (production)
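AUTH_SECRET and ENCRYPTION_KEY should be generated, not hand-typed. One convenient way, assuming the cryptography package is available (it is a backend dependency for Fernet):

import secrets
from cryptography.fernet import Fernet

# AUTH_SECRET: any sufficiently long random string works; 48 bytes of entropy shown here.
print("AUTH_SECRET=" + secrets.token_urlsafe(48))
# ENCRYPTION_KEY: must be a valid Fernet key (urlsafe base64 encoded 32 bytes).
print("ENCRYPTION_KEY=" + Fernet.generate_key().decode())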

Email Configuration (Optional)

For user registration and password reset emails:

[email protected]
FASTMAIL__MAIL_PASSWORD=secret
[email protected]
FASTMAIL__MAIL_SERVER=smtp.example.com
FASTMAIL__MAIL_PORT=587
FASTMAIL__MAIL_FROM_NAME="Deye Hard"
FASTMAIL__MAIL_STARTTLS=true
FASTMAIL__MAIL_SSL_TLS=false
FASTMAIL__USE_CREDENTIALS=true
FASTMAIL__VALIDATE_CERTS=true
FASTMAIL__SUPPRESS_SEND=false

Development Settings

COOKIE_SECURE=False      # Allow HTTP cookies in development
DEBUG=False              # Enable SQLAlchemy query logging if True

See .env.example for a complete configuration template.

Database Setup

TimescaleDB Extension

The TimescaleDB extension is automatically enabled when using the Docker setup. For manual PostgreSQL installations:

CREATE EXTENSION IF NOT EXISTS timescaledb;

Running Migrations

# Apply all pending migrations
uv run alembic upgrade head

# Create a new migration after model changes
uv run alembic revision --autogenerate -m "description of changes"

# Rollback one migration
uv run alembic downgrade -1

# View migration history
uv run alembic history

Database Architecture

  • Hypertable: inverter_measurements with 7-day time-based chunks
  • Multi-dimensional Partitioning: By time and user_id (4 space partitions)
  • Row-Level Security (RLS): Automatic user data isolation at database level
  • Automatic Compression: Data compressed after 7 days
  • Retention Policy: 2-year data retention

The RLS policy ensures users can only access their own data, even if application-level authorization fails. The database automatically filters queries based on the app.current_user_id session variable.
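The actual setup lives in the Alembic migrations under alembic/versions/. Purely as an illustration of how the hypertable, space partitioning, compression, retention, and RLS policy described above map onto TimescaleDB's and PostgreSQL's SQL API, here is a hedged sketch (the time column name and the integer user_id type are assumptions):

from sqlalchemy import text
from sqlalchemy.engine import Connection

def apply_timescale_setup(conn: Connection) -> None:
    # Hypertable with 7-day time chunks, plus 4 hash partitions on user_id.
    conn.execute(text(
        "SELECT create_hypertable('inverter_measurements', 'time', "
        "chunk_time_interval => INTERVAL '7 days')"))
    conn.execute(text(
        "SELECT add_dimension('inverter_measurements', 'user_id', number_partitions => 4)"))
    # Compress chunks older than 7 days, drop data older than 2 years.
    conn.execute(text(
        "ALTER TABLE inverter_measurements "
        "SET (timescaledb.compress, timescaledb.compress_segmentby = 'user_id')"))
    conn.execute(text(
        "SELECT add_compression_policy('inverter_measurements', INTERVAL '7 days')"))
    conn.execute(text(
        "SELECT add_retention_policy('inverter_measurements', INTERVAL '2 years')"))
    # Row-Level Security: rows are visible only when user_id matches the session variable.
    conn.execute(text("ALTER TABLE inverter_measurements ENABLE ROW LEVEL SECURITY"))
    conn.execute(text(
        "CREATE POLICY tenant_isolation ON inverter_measurements "
        "USING (user_id = current_setting('app.current_user_id')::integer)"))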

API Documentation

OpenDTU Measurements Endpoint

Send measurement data from OpenDTU devices:

Endpoint: POST /api/opendtu/measurements

Request Example:

curl -X POST "http://localhost:8000/api/opendtu/measurements" \
  -H "Content-Type: application/json" \
  -H "X-API-KEY: YOUR-API-KEY" \
  -d '{
    "timestamp": "2025-10-19T17:54:43+02:00",
    "dtu_serial": "145680140006",
    "inverters": [{
      "serial": "116166771004",
      "name": "Inverter 1",
      "reachable": true,
      "producing": true,
      "last_update": 1760889277,
      "measurements": {
        "power_ac": 16.1,
        "voltage_ac": 229.8,
        "current_ac": 0.07,
        "frequency": 49.99,
        "power_factor": 0.617,
        "power_dc": 17
      },
      "dc_channels": [{
        "channel": 1,
        "name": "Panel 1",
        "power": 3.4,
        "voltage": 30.4,
        "current": 0.11,
        "yield_day": 337,
        "yield_total": 444.671,
        "irradiation": 1.545455
      }]
    }]
  }'

Response (201 Created):

{
  "dtu_serial": "199980140256",
  "timestamp": "2025-10-19T17:54:43+02:00",
  "total_inverters": 1,
  "success_count": 1,
  "error_count": 0,
  "results": [{
    "serial": "116183771004",
    "status": "ok",
    "inverter_id": 1,
    "power_ac": 16
  }]
}

Status Codes:

  • 201 Created: All inverters processed successfully
  • 207 Multi-Status: Some inverters succeeded, some failed
  • 404 Not Found: None of the submitted inverters were found in the database

Interactive API Documentation

The interactive API documentation at /docs provides full API specifications, request/response schemas, and a testing interface.

Testing

# Run all tests
uv run pytest

# Run tests by marker
uv run pytest -m unit          # Unit tests only
uv run pytest -m integration   # Integration tests
uv run pytest -m smoke         # Smoke tests

# Run with verbose output
uv run pytest -v

# Run specific test file
uv run pytest tests/test_measurements_api_key.py

# Run with coverage report
uv run pytest --cov=solar_backend

Note: Test configuration is automatically loaded from tests/test.env. No manual environment setup required.
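For orientation, a minimal smoke test combining a marker with the async app looks roughly like this (pytest-asyncio and the in-process httpx transport are assumptions about the test stack; the real fixtures are defined in tests/conftest.py):

import httpx
import pytest

from solar_backend.app import app

pytestmark = pytest.mark.smoke  # selected by `uv run pytest -m smoke`

@pytest.mark.asyncio
async def test_healthcheck_returns_ok() -> None:
    # Drive the FastAPI app in-process; no running server is required.
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        response = await client.get("/healthcheck")
    assert response.status_code == 200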

Test Coverage

  • ✅ Authentication & authorization
  • ✅ User registration & email verification
  • ✅ Inverter CRUD operations
  • ✅ OpenDTU measurements API
  • ✅ Time-series data queries
  • ✅ Multi-tenant data isolation
  • ✅ Rate limiting
  • ✅ Concurrent request handling

Project Structure

solar-backend/
├── collector/                    # Rust collector (git submodule)
│   ├── src/                     # Solarman V5 protocol implementation
│   │   ├── protocol/            # Packet decoder, control codes
│   │   ├── server/              # TCP server
│   │   └── ...
│   ├── Cargo.toml               # Rust dependencies
│   ├── README.md                # Collector documentation
│   └── DEPLOYMENT.md            # Deployment guide
│
├── solar_backend/               # Python backend
│   ├── api/                     # API endpoint modules
│   │   ├── measurements.py      # OpenDTU endpoint
│   │   ├── inverter.py          # Inverter management
│   │   ├── dashboard.py         # Dashboard data
│   │   ├── signup.py            # User registration
│   │   ├── login.py             # Authentication
│   │   ├── account.py           # Account management
│   │   └── start.py             # Homepage
│   │
│   ├── services/                # Business logic layer
│   │   ├── inverter_service.py  # Inverter-related logic
│   │   └── exceptions.py        # Custom service exceptions
│   │
│   ├── utils/                   # Utility modules
│   │   ├── query_builder.py     # Advanced query construction
│   │   ├── timeseries.py        # TimescaleDB operations
│   │   ├── email.py             # Email sending
│   │   ├── crypto.py            # Encryption utilities
│   │   └── admin_auth.py        # Admin authentication
│   │
│   ├── templates/               # Jinja2 HTML templates
│   │   ├── email/               # Email templates
│   │   └── ...
│   │
│   ├── app.py                   # FastAPI application setup
│   ├── db.py                    # SQLAlchemy models
│   ├── config.py                # Pydantic settings
│   ├── users.py                 # User management
│   └── schemas.py               # Pydantic validation models
│
├── tests/                       # Pytest test suite
│   ├── integration/             # Integration tests
│   ├── unit/                    # Unit tests
│   ├── conftest.py              # Test fixtures
│   └── ...
│
├── alembic/                     # Database migrations
│   └── versions/                # Migration scripts
│
├── docker-compose.yml           # Docker services configuration
├── pyproject.toml               # Python dependencies (uv)
├── Dockerfile                   # Backend container
├── CLAUDE.md                    # Development guide for Claude Code
├── SPEC.md                      # System specification
└── README.md                    # This file

Development Workflow

Adding New Features

  1. Modify Database Models: Update models in solar_backend/db.py (see the sketch after this list)
  2. Generate Migration: uv run alembic revision --autogenerate -m "description"
  3. Review Migration: Check generated file in alembic/versions/
  4. Apply Migration: uv run alembic upgrade head
  5. Add Business Logic: Implement logic in the appropriate service in solar_backend/services/.
  6. Add API Endpoints: Create/update files in solar_backend/api/ that call the service layer.
  7. Write Tests: Add tests in tests/
  8. Run Tests: uv run pytest
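As an illustration of step 1, a new model in solar_backend/db.py would follow the SQLAlchemy 2.x declarative style. The class, table, and column names below are hypothetical, not part of the project's actual schema:

from datetime import datetime

from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    """Stand-in for the project's declarative base in solar_backend/db.py."""

class InverterNote(Base):
    """Hypothetical model: a free-text note attached to an inverter."""
    __tablename__ = "inverter_notes"

    id: Mapped[int] = mapped_column(primary_key=True)
    inverter_id: Mapped[int] = mapped_column(ForeignKey("inverters.id"))  # table name assumed
    created_at: Mapped[datetime]
    text: Mapped[str]

After that, alembic revision --autogenerate should pick up the new table, and the rest of the workflow applies unchanged.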

Important Development Notes

  • Don't rebuild the backend Docker container in dev mode: The code directory is volume-mounted with uvicorn's --reload flag, so changes are detected automatically
  • Use RLS Context: Always use the rls_context manager from utils/timeseries.py when querying time-series data (see the sketch below)
  • HTMX Templates: Jinja2 templates must be initialized with the correct path in the test setup
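The rls_context manager referenced above is part of utils/timeseries.py; its exact signature is not reproduced here. As a sketch of the typical shape of such a helper, setting the app.current_user_id variable for the current transaction usually looks like this (an async SQLAlchemy session and integer user ids are assumptions):

from contextlib import asynccontextmanager

from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncSession

@asynccontextmanager
async def rls_context(session: AsyncSession, user_id: int):
    # set_config(..., true) makes the setting transaction-local, so it expires
    # automatically on commit/rollback and cannot leak into other requests.
    await session.execute(
        text("SELECT set_config('app.current_user_id', :uid, true)"),
        {"uid": str(user_id)},
    )
    yield session

Typical usage would then be: async with rls_context(session, user.id) as s: run the time-series query on s.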

Code Style

  • Follow PEP 8 for Python code
  • Use type hints for function signatures
  • Write docstrings for public functions and classes
  • Keep functions focused and single-purpose
  • Use async/await for I/O operations

Architecture

Multi-Tenant Data Isolation

User A                           User B
├─ Inverter 1                    ├─ Inverter 3
│  └─ Measurements               │  └─ Measurements
└─ Inverter 2                    └─ Inverter 4
   └─ Measurements                  └─ Measurements

TimescaleDB Hypertable (inverter_measurements)
├─ Partitioned by time (7-day chunks)
├─ Partitioned by user_id (4 space partitions)
└─ RLS Policy: user_id = app.current_user_id

Data isolation is enforced at the database level through Row-Level Security. Even if application logic fails, users cannot access other users' data.

Data Flow - Deye Inverters

Deye Inverter
    ↓
Solarman Data Logger (TCP client)
    ↓
Rust Collector (TCP server on port 10000)
    ├─ Decode Solarman V5 protocol
    ├─ Validate checksums
    └─ Extract telemetry data
    ↓
TimescaleDB (Multi-tenant storage)
    ├─ Partition by user_id
    ├─ Compress after 7 days
    └─ Retain for 2 years

Data Flow - OpenDTU Devices

Hoymiles Microinverter
    ↓
OpenDTU (ESP32 microcontroller)
    ├─ Collect inverter telemetry
    ├─ Format as JSON payload
    └─ HTTP POST to backend
    ↓
Backend API (/api/opendtu/measurements)
    ├─ Validate API key (X-API-Key header)
    ├─ Parse multi-inverter payload
    ├─ Match inverters by serial number
    └─ Set RLS context per user
    ↓
TimescaleDB (Multi-tenant storage)
    └─ Same storage as Deye inverters
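The API-key step in this flow corresponds to a FastAPI header dependency. A simplified sketch of the pattern, not the project's actual endpoint (the lookup function is hypothetical and the payload model is reduced to a plain dict):

from fastapi import APIRouter, Header, HTTPException, status

router = APIRouter()

async def resolve_user_by_api_key(api_key: str):
    """Hypothetical lookup: map an API key to its owning user, or return None."""
    raise NotImplementedError

@router.post("/api/opendtu/measurements", status_code=status.HTTP_201_CREATED)
async def ingest_measurements(payload: dict, x_api_key: str = Header(...)):
    # FastAPI maps the x_api_key parameter to the X-API-Key request header.
    user = await resolve_user_by_api_key(x_api_key)
    if user is None:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key")
    # ... match payload["inverters"] by serial, set the RLS context for this user,
    # and insert the measurements into TimescaleDB ...
    return {"total_inverters": len(payload.get("inverters", []))}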

Deployment

Production Environment Variables

# Database
DATABASE_URL=postgresql+asyncpg://user:strong-password@prod-host:5432/deyehard

# Security
AUTH_SECRET=generate-strong-random-secret-min-32-characters
ENCRYPTION_KEY=generate-fernet-key-base64-encoded

# Application
COOKIE_SECURE=True
BASE_URL=https://solar.64b.de

# Email (if using)
FASTMAIL__MAIL_SERVER=smtp.example.com
[email protected]
# ... other email settings

Docker Deployment

The project uses pre-built Docker images from ghcr.io/bjoernh/pvinverterlink for deployments. Images are built automatically via CI/CD.

# Set registry credentials (required)
export DOCKER_REGISTRY_USERNAME="your-username"
export DOCKER_REGISTRY_PASSWORD="your-password"

# Set desired image tag (optional, defaults based on environment)
export IMAGE_TAG="v1.2.3"  # or 'prod', 'staging', 'test'

# Deploy using environment-specific scripts
cd deployment
./scripts/deploy-prod.sh     # Production
./scripts/deploy-staging.sh  # Staging
./scripts/deploy-test.sh     # Test

# Or manually with docker compose
docker login ghcr.io -u "$DOCKER_REGISTRY_USERNAME"
docker compose -f deployment/docker-compose.prod.yml pull
docker compose -f deployment/docker-compose.prod.yml up -d

# Run database migrations
docker compose -f deployment/docker-compose.prod.yml exec backend-prod \
  sh -c "ENV_FILE=/app/.env uv run alembic upgrade head"

# View logs
docker compose -f deployment/docker-compose.prod.yml logs -f

# Check health
curl https://solar.64b.de/healthcheck


Security Checklist

  • Set COOKIE_SECURE=True in production
  • Use strong random values for AUTH_SECRET and ENCRYPTION_KEY
  • Enable HTTPS/TLS for all endpoints (use reverse proxy like nginx or Traefik)
  • Configure firewall to allow only necessary ports (80, 443, 10000 for collector)
  • Use strong database passwords (avoid default credentials)
  • Regularly update dependencies: uv sync --upgrade
  • Monitor application logs for security events
  • Implement backup strategy for TimescaleDB
  • Restrict admin interface access (IP whitelist or VPN)
  • Rotate API keys periodically

Monitoring

The system uses structured logging with structlog. Key events logged:

  • User registration and verification
  • Login attempts (successful and failed)
  • Measurement data ingestion
  • Database errors
  • API authentication failures
  • RLS context operations

Configure log aggregation in production (e.g., ELK stack, Grafana Loki, or CloudWatch).
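A typical structlog call site, to show the shape of the events that end up in the aggregator (event and field names here are illustrative):

import structlog

log = structlog.get_logger()

# Keyword arguments become structured fields in the log output.
log.info("measurement_ingested", dtu_serial="145680140006", inverters=1, errors=0)
log.warning("api_key_rejected", header="X-API-Key", client_ip="203.0.113.7")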

CI/CD Pipeline

The project uses GitHub Actions for automated Docker image builds and pushes to the registry.

Automatic Builds

Images are built automatically on:

  • Push to main/master branch → Tags: latest, main, sha-xxxxx
  • Push to any branch → Tags: <branch-name>, test, sha-xxxxx
  • Creating git tags (e.g., v1.2.3) → Tags: v1.2.3, v1.2, v1, prod, sha-xxxxx

Manual Builds

# Build and push manually
deployment/scripts/build-and-push.sh v1.2.3

# Or trigger via GitHub Actions UI
# → Go to Actions tab → Run workflow → Select environment/tag

See Docker Registry Guide for complete CI/CD documentation.

Related Documentation

Contributing

We welcome contributions! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/your-feature-name
  3. Make your changes
  4. Write or update tests for your changes
  5. Ensure all tests pass: uv run pytest
  6. Commit your changes: git commit -m "feat: add your feature"
  7. Push to your fork: git push origin feature/your-feature-name
  8. Submit a pull request

Commit Message Convention

  • feat: New feature
  • fix: Bug fix
  • docs: Documentation changes
  • test: Test additions or modifications
  • refactor: Code refactoring
  • chore: Maintenance tasks

License

TBD

Authors

Björn Hauffe, WTF Kooperative eG

Acknowledgments

  • OpenDTU project for the excellent ESP32 firmware
  • TimescaleDB team for the powerful time-series extension
  • FastAPI community for the amazing web framework
