# Enterprise-Grade Token Management & Rate Limiting System
Documentation • Quick Start • Architecture • API
This is an auto-generated project built by AI agents. I already have experience with all of these technologies, but nobody cares if you don't have a GitHub repo with a pet project, so here it is.
Why do I use AI agents to generate projects? Because it's fun and I'm bored. Do I really work with all of these technologies? Yep. Ten years in, that's the answer.
- But you still have to write code yourself, because AI agents are not perfect and can't write everything for you. A project like this doesn't show your skills.
- Well, that's the new world. I worked hard before, writing code day and night, but times have changed.
```
┌───────────────────────────────────────────────────┐
│                    API Gateway                    │
│              (Port 8080 - Entry Point)            │
└─────────────┬─────────────────────────────────────┘
              │
     ┌────────┼─────────┬────────────┐
     │        │         │            │
     ▼        ▼         ▼            ▼
 ┌───────┐ ┌───────┐ ┌─────────┐ ┌────────┐
 │ Auth  │ │ Token │ │Analytics│ │ Config │
 │ 8081  │ │ 8082  │ │  8083   │ │  8888  │
 └───────┘ └───────┘ └─────────┘ └────────┘
  PostgreSQL   Redis     Kafka    Cassandra
```
| Service | Port | Purpose | Status |
|---|---|---|---|
| API Gateway | 8080 | Entry point, routing, rate limiting | ✅ Ready |
| Auth Server | 8081 | JWT & API key authentication | ✅ Ready |
| Token Service | 8082 | Core business logic | ✅ Ready |
| Analytics | 8083 | Usage tracking & reports | ✅ Ready |
| Eureka | 8761 | Service discovery | ✅ Ready |
| Config | 8888 | Centralized configuration | ✅ Ready |
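To make the gateway's role concrete, here is a hypothetical Spring Cloud Gateway routing fragment. This is an illustration only: the actual routes live in the Config service and may differ, and the service ids and path patterns below are assumptions based on the ports and endpoints listed above.

```yaml
# Hypothetical gateway routes (illustrative, not the project's actual config).
# lb:// resolves service instances through the Eureka registry.
spring:
  cloud:
    gateway:
      routes:
        - id: auth-server
          uri: lb://auth-server
          predicates:
            - Path=/api/v1/auth/**
        - id: token-service
          uri: lb://token-service
          predicates:
            - Path=/api/v1/tokens/**
        - id: analytics-service
          uri: lb://analytics-service
          predicates:
            - Path=/api/v1/analytics/**
```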
- Java 25
- Docker & Docker Compose
- Gradle (wrapper included)
```bash
# Clone repository
git clone <repository-url> && cd leaky-tokens

# Start infrastructure
docker-compose -f docker-compose.infra.yml up -d

# Run all services
./gradlew bootRun --parallel

# Check service health
curl http://localhost:8082/api/v1/tokens/status | jq

# Register a user
curl -X POST http://localhost:8081/api/v1/auth/register \
  -H "Content-Type: application/json" \
  -d '{"username":"demo","email":"[email protected]","password":"password"}' | jq

# Test token consumption
curl -X POST http://localhost:8082/api/v1/tokens/consume \
  -H "Content-Type: application/json" \
  -d '{"userId":"YOUR_USER_ID","provider":"openai","tokens":50}' | jq
```

- Import the collection `docs/postman/leaky-tokens.postman_collection.json`.
- Import the environment `docs/postman/leaky-tokens.postman_environment.json`.
- Select the `Leaky Tokens Local` environment.
- Run `Auth/Login` to auto-populate `accessToken` and `userId`, then exercise the other requests.
| Guide | Description |
|---|---|
| Overview | Project purpose & architecture |
| Getting Started | Installation & setup |
| User Guide | How to use the API |
| Architecture | Technical deep-dive |
| Use Cases | Business scenarios |
| Configuration | All config options |
| Monitoring | Metrics, logs, tracing |
| API Reference | Complete API docs |
| Troubleshooting | Common issues |
| Development | Contributing guide |
### Leaky Bucket (Default)

Tokens leak at a constant rate, smoothing out traffic bursts:

```yaml
token:
  bucket:
    capacity: 1000
    leakRatePerSecond: 10.0
    strategy: LEAKY_BUCKET
```

### Token Bucket

Allows short bursts while maintaining an average rate:

```yaml
token:
  bucket:
    capacity: 1000
    leakRatePerSecond: 10.0
    strategy: TOKEN_BUCKET
```

### Fixed Window

Simple counter-based limiting:

```yaml
token:
  bucket:
    capacity: 1000
    windowSeconds: 60
    strategy: FIXED_WINDOW
```
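To make the default strategy concrete, here is a minimal, self-contained Java sketch of a leaky bucket. It is an illustration only, not the project's actual implementation; the class and field names are invented for the example, and the real service tracks state in Redis rather than in memory.

```java
// Minimal leaky-bucket sketch (illustrative, not the project's implementation).
// The bucket fills on each request and drains ("leaks") at leakRatePerSecond;
// a request is rejected when it would overflow the capacity.
public class LeakyBucket {
    private final double capacity;
    private final double leakRatePerSecond;
    private double level = 0.0;       // current fill level, in tokens
    private long lastLeakNanos;

    public LeakyBucket(double capacity, double leakRatePerSecond) {
        this.capacity = capacity;
        this.leakRatePerSecond = leakRatePerSecond;
        this.lastLeakNanos = System.nanoTime();
    }

    /** Try to consume the given number of tokens; false means rate-limited. */
    public synchronized boolean tryConsume(double tokens) {
        leak();
        if (level + tokens > capacity) {
            return false; // would overflow: reject the request
        }
        level += tokens;
        return true;
    }

    /** Drain the bucket according to the time elapsed since the last call. */
    private void leak() {
        long now = System.nanoTime();
        double elapsedSeconds = (now - lastLeakNanos) / 1_000_000_000.0;
        level = Math.max(0.0, level - elapsedSeconds * leakRatePerSecond);
        lastLeakNanos = now;
    }

    public static void main(String[] args) {
        LeakyBucket bucket = new LeakyBucket(100, 10.0);
        System.out.println(bucket.tryConsume(80)); // true: 80 <= 100
        System.out.println(bucket.tryConsume(50)); // false: ~80 + 50 > 100
    }
}
```

With `capacity: 1000` and `leakRatePerSecond: 10.0` as in the config above, a client that stays under ten tokens per second on average is never rejected, while a sustained burst is cut off once the bucket fills.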
```bash
# Check quota
curl "http://localhost:8082/api/v1/tokens/quota?userId=...&provider=openai"

# Consume tokens
curl -X POST http://localhost:8082/api/v1/tokens/consume \
  -d '{"userId":"...","provider":"openai","tokens":100}'

# Purchase more tokens
curl -X POST http://localhost:8082/api/v1/tokens/purchase \
  -H "Idempotency-Key: purchase-001" \
  -d '{"userId":"...","provider":"openai","tokens":1000}'
```

| Tool | URL | Description |
|---|---|---|
| Grafana | http://localhost:3000 | Dashboards (admin/admin) |
| Prometheus | http://localhost:9090 | Metrics collection |
| Jaeger | http://localhost:16686 | Distributed tracing |
| Eureka | http://localhost:8761 | Service registry |
View end-to-end request flows across microservices:
```bash
# Start with full tracing stack (includes Jaeger)
docker-compose -f docker-compose.full.yml up -d

# Access Jaeger UI
open http://localhost:16686

# Make some requests and watch traces appear
curl -X POST http://localhost:8080/api/v1/tokens/consume \
  -H "Content-Type: application/json" \
  -d '{"userId":"...","provider":"openai","tokens":50}'
```

Trace Analysis:
- See request paths through all services
- Identify latency bottlenecks
- Debug distributed issues
- 100% sampling in development mode
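Assuming the services use Spring Boot with Micrometer Tracing (an assumption based on the Jaeger setup above; check each service's `application.yml`), the 100% development-mode sampling mentioned above maps to a property like:

```yaml
management:
  tracing:
    sampling:
      probability: 1.0  # sample every request; lower this in production
```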
Full Tracing Guide
```bash
# Service logs
./gradlew :token-service:bootRun 2>&1 | tee service.log

# Docker logs
docker-compose logs -f token-service

# Search for errors
grep "ERROR" service.log
```

```bash
# JWT Token
curl -H "Authorization: Bearer $JWT_TOKEN" ...

# API Key
curl -H "X-Api-Key: leaky_userid_xxxx" ...
```

| Method | Endpoint | Description | Auth |
|---|---|---|---|
| POST | `/api/v1/auth/register` | Create user account | Public |
| POST | `/api/v1/auth/login` | Authenticate | Public |
| GET | `/api/v1/tokens/quota` | Check user quota | JWT |
| POST | `/api/v1/tokens/consume` | Consume tokens | JWT |
| POST | `/api/v1/tokens/purchase` | Buy tokens | JWT |
| GET | `/api/v1/analytics/report` | Usage report | JWT |
Full API Documentation →
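The JWT-protected calls in the table above can also be issued from Java. Below is a minimal sketch using the standard `java.net.http` client; the token value, user id, and class name are placeholders, and the request is only built (sending it requires the services to be running).

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ApiAuthExample {

    // Builds an authenticated "consume" request; in a real flow the JWT
    // comes from POST /api/v1/auth/login (placeholder value used here).
    static HttpRequest buildConsumeRequest(String jwt) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8082/api/v1/tokens/consume"))
                .header("Authorization", "Bearer " + jwt)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"userId\":\"YOUR_USER_ID\",\"provider\":\"openai\",\"tokens\":50}"))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildConsumeRequest("YOUR_JWT_TOKEN");
        System.out.println(request.method() + " " + request.uri());
        // To actually send it (with the stack running):
        // java.net.http.HttpClient.newHttpClient()
        //     .send(request, java.net.http.HttpResponse.BodyHandlers.ofString());
    }
}
```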
```bash
# Run all tests
./gradlew test

# Run with coverage
./gradlew jacocoTestReport

# Performance tests
./gradlew :performance-tests:gatlingRun
```

We welcome contributions! Please see our Development Guide for:
- Project structure
- Code style guidelines
- Testing best practices
- Development setup
⭐ Star this repo if you find it helpful!
Made with ❤️ and 🤖 AI assistance