A fully automated system that seamlessly bridges the gap between an athlete's recorded activities on Strava and their coach's prescribed training log in Google Sheets.
The Academy Sync eliminates the tedious, error-prone, and time-consuming task of manually transferring training data from Strava to Google Sheets. By automating this process, athletes can focus purely on their training and recovery, knowing their data is being meticulously managed in the background.
- Automated Data Transfer: Fetches run data from Strava and logs it to Google Sheets according to coach-prescribed formatting rules
- Intelligent Processing: Handles complex workout descriptions, RPE calculations, and data aggregation
- 7-Day Lookback: Automatically processes missed entries from the past week
- Smart Scheduling: Processes data based on the user's local timezone (3:00-3:59 AM window); see the sketch after this list
- Manual Sync: On-demand processing via web interface
- Email Notifications: Daily summary emails with processing results
- Multi-User Ready: Architected to support multiple users with isolated processing
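A minimal sketch of the Smart Scheduling window check, assuming the user's IANA timezone name is already stored in their configuration; `inSyncWindow` and the example timezone are illustrative, not the project's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// inSyncWindow reports whether the given instant falls inside the
// 3:00-3:59 AM window in the user's local timezone.
func inSyncWindow(now time.Time, tz string) (bool, error) {
	loc, err := time.LoadLocation(tz)
	if err != nil {
		return false, fmt.Errorf("invalid timezone %q: %w", tz, err)
	}
	return now.In(loc).Hour() == 3, nil
}

func main() {
	ok, err := inSyncWindow(time.Now(), "Europe/Sofia") // example timezone
	if err != nil {
		panic(err)
	}
	fmt.Println("inside sync window:", ok)
}
```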
The system follows a microservices architecture deployed on Google Cloud Platform:
- Web App: React SPA for user configuration and monitoring
- Backend API: Go service handling authentication and configuration
- Automation Engine: Go service for core data processing
- Notification Service: Go service for email delivery
- Database: PostgreSQL for user data and logs
- Job Queues: Redis for asynchronous processing
- Backend: Go with Chi framework
- Frontend: React
- Database: PostgreSQL
- Queues: Redis
- Cloud Platform: Google Cloud Platform
- Authentication: Google OAuth 2.0
- External APIs: Strava API, Google Sheets API
- Email: SendGrid
- Infrastructure: Terraform
The Academy Sync now includes a comprehensive manual sync feature that allows users to trigger on-demand synchronization of their Strava activities to Google Sheets.
- User Trigger: User clicks "Sync Now" button in the web interface
- API Request: Frontend sends a POST request to the `/api/sync` endpoint
- Validation: Backend validates user authentication and configuration:
  - Checks JWT authentication
  - Validates Strava connection exists
  - Validates Google Spreadsheet is configured
- Job Enqueueing: Valid requests are enqueued to the Redis `jobs_queue`
- Worker Processing: Automation engine workers dequeue and process jobs
- Data Transfer: Activities are fetched from Strava and written to Google Sheets
- Completion: Job processing results are logged for monitoring
The manual sync uses a robust Redis-based queue system:
- Producer: Backend API enqueues sync jobs
- Consumer: Automation engine dequeues jobs with configurable worker pool
- Queue: The Redis `jobs_queue` ensures FIFO processing and persistence
- Scaling: Configurable `MAX_WORKERS` (1-1000, default: 20)
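The producer side can be pictured as a small sketch, assuming the `github.com/redis/go-redis/v9` client; the `SyncJob` shape is hypothetical, the real schema lives in the shared queue package. LPUSH on the producer paired with BRPOP on the consumer gives the FIFO ordering described above:

```go
package main

import (
	"context"
	"encoding/json"
	"time"

	"github.com/redis/go-redis/v9"
)

// SyncJob is a hypothetical payload shape; the real schema lives in the
// shared internal/pkg/queue package.
type SyncJob struct {
	UserID     int64     `json:"user_id"`
	JobType    string    `json:"job_type"` // "manual_sync" or "scheduled_sync"
	TraceID    string    `json:"trace_id"`
	EnqueuedAt time.Time `json:"enqueued_at"`
}

// enqueueSyncJob pushes a JSON-encoded job onto jobs_queue.
func enqueueSyncJob(ctx context.Context, rdb *redis.Client, job SyncJob) error {
	payload, err := json.Marshal(job)
	if err != nil {
		return err
	}
	return rdb.LPush(ctx, "jobs_queue", payload).Err()
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "redis:6379"})
	job := SyncJob{UserID: 1, JobType: "manual_sync", TraceID: "trace-1", EnqueuedAt: time.Now()}
	if err := enqueueSyncJob(context.Background(), rdb, job); err != nil {
		panic(err)
	}
}
```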
Each sync job processes multiple days of activities based on the sync type:
Manual Sync Processing:
- Today's activities (from midnight to current time)
- Yesterday's activities (full day)
- 7-day lookback period (days 2-8 in the past)
Scheduled Sync Processing:
- Yesterday's activities (full day)
- 7-day lookback period (days 2-8 in the past)
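A sketch of how those day windows could be computed; `syncWindows` and `dayWindow` are illustrative names, not the engine's actual API. Days 1 through 8 in the past cover yesterday plus the 7-day lookback, and manual syncs additionally include a partial window for today:

```go
package main

import (
	"fmt"
	"time"
)

// dayWindow returns the [start, end) range for the day daysAgo days
// before now, in now's location.
func dayWindow(now time.Time, daysAgo int) (time.Time, time.Time) {
	y, m, d := now.AddDate(0, 0, -daysAgo).Date()
	start := time.Date(y, m, d, 0, 0, 0, 0, now.Location())
	return start, start.AddDate(0, 0, 1)
}

// syncWindows builds the windows for one job: manual syncs add a partial
// window for today (midnight to current time); both sync types cover
// yesterday plus the 7-day lookback (days 2-8 in the past).
func syncWindows(now time.Time, manual bool) [][2]time.Time {
	var windows [][2]time.Time
	if manual {
		start, _ := dayWindow(now, 0)
		windows = append(windows, [2]time.Time{start, now})
	}
	for daysAgo := 1; daysAgo <= 8; daysAgo++ {
		start, end := dayWindow(now, daysAgo)
		windows = append(windows, [2]time.Time{start, end})
	}
	return windows
}

func main() {
	for _, w := range syncWindows(time.Now(), true) {
		fmt.Println(w[0].Format(time.RFC3339), "→", w[1].Format(time.RFC3339))
	}
}
```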
The processing follows these steps:
- 🚀 Starting automation processing: Initialize job with context and OAuth credentials
- 📋 Step 1: Retrieving user configuration: Load user settings and validate automation is enabled
- 🏃 Step 2: Creating Strava API client: Initialize Strava client with token management
- 📊 Step 3: Creating Google Sheets API client: Initialize Sheets client with token management
- 🔐 Step 4: Validating Google Sheets access: Verify spreadsheet permissions
- 📊 Step 5: Processing data based on job type: Execute the appropriate sync logic
- 🎉 Successfully completed: Log summary with total activities processed
Activity Count in Logs:
The `activity_count` shown in logs represents the total number of activities processed across all days in the current sync operation, not just a single day. For example:
- 2 activities today + 3 activities yesterday + 2 from the lookback period = `activity_count: 7`
```bash
# Required environment variables
REDIS_URL=redis://redis:6379   # Redis connection string
MAX_WORKERS=20                 # Worker pool size (1-1000)
```

Sizing considerations:
- Development: 5-10 workers are typically sufficient
- Production: 20-50 workers depending on load
- API Rate Limits: Consider Strava (600 requests/15min) and Google Sheets quotas
- Memory Usage: Each worker uses ~10-20 MB of memory
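The consumer side might look like this minimal sketch, again assuming `github.com/redis/go-redis/v9`; `processJob` is a hypothetical stand-in for the real job handler, and real services would also handle shutdown signals:

```go
package main

import (
	"context"
	"log/slog"
	"os"
	"strconv"

	"github.com/redis/go-redis/v9"
)

func processJob(ctx context.Context, payload string) {
	slog.Info("processing job", "payload", payload) // hypothetical handler
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "redis:6379"})

	workers, err := strconv.Atoi(os.Getenv("MAX_WORKERS"))
	if err != nil || workers < 1 || workers > 1000 {
		workers = 20 // documented default
	}

	for i := 0; i < workers; i++ {
		go func(id int) {
			for {
				// BRPOP blocks until a job is available; result is [key, payload].
				res, err := rdb.BRPop(ctx, 0, "jobs_queue").Result()
				if err != nil {
					slog.Error("dequeue failed", "worker", id, "error", err)
					continue
				}
				processJob(ctx, res[1])
			}
		}(i)
	}
	select {} // block forever; a real service would wait on shutdown signals
}
```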
```
POST /api/sync
Authorization: Bearer <jwt-token>
Content-Type: application/json
```

Response (202 Accepted):

```json
{
  "status": "accepted",
  "message": "Sync request has been queued for processing"
}
```

```
GET /api/sync/status
Authorization: Bearer <jwt-token>
```

Response (200 OK):

```json
{
  "eligible": true,
  "reason": ""
}
```

The system includes comprehensive error handling:
- Validation Errors: 400 Bad Request for missing Strava/Sheets configuration
- Authentication Errors: 401 Unauthorized for invalid/expired tokens
- Service Errors: 503 Service Unavailable for Redis connection issues
- OAuth Reauth: Automatic detection and handling of expired OAuth tokens
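A sketch of how those status codes could map onto a plain `net/http` handler (which also works under Chi); all helper functions here are hypothetical stubs standing in for the real auth, config, and queue layers:

```go
package main

import (
	"encoding/json"
	"errors"
	"net/http"
)

type User struct{ ID int64 }

// Hypothetical stubs for the real auth, config, and queue layers.
func userFromJWT(r *http.Request) (User, error) {
	if r.Header.Get("Authorization") == "" {
		return User{}, errors.New("missing token")
	}
	return User{ID: 1}, nil
}
func hasStravaConnection(u User) bool      { return true }
func hasSpreadsheetConfigured(u User) bool { return true }
func enqueue(u User) error                 { return nil }

// handleSync sketches the documented validation order and status codes.
func handleSync(w http.ResponseWriter, r *http.Request) {
	user, err := userFromJWT(r)
	if err != nil {
		http.Error(w, "invalid or expired token", http.StatusUnauthorized) // 401
		return
	}
	if !hasStravaConnection(user) || !hasSpreadsheetConfigured(user) {
		http.Error(w, "Strava connection and spreadsheet must be configured", http.StatusBadRequest) // 400
		return
	}
	if err := enqueue(user); err != nil {
		http.Error(w, "queue unavailable", http.StatusServiceUnavailable) // 503
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusAccepted) // 202
	json.NewEncoder(w).Encode(map[string]string{
		"status":  "accepted",
		"message": "Sync request has been queued for processing",
	})
}

func main() {
	http.HandleFunc("/api/sync", handleSync)
	http.ListenAndServe(":8080", nil)
}
```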
All sync operations include detailed logging:
- Queue Operations: Job enqueue/dequeue with timestamps and trace IDs
- Processing Steps: Step-by-step progress logging with performance metrics
- Error Details: Comprehensive error context for troubleshooting
- Token Management: OAuth token validity and refresh status
- API Interactions: External API call logging and rate limit tracking
Monitoring Job Processing:
```bash
# Watch automation engine logs for job processing
docker-compose logs -f automation-engine | grep -E "(Processing job|Successfully completed|ERROR)"

# Monitor queue operations
docker-compose logs -f backend-api automation-engine | grep -E "(enqueue|dequeue|jobs_queue)"

# Check specific user processing
docker-compose logs automation-engine | grep "user_id\":1"
```

Key Log Messages to Monitor:
- `"Successfully enqueued job"` - Job added to queue by backend API
- `"Processing job"` - Worker picked up job from queue
- `"Successfully completed automation processing"` - Job finished with activity count
- `"Failed to retrieve user configuration"` - User config issues
- `"Google Sheets access requires re-authorization"` - OAuth token expired
The system performs startup health checks:
- Database Connectivity: PostgreSQL connection validation with retries
- Redis Connectivity: Redis queue connection validation with retries
- OAuth Configuration: Google and Strava client credential validation
- Fail-Fast Behavior: System exits if critical dependencies are unavailable
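A sketch of the retry-then-exit pattern described above, assuming the standard `database/sql` API with the `lib/pq` driver (the project may use a different driver); `waitFor` is an illustrative helper:

```go
package main

import (
	"context"
	"database/sql"
	"log/slog"
	"os"
	"time"

	_ "github.com/lib/pq" // Postgres driver; an assumption, the project may use another
)

// waitFor pings a dependency with retries and fails fast if it never
// becomes available, mirroring the startup behavior described above.
func waitFor(name string, attempts int, ping func(context.Context) error) {
	for i := 1; i <= attempts; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		err := ping(ctx)
		cancel()
		if err == nil {
			slog.Info("dependency ready", "name", name)
			return
		}
		slog.Warn("dependency not ready", "name", name, "attempt", i, "error", err)
		time.Sleep(time.Duration(i) * time.Second) // linear backoff
	}
	slog.Error("dependency unavailable, exiting", "name", name)
	os.Exit(1) // fail fast
}

func main() {
	db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		slog.Error("invalid DATABASE_URL", "error", err)
		os.Exit(1)
	}
	waitFor("postgres", 10, db.PingContext)
}
```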
```
/the-academy-sync/
├── cmd/                    # Main Go applications
│   ├── backend-api/
│   ├── automation-engine/
│   └── notification-service/
├── internal/               # Shared private Go packages (TBD)
│   └── pkg/
│       ├── database/       # Shared DB Repository
│       └── queue/          # Shared Queue Client
├── web/                    # React frontend application (Next.js)
├── terraform/              # Infrastructure as Code (TBD)
├── .github/                # CI/CD workflows (TBD)
├── docs/                   # Project documentation
│   ├── BRD.md              # Business Requirements Document
│   └── SDD.md              # System Design Document
├── Dockerfile              # Multi-stage Dockerfile for Go services
├── docker-compose.yml      # Local development setup
├── go.mod                  # Go module definition
└── go.sum                  # Go module checksums
```
- Go 1.23+
- Node.js 18+
- Docker & Docker Compose
- PostgreSQL (for local development)
- Redis (for local development)
Quick Start:

1. Clone the repository:
   ```bash
   git clone https://github.com/Perseverance/the-academy-sync-claude.git
   cd the-academy-sync-claude
   ```
2. Configure environment:
   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```
3. Start the entire application stack:
   ```bash
   docker-compose up --build
   ```
4. Access the applications:
   - Web UI: http://localhost:3000
   - Backend API: http://localhost:8080
   - PostgreSQL: localhost:5433
   - Redis: localhost:6380
Development Commands:
```bash
# Start all services in the background
docker-compose up -d

# View logs from all services
docker-compose logs -f

# View logs from a specific service
docker-compose logs -f backend-api

# Stop all services
docker-compose down

# Stop and remove volumes (data will be lost)
docker-compose down -v

# Rebuild and restart all services
docker-compose up --build

# Restart a specific service
docker-compose restart backend-api
```

Live Reloading:
The Go services are configured with Air for automatic live reloading during development. When you modify Go source files, the affected service will automatically rebuild and restart.
Note: The Air configuration excludes test files (`*_test.go`, `test_*.go`, `debug_*.go`) from triggering rebuilds to prevent unnecessary restarts during debugging.
Database Persistence:
PostgreSQL data is persisted in a Docker volume. Your data will survive container restarts but will be lost if you run docker-compose down -v.
If jobs are not being processed:

1. Check for competing connections:
   ```bash
   # Check BRPOP connections
   docker-compose exec redis redis-cli CLIENT LIST | grep "cmd=brpop"
   # Check for host processes
   ps aux | grep -E "automation-engine|automatio" | grep -v docker
   ```
2. Kill any host processes that might be consuming jobs:
   ```bash
   # If you find automation-engine processes on the host
   kill <PID>
   ```
3. Verify only Docker containers are connected:
   - There should be exactly one BRPOP connection, from the automation-engine container
   - No connections should come from the host machine (172.x.0.1)
Alternatively, you can run services individually for development.

Build container images for each service using the multi-stage Dockerfile for Go services:

```bash
# Backend API
docker build --build-arg SERVICE_NAME=backend-api -t the-academy-sync-backend-api .

# Automation Engine
docker build --build-arg SERVICE_NAME=automation-engine -t the-academy-sync-automation-engine .

# Notification Service
docker build --build-arg SERVICE_NAME=notification-service -t the-academy-sync-notification-service .
```

Build and run the React frontend:

```bash
# Build the web application
cd web
docker build -t academy-sync-web .

# Run the web application
docker run -p 8080:8080 academy-sync-web
```

The web application will be available at http://localhost:8080.
The Academy Sync uses a hybrid configuration loading strategy that supports both local development and production environments.
The system automatically detects the environment using the following priority:

1. `APP_ENV` environment variable
2. `GO_ENV` environment variable (fallback)
3. Default to `local`

- Local/Development (`APP_ENV=local`, `development`, or `dev`): Loads from the `.env` file and environment variables
- Production/Staging (`APP_ENV=production` or `staging`): Loads from Google Secret Manager with environment variable fallback
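The detection logic amounts to a few lines; this sketch mirrors the documented priority rather than quoting the actual config package:

```go
package main

import (
	"fmt"
	"os"
)

// detectEnv mirrors the documented priority: APP_ENV, then GO_ENV,
// then a default of "local".
func detectEnv() string {
	if env := os.Getenv("APP_ENV"); env != "" {
		return env
	}
	if env := os.Getenv("GO_ENV"); env != "" {
		return env
	}
	return "local"
}

// isLocal covers the documented local-mode environment names.
func isLocal(env string) bool {
	return env == "local" || env == "development" || env == "dev"
}

func main() {
	env := detectEnv()
	fmt.Printf("environment=%s local=%t\n", env, isLocal(env))
}
```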
Key environment variables:

General:
- `APP_ENV` - Environment name (`local`, `development`, `production`, etc.)
- `PORT` - Service port (default: 8080)
- `LOG_LEVEL` - Logging level (`DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`; default: `INFO`)

Database:
- `DATABASE_URL` - Complete PostgreSQL connection string (auto-generated if not provided)
- `POSTGRES_DB` - Database name (default: academy_sync)
- `POSTGRES_USER` - Database username (default: postgres)
- `POSTGRES_PASSWORD` - Database password (required in production)
- `POSTGRES_HOST` - Database host (default: localhost)
- `POSTGRES_PORT` - Database port (default: 5433 for local, 5432 for production)

Redis:
- `REDIS_URL` - Complete Redis connection string (auto-generated if not provided)
- `REDIS_HOST` - Redis host (default: localhost)
- `REDIS_PORT` - Redis port (default: 6380 for local, 6379 for production)

Workers:
- `MAX_WORKERS` - Maximum concurrent workers for sync job processing (default: 20, range: 1-1000)

OAuth:
- `GOOGLE_CLIENT_ID` - Google OAuth client ID
- `GOOGLE_CLIENT_SECRET` - Google OAuth client secret
- `STRAVA_CLIENT_ID` - Strava OAuth client ID
- `STRAVA_CLIENT_SECRET` - Strava OAuth client secret

Security:
- `JWT_SECRET` - JWT signing secret (required in production)

Email:
- `SMTP_HOST` - SMTP server host (default: smtp.gmail.com)
- `SMTP_PORT` - SMTP server port (default: 587)
- `SMTP_USERNAME` - SMTP username
- `SMTP_PASSWORD` - SMTP password
- `FROM_EMAIL` - From email address

Google Cloud:
- `GCP_PROJECT_ID` - Google Cloud Project ID (for Secret Manager integration)
1. Copy the example environment file:
   ```bash
   cp .env.example .env
   ```
2. Edit `.env` with your configuration values
3. The configuration is loaded automatically when any service starts
The system performs validation on startup:
- Critical fields must be present
- JWT secret is required in production environments
- Port must be a valid number
- Service will fail to start if validation fails
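A sketch of these validation rules; the `Config` struct is a simplified stand-in for the real one in `internal/pkg/config`:

```go
package main

import (
	"fmt"
	"strconv"
)

// Config holds the subset of fields relevant to the startup checks
// described above; the real struct lives in internal/pkg/config.
type Config struct {
	Environment string
	Port        string
	JWTSecret   string
}

// Validate enforces the documented rules: the port must be numeric, and
// the JWT secret is mandatory outside local development.
func (c Config) Validate() error {
	if _, err := strconv.Atoi(c.Port); err != nil {
		return fmt.Errorf("PORT must be a valid number, got %q", c.Port)
	}
	if (c.Environment == "production" || c.Environment == "staging") && c.JWTSecret == "" {
		return fmt.Errorf("JWT_SECRET is required in %s", c.Environment)
	}
	return nil
}

func main() {
	cfg := Config{Environment: "production", Port: "8080"}
	if err := cfg.Validate(); err != nil {
		panic(err) // real services log the error and exit (fail-fast behavior)
	}
}
```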
The Academy Sync uses structured JSON logging powered by Go's log/slog package. All logs are output to stdout/stderr for cloud-native deployments.
The system supports five log levels controlled by the LOG_LEVEL environment variable:
- DEBUG: Detailed information for diagnosing problems (includes OAuth flows, database queries, etc.)
- INFO: General information about system operation (default level)
- WARNING: Warning messages for potential issues
- ERROR: Error messages for failures that don't stop execution
- CRITICAL: Critical errors that may stop system operation
Docker Compose Development:

```bash
# Set in environment
LOG_LEVEL=DEBUG docker-compose up

# Or add to .env file
echo "LOG_LEVEL=DEBUG" >> .env
docker-compose up
```

Direct Go Execution:

```bash
LOG_LEVEL=DEBUG go run ./cmd/backend-api
```

Production Deployment:
Set `LOG_LEVEL` as an environment variable in your deployment configuration.
All logs are output in structured JSON format:
```json
{
  "time": "2025-06-14T11:51:29.460402+03:00",
  "level": "INFO",
  "msg": "Backend API starting",
  "service": "backend-api",
  "environment": "development",
  "port": "8080",
  "additional_fields": "..."
}
```

This format enables easy parsing by log aggregation systems such as the ELK stack, Grafana Loki, or cloud logging services.
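A sketch of the `log/slog` setup that would produce output in this shape; the CRITICAL-to-ERROR mapping is an assumption, since slog has no built-in CRITICAL level:

```go
package main

import (
	"log/slog"
	"os"
	"strings"
)

// levelFromEnv maps the documented LOG_LEVEL values onto slog levels.
// CRITICAL is treated as ERROR here, an assumed mapping.
func levelFromEnv() slog.Level {
	switch strings.ToUpper(os.Getenv("LOG_LEVEL")) {
	case "DEBUG":
		return slog.LevelDebug
	case "WARNING":
		return slog.LevelWarn
	case "ERROR", "CRITICAL":
		return slog.LevelError
	default:
		return slog.LevelInfo // documented default
	}
}

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{Level: levelFromEnv()}))
	slog.SetDefault(logger)
	slog.Info("Backend API starting", "service", "backend-api", "port", "8080")
}
```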
The configuration system includes full Google Secret Manager support for production deployments:
- Production Mode: When `APP_ENV=production` and `GCP_PROJECT_ID` is set, the system loads secrets from Google Secret Manager
- Fallback Behavior: If Secret Manager is unavailable (no credentials, network issues, etc.), the system gracefully falls back to environment variables
- Authentication: Uses Application Default Credentials (ADC) - see the GCP Authentication docs
- Logging: Provides clear feedback about Secret Manager connection status and the number of secrets loaded
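A sketch of the load-with-fallback behavior, assuming the `cloud.google.com/go/secretmanager/apiv1` client; `loadSecret` is an illustrative helper, not the project's actual loader:

```go
package main

import (
	"context"
	"fmt"
	"os"

	secretmanager "cloud.google.com/go/secretmanager/apiv1"
	"cloud.google.com/go/secretmanager/apiv1/secretmanagerpb"
)

// loadSecret tries Secret Manager first and falls back to an environment
// variable, mirroring the graceful fallback described above.
func loadSecret(ctx context.Context, client *secretmanager.Client, projectID, name, envFallback string) string {
	if client != nil {
		req := &secretmanagerpb.AccessSecretVersionRequest{
			Name: fmt.Sprintf("projects/%s/secrets/%s/versions/latest", projectID, name),
		}
		if resp, err := client.AccessSecretVersion(ctx, req); err == nil {
			return string(resp.GetPayload().GetData())
		}
	}
	return os.Getenv(envFallback) // graceful fallback
}

func main() {
	ctx := context.Background()
	projectID := os.Getenv("GCP_PROJECT_ID")

	// NewClient uses Application Default Credentials; a nil client means
	// every lookup falls back to environment variables.
	client, err := secretmanager.NewClient(ctx)
	if err != nil {
		client = nil
	} else {
		defer client.Close()
	}

	jwtSecret := loadSecret(ctx, client, projectID, "jwt-secret", "JWT_SECRET")
	fmt.Println("jwt secret loaded:", jwtSecret != "")
}
```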
Secret Naming Convention:
- `database-url` - Complete database connection string
- `database-password` - Database password (for URL construction)
- `redis-url` - Complete Redis connection string
- `google-client-id` / `google-client-secret` - Google OAuth credentials
- `strava-client-id` / `strava-client-secret` - Strava OAuth credentials
- `jwt-secret` - JWT signing secret
- `smtp-username` / `smtp-password` - Email credentials
- `from-email` - Email sender address
Example GCP Setup:
```bash
# Set up Application Default Credentials
gcloud auth application-default login

# Set project for Secret Manager
export GCP_PROJECT_ID=your-project-id
export APP_ENV=production

# Service will now load secrets from Secret Manager
./backend-api
```

The Academy Sync uses golang-migrate/migrate for database schema management. All migration files are stored in `internal/pkg/database/migrations/`.

Migration files follow the pattern `NNNNNN_description.up.sql` and `NNNNNN_description.down.sql`, where:

- `NNNNNN` is a 6-digit sequence number (e.g., `000001`)
- `description` is a brief description of the migration
- `.up.sql` contains the forward migration (creating/altering tables)
- `.down.sql` contains the rollback migration (undoing the changes)
Install the migrate CLI tool:
```bash
# Install migrate CLI
go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@latest
```

The migration commands require a PostgreSQL database URL:

```bash
# Local development (using docker-compose)
export DATABASE_URL="postgres://postgres:password@localhost:5433/academy_sync?sslmode=disable"

# Or for production
export DATABASE_URL="postgres://username:password@host:port/database?sslmode=require"
```

Apply migrations:

```bash
# Apply all pending migrations
migrate -path internal/pkg/database/migrations -database "$DATABASE_URL" up

# Apply a specific number of migrations
migrate -path internal/pkg/database/migrations -database "$DATABASE_URL" up 1
```

Roll back migrations:

```bash
# Rollback the last migration
migrate -path internal/pkg/database/migrations -database "$DATABASE_URL" down 1

# Rollback all migrations (WARNING: this will drop all tables)
migrate -path internal/pkg/database/migrations -database "$DATABASE_URL" down
```

Check migration status:

```bash
# Show current migration version
migrate -path internal/pkg/database/migrations -database "$DATABASE_URL" version

# Check if the database is up to date
migrate -path internal/pkg/database/migrations -database "$DATABASE_URL" up
```

If migrations get into a bad state:

```bash
# Force set the migration version (use with caution)
migrate -path internal/pkg/database/migrations -database "$DATABASE_URL" force VERSION_NUMBER
```

To create a new migration:
```bash
# Create new migration files
migrate create -ext sql -dir internal/pkg/database/migrations -seq description_of_change

# This creates:
# internal/pkg/database/migrations/NNNNNN_description_of_change.up.sql
# internal/pkg/database/migrations/NNNNNN_description_of_change.down.sql
```

- Always test both up and down migrations in a development environment
- Keep migrations small and focused on a single logical change
- Never edit existing migration files after they've been applied in production
- Use transactions when possible to ensure atomic operations
- Add appropriate indexes for performance
- Include rollback logic in every down migration
When using Docker Compose for local development, migrations are automatically applied when you start the services:
```bash
# Start all services (migrations will run automatically)
docker-compose up

# Or run in the background
docker-compose up -d
```

The migrate service will:
- Wait for PostgreSQL to be ready
- Apply all pending migrations
- Exit successfully
- Allow dependent services (backend-api, automation-engine, etc.) to start
If you need to run migrations manually:
```bash
# Set the local database URL
export DATABASE_URL="postgres://postgres:password@localhost:5433/academy_sync?sslmode=disable"

# Apply migrations
migrate -path internal/pkg/database/migrations -database "$DATABASE_URL" up
```

Common Go commands:

- `go build ./cmd/<service-name>` - Build a specific Go application
- `go run ./cmd/<service-name>` - Run an application directly
- `go test ./...` - Run all tests
- `go test -v ./...` - Run tests with verbose output
- `go test -cover ./...` - Run tests with coverage
- `go fmt ./...` - Format Go source files
- `go vet ./...` - Run static analysis
- `go test ./internal/pkg/config -v` - Test the configuration package specifically
```bash
cd web

# Install dependencies
npm install
# or
pnpm install

# Start development server
npm run dev
# or
pnpm run dev

# Build for production
npm run build
# or
pnpm run build

# Start production server
npm run start
# or
pnpm run start
```

The development server runs on http://localhost:3000 by default.
- Business Requirements Document - Detailed project requirements and scope
- System Design Document - Architecture, design decisions, and technical specifications
- CLAUDE.md - AI assistant development guidance
The system is designed for deployment on Google Cloud Platform using:
- Cloud Run for Go services
- Cloud Storage + CDN for React frontend
- Cloud SQL for PostgreSQL
- Memorystore for Redis with TLS
- Cloud Scheduler for automated triggers
- Secret Manager for credential storage
All infrastructure is managed via Terraform in the terraform/ directory.
```bash
# 1. Enable APIs (one-time)
gcloud services enable compute.googleapis.com sqladmin.googleapis.com secretmanager.googleapis.com run.googleapis.com vpcaccess.googleapis.com redis.googleapis.com servicenetworking.googleapis.com --project=<project-id>

# 2. Deploy infrastructure
cd terraform && terraform init && terraform workspace select staging
terraform apply -var-file=staging.tfvars

# 3. Configure secrets
cd ../scripts && cp ../.env.staging.example ../.env.staging
# Edit ../.env.staging with your values
./manage-secrets.sh update staging

# 4. Build and deploy
./build-and-push-images.sh staging
./migrate-db.sh staging

# 5. Redeploy Cloud Run services
cd ../terraform && terraform apply -var-file=staging.tfvars -target=google_cloud_run_service.backend_api -target=google_cloud_run_service.automation_engine -target=google_cloud_run_service.notification_service
```

Before deploying, ensure you have:
- Google Cloud SDK installed and configured
- Terraform v1.5+ installed
- Docker installed and authenticated to GCR (`gcloud auth configure-docker`)
- Go 1.23+ installed (for building services)
- Authenticated to GCP: `gcloud auth login` and `gcloud auth application-default login`
Follow these steps for a fresh deployment to a new environment:
IMPORTANT: Enable APIs first to avoid "API not enabled" errors during Terraform apply:
```bash
# Enable all required APIs
gcloud services enable \
  compute.googleapis.com \
  sqladmin.googleapis.com \
  secretmanager.googleapis.com \
  run.googleapis.com \
  vpcaccess.googleapis.com \
  redis.googleapis.com \
  servicenetworking.googleapis.com \
  --project=the-academy-sync-sdlc-test
```
```bash
cd terraform

# Initialize Terraform
terraform init

# Create workspace for your environment
terraform workspace new staging  # or "prod" for production

# Select the workspace
terraform workspace select staging

# Plan infrastructure changes
terraform plan -var-file=staging.tfvars -out=staging.tfplan

# Apply infrastructure
terraform apply staging.tfplan
```

Note: The first apply will show Cloud Run deployment failures - this is expected because the Docker images don't exist yet.
Terraform creates secrets with placeholder values. You need to update them with actual values:
```bash
# First, prepare your environment file
cp .env.staging.example .env.staging

# Edit with your actual values:
# - OAuth credentials (Google & Strava)
# - SMTP credentials
# - Frontend URL
vim .env.staging

# Update secrets in Google Secret Manager
cd scripts
./manage-secrets.sh update staging  # Use 'update', not 'create'
```

The script will:

- Read values from `.env.staging`
- Generate secure JWT_SECRET and ENCRYPTION_SECRET values if needed
- Construct DATABASE_URL from Terraform outputs
- Construct REDIS_URL with TLS support
- Update all secrets in Google Secret Manager
```bash
# Build and push all service images
./build-and-push-images.sh staging
```

This builds and pushes:

- `gcr.io/<project-id>/backend-api:staging`
- `gcr.io/<project-id>/automation-engine:staging`
- `gcr.io/<project-id>/notification-service:staging`
Run migrations to set up the database schema:
```bash
# Run database migrations
./migrate-db.sh staging
```

Now that the images exist, deploy the Cloud Run services:
```bash
cd ../terraform
terraform apply -var-file=staging.tfvars \
  -target=google_cloud_run_service.backend_api \
  -target=google_cloud_run_service.automation_engine \
  -target=google_cloud_run_service.notification_service
```

Verify the deployment:

```bash
# Get service URLs
terraform output backend_api_url
terraform output automation_engine_url
terraform output notification_service_url

# Test health endpoints
curl $(terraform output -raw backend_api_url)/health
curl $(terraform output -raw automation_engine_url)/health
curl $(terraform output -raw notification_service_url)/health

# View logs if needed
gcloud run services logs read staging-backend-api --region=europe-central2 --limit=50
```

For experienced users, enable the APIs and deploy the infrastructure in one command:
```bash
gcloud services enable compute.googleapis.com sqladmin.googleapis.com secretmanager.googleapis.com run.googleapis.com vpcaccess.googleapis.com redis.googleapis.com servicenetworking.googleapis.com --project=<project-id> && terraform apply -var-file=staging.tfvars
```

For updating an existing deployment with new code changes:
```bash
# Pull latest changes
git pull origin main
```

Only run the following if infrastructure changes are required:

```bash
cd terraform
terraform workspace select staging
terraform plan -var-file=staging.tfvars
terraform apply -var-file=staging.tfvars
```

Only run the following if there are new migrations:
```bash
# Check current migration version
./scripts/migrate-db.sh staging --status

# Apply new migrations
./scripts/migrate-db.sh staging --verbose
```

```bash
# Build and deploy all services
./scripts/build-and-push-images.sh staging
```

If you see "API has not been used in project" errors:
```bash
# Enable the specific API
gcloud services enable <api-name>.googleapis.com --project=<project-id>
# Wait 2-3 minutes for propagation
```

If destroying infrastructure fails with "Failed to delete connection":

```bash
# List and manually delete VPC peerings
gcloud compute networks peerings list --network=<vpc-name> --project=<project-id>
gcloud compute networks peerings delete <peering-name> --network=<vpc-name> --project=<project-id>
```

Use `update` instead of `create` when running manage-secrets.sh:

```bash
./manage-secrets.sh update staging  # NOT 'create'
```

When deploying to production with a different GCP project:
1. Update `terraform/prod.tfvars`:
   ```
   project_id = "your-production-project-id"
   ```
2. Update `scripts/manage-secrets.sh` line 15:
   ```
   PROJECT_ID="your-production-project-id"
   ```
3. Switch Terraform workspace:
   ```bash
   terraform workspace select prod
   ```
The correct order for deployment operations:
- Terraform - Create/update infrastructure
- Secrets - Configure application secrets
- Migrations - Update database schema
- Images - Build and deploy application code
Staging:
- Uses a smaller Cloud SQL instance (db-f1-micro)
- Single Cloud Run instance per service
- Lower memory allocations
- Basic monitoring

Production:
- Uses a larger Cloud SQL instance (db-n1-standard-1)
- Multiple Cloud Run instances with autoscaling
- Higher memory allocations
- Full monitoring and alerting
Monitor deployment status:
```bash
# Check Cloud Run service status
gcloud run services list --region=us-central1

# View service logs
gcloud run services logs read backend-api --region=us-central1 --limit=50

# Check database connectivity
gcloud sql instances describe academy-sync-db-staging

# Monitor job processing
gcloud logging read "resource.type=cloud_run_revision AND jsonPayload.job_type=manual_sync" --limit=20
```

If deployment issues occur:
```bash
# List available revisions
gcloud run revisions list --service=backend-api --region=us-central1

# Rollback to previous revision
gcloud run services update-traffic backend-api \
  --to-revisions=backend-api-00001-abc=100 \
  --region=us-central1
```

```bash
# Rollback last migration
./scripts/migrate-db.sh staging --down 1

# Force to specific version (use with caution)
./scripts/migrate-db.sh staging --force 3
```

The deployment process is automated through several scripts in the `scripts/` directory:
- `build-and-push-images.sh` - Builds and deploys container images
- `manage-secrets.sh` - Manages Google Secret Manager secrets
- `migrate-db.sh` - Runs database migrations with various options
- `run-migrations.sh` - Used by Docker for containerized migrations
Each script includes:
- Help documentation (`--help` flag)
- Dry-run mode for testing
- Verbose output options
- Error handling and validation
For automated deployments, integrate these scripts into your CI/CD pipeline:
```yaml
# Example GitHub Actions workflow
steps:
  - name: Deploy to Staging
    run: |
      ./scripts/build-and-push-images.sh staging
      ./scripts/migrate-db.sh staging --verbose
```

Currently, the frontend polls the /me endpoint after triggering a manual sync to detect new activity logs. A more robust solution would be:
- Job ID Return: Modify the manual sync endpoint to return a job ID
- Status Endpoint: Add `GET /api/sync/job/{jobId}/status` with long-polling support
- Redis Job Tracking: Store job status (pending, processing, completed, failed) in Redis
- Worker Updates: Update job status throughout processing lifecycle
- Frontend Integration: Poll status endpoint until job completes, then refresh data
This would provide:
- Real-time job progress tracking
- Reduced unnecessary API calls
- Better error handling and retry logic
- Ability to show processing progress to users
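A sketch of how the proposed Redis job tracking could work; everything here (key format, status strings, TTL) is hypothetical, since the feature is not yet implemented:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// statusTTL bounds how long a job's status is retained; the value is
// an assumption for this sketch.
const statusTTL = 24 * time.Hour

// setJobStatus records a job's state under a per-job key so a future
// GET /api/sync/job/{jobId}/status endpoint could read it back.
func setJobStatus(ctx context.Context, rdb *redis.Client, jobID, status string) error {
	return rdb.Set(ctx, "job_status:"+jobID, status, statusTTL).Err()
}

// getJobStatus reads the current state for the status endpoint.
func getJobStatus(ctx context.Context, rdb *redis.Client, jobID string) (string, error) {
	return rdb.Get(ctx, "job_status:"+jobID).Result()
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6380"})
	_ = setJobStatus(ctx, rdb, "abc123", "processing")
	s, _ := getJobStatus(ctx, rdb, "abc123")
	fmt.Println("status:", s)
}
```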
This project is licensed under the MIT License - see the LICENSE file for details.
The MIT License is a permissive open-source license that allows you to freely use, modify, and distribute this software, provided that the original copyright notice and license are included in all copies or substantial portions of the software.