Deliver personalized spending alerts with AI anomaly detection

Detect fraud and unusual spending in real time with AI-powered alerts and natural language rules, improving security and customer trust.



Detailed Description

An AI-driven application that lets users define natural-language alert rules for their credit card transactions. The system ingests transaction data in real time, evaluates each transaction against user-defined rules, applies AI/ML analysis for anomaly detection and location-based security monitoring, and sends alerts through preferred channels such as email or SMS.

Who is this for?

This quickstart guide is designed for:

  • Financial technology teams implementing AI-driven transaction monitoring solutions
  • Solution architects evaluating AI-powered anomaly detection platforms
  • Organizations looking to provide personalized spending insights to customers

The business case for AI-driven transaction monitoring

Many organizations are working to enhance customer experience through AI-powered financial monitoring. The Spending Transaction Monitor demonstrates how to combine modern AI/ML frameworks with real-time data processing to deliver personalized, user-centric financial alerts.

The key value propositions for implementing AI-driven transaction monitoring include:

  • Reduced anomaly exposure. Real-time alerts help customers identify unauthorized transactions quickly, reducing financial losses and improving trust.
  • Enhanced customer experience. Users define alerts in plain natural language, making the system accessible to non-technical users.
  • Personalized insights. Behavioral AI analysis detects anomalies based on individual spending patterns, not just static thresholds.
  • Location-aware security. GPS-based anomaly detection adds an additional layer of protection by comparing transaction locations with user whereabouts.
  • Multi-channel notifications. Alerts are delivered via email or SMS, meeting customers where they prefer to receive information.

Example use cases

Transaction monitoring scenarios suitable for this system include:

| Category | Example Trigger |
| --- | --- |
| Spending Pattern | "Your dining expense of $98 is 45% higher than your average of $67 over 30 days." |
| Recurring Payment | "Netflix charged $18.99 this month vs. your usual $15.49 — a 22% increase." |
| Location-Based | "Transaction in Boston detected. Your last known location was Los Angeles." |
| Merchant-Based | "Uber ride was $47.89, up from your last 5 ride average of $28.40." |

Sample data

This project uses sample credit card transaction data from the Credit Card Transactions Dataset on Kaggle for demonstration and testing purposes.

What this quickstart provides

This quickstart provides the framework, components, and knowledge to accelerate your journey to deploying AI-powered transaction monitoring. The system demonstrates how natural language processing, behavioral analysis, and location-based security can be combined into a cohesive alerting platform.

What you'll build

Time to complete: 30-60 minutes (depending on deployment mode)

By the end of this quickstart, you will have:

  • A fully functional AI-powered transaction monitoring system deployed locally or on OpenShift
  • A working alert rule engine that parses natural language into machine-readable criteria
  • Experience creating and testing alert rules via the React-based UI
  • Understanding of how NLP, behavioral AI, and location-based security work together
  • (Optional) Keycloak integration for production-grade authentication
  • (Optional) OpenShift deployment for cloud-native scalability

Key technologies you'll learn

Throughout this quickstart, you'll gain hands-on experience with modern AI and cloud-native technologies:

AI & NLP Technologies:

  • LlamaStack - AI inference platform for natural language rule parsing
  • LangGraph - State machine framework for managing agent workflows
  • TensorFlow/PyTorch - ML frameworks for behavioral anomaly detection
  • RHOAI (Red Hat OpenShift AI) - Enterprise AI/ML platform

Backend & Data:

  • FastAPI - High-performance Python API framework
  • PostgreSQL - Relational database for transactions, rules, and users
  • Alembic - Database migration management

Frontend:

  • React - Component-based UI for alert management (served via Nginx)
  • Vite - Frontend build tool (configured through the VITE_* environment variables)
Authentication & Security:

  • Keycloak - OAuth2/OIDC authentication with PKCE

Cloud-Native Infrastructure:

  • OpenShift/Kubernetes - Container orchestration and deployment platform
  • Podman - Container runtime for local development
  • Helm - Kubernetes package manager

Architecture diagrams

The solution is deployed on OpenShift and integrates multiple components:

  • React Frontend (UI): User interface for managing alerts, viewing transactions, and receiving ML-powered recommendations
  • FastAPI Backend: Core API service handling authentication, business logic, and orchestration
  • Keycloak: OAuth2/OIDC authentication and authorization with PKCE flow
  • PostgreSQL + pgvector: Primary data store with vector support for embeddings
  • LlamaStack + LangGraph Agents: NLP service for parsing natural language alert rules into SQL queries
  • ML Recommendation System: KNN collaborative filtering for personalized alert suggestions
  • Location Service: GPS-based anomaly detection and location tracking
  • Notification Service: Multi-channel alert delivery (Email, SMS, Push, Webhook)
  • Background Services: Job queues for alerts, recommendations, and scheduled tasks

High-Level Architecture

graph TB
USER[User Web Mobile] --> FE[Frontend Nginx React]

FE <--> KC[Keycloak Auth]
FE --> API[FastAPI Backend]
API <--> KC

API --> DB[(PostgreSQL pgvector)]
API --> AI[AI Services LangGraph LlamaStack Recs]
AI --> DB

EXT[Transaction Source] --> API

API --> NOTIF[Notification Service]
NOTIF --> EMAIL[Email]
NOTIF --> SMS[SMS]

Detailed Component Flow

sequenceDiagram
    participant User
    participant UI as React UI
    participant Nginx
    participant KC as Keycloak
    participant API as FastAPI
    participant Agent as LangGraph Agent
    participant Llama as LlamaStack
    participant ML as ML Service
    participant DB as PostgreSQL
    participant Queue as Alert Queue
    participant Notif as Notification Service
    participant SMTP as Email/SMS

    %% Authentication
    User->>UI: Access App
    UI->>KC: OAuth2 Login (PKCE)
    KC-->>UI: Access Token
    UI->>Nginx: Authenticated Request
    Nginx->>API: Forward with Token
    API->>KC: Validate Token
    KC-->>API: User Info

    %% Create Alert Rule
    User->>UI: "Alert me if I spend > $500"
    UI->>API: POST /api/alerts/validate
    API->>Agent: Parse Natural Language
    Agent->>Llama: LLM Inference
    Llama-->>Agent: Structured Query
    Agent-->>API: Validated Rule + SQL
    API->>DB: Store alert_rules
    API-->>UI: Rule Created

    %% Get ML Recommendations
    User->>UI: View Dashboard
    UI->>API: GET /api/alerts/recommendations
    API->>DB: Check cached_recommendations
    alt Cache Miss
        API->>ML: Generate Recommendations
        ML->>DB: Query transactions + user features
        DB-->>ML: Transaction History
        ML->>ML: KNN Collaborative Filtering
        ML-->>API: Top 3 Recommendations
        API->>DB: Cache Results (24h TTL)
    end
    API-->>UI: Display Recommendations

    %% Transaction Ingestion & Evaluation
    User->>API: POST /api/transactions
    API->>DB: Insert transaction
    API->>Queue: Enqueue Alert Job
    Queue->>Agent: Evaluate Against Rules
    Agent->>DB: Execute SQL Query
    DB-->>Agent: Matching Transactions
    alt Alert Triggered
        Agent->>DB: Create alert_notification
        Agent->>Notif: Send Notification
        Notif->>SMTP: Email/SMS
        SMTP-->>User: Alert Received
        Notif->>DB: Update Status (SENT)
    end

    %% Location-Based Security
    User->>UI: Share GPS Location
    UI->>API: POST /api/users/location
    API->>DB: Update user location
    Note over API,Queue: Next transaction checks location
    Queue->>Agent: Evaluate with Location
    Agent->>DB: Compare transaction location
    alt Location Mismatch
        Agent->>Notif: Security Alert
        Notif->>SMTP: Send Alert
    end

ML/AI Pipeline Architecture

graph TB
NL[Natural language rule] --> PARSE[Parse and validate]
PARSE --> LLAMA[LlamaStack]
PARSE --> SQL[SQL query]

FE[Build user features] --> KNN[KNN find similar users]
KNN --> MLOUT[Alert recommendations]

DATA[Transactions and labels] --> TRAIN[Train or retrain KNN]
TRAIN --> KNN

CATIN[Merchant category] --> EMB[Embed and vector search]
EMB --> CATOUT[Normalized category]

TXNIN[New transaction] --> CHECK[Behavior and location checks]
CHECK --> ANOM[Anomaly indicators]
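The KNN collaborative-filtering step in the pipeline above can be sketched in plain Python. The user names and feature vectors here are invented for illustration; the real service builds richer features from transaction history and recommends alert rules popular among the nearest neighbors:

```python
import math

def knn_similar_users(user_features: dict[str, list[float]],
                      target: str, k: int = 2) -> list[str]:
    """Return the k users nearest to `target` by Euclidean distance
    over simple spending-feature vectors."""
    target_vec = user_features[target]
    return sorted(
        (u for u in user_features if u != target),
        key=lambda u: math.dist(target_vec, user_features[u]),
    )[:k]

# illustrative features: [avg transaction $, dining share, travel share]
features = {
    "alice": [120.0, 0.4, 0.1],
    "bob":   [115.0, 0.5, 0.1],
    "carol": [900.0, 0.1, 0.6],
}
print(knn_similar_users(features, "alice"))  # ['bob', 'carol']
```

Recommendations then come from the alert rules that these neighbors use but the target user does not, cached with a TTL as shown in the sequence diagram.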

Project structure

The repository is organized into the following key directories:

Core Services:

  • packages/api/ - FastAPI backend with NLP rule parsing and transaction evaluation
  • packages/ui/ - React frontend for alert management and transaction visualization
  • packages/db/ - PostgreSQL database schemas, migrations, and seeding utilities
  • packages/evaluation/ - Rule evaluation framework and metrics
  • packages/ingestion-service/ - Transaction ingestion pipeline

Infrastructure & Configuration:

  • deploy/ - Helm charts and OpenShift deployment configurations
  • data/ - Sample transaction and user data for testing
  • scripts/ - CI/CD and utility scripts

Documentation:

  • docs/ - Technical documentation and guides

spending-transaction-monitor/
├── packages/
│   ├── api/
│   ├── db/
│   ├── ui/
│   ├── ingestion-service/
│   └── configs/
├── docs/
├── deploy/
├── data/
├── scripts/
├── .env.example
├── turbo.json
├── Makefile
├── pnpm-workspace.yaml
├── package.json
└── README.md

Transaction monitoring implementation

The transaction monitoring use case is implemented by combining the following components:

  • NLP Rule Parser that converts natural language rules into structured SQL queries
  • Transaction Evaluation Engine that processes incoming transactions against active rules
  • Behavioral Analysis Module that detects anomalies based on spending patterns
  • Location-based Security that compares transaction locations with user GPS data
  • Multi-channel Notification Service that delivers alerts via email or SMS

Key Features:

  • Users create alert rules (amount, merchant, category, timeframe, location; notification methods: email/SMS/push/webhook)
  • Location-based anomaly detection captures user GPS coordinates for enhanced security monitoring
  • Incoming transactions are stored and evaluated against active rules, including location-based risk assessment
  • Triggered rules produce alert notifications which are delivered via configured channels
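The location-based security component boils down to a great-circle distance comparison between the transaction and the user's last known GPS position. A hedged sketch (the 500 km threshold is an arbitrary illustration, not the project's configured value):

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two GPS points, in kilometers."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def location_anomaly(txn_loc: tuple, user_loc: tuple,
                     max_km: float = 500.0) -> bool:
    """Flag the transaction if it is farther than `max_km` from the
    user's last known GPS location."""
    return haversine_km(*txn_loc, *user_loc) > max_km

boston = (42.36, -71.06)
los_angeles = (34.05, -118.24)
print(location_anomaly(boston, los_angeles))  # True: over 4,000 km apart
```

This matches the "Transaction in Boston detected. Your last known location was Los Angeles." example trigger shown earlier.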

Example Conversation: Creating an Alert Rule

User: "Alert me if I spend more than $500 in one transaction"

System: ✓ Rule validated and created. You'll receive email notifications when any single transaction exceeds $500.

User: "Alert me if my dining expense exceeds the average of the last 30 days by more than 40%"

System: ✓ Rule validated. This rule compares each dining transaction against your 30-day dining average and alerts when spending is 40% above normal.
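Conceptually, the validated rule is a structured object that the engine renders into SQL. A simplified sketch, assuming a dictionary-shaped rule; the real parser is LLM-driven (LlamaStack + LangGraph) and this schema is hypothetical:

```python
def rule_to_sql(rule: dict) -> str:
    """Render a parsed alert rule into a SQL query.
    Real code should use bound parameters, not string interpolation."""
    clauses = [f"user_id = '{rule['user_id']}'"]
    if "min_amount" in rule:
        clauses.append(f"amount > {rule['min_amount']}")
    if "category" in rule:
        clauses.append(f"category = '{rule['category']}'")
    return "SELECT * FROM transactions WHERE " + " AND ".join(clauses)

# "Alert me if I spend more than $500 in one transaction"
parsed = {"user_id": "u123", "min_amount": 500}
print(rule_to_sql(parsed))
```

Each incoming transaction is then evaluated against the stored queries of the user's active rules, as shown in the sequence diagram.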

Customizing for your use case

To adapt this quickstart for your specific transaction monitoring needs:

  • Modify the NLP prompts in packages/api/ to handle domain-specific rule types
  • Add new transaction categories in packages/db/ for your industry
  • Create custom evaluation metrics in packages/evaluation/
  • Build additional notification channels (Slack, webhook integrations)
  • Integrate with your existing transaction data sources

Requirements

Minimum hardware requirements

  • CPU: 4+ cores
  • Memory: 8Gi+ (16Gi recommended for full stack with Keycloak)
  • Storage: 20Gi

Minimum software requirements

Local Tools:

  • Podman (or Docker) - Container runtime for local development
  • Node.js with pnpm - JavaScript toolchain for the monorepo
  • make - Drives the local build and run targets

For OpenShift Deployment:

  • oc CLI - OpenShift command line tool
  • Helm - Kubernetes package manager
  • OpenShift 4.x cluster with RHOAI (Red Hat OpenShift AI)

Required user permissions

  • Local admin permissions for container runtime (Podman/Docker)
  • For OpenShift: Namespace admin permissions in the target project
  • Access to container registry for pulling/pushing images

Cluster Admin Privileges

IMPORTANT: This quickstart uses features that are only available when the role installing the charts has clusterAdmin privileges, including enabling and setting up the model registry. To learn more about clusterAdmin privileges, see the documentation.


Deploy

This section walks you through deploying and testing the Spending Transaction Monitor.

Clone the repository

First, clone and navigate to the project directory:

# Clone the repository
git clone https://github.com/rh-ai-quickstart/spending-transaction-monitor.git
cd spending-transaction-monitor

Expected outcome:

  • ✓ Repository cloned to local machine
  • ✓ Working directory set to project root

Container Deployment (Recommended)

See Mac M Series Installation Troubleshooting

Step 1: Start with Podman Compose

Start with pre-built images:

make run-local

Build and run from source:

make build-run-local

Expected outcome:

  • ✓ All containers started successfully
  • ✓ Services accessible at their respective URLs

Container URLs:

  • Frontend: http://localhost:3000
  • API: http://localhost:8002
  • Keycloak: http://localhost:8080
  • SMTP Web UI: http://localhost:3002

Step 2: Set up data

After starting services, set up the database and Keycloak:

pnpm setup:data       # Complete setup: Start DB + migrations + seed all data
pnpm seed:all         # Just seed data (DB + Keycloak) - migrations already run
pnpm seed:db          # Seed only database
pnpm seed:keycloak    # Setup only Keycloak realm

# Or using make
make setup-data       # Complete data setup: Start DB + migrations + all data

Note: pnpm setup:data now automatically starts the database, so you don't need to run pnpm db:start separately.

📖 See DEVELOPER_GUIDE.md for complete seeding documentation

Expected outcome:

  • ✓ Database migrations applied
  • ✓ Sample data loaded
  • ✓ Keycloak realm configured (if using authentication)

Step 3: Choose authentication mode

The application supports two authentication modes:

Production Mode (Default) - Keycloak OAuth2/OIDC

By default, the application uses Keycloak for secure authentication:

  • Automatic Setup: Keycloak realm and test users are automatically created on startup
  • OAuth2/OIDC Flow: Implements OpenID Connect with PKCE for secure authentication
  • Automatic Token Refresh: Tokens are automatically refreshed before expiration
  • Test Users (for authentication testing, no sample data):
  • Sample Users (with transaction data - use these to explore the app):

Access Points:

  • Frontend: http://localhost:3000
  • API: http://localhost:8002
  • Keycloak: http://localhost:8080

Development Mode - Auth Bypass

For local development, you can bypass authentication:

# Set environment variables for bypass mode
BYPASS_AUTH=true VITE_BYPASS_AUTH=true VITE_ENVIRONMENT=development make build-run-local

In bypass mode:

  • ✅ No login required - automatic authentication as dev user
  • ✅ Yellow "DEV MODE - Authentication Bypassed" banner visible
  • ✅ Faster development iteration
  • ⚠️ NOT for production use

Switching Between Modes:

# Production mode (Keycloak authentication)
make build-run-local

# Development mode (auth bypass)
BYPASS_AUTH=true VITE_BYPASS_AUTH=true VITE_ENVIRONMENT=development make build-run-local

Environment Variables:

| Variable | Values | Description |
| --- | --- | --- |
| BYPASS_AUTH | true/false | Backend auth bypass |
| VITE_BYPASS_AUTH | true/false | Frontend auth bypass |
| VITE_ENVIRONMENT | development/staging/production | Environment mode |
| KEYCLOAK_URL | URL | Keycloak server URL (default: http://localhost:8080) |
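A backend typically interprets flags like BYPASS_AUTH as booleans along these lines; this is a sketch, and the project's actual parsing may differ:

```python
import os

def env_flag(name: str, default: str = "false") -> bool:
    """Interpret an environment variable as a boolean flag."""
    return os.getenv(name, default).strip().lower() in {"1", "true", "yes"}

# as set by the dev-mode make invocation shown above
os.environ["BYPASS_AUTH"] = "true"
if env_flag("BYPASS_AUTH"):
    print("DEV MODE - Authentication Bypassed")
```

Because the VITE_* variables are baked into the frontend at build time, switching modes requires rebuilding (hence build-run-local rather than run-local).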

Container Management Commands

make run-local      # Start with registry images
make build-local    # Build images from source
make build-run-local # Build and start
make stop-local     # Stop all services
make logs-local     # View service logs
make reset-local    # Reset with fresh data

Local Development (pnpm)

For local development without containers, use these pnpm commands:

Development Mode (Auth Bypass)

# Install dependencies
pnpm setup

# Start in development mode (auth bypassed)
pnpm start:dev

# Or start individual services
pnpm backend:setup     # Setup database
pnpm backend:start     # Start API (port 8002, auth bypass)
pnpm --filter @*/ui dev # Start UI (port 3000)

Production Mode (Keycloak)

# Start with Keycloak authentication
pnpm start:prod

# Access points:
# - Frontend: http://localhost:3000
# - API: http://localhost:8002
# - Keycloak: http://localhost:8080

Container Development

# With Keycloak authentication (default)
pnpm dev:containers:auth

# With auth bypass (no login required) - fastest iteration
pnpm dev:containers:noauth

# Standard container startup (without rebuild)
pnpm dev:containers

Or using Make directly:

# Build and run with Keycloak authentication (default)
make build-run-local

# Build and run with auth bypass (no authentication)
BYPASS_AUTH=true VITE_BYPASS_AUTH=true VITE_ENVIRONMENT=development make build-run-local

# Run without rebuilding
make run-local

Utility Commands

# Database management
pnpm db:start          # Start PostgreSQL container
pnpm db:stop           # Stop PostgreSQL container
pnpm db:upgrade        # Run migrations
pnpm db:seed           # Load sample data
pnpm db:verify         # Verify database connection

# Authentication
pnpm auth:start        # Start Keycloak container
pnpm auth:stop         # Stop Keycloak container
pnpm auth:setup-keycloak                # Setup Keycloak realm/client
pnpm auth:setup-keycloak-with-users     # Setup Keycloak with DB users

# Code quality
pnpm lint              # Run all linters
pnpm lint:fix          # Auto-fix linting issues
pnpm format            # Format code
pnpm test              # Run tests
pnpm type-check        # Run TypeScript checks

OpenShift Deployment

Note: Some features in this quickstart require clusterAdmin privileges, particularly for enabling and setting up the model registry. See Required user permissions for details.

Quick Deploy

Using pre-built images

make deploy

Using Quay.io instead of the OpenShift internal registry:

# 1) Authenticate to Quay (recommended: use a robot account token)
make REGISTRY_URL=quay.io QUAY_USERNAME=<quay-user-or-robot> QUAY_TOKEN=<token> login

# 2) Build + push to your Quay org + deploy
make REGISTRY_URL=quay.io REPOSITORY=<your-quay-org> IMAGE_TAG=<tag> build-deploy

Using the OpenShift internal registry instead of Quay.io

# Login and setup
# IMPORTANT: For OpenShift's internal registry, set REGISTRY_URL once and reuse it.
export REGISTRY_URL="$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')"
make login
make build-deploy

Step-by-step Deployment

# Login and setup
make login
make create-project

# Build and push images
make build-all
make push-all

# Deploy
make deploy

Expected outcome:

  • ✓ Helm chart deployed successfully
  • ✓ All pods running
  • ✓ Routes created

Verify Deployment

make status           # Check deployment status
make logs-api         # View API logs
make logs-ui          # View UI logs

OpenShift Management

make deploy           # Deploy to OpenShift
make undeploy         # Remove deployment
make status           # Check deployment status
make logs-api         # View API logs
make logs-ui          # View UI logs

Testing Alert Rules

After starting the application with make run-local, you can test alert rules interactively:

List available sample alert rules

make list-alert-samples

Shows all available test scenarios with their descriptions, such as:

  • "Alert when spending more than $500 in one transaction"
  • "Alert me if my dining expense exceeds the average of the last 30 days by more than 40%"
  • "Alert me if a transaction happens outside my home state"

Interactive testing menu

make test-alert-rules

This command provides:

  • 📋 Alert Rule Menu showing alert rule descriptions
  • 📊 Data preview with realistic transaction data adjusted to current time
  • 🔍 User context showing the test user profile and transaction history
  • Confirmation prompt before running the actual test

Example Workflow

  1. Start the application:

    make run-local
  2. Browse available test scenarios:

    make list-alert-samples
  3. Run interactive testing:

    make test-alert-rules
    • Select an alert rule by number (1-16)
    • Review the data preview showing exactly what will be tested
    • Confirm to proceed with the test
    • Watch the complete validation and creation process

What the Test Does

The test process:

  1. Seeds database with realistic user and transaction data
  2. Validates the alert rule using the NLP validation API
  3. Creates the alert rule if validation passes
  4. Shows step-by-step results including SQL queries and processing steps

Note: Make sure the API server is running (make run-local) before testing alert rules.

Validating the Alert Notification

After confirming a rule test:

  1. The system sends a test notification via the configured test SMTP server.
  2. To verify:
    • Open the SMTP server Web UI:
      👉 http://localhost:3002
    • Check the inbox for the test email.
    • Open the email to confirm:
      • The rule name/description is included.
      • The transaction details that triggered the rule are shown.

Expected outcome:

  • ✓ Email received in SMTP Web UI
  • ✓ Alert contains rule description and transaction details
  • ✓ Notification delivered within seconds of rule trigger

What you've accomplished

By completing this quickstart, you have:

  • ✓ Deployed a fully functional AI-powered transaction monitoring system
  • ✓ Understood the core platform architecture and components
  • ✓ Created and tested natural language alert rules
  • ✓ Validated end-to-end alert notification delivery
  • ✓ Learned how to customize the system for your own use cases

Recommended next steps

For Development Teams:

For Organizations Planning Production Deployment:

  • Plan your transition from local to OpenShift deployment
  • Integrate with your existing transaction data sources
  • Establish evaluation criteria and quality metrics for your use case
  • Review authentication configuration with Keycloak

For Customizing to Your Use Case:

  • Modify NLP prompts to handle domain-specific rule types
  • Add custom transaction categories for your industry
  • Build integration with your ITSM or notification systems
  • Develop use-case-specific evaluation metrics

Delete

You can stop the deployed services by running:

# Stop local containers
make stop-local

# Remove OpenShift deployment
make undeploy

This will remove all deployed services, pods, and resources.

Technical Details

Performance & scaling

The Spending Transaction Monitor is designed for scalability using standard Kubernetes and cloud-native patterns. All core components can be scaled with familiar Kubernetes techniques: horizontal pod autoscaling, replica sets, and resource limits.

Component Scaling:

  • API Service: Scales horizontally with multiple FastAPI workers per pod and multiple pod replicas
  • Database: PostgreSQL with connection pooling and read replicas for high-throughput scenarios
  • UI: Static assets can be served via CDN for global distribution

Performance Considerations:

  • Transaction evaluation is optimized for real-time processing
  • NLP rule parsing leverages caching for frequently used patterns
  • Notification delivery is asynchronous to avoid blocking transaction processing
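The asynchronous hand-off can be illustrated with an in-process asyncio queue; real deployments use the background job queues described in the architecture, but the non-blocking pattern is the same: ingestion enqueues and returns immediately, while a worker drains the queue and delivers alerts:

```python
import asyncio

async def notification_worker(queue: asyncio.Queue, sent: list) -> None:
    """Drain alert jobs and deliver them without blocking ingestion."""
    while True:
        alert = await queue.get()
        if alert is None:  # shutdown sentinel
            queue.task_done()
            break
        # deliver via email/SMS here; simulated with a no-op await
        await asyncio.sleep(0)
        sent.append(alert)
        queue.task_done()

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    sent: list = []
    worker = asyncio.create_task(notification_worker(queue, sent))
    # transaction ingestion enqueues and returns immediately
    for alert in ("alert-1", "alert-2"):
        queue.put_nowait(alert)
    queue.put_nowait(None)
    await queue.join()
    await worker
    return sent

print(asyncio.run(main()))  # ['alert-1', 'alert-2']
```

In production the queue would be backed by a durable store so that alerts survive pod restarts.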

Security

Security considerations for production deployments:

  • Authentication: Keycloak provides OAuth2/OIDC with PKCE for secure user authentication
  • API Security: All endpoints protected with JWT token validation
  • Database: Credentials managed via environment variables and Kubernetes secrets
  • Network: Internal service communication isolated within Kubernetes namespace
  • Sensitive Data: Transaction data encryption at rest and in transit

For production deployments, consider:

  • Enabling TLS for all external endpoints
  • Configuring network policies to restrict pod-to-pod communication
  • Managing secrets through a vault solution
  • Implementing audit logging for compliance requirements

Going deeper: component documentation

Now that you have the system running, you can dive deeper into specific components:

Guides:

Package Documentation:

Deployment:


Tags

  • Industry: Banking and securities
  • Product: OpenShift AI
  • Use case: Transaction monitoring, Anomaly detection, Personalized alerts
  • Contributor org: Red Hat

Contributing

Contributions are welcome! Please fork the repo and submit a PR.
See our CONTRIBUTING.md for guidelines.

License

This project is licensed under the Apache License 2.0.


Thank you for using the Spending Transaction Monitor Quickstart! We hope this guide helps you successfully deploy AI-driven transaction monitoring for your organization.
