Detect fraud and unusual spending in real time with AI-powered alerts and natural language rules, improving security and customer trust.
- Spending Transaction Monitor
- Table of Contents
- Detailed Description
- Requirements
- Deploy
- Technical Details
- Tags
- Contributing
- License
An AI-driven application that enables users to define natural language alert rules for their credit card transactions. The system provides rule-based anomaly detection and location-based security monitoring, ingesting real-time data, evaluating transactions against user-defined rules, applying AI/ML analysis for anomaly detection, and sending alerts through preferred channels such as email or SMS.
This quickstart guide is designed for:
- Financial technology teams implementing AI-driven transaction monitoring solutions
- Solution architects evaluating AI-powered anomaly detection platforms
- Organizations looking to provide personalized spending insights to customers
Many organizations are working to enhance customer experience through AI-powered financial monitoring. The Spending Transaction Monitor demonstrates how to combine modern AI/ML frameworks with real-time data processing to deliver personalized, user-centric financial alerts.
The key value propositions for implementing AI-driven transaction monitoring include:
- Reduced fraud exposure. Real-time alerts help customers identify unauthorized transactions quickly, reducing financial losses and improving trust.
- Enhanced customer experience. Users define alerts in plain natural language, making the system accessible to non-technical users.
- Personalized insights. Behavioral AI analysis detects anomalies based on individual spending patterns, not just static thresholds.
- Location-aware security. GPS-based anomaly detection adds an additional layer of protection by comparing transaction locations with user whereabouts.
- Multi-channel notifications. Alerts are delivered via email or SMS, meeting customers where they prefer to receive information.
Transaction monitoring scenarios suitable for this system include:
| Category | Example Trigger |
|---|---|
| Spending Pattern | "Your dining expense of $98 is 45% higher than your average of $67 over 30 days." |
| Recurring Payment | "Netflix charged $18.99 this month vs. your usual $15.49 — a 22% increase." |
| Location-Based | "Transaction in Boston detected. Your last known location was Los Angeles." |
| Merchant-Based | "Uber ride was $47.89, up from your last 5 ride average of $28.40." |
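Under the hood, each sentence is parsed into structured criteria before evaluation. The snippet below is a hypothetical illustration of what a parsed spending-pattern rule might look like; the field names are invented for this example and are not the project's actual schema:

```python
# Hypothetical structured form of "Alert me if my dining expense exceeds
# the average of the last 30 days by more than 40%". Field names are
# illustrative only, not the project's real rule model.
parsed_rule = {
    "rule_type": "spending_pattern",
    "category": "dining",
    "comparison": {
        "baseline": "rolling_average",
        "window_days": 30,
        "threshold_pct": 40,  # alert when 40% above the baseline
    },
    "notification_channels": ["email"],
}
```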
This project uses sample credit card transaction data from the Credit Card Transactions Dataset on Kaggle for demonstration and testing purposes.
This quickstart provides the framework, components, and knowledge to accelerate your journey to deploying AI-powered transaction monitoring. The system demonstrates how natural language processing, behavioral analysis, and location-based security can be combined into a cohesive alerting platform.
Time to complete: 30-60 minutes (depending on deployment mode)
By the end of this quickstart, you will have:
- A fully functional AI-powered transaction monitoring system deployed locally or on OpenShift
- A working alert rule engine that parses natural language into machine-readable criteria
- Experience creating and testing alert rules via the React-based UI
- Understanding of how NLP, behavioral AI, and location-based security work together
- (Optional) Keycloak integration for production-grade authentication
- (Optional) OpenShift deployment for cloud-native scalability
Throughout this quickstart, you'll gain hands-on experience with modern AI and cloud-native technologies:
AI & NLP Technologies:
- LlamaStack - AI inference platform for natural language rule parsing
- LangGraph - State machine framework for managing agent workflows
- TensorFlow/PyTorch - ML frameworks for behavioral anomaly detection
- RHOAI (Red Hat OpenShift AI) - Enterprise AI/ML platform
Backend & Data:
- FastAPI - High-performance Python API framework
- PostgreSQL - Relational database for transactions, rules, and users
- Alembic - Database migration management
Frontend:
- React - Modern UI framework
- TypeScript - Type-safe JavaScript
Authentication & Security:
- Keycloak - OAuth2/OIDC authentication with PKCE
Cloud-Native Infrastructure:
- OpenShift/Kubernetes - Container orchestration and deployment platform
- Podman - Container runtime for local development
- Helm - Kubernetes package manager
The solution, deployable locally or on OpenShift, integrates the following components:
- React Frontend (UI): User interface for managing alerts, viewing transactions, and receiving ML-powered recommendations
- FastAPI Backend: Core API service handling authentication, business logic, and orchestration
- Keycloak: OAuth2/OIDC authentication and authorization with PKCE flow
- PostgreSQL + pgvector: Primary data store with vector support for embeddings
- LlamaStack + LangGraph Agents: NLP service for parsing natural language alert rules into SQL queries
- ML Recommendation System: KNN collaborative filtering for personalized alert suggestions
- Location Service: GPS-based anomaly detection and location tracking
- Notification Service: Multi-channel alert delivery (Email, SMS, Push, Webhook)
- Background Services: Job queues for alerts, recommendations, and scheduled tasks
```mermaid
graph TB
    USER[User Web Mobile] --> FE[Frontend Nginx React]
    FE <--> KC[Keycloak Auth]
    FE --> API[FastAPI Backend]
    API <--> KC
    API --> DB[(PostgreSQL pgvector)]
    API --> AI[AI Services LangGraph LlamaStack Recs]
    AI --> DB
    EXT[Transaction Source] --> API
    API --> NOTIF[Notification Service]
    NOTIF --> EMAIL[Email]
    NOTIF --> SMS[SMS]
```
```mermaid
sequenceDiagram
    participant User
    participant UI as React UI
    participant Nginx
    participant KC as Keycloak
    participant API as FastAPI
    participant Agent as LangGraph Agent
    participant Llama as LlamaStack
    participant ML as ML Service
    participant DB as PostgreSQL
    participant Queue as Alert Queue
    participant Notif as Notification Service
    participant SMTP as Email/SMS

    %% Authentication
    User->>UI: Access App
    UI->>KC: OAuth2 Login (PKCE)
    KC-->>UI: Access Token
    UI->>Nginx: Authenticated Request
    Nginx->>API: Forward with Token
    API->>KC: Validate Token
    KC-->>API: User Info

    %% Create Alert Rule
    User->>UI: "Alert me if I spend > $500"
    UI->>API: POST /api/alerts/validate
    API->>Agent: Parse Natural Language
    Agent->>Llama: LLM Inference
    Llama-->>Agent: Structured Query
    Agent-->>API: Validated Rule + SQL
    API->>DB: Store alert_rules
    API-->>UI: Rule Created

    %% Get ML Recommendations
    User->>UI: View Dashboard
    UI->>API: GET /api/alerts/recommendations
    API->>DB: Check cached_recommendations
    alt Cache Miss
        API->>ML: Generate Recommendations
        ML->>DB: Query transactions + user features
        DB-->>ML: Transaction History
        ML->>ML: KNN Collaborative Filtering
        ML-->>API: Top 3 Recommendations
        API->>DB: Cache Results (24h TTL)
    end
    API-->>UI: Display Recommendations

    %% Transaction Ingestion & Evaluation
    User->>API: POST /api/transactions
    API->>DB: Insert transaction
    API->>Queue: Enqueue Alert Job
    Queue->>Agent: Evaluate Against Rules
    Agent->>DB: Execute SQL Query
    DB-->>Agent: Matching Transactions
    alt Alert Triggered
        Agent->>DB: Create alert_notification
        Agent->>Notif: Send Notification
        Notif->>SMTP: Email/SMS
        SMTP-->>User: Alert Received
        Notif->>DB: Update Status (SENT)
    end

    %% Location-Based Security
    User->>UI: Share GPS Location
    UI->>API: POST /api/users/location
    API->>DB: Update user location
    Note over API,Queue: Next transaction checks location
    Queue->>Agent: Evaluate with Location
    Agent->>DB: Compare transaction location
    alt Location Mismatch
        Agent->>Notif: Security Alert
        Notif->>SMTP: Send Alert
    end
```
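The location branch above reduces to a distance comparison between a transaction's coordinates and the user's last known GPS fix. Below is a minimal sketch of such a check; the 500 km radius and the function names are assumptions for illustration, not the project's actual implementation:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def location_mismatch(txn_coords, user_coords, max_km: float = 500.0) -> bool:
    """Flag a transaction whose location is implausibly far from the user."""
    return haversine_km(*txn_coords, *user_coords) > max_km

# A Boston transaction vs. a user last seen in Los Angeles -> mismatch
print(location_mismatch((42.36, -71.06), (34.05, -118.24)))  # True
```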
```mermaid
graph TB
    NL[Natural language rule] --> PARSE[Parse and validate]
    PARSE --> LLAMA[LlamaStack]
    PARSE --> SQL[SQL query]
    FE[Build user features] --> KNN[KNN find similar users]
    KNN --> MLOUT[Alert recommendations]
    DATA[Transactions and labels] --> TRAIN[Train or retrain KNN]
    TRAIN --> KNN
    CATIN[Merchant category] --> EMB[Embed and vector search]
    EMB --> CATOUT[Normalized category]
    TXNIN[New transaction] --> CHECK[Behavior and location checks]
    CHECK --> ANOM[Anomaly indicators]
```
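The recommendation path shown above relies on KNN collaborative filtering: each user becomes a feature vector, nearest neighbors are located, and their alert rules become candidate suggestions. A toy sketch with scikit-learn follows; the feature choices and data are invented for illustration (real features would also need scaling):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy per-user feature vectors, e.g. [avg_txn_amount, txns_per_week, dining_share]
user_features = np.array([
    [120.0, 14, 0.30],
    [115.0, 15, 0.28],
    [900.0,  3, 0.05],
    [110.0, 13, 0.33],
])
user_rules = [
    ["dining > 40% above 30-day avg"],
    ["single txn > $500"],
    ["txn outside home state"],
    ["recurring charge increased"],
]

knn = NearestNeighbors(n_neighbors=2).fit(user_features)
# Recommend rules used by the users most similar to a new user's profile
_, idx = knn.kneighbors([[118.0, 14, 0.31]])
recommendations = {rule for i in idx[0] for rule in user_rules[i]}
print(recommendations)
```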
The repository is organized into the following key directories:
Core Services:
- `packages/api/` - FastAPI backend with NLP rule parsing and transaction evaluation
- `packages/ui/` - React frontend for alert management and transaction visualization
- `packages/db/` - PostgreSQL database schemas, migrations, and seeding utilities
- `packages/evaluation/` - Rule evaluation framework and metrics
- `packages/ingestion-service/` - Transaction ingestion pipeline
Infrastructure & Configuration:
- `deploy/` - Helm charts and OpenShift deployment configurations
- `data/` - Sample transaction and user data for testing
- `scripts/` - CI/CD and utility scripts
Documentation:
- `docs/` - Technical documentation and guides
```
spending-transaction-monitor/
├── packages/
│   ├── api/
│   ├── db/
│   ├── ui/
│   ├── ingestion-service/
│   └── configs/
├── docs/
├── deploy/
├── data/
├── scripts/
├── .env.example
├── turbo.json
├── Makefile
├── pnpm-workspace.yaml
├── package.json
└── README.md
```
The transaction monitoring use case is implemented by combining the following components:
- NLP Rule Parser that converts natural language rules into structured SQL queries
- Transaction Evaluation Engine that processes incoming transactions against active rules
- Behavioral Analysis Module that detects anomalies based on spending patterns
- Location-based Security that compares transaction locations with user GPS data
- Multi-channel Notification Service that delivers alerts via email or SMS
Key Features:
- Users create alert rules (amount, merchant, category, timeframe, location; notification methods: email/SMS/push/webhook)
- Location-based anomaly detection captures user GPS coordinates for enhanced security monitoring
- Incoming transactions are stored and evaluated against active rules, including location-based risk assessment (a simplified sketch of this loop follows this list)
- Triggered rules produce alert notifications which are delivered via configured channels
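Conceptually, the evaluation path is a small loop: each incoming transaction is checked against every active rule, and any match fans out to the configured notification channels. The sketch below is illustrative only; names like `Rule` and `notify` are assumptions, and the real engine executes LLM-generated SQL against PostgreSQL rather than Python predicates:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    description: str
    predicate: Callable[[dict], bool]   # stands in for the generated SQL
    channels: list[str]                 # e.g. ["email", "sms"]

def evaluate(txn: dict, rules: list[Rule], notify: Callable[[str, str], None]) -> None:
    """Run every active rule against one transaction and fan out alerts."""
    for rule in rules:
        if rule.predicate(txn):
            for channel in rule.channels:
                notify(channel, f"{rule.description}: {txn}")

rules = [Rule("Single transaction over $500", lambda t: t["amount"] > 500, ["email"])]
evaluate({"amount": 620.0, "merchant": "Acme"}, rules,
         lambda ch, msg: print(f"[{ch}] {msg}"))
```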
Example Conversation: Creating an Alert Rule
User: "Alert me if I spend more than $500 in one transaction"
System: ✓ Rule validated and created. You'll receive email notifications when any single transaction exceeds $500.
User: "Alert me if my dining expense exceeds the average of the last 30 days by more than 40%"
System: ✓ Rule validated. This rule compares each dining transaction against your 30-day dining average and alerts when spending is 40% above normal.
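For reference, a rule like the second one above might compile to SQL along these lines. The query is a hedged illustration; the table and column names are assumptions rather than the project's actual schema:

```python
# Hypothetical SQL for the 30-day dining-average rule; :user_id is a bind parameter.
RULE_SQL = """
SELECT t.id, t.amount, t.merchant
FROM transactions t
WHERE t.user_id = :user_id
  AND t.category = 'dining'
  AND t.amount > 1.40 * (
      SELECT AVG(t2.amount)
      FROM transactions t2
      WHERE t2.user_id = :user_id
        AND t2.category = 'dining'
        AND t2.created_at >= NOW() - INTERVAL '30 days'
  )
"""
```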
To adapt this quickstart for your specific transaction monitoring needs:
- Modify the NLP prompts in `packages/api/` to handle domain-specific rule types
- Add new transaction categories in `packages/db/` for your industry
- Create custom evaluation metrics in `packages/evaluation/`
- Build additional notification channels (Slack, webhook integrations)
- Integrate with your existing transaction data sources
Minimum Resources:
- CPU: 4+ cores
- Memory: 8Gi+ (16Gi recommended for full stack with Keycloak)
- Storage: 20Gi
Local Tools:
- Python 3.11+
- Node.js 18+
- pnpm - Fast, disk space efficient package manager
- Podman or Docker - Container runtime
- make - Build automation
For OpenShift Deployment:
- oc CLI - OpenShift command line tool
- Helm - Kubernetes package manager
- OpenShift 4.x cluster with RHOAI (Red Hat OpenShift AI)
Required Permissions:
- Local admin permissions for container runtime (Podman/Docker)
- For OpenShift: Namespace admin permissions in the target project
- Access to container registry for pulling/pushing images
IMPORTANT: Some features in this quickstart require that the role used to install the charts has clusterAdmin privileges, including enabling and setting up the model registry. To learn more about clusterAdmin privileges, see the documentation.
This section walks you through deploying and testing the Spending Transaction Monitor.
First, clone and navigate to the project directory:
```bash
# Clone the repository
git clone https://github.com/rh-ai-quickstart/spending-transaction-monitor.git
cd spending-transaction-monitor
```
Expected outcome:
- ✓ Repository cloned to local machine
- ✓ Working directory set to project root
See Mac M Series Installation Troubleshooting
Start with pre-built images:
```bash
make run-local
```
Build and run from source:
```bash
make build-run-local
```
Expected outcome:
- ✓ All containers started successfully
- ✓ Services accessible at their respective URLs
Container URLs:
- Frontend: http://localhost:3000
- API: http://localhost:3000/api/* (proxied)
- API Docs: http://localhost:8000/docs
- SMTP Web UI: http://localhost:3002
- Database: localhost:5432
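To sanity-check that the stack is up from a script, here is a minimal probe of the URLs above (a sketch only; `httpx` is an arbitrary HTTP client choice and is not bundled with the project):

```python
import httpx

# Endpoints published by the local stack (see Container URLs above)
checks = {
    "frontend": "http://localhost:3000",
    "api-docs": "http://localhost:8000/docs",
    "smtp-ui": "http://localhost:3002",
}

for name, url in checks.items():
    try:
        r = httpx.get(url, timeout=5.0)
        print(f"{name}: {r.status_code}")
    except httpx.HTTPError as exc:
        print(f"{name}: unreachable ({exc})")
```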
After starting services, set up the database and Keycloak:
```bash
pnpm setup:data    # Complete setup: Start DB + migrations + seed all data
pnpm seed:all      # Just seed data (DB + Keycloak) - migrations already run
pnpm seed:db       # Seed only database
pnpm seed:keycloak # Setup only Keycloak realm

# Or using make
make setup-data    # Complete data setup: Start DB + migrations + all data
```
Note: `pnpm setup:data` now automatically starts the database, so you don't need to run `pnpm db:start` separately.
📖 See DEVELOPER_GUIDE.md for complete seeding documentation
Expected outcome:
- ✓ Database migrations applied
- ✓ Sample data loaded
- ✓ Keycloak realm configured (if using authentication)
The application supports two authentication modes:
Production Mode (Default) - Keycloak OAuth2/OIDC
By default, the application uses Keycloak for secure authentication:
- Automatic Setup: Keycloak realm and test users are automatically created on startup
- OAuth2/OIDC Flow: Implements OpenID Connect with PKCE for secure authentication
- Automatic Token Refresh: Tokens are automatically refreshed before expiration
- Test Users (for authentication testing, no sample data):
  - `[email protected]` / `password123` (Regular user)
  - `[email protected]` / `admin123` (Admin user)
- Sample Users (with transaction data - use these to explore the app):
  - `[email protected]` / `password123`
  - `[email protected]` / `password123`
  - Plus 48 more seeded users with realistic transaction histories
Access Points:
- Frontend: http://localhost:3000 (redirects to Keycloak login)
- Keycloak Admin: http://localhost:8080 (admin / admin)
- API Docs: http://localhost:8000/docs
Development Mode - Auth Bypass
For local development, you can bypass authentication:
```bash
# Set environment variables for bypass mode
BYPASS_AUTH=true VITE_BYPASS_AUTH=true VITE_ENVIRONMENT=development make build-run-local
```
In bypass mode:
- ✅ No login required - automatic authentication as dev user
- ✅ Yellow "DEV MODE - Authentication Bypassed" banner visible
- ✅ Faster development iteration
⚠️ NOT for production use
Switching Between Modes:
```bash
# Production mode (Keycloak authentication)
make build-run-local

# Development mode (auth bypass)
BYPASS_AUTH=true VITE_BYPASS_AUTH=true VITE_ENVIRONMENT=development make build-run-local
```
Environment Variables:
| Variable | Values | Description |
|---|---|---|
| `BYPASS_AUTH` | `true`/`false` | Backend auth bypass |
| `VITE_BYPASS_AUTH` | `true`/`false` | Frontend auth bypass |
| `VITE_ENVIRONMENT` | `development`/`staging`/`production` | Environment mode |
| `KEYCLOAK_URL` | URL | Keycloak server URL (default: `http://localhost:8080`) |
```bash
make run-local       # Start with registry images
make build-local     # Build images from source
make build-run-local # Build and start
make stop-local      # Stop all services
make logs-local      # View service logs
make reset-local     # Reset with fresh data
```
For local development without containers, use these pnpm commands:
```bash
# Install dependencies
pnpm setup

# Start in development mode (auth bypassed)
pnpm start:dev

# Or start individual services
pnpm backend:setup      # Setup database
pnpm backend:start      # Start API (port 8002, auth bypass)
pnpm --filter @*/ui dev # Start UI (port 3000)
```

```bash
# Start with Keycloak authentication
pnpm start:prod

# Access points:
# - Frontend: http://localhost:3000
# - API: http://localhost:8002
# - Keycloak: http://localhost:8080
```

```bash
# With Keycloak authentication (default)
pnpm dev:containers:auth

# With auth bypass (no login required) - fastest iteration
pnpm dev:containers:noauth

# Standard container startup (without rebuild)
pnpm dev:containers
```
Or using Make directly:
```bash
# Build and run with Keycloak authentication (default)
make build-run-local

# Build and run with auth bypass (no authentication)
BYPASS_AUTH=true VITE_BYPASS_AUTH=true VITE_ENVIRONMENT=development make build-run-local

# Run without rebuilding
make run-local
```

```bash
# Database management
pnpm db:start      # Start PostgreSQL container
pnpm db:stop       # Stop PostgreSQL container
pnpm db:upgrade    # Run migrations
pnpm db:seed       # Load sample data
pnpm db:verify     # Verify database connection

# Authentication
pnpm auth:start                     # Start Keycloak container
pnpm auth:stop                      # Stop Keycloak container
pnpm auth:setup-keycloak            # Setup Keycloak realm/client
pnpm auth:setup-keycloak-with-users # Setup Keycloak with DB users

# Code quality
pnpm lint          # Run all linters
pnpm lint:fix      # Auto-fix linting issues
pnpm format        # Format code
pnpm test          # Run tests
pnpm type-check    # Run TypeScript checks
```
Note: Some features in this quickstart require `clusterAdmin` privileges, particularly for enabling and setting up the model registry. See Required user permissions for details.
Using pre-built images:
```bash
make deploy
```
Using Quay.io instead of the OpenShift internal registry:
```bash
# 1) Authenticate to Quay (recommended: use a robot account token)
make REGISTRY_URL=quay.io QUAY_USERNAME=<quay-user-or-robot> QUAY_TOKEN=<token> login

# 2) Build + push to your Quay org + deploy
make REGISTRY_URL=quay.io REPOSITORY=<your-quay-org> IMAGE_TAG=<tag> build-deploy
```
Using the OpenShift internal registry instead of Quay.io:
```bash
# Login and setup
# IMPORTANT: For OpenShift's internal registry, set REGISTRY_URL once and reuse it.
export REGISTRY_URL="$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')"
make login
make build-deploy
```
Alternatively, step through the build, push, and deploy stages individually:
```bash
# Login and setup
make login
make create-project

# Build and push images
make build-all
make push-all

# Deploy
make deploy
```
Expected outcome:
- ✓ Helm chart deployed successfully
- ✓ All pods running
- ✓ Routes created
```bash
make status    # Check deployment status
make logs-api  # View API logs
make logs-ui   # View UI logs
```
The full set of OpenShift management targets:
```bash
make deploy    # Deploy to OpenShift
make undeploy  # Remove deployment
make status    # Check deployment status
make logs-api  # View API logs
make logs-ui   # View UI logs
```
After starting the application with `make run-local`, you can test alert rules interactively:
```bash
make list-alert-samples
```
This shows all available test scenarios with their descriptions, such as:
- "Alert when spending more than $500 in one transaction"
- "Alert me if my dining expense exceeds the average of the last 30 days by more than 40%"
- "Alert me if a transaction happens outside my home state"
```bash
make test-alert-rules
```
This command provides:
- 📋 Alert Rule Menu showing alert rule descriptions
- 📊 Data preview with realistic transaction data adjusted to current time
- 🔍 User context showing the test user profile and transaction history
- ✅ Confirmation prompt before running the actual test
Step by step:
1. Start the application:
   ```bash
   make run-local
   ```
2. Browse available test scenarios:
   ```bash
   make list-alert-samples
   ```
3. Run interactive testing:
   ```bash
   make test-alert-rules
   ```
4. Select an alert rule by number (1-16)
5. Review the data preview showing exactly what will be tested
6. Confirm to proceed with the test
7. Watch the complete validation and creation process
The test process:
- Seeds database with realistic user and transaction data
- Validates the alert rule using the NLP validation API
- Creates the alert rule if validation passes
- Shows step-by-step results including SQL queries and processing steps
Note: Make sure the API server is running (`make run-local`) before testing alert rules.
After confirming a rule test:
- The system sends a test notification via the configured test SMTP server.
- To verify:
  - Open the SMTP server Web UI: 👉 http://localhost:3002
  - Check the inbox for the test email.
  - Open the email to confirm:
    - The rule name/description is included.
    - The transaction details that triggered the rule are shown.
Expected outcome:
- ✓ Email received in SMTP Web UI
- ✓ Alert contains rule description and transaction details
- ✓ Notification delivered within seconds of rule trigger
By completing this quickstart, you have:
- ✓ Deployed a fully functional AI-powered transaction monitoring system
- ✓ Understood the core platform architecture and components
- ✓ Created and tested natural language alert rules
- ✓ Validated end-to-end alert notification delivery
- ✓ Learned how to customize the system for your own use cases
For Development Teams:
- Review the DEVELOPER_GUIDE.md for development setup and workflow
- Explore the API documentation at http://localhost:8000/docs
- Experiment with customizing alert rule prompts
- Set up additional notification channels
For Organizations Planning Production Deployment:
- Plan your transition from local to OpenShift deployment
- Integrate with your existing transaction data sources
- Establish evaluation criteria and quality metrics for your use case
- Review authentication configuration with Keycloak
For Customizing to Your Use Case:
- Modify NLP prompts to handle domain-specific rule types
- Add custom transaction categories for your industry
- Build integration with your ITSM or notification systems
- Develop use-case-specific evaluation metrics
You can stop the deployed services by running:
```bash
# Stop local containers
make stop-local

# Remove OpenShift deployment
make undeploy
```
This will remove all deployed services, pods, and resources.
Scalability: The Spending Transaction Monitor is designed for scalability using standard Kubernetes and cloud-native patterns. All core components can be scaled using familiar Kubernetes techniques: horizontal pod autoscaling, replica sets, and resource limits.
Component Scaling:
- API Service: Scales horizontally with multiple FastAPI workers per pod and multiple pod replicas
- Database: PostgreSQL with connection pooling and read replicas for high-throughput scenarios
- UI: Static assets can be served via CDN for global distribution
Performance Considerations:
- Transaction evaluation is optimized for real-time processing
- NLP rule parsing leverages caching for frequently used patterns (see the sketch after this list)
- Notification delivery is asynchronous to avoid blocking transaction processing
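As a tiny illustration of the parse-caching idea mentioned above (a sketch only; the project may cache at a different layer, for example in the database):

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def parse_rule(text: str) -> str:
    """Stand-in for the expensive LLM call that turns a rule into SQL."""
    print(f"parsing: {text!r}")           # fires only on cache misses
    return f"-- SQL for: {text}"

parse_rule("single txn > $500")  # miss: calls the parser
parse_rule("single txn > $500")  # hit: returned from cache
```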
Security considerations for production deployments:
- Authentication: Keycloak provides OAuth2/OIDC with PKCE for secure user authentication
- API Security: All endpoints protected with JWT token validation (see the sketch after this list)
- Database: Credentials managed via environment variables and Kubernetes secrets
- Network: Internal service communication isolated within Kubernetes namespace
- Sensitive Data: Transaction data encryption at rest and in transit
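For the JWT validation point above, a minimal FastAPI dependency might look like the following. This is a sketch assuming PyJWT and Keycloak's standard JWKS endpoint; the realm name and audience handling are placeholders, not the project's actual configuration:

```python
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer()
# Keycloak publishes signing keys at <realm>/protocol/openid-connect/certs;
# "spending-monitor" is a placeholder realm name.
jwks = jwt.PyJWKClient(
    "http://localhost:8080/realms/spending-monitor/protocol/openid-connect/certs"
)

def current_user(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> dict:
    """Validate the bearer token against Keycloak's signing keys."""
    try:
        key = jwks.get_signing_key_from_jwt(creds.credentials).key
        return jwt.decode(
            creds.credentials, key, algorithms=["RS256"],
            options={"verify_aud": False},  # audience checks omitted in this sketch
        )
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid token")

@app.get("/api/me")
def me(user: dict = Depends(current_user)):
    return {"sub": user.get("sub"), "email": user.get("email")}
```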
For production deployments, consider:
- Enabling TLS for all external endpoints
- Configuring network policies to restrict pod-to-pod communication
- Managing secrets through a vault solution
- Implementing audit logging for compliance requirements
Now that you have the system running, you can dive deeper into specific components:
Guides:
- DEVELOPER_GUIDE.md - Development setup and workflow
- KEYCLOAK_MANAGEMENT.md - Keycloak configuration and management
- AppleSiliconPodmanSetup.md - Mac M-series troubleshooting
Package Documentation:
- packages/api/README.md - API service documentation
- packages/ui/README.md - Frontend documentation
- packages/db/README.md - Database schema and migrations
Deployment:
- deploy/OPENSHIFT_DEPLOYMENT.md - OpenShift deployment guide
- deploy/helm/ - Helm charts for Kubernetes deployment
- Industry: Banking and securities
- Product: OpenShift AI
- Use case: Transaction monitoring, Anomaly detection, Personalized alerts
- Contributor org: Red Hat
Contributions are welcome! Please fork the repo and submit a PR.
See our CONTRIBUTING.md for guidelines.
This project is licensed under the Apache License 2.0.
Thank you for using the Spending Transaction Monitor Quickstart! We hope this guide helps you successfully deploy AI-driven transaction monitoring for your organization.