👋 Welcome! This repo contains demos showcasing TrustyAI's guardrailing and model evaluation features within Red Hat OpenShift AI.
- Evaluation Quickstart: This demo quickly gets you started running an evaluation against a deployed model (see the first sketch after this list).
- Guardrails Quickstart: This demo quickly gets you started with three detectors for hate speech, gibberish, and jailbreak attempts (see the second sketch after this list).
- Custom Detectors: This demo shows how to create custom detectors in Python and provides an example of LLM self-reflection guardrailing (see the third sketch after this list).
- Lemonade Stand: A demo of manually configuring guardrails, as shown in the "Guardrails for AI models" video on the Red Hat YouTube channel.
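
TrustyAI evaluations are typically defined as `LMEvalJob` custom resources. Here is a rough sketch of submitting one from Python; the CR group/version and spec fields below are assumptions based on the TrustyAI operator's API, and the Evaluation Quickstart itself is the authoritative reference:

```python
# Sketch: submitting a TrustyAI LMEvalJob from Python.
# Assumes the TrustyAI operator is installed and you are logged in to the cluster.
# The CR group/version/plural and spec fields are assumptions; check the demo
# for the exact schema.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod

lm_eval_job = {
    "apiVersion": "trustyai.opendatahub.io/v1alpha1",
    "kind": "LMEvalJob",
    "metadata": {"name": "eval-demo"},
    "spec": {
        "model": "hf",  # evaluate a Hugging Face-style model
        "modelArgs": [{"name": "pretrained", "value": "google/flan-t5-base"}],
        "taskList": {"taskNames": ["arc_easy"]},  # lm-evaluation-harness task(s)
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="trustyai.opendatahub.io",
    version="v1alpha1",
    namespace="model-namespace",  # hypothetical namespace
    plural="lmevaljobs",
    body=lm_eval_job,
)
```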
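
For a flavor of what the Guardrails Quickstart's detectors look like on the wire, here is a minimal sketch of querying a deployed detector directly. The route URL, `detector-id` header, and payload shape are assumptions; the demo shows the exact endpoints your detectors expose:

```python
# Sketch: querying a deployed detector's contents API with requests.
# The route URL, "detector-id" header, and payload/response shapes are
# assumptions; see the demo for the real endpoints.
import requests

DETECTOR_ROUTE = "https://hate-speech-detector.example.com"  # hypothetical route

response = requests.post(
    f"{DETECTOR_ROUTE}/api/v1/text/contents",
    headers={"detector-id": "hate_speech", "Content-Type": "application/json"},
    json={"contents": ["I hate you!"], "detector_params": {}},
    timeout=30,
)
response.raise_for_status()

# Each input string maps to a list of detections (span, label, confidence).
for detections in response.json():
    for d in detections:
        print(d.get("detection"), d.get("score"))
```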
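
As for the Custom Detectors demo, a custom detector is essentially a small web service that answers the same contents API. Below is a toy skeleton, again assuming the endpoint path and response shape used above; the real scaffolding lives in the demo:

```python
# Sketch: the skeleton of a custom detector service in Python.
# The endpoint path and detection fields mirror the assumed detector API above;
# the Custom Detectors demo shows the actual scaffolding.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ContentsRequest(BaseModel):
    contents: list[str]
    detector_params: dict = {}

@app.post("/api/v1/text/contents")
def detect(req: ContentsRequest):
    results = []
    for text in req.contents:
        detections = []
        if "lemonade" in text.lower():  # toy rule standing in for real detection logic
            detections.append({
                "start": 0,
                "end": len(text),
                "text": text,
                "detection": "toy_keyword",
                "detection_type": "custom",
                "score": 1.0,
            })
        results.append(detections)  # one list of detections per input string
    return results
```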
If you run into problems, see the troubleshooting guide for common issues and their solutions.