Welcome to the ETX Delivery Labs, a hands-on workshop introducing Artificial Intelligence (AI) concepts and tools within the Red Hat ecosystem.
This workshop is designed for developers, architects, and operations teams looking to explore how AI can be integrated with Red Hat technologies such as OpenShift, RHEL AI, Podman, and more. In this workshop, you will learn:
- The fundamentals of AI/ML practices in enterprise environments
- How to containerize and serve AI models using Podman and OpenShift
- Prompt engineering techniques to get the best from large language models (LLMs)
- Building and evaluating AI pipelines using tools like Ollama, Docling, and HuggingFace
- Guardrails and best practices for responsible AI usage
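As a small taste of the prompt-engineering topic above, here is a minimal, hypothetical sketch of assembling a structured chat-style prompt with a system instruction and few-shot examples. The `build_prompt` helper and the exact role names are illustrative assumptions, not part of the workshop code:

```python
def build_prompt(system, user, examples=None):
    """Assemble a chat-style message list: a system instruction,
    optional few-shot (question, answer) examples, then the user's query."""
    messages = [{"role": "system", "content": system}]
    for question, answer in (examples or []):
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user})
    return messages

msgs = build_prompt(
    system="You are a concise OpenShift assistant.",
    user="How do I list pods in a namespace?",
    examples=[("How do I log in?", "Use `oc login <server-url>`.")],
)
print(len(msgs))  # system + one example pair + user question = 4 messages
```

Structuring prompts this way (instruction first, examples next, question last) is one of the techniques covered in the Prompt Engineering module.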
Before you begin, make sure you've completed all setup steps outlined in the
Prerequisites Section
Start the workshop by clicking below:
Begin the Workshop
The workshop is broken into several self-contained modules:
- InstructLab Exercises: Learn how to build and serve fine-tuned models
- Prompt Engineering: Master the art of interacting with LLMs
- AI Guardrails: Explore methods for securing and validating model output
- Evals & Benchmarks: Measure and compare the performance of different models
- This workshop assumes basic familiarity with containers, CLI tools, and Git.
- If you're running in a sandbox environment or on cloud-based OpenShift, verify that your cluster resources meet the workshop's minimum requirements.
For questions or contributions, feel free to open a discussion or pull request.