A GPU-powered ensemble learning algorithm using hill climbing optimization to combine models for optimal predictive performance.
Built for speed. Tuned for performance. Inspired by chaos.
Hill Boost Ensemble finds the best-performing model on a dataset, and then iteratively adds other models only if they improve the overall performance. Think of it as smart ensembling with a ruthless edge.
Currently supports:
- ✅ XGBoost
- ✅ LightGBM
- ✅ CatBoost
- ✅ Keras-based MLP
Note: a GPU is strictly required. This repo was born and raised on the Kaggle GPU runtime.
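For reference, here's a rough sketch of how the four candidate families can be built with GPU acceleration turned on. The hyperparameters and exact constructor arguments are illustrative, not the repo's actual settings (and note that XGBoost ≥ 2.0 prefers `device="cuda"` over the older `tree_method="gpu_hist"`):

```python
# Illustrative GPU-enabled candidates -- assumed settings, not the repo's exact config.
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier
from tensorflow import keras

candidates = {
    "xgboost": XGBClassifier(tree_method="gpu_hist", n_estimators=500),
    "lightgbm": LGBMClassifier(device="gpu", n_estimators=500),
    "catboost": CatBoostClassifier(task_type="GPU", iterations=500, verbose=0),
}

def make_mlp(n_features: int) -> keras.Model:
    """A simple Keras MLP; TensorFlow places it on the GPU automatically when one is visible."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```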
How it works:
- Train all candidate models on the same dataset
- Evaluate performance using a custom metric (e.g., AUC, F1, accuracy)
- Start with the best single model
- Iteratively test all other models; include one only if it improves the ensemble
- Final prediction is the weighted sum of the selected models' outputs (see the sketch below)
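Here's a minimal, self-contained sketch of that hill-climbing loop, assuming you already have out-of-fold validation predictions per model. The names (`hill_climb_ensemble`, `oof_preds`, `score_fn`) are illustrative; the repo's actual weighting scheme and stopping rule may differ:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def hill_climb_ensemble(oof_preds, y_true, score_fn=roc_auc_score, max_iters=50):
    """Greedy hill climbing over per-model validation predictions.

    oof_preds: dict mapping model name -> np.ndarray of validation predictions.
    score_fn:  higher-is-better metric (AUC, F1, accuracy, ...).
    Returns the selected model names (repeats = implicit weights) and the final blend.
    """
    # Start from the single best model.
    selected = [max(oof_preds, key=lambda name: score_fn(y_true, oof_preds[name]))]
    best_score = score_fn(y_true, oof_preds[selected[0]])

    for _ in range(max_iters):
        blend = np.mean([oof_preds[name] for name in selected], axis=0)
        # Try adding each candidate and score the resulting (k+1)-model average.
        trial_scores = {
            name: score_fn(y_true, (blend * len(selected) + preds) / (len(selected) + 1))
            for name, preds in oof_preds.items()
        }
        best_name = max(trial_scores, key=trial_scores.get)
        if trial_scores[best_name] <= best_score:
            break  # no candidate improves the ensemble -- stop climbing
        selected.append(best_name)
        best_score = trial_scores[best_name]

    final_blend = np.mean([oof_preds[name] for name in selected], axis=0)
    return selected, best_score, final_blend
```

Allowing a model to be picked more than once is what yields the weights in the final blend: a model selected twice simply counts double in the average.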
Clone the repo:
```bash
git clone https://github.com/rwtarpit/HillBoost-Ensemble.git
cd HillBoost-Ensemble
```

We welcome contributions!
Check out CONTRIBUTING.md for setup steps, ideas, and good first issues to work on.