This project is a modular and configurable pipeline to train binary image classifiers using Convolutional Neural Networks (CNNs) with transfer learning.
This tool is intended for researchers, students, or developers who need to quickly create custom classifiers by simply dropping labeled images into folders and running a single command.
- Clean, modular architecture based on Python and TensorFlow.
- Uses MobileNetV2 (or other Keras models) as the backbone.
- Custom classification head defined via configuration.
- Data augmentation and fine-tuning support.
- Easy-to-use training and evaluation workflow.
- Reproducibility and portability via `config.py`.
Install the dependencies in a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # or .\venv\Scripts\activate on Windows
pip install -r requirements.txt
```

Project structure:

```
cnn-classifier/
│
├── core/               # Core logic (dataset, training, evaluation, model...)
├── data/               # Training and testing images
│   ├── train/
│   │   ├── positive/   # Class 1 (e.g., happy face)
│   │   └── negative/   # Class 0 (e.g., sad face)
│   └── test/
│       ├── positive/
│       └── negative/
│
├── models/             # Trained models will be saved here
├── config.py           # Central configuration
├── run.py              # Main entry point
├── requirements.txt    # Dependencies
└── README.md
```
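The directory layout doubles as the labeling scheme: each image's class comes from its parent folder. A minimal, hypothetical sketch of how a loader could collect `(path, label)` pairs from this layout in pure Python (the actual logic lives in `core/` and may differ):

```python
from pathlib import Path

# Map folder names to binary labels, mirroring the data/ layout above.
LABELS = {"positive": 1, "negative": 0}

def collect_samples(split_dir):
    """Return a list of (image_path, label) pairs for one split.

    split_dir is e.g. "data/train" or "data/test"; each class lives in a
    subfolder named after its label ("positive" or "negative").
    """
    samples = []
    for class_name, label in LABELS.items():
        class_dir = Path(split_dir) / class_name
        for image_path in sorted(class_dir.glob("*")):
            # Keep only common image extensions, skip stray files.
            if image_path.suffix.lower() in {".jpg", ".jpeg", ".png"}:
                samples.append((image_path, label))
    return samples
```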
To run the training and evaluation with the default configuration:

```bash
python run.py
```

This will:

- Load the training and testing datasets from `data/train/` and `data/test/`.
- Train a binary image classifier with transfer learning.
- Fine-tune the last N layers of the base model.
- Evaluate it using accuracy, F1 score, and a confusion matrix.
- Save the trained model in the `models/` folder.
You can fully control the pipeline through the `config.py` file. Key options include:

```python
BASE_MODEL_NAME = "MobileNetV2"

CLASSIFIER_HEAD = [
    {"type": "GlobalAveragePooling2D"},
    {"type": "Dense", "units": 128, "activation": "relu"},
    {"type": "Dropout", "rate": 0.5},
    {"type": "Dense", "units": 64, "activation": "relu"},
    {"type": "Dropout", "rate": 0.3},
    {"type": "Dense", "units": 1, "activation": "sigmoid"}
]
```

✅ Note: The list of supported base models is defined in `core/preprocessing.py`. It includes popular architectures from `tensorflow.keras.applications`, such as `MobileNetV2`, `ResNet50`, and `VGG16`.
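Because each `CLASSIFIER_HEAD` entry is a plain dict, the model builder only needs a name-to-constructor dispatch. A hypothetical pure-Python sketch of that pattern (in the real pipeline the registry entries would presumably be `tensorflow.keras.layers` classes rather than the stand-in lambdas used here):

```python
# Hypothetical registry standing in for tensorflow.keras.layers classes.
# Each stand-in just records the layer type and its keyword arguments.
LAYER_REGISTRY = {
    "GlobalAveragePooling2D": lambda **kw: ("GlobalAveragePooling2D", kw),
    "Dense": lambda **kw: ("Dense", kw),
    "Dropout": lambda **kw: ("Dropout", kw),
}

def build_head(head_config):
    """Instantiate each layer described in the config, in order."""
    layers = []
    for spec in head_config:
        spec = dict(spec)              # copy so the config isn't mutated
        layer_type = spec.pop("type")  # remaining keys become kwargs
        layers.append(LAYER_REGISTRY[layer_type](**spec))
    return layers

head = build_head([
    {"type": "GlobalAveragePooling2D"},
    {"type": "Dense", "units": 128, "activation": "relu"},
    {"type": "Dropout", "rate": 0.5},
])
```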
```python
IMG_SIZE = 128

AUGMENTATION_CONFIG = {
    "rotation_range": 25,
    "zoom_range": [0.85, 1.3],
    "horizontal_flip": True
}

INITIAL_EPOCHS = 40
FINE_TUNE_EPOCHS = 20
LEARNING_RATE_INITIAL = 1e-4
LEARNING_RATE_FINE_TUNE = 1e-5
UNFROZEN_LAYERS = 20

THRESHOLD = 0.5
```

After training, a model file (e.g., `happiness_classifier_model.keras`) will be saved in the `models/` directory.
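The two training phases differ in what is trainable: initially the backbone is frozen, then fine-tuning unfreezes its last `UNFROZEN_LAYERS` layers and continues at the lower learning rate. A minimal sketch of the freezing logic (in Keras this would set `layer.trainable` on `base_model.layers`; the function name here is illustrative, not the pipeline's actual API):

```python
def fine_tune_plan(num_layers, unfrozen_layers):
    """Return per-layer trainable flags for the fine-tuning phase.

    Only the last `unfrozen_layers` layers of a backbone with
    `num_layers` layers are marked trainable; the rest stay frozen.
    """
    return [i >= num_layers - unfrozen_layers for i in range(num_layers)]

# A hypothetical 100-layer backbone with UNFROZEN_LAYERS = 20:
flags = fine_tune_plan(100, 20)
```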
You can load it later for inference or evaluation:

```python
from tensorflow.keras.models import load_model

model = load_model("models/happiness_classifier_model.keras")
```

Prepare your dataset like this:
```
data/train/positive/ → Images of Will Smith
data/train/negative/ → Images of other people
data/test/positive/  → Will Smith (test set)
data/test/negative/  → Other people (test set)
```
Then run:

```bash
python run.py
```

You'll get output like:
```
✅ Accuracy: 0.9200 — F1 Score: 0.9231
🧩 Confusion matrix:
[[44  6]
 [ 2 48]]
```
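The reported numbers follow directly from the confusion matrix. Reading it in the usual scikit-learn orientation (rows are true classes, columns are predictions), `[[44 6], [2 48]]` gives 44 true negatives, 6 false positives, 2 false negatives, and 48 true positives. A quick check in plain Python:

```python
# Confusion matrix in scikit-learn orientation: [[TN, FP], [FN, TP]].
tn, fp = 44, 6
fn, tp = 2, 48

accuracy = (tp + tn) / (tp + tn + fp + fn)   # 92 / 100
precision = tp / (tp + fp)                   # 48 / 54
recall = tp / (tp + fn)                      # 48 / 50
f1 = 2 * precision * recall / (precision + recall)

print(f"Accuracy: {accuracy:.4f}, F1 Score: {f1:.4f}")
# → Accuracy: 0.9200, F1 Score: 0.9231
```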
- The `models/` and `data/` folders are tracked but empty. You should place your own data and trained models there.
- These folders contain `.gitkeep` files to preserve the structure in Git.
This project is released under the MIT License.
You are free to use, modify, and distribute it — with attribution.
Developed by Francisco Jesús Montero Martínez
For suggestions, improvements, or collaboration, feel free to reach out.