This project provides a command-line tool to parse machine learning models, benchmark them on CPU/GPU with datasets, and visualize their architecture using Netron.
## Features

- **Model Parsing**: extract model information using `MLParser`.
- **Benchmarking**: run performance tests with custom datasets on CPU and GPU via `BenchMark`.
- **Visualization**: launch a local Netron instance to inspect the model architecture.
- **CLI Interface**: simple interface with options for metrics, dataset benchmarking, and visualization.
## Installation

Make sure to install Poetry: https://python-poetry.org/

Clone this repository and install dependencies:

    git clone https://github.com/marcellobeltrami/mlInspect.git
    cd mlInspect
    poetry install
    poetry run python src/mlinspect/main.py -h  # prints the help message
## Usage

Run the project:

    poetry run python main.py --input_model <path_to_model> [options]
### Options

| Option | Description | Default |
| --- | --- | --- |
| `--input_model` | Path to the ML model file | Required |
| `--metrics` | Show model information parsed by `MLParser` | False |
| `--dataset_encoded_cpu` | Path to encoded dataset for CPU benchmarking | False |
| `--dataset_encoded_gpu` | Path to encoded dataset for GPU benchmarking | False |
| `--visualize` | Launch Netron to visualize the model | False |
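As a rough illustration, the documented flags could be exposed with `argparse` along these lines. This is a sketch only: the flag names and descriptions come from the options above, while the function name and defaults are assumptions, and the project's actual parser (in `mvp/cli.py`) may be implemented differently.

```python
import argparse

def build_parser():
    # Hypothetical reconstruction of the CLI documented above;
    # the real parser in mvp/cli.py may differ.
    parser = argparse.ArgumentParser(
        description="Parse, benchmark, and visualize ML models."
    )
    parser.add_argument("--input_model", required=True,
                        help="Path to the ML model file")
    parser.add_argument("--metrics", action="store_true",
                        help="Show model information parsed by MLParser")
    parser.add_argument("--dataset_encoded_cpu",
                        help="Path to encoded dataset for CPU benchmarking")
    parser.add_argument("--dataset_encoded_gpu",
                        help="Path to encoded dataset for GPU benchmarking")
    parser.add_argument("--visualize", action="store_true",
                        help="Launch Netron to visualize the model")
    return parser

# Example invocation mirroring the Usage section
args = build_parser().parse_args(["--input_model", "model.keras", "--metrics"])
print(args.input_model)  # model.keras
print(args.metrics)      # True
```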
## Limitations

- Supported model formats: `.keras`, `.h5`, `.pth`, and `.ph`.
- Encoded datasets must be `.npy` files saved from NumPy. See `tests/` for example input files.
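A minimal sketch of producing an encoded dataset in the required `.npy` format. The array shape, dtype, and filename here are illustrative assumptions; check `tests/` for the layout the tool actually expects.

```python
import numpy as np

# Illustrative encoded dataset: 100 samples with 32 features each.
# Shape, dtype, and filename are assumptions, not requirements of the tool.
features = np.random.rand(100, 32).astype(np.float32)
np.save("dataset_encoded.npy", features)  # writes dataset_encoded.npy

# Verify the file round-trips cleanly before passing it to the CLI.
loaded = np.load("dataset_encoded.npy")
print(loaded.shape)  # (100, 32)
```

The resulting file would then be passed via `--dataset_encoded_cpu dataset_encoded.npy` (or the GPU variant).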
## Project Structure

    ├── mvp/
    │   ├── ml_parser.py      # Model parsing logic
    │   ├── cli.py            # CLI handler (UX)
    ├── visual/
    │   └── view.py           # Visualization wrapper
    ├── benchmark/
    │   └── model_benc.py     # Benchmarking module
    ├── main.py               # Entry point
    ├── requirements.txt
    └── README.md
## Dependencies

    python = ">=3.12,<3.14"
    torch = "2.8.0"
    tensorflow = "2.20.0"
    tabulate = "0.9.0"
    pandas = "2.3.2"
    netron = "8.6.5"
    numpy = "2.3.3"