| Name | Type | Clean | Dirty | Total |
|---|---|---|---|---|
| Ours (train) | Script | 15,242 | 11,456 | 26,698 |
| Ours (valid) | Script | 1,075 | 2,046 | 3,121 |
| Ours (test) | Script | 603 | 1,760 | 2,363 |
| Ours (all) | Script | 16,920 | 15,262 | 32,182 |
| Ours (adv_test) | Script | 0 | 5,332 | 5,332 |
The PyMalEvasion dataset is constructed by augmenting the PyPI Malregistry dataset with samples from VirusTotal (VT). You can use this script to extract the sources from the archived PyPI Malregistry. After extracting the sources, we filtered out files under 512 bytes, which typically contain harmless initialization or configuration scripts.
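The size filter can be sketched as follows (the 512-byte threshold is from above; the directory layout and the helper name `filter_sources` are assumptions for illustration):

```python
from pathlib import Path

MIN_SIZE = 512  # bytes; smaller scripts are typically harmless init/config files


def filter_sources(src_dir: str) -> list[Path]:
    """Return the .py files at or above the minimum size threshold."""
    return [
        p for p in Path(src_dir).rglob("*.py")
        if p.stat().st_size >= MIN_SIZE
    ]
```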
We further split the data into train/valid/test following a cluster-informed method: we apply shallow feature extraction, UMAP for dimensionality reduction, and HDBSCAN for the actual clustering. Finally, the splits are chosen such that all samples in a cluster belong to a single split, minimizing potential information leakage.
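The cluster-to-split assignment can be sketched as below. This assumes the per-sample cluster labels (e.g. HDBSCAN output) are already computed; the function name, ratios, and greedy strategy are illustrative assumptions, not the exact procedure used:

```python
import random
from collections import defaultdict


def cluster_split(cluster_ids, ratios=(0.8, 0.1, 0.1), seed=0):
    """Assign whole clusters to train/valid/test so no cluster spans splits.

    cluster_ids: per-sample cluster label (e.g. from HDBSCAN).
    Returns a list of split names, one per sample.
    """
    groups = defaultdict(list)
    for idx, cid in enumerate(cluster_ids):
        groups[cid].append(idx)

    clusters = list(groups.values())
    random.Random(seed).shuffle(clusters)

    names = ("train", "valid", "test")
    total = len(cluster_ids)
    targets = [r * total for r in ratios]
    counts = [0, 0, 0]
    assignment = [None] * total
    for members in clusters:
        # place this cluster in the split furthest below its target size
        s = max(range(3), key=lambda i: targets[i] - counts[i])
        counts[s] += len(members)
        for idx in members:
            assignment[idx] = names[s]
    return assignment
```

Because assignment happens at cluster granularity, near-duplicate samples (which cluster together) can never leak across the train/test boundary.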
- Heuristics (simple modifications to add comments, documentation, padding)
- LLM constrained via AST and RAG
- LLM unconstrained
For AST-based constrained generation, the LLM is instructed to generate an action (add/edit/delete) and a code snippet on which the action should take place. Then, the AST of the original script is updated using the (smaller) AST of the snippet.
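A minimal sketch of the "add" action using Python's standard `ast` module is shown below. The helper name `apply_add` is hypothetical, and edit/delete actions (which would have to locate the matching subtree in the original AST) are omitted:

```python
import ast


def apply_add(original_src: str, snippet_src: str) -> str:
    """Apply an 'add' action: append the snippet's AST nodes to the
    original script's module body, then unparse back to source."""
    tree = ast.parse(original_src)
    snippet = ast.parse(snippet_src)  # the (smaller) AST of the snippet
    tree.body.extend(snippet.body)
    return ast.unparse(tree)  # requires Python 3.9+
```

Working at the AST level guarantees the modified script still parses, which a raw text edit from an unconstrained LLM cannot guarantee.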
We employ three classification strategies: shallow (XGBoost on handcrafted features), CodeBERT (adapted from microsoft/CodeBERT, base model: microsoft/codebert-base), and LLM-based.
For the shallow classification we built 8 feature types and trained an XGBoost model for each of the 255 non-empty feature-type combinations (2^8 - 1). Models are trained with hyperparameter optimization (HPO) and 5-fold cross-validation.
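Enumerating the 255 combinations is a straightforward subset enumeration; a sketch is below. The feature-type names are placeholders, since the actual 8 feature types are not listed here:

```python
from itertools import combinations

# placeholder names; the actual 8 feature types used are not listed here
FEATURE_TYPES = [
    "strings", "imports", "entropy", "ast_stats",
    "identifiers", "literals", "calls", "size",
]


def feature_combinations(types):
    """Yield every non-empty subset of the feature types (2^n - 1 subsets)."""
    for k in range(1, len(types) + 1):
        yield from combinations(types, k)
```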
PyMalEvasion is an open source project, not a CrowdStrike product. As such, it carries no formal support, express or implied.

