Official implementation of our ICLR 2026 paper: Sun, Chenhao, Yuhao Mao, and Martin Vechev. "Dual Randomized Smoothing: Beyond Global Noise Variance."
Our codebase builds on several previous works (Diffusion Denoised Smoothing, Improved Diffusion, Guided Diffusion, ACR Weakness, and locuslab/smoothing).
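At inference time, dual randomized smoothing uses two smoothed models instead of one: a smoothed sigma estimator first picks a per-input noise level from a candidate set, and the classifier is then smoothed at that level. The following toy sketch illustrates the idea only; all names, signatures, and the 1-D stand-in models are ours, not the repository's API.

```python
import random

# Toy sketch only: dual randomized smoothing chains TWO smoothed predictions --
# a sigma estimator picks a per-input noise level from a candidate set, then
# the classifier is smoothed at that level. Names here are illustrative.
SIGMA_CAND = [0.25, 0.5, 1.0]

def smoothed_vote(model, x, sigma, n=100):
    """Majority vote of `model` over n Gaussian perturbations of x."""
    votes = {}
    for _ in range(n):
        noisy = x + random.gauss(0.0, sigma)  # per-coordinate in practice
        out = model(noisy)
        votes[out] = votes.get(out, 0) + 1
    return max(votes, key=votes.get)

def dual_rs_predict(sigma_estimator, classifier, x, est_sigma=1.0):
    sigma = smoothed_vote(sigma_estimator, x, est_sigma)   # pick sigma for x
    return smoothed_vote(lambda z: classifier(z, sigma), x, sigma)

# Tiny 1-D stand-ins for real networks:
est = lambda z: SIGMA_CAND[1]   # always proposes sigma = 0.5
clf = lambda z, s: int(z > 0)   # threshold "classifier"
print(dual_rs_predict(est, clf, 2.0))
```

The real pipeline additionally certifies both stages (Steps 5 and 6 below) so that the chosen sigma and the final label each come with a statistical guarantee.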
| Directory | Description |
|---|---|
| `certify/` | Sigma estimator and classifier certification results |
| `code/` | Training and certification code |
| `data/` | Pre-generated optimal sigma datasets |
| `logs/` | Training logs and checkpoints |
| `models/` | Off-the-shelf diffusion and classification models |
| `reproduce/` | Data and code to reproduce figures and tables in our paper |
| `scripts/` | Scripts to run experiments |
```bash
conda create -n dual_rs python=3.9
conda activate dual_rs
conda install pytorch torchvision cudatoolkit=[VERSION] -c pytorch  # choose the CUDA version for your system
pip install numpy pandas scipy statsmodels pillow tqdm
pip install tensorboardX timm transformers
pip install matplotlib seaborn  # for reproducing figures
```

- Run `bash scripts/prepare_models.sh` to download off-the-shelf diffusion models into `models/`.
- The classifier models will be downloaded automatically on the first pipeline run.
- The training-based model must be downloaded manually from this link (from the paper Sun, Chenhao, et al. "Average certified radius is a poor metric for randomized smoothing."). Download `acr_weakness/checkpoints/cifar10/1.0.pth.tar` and place it at `models/train_based/1.0.pth.tar`.
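Before launching the pipeline, it can be worth confirming that the installed PyTorch build actually sees a CUDA device. This one-off check is not part of the repository:

```shell
# Sanity check (not part of the repo): confirm PyTorch and CUDA are usable.
python - <<'EOF'
try:
    import torch
    print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed in this environment")
EOF
```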
We provide a pre-generated dataset in `data/`. To generate your own instead, run:
```bash
# Via Slurm
sbatch scripts/build_sigma_label_map.sh "0.25 0.5 1.0" aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 certify/classifier/train/100map 100 train
sbatch scripts/build_sigma_label_map.sh "0.25 0.5 1.0" aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 certify/classifier/train/100map 100 test

# Or directly
bash scripts/build_sigma_label_map.sh "0.25 0.5 1.0" aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 certify/classifier/train/100map 100 train
bash scripts/build_sigma_label_map.sh "0.25 0.5 1.0" aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 certify/classifier/train/100map 100 test
```

Iteration 0 -- train the sigma estimator and fine-tune the classifier:

```bash
bash scripts/iter0_pipeline.sh
```

Once complete, the log directory will contain a timestamp, e.g., `logs/sigma_est/cifar10/num_2/0.250_0.500_1.000/noise_1.0/100_map/softce_con_lbd40.0_eta0.5_adp_max/cifar_resnet110/1/<timestamp>`.

Iteration 1 -- retrain the sigma estimator using the fine-tuned classifier:

```bash
bash scripts/iter1_pipeline.sh <timestamp>
```

For users not on a Slurm cluster, below are the individual steps of the pipeline.
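Conceptually, the optimal-sigma labels built above assign each sample the candidate sigma that certifies the largest radius. As an illustration only (the variable names and the toy radii below are assumptions, not the repository's file format), the selection amounts to a per-sample argmax:

```python
# Illustrative only: select the per-sample optimal sigma from certified radii.
# `radii[s][i]` is assumed to hold the certified radius of sample i under
# candidate sigma s; a negative radius marks abstention.
sigma_cand = [0.25, 0.5, 1.0]
radii = {
    0.25: [0.30, -1.0, 0.10],
    0.5:  [0.25,  0.40, -1.0],
    1.0:  [0.05,  0.55, -1.0],
}

labels = []
for i in range(3):
    best = max(sigma_cand, key=lambda s: radii[s][i])
    labels.append(best if radii[best][i] >= 0 else None)  # None: no sigma certifies

print(labels)
```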
Step 1: Mapped classification certification (for optimal sigma dataset construction)
```bash
# Train set
python code/certify_classifier_map.py \
    --vit_path <aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 or finetuned classifier checkpoint> \
    --sigma 0.25 \
    --skip 1 \
    --map_from_N 100 \
    --N 10000 \
    --alpha 0.0005 \
    --train_set \
    --outfile certify/classifier/train/100map/noise_0.25.tsv

# Test set
python code/certify_classifier_map.py \
    --vit_path <aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 or finetuned classifier checkpoint> \
    --sigma 0.25 \
    --skip 1 \
    --map_from_N 100 \
    --N 10000 \
    --alpha 0.0005 \
    --outfile certify/classifier/test/100map/noise_0.25.tsv
```

Step 2: Sigma estimator training
```bash
python code/train_sigma_est.py \
    cifar10 \
    cifar_resnet110 \
    --noise_sd 1.0 \
    --num_noise_vec 2 \
    --sigma_cand 0.25 0.5 1.0 \
    --loss_cl softce \
    --loss_con \
    --adp_con_wght max \
    --lbd 40.0 \
    --class_weights \
    --sigma_label_dir data/sigma_label/map10e4 \
    --mapped_from 100 \
    --timestamp <yyyymmdd_hhmmss>
```

Step 3: Sigma prediction (used for classifier fine-tuning)
```bash
python code/predict.py \
    <sigma_estimator_checkpoint_path> \
    1.0 \
    <prediction_folder>/predict_train.tsv \
    --N=100 \
    --batch_size=1600 \
    --split=test \
    --alpha 0.0005 \
    --sigma_cand 0.25 0.5 1.0 \
    --skip=1
```

Step 4: Classifier fine-tuning
```bash
python code/finetune_vit.py \
    cifar10 \
    --epochs 15 \
    --batch 128 \
    --num_noise_vec 1 \
    --sigma_label_path <prediction_folder> \
    --sigma_cand 0.25 0.5 1.0 \
    --id 1
```

Step 5: Classification certification (N=10,000, for the final radius)
```bash
python code/certify_classifier.py \
    --vit_path <aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 or finetuned classifier checkpoint> \
    --sigma 0.25 \
    --skip 1 \
    --N 10000 \
    --alpha 0.0005 \
    --outfile certify/classifier/test/base/noise_0.25.tsv
```

Step 6: Sigma estimation certification (N=10,000, for the final radius)
```bash
python code/certify_sigma_est.py \
    <sigma_estimator_checkpoint_path> \
    1.0 \
    <sigma_est_certify_result_folder>/noise_1.0.tsv \
    --N=10000 \
    --split=test \
    --sigma_cand 0.25 0.5 1.0 \
    --alpha 0.0005 \
    --skip=1
```

Since the ImageNet pipeline typically requires multiple GPUs, we provide only the core Python commands and leave resource allocation to the user.
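On both datasets, the certification steps rest on the standard randomized-smoothing bound of Cohen et al. (inherited here via locuslab/smoothing): given a lower confidence bound p_A on the top-class probability under Gaussian noise of scale sigma, the certified L2 radius is sigma * Phi^{-1}(p_A). A minimal stdlib illustration (not the repository's code):

```python
from statistics import NormalDist

def certified_radius(sigma: float, p_a_lower: float) -> float:
    """L2 certified radius R = sigma * Phi^{-1}(p_A_lower); abstain if p_A <= 1/2."""
    if p_a_lower <= 0.5:
        return -1.0  # cannot certify this input
    return sigma * NormalDist().inv_cdf(p_a_lower)

# The radius scales linearly with sigma for a fixed confidence bound:
for sigma in (0.25, 0.5, 1.0):
    print(sigma, round(certified_radius(sigma, 0.99), 4))
```

This is why the choice of sigma matters per input: a larger sigma can certify a larger radius, but only if the classifier remains confident under the heavier noise.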
Step 1: Mapped classification certification (for optimal sigma dataset construction)
```bash
python code/certify_in_classifier_map.py \
    --sigma 0.5 \
    --skip 1 \
    --map_from_N 100 \
    --N 10000 \
    --alpha 0.0005 \
    --batch_size 16 \
    --split train \
    --outfile certify/classifier/train/imagenet/100map/noise_0.5.tsv
```

Step 2: Sigma estimator training
```bash
python -u code/train_sigma_est_in.py \
    imagenet \
    resnet50 \
    --noise_sd 1.0 \
    --num_noise_vec 2 \
    --sigma_cand 0.5 1.0 \
    --loss_cl ce \
    --loss_con \
    --lbd 10.0 \
    --eta 0.5 \
    --class_weights \
    --sigma_label_dir "data/sigma_label/imagenet/base" \
    --adp_con_wght min \
    --timestamp <yyyymmdd_hhmmss> \
    --workers 16 \
    --resume
```

Step 3: Sigma prediction (used for classifier fine-tuning)
```bash
python code/predict_in.py \
    <sigma_estimator_checkpoint_path> \
    0.25 \
    <prediction_folder>/predict_train.tsv \
    --N=20 \
    --batch_size=20 \
    --split=train \
    --sigma_cand 0.5 1.0 \
    --skip=1
```

Step 4: Classifier fine-tuning
```bash
python code/finetune_vit_in.py \
    imagenet \
    --lr 2e-5 \
    --epochs 1 \
    --wd 0.01 \
    --opt adamw \
    --batch 32 \
    --num_noise_vec 1 \
    --sigma_label_path <prediction_folder> \
    --sigma_cand 0.5 1.0 \
    --timestamp <yyyymmdd_hhmmss>
```

Step 5: Classification certification (N=10,000, for the final radius on 500 test samples)
```bash
python code/certify_in_classifier.py \
    --classifier_path <beit_large_patch16_512 or the finetuned classifier checkpoint> \
    --sigma 0.5 \
    --skip 100 \
    --N 10000 \
    --batch_size 16 \
    --split=test \
    --alpha 0.0005 \
    --outfile <classifier_certify_result_folder>/noise_0.5.tsv
```

Step 6: Sigma estimation certification (N=10,000, for the final radius on 500 test samples)
```bash
python code/certify_in_sigma_est.py \
    <sigma_estimator_checkpoint_path> \
    1.0 \
    <sigma_est_certify_result_folder>/noise_1.0.tsv \
    --batch_size=32 \
    --N=10000 \
    --split=test \
    --sigma_cand 0.5 1.0 \
    --alpha 0.0005 \
    --skip=100
```

All data and code for reproducing the figures and tables in our paper are provided in `reproduce/`. Each script in `reproduce/code/` is named after the figure or table it generates.
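The tables in the paper report certified accuracies, which can be read off per-sample certification TSVs. The sketch below assumes the locuslab/smoothing column layout (`idx`, `label`, `predict`, `radius`, `correct`, `time`); verify against this repository's actual output format before relying on it:

```python
import csv
import io

# Hedged: toy TSV in the locuslab/smoothing-style layout; the real files
# produced by the certification scripts above may differ.
tsv = """idx\tlabel\tpredict\tradius\tcorrect\ttime
0\t3\t3\t0.42\t1\t0:00:08
1\t5\t5\t0.10\t1\t0:00:08
2\t1\t7\t-1.0\t0\t0:00:08
"""

def certified_accuracy(rows, r):
    """Fraction of samples correctly classified with certified radius >= r."""
    rows = list(rows)
    hits = sum(1 for x in rows if int(x["correct"]) == 1 and float(x["radius"]) >= r)
    return hits / len(rows)

rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
print(certified_accuracy(rows, 0.25))
```

Sweeping `r` over a grid of radii yields the certified-accuracy curves shown in the figures.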
If you find this work useful, please cite:
```bibtex
@inproceedings{chenhao2026dual,
  title={Dual Randomized Smoothing: Beyond Global Noise Variance},
  author={Sun, Chenhao and Mao, Yuhao and Vechev, Martin},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://arxiv.org/abs/2512.01782}
}
```