This is the official code repository for the NeurIPS 2022 Datasets and Benchmarks Track paper "Urban Scene Understanding via Hyperspectral Images: Dataset and Benchmark".
For our classification models, we use SGD (lr = 0.001) as the optimizer.
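As a minimal sketch of that setup, assuming a standard PyTorch training loop (the model, band count, and patch size below are placeholders, not the repository's actual code):

```python
import torch
import torch.nn as nn

# Toy stand-in for a patch classifier (the real models are CNN_HSI,
# HybridSN, etc.); 128 bands and a 5x5 window are illustrative values.
model = nn.Sequential(nn.Flatten(), nn.Linear(128 * 5 * 5, 10))

# SGD with lr=0.001 as stated above; momentum and weight decay are not
# given in this README, so PyTorch defaults are kept.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

# One illustrative step on random data shaped like a batch of HSI patches.
x = torch.randn(4, 128, 5, 5)
loss = nn.functional.cross_entropy(model(x), torch.randint(0, 10, (4,)))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```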
- Download our HSICityV2 dataset and set the dataset path in `train.py`.
- Run `pip install -r requirements.txt` to install dependencies.
- Use `python tools/train.py --model [model] --model_name [model] --window_size [size]` to train models. If you want to train on multiple GPUs, please refer to our example `train.sh` for more information.
- For testing, run `python tools/test.py --model [model] --model_name [model] --window_size [size]`.
Benchmarks:
| model | model_name | window_size |
|---|---|---|
| CNN_HSI | CNN_HSI | 5 |
| HybridSN | HybridSN | 25 |
| HybridSN | HybridSN | 21 |
| Two_CNN | Two_CNN | 21 |
| RSSAN | RSSAN | 17 |
| RSSAN | RSSAN | 21 |
| JigSawHSI | JigSawHSI | 21 |
- SVM code is under the `classification/svm` folder. Run `python train_hsi.py` or `python train_rgb.py` to train HSI/RGB models.
- After training, you will get two SVM models, namely `SVC` and `LinearSVC`. Detailed model parameters can be seen in `train_(hsi|rgb).py`. The models will be saved in `.pkl` format.
- Run `python run.py --model [model] --log [log_file] --out [generate_dir]`. This will generate predictions in `generate_dir`.
- (Optional) Run `python put_palette.py --indir [indir] --outdir [outdir]` to generate visible prediction results.
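The SVM stage above can be sketched as follows, assuming the scikit-learn estimators of the same names (the features, labels, and file names here are illustrative, not taken from the repository):

```python
import pickle
import numpy as np
from sklearn.svm import SVC, LinearSVC

# Illustrative per-pixel features and labels; the real scripts build these
# from HSI or RGB data in train_hsi.py / train_rgb.py.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = rng.integers(0, 3, size=200)

# Two models, matching the SVC / LinearSVC pair mentioned above.
models = {"svc": SVC().fit(X, y), "linear_svc": LinearSVC(max_iter=10_000).fit(X, y)}

# Save each model in .pkl format, as the README describes.
for name, clf in models.items():
    with open(f"{name}.pkl", "wb") as f:
        pickle.dump(clf, f)

# Reload one model and predict, mimicking what run.py would do.
with open("svc.pkl", "rb") as f:
    restored = pickle.load(f)
pred = restored.predict(X[:5])
```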
Our segmentation benchmark is based on mmsegmentation. We put our benchmarking configs under `segmentation/experiments`.
- Read the MMSeg documentation to set up the mmsegmentation framework.
- Download our HSICityV2 dataset and put it under the `data` folder. Run `python tools/convert_datasets/hsicity2.py --root [root to HSICityV2]` to convert the dataset.
- Use `python tools/train.py [config]` (1 GPU) or `bash tools/dist_train.sh [config] [n]` (n GPUs) to train benchmarking models. Look up the table below to find the corresponding config. Note that every config is designed to run on one GPU; if you train with more than one GPU, scale down the iteration number accordingly (e.g., 1 GPU at 160k iters = 2 GPUs at 80k iters = 4 GPUs at 40k iters).
- Each config file contains the experiment settings, including optimizer, batch size, learning rate, etc. Please refer to "Tutorial 1: Learn about Configs" for more information.
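The iteration-scaling rule above keeps the total optimization work constant when the per-GPU batch size is fixed; a quick sanity check (this helper is illustrative, not part of the repository):

```python
def scaled_iters(base_iters: int, n_gpus: int) -> int:
    """Iterations to run on n_gpus, given a config written for 1 GPU."""
    # Total samples seen = iters * n_gpus * per_gpu_batch, so dividing the
    # iteration count by the GPU count keeps the total workload unchanged.
    return base_iters // n_gpus

print(scaled_iters(160_000, 2))  # 80000, matching the example above
print(scaled_iters(160_000, 4))  # 40000
```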
Benchmarks in the paper:
| Model Name | Config File |
|---|---|
| FCN (r50) | fcn_r50-d8_0.5x_80k_hsicity2hsi.py |
| FCN (r101) | fcn_r101-d8_0.5x_80k_hsicity2hsi.py |
| Deeplabv3p (r50) | deeplabv3plus_r50-d8_0.5x_80k_hsicity2hsi.py |
| HRNet (w48) | fcn_hr48_0.5x_80k_bare_hsicity2hsi.py |
| CCNet (r50) | ccnet_r50-d8_0.5x_80k_hsicity2hsi.py |
| PSPNet (r50) | pspnet_r50-d8_0.5x_80k_hsicity2hsi.py |
| SegFormer (mit-b5) | segformer_mit-b5_0.5x_160k_hsicity2hsi.py |
| RTFNet | rtfnet_r152-0.5x_80k_hsicity2.py |
| FuseNet | fusenet_vgg_0.5x_80k_hsicity2.py |
| MFNet | mfnet_0.5x_80k_hsicity2.py |
| FCN (r50) RGB | fcn_r50-d8_0.5x_80k_hsicity2rgb.py |
| FCN (r101) RGB | fcn_r101-d8_0.5x_80k_hsicity2rgb.py |
| Deeplabv3p (r50) RGB | deeplabv3plus_r50-d8_0.5x_80k_hsicity2rgb.py |
| HRNet (w48) RGB | fcn_hr48_0.5x_80k_bare_hsicity2rgb.py |
| CCNet (r50) RGB | ccnet_r50-d8_0.5x_80k_hsicity2rgb.py |
| PSPNet (r50) RGB | pspnet_r50-d8_0.5x_hsicity2rgb.py |
| SegFormer (mit-b5) RGB | segformer_mit-b5_0.5x_160k_hsicity2rgb.py |
Additional experiments in Rebuttal:
| Model Name | Config File |
|---|---|
| FCN-r50 (64->32) | fcn_r50-d8_0.5x_80k_hsicity2hsi_dconv32.py |
| PSPNet-r50 (64->32) | pspnet_r50-d8_0.5x_80k_hsicity2hsi_dconv32.py |
| FCN-r50 RGB (No pretraining) | fcn_r50-d8_0.5x_80k_hsicity2rgb_nopretrained.py |
| PSPNet-r50 RGB (No pretraining) | pspnet_r50-d8_0.5x_80k_hsicity2rgb_nopretrained.py |
| FCN-r50 (ValSet/Coarse) | fcn_r50-d8_0.5x_80k_hsicity2hsisub_coarse.py |
| FCN-r50 (ValSet/Fine) | fcn_r50-d8_0.5x_80k_hsicity2hsisub_fine.py |
| PSPNet-r50 (ValSet/Coarse) | pspnet_r50-d8_0.5x_80k_hsicity2hsisub_coarse.py |
| PSPNet-r50 (ValSet/Fine) | pspnet_r50-d8_0.5x_80k_hsicity2hsisub_fine.py |
Use `python tools/test.py (config) (trained_model) [--eval_hsi True] [--show-dir dirxxx] [--opacity 1]` to test trained models and generate segmentation results. Refer to `test.py` for more options.