LightDiff: Light the Night: A Multi-Condition Diffusion Framework for Unpaired Low-Light Enhancement in Autonomous Driving (CVPR 2024)

[Paper] [Supplement]

This is the official implementation of the CVPR 2024 paper "Light the Night: A Multi-Condition Diffusion Framework for Unpaired Low-Light Enhancement in Autonomous Driving".

Jinlong Li1*, Baolu Li1*, Zhengzhong Tu2, Xinyu Liu1, Qing Guo3, Felix Juefei-Xu4, Runsheng Xu5, Hongkai Yu1

1Cleveland State University, 2University of Texas at Austin, 3A*STAR, 4New York University, 5UCLA

Computer Vision and Pattern Recognition (CVPR), 2024

[Teaser figure]

Getting Started

Environment Setup

conda env create -f environment.yml
conda activate lightdiff
  • Follow the installation instructions of BEVDepth step by step.

Note: install the BEVDepth environment first; once it is installed successfully, set up the ControlNet environment on top of it.

Model Training

The training code is in "train.py" and the dataset code is in "", both of which closely follow ControlNet and are surprisingly simple. You need to set the paths in these Python files.

python train.py
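
Since the training script follows ControlNet, train.py will look roughly like ControlNet's tutorial_train.py. Below is a minimal sketch, assuming ControlNet's cldm modules are on the path; the initial checkpoint path and the dataset class are placeholders to replace with your own settings:

# Minimal ControlNet-style training sketch; the checkpoint path and
# the dataset class are assumptions -- set them to the repo's own files.
import pytorch_lightning as pl
from torch.utils.data import DataLoader
from cldm.model import create_model, load_state_dict
from cldm.logger import ImageLogger
from tutorial_dataset import MyDataset  # placeholder dataset class

model = create_model('./models/lightdiff_v15.yaml').cpu()
model.load_state_dict(load_state_dict('./models/control_sd15_ini.ckpt', location='cpu'))
model.learning_rate = 1e-5
model.sd_locked = True          # keep the Stable Diffusion backbone frozen
model.only_mid_control = False

dataloader = DataLoader(MyDataset(), num_workers=4, batch_size=4, shuffle=True)
logger = ImageLogger(batch_frequency=300)  # periodically logs sample images
trainer = pl.Trainer(gpus=1, precision=32, callbacks=[logger])
trainer.fit(model, dataloader)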

Model Testing

  • [Image enhancement]: We have prepared a nighttime dataset from nuScenes for low-light enhancement. Please download the testing data and our model checkpoint, and remember to set the paths in the testing script accordingly.

  • [3D object detection]: We evaluate with two state-of-the-art 3D perception methods, BEVDepth and BEVStereo, both trained on the nuScenes daytime training set.

python test.py   # using config file in ./models/lightdiff_v15.yaml
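
Because the config (./models/lightdiff_v15.yaml) follows ControlNet's, inference resembles ControlNet's sampling loop. A minimal single-image sketch, with an assumed checkpoint path and a plain single-condition layout (LightDiff is multi-condition, so the real cond dict in test.py will carry additional inputs such as the depth map and instruction prompt):

# Minimal ControlNet-style inference sketch; the checkpoint path and
# conditioning layout are assumptions, not the repo's exact code.
import cv2, torch, numpy as np
from cldm.model import create_model, load_state_dict
from cldm.ddim_hacked import DDIMSampler

model = create_model('./models/lightdiff_v15.yaml').cuda()
model.load_state_dict(load_state_dict('./checkpoints/lightdiff.ckpt', location='cuda'))
sampler = DDIMSampler(model)

# Input size should be a multiple of 64 for the UNet; resize if needed.
img = cv2.cvtColor(cv2.imread('night_input.jpg'), cv2.COLOR_BGR2RGB)
control = torch.from_numpy(img.astype(np.float32) / 255.0)
control = control.permute(2, 0, 1).unsqueeze(0).cuda()  # 1x3xHxW in [0, 1]

with torch.no_grad():
    cond = {"c_concat": [control],
            "c_crossattn": [model.get_learned_conditioning(['a driving scene, daytime'])]}
    h, w = control.shape[-2:]
    samples, _ = sampler.sample(50, 1, (4, h // 8, w // 8), cond, verbose=False)
    enhanced = model.decode_first_stage(samples)  # decode latents to image space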

Image Quality Evaluation

You need to set the paths in "image_noreference_score.py".

python image_noreference_score.py
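
The script computes no-reference quality scores on the enhanced images. As an illustration of what such scoring looks like, here is a sketch using the pyiqa package (an assumption; the repo script may rely on different metric implementations):

# Sketch of no-reference image-quality scoring with pyiqa
# (assumed stand-in for the repo's own metric code).
import torch
import pyiqa

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
niqe = pyiqa.create_metric('niqe', device=device)    # lower is better
musiq = pyiqa.create_metric('musiq', device=device)  # higher is better

# pyiqa metrics accept an image path or an NCHW tensor in [0, 1].
print('NIQE: ', niqe('enhanced/0001.jpg').item())
print('MUSIQ:', musiq('enhanced/0001.jpg').item())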

Data Preparation

The directory structure should be as follows.

── nuScenes
│   ├── maps
│   ├── samples
│   ├── sweeps
│   ├── v1.0-test
│   ├── v1.0-trainval
  • Then use the Python files in the nuscenes folder to process the nuScenes dataset; this produces the nuScenes images for the training and testing sets.

Training set

We select all 616 daytime scenes of the nuScenes training set, containing a total of 24,745 front-camera images, as our training set.

Testing set

We select all 15 nighttime scenes of the nuScenes validation set, containing a total of 602 front-camera images, as our testing set. For convenience, you can download the data from the validation set.
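
For illustration, this scene selection can be reproduced with the official nuscenes-devkit by filtering scene descriptions for "night" and applying the official train/val split. A minimal sketch (the repo's own scripts in the nuscenes folder are authoritative):

# Sketch: daytime train scenes and nighttime val scenes, front camera only.
import os
from nuscenes.nuscenes import NuScenes
from nuscenes.utils.splits import create_splits_scenes

nusc = NuScenes(version='v1.0-trainval', dataroot='./nuScenes', verbose=True)
splits = create_splits_scenes()  # official scene-name lists for 'train'/'val'
train_names, val_names = set(splits['train']), set(splits['val'])

def front_images(scene):
    """Collect CAM_FRONT keyframe image paths for one scene."""
    paths, token = [], scene['first_sample_token']
    while token:
        sample = nusc.get('sample', token)
        cam = nusc.get('sample_data', sample['data']['CAM_FRONT'])
        paths.append(os.path.join(nusc.dataroot, cam['filename']))
        token = sample['next']
    return paths

train_imgs, test_imgs = [], []
for scene in nusc.scene:
    night = 'night' in scene['description'].lower()
    if scene['name'] in train_names and not night:
        train_imgs += front_images(scene)   # daytime training images
    elif scene['name'] in val_names and night:
        test_imgs += front_images(scene)    # nighttime testing images

print(f'train: {len(train_imgs)}, test: {len(test_imgs)}')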

Multi-modality Data Generation

Instruction prompt

We obtain instruction prompts with LENS.

Depth map

We obtain depth maps for the training and testing images with High Resolution Depth Maps.
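
For illustration only, here is a sketch of generating a per-image depth map; it substitutes plain MiDaS via torch.hub for the High Resolution Depth Maps method actually used:

# Depth-map sketch using MiDaS (a simpler stand-in, not the repo's method).
import cv2
import torch

midas = torch.hub.load('intel-isl/MiDaS', 'DPT_Large')
midas.eval()
transform = torch.hub.load('intel-isl/MiDaS', 'transforms').dpt_transform

img = cv2.cvtColor(cv2.imread('frame.jpg'), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))            # 1xH'xW' relative inverse depth
    pred = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode='bicubic', align_corners=False).squeeze()

# Normalize to 8-bit and save as a grayscale depth image.
d = pred.cpu().numpy()
d = (255 * (d - d.min()) / (d.max() - d.min())).astype('uint8')
cv2.imwrite('frame_depth.png', d)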

Corresponding degraded dark light image for Training Set

We generate the corresponding degraded dark-light images on the fly during training, based on code from ICCV_MAET, which is integrated into the training-stage data processing.
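
As a much-simplified illustration of this kind of degradation (the actual MAET pipeline also inverts and reapplies the camera ISP, including white balance, color correction, and tone mapping), a sketch with made-up default parameters:

# Simplified MAET-style low-light degradation: darken in (approximately)
# linear space and add signal-dependent shot/read noise. Parameter values
# are illustrative assumptions, not the repo's settings.
import numpy as np

def degrade_dark(img, exposure=0.1, shot=0.01, read=0.0005, gamma=2.2, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    lin = (img.astype(np.float32) / 255.0) ** gamma    # sRGB -> ~linear
    lin = lin * exposure                               # underexpose
    var = lin * shot + read                            # shot + read noise variance
    lin = np.clip(lin + rng.normal(0.0, np.sqrt(var)), 0.0, 1.0)
    out = lin ** (1.0 / gamma)                         # back to sRGB
    return (out * 255.0 + 0.5).astype(np.uint8)        # quantize to 8-bit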

Although the degraded images may not precisely replicate the authentic appearance of real nighttime, the t-SNE distribution of our synthesized data is much closer to real nighttime than to real daytime, as shown below:

[t-SNE visualization comparing synthesized dark images with real nighttime and real daytime images]

Citation

If you use our work in your research, please cite the following paper:

@inproceedings{li2024light,
 title={Light the Night: A Multi-Condition Diffusion Framework for Unpaired Low-Light Enhancement in Autonomous Driving},
 author={Li, Jinlong and Li, Baolu and Tu, Zhengzhong and Liu, Xinyu and Guo, Qing and Juefei-Xu, Felix and Xu, Runsheng and Yu, Hongkai},
 booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
 pages={15205--15215},
 year={2024}
}

Acknowledgment

This code is modified from ControlNet-v1-1-nightly and BEVDepth. Thanks to the authors of both projects.
