By Peiyun Hu, Aaron Huang, John Dolan, David Held, and Deva Ramanan
You can find our paper on CVF Open Access. If you find our work useful, please consider citing:
```
@inproceedings{hu2021safe,
  title={Safe Local Motion Planning with Self-Supervised Freespace Forecasting},
  author={Hu, Peiyun and Huang, Aaron and Dolan, John and Held, David and Ramanan, Deva},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={12732--12741},
  year={2021}
}
```
- Download the nuScenes dataset, including the CAN bus expansion, as we use the recorded vehicle state data for trajectory sampling. (Tip: the code assumes the data is stored under `/data/nuscenes`.)
- Install the required packages and libraries (via `conda` if possible), including `torch`, `torchvision`, `tensorboard`, `cudatoolkit-11.1`, `pcl>=1.9`, `pybind11`, `eigen3`, `cmake>=3.10`, `scikit-image`, and `nuscenes-devkit`. (Tip: verify the location of the Python binary with `which python`.)
- Compile the code for Lidar point cloud ground segmentation under `lib/grndseg` using CMake.
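The setup steps above might look roughly as follows; the environment name, Python version, and exact channels are illustrative assumptions, not prescribed by the repository:

```shell
# Sketch of an environment setup (names and versions are assumptions).
conda create -n freespace python=3.8 -y
conda activate freespace
conda install pytorch torchvision cudatoolkit=11.1 -c pytorch -c nvidia -y
conda install -c conda-forge pcl pybind11 eigen cmake scikit-image tensorboard -y
pip install nuscenes-devkit

# Standard out-of-source CMake build for the ground-segmentation code.
cd lib/grndseg
mkdir -p build && cd build
cmake ..
make -j"$(nproc)"
```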
- Run `preprocess.py` to generate ground segmentations.
- Run `precast.py` to generate future visible freespace maps.
- Run `rasterize.py` to generate BEV object occupancy maps and object "shadow" maps.
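To give a rough sense of what BEV occupancy and "shadow" maps are, here is a small self-contained sketch. The grid size, resolution, and the per-cell ray cast are simplified assumptions for illustration, not the repository's actual implementation:

```python
import numpy as np

def bev_occupancy(points, grid=64, res=0.5):
    """Rasterize (x, y) point coordinates (meters, ego frame) into a
    binary bird's-eye-view occupancy grid with the ego at the center cell."""
    occ = np.zeros((grid, grid), dtype=bool)
    idx = np.floor(points[:, :2] / res).astype(int) + grid // 2
    valid = (idx >= 0).all(axis=1) & (idx < grid).all(axis=1)
    occ[idx[valid, 0], idx[valid, 1]] = True
    return occ

def shadow_map(occ):
    """Mark free cells hidden behind occupied cells as seen from the grid
    center, via a coarse ray cast per cell (a simplification of visibility
    derived from real Lidar sweeps)."""
    grid = occ.shape[0]
    ci, cj = grid // 2, grid // 2
    shadow = np.zeros_like(occ)
    for i in range(grid):
        for j in range(grid):
            if occ[i, j]:
                continue
            n = max(abs(i - ci), abs(j - cj))
            if n == 0:
                continue
            # Sample interior points along the ray from the center to (i, j).
            ts = np.linspace(0.0, 1.0, n + 1)[1:-1]
            ray_i = np.round(ci + ts * (i - ci)).astype(int)
            ray_j = np.round(cj + ts * (j - cj)).astype(int)
            if occ[ray_i, ray_j].any():
                shadow[i, j] = True
    return shadow
```

For example, a single point 5 m ahead of the ego occupies one cell, and the cells directly behind it along the viewing ray are marked as shadow.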
For training, refer to `train.py`.
For testing, refer to `test.py`.
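As a rough illustration of how a sampled trajectory could be scored against a BEV occupancy (or forecasted freespace) map, here is a hedged sketch; the grid convention and the simple waypoint-counting cost are assumptions for illustration, not the logic in `train.py` or `test.py`:

```python
import numpy as np

def trajectory_cost(traj_xy, occ, res=0.5):
    """Count how many waypoints of an (N, 2) trajectory (meters, ego frame)
    fall into occupied cells of a square boolean BEV grid whose center cell
    is the ego vehicle; lower is safer."""
    grid = occ.shape[0]
    idx = np.floor(traj_xy / res).astype(int) + grid // 2
    valid = (idx >= 0).all(axis=1) & (idx < grid).all(axis=1)
    # Waypoints leaving the grid are treated as free here (an assumption).
    return int(occ[idx[valid, 0], idx[valid, 1]].sum())
```

Among a set of sampled trajectories, a planner would prefer the one with the lowest cost, e.g. a swerving sample over a straight one that drives through an occupied cell.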
Thanks to @tarashakhurana for help with the README.
