TADA: Traversability Aware Domain Adaptive Semantic Segmentation

Environment Setup

First, please install CUDA 11.0.3, available at https://developer.nvidia.com/cuda-11-0-3-download-archive. It is required to build mmcv-full later.

For this project, we used Python 3.8.5. We recommend setting up a new virtual environment:

python -m venv ~/venv/tada
source ~/venv/tada/bin/activate

In that environment, the requirements can be installed with:

pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full==1.3.7  # requires the other packages to be installed first
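
After installing, a quick sanity check that the versions line up (a minimal snippet; run it inside the activated environment, and note that the exact torch version comes from requirements.txt):

import torch, mmcv

# Expect mmcv 1.3.7 and a CUDA 11.0 build of torch, per the setup above.
print('torch', torch.__version__, '| cuda', torch.version.cuda, '| mmcv', mmcv.__version__)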

Please download the MiT-B5 ImageNet weights provided by SegFormer from their OneDrive and put them in the folder TADA/.
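
To confirm the downloaded weights are readable (a small check; the filename mit_b5.pth is an assumption, so substitute the actual name of the downloaded file):

import torch

state = torch.load('mit_b5.pth', map_location='cpu')  # filename is an assumption
print(f'{len(state)} entries in checkpoint')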

Dataset Setup

Cityscapes: Please download leftImg8bit_trainvaltest.zip and gt_trainvaltest.zip from here and extract them to data/cityscapes.

GTA: Please download all the image and label packages from here and extract them to data/gta.

Synthia (Optional): Please download SYNTHIA-RAND-CITYSCAPES from here and extract it to data/synthia.

MESH: You can collect your own forest environment dataset and put it in data/MESH.

The final folder structure should look like this:

Datasets
├── ...
├── data
│   ├── cityscapes
│   │   ├── leftImg8bit
│   │   │   ├── train
│   │   │   └── val
│   │   └── gtFine
│   │       ├── train
│   │       └── val
│   ├── gta
│   │   ├── images
│   │   └── labels
│   ├── rugd
│   │   ├── images
│   │   └── labels
│   └── MESH
│       ├── images
│       └── labels
├── ...
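
A small sanity check of the layout above (paths taken directly from the tree; skip datasets you did not download):

import os

for p in ['data/cityscapes/leftImg8bit/train', 'data/cityscapes/gtFine/train',
          'data/gta/images', 'data/gta/labels', 'data/MESH/images']:
    print(p, 'OK' if os.path.isdir(p) else 'MISSING')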

Data Preprocessing: Finally, please run the following scripts to convert the label IDs to train IDs and to generate the class index for Rare Class Sampling (RCS):

python tools/convert_datasets/gta.py data/gta --nproc 8
python tools/convert_datasets/cityscapes.py data/cityscapes --nproc 8
python tools/convert_datasets/synthia.py data/synthia/ --nproc 8
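
For context, the class index generated here is used by RCS, which oversamples source images containing rare classes. A minimal sketch of the sampling distribution, assuming the temperature-based formulation from DAFormer (the frequencies and T below are illustrative, not values from this repository):

import numpy as np

# Illustrative pixel frequencies f_c for six classes.
f = np.array([0.35, 0.30, 0.20, 0.10, 0.04, 0.01])
T = 0.01  # temperature; smaller T shifts sampling toward rarer classes
p = np.exp((1 - f) / T)
p /= p.sum()  # P(c): probability of sampling an image containing class c
print(p.round(3))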

Training

A training job for gta2cs can be launched using:

python run_experiments.py --config configs/tada/gtaHR2csHR_tada_hrda.py

A training job for syn2cs can be launched using:

python run_experiments.py --config configs/tada/synHR2csHR_tada_hrda.py

A training job for rugd2mesh can be launched using:

python run_experiments.py --config configs/tada/rugd2mesh_tada_hrda.py

The logs and checkpoints are stored in

work_dirs/

Evaluation

A trained model can be evaluated using:

sh test.sh work_dirs/run_name/

The predictions are saved for inspection to work_dirs/run_name/preds, and the model's mIoU is printed to the console.

When training a model on Synthia→Cityscapes, please note that the evaluation script calculates the mIoU over all 19 Cityscapes classes. However, Synthia contains labels for only 16 of these classes. Therefore, it is common practice in UDA to report the mIoU for Synthia→Cityscapes only on these 16 classes. As the IoU of the 3 missing classes is 0, you can convert the value with mIoU16 = mIoU19 * 19 / 16.
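
As a worked example of the conversion (the 19-class value below is a placeholder, not a reported result):

miou_19 = 55.0                # placeholder value read from the log
miou_16 = miou_19 * 19 / 16   # the 3 missing classes contribute IoU 0
print(round(miou_16, 1))      # -> 65.3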

Checkpoints

Below, we provide checkpoints of TADA for the different benchmarks.

The checkpoints come with the training logs. Please note that the logs report the mIoU over 19 classes; for Synthia→Cityscapes, it is necessary to convert this value to the 16 valid classes, as described in the Evaluation section above.

Framework Structure

This project is based on mmsegmentation version 0.16.0. For more information about the framework structure and the config system, please refer to the mmsegmentation documentation and the mmcv documentation.
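
Configs in mmsegmentation compose through mmcv's _base_ inheritance, where a child config overrides only the keys it changes. A minimal sketch (the base file name and keys below are illustrative, not the repository's actual configs):

# Hypothetical child config; _base_ files are merged first, then these keys override.
_base_ = ['../_base_/default_runtime.py']  # illustrative base config

model = dict(decode_head=dict(num_classes=19))  # override one nested key
data = dict(samples_per_gpu=2)                  # batch size per GPU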

The most relevant files for TADA are:

Deployment

For navigation, we integrate TADA with log-MPPI and deploy it on a Clearpath Husky robot. We assume that ROS Noetic, the Python environment from the Environment Setup section, and the Husky's sensor and base workspaces are already set up on the robot's onboard computer.

Navigation Instruction

  1. Open a terminal and run all the commands related to the robot's sensor and base workspaces to obtain the RGB-D images and the robot's odometry.
  2. In another terminal, run TADA (download the checkpoint for the forest environment and put it in the work_dirs/local-basic folder):

cd TADA
conda activate tada
sh in_ros.sh TADA_MESH

This provides the segmentation output, the 2D traversability output, and a point cloud annotated with traversability values.

  3. In another terminal, run the Elevation Mapping ROS package with the point cloud topic from the previous step as its input topic. You may need to modify the package to add the traversability values as an additional layer alongside the elevation layer in the grid map. This produces a 2.5D grid map with an elevation layer and a traversability layer.
  4. In another terminal, run the code that converts the 2.5D grid map into a 2D cost map.
  5. In another terminal, run log-MPPI with the cost map as its input topic. Once a goal is set, it will continuously publish velocities. The goal can be given from RViz or from the terminal.
  6. Open RViz, visualize the necessary topics, and give the planner a goal via the 2D Nav Goal option.
  7. Use a Python node to relay the velocities generated by log-MPPI to the Husky; a minimal sketch follows this list.
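
A minimal relay sketch for step 7, assuming ROS Noetic with rospy and that log-MPPI publishes geometry_msgs/Twist messages; the topic names /mppi/cmd_vel and /husky_velocity_controller/cmd_vel are assumptions, so adjust them to your setup:

#!/usr/bin/env python
# Hypothetical velocity relay: forwards Twist commands from log-MPPI to the Husky base.
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node('mppi_to_husky_relay')
    # Husky's velocity controller topic; an assumption, check your base workspace.
    pub = rospy.Publisher('/husky_velocity_controller/cmd_vel', Twist, queue_size=1)
    # Re-publish every incoming log-MPPI command unchanged.
    rospy.Subscriber('/mppi/cmd_vel', Twist, pub.publish)
    rospy.spin()

if __name__ == '__main__':
    main()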

Acknowledgements

TADA is based on the following open-source projects. We thank their authors for making the source code publicly available.
