First, please install CUDA 11.0.3, available at https://developer.nvidia.com/cuda-11-0-3-download-archive. It is required to build mmcv-full later.
For this project, we used Python 3.8.5. We recommend setting up a new virtual environment:

```shell
python -m venv ~/venv/mic-seg
source ~/venv/mic-seg/bin/activate
```

In that environment, the requirements can be installed with:

```shell
pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full==1.3.7  # requires the other packages to be installed first
```

Please download the MiT-B5 ImageNet weights provided by SegFormer from their OneDrive and put them in the folder AFRDA/.
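Before downloading the datasets, you can sanity-check that the pinned packages import correctly. The helper below is a minimal sketch (not part of the repository); it only reports installed versions:

```python
import importlib

def version_of(pkg):
    """Return a package's version string, 'unknown' if it defines no
    __version__, or None if it is not installed."""
    try:
        mod = importlib.import_module(pkg)
    except ImportError:
        return None
    return getattr(mod, "__version__", "unknown")

# mmcv-full must match the installed PyTorch/CUDA build, so check both.
for pkg in ("torch", "mmcv"):
    v = version_of(pkg)
    print(f"{pkg}: {v if v is not None else 'NOT INSTALLED'}")
```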
Cityscapes: Please download leftImg8bit_trainvaltest.zip and gt_trainvaltest.zip from here and extract them to data/cityscapes.
GTA: Please download all image and label packages from here and extract them to data/gta.
Synthia (Optional): Please download SYNTHIA-RAND-CITYSCAPES from here and extract it to data/synthia.
RUGD: Please download the RUGD dataset and extract its images and labels to data/rugd, matching the structure below.
MESH: You can collect your own forest-environment dataset and place it in data/MESH.
The final folder structure should look like this:

```
AFRDA
├── ...
├── data
│   ├── cityscapes
│   │   ├── leftImg8bit
│   │   │   ├── train
│   │   │   ├── val
│   │   ├── gtFine
│   │   │   ├── train
│   │   │   ├── val
│   ├── gta
│   │   ├── images
│   │   ├── labels
│   ├── rugd
│   │   ├── images
│   │   ├── labels
│   ├── MESH
│   │   ├── images
│   │   ├── labels
├── ...
```
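To catch path mistakes early, a short script can verify that the expected directories exist. This is a hypothetical helper, not part of the repository; the path list mirrors the tree above:

```python
import os

# Directories expected under the repository root, mirroring the tree above.
EXPECTED_DIRS = [
    "data/cityscapes/leftImg8bit/train",
    "data/cityscapes/leftImg8bit/val",
    "data/cityscapes/gtFine/train",
    "data/cityscapes/gtFine/val",
    "data/gta/images",
    "data/gta/labels",
    "data/rugd/images",
    "data/rugd/labels",
    "data/MESH/images",
    "data/MESH/labels",
]

def missing_dirs(root, expected=EXPECTED_DIRS):
    """Return the expected sub-directories that are absent under root."""
    return [d for d in expected if not os.path.isdir(os.path.join(root, d))]

if __name__ == "__main__":
    for d in missing_dirs("."):
        print(f"missing: {d}")
```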
Data Preprocessing: Finally, please run the following scripts to convert the label IDs to the train IDs and to generate the class index for RCS:

```shell
python tools/convert_datasets/gta.py data/gta --nproc 8
python tools/convert_datasets/cityscapes.py data/cityscapes --nproc 8
python tools/convert_datasets/synthia.py data/synthia/ --nproc 8
```

A training job for gta2cs can be launched using:
```shell
python run_experiments.py --config configs/afr/gtaHR2csHR_afr_hrda.py
```

A training job for syn2cs can be launched using:
```shell
python run_experiments.py --config configs/afr/synHR2csHR_hrda.py
```

and a training job for rugd2mesh can be launched using:
```shell
python run_experiments.py --config configs/afr/rugd2mesh_hrda.py
```

The logs and checkpoints are stored in
work_dirs/. A trained model can be evaluated using:
```shell
sh test.sh work_dirs/run_name/
```

The predictions are saved for inspection to
work_dirs/run_name/preds
and the mIoU of the model is printed to the console.
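For reference, the reported mIoU is the mean of the per-class IoUs, IoU_c = TP_c / (TP_c + FP_c + FN_c). Below is a minimal sketch of that computation from a confusion matrix; it is an illustrative helper, not the project's evaluation code:

```python
def per_class_iou(confusion):
    """Per-class IoU from a square confusion matrix.

    confusion[i][j] counts pixels of true class i predicted as class j.
    IoU_c = TP / (TP + FP + FN); classes that never occur get IoU None.
    """
    n = len(confusion)
    ious = []
    for c in range(n):
        tp = confusion[c][c]
        fn = sum(confusion[c]) - tp
        fp = sum(confusion[r][c] for r in range(n)) - tp
        denom = tp + fp + fn
        ious.append(tp / denom if denom else None)
    return ious

def mean_iou(confusion):
    """Mean IoU over the classes that occur."""
    ious = [iou for iou in per_class_iou(confusion) if iou is not None]
    return sum(ious) / len(ious)

# Two classes: 8 pixels correct each, 2 confused in each direction.
conf_mat = [[8, 2],
            [2, 8]]
print(mean_iou(conf_mat))  # 8 / (8 + 2 + 2) ≈ 0.667
```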
When training a model on Synthia→Cityscapes, please note that the
evaluation script calculates the mIoU for all 19 Cityscapes classes. However,
Synthia contains only labels for 16 of these classes. Therefore, it is a common
practice in UDA to report the mIoU for Synthia→Cityscapes only on these 16
classes. As the IoU for the 3 missing classes is 0, you can do the conversion
mIoU16 = mIoU19 * 19 / 16.
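This conversion can be written as a one-liner:

```python
def miou_16(miou_19: float) -> float:
    """Convert 19-class mIoU to 16-class mIoU for Synthia→Cityscapes.

    The three classes missing in Synthia contribute an IoU of 0 to the
    19-class mean, so rescaling recovers the 16-class mean.
    """
    return miou_19 * 19 / 16

# Example: a logged 19-class mIoU of 56.0 corresponds to
print(miou_16(56.0))  # → 66.5
```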
Below, we provide checkpoints of AFRDA for the different benchmarks.
The checkpoints come with the training logs. Please note that the logs provide the mIoU for 19 classes. For Synthia→Cityscapes, it is necessary to convert the mIoU to the 16 valid classes; please see the section above for the conversion.
This project is based on mmsegmentation version 0.16.0. For more information about the framework structure and the config system, please refer to the mmsegmentation documentation and the mmcv documentation.
The most relevant files for AFRDA are:
- configs/afr/gtaHR2csHR_afr_hrda.py: Annotated config file for AFR on GTA→Cityscapes.
- configs/afr/rugd2meshHR_afr_hrda.py: Annotated config file for AFR on RUGD→MESH.
- experiments.py: Definition of the experiment configurations in the paper.
- mmseg/models/decode_heads/hrda_head.py: Implementation of the HRDA head with the integrated AFR module.
- mmseg/models/uda/dacs.py: Implementation of the DAFormer/HRDA self-training.
- tools/inf_ros.py: Inference code for deployment in ROS.
- in_ros.sh: Bash script for running the inference code with ROS.
For navigation, we integrate AFRDA with POVNav and deploy it on a Clearpath Husky robot. We assume that ROS Noetic, the conda environment mentioned in the Environment Setup, and the Husky's sensor and base workspaces are already set up on the robot's onboard computer.
- Open a terminal:

```shell
roscore
```

- In another terminal, run the RGB camera and localization launch file:

```shell
cd husky_n_sensors
source devel/setup.bash
roslaunch realsense2_camera rs_d400_and_t265.launch
```
- In another terminal, run AFRDA (download the checkpoint for the forest environment and put it in the work_dirs/local-basic folder):

```shell
cd AFRDA
conda activate AFRDA
sh in_ros.sh AFRDA_mesh
```
- In another terminal, use the following command to connect the joystick:

```shell
sudo ds4drv
```
- In another terminal, run the Husky base launch file:

```shell
cd husky_ws
source devel/setup.bash
roslaunch husky_base base.launch
```

- Now download POVNav and, in another terminal:
```shell
cd povnav_ws
catkin_make
source devel/setup.bash
roslaunch pov_nav sim_pov_nav.launch
```

- Open RViz, visualize the necessary topics, and use the 2D Nav Goal option to give a goal to the planner.
- Now use a Python script to publish the velocities generated by POVNav to the Husky.
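A minimal sketch of such a relay node is shown below. The topic names (/povnav/cmd_vel, /husky_velocity_controller/cmd_vel) and the velocity limits are assumptions, not taken from the project; check them against your POVNav and Husky configuration. The clamping helper keeps commands within safe bounds:

```python
#!/usr/bin/env python3
"""Relay POVNav velocity commands to the Husky (illustrative sketch;
topic names and limits are assumptions, not taken from the project)."""

def clamp(value, limit):
    """Clamp a velocity component to [-limit, +limit]."""
    return max(-limit, min(limit, value))

def main():
    # ROS imports are kept inside main() so the helper above stays usable
    # on machines without a ROS installation.
    import rospy
    from geometry_msgs.msg import Twist

    pub = rospy.Publisher("/husky_velocity_controller/cmd_vel",
                          Twist, queue_size=1)

    def relay(msg):
        out = Twist()
        out.linear.x = clamp(msg.linear.x, 1.0)    # assumed linear limit, m/s
        out.angular.z = clamp(msg.angular.z, 1.5)  # assumed angular limit, rad/s
        pub.publish(out)

    rospy.init_node("povnav_to_husky")
    rospy.Subscriber("/povnav/cmd_vel", Twist, relay, queue_size=1)
    rospy.spin()

if __name__ == "__main__":
    try:
        main()
    except ImportError:
        print("rospy not available; run this inside the robot's ROS environment")
```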
AFRDA is based on the following open-source projects. We thank their authors for making the source code publicly available.