This is a fork of the original 3d_coverage_path_planning method:

```bibtex
@INPROCEEDINGS{10018726,
  author={Becker, Katrin and Oehler, Martin and Von Stryk, Oskar},
  booktitle={2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)},
  title={3D Coverage Path Planning for Efficient Construction Progress Monitoring},
  year={2022},
  volume={},
  number={},
  pages={174-179},
  doi={10.1109/SSRR56537.2022.10018726}}
```

This fork fixes upstream build failures. To build without installing ROS Noetic, catkin, OR-Tools, Protobuf, and related tools on the host, use Docker.
Pull the image:

```shell
docker pull shiven2001/3d_coverage_path_planning:latest
```

Run a container and mount your catkin workspace so that it contains this package (replace the host path with your workspace root):

```shell
docker run -it --name ros1_coverage_build \
  --net=host \
  -v /path/to/your/catkin_ws:/root/catkin_ws \
  shiven2001/3d_coverage_path_planning:latest
```

Inside the container:

```shell
cd /root/catkin_ws
catkin build three_dimensional_coverage_path_planning --cmake-args -DCPPZMQ_BUILD_TESTS=OFF -DCMAKE_CXX_STANDARD=17
```

If you prefer a stock Noetic image instead of the image above, use osrf/ros:noetic-desktop-full as the image name and install the dependencies inside the container before running catkin build.
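The three steps above can be wrapped in a small helper; this is a hypothetical convenience function (the function name and the `RUN` preview variable are mine; the image name, mount point, and build flags are taken from this README). Set `RUN=echo` to print the commands instead of executing them:

```shell
# Hypothetical wrapper for the Docker build workflow above.
# Usage: build_in_docker /path/to/your/catkin_ws
# Set RUN=echo to preview the commands without running Docker.
build_in_docker() {
    ws="${1:?usage: build_in_docker /path/to/your/catkin_ws}"
    ${RUN:-} docker pull shiven2001/3d_coverage_path_planning:latest
    # Run the build non-interactively instead of opening a shell first.
    ${RUN:-} docker run --rm --net=host \
        -v "$ws":/root/catkin_ws \
        shiven2001/3d_coverage_path_planning:latest \
        bash -c 'cd /root/catkin_ws && catkin build three_dimensional_coverage_path_planning --cmake-args -DCPPZMQ_BUILD_TESTS=OFF -DCMAKE_CXX_STANDARD=17'
}
```

Note that `--rm` discards the container after the build; drop it and keep `--name` as above if you want to re-enter the container later.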
The sections below follow the upstream project's documentation (parameters, workflows, troubleshooting).
The directory containing the model and the path can be set via the parameter /three_dimensional_coverage_path_planning/model/data_directory. This directory needs to be writable, since the different model representations are also stored there. It is also possible to pass absolute paths for the model files.

The recorded data is stored in the directory given by the parameter /three_dimensional_coverage_path_planning/data_recording/recorded_data_directory.
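For example, a parameter file might set the two directories like this (the parameter names are from the text above; the paths are purely illustrative):

```yaml
three_dimensional_coverage_path_planning:
  model:
    data_directory: /home/user/coverage_data/model        # must be writable
  data_recording:
    recorded_data_directory: /home/user/coverage_data/recorded
```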
During the precomputations, the viewpoints and the path are computed. This information is stored and loaded when the path is to be executed, so the same path can be executed multiple times.

The preparations are not always necessary (e.g. if nothing has changed in the configuration or the model).
- Convert the mesh file to HDF5 using `lvr2_hdf5_mesh_tool -i <file>.ply -o <file>.h5`.
- Adapt parameters, see here.
- Load the behavior `three_dimensional_coverage_path_planning_precomputations` in FlexBE.
- Launch file:
  - `precomputations_sim_or_robot_three_dimensional_coverage_path_planning.launch` (if a simulation or robot is running and connected)
  - `precomputations_no_robot_three_dimensional_coverage_path_planning.launch` (if no simulation or robot is running)
- Start execution in FlexBE.
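The conversion step above can be scripted; this hypothetical helper (the function name and `RUN` preview variable are mine, the tool invocation is from this README) derives the `.h5` name from the `.ply` name. Set `RUN=echo` to preview the command, since `lvr2_hdf5_mesh_tool` must be installed to actually run it:

```shell
# Convert a PLY mesh to HDF5 for the precomputations, as described above.
convert_mesh() {
    ply="${1:?usage: convert_mesh <file>.ply}"
    h5="${ply%.ply}.h5"   # e.g. building.ply -> building.h5
    ${RUN:-} lvr2_hdf5_mesh_tool -i "$ply" -o "$h5"
}
```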
Here a precomputed path is loaded and executed. During the execution, the path is updated if obstacles occur. Each time a waypoint is reached, the data recording is started, e.g. the point cloud is recorded and accumulated into a model.
- Load the behavior `three_dimensional_coverage_path_planning_execution` in FlexBE.
- Launch file:
  - `sim_three_dimensional_coverage_path_planning.launch` (with rviz)
  - `execute_three_dimensional_coverage_path_planning.launch` (without rviz)
- Start execution in FlexBE.
The following parameters may need to be changed (in the model namespace, unless otherwise specified):

- `complete_mesh_path`:
  - absolute or relative path (see Data directories) to the complete model file
  - OBJ or PLY file (other formats might work as well, but only these have been tested)
- `target_mesh_path`: see `complete_mesh_path`, for the target model
- `voxel_size`: used for all models that do not override it in their own namespace
- Model-specific parameters:
  - Point cloud (complete + target!): `num_sample_points`, `voxel_filter_leaf_size`
  - SDF: `use_tsdf`, `fill_inside`, `floodfill_unoccupied`, `truncation_distance`, `slice_level`
- Candidate generation (namespace `viewpoint_computation`): `percentage_candidates_to_keep`, `cost_threshold`, `start_point_for_iter_vertices`
- Mesh Navigation path planner (namespace `waypoint_order_computation`): `cost_limit`
- Mesh map (top-level namespace): `mesh_file`, mesh map parameters
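As a rough sketch, a model configuration could look like the following. The parameter names come from the list above, but the sub-namespace names (`point_cloud`, `sdf`) and all values are illustrative guesses; check the shipped config files under `config/model/` for the real layout:

```yaml
model:
  complete_mesh_path: meshes/building_complete.ply   # relative to data_directory
  target_mesh_path: meshes/building_target.ply
  voxel_size: 0.1               # default unless overridden per model below
  point_cloud:                  # sub-namespace name assumed
    num_sample_points: 100000
    voxel_filter_leaf_size: 0.05
  sdf:                          # sub-namespace name assumed
    use_tsdf: true
    truncation_distance: 0.4
```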
The following parameters may need to be changed:

- Self filter / robot body filter configuration
- `base_frame` (top-level namespace)
- `sensor_frame` (top-level namespace)
- `robot_length` / `robot_width` (top-level namespace)
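For example (parameter names from the list above; frame names and values are illustrative, and the unit of the robot dimensions is assumed to be meters):

```yaml
base_frame: base_link
sensor_frame: lidar_link
robot_length: 0.9   # meters (assumed unit)
robot_width: 0.6
```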
If it is a new type of sensor, a new data recorder plugin needs to be implemented as a subclass of DataRecorderBase.

Additionally, the following parameters may need to be changed:

- `data_recorder_plugin`: the plugin to be used
- `field_of_view` (namespace `viewpoint_computation`): horizontal and vertical FOV of the sensor as min/max in the range [-180, 180] (used for visibility checks)
- `min_range` / `max_range` (namespace `viewpoint_computation`): minimum and maximum range of the sensor (used for visibility checks)
- `sensor_frame` (top-level namespace)
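Putting the sensor-related parameters together might look roughly like this; the parameter names are from the list above, but the plugin name and the nested structure of `field_of_view` are guesses for illustration only:

```yaml
sensor_frame: lidar_link
data_recorder_plugin: my_package/PointCloudRecorder   # hypothetical plugin name
viewpoint_computation:
  field_of_view:               # degrees in [-180, 180]; nesting assumed
    horizontal: {min: -180, max: 180}
    vertical: {min: -15, max: 15}
  min_range: 0.5
  max_range: 20.0
```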
- Version 1: Model from point cloud:
  - Set `hector setup set env HECTOR_USE_POINTCLOUD_ACCUMULATOR true`.
- Version 2: Model from voxblox:
  - Adapt the launch file `hector_sensor_proc_launch/launch/voxblox.launch`: `tsdf_voxel_size = 0.1`, `mesh_filename = "file_name"`
  - Start the launch file, otherwise no mesh is generated.
- Drive around to record data. Make sure that the environment can be seen completely enough, e.g. do not drive too fast and turn around at the start, the end, and in corners.
- Do NOT stop the simulation / switch off the robot! Otherwise the transformation from the world frame to the building frame would no longer be valid.
- Save the model:
  - Version 1: Save the point cloud using `rosservice call /pointcloud_accumulator_node/save_pointcloud "{file_path: '<file_path>', file_name: '<file_name>.ply'}"`
  - Version 2: Save the voxblox mesh using `rosservice call <...>/generate_mesh ...`
- Version 1: from point cloud:
  - Clean up the point cloud:
    - Open the PLY file in MeshLab.
    - Use "Select vertices" and "Delete vertices" to clean up the point cloud. (Show the bounding box to make outliers more visible.)
    - If there are too many vertices, it might help the surface reconstruction to filter the points. For this purpose, use "Filters > Sampling > Point Cloud Simplification" and select an appropriate number of samples or radius. Apply.
    - Use "Export Mesh As", enable "Color" and "Normal", and disable "Binary encoding".
    - Repeat if required.
  - Mesh the point cloud:
    - Open the cleaned PLY file in MeshLab.
    - Open "Filters > Remeshing, Simplification and Reconstruction > Surface Reconstruction: Ball Pivoting".
    - Adapt the Pivot Ball Radius (e.g. use 0.1, or 1.0%). Apply and wait.
    - Use "Export Mesh As", enable "Color" and "Normal", and disable "Binary encoding".
- Version 2: from voxblox mesh:
  - Voxblox stores its meshes in parts (voxel blocks?) that are not connected, i.e. the mesh cannot be used directly. (If used directly, only few candidates can be generated with the iter-from-start method and no paths can be computed.)
  - Clean up the mesh:
    - Open the PLY file in MeshLab.
    - Use "Select vertices" and "Delete vertices" to clean up the mesh.
    - Use "Export Mesh As", enable "Color" and "Normal", and disable "Binary encoding".
  - Remesh:
    - Open the PLY file in MeshLab.
    - Select "Filters > Remeshing, Simplification and Reconstruction > Remeshing: Isotropic Explicit Remeshing".
    - Set Target length and Max. Surface Distance (e.g. 0.14 in world units, or 0.5%). Apply.
    - Use "Export Mesh As", enable "Color" and "Normal", and disable "Binary encoding".
- Prepare precomputations:
  - Copy the mesh file into the data directory.
  - Convert it to an HDF5 file using `lvr2_hdf5_mesh_tool -i <file>.ply -o <file>.h5`.
  - Adapt the configuration:
    - Change the file names in `config/model/general.yaml` and `config/mesh_nav_map.yaml`.
    - Set a reachable start point in `config/viewpoint_computation/mesh_navigation_candidate_generation.yaml`.
    - Probably set cost thresholds in `config/viewpoint_computation/mesh_navigation_candidate_generation.yaml` and `config/waypoint_order_computation/mesh_nav_planner.yaml`.
- Run precomputations:
  - See Precomputations.
  - Check the models, especially the SDF! If necessary, skip the SDF check (comment it out in `config/viewpoint_computation/general.yaml`).
  - Check the mesh map; probably adapt the cost thresholds and the start point and start again (see prepare).
- Execute the path:
  - See Execution.
  - Maybe reset the map and trajectory etc.
  - Check that the recorded model has been stored! Otherwise call `rostopic pub /three_dimensional_coverage_path_planning/finish_execution_of_path/goal`.
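The service calls in this workflow are plain shell one-liners; this hypothetical wrapper for the Version 1 save step (the function name and `RUN` preview variable are mine; the service name and its `file_path`/`file_name` fields are from this README) mainly shows the YAML argument quoting. Set `RUN=echo` to preview the command, since a running ROS master and the accumulator node are required to actually call it:

```shell
# Save the accumulated point cloud (Version 1 of "Save the model" above).
save_pointcloud() {
    path="${1:?usage: save_pointcloud <file_path> <file_name>.ply}"
    name="${2:?usage: save_pointcloud <file_path> <file_name>.ply}"
    # The whole YAML dict is one quoted argument to rosservice.
    ${RUN:-} rosservice call /pointcloud_accumulator_node/save_pointcloud \
        "{file_path: '$path', file_name: '$name'}"
}
```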
- `Mesh display: no visual available, can't draw mesh! (maybe no data has been received yet?)`: the MeshMap display was activated in rviz, but the map has not been published yet.
- `Received vertex costs, but UUIDs dont match!`: the MeshMap was published in a newer version than the one received earlier. This message stops after activating the MeshMap in rviz once.