PART 1: Image Segmentation
PART 2: Object Recognition and Labelling
This is a mini-project that creates a ROS node for segmenting objects kept on a cluttered table using PCL (the Point Cloud Library). It is run and tested in the Udacity robo-nd simulator environment.
- The segmentation node publishes `sensor_msgs::PCLPointCloud2` messages to the `/pcl_objects` topic. The segmentation can be visualized in RViz by selecting that topic to view.
- Initially we are given a point cloud in the `sensor_msgs::PointCloud2` format, like the one used in the `ransac_pointcloud` directory.
- This needs to be converted to the `pcl::PCLPointCloud2` data type to perform calculations with the Point Cloud Library. Conversion functions for this are provided in the `<pcl_conversions/pcl_conversions.h>` library.
- For cluster-based segmentation we can use voxel grid filtering (see the `ransac_pointcloud` directory) to downsample the data without a large loss of accuracy.
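As an illustration of the idea only (not the PCL implementation), voxel grid downsampling can be sketched in plain Python: bucket points into cubic voxels of a given leaf size and replace each bucket with its centroid.

```python
from collections import defaultdict

def voxel_grid_downsample(points, leaf_size):
    """Illustrative voxel grid filter: group points by the cubic voxel
    they fall into and return one centroid per occupied voxel."""
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // leaf_size), int(y // leaf_size), int(z // leaf_size))
        voxels[key].append((x, y, z))
    # Represent each voxel by the centroid of its points.
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in voxels.values()]

# Two nearby points collapse into one centroid; the distant point survives.
cloud = [(0.01, 0.01, 0.0), (0.02, 0.02, 0.0), (1.0, 1.0, 1.0)]
print(voxel_grid_downsample(cloud, leaf_size=0.1))
```

In PCL the equivalent step is the `VoxelGrid` filter with a chosen leaf size; larger leaves mean fewer points and faster downstream processing at the cost of detail.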
- To focus on the region of interest along the z axis (0.5–1.1, the table top) we use the passthrough filter function (see the `ransac_pointcloud` directory).
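The passthrough filter is simple enough to sketch directly: keep only points whose chosen coordinate falls inside a given range. This is an illustrative stand-in for PCL's `PassThrough` filter, not its implementation.

```python
def passthrough_filter(points, axis, min_val, max_val):
    """Keep only points whose coordinate along `axis` (0=x, 1=y, 2=z)
    lies inside [min_val, max_val]."""
    return [p for p in points if min_val <= p[axis] <= max_val]

# Keep the z range 0.5-1.1 (the table-top region described above).
cloud = [(0.0, 0.0, 0.2), (0.1, 0.0, 0.77), (0.0, 0.2, 1.5)]
print(passthrough_filter(cloud, axis=2, min_val=0.5, max_val=1.1))
# → [(0.1, 0.0, 0.77)]
```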
- In the simulation environment, objects are placed on a flat planar table surface. Since we wish to segment the objects on the surface, we can remove the table from the cloud. Because the table is planar, we can fit it with the RANSAC algorithm and extract the outliers, removing the points that correspond to the table face.
- From the above point cloud, it is apparent that the RANSAC step has left the table edge behind, since the edge is not planar with the table face. Another passthrough filter removing all points below the table height gets rid of the edge.
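A toy version of the RANSAC plane-fitting step, assuming nothing beyond the Python standard library: repeatedly sample three points, form the plane through them, and keep the plane with the most inliers. The outliers are what remain once the table plane is removed.

```python
import random

def ransac_plane(points, threshold=0.01, iterations=100):
    """Toy RANSAC plane fit. Returns (inlier_indices, outlier_indices)
    for the best plane found; inliers lie within `threshold` of it."""
    best_inliers = []
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(iterations):
        p1, p2, p3 = rng.sample(points, 3)
        # Plane normal = (p2 - p1) x (p3 - p1)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        n = [u[1]*v[2] - u[2]*v[1],
             u[2]*v[0] - u[0]*v[2],
             u[0]*v[1] - u[1]*v[0]]
        norm = sum(c * c for c in n) ** 0.5
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        inliers = [i for i, p in enumerate(points)
                   if abs(sum(n[k] * (p[k] - p1[k]) for k in range(3))) / norm <= threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    inlier_set = set(best_inliers)
    outliers = [i for i in range(len(points)) if i not in inlier_set]
    return best_inliers, outliers

# Synthetic scene: a z=0 "table" plane plus two "object" points above it.
table = [(x * 0.1, y * 0.1, 0.0) for x in range(5) for y in range(5)]
objects = [(0.2, 0.2, 0.3), (0.3, 0.1, 0.4)]
inliers, outliers = ransac_plane(table + objects)
# The outliers are the object points left after removing the table plane.
```

In the node itself, PCL's `SACSegmentation` with the `SACMODEL_PLANE` model and `SAC_RANSAC` method performs this fit, and an extract-indices step with the negative flag keeps the outliers.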
- Finally, with the edge removed, we can use Euclidean cluster segmentation to identify each unique cluster. After computation, a different color is assigned to each cluster for visualization purposes.
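Euclidean clustering can likewise be sketched without PCL: two points share a cluster when they are linked by a chain of neighbors closer than a distance tolerance. PCL accelerates the neighbor search with a kd-tree; the brute-force version below shows just the logic.

```python
def euclidean_clusters(points, tolerance, min_size=1):
    """Group points into clusters of chained neighbors within `tolerance`.
    Brute-force neighbor search stands in for the kd-tree PCL uses."""
    def close(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(3)) <= tolerance ** 2

    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        frontier = [unvisited.pop()]  # seed a new cluster
        cluster = []
        while frontier:
            i = frontier.pop()
            cluster.append(i)
            neighbors = [j for j in unvisited if close(points[i], points[j])]
            for j in neighbors:
                unvisited.remove(j)
            frontier.extend(neighbors)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return sorted(clusters)

# Two well-separated pairs of points yield two clusters.
cloud = [(0, 0, 0), (0.02, 0, 0), (1, 1, 1), (1.02, 1, 1)]
print(euclidean_clusters(cloud, tolerance=0.05))
# → [[0, 1], [2, 3]]
```

PCL's `EuclideanClusterExtraction` exposes the same knobs: a cluster tolerance plus minimum and maximum cluster sizes to reject noise and oversized blobs.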
The Gazebo world with the same table top and random objects from the `ransac_pointcloud` directory is used. A simple stick robot with an RGB-D camera attached to its head via a pan-tilt joint is placed in front of the table.
NOTE: The underlying PCL filtering methods can be found in the `ransac_pointcloud` directory.
- Clone the repo into the `/src` directory of the ROS workspace.
```
git clone https://github.com/ashutoshtiwari13/ROS-PCL-Segmentation.git
```
- Install the `Cython` utility.
```
$ sudo pip install cython
```
- Build and install `python-pcl`.
```
$ cd python-pcl
$ python setup.py build
$ sudo python setup.py install
$ sudo apt-get install pcl-tools
```
- Make sure you have all the dependencies resolved by using the `rosdep install` tool, then run `catkin_make`:
```
$ cd ~/catkin_ws
$ rosdep install --from-paths src --ignore-src --rosdistro=kinetic -y
$ catkin_make
```
- Add the following to your `.bashrc` file.
```
export GAZEBO_MODEL_PATH=~/catkin_ws/src/sensor_stick/models
source ~/catkin_ws/devel/setup.bash
```
Note: `source` the `~/.bashrc` too if an error occurs.
- Start the simulation with `roslaunch`.
```
$ roslaunch sensor_stick robot_spawn.launch
```
- Start the image segmentation node.
```
$ rosrun sensor_stick segmentation
```

Here, a classifier is trained to detect the objects kept on the table.
- We need to make use of the `object_recognize` directory cloned in the previous step.

Note: Completing PART 1 of the project is a must before running PART 2.
- Make sure you have all the dependencies resolved by using the `rosdep install` tool, then run `catkin_make`:
```
$ cd ~/catkin_ws
$ rosdep install --from-paths src --ignore-src --rosdistro=kinetic -y
$ catkin_make
```
- Add the following to your `.bashrc` file.
```
export GAZEBO_MODEL_PATH=~/catkin_ws/src/object_recognize/models
source ~/catkin_ws/devel/setup.bash
```
Note: `source` the `~/.bashrc` too if an error occurs.
- Launch the `training.launch` file to bring up the Gazebo environment. Only an empty scene is seen, with just the sensor stick robot.
```
$ roslaunch object_recognize training.launch
```
- Capture and save the features of each object in the environment. Running the command below generates a `training_set.sav` file.
```
$ rosrun sensor_stick capture_features.py
```
- For training, make sure to install the `sklearn` and `scipy` packages.
```
pip install sklearn scipy
```
- Run `train_svm.py` to train an SVM classifier on your labeled set of features.
```
$ rosrun sensor_stick train_svm.py
```
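The exact features saved by `capture_features.py` are not shown here, so the snippet below is a minimal scikit-learn sketch with synthetic feature vectors and made-up labels ("beer", "soap" are placeholders). It illustrates the shape of what `train_svm.py` does: fit an SVM classifier and check it with cross-validation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for the saved training set: two synthetic
# object classes with well-separated 8-dimensional feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, size=(20, 8)),   # e.g. "beer" features
               rng.normal(1.0, 0.1, size=(20, 8))])  # e.g. "soap" features
y = np.array(["beer"] * 20 + ["soap"] * 20)

clf = SVC(kernel="linear")                  # an SVM classifier, as in train_svm.py
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
clf.fit(X, y)
print(scores.mean())                        # well-separated toy data scores high
```

In the real pipeline the trained model is saved and later used by the recognition node to label each segmented cluster.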


