Supervised and autonomous grasp suggestion for objects based on their presently observed point cloud geometry.
This package includes a framework that requests grasps from a grasp sampler implementing the SuggestGrasps action found in rail_grasp_calculation_msgs, ranks them using the RankGrasps action implemented in rail_grasp_calculation, and refines the ranking using a pairwise grasp ranking formulation. The package also includes functionality to collect new training examples for the pairwise ranking model, scripts to retrain the model, and scripts to comprehensively evaluate different pairwise ranking models. Additionally, the package includes supporting nodes to run demos and perform physical execution of the suggested grasps on a Fetch robot.
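The pairwise ranking formulation reduces grasp ranking to binary classification: for each pair of candidate grasps, a classifier predicts which of the two a user would prefer, based on the difference of their heuristic feature vectors. As a rough sketch of the idea (the feature layout and values below are illustrative assumptions, not the package's exact .csv format):

```python
# Illustrative sketch of pairwise training-example construction.
# The real feature layout in fetch_grasp_suggestion differs; the
# heuristic values below are made-up stand-ins.

def make_pairwise_examples(grasps, preferred_index):
    """Given heuristic feature vectors for candidate grasps and the index
    of the grasp the user selected, emit (feature_difference, label) pairs."""
    examples = []
    preferred = grasps[preferred_index]
    for i, other in enumerate(grasps):
        if i == preferred_index:
            continue
        diff = [a - b for a, b in zip(preferred, other)]
        examples.append((diff, 1))                # preferred beats other
        examples.append(([-d for d in diff], 0))  # other loses to preferred
    return examples

grasps = [
    [0.9, 0.1, 0.3],  # heuristics for grasp 0
    [0.4, 0.6, 0.2],  # heuristics for grasp 1
    [0.2, 0.5, 0.8],  # heuristics for grasp 2
]
# 2 other grasps x 2 orderings = 4 training examples
pairs = make_pairwise_examples(grasps, preferred_index=0)
```

Each user selection therefore yields one training example per (selected, unselected) pair in both orderings, which is why even a modest number of demonstrations produces a usable training set.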
If you'd like to get the package up and running quickly, follow the installation instructions and one of the examples below that fits your use case. If you'd like more detailed information about everything available in this package, we also include detailed documentation of all functionality at the end of this page.
This package requires scikit-learn. Installation instructions can be found here.
There are a few other dependencies for generating plots to evaluate classifiers. They can be installed with the following:
sudo apt-get install python-matplotlib
pip install treeinterpreter
Install the package by cloning the fetch_grasp_suggestion repository into your catkin workspace and building it as follows:
cd (your catkin workspace)/src
git clone https://github.com/fetchrobotics/fetch_grasp_suggestion.git
cd ..
catkin_make
This package is designed to work with rail_grasp_calculation and rail_agile. These packages can be installed with instructions provided here.
The following sections will function as short tutorials to get the system up and running for specific use cases.
The primary purpose of this package is to facilitate object grasping. The package includes a demo file to immediately show off this functionality, and it's also designed to be incorporated into larger systems that require pick-and-place. Both methods are explained below.
Grasping Demo
- Launch the grasping demo:
- Identify the topic of the point cloud you want to use as input. This will be referred to as CLOUD for the rest of these instructions. If you're using the Fetch's head camera, you can skip this step as it will be set by default.
- Identify the file name and path of the classifier model you're using. This will be referred to as CLASSIFIER for the rest of these instructions. If you haven't trained your own classifier, you can skip this step and use the default pre-trained classifier.
- Run MoveIt! on the Fetch with:
roslaunch fetch_moveit_config move_group.launch allow_active_sensing:=true
- Launch the grasping demo with:
roslaunch fetch_grasp_suggestion grasp_suggestion_testing.launch cloud_topic:=CLOUD classifier_file:=CLASSIFIER
Set these parameters only if you aren't using the Fetch head camera and the pre-trained classifier.
- Run the rviz interface with:
roslaunch fetch_grasp_suggestion grasp_suggestion_testing_frontend.launch
- Call the segmentation service to update the scene objects.
  - This demo starts rail_segmentation by default. You can call the segmenter from the command line with:
rosservice call rail_segmentation/segment {}
  - If you'd like to use different object segmentation, see alternative segmentation below.
- Call grasp suggestion by publishing an object index to the test_grasp_suggestion/grasp_object topic. For example, to grasp segmented object 0 from the command line, use:
rostopic pub /test_grasp_suggestion/grasp_object std_msgs/Int32 "data: 0"
- To use methods other than pairwise ranking (note: these generally don't perform as well and are only included for evaluation), change the topic you're publishing to:
  - For grasps ranked by heuristics only:
rostopic pub /test_grasp_suggestion/grasp_object_heuristic std_msgs/Int32 "data: 0"
  - For grasps from the AGILE pipeline:
rostopic pub /test_grasp_suggestion/grasp_object_agile std_msgs/Int32 "data: 0"
  - For a random antipodal grasp:
rostopic pub /test_grasp_suggestion/grasp_object_random std_msgs/Int32 "data: 0"
- Executing the final grasp will require user input for safety. Follow the instructions on the command line to execute the grasp.
Integrating Object Grasp Suggestion into a Larger System
- Start grasp suggestion:
- Identify the topic of the point cloud you want to use as input. This will be referred to as CLOUD for the rest of these instructions. If you're using the Fetch's head camera, you can skip this step as it will be set by default.
- Identify the file name and path of the classifier model you're using. This will be referred to as CLASSIFIER for the rest of these instructions. If you haven't trained your own classifier, you can skip this step and use the default pre-trained classifier.
- Launch grasp suggestion with:
roslaunch fetch_grasp_suggestion grasp_suggestion.launch cloud_topic:=CLOUD classifier_file:=CLASSIFIER
Set these parameters only if you aren't using the Fetch head camera and the pre-trained classifier.
- To calculate grasp suggestions, have your node connect to the suggester/suggest_grasps ROS service.
  - Pass in a segmented object point cloud as the service request.
  - The response will be a ranked grasp list, with the best grasp first.
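A minimal Python client for this service might look like the sketch below. The request and response field names (cloud, grasp_list) are assumptions based on the rail_manipulation_msgs SuggestGrasps service; verify them against the actual .srv file before use.

```python
#!/usr/bin/env python
# Hypothetical client sketch for suggester/suggest_grasps. The service
# field names (cloud, grasp_list) are assumptions -- check them against
# rail_manipulation_msgs/SuggestGrasps.srv.

def request_object_grasps(object_cloud):
    """Request ranked grasps for a segmented object point cloud.
    Returns the grasp list, best grasp first."""
    # Imported lazily so the sketch can be read outside a ROS environment.
    import rospy
    from rail_manipulation_msgs.srv import SuggestGrasps

    rospy.wait_for_service('suggester/suggest_grasps')
    suggest = rospy.ServiceProxy('suggester/suggest_grasps', SuggestGrasps)
    response = suggest(cloud=object_cloud)
    return response.grasp_list  # ranked list, best grasp first
```

In a typical pick-and-place node you would call this with the point cloud of one object returned by your segmentation step, then hand the first grasp in the returned list to your executor.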
A secondary use of this package is to calculate grasps over scenes without object segmentation. For example, if you want to clear objects from a cluttered scene and you don't have object segmentation that can reliably identify objects in clutter, you can use this package to generate grasps over an entire scene, rather than for specific objects. This package includes a demo file that performs cluttered scene clearing, and we also include instructions on integrating this functionality into a larger system. Both methods are explained below.
Cluttered Scene Clearing Demo
- Launch the cluttered scene demo:
- Identify the topic of the point cloud you want to use as input. This will be referred to as CLOUD for the rest of these instructions. If you're using the Fetch's head camera, you can skip this step as it will be set by default.
- Identify the file name and path of the classifier model you're using. This will be referred to as CLASSIFIER for the rest of these instructions. If you haven't trained your own classifier, you can skip this step and use the default pre-trained classifier.
- Run MoveIt! on the Fetch with:
roslaunch fetch_moveit_config move_group.launch allow_active_sensing:=true
- Launch the cluttered scene demo with:
roslaunch fetch_grasp_suggestion cluttered_scene_demo.launch cloud_topic:=CLOUD classifier_file:=CLASSIFIER
Set these parameters only if you aren't using the Fetch head camera and the pre-trained classifier.
- Run the rviz interface with:
roslaunch fetch_grasp_suggestion cluttered_scene_demo_frontend.launch
- Start the demo by publishing an empty message to the cluttered_scene_demo/run_demo topic. For example, to start the demo from the command line, use:
rostopic pub /cluttered_scene_demo/run_demo std_msgs/Empty "{}"
- Executing each grasp will require user input for safety. Follow the instructions on the command line whenever the demo pauses before grasp execution.
- The demo will loop until it can't find any more grasps in the scene. It can be stopped earlier by following the quit instructions provided on the command line during each pause before grasp execution.
Integrating Scene Grasp Suggestion into a Larger System
- Start grasp suggestion:
- Identify the topic of the point cloud you want to use as input. This will be referred to as CLOUD for the rest of these instructions. If you're using the Fetch's head camera, you can skip this step as it will be set by default.
- Identify the file name and path of the classifier model you're using. This will be referred to as CLASSIFIER for the rest of these instructions. If you haven't trained your own classifier, you can skip this step and use the default pre-trained classifier.
- Launch grasp suggestion with:
roslaunch fetch_grasp_suggestion grasp_suggestion.launch cloud_topic:=CLOUD classifier_file:=CLASSIFIER
Set these parameters only if you aren't using the Fetch head camera and the pre-trained classifier.
- To calculate grasp suggestions over a scene, have your node connect to the suggester/suggest_grasps_scene ROS service.
  - Pass in a point cloud of the complete scene as the service request.
  - The response will be a ranked grasp list, with the best grasp first.
To collect new grasp preferences, we recommend using the new version of fetch_pbd. This package also includes an older pipeline for gathering grasp suggestions using an interactive marker server and the command line. WARNING: This functionality is deprecated in favor of fetch_pbd, but we include instructions here for completeness.
- Launch the grasp suggestion collector:
- Run MoveIt! on the Fetch with:
roslaunch fetch_moveit_config move_group.launch allow_active_sensing:=true
- Launch the grasp suggestion collector with:
roslaunch fetch_grasp_suggestion grasp_suggestion_collector.launch
- Run the rviz interface with:
roslaunch fetch_grasp_suggestion grasp_suggestion_collector_frontend.launch
- Call the segmentation service to update the scene objects.
  - This pipeline starts rail_segmentation by default. You can call the segmenter from the command line with:
rosservice call rail_segmentation/segment {}
- Generate grasps for an object by sending an object index to the suggester/get_grasp_suggestions action server.
- Cycle through the grasp suggestions, visualized in rviz with a marker representing the Fetch's gripper, by calling the selector/cycle_grasps ROS service.
- Select the best grasp and execute it by sending an empty goal to the selector/execute_selected_grasp action server.
  - This will append new pairwise feature vectors as training data to the grasp_data.csv file, which will be located in the .ros directory under your home directory.
  - The final .csv file can optionally be added to the data/grasp_preferences directory of fetch_grasp_suggestion to use relative paths for other launch files.
If you have training data in a .csv file, training a new classifier is as easy as running a single script. The trained
classifier can then be used by changing the classifier_file parameter in many of the launch files. New trained
classifiers can be added to the data/classifier/ directory of fetch_grasp_suggestion, or they can be loaded from
anywhere by passing an absolute path to the classifier_file parameter.
- Decide what types of classifiers you'd like to train. Supported classifier types include:
["decision_tree", "random_forest", "ada_boost", "knn", "svm", "logistic_regression", "nn1", "nn2"]
This will be referred to as CLASSIFIERS for the rest of these instructions.
- Identify the file name and path of your training data, which will be referred to as DATA for the rest of these instructions.
- Call the classifier training node with:
rosrun fetch_grasp_suggestion train_classifier.py _classifier_types:=CLASSIFIERS _file_name:=DATA
Examples:
- Train a random forest and a k-nearest-neighbors model over training data new_grasp_data.csv found in the (fetch_grasp_suggestion)/data/grasp_preferences directory:
rosrun fetch_grasp_suggestion train_classifier.py _classifier_types:=[random_forest, knn] _file_name:=new_grasp_data.csv
- Train a neural network with 2 hidden layers over training data lots_of_data.csv found in your home directory:
rosrun fetch_grasp_suggestion train_classifier.py _classifier_types:=[nn2] _file_name:=/home/(username)/lots_of_data.csv
This package contains many tools for evaluating different classifier types and different model parameters. For model
parameter cross validation, call rosrun fetch_grasp_suggestion cross_validate_params.py with parameters as specified
in its detailed documentation section. For comparing the effectiveness of different types
of classifiers, call rosrun fetch_grasp_suggestion evaluate_classifier.py with parameters as specified in
its detailed documentation section.
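Under the hood, training reduces to a standard scikit-learn fit-and-pickle cycle. The following is a hedged sketch of that cycle on synthetic stand-in data, not the actual contents of train_classifier.py; the real script reads pairwise feature vectors from the .csv file and supports all of the classifier types listed above.

```python
import pickle
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for grasp_data.csv rows: each row is a pairwise
# feature-difference vector plus a 0/1 preference label.
X = [[0.5, -0.2], [-0.5, 0.2], [0.3, -0.1], [-0.3, 0.1]]
y = [1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=20, random_state=0)
model.fit(X, y)

# The trained model is pickled so the classifier node can later load it
# via its file_name parameter.
with open('random_forest.pkl', 'wb') as f:
    pickle.dump(model, f)

with open('random_forest.pkl', 'rb') as f:
    loaded = pickle.load(f)
print(loaded.predict([[0.4, -0.15]]))  # expected: [1]
```

The resulting .pkl file can then be dropped into data/classifier/ or referenced by absolute path through the classifier_file parameter, as described above.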
By default, this package is set up to use rail_segmentation for object
segmentation. If you'd like to use your own segmentation methodology, this can easily be connected to
fetch_grasp_suggestion with the following steps.
- Make sure your segmentation implementation publishes segmented objects to a topic with message type rail_manipulation_msgs/SegmentedObjectList.
- Change the fetch_grasp_suggestion launch file you're using to start your segmentation instead of rail_segmentation.
- Set the segmentation_topic parameter of any fetch_grasp_suggestion nodes you're running to your new segmentation topic.
The package contains a set of ROS nodes organized as shown in the diagram below. Each section is documented separately
in the following subsections.

The following nodes handle the core functionality of this package, including grasp suggestion and pairwise classification.
This node interfaces with grasp sampling and ranking to get a set of grasp poses with heuristics, then uses this information to re-rank the grasps by calling the pairwise ranking model on every grasp pair. This node also receives feedback on grasps selected by human operators to create new training examples for the pairwise ranking model. Relevant topics, services, action clients, action servers, and parameters are as follows:
- Subscribers
  - /head_camera/depth_registered/points (pcl::PointCloud<pcl::PointXYZRGB>)
    Point cloud stream of the entire environment, used for updating the scene context for grasp suggestion. The point cloud subscribed to can be changed by setting the cloud_topic param.
  - ~/grasp_feedback (rail_manipulation_msgs/GraspFeedback)
    Feedback for grasp selection. Publish to this topic when the user selects a grasp to generate new training data for the pairwise ranking model.
  - /rail_segmentation/segmented_objects (rail_manipulation_msgs/SegmentedObjectList)
    (DEPRECATED) Incoming segmented objects, for use with the action server grasp suggestion pipeline. The topic can be changed to any other source of segmented objects by setting the segmentation_topic param. This has been deprecated in favor of the service grasp suggestion pipeline, which is recommended instead.
- Publishers
  - ~/grasps (fetch_grasp_suggestion/RankedGraspList)
    (DEPRECATED) Topic for publishing grasps after performing grasp suggestion with the action server pipeline. This has been deprecated in favor of the service grasp suggestion pipeline, which is recommended instead.
- Service Clients
  - /executor/clear_objects (std_srvs/Empty)
    Clear collision objects from the scene. Supports the (optional) grasp executor node included in this package.
  - /executor/add_object (fetch_grasp_suggestion/AddObject)
    Add a collision object to the scene. Supports the (optional) grasp executor node included in this package.
  - /classify_all (fetch_grasp_suggestion/ClassifyAll)
    Perform pairwise classification for all pairs of grasps. Connects to the classifier node included in this package.
- Service Servers
  - ~/suggest_grasps (rail_manipulation_msgs/SuggestGrasps)
    Given an object point cloud, sample grasps and get an initial ranking for them by calculating grasp heuristics.
  - ~/suggest_grasps_scene (rail_manipulation_msgs/SuggestGrasps)
    Given a scene point cloud, sample grasps and get an initial ranking for them by calculating grasp heuristics.
  - ~/suggest_grasps_baseline (rail_manipulation_msgs/SuggestGrasps)
    Given a point cloud, sample grasps classified as antipodal using the full AGILE pipeline. This is only included for evaluation purposes, as the suggest_grasps or suggest_grasps_scene servers produce better results.
  - ~/suggest_grasps_random (rail_manipulation_msgs/SuggestGrasps)
    Given an object point cloud, sample antipodal grasps with a random ordering. This is included only for baseline testing, and should not be used for any real applications!
  - ~/pairwise_rank (rail_manipulation_msgs/PairwiseRank)
    Re-rank the most recently computed grasp list for an object using the pairwise ranking model.
  - ~/pairwise_rank_scene (rail_manipulation_msgs/PairwiseRank)
    Re-rank the most recently computed grasp list for a scene using the pairwise ranking model.
- Action Clients
  - /rail_agile/sample_grasps (rail_grasp_calculation_msgs/SampleGraspsAction)
    Antipodal grasp sampler using AGILE. If you'd like to change the source for the initial sampled grasps, you can remap this topic to any other grasp sampling action server that implements the rail_grasp_calculation_msgs SampleGraspsAction.
  - /rail_agile/sample_classify_grasps (rail_grasp_calculation_msgs/SampleGraspsAction)
    Full AGILE grasp sampling pipeline. Only required for the ~/suggest_grasps_baseline service.
  - /grasp_sampler/rank_grasps_object (rail_grasp_calculation_msgs/RankGraspsAction)
    Calculate heuristics and rank grasps for an object using rail_grasp_calculation.
  - /grasp_sampler/rank_grasps_scene (rail_grasp_calculation_msgs/RankGraspsAction)
    Calculate heuristics and rank grasps for a scene using rail_grasp_calculation.
- Action Servers
  - ~/get_grasp_suggestions (fetch_grasp_suggestion/SuggestGraspsAction)
    (DEPRECATED) Sample grasps and calculate an initial ranking based on grasp heuristics by action server. This is deprecated in favor of the service implementation ~/suggest_grasps, which is recommended instead.
- Parameters
  - segmentation_topic (string, "rail_segmentation/segmented_objects")
    Topic for incoming segmented object data.
  - cloud_topic (string, "head_camera/depth_registered/points")
    Point cloud topic to update the scene where grasping is taking place.
  - file_name (string, "grasp_data")
    Name of a .csv text file to save new training data to.
  - min_grasp_depth (double, -0.03)
    Minimum depth offset (in m) to adjust a suggested grasp by for execution.
  - max_grasp_depth (double, 0.03)
    Maximum depth offset (in m) to adjust a suggested grasp by for execution.
This node implements the pairwise ranking model and exposes it as a service.
- Service Servers
  - ~/classify_grasp_pair (fetch_grasp_suggestion/ClassifyGraspPair)
    Perform a single classification for a pair of grasps. If you're classifying more than one grasp, use the ~/classify_all service instead, as it is very slow to re-instantiate the model for every classification instance.
  - ~/classify_all (fetch_grasp_suggestion/ClassifyAll)
    Perform pairwise classification on all pairs of grasps, returning a re-ordered grasp list as a result.
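Conceptually, the re-ordering done by ~/classify_all works like a round-robin tournament: every grasp is compared against every other grasp, and grasps are sorted by how many pairwise comparisons they win. A minimal sketch of this idea, with a simple stand-in comparator where the real node would query the trained pairwise model:

```python
from itertools import permutations

def rank_by_pairwise_wins(grasps, prefer):
    """Sort grasps by the number of pairwise comparisons each one wins.
    `prefer(a, b)` returns True if grasp a is preferred over grasp b;
    here it stands in for the trained pairwise classifier."""
    wins = {i: 0 for i in range(len(grasps))}
    for i, j in permutations(range(len(grasps)), 2):
        if prefer(grasps[i], grasps[j]):
            wins[i] += 1
    order = sorted(wins, key=wins.get, reverse=True)
    return [grasps[i] for i in order]

# Stand-in comparator: prefer the grasp with the higher first heuristic.
ranked = rank_by_pairwise_wins(
    [[0.2, 0.5], [0.9, 0.1], [0.4, 0.6]],
    prefer=lambda a, b: a[0] > b[0])
# ranked[0] is [0.9, 0.1], the grasp that wins both of its comparisons
```

This also explains why ~/classify_all is much faster than repeated calls to ~/classify_grasp_pair: the model is instantiated once and evaluated over all pairs in a single request.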
- Parameters
  - ~/file_name (string, "decision_tree.pkl")
    Filename for an already-trained classifier model. We include a pre-trained random forest model, and you can generate new models using the train_classifier.py script. Filenames can be relative to the package path or absolute.
  - ~/n_jobs (int, 1)
    For classifiers that support this parameter, the number of jobs to use when performing classification. Passing in -1 will use as much processing power as is available.
  - ~/interpret_trees (bool, false)
    Flag for generating a plot detailing the contributions of each feature over a set of classifications, only valid for decision tree and random forest classifiers. NOTE: This is buggy, in that it tends to crash after generating plots repeatedly, so leave this off unless you specifically want plots to be generated.
  - ~/object_feature_size (int, 6)
    The length of the feature vector describing only the object (not the difference of grasp heuristics).
The following nodes are optional nodes that can be used to execute the suggested grasps, or to select grasps using an interactive marker server implementation for grasp execution and generation of new training examples.
An optional standalone grasp executor for the Fetch robot. Relevant services and actions are as follows:
- Action Servers
  - ~/execute_grasp (fetch_grasp_suggestion/ExecuteGraspAction)
    Execute a grasp pose, which involves moving to the approach angle, opening the gripper, moving in a straight-line trajectory to the grasp pose, closing the gripper, and lifting the object.
  - ~/prepare_robot (fetch_grasp_suggestion/PresetMoveAction)
    Move the arm to a "ready to grasp" pose that's out of the way of the camera.
  - ~/drop_position (fetch_grasp_suggestion/PresetMoveAction)
    Move the arm to a preset position to drop an object.
- Service Servers
  - ~/add_object (fetch_grasp_suggestion/AddObject)
    Add an object to the MoveIt! collision scene.
  - ~/clear_objects (std_srvs/Empty)
    Remove all collision objects from the MoveIt! collision scene.
  - ~/detach_objects (std_srvs/Empty)
    Detach any collision objects currently attached to the gripper.
  - ~/drop_object (std_srvs/Empty)
    Open the gripper and remove all collision objects.
(DEPRECATED) An optional interactive marker server used for collecting new training examples. This is deprecated in favor of a better interface implementation in fetch_pbd, which is recommended instead.
- Subscribers
  - /suggester/grasps (fetch_grasp_suggestion/RankedGraspList)
    Subscriber for newly calculated and ranked grasps, coming from the suggester node by default. The topic can be changed by setting the grasps_topic parameter.
  - /rail_segmentation/segmented_objects (rail_manipulation_msgs/SegmentedObjectList)
    Segmented objects subscriber. Object information is used when generating new training examples. This topic can be changed by setting the segmentation_topic parameter.
- Service Servers
  - ~/cycle_grasps (fetch_grasp_suggestion/CycleGrasps)
    Advance the currently displayed grasp forward or backward.
- Action Clients
  - /executor/execute_grasp (fetch_grasp_suggestion/ExecuteGraspAction)
    Connection to the standalone grasp executor included in this package.
- Action Servers
  - ~/execute_selected_grasp (fetch_grasp_suggestion/ExecuteSelectedGraspAction)
    Execute the currently selected grasp and create new pairwise training examples based on the grasp executed and the grasps seen by the user.
- Parameters
  - ~/segmentation_topic (string, "rail_segmentation/segmented_objects")
    Topic for subscribing to segmented objects.
  - ~/grasps_topic (string, "suggester/grasps")
    Topic for subscribing to newly calculated grasps.
  - ~/file_name (string, "grasp_data")
    Filename of a .csv text file to save new training data to.
The nodes in this section will run the full pipeline of processing a point cloud, sampling grasps for an object or scene, calculating heuristics for those grasps, ranking the grasp list with pairwise ranking, and executing the best grasp. None of the nodes have a GUI (sorry!), but they can be called from the command line by simply publishing to the appropriate ROS topic. Further, before grasps are executed, the user has the option to view them first and must provide keyboard input to actually execute the grasp for safety reasons.
Test individual object grasping with a variety of different strategies, including our novel pairwise ranking method, ranking based only on our heuristics, the full AGILE pipeline, and a baseline random grasp from antipodal candidates. Each method is called by publishing a std_msgs/Int32 message with the index of the object to be grasped to the appropriate test topic. Relevant topics and parameters are as follows:
- Subscribers
  - ~/grasp_object (std_msgs/Int32)
    Grasp an object using the pairwise ranking method.
  - ~/grasp_object_heuristic (std_msgs/Int32)
    Grasp an object using ranks derived directly from the heuristics.
  - ~/grasp_object_agile (std_msgs/Int32)
    Grasp an object using the full AGILE pipeline.
  - ~/grasp_object_random (std_msgs/Int32)
    Grasp an object using a random antipodal grasp.
- Publishers
  - ~/debug (geometry_msgs/PoseStamped)
    Grasp pose to be executed. It's recommended to display this topic in rviz when monitoring grasp execution.
- Parameters
  - ~/segmentation_topic (string, "rail_segmentation/segmented_objects")
    Topic for subscribing to segmented objects.
  - ~/debug (bool, true)
    Publish the pose to be executed if true.
Run a demo where the robot will continue to calculate and perform grasps until a surface in front of it is empty. It will loop through execution cycles that consist of moving the arm out of the way of the camera, taking a point cloud snapshot, sampling and ranking grasps over the point cloud snapshot, executing the best grasp, moving to a drop position off to the side of the robot, and dropping the object. Starting the demo requires publishing an empty message to a topic after the node is running. Relevant topics and parameters are as follows:
- Subscribers
  - ~/run_demo (std_msgs/Empty)
    Execute the demo. As with test_grasp_suggestion, each grasp will still require keyboard input before execution, for safety.
- Parameters
  - ~/cloud_topic (string, "head_camera/depth_registered/points")
    Topic of the point cloud that includes the scene to be cleared.
The following python scripts facilitate training and evaluating new classifiers. Each script is implemented as a ROS node, and can be run with rosrun to set parameters.
This is the main script for training a new classifier and saving the model. Newly trained classifiers will be saved in the current directory of the terminal in which the script was executed.
- Parameters
  - ~/classifier_types (string[], ["decision_tree"])
    The types of classifiers to train models for. Supported classifiers include: ["decision_tree", "random_forest", "ada_boost", "knn", "svm", "logistic_regression", "nn1", "nn2"], where "nn1" is a neural network with a single hidden layer, and "nn2" is a neural network with two hidden layers. You can also pass in "all" to train one of each supported classifier.
  - ~/file_name (string, "grasp_data.csv")
    The filename containing the training data. File paths can be relative or absolute. If the path is relative, the script assumes the file is located in the directory (location of fetch_grasp_suggestion)/data/grasp_preferences/.
Compare the performance of different types of classifiers with various tests and metrics, including k-folds cross validation, detailed results on a train/test split, learning curve plots, ROC curves, and precision-recall curves.
- Parameters
  - ~/classifier_types (string[], ["decision_tree"])
    The types of classifiers to test. Supported classifiers include: ["decision_tree", "random_forest", "ada_boost", "knn", "svm", "logistic_regression", "nn1", "nn2"], where "nn1" is a neural network with a single hidden layer, and "nn2" is a neural network with two hidden layers. You can also pass in "all" to evaluate every supported classifier.
  - ~/file_name (string, "grasp_data.csv")
    The filename containing all of the data, which will be used to create training and testing sets. File paths can be relative or absolute. If the path is relative, the script assumes the file is located in the directory (location of fetch_grasp_suggestion)/data/grasp_preferences/.
  - ~/split (double, 0.4)
    Ratio used to split the data into a train and test set. The value corresponds to the fraction of the total data to be used for the test set.
  - ~/generate_plots (bool, false)
    Flag for generating plots. If true, the script will generate learning curves, ROC curves, and precision-recall curves for each classifier specified in classifier_types. Otherwise, the script will only provide the accuracy from cross validation and the detailed classification report over the train/test split in the terminal.
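The core of this evaluation reduces to a standard scikit-learn train/test split plus cross validation. A hedged sketch of that workflow on synthetic stand-in data (not the actual contents of evaluate_classifier.py):

```python
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Synthetic pairwise data standing in for grasp_data.csv.
X = [[i / 10.0, -i / 10.0] for i in range(-20, 20)]
y = [1 if x[0] > 0 else 0 for x in X]

# test_size=0.4 mirrors the default ~/split parameter above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.4, random_state=0)

model = DecisionTreeClassifier(random_state=0)
scores = cross_val_score(model, X, y, cv=5)  # k-folds accuracy
model.fit(X_train, y_train)
print("cv accuracy: %0.2f" % scores.mean())
print(classification_report(y_test, model.predict(X_test)))
```

On real grasp preference data the per-class precision and recall in the report are usually more informative than raw accuracy, since pairwise datasets are built from both orderings of each pair and are therefore balanced by construction.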
Perform cross validation to examine the effects of different model parameters for a given classifier type.
- Parameters
  - ~/classifier_types (string[], ["decision_tree"])
    The types of classifiers to use. A separate cross-validation procedure will be carried out for each classifier type, with a set of parameters appropriate to that classifier type. Supported classifiers include: ["decision_tree", "random_forest", "ada_boost", "knn", "svm", "logistic_regression", "nn1", "nn2"], where "nn1" is a neural network with a single hidden layer, and "nn2" is a neural network with two hidden layers. You can also pass in "all" to cross validate parameters for every supported classifier.
  - ~/file_name (string, "grasp_data.csv")
    The filename containing all of the data, which will be used for training and testing during cross-validation. File paths can be relative or absolute. If the path is relative, the script assumes the file is located in the directory (location of fetch_grasp_suggestion)/data/grasp_preferences/.
  - ~/generate_plots (bool, false)
    Flag for generating plots that show the effects of different parameter values on classification rate. If true, one plot will be generated for each classifier type specified in classifier_types. If false, the results will be reported only by text output in the terminal. Generating plots is recommended for getting a better understanding of the parameter effects.
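Parameter sweeps of this kind typically reduce to scikit-learn's GridSearchCV. The sketch below uses a hypothetical grid for the "knn" classifier type on synthetic data; the grids cross_validate_params.py actually sweeps may differ.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in data for grasp_data.csv.
X = [[i / 10.0, (i % 3) / 10.0] for i in range(-20, 20)]
y = [1 if x[0] > 0 else 0 for x in X]

# Hypothetical parameter grid for the "knn" classifier type.
grid = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={'n_neighbors': [1, 3, 5, 7]},
    cv=5)
grid.fit(X, y)
print(grid.best_params_)  # best n_neighbors found by cross validation
print(grid.best_score_)   # mean cross-validated accuracy for that value
```

The best-scoring parameters found this way can then be baked into the model you train with train_classifier.py and load through the classifier node's file_name parameter.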