Integrating a sensor

This page contains a step-by-step guide to integrating a sensor into GRIP.

What you need before starting

  • A ROS node or launch file that runs the sensor and publishes the collected data on topics.

  • The list of topics the data is published on

Prerequisites

Examples will be provided using a Kinect v2. If you want to replicate this tutorial, make sure to install libfreenect2 and to follow these instructions to install iai_kinect2 in /home/user/projects/shadow_robot/base/src/.

Procedure

  1. Start the framework: roslaunch grip_api start_framework.launch

  2. Provide the framework with the URDF file of the robot

  3. Set the composition of your robot(s), i.e. how many arms, hands and sensors need to be configured. In this tutorial we are going to configure a single sensor, but you can have several.

  4. In the Settings tab, under Sensors config, click on New and save the file wherever you want

  5. In the margin, click on the + symbol and enter the name of your sensor. Another dialog window will ask you for a launch file that runs your sensor. If you don't have one, please refer to step 9.

  6. A template will appear in the editor with the following fields:
    • data_topics: Dictionary mapping each piece of data to collect to its associated topic

    • frame_id: Name of the frame associated with the sensor in the scene

    • initial_pose: Pose of the sensor in the scene

../../_images/sensor_integration.png
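
As a sketch, a filled-in template for the Kinect v2 could look like the following. The entry name and the exact layout of initial_pose are assumptions; the topic and frame names are the defaults published by iai_kinect2:

    kinect_v2:
      data_topics:
        point_cloud: /kinect2/sd/points
        rgb_image: /kinect2/hd/image_color
      frame_id: kinect2_link
      initial_pose:
        position: {x: 0.5, y: 0.0, z: 1.0}
        orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}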
  7. Click on Save in the editor and you are done: your sensor is now integrated (i.e. you can see a state in the Task editor tab that allows you to collect data from the referenced topics).

  8. Integrate the rest of your robot

  9. If you did not provide a launch file during step 5 (that's fine), make sure to run the node or launch file that starts the sensor before clicking on Launch robot.

Note

Instead of defining the pose in the editor, you can directly refer to poses defined in the pose editor. For instance, if a pose named sensor_pose is defined in the pose editor, you can set initial_pose: sensor_pose.

Using a MoveIt! plugin

If you want MoveIt! to take the data collected by an integrated sensor into account during motion planning, you can specify the corresponding YAML file in the Sensor plugins editor. The documentation about how to create your plugin can be found here. If you are using point clouds or depth maps and want to automatically generate corresponding occupancy maps, you can use the following templates.
Occupancy maps from point clouds:
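
The exact template shipped with GRIP may differ; the following sketch is modeled on MoveIt's standard occupancy_map_monitor/PointCloudOctomapUpdater configuration, with illustrative parameter values:

    sensors:
      - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
        point_cloud_topic: <topic_name>
        max_range: 5.0
        point_subsample: 1
        padding_offset: 0.1
        padding_scale: 1.0
        filtered_cloud_topic: filtered_cloud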
Occupancy maps from depth images:
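
Similarly, a sketch modeled on MoveIt's occupancy_map_monitor/DepthImageOctomapUpdater; the parameter values are illustrative, and the sensor_frame entry is an assumption added to match the <sensor_frame> placeholder mentioned in the note below:

    sensors:
      - sensor_plugin: occupancy_map_monitor/DepthImageOctomapUpdater
        image_topic: <topic_name>
        sensor_frame: <sensor_frame>
        queue_size: 5
        near_clipping_plane_distance: 0.3
        far_clipping_plane_distance: 5.0
        shadow_threshold: 0.2
        padding_scale: 4.0
        padding_offset: 0.03
        filtered_cloud_topic: filtered_cloud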

Note

Make sure to replace the placeholders <topic_name> and <sensor_frame> with their actual values.