I like to Moveit! Object Manipulation Using a Group of Parallel Robots with ROS & PyBullet

Rico Ruotong Jia
8 min read · Dec 11, 2020
Salute in Delta Style, Source: Rico Ruotong Jia

In Robotics, one area under active research is multi-agent coordination. The future is promising: a group of robots can carry heavy objects for people, explore an unreachable space for humans, or provide fun, like playing tug of war with us.

Video Demonstration of Multi-Agent Coordination using ROS & PyBullet

My professors at Northwestern University, Dr. Kevin Lynch and Dr. Matt Elwin, have been studying this area for years. For the past few months, I have had a lot of fun working with them on getting 3 Delta robots (they’re parallel robots!) to carry an object to an arbitrary achievable pose. Our goal is to develop a ROS Moveit! motion planning pipeline for the robots. ROS (Robot Operating System) is a very popular software framework for inter/intra-robot communication, control, planning, simulation, etc. Moveit! is an easy-to-use, open-source robotics manipulation platform, and the “crown jewel” among the many ROS sub-frameworks — it provides a complete motion planning pipeline, with an Rviz interactive GUI, motion planners, inverse kinematics solvers, plugins for pre/post-processing motion plans, and a universal ROS “action” interface that talks to actual robots to execute motions. What I like most about Moveit! is its modular design: we can swap out almost any module in the illustration below!

Modules in Moveit! Source: ROS Moveit! Tutorials

Another integral part of the project is testing. Due to Covid-19, access to lab equipment was harder to come by than usual, so we decided to build a PyBullet simulator for the robots and test the motion planning pipeline there.

Overall, this project provides some good examples of getting this pipeline to work in a fairly complex setting. Some of these examples might even be the first of their kind as of late 2020, to the best of my knowledge. The project is part of a larger research project called Omnid, which aims to manipulate an object with Delta robots mounted on 3 omni-directional drives. The GitHub repo is here: https://github.com/RicoJia/Omnid_Project

Moveit! Motion Planning Pipeline

If you are a developer looking to implement a similar project, have fun reading this section! Otherwise, you can safely skim through the illustrations and skip to the following sections.

Overview of the Omnid Motion Planning Pipeline

The pipeline has a centralized design — inside the chunky /move_group node, the Rviz plugin is the “commander” of all actions, including dragging the interactive marker mounted on the object, visualizing a plan from a start pose to an end pose, and executing the plan through a ROS action interface.

As an alternative, many people use moveit_visual_tools instead of the Rviz plugin — I later found out that it may give us more flexibility over where and how to place markers. However, to keep things simple, I decided to stick with the good old Rviz plugin setup.

Planning Interface on Rviz Source: Rico Ruotong Jia

Here are a few challenges you might encounter if you’d like to implement your project in a similar way:

How did you manage to get a parallel robot working with Moveit! in the first place?

A: Great question — it is a fun brain teaser in some ways. For historical reasons, Moveit! only accepts robots into a planning group as a serial kinematic chain. So what do we do to plan for multiple serial sub-chains at the same time? An intuitive answer: let’s make them one chain. The Delta robot’s structure is quite friendly to “serialization”, i.e., we add some extra dummy links and joints so that the legs become serially connected while the dummy links stay static. For example, in the illustration below, we create a dummy base_link that connects the base of one leg to another leg, and we add a mimic joint to keep the dummy base_link static.

“Serialize” a Parallel Platform. Source: Rico Ruotong Jia
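To make the mimic-joint trick concrete, here is a small URDF sketch. All link and joint names here are made up for illustration (they are not from the Omnid URDF), and a real leg needs one mimic joint per actuated joint whose rotation must be canceled:

```xml
<!-- Illustrative sketch only: names are hypothetical. -->
<link name="dummy_base_link"/>
<joint name="leg1_to_dummy_base" type="revolute">
  <parent link="leg1_last_link"/>
  <child link="dummy_base_link"/>
  <origin xyz="0 0 0" rpy="0 0 0"/>
  <axis xyz="0 1 0"/>
  <limit lower="-3.14" upper="3.14" effort="100" velocity="100"/>
  <!-- Rotate by the negative of leg1's joint angle so dummy_base_link stays static -->
  <mimic joint="leg1_joint" multiplier="-1" offset="0"/>
</joint>
```

The mimic joint always takes the opposite value of the leg joint it follows, so the dummy link cancels the leg’s rotation and remains fixed in the world, letting the next leg attach to a “static” parent.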

How do I get the updated pose of an end-effector from Rviz in move_group_interface?

A: Subscribe to /rviz_moveit_motion_planning_display/robot_interaction_interactive_marker_topic/update. As far as I know, this is the best way: since move_group_interface is standalone from the Rviz plugin, the Rviz plugin’s planning_scene is not accessible here. One untold fact I noticed is that the published pose’s Z coordinate is not relative to the world frame. Instead, it is relative to base_link (the first link attached to the world frame), so watch out.
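A minimal sketch of handling that gotcha — the offset value below is a made-up example, not the Omnid robot’s real geometry:

```python
# Hypothetical offset: suppose base_link sits 0.2 m above the world origin.
BASE_LINK_HEIGHT_IN_WORLD = 0.2

def marker_z_to_world(z_in_base_link):
    """Convert the marker pose's Z (relative to base_link) into the world frame."""
    return z_in_base_link + BASE_LINK_HEIGHT_IN_WORLD
```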

How do I update a robot pose through move_group_interface?

A: Just publish the end-effector pose on /rviz/moveit/move_marker/*. You must specify the frame of the pose, such as /world or the base_link frame. Upon receiving the new message, the Rviz plugin will call the IK plugin to update each robot accordingly.

What’s the purpose of IK plugin? When do I need one?

A: In short, inverse kinematics (IK) calculates joint values given a pose of the robot’s end-effector. In this project, IK is needed for 1. updating the robot pose when we click and drag the end-effector, and 2. calculating each robot’s joint values. In the case of a parallel manipulator, Moveit!’s stock IK plugins, KDL and IKFast, do not solve for the entire parallel structure. The IK of a Delta robot, on the other hand, can be derived analytically from its geometry — here is a good tutorial. Also, a good IK plugin with some “geometric shortcuts” might be able to speed up the planning process.
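As a concrete sketch of that geometric approach (following the style of the tutorial linked above), the per-arm IK reduces to intersecting a circle and a sphere in each arm’s plane. The dimensions below are made-up example values, not the Omnid robot’s real parameters:

```python
import math

# Hypothetical dimensions: base side F, effector side E, upper arm RF,
# forearm RE -- example values only.
F, E, RF, RE = 100.0, 50.0, 100.0, 200.0
TAN30 = math.tan(math.pi / 6.0)

def arm_angle(x0, y0, z0):
    """Shoulder angle for one arm, target point (x0, y0, z0) in that arm's frame."""
    y1 = -0.5 * TAN30 * F          # shoulder joint position on the base triangle
    y0 = y0 - 0.5 * TAN30 * E      # shift the target by the effector offset
    # Intersect the upper-arm circle (radius RF) with the forearm sphere (radius RE)
    a = (x0 * x0 + y0 * y0 + z0 * z0 + RF * RF - RE * RE - y1 * y1) / (2.0 * z0)
    b = (y1 - y0) / z0
    d = -(a + b * y1) ** 2 + RF * (b * b * RF + RF)
    if d < 0.0:
        return None                # target is out of reach
    yj = (y1 - a * b - math.sqrt(d)) / (b * b + 1.0)  # knee joint position
    zj = a + b * yj
    return math.atan2(-zj, y1 - yj)

def delta_ik(x, y, z):
    """Joint angles of the three arms, which are spaced 120 degrees apart."""
    angles = []
    for i in range(3):
        phi = 2.0 * math.pi * i / 3.0    # rotate the target into arm i's frame
        xi = x * math.cos(phi) + y * math.sin(phi)
        yi = -x * math.sin(phi) + y * math.cos(phi)
        angles.append(arm_angle(xi, yi, z))
    return angles
```

By symmetry, a target on the central axis should give three identical shoulder angles, which is a quick sanity check for an implementation like this.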

How do you coordinate these robots at the same time when you click and drag them?

A: Move_group_interface first gets the most up-to-date object pose from /rviz_moveit_motion_planning_display/robot_interaction_interactive_marker_topic/update, then it calculates each robot’s high-level end-effector pose (“high-level IK”). Even though move_group_interface just publishes every robot’s end-effector pose, it still needs to check that these poses can be reached by each robot. That is done through (pseudocode with real function names and scope)

solver = moveit::core::JointModelGroup::getSolverInstance();
solver->searchPositionIK(pose, …); // call the IK solver

How do you plan for each robot in a coordinated way?

A: This is done in a post-processing planner adapter. The adapter has the planner of choice (I like RRTConnect in OMPL for its efficiency in high-dimensional configuration spaces) plan and produce some waypoints for the object platform (and for other joints as well, which will be discarded). Don’t worry — this planning process is very fast, as it is just linear interpolation between joint values. Next, in the adapter, we calculate the “high-level IK” and low-level IK, similar to move_group_interface. Once this is done, we fill all joint values into the motion plan response and return it back to the pipeline.
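To make the “linear interpolation between joint values” step concrete, here is a minimal sketch (the function and variable names are mine, not from the adapter code) of densifying a plan between a start and goal configuration:

```python
def interpolate_waypoints(q_start, q_goal, n_steps):
    """Linearly interpolate joint vectors from q_start to q_goal, inclusive."""
    waypoints = []
    for i in range(n_steps + 1):
        s = i / n_steps                      # interpolation parameter in [0, 1]
        waypoints.append([a + s * (b - a) for a, b in zip(q_start, q_goal)])
    return waypoints

# e.g. a 3-DOF object-platform pose (x, y, theta), 4 segments -> 5 waypoints
path = interpolate_waypoints([0.0, 0.0, 0.0], [0.4, 0.2, 0.1], 4)
```

Each waypoint would then go through the high-level and low-level IK described above before being written back into the motion plan response.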

PyBullet Simulator

In my other post, Robot Goes Wild: Delta Robot Bounces Ball using Deep Reinforcement Learning, I used the same simulator to train a robot to bounce a soccer ball 8 times! PyBullet is a great physics simulator with good precision and relatively low CPU usage, so it is where I built the simulator for this project.

How to build a simulator for a parallel robot in PyBullet?

This process is easier than it is for Moveit!, thanks to the great code examples provided by PyBullet’s creator, Erwin Coumans. Still, when I first started out, those examples were overwhelming. For building a parallel robot, the minitaur example is a great reference, as it illustrates how to load a URDF into PyBullet, how to build a parallel platform, etc.

The first step is to build the URDF. This URDF can be much simpler than the one for Moveit!, as we do not necessarily have to “serialize” it. We declare each leg as a kinematic chain, and we separately declare the end-effector as a standalone link that moves freely. That’s it. Here is my Xacro URDF — a “cleaner” version of URDF that can be converted with just one ROS command. The next step is to create a constraint to connect each leg to the end-effector:

p.createConstraint(...) # you might want to use p.JOINT_POINT2POINT (a point-to-point, i.e., ball constraint) here.
Legs are connected to the end-effector through a constraint, followed by a surprise soccer ball.

Series Elastic Actuators

An interesting part of this robot is the series elastic actuators it uses for torque control. Each actuated joint is driven through a series elastic actuator — here, a torsion spring. We can apply an exact torque τ at any time using τ = -kΔθ, where Δθ is the deflection between the motor shaft and the robot joint.

PyBullet does not have a torsion spring object, so we need to implement it ourselves. The first tricky part is disabling joint friction by setting the joint control mode to VELOCITY_CONTROL with force=0. The second tricky part is applying torque to the joint: PyBullet’s applyExternalTorque applies a net torque to a link, so we have to use applyExternalForce and convert the torque into forces.
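Since the spring law is just τ = -kΔθ, a zero-damping joint should oscillate at the analytic frequency ω = √(k/I). Here is a standalone pure-Python sketch of that check (not the PyBullet code; the stiffness, inertia, and step size are made-up values):

```python
import math

K = 2.0      # spring stiffness (N*m/rad), made-up value
I = 0.5      # joint inertia (kg*m^2), made-up value
DT = 1e-4    # integration step (s)

def simulate_period(theta0=0.3, t_max=10.0):
    """Semi-implicit Euler simulation of tau = -K*theta on a free joint.
    Returns the period measured between successive positive-going zero crossings."""
    theta, omega = theta0, 0.0
    crossings, t, prev = [], 0.0, theta0
    while t < t_max:
        omega += (-K * theta / I) * DT   # apply the spring torque
        theta += omega * DT
        t += DT
        if prev < 0.0 <= theta:          # positive-going zero crossing
            crossings.append(t)
        prev = theta
    return crossings[1] - crossings[0]

expected = 2.0 * math.pi * math.sqrt(I / K)  # analytic period
measured = simulate_period()
```

The measured period should match 2π√(I/K) closely, which is essentially the same zero-damping check described below with the Lagrangian-dynamics derivation.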

The result (left) is verified by checking the zero-damping oscillation case, using Lagrangian dynamics on Google Colab (right).

Verification of Torsion Spring in PyBullet

Interfacing with Moveit!

Since we’re interested in having 3 robots, we first spawn 3 robots in the simulated PyBullet world. We also spawn an object platform on top of the robots. Note that there’s a clearance between the robots and the object — because each robot’s end-effector cannot rotate, we elevate the object so it can tilt. In real life, this can be done using rods and spherical joints.

Clearance That Allows the Object To Tilt

Then, we make an interface with Moveit!. This interface consists of:

  1. Updated joint_states
  2. a ROS action server

Overall Structure of the Simulator

The joint_states are used to update the robot state in Moveit!. First, get joint state information from PyBullet using

p.getJointState(...) # for all robot joints
p.getBasePositionAndOrientation(...) # TRICKY: use this instead of getLinkState for the object platform state.

Next, publish the joint states on a topic with an arbitrary name, say /omnid/joint_states. Then, in ROS’s /joint_state_publisher (see /moveit_config/demo.launch), specify that topic in the source_list parameter; joint states from those topics will be used to broadcast TF:

<node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher">
<rosparam param="source_list">[omnid/joint_states]</rosparam>
</node>

In terms of the action server, follow this awesome tutorial for a clear, detailed implementation.

What’s Next

If you’d like to know more about the code, come check out my github: https://github.com/RicoJia/Omnid_Project

Also, there are some notes for future developers: https://docs.google.com/document/d/1iKX31U3NSdvEZ6RA3NsiCXwdtE4my4krcZCF_odcYEk/edit?usp=sharing

Let me know about any suggestions or questions you may have!


Rico Ruotong Jia

A singer, a gym goer, a robot enthusiast, a human being. Northwestern University MSc. Robotics