
Intelligent Bin Picking System in Simulink®

Since R2024b

This example shows how to implement semi-structured intelligent bin picking of four different shapes of standard PVC fittings and simulate it in an Unreal Engine® environment. The example uses a Universal Robots UR5e cobot to perform the bin picking task, which involves detecting and classifying the objects before picking them. The end effector of the cobot is a suction gripper, which enables the cobot to pick up and sort the PVC fittings into bins at four different locations in the workspace.

This image shows a simulation of the intelligent bin picking system sorting PVC fittings.

This example leverages Simulink model references to build the intelligent bin picking system from smaller components. The example provides a template harness that you can use to build a bin picking system. Separate examples show how to build each of the components present in the IntelligentBinPicking_Harness.slx model.

This approach enables you to scale to any target by utilizing this template. While this example only deploys to a Simulink 3D target, you can adapt this template for deployment to hardware targets as well. To learn more about modeling bin picking and similar manipulator applications in MATLAB and Simulink, see Bin Picking with MATLAB and Simulink.

Toolbox Dependencies

This example relies on these toolboxes:

  • Robotics System Toolbox™ — Used for modeling the robotic manipulator, designing the collision-free planner, and simulating the robot in Unreal Engine.

  • Simulink 3D Animation™ — Used for constructing the bin-picking scene and for co-simulating with Unreal Engine.

  • Computer Vision Toolbox™ — Used to read camera outputs and add perception to the model. You must install the Computer Vision Toolbox Model for Pose Mask R-CNN 6-DoF Object Pose Estimation and Computer Vision Toolbox Model for Mask R-CNN Instance Segmentation support packages to run the perception component. For more information about installing add-ons, see Get and Manage Add-Ons. These support packages also require Deep Learning Toolbox™ and Image Processing Toolbox™.
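
If you want to confirm that the perception support packages are installed before running the model, a quick check like this lists any installed add-ons whose names contain "Mask R-CNN". This is a minimal sketch; the names returned depend on what you have installed.

% List installed add-ons and keep only the Mask R-CNN support packages.
addons = matlab.addons.installedAddons;
disp(addons(contains(addons.Name, "Mask R-CNN"), ["Name" "Version"]))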

Additional Resources

This example also provides a pretrained YOLO v4 object detector for identifying the PVC objects, so that you can run the example without waiting for an object detector to train. If you want to train the object detector yourself, you can install the Computer Vision Toolbox Model for YOLO v4 Object Detection support package, although this is not required to run the example. The trained object detector and the training dataset files are approximately 230 MB in size. You can download the files from the MathWorks website.

% Download the pretrained object detector and the training dataset (approximately 230 MB).
dataFileLocation = exampleHelperDownloadData("UniversalRobots/IntelligentBinPickingDataSet", ...
    "PVC_Fittings_Real_Dataset.zip");

Model Overview

Run the initRobotModelParam helper function to initialize and load all the necessary parameters. The model also executes this function automatically when it loads, as part of its PreLoadFcn callback.

initRobotModelParam;
************PickAndPlaceV3::Initializing parameters***************
Loading Robot Model and Parameters...OK
Loading User Command Bus...OK
Loading Motion Planner Collision Object Bus...OK
Loading Object Detector Response Bus...OK
Loading Motion Planner Task Bus...OK
Loading Motion Planner Command Bus...OK
Loading Joint Trajectory Bus...OK
Loading Motion Planner Response Bus...OK
Loading Manipulator Feedback Bus...OK
Loading Robot Command Bus...OK
Loading Planner Tasks Maximum Errors...OK
Loading Model Simulation Parameters...OK
Loading Object model point cloud...OK
**********PickAndPlace::Parameter Initialization finished**********

Open the model and inspect its contents. The video viewer shows the video feed of the tray containing the PVC fittings during simulation.

open_system('IntelligentBinPicking_Harness.slx')


Intelligent bin picking harness Simulink model.

The intelligent bin picking system model consists of four main components:

  1. Detect Items Using a Camera-Based Perception Component — This component accepts a camera image of the parts in the bin, classifies the parts, and identifies their poses.

  2. Identify Grasp/Release Pose and Generate Collision-Free Robot Trajectories with a Planning Component — Using the classified parts and their identified poses, this component computes a suitable grasp pose and plans collision-free trajectories from the current pose to the target object pose.

  3. Define the Supervisory Logic with a Task Scheduler Component — With the planned trajectories and the current robot pose, this component schedules actions and sends commands to the robot to clear the bin efficiently.

  4. Deploy to a Simulation or Hardware Target Component — The robot executes the received commands in simulation or hardware. This component also returns the outcome and image data from the camera that is part of the target.

Overview of Components

Each of the following sections provides an overview of how a component works and links to examples that show how to build the component, or to other resources with more information.

Detect Items Using a Camera-Based Perception Component

Both model harnesses in this example use a deep-learning-based perception component. However, you could alternatively use a third-party camera system that returns the classified objects and their poses. For more information about how to build the camera perception component, see the Design Camera Perception Component to Detect Bin Items example.
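
As a rough illustration of the detection step, this sketch runs the pretrained Mask R-CNN instance segmentation model from the required support package on a sample image. The actual perception component uses a Pose Mask R-CNN network trained on the PVC fittings and also estimates 6-DoF poses, so treat this only as a minimal stand-in with an assumed placeholder image.

% Minimal detection sketch using the pretrained Mask R-CNN support package model.
% The shipped perception component estimates 6-DoF poses with Pose Mask R-CNN instead.
detector = maskrcnn("resnet50-coco");            % pretrained instance segmentation network
I = imread("visionteam.jpg");                    % placeholder image shipped with the toolbox
[masks, labels, scores] = segmentObjects(detector, I, Threshold=0.5);
imshow(insertObjectMask(I, masks))               % visualize the detected instances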

Identify Grasp/Release Pose and Generate Collision-Free Robot Trajectories with a Planning Component

The trajectory planning component is a triggered subsystem: whenever the task scheduler needs a collision-free trajectory, it sends a request to this subsystem to generate one. If the current target pose is for picking up an object, the task scheduler also sends the pose of the object to the planner. The planner must then first determine an effective grasp for the object based on its pose. If the current target pose is simply a pose in space, determining a grasp is unnecessary.
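
For a suction gripper, one common way to derive the grasp is to take the detected object pose and construct a top-down grasp pose that keeps only the object's position and yaw. This sketch illustrates that idea; the object pose and offset values are assumptions for illustration, not the example's actual grasp logic.

% Sketch: derive a top-down suction grasp pose from a detected object pose.
% Tobj would normally come from the perception component; here it is assumed.
Tobj = trvec2tform([0.45 0.10 0.03]) * axang2tform([0 0 1 pi/6]);
approachOffset = 0.005;                          % meters above the part (assumed)

yaw = atan2(Tobj(2,1), Tobj(1,1));               % keep only the rotation about z
graspPosition = tform2trvec(Tobj) + [0 0 approachOffset];
Tgrasp = trvec2tform(graspPosition) * ...
    axang2tform([0 0 1 yaw]) * ...
    axang2tform([1 0 0 pi]);                     % flip the tool z-axis to point down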

Once the desired pose of the object is known, the planner determines a target joint configuration using the desired pose and then generates a collision-free trajectory from the current joint configuration to the target joint configuration. The trajectory planning algorithm in this example is designed using the manipulatorCHOMP optimizer. This algorithm optimizes trajectories for smoothness and collision avoidance by minimizing a cost function consisting of a smoothness cost and a collision cost. It is coupled with a TOPP-RA solver to produce time-optimal trajectories. For more information about this approach and the interfaces, see the Design a Trajectory Planner for a Robotic Manipulator example.
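
This minimal sketch shows the general shape of that optimization step using the manipulatorCHOMP object. The joint configurations, obstacle positions, time points, and discretization step are assumptions for illustration; the example's planner module adds the grasp logic, bin geometry, and TOPP-RA retiming on top of this. See the manipulatorCHOMP reference page for the exact optimize signature and options.

% Sketch: optimize a joint-space trajectory for smoothness and collision
% avoidance with CHOMP. All numeric values below are illustrative.
robot = loadrobot("universalUR5e", DataFormat="row");

qStart = [0 -pi/2 pi/2 -pi/2 -pi/2 0];           % current joint configuration (assumed)
qGoal  = [pi/4 -pi/3 pi/3 -pi/2 -pi/2 0];        % target joint configuration (assumed)

chomp = manipulatorCHOMP(robot);
chomp.SphericalObstacles = [0.05 0.05;            % sphere radii in meters
                            0.45 0.45;            % x centers
                           -0.15 0.15;            % y centers
                            0.10 0.10];           % z centers

% Optimize a two-waypoint initial guess over a 2-second horizon,
% discretized at 0.1-second intervals.
[qWaypoints, tWaypoints] = optimize(chomp, [qStart; qGoal], [0 2], 0.1);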

Define the Supervisory Logic with a Task Scheduler Component

The task scheduler is the primary coordinating mechanism in the model. It assesses the state of the system and determines the next actions. The task scheduler is contained within a Stateflow® chart. Open the Task Scheduler Stateflow chart to see the logical flow of events.

Task scheduler Stateflow chart.

Deploy to a Simulation or Hardware Target Component

The target consists of the robot platform, which includes the bin and its stand, the robot, and the necessary sensors, such as the camera. The target accepts a bus that instructs the robot how to move and returns the actual, realized trajectory.

This subsystem contains target-specific interface layers, such as ROS, RTDE, or a similar protocol. For example, when ROS is handling the communication with the robot, the robot command bus is first deconstructed and translated into ros_control commands that are compatible with the robot. This component similarly converts robot feedback from ROS back into a more generic bus. This approach keeps the Simulink model agnostic to the communication method used with the target. While this example only communicates using a Simulink 3D target, you can adapt the model to communicate with other targets as well. For more examples of communicating with other targets for bin picking, see Bin Picking with MATLAB and Simulink.

Overview of Buses

To ensure that you can use the different components interchangeably, the model defines standard interfaces between the components. This example achieves this primarily by using a system of buses. You can think of buses as the Simulink equivalent of MATLAB structures. They enable you to pass large amounts of mixed data types in and out of referenced models efficiently. This model uses five main bus types:

  • Object Detector Response Bus — Sends detection results from the object detector to the task scheduler, providing details of the detected objects and their poses. For more information, see the Design Camera Perception Component to Detect Bin Items example.

  • Motion Planner Command Bus — Sends commands from the task scheduler to the planner. The main planner task is given in the tasks bus within the primary bus. For more information, see the Design a Trajectory Planner for a Robotic Manipulator example. This bus is also passed to the hardware or simulation target when the target needs to know which object is being picked, for example so that the Simulation 3D block can verify that the target part has actually been picked.

  • Motion Planner Response Bus — Sends status and validation flags from the planner to the task scheduler, which uses them to verify that the planner has executed successfully. For more information, see the Design a Trajectory Planner for a Robotic Manipulator example.

  • Robot Command Bus — Sends motion and grasping commands from the task scheduler to the hardware or simulation target.

  • Robot Feedback Bus — Provides status and action-completion flags from the target back to the scheduler, used primarily to verify motion of the hardware or simulation target.

Each of the examples in this example series provides a detailed overview of the buses that it uses. You can also view a detailed breakdown of any of the bus types by executing its initial value on the command line.
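
For example, after the initialization script runs, you can list the bus objects it creates (assuming, as the PreLoadFcn callback does here, that they are loaded into the base workspace), or reproduce the structure-to-bus analogy on a simple structure of your own. The field names in the second part of this sketch are hypothetical and are not the example's actual bus elements.

% List the Simulink.Bus objects that initRobotModelParam loaded into the base workspace.
vars = whos;
busNames = {vars(strcmp({vars.class}, 'Simulink.Bus')).name}

% The structure-to-bus analogy in miniature: create a bus definition from a MATLAB
% structure. The field names here are hypothetical, not the example's bus elements.
cmd.jointPositions = zeros(1,6);
cmd.gripperClosed = false;
busInfo = Simulink.Bus.createObject(cmd);        % creates a bus object in the base workspace
evalin("base", busInfo.busName)                  % display the generated bus definition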

Design Parameters and Their Impact

This example is configured for a UR5e cobot using a Robotiq ePick suction gripper and PVC parts, in a bin of specified height, width, and pose. These parameter choices are fixed and hard-coded into the example, but you can modify and verify parameter choices by using the referenced model subsystems. This overview summarizes the impact of these assumptions:

  • The planner and simulation target use a rigidBodyTree object to model the robot. The number of joints determines the size of all joint behavior communications. For this 6-DoF robot, these sizes are represented as a 6-by-M matrix or a 6-by-M-by-K array.

  • The gripper attaches to the robot as part of the rigidBodyTree object, which the planner and simulation target components also use. The gripper type affects the grasp target pose. For a suction gripper, you only need to consider the z-orientation because x- and y-orientations do not impact pick success.

  • The simulation target uses PVC parts, provided as STLs, to simulate behavior and train the pose detection algorithm. The planner operates independently of these parts; you can provide them to the planner as obstacles using the Motion Planner Command Bus. For more details, see the Design a Trajectory Planner for a Robotic Manipulator example.

  • The bin configuration is provided to the simulation target as STLs placed in space. The planner receives this configuration as parameters that define the static placement environment. These are set in parameters such as binCenterPosition, binHeight, binLength, binOrientation, binRotation, and binWidth, as shown in the sketch after this list.
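
This sketch shows how bin placement parameters like these can describe a static collision environment for the planner. The parameter names come from this example, but the numeric values and the single collision primitive are assumptions for illustration, not the example's shipped defaults.

% Sketch: describe the bin floor as a static collision primitive built from the
% bin placement parameters. All numeric values are illustrative.
binCenterPosition = [0.45 0 0.05];               % meters
binLength = 0.40; binWidth = 0.30; binHeight = 0.15;
binRotation = 0;                                 % degrees about the z-axis

binFloor = collisionBox(binLength, binWidth, 0.01);
binFloor.Pose = trvec2tform(binCenterPosition) * axang2tform([0 0 1 deg2rad(binRotation)]);
show(binFloor)                                   % quick visual check of the placement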

If you want to change these parameters, start with the referenced models and verify the changes at the component level before combining them in the main model. You can do this using the existing referenced components, or you can replace the referenced components with your own systems and use the harnesses to verify them.

Simulate Intelligent Bin Picking in Unreal Engine

Open the IntelligentBinPicking_Harness model and click Run to simulate intelligent bin picking in Unreal Engine. The simulation uses these referenced models:

  • PosemaskRCNN_Detection_Module.slx — The perception component is a Pose Mask R-CNN network that has been trained on labeled images from Simulink 3D Animation. For more information about training the Pose Mask R-CNN network, see the Perform 6-DoF Pose Estimation for Bin Picking Using Deep Learning (Computer Vision Toolbox) example.

  • Simulink_3D_IBP_Target.slx — The simulation target is a semi-structured bin picking scene created using Simulink 3D Animation.

  • CHOMP_Trajectory_Planner_Module.slx — The trajectory planner uses an optimization-based planner, manipulatorCHOMP, coupled with a TOPP-RA solver, contopptraj, to produce time-optimal trajectories.

Click Run or execute this code to start the simulation.

sim('IntelligentBinPicking_Harness.slx');

This image shows the cobot picking up a PVC fitting with the suction gripper and a video feed of the tray containing the PVC fittings.
