CooperAtive Robot for Large Spaces manufacturing
Robot self-localization in dynamic environments
Dissertation - 2014 / 2015 - MIEIC
Supervisor:
Armando Jorge Miranda de Sousa
[email protected]
Co-supervisor:
Germano Manuel Correia dos Santos Veiga
[email protected]
Student:
Carlos Miguel Correia da Costa (200903044)
[email protected]
Presentation outline
1. Introduction
   a) Context
   b) Motivations
   c) Objectives
   d) Dissertation overview
2. Proposed 3 / 6 DoF localization system
   a) Supported sensors
   b) Data flow overview
   c) Point cloud assembly
   d) Preprocessing
   e) Initial pose estimation
   f) Cloud registration
   g) Inlier / outlier segmentation
   h) Localization analysis
   i) Dynamic map update
   j) Projection mapping
3. Evaluation setup
   a) Robot platforms
   b) Testing environments
4. Results
5. Final remarks and contributions
1.a. Context
Introduction
• Large semi-structured manufacturing spaces present serious challenges for robot mobility, safety and reliability
• Shipyards exhibit unique features that make them an interesting place for the deployment of a mobile robot
• Fit-out operations of ship cabins, which include stud welding and marking of CAD model information onto the raw superstructure to assist posterior fit-out operations, can be delegated to mobile robots
• These operations alone are estimated to represent a market of over 3,000 million € in Europe for a robot able to perform them
1.b. Motivations of CARLoS project
Introduction
• Robots can improve productivity in repetitive tasks that require precise manipulation of tools
• Robots can decrease manufacturing costs
• Robots can free human workers to perform more complex tasks
Support studs
• European shipyards need to reduce production costs to remain viable
Installation of insulation layer
1.c. Objectives of CARLoS project
Introduction
• Development of a robot capable of:
  • Performing tasks autonomously
  • Navigating safely and accurately in environments with irregular floors
  • Allowing precise docking of the robot platform
  • Avoiding injury to human co-workers
  • Avoiding property or robot damage
  • Tolerating slight changes in the environment in relation to the known map
Robotnik G-Ball
1.d. Dissertation overview
Introduction
• Implementation of an efficient robot self-localization system (3/6 DoF), capable of operating in dynamic environments using LIDAR / RGB-D sensors
Navigation of robot in 3 and 6 degrees of freedom
2.a. Supported sensors
Localization system
• The proposed localization system supports any sensor capable of creating a point cloud of the environment, such as:
  • LIDARs
  • ToF cameras
  • RGB-D sensors
SICK NAV 350 LIDAR
Mesa SR4000 ToF
Kinect 1 RGB-D
Kinect 2 RGB-D
2.b. Data flow overview
Localization system
• Main processing modules of the proposed localization system:
  • Preprocessing
  • Initial pose estimation
  • Point cloud registration
  • Registration analysis
  • Localization validation
  • Map update
2.c.1. Point cloud assembly
Localization system
• Rolling buffer of ambient point clouds
• Laser scan assembly with spherical linear interpolation (see the sketch below)
• Point cloud created from LIDAR sensor data after:
  • A given number of LIDAR scans is merged
  • A given assembly time period has elapsed
Laser scan deformation (left) and correction with spherical linear interpolation (right)
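A minimal sketch of the correction idea, assuming the sensor poses at the start and end of the scan are known (e.g. from odometry); the Pose struct and function name are illustrative placeholders, not the dissertation's implementation:

#include <Eigen/Geometry>

// Illustrative sketch: undistort one laser point by interpolating the sensor
// pose across the scan (slerp for rotation, linear interpolation for translation).
// 'ratio' is the point's normalized acquisition time within the scan, in [0, 1].
struct Pose { Eigen::Vector3f translation; Eigen::Quaternionf rotation; };

Eigen::Vector3f correctScanPoint(const Eigen::Vector3f& point_in_sensor_frame,
                                 const Pose& scan_start_pose, const Pose& scan_end_pose,
                                 float ratio)
{
    Eigen::Quaternionf rotation = scan_start_pose.rotation.slerp(ratio, scan_end_pose.rotation);
    Eigen::Vector3f translation = (1.0f - ratio) * scan_start_pose.translation +
                                  ratio * scan_end_pose.translation;
    return rotation * point_in_sensor_frame + translation;  // point in the assembly frame
}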
2.c.2. Point cloud assembly
Localization system
Point cloud created from LIDAR scan assembly
2.d. Preprocessing
Localization system
• Point cloud preprocessing (sketched below with PCL) allows the system to:
  • Reduce the impact of sensor measurement noise
    • Voxel grids
  • Control the level of detail of sensor data
    • Voxel grids
    • Random sampling
  • Filter unnecessary points
    • Distance / passthrough / radius filters
    • Statistical outlier removal
  • Add normals to sensor data
  • Reconstruct surfaces from sensor measurements
  • Select and describe keypoints to allow faster geometric feature registration
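A minimal PCL sketch of such a preprocessing chain; the selected filters and all parameter values are illustrative, not the exact configuration of the proposed system:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/features/normal_estimation.h>
#include <pcl/search/kdtree.h>

// Downsample, remove sparse noise and estimate surface normals (example values).
void preprocess(const pcl::PointCloud<pcl::PointXYZ>::Ptr& input,
                const pcl::PointCloud<pcl::PointXYZ>::Ptr& output,
                const pcl::PointCloud<pcl::Normal>::Ptr& normals)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr downsampled(new pcl::PointCloud<pcl::PointXYZ>);

    // Voxel grid: reduces measurement noise and controls the level of detail
    pcl::VoxelGrid<pcl::PointXYZ> voxel_grid;
    voxel_grid.setInputCloud(input);
    voxel_grid.setLeafSize(0.01f, 0.01f, 0.01f);
    voxel_grid.filter(*downsampled);

    // Statistical outlier removal: discards isolated, unnecessary points
    pcl::StatisticalOutlierRemoval<pcl::PointXYZ> outlier_removal;
    outlier_removal.setInputCloud(downsampled);
    outlier_removal.setMeanK(50);
    outlier_removal.setStddevMulThresh(1.0);
    outlier_removal.filter(*output);

    // Normal estimation: adds surface normals needed by some registration methods
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normal_estimation;
    normal_estimation.setInputCloud(output);
    normal_estimation.setSearchMethod(
        pcl::search::KdTree<pcl::PointXYZ>::Ptr(new pcl::search::KdTree<pcl::PointXYZ>));
    normal_estimation.setRadiusSearch(0.05);
    normal_estimation.compute(*normals);
}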
2.e.1. Initial pose estimation
Localization system
• When a robot starts operating or loses track of its position, it needs to perform global localization in order to compute its current pose on the map
• Overview of the RANSAC-based approach (sketched below with PCL):
  • Select a given number of keypoints in the sensor point cloud
  • Find the corresponding keypoints in the reference point cloud kd-tree
  • Compute the transformation matrix
  • Calculate the keypoint inlier percentage
  • Update the accepted initial poses
  • Repeat until the end of iterations
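For illustration only, PCL's SampleConsensusInitialAlignment follows the same keypoint-based RANSAC idea; the choice of FPFH descriptors and every parameter value below are assumptions, not necessarily the configuration used in this work:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/ia_ransac.h>

// Sketch of keypoint-based RANSAC alignment between the sensor cloud and the
// reference map, using FPFH descriptors (assumed choice) computed beforehand.
Eigen::Matrix4f estimateInitialPose(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& sensor_keypoints,
    const pcl::PointCloud<pcl::FPFHSignature33>::Ptr& sensor_descriptors,
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& map_keypoints,
    const pcl::PointCloud<pcl::FPFHSignature33>::Ptr& map_descriptors)
{
    pcl::SampleConsensusInitialAlignment<pcl::PointXYZ, pcl::PointXYZ,
                                         pcl::FPFHSignature33> sac_ia;
    sac_ia.setInputSource(sensor_keypoints);
    sac_ia.setSourceFeatures(sensor_descriptors);
    sac_ia.setInputTarget(map_keypoints);
    sac_ia.setTargetFeatures(map_descriptors);
    sac_ia.setMaximumIterations(500);           // RANSAC iterations
    sac_ia.setMinSampleDistance(0.05);          // spread of the sampled keypoints (m)
    sac_ia.setMaxCorrespondenceDistance(0.25);  // inlier distance threshold (m)

    pcl::PointCloud<pcl::PointXYZ> aligned_keypoints;
    sac_ia.align(aligned_keypoints);            // runs the RANSAC search
    return sac_ia.getFinalTransformation();     // best candidate initial pose
}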
2.e.2. Initial pose estimation
Localization system
Initial pose estimation in Jarvis testing environment
2.e.3. Initial pose estimation
Localization system
Initial pose estimation in Guardian testing environment
2.f. Cloud registration
Localization system
• Point cloud registration can be done with:
  • Iterative Closest Point (ICP), sketched below
    • Point-to-Point / Point-to-Point Non-Linear / Point-to-Plane / Generalized
  • Normal Distributions Transform (NDT)
Cloud registration in Jarvis testing environment
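A minimal point-to-point ICP sketch with PCL; the correspondence distance and iteration limits are illustrative values, and the initial pose is assumed to come from pose tracking or from the previous global localization step:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>

// Refine the sensor-to-map pose by registering the sensor cloud against the map.
Eigen::Matrix4f registerCloud(const pcl::PointCloud<pcl::PointXYZ>::Ptr& sensor_cloud,
                              const pcl::PointCloud<pcl::PointXYZ>::Ptr& map_cloud,
                              const Eigen::Matrix4f& initial_pose)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(sensor_cloud);
    icp.setInputTarget(map_cloud);
    icp.setMaxCorrespondenceDistance(0.1);   // ignore far-away matches (meters)
    icp.setMaximumIterations(50);
    icp.setTransformationEpsilon(1e-6);      // convergence criterion

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned, initial_pose);        // start from the estimated pose
    return icp.getFinalTransformation();     // corrected sensor-to-map pose
}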
2.g. Inlier / outlier segmentation
Localization system
• The point cloud after registration is split into two sets of points (inliers / outliers), as sketched below
Inlier / outlier segmentation in Guardian testing environment
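A sketch of this segmentation by nearest-neighbor distance to the reference map, assuming a fixed distance threshold (the 5 cm default is an illustrative value):

#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/kdtree/kdtree_flann.h>

// Every registered sensor point whose closest map neighbor lies within
// 'max_distance' is an inlier; the remaining points are outliers.
void segmentInliersOutliers(const pcl::PointCloud<pcl::PointXYZ>::Ptr& registered_cloud,
                            const pcl::PointCloud<pcl::PointXYZ>::Ptr& map_cloud,
                            pcl::PointCloud<pcl::PointXYZ>& inliers,
                            pcl::PointCloud<pcl::PointXYZ>& outliers,
                            float max_distance = 0.05f)
{
    pcl::KdTreeFLANN<pcl::PointXYZ> map_tree;
    map_tree.setInputCloud(map_cloud);

    std::vector<int> index(1);
    std::vector<float> squared_distance(1);
    for (const pcl::PointXYZ& point : registered_cloud->points)
    {
        map_tree.nearestKSearch(point, 1, index, squared_distance);
        if (squared_distance[0] <= max_distance * max_distance)
            inliers.push_back(point);
        else
            outliers.push_back(point);
    }
}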
2.h. Localization analysis
Localization system
• Pose estimations are filtered using the following metrics (two of them are sketched below):
  • Inlier / outlier percentage
  • Inlier / outlier Root Mean Square Error
  • Inlier / outlier angular distribution
  • Translation / rotation corrections
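For illustration, the inlier percentage and inlier RMSE can be computed directly from the nearest-neighbor distances gathered during segmentation; this is a sketch under that assumption, not the system's exact formulation:

#include <cmath>
#include <cstddef>
#include <vector>

// Percentage of registered points accepted as inliers.
double inlierPercentage(std::size_t inlier_count, std::size_t total_points)
{
    if (total_points == 0) return 0.0;
    return 100.0 * static_cast<double>(inlier_count) / static_cast<double>(total_points);
}

// RMSE of the inliers, given the squared distance of each inlier to its closest map point.
double inlierRootMeanSquareError(const std::vector<float>& squared_distances)
{
    if (squared_distances.empty()) return 0.0;
    double sum = 0.0;
    for (float squared_distance : squared_distances) sum += squared_distance;
    return std::sqrt(sum / static_cast<double>(squared_distances.size()));
}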
2.i. Dynamic map update
Localization system
• Dynamic map update, sketched below, can be done using:
  • Full integration (all registered points)
    • Ideal for SLAM or outdated maps
  • Partial integration (inlier / outlier points)
    • Ideal when there is an accurate map that needs to be incrementally updated
Localization map before (left) and after partial integration (right)
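A minimal sketch of one way to perform the integration with PCL, assuming the map is kept as a point cloud downsampled to a fixed resolution (function name and values are illustrative):

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>

// Concatenate new registered points into the map and re-downsample it so the
// map keeps a bounded resolution. For full integration 'new_points' is the
// whole registered cloud; for partial integration it is only the selected
// inlier / outlier subsets.
void updateMap(const pcl::PointCloud<pcl::PointXYZ>::Ptr& map_cloud,
               const pcl::PointCloud<pcl::PointXYZ>& new_points,
               float map_resolution = 0.01f)
{
    *map_cloud += new_points;  // merge the registered points into the map

    pcl::VoxelGrid<pcl::PointXYZ> voxel_grid;
    voxel_grid.setInputCloud(map_cloud);
    voxel_grid.setLeafSize(map_resolution, map_resolution, map_resolution);
    voxel_grid.filter(*map_cloud);  // PCL supports filtering in place
}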
2.j. Projection mapping
Localization system
Projection mapping source image in Gazebo (top) and its projection (bottom)
3.a. Robot platforms
Evaluation
Jarvis platform
Pioneer platform
Guardian platform
3.b. Testing environments
Evaluation
Jarvis environment
Guardian environment
Pioneer environment
Kinect environment
4.a. Results overview
Evaluation

Platform      | Map resolution | Velocity  | Translation error (mm)    | Rotation error (degrees)  | Computation time (seconds)
              |                |           | mean     | std. dev.      | mean    | std. dev.       | mean     | std. dev.
Jarvis DRL    | 10 mm          | 50 cm / s | 6.422    | 3.992          | 0.397   | 0.099           | 13.376   | 13.316
Jarvis AMCL   | 10 mm          | 50 cm / s | 84.230   | 29.134         | 0.595   | 0.581           | --       | --
Pioneer DRL   | 25 mm          | 26 cm / s | 22.434   | 12.052         | 5.429   | 0.676           | 4.569    | 2.530
Pioneer AMCL  | 25 mm          | 26 cm / s | 111.470  | 46.783         | 6.496   | 1.386           | --       | --
Guardian DRL  | 2 mm           | 30 cm / s | 5.897    | 4.452          | 0.095   | 0.104           | 24.409   | 33.459
Kinect DRL    | 20 mm          | 30 cm / s | 17.926   | 9.789          | 3.027   | 0.638           | 30.714   | 11.407

Overview of main results achieved with the proposed localization system (DRL) and AMCL
4.b. 3 DoF SLAM with Pioneer
Evaluation
Proposed localization system
Vicon cameras
GMapping (10% sensor flow speed)
GMapping (100% sensor flow speed)
4.c.1. 3 DoF Jarvis (comparison with AMCL)
Evaluation
Comparison of results between DRL (top) and AMCL (bottom)
4.c.2. 3 DoF Jarvis (comparison with AMCL)
Evaluation
Comparison of results between DRL (top) and AMCL (bottom)
4.d. 3 DoF Guardian tests (simulator)
Evaluation
Videos of Guardian platform in Gazebo simulator
4.e. 3 DoF Guardian tests (robot)
Evaluation
Videos of Guardian platform in CROB (top) and Deltamatic (bottom)
4.f. 6 DoF SLAM with Kinect (free fly)
Evaluation
Video of the 3D map built by the proposed localization system
4.g. 6 DoF Kinect test (free fly)
Evaluation
Video of the proposed 6 DoF localization system using a Kinect
4.h. 6 DoF Kinect free fly test comparison
Evaluation
Comparison of the 6 DoF results between DRL (left) and ethzasl_icp_mapper (right)
5. Final remarks and contributions
Conclusions
• Implementation of an efficient, accurate, modular and extensible 3 / 6 DoF localization / mapping system for ROS using PCL
• Development of reusable ROS packages
  • Laser assembler
  • Automated self-localization testing infrastructure
• Creation / improvement of 3 / 6 DoF datasets
• ICIT-2015 paper
  • "Robust and accurate localization system for mobile manipulators in cluttered environments"
Thank you! Questions?