
A Mobile Robot Localization via Indoor Fixed Remote Surveillance Cameras †

Jae Hong Shim 1 and Young Im Cho 2,*

1 Department of Mechatronics Engineering, Korea Polytechnic University, Si-Heung, Gyunggi-do 429-793, Korea; [email protected]
2 Department of Computer Engineering, Gachon University, Sung-Nam, Gyunggi-do 461-701, Korea
* Correspondence: [email protected]; Tel.: +82-31-750-5800; Fax: +82-31-750-5768
† This paper is an extended version of our paper published in Procedia Computer Science: Shim, J.H.; Cho, Y.I. Mobile robot localization using external surveillance cameras at indoor. In Proceedings of the CHARMS 2015 Workshop, Belfort, France, 17–20 August 2015; Volume 56, pp. 502–507.

Academic Editors: Eric T. Matson, Byung-Cheol Min and Donghan Kim Received: 31 October 2015; Accepted: 2 February 2016; Published: 4 February 2016

Abstract: Localization, which is a technique required by service robots to operate indoors, has been studied in various ways. Most localization techniques have the robot measure environmental information to obtain location information; however, this is a high-cost option because it uses extensive equipment and complicates robot development. If an external device is used to determine a robot's location and transmit this information to the robot, the cost of the internal equipment required for location recognition can be reduced, which simplifies robot development. Thus, this study presents an effective method to control robots by obtaining their location information using a map constructed from the visual information of surveillance cameras installed indoors. With only a single image of an object, it is difficult to gauge its size due to occlusion. Therefore, we propose a localization method using several neighboring surveillance cameras. A two-dimensional map containing robot and object position information is constructed using the images of the cameras. The concept of this technique is based on modeling the four edges of the projected image of the camera's field of coverage and an image processing algorithm for finding the object's center, which enhances the location estimation of objects of interest. We experimentally demonstrate the effectiveness of the proposed method by analyzing the resulting movement of a robot in response to the location information obtained from the two-dimensional map. The accuracy of the multi-camera setup was measured in advance.

Keywords: localization; mobile robot; surveillance camera; indoor; homography

1. Introduction

Recently, various localization technologies for mobile robots have been studied with respect to acquiring accurate environmental information. Typically, mobile robots have self-organized sensors to obtain environmental information. However, such robots are very expensive to manufacture and have complicated body structures. The structural layout of indoor environments is typically known; thus, many localization studies have obtained the required information from external sensors installed on a robot. Infrared, ultrasonic, laser range finder, RFID (Radio Frequency Identification), and RADAR (Radio Detecting and Ranging) sensors are popularly used for localization. Hopper et al. [1] presented an active sensing system, which uses infrared emitters and detectors to achieve 5–10 m accuracy. However, this sensor system is not suitable for high-speed applications, as the localization cycle requires about 15 s and always requires line of sight. Ultrasonic sensors use the time-of-flight measurement technique to provide location information [2]. However, ultrasonic sensors require a great deal of infrastructure to achieve high effectiveness and accuracy.




Laser distance measurement is executed by measuring the time that it takes for a laser beam to be reflected off a target and returned to the sensor. Since the laser range finder is a very accurate and quick measurement device, it is widely used in many applications. Subramanian et al. [3] and Barawid et al. [4] proposed localization methods based on a laser range finder. The laser range finder was used to acquire distance information about the environment, which can be used to identify and avoid obstacles during navigation. However, this high performance comes at a high hardware cost. Miller et al. [5] presented an indoor localization system based on RFID. RFID-based localization uses RF tags and a reader with an antenna to locate objects, but each tag can only be detected over a range of about 4–6 m. Bahl et al. [6] and Lin et al. [7] introduced the RADAR system, which is a radio-frequency (RF) based system for locating and tracking users inside buildings. The concept of RADAR is to measure signal strength information at multiple stations positioned to provide overlapping fields of coverage. It aggregates real measurements with signal propagation modeling to determine object location, thereby enabling location-aware applications. The accuracy of the RADAR system was reported to be 2–3 m.
Recently, visual image location systems have been preferred because they are not easily disturbed by other sensors [2,8,9]. Sungho [10] used workspace landmark features as external reference sources. However, this method has insufficient accuracy and is difficult to install and maintain due to the required additional equipment. Kim et al. [11] used augmented reality techniques to achieve an average location recognition success rate of 89%, though the extra cost must be considered. Cheok et al. [12] developed a method of localization and navigation in wide indoor areas using a vision sensor. Although the set-up cost is lower, this method is not easy to implement if users do not have knowledge of basic electronic circuit analysis. Recently, two-camera localization systems have been proposed [13,14]. The concept of these systems is that object distances can be calculated by a triangular relationship from the two different camera images. However, to ensure measurement reliability, the relative coordinates between the two cameras must be maintained at the same position. In addition, the set-up cost of the experimental environment is quite expensive due to the use of two cameras.
Nowadays, surveillance systems exist in most modern buildings, and cheap cameras are usually installed around these buildings. Indoor surveillance cameras are typically installed without blind areas, and visual data are transferred to a central data server for processing and analysis. If a mobile robot can determine its position using indoor cameras, it would not require an additional sensor for localization and could be applied to multi-agent mobile robot systems [15,16]. Kuscu et al. [17] and Li et al. [18] proposed vision-based localization methods using a single ceiling-mounted surveillance camera. However, there are several problems that must be addressed prior to the realization of this concept. First, lens distortion arises from the poor-quality lenses in surveillance cameras, and shadow effects are produced by indoor light sources [19,20]. Second, information about occluding objects cannot be obtained using a single camera.
Third, calibration of the cameras and of the map used for localization is carried out independently, which is very time-consuming.
Herein, we propose a localization method for a mobile robot that overcomes the abovementioned problems associated with indoor environments. A two-dimensional map containing robot and object position information is constructed using several neighboring surveillance cameras [21]. The concept of this technique is based on modeling the four edges of the projected image of the camera's field of coverage and an image processing algorithm for finding the object's center, which enhances the location estimation of objects of interest. This approach relies on coordinate mapping techniques to identify the robot in the environment using multiple ceiling-mounted cameras. It can be applied for localization in complex indoor environments such as T- and L-shaped environments. In addition, the cameras and the two-dimensional map can be calibrated simultaneously. Via the above modeling process, a 2D map is built in the form of an air view, and fairly accurate locations can be dynamically acquired from a scaled grid of the map. Significant advantages of the proposed localization are its minimal cost, simple calibration, and little occlusion; it requires only multiple inexpensive ceiling-mounted cameras that are installed in opposition to each other and wirelessly communicate with the mobile robot to update its current estimated position. Moreover, we experimentally demonstrate the effectiveness of the proposed method by analyzing the resulting robot movements in response to the location information acquired from the generated map.


2. Two-Dimensional Visual Map by Using Homography

2.1. Projected Image Plane for the Two-Dimensional Map

Indoor surveillance cameras are typically installed to view the same object from opposite directions. Such images contain ground-area information that may be occluded by objects, as shown in Figure 1. Therefore, two object images viewed from opposite directions must be combined into a single image. We have attempted to accomplish this using homography.
Homography is a projection wherein a plane is transformed into another plane in space. A surveillance camera is mounted on a slant to obtain an image, as shown in Figure 2a. To observe the position and size of an object viewed from the camera, the image in Figure 2a is transformed into the air-view image of Figure 2b using homography.

Figure 1. Images viewed from two neighboring indoor surveillance cameras.

Figure 2. (a) Original image; (b) Air view image of (a).

To transform an original image from a surveillance camera into an air-view image, a feature point Q of the original image is matched with the corresponding point q of the air-view image, as shown in Figure 3. We used a large chessboard placard to match feature points between the original and air-view images. Using Equation (1), a homography matrix H is obtained from four points on both plane Q and plane q:

q = HQ    (1)

The resulting homographically transformed positions of the same feature points of the two images from the two surveillance cameras are combined in a new projected plane, as shown in Figure 4-①. This results in a single united plane, as shown in Figure 4-②. We can then construct a two-dimensional map, as shown in Figure 4-③, by extracting the region of interest (ROI) of the actual floor area from the single plane. The homography transformation makes the floor width of the projected image spread evenly, as in the air-view image, and the camera distortion is compensated at the same time.

Figure 3. (a) Q plane of the original image; (b) q plane of the air view image of (a).
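As a concrete illustration of Equation (1), the short sketch below estimates H from four matched placard corners with OpenCV and warps the slanted view into an air-view image. The file names, corner coordinates, and output size are illustrative assumptions rather than values from the paper.

```python
# Hedged sketch of Equation (1): estimate H from four matched placard corners and warp
# the slanted surveillance view into an air-view image. Paths, coordinates, and sizes
# below are placeholders, not the paper's calibration data.
import cv2
import numpy as np

original = cv2.imread("camera1.png")  # slanted surveillance view (as in Figure 2a)

# Four placard corners in the original image (plane Q) ...
Q = np.float32([[412, 205], [598, 210], [645, 371], [380, 365]])
# ... and their desired positions on the air-view map (plane q), in map pixels.
q = np.float32([[0, 0], [200, 0], [200, 200], [0, 200]])

H, _ = cv2.findHomography(Q, q)                      # q = HQ, Equation (1)
air_view = cv2.warpPerspective(original, H, (640, 480))
cv2.imwrite("camera1_air_view.png", air_view)
```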


Figure 4. Extracting the two-dimensional map from two projected images by homography. ① Two images on the projected plane; ② A single united plane; ③ An extracted two-dimensional map.

If multiple cameras are used for localization in more complex indoor environments, the rotational relationships between the image planes of the cameras are considered, as shown in Equation (2):

q = \begin{bmatrix} n_x & o_x & a_x \\ n_y & o_y & a_y \\ n_z & o_z & a_z \end{bmatrix} HQ    (2)

where n = {n_x, n_y, n_z}, o = {o_x, o_y, o_z}, and a = {a_x, a_y, a_z} are the normal, orientation, and approach unit vectors, respectively.
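The following minimal sketch shows one way to read Equation (2) in code: the homography of Equation (1) is composed with a rotation matrix whose columns are the n, o, and a unit vectors of the neighboring camera's projected plane. All numerical values are assumptions for illustration only.

```python
# Hedged sketch of Equation (2): compose the plane-to-plane rotation (columns n, o, a)
# with the homography H before projecting a feature point Q. The vectors, H, and Q are
# illustrative placeholders, not calibrated values.
import numpy as np

H = np.eye(3)                       # homography from Equation (1) (placeholder)
n = np.array([0.0, -1.0, 0.0])      # normal unit vector (assumed)
o = np.array([1.0,  0.0, 0.0])      # orientation unit vector (assumed)
a = np.array([0.0,  0.0, 1.0])      # approach unit vector (assumed)

R = np.column_stack([n, o, a])      # rotation between the projected image planes
Q = np.array([350.0, 220.0, 1.0])   # a feature point in homogeneous coordinates
q = R @ H @ Q                       # Equation (2)
q /= q[2]                           # normalize back to map coordinates
print(q[:2])
```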

2.2. Object Modeling on the Two-Dimensional Map

Here, we present a method to acquire the position and size of an object image on the two-dimensional map. Since two neighboring cameras view an object from opposite directions, their images of the same object differ. However, the floor area occupied by the object is the same in the two images. Therefore, if the rest of the image (except for the floor area) is deleted, we can obtain the actual floor area of the object on the two-dimensional map. Even if an object is viewed from opposite directions by two cameras, image correspondence on the two-dimensional map can be obtained by adopting area features of the object, i.e., the center position and size of the area.
To obtain the object floor area on the two-dimensional map, two projected images are transformed from their original images, as shown in Figure 5. Figure 5a shows the original images from the two cameras, Figure 5b shows binary images of Figure 5a with shadow effects removed, and Figure 5c shows the projected images of Figure 5b obtained by homography. If the coordinates of the two projected images are similar, the actual size and position of an object in contact with the floor, as represented by the image, are nearly the same. If the parts of the images not belonging to the floor area are eliminated, the object image can be expressed in an air view. Equation (3) represents the size and position of an object on the projected plane H(x, y).


Figure 5. Projected images of two surveillance cameras by homography. (a) Original images; (b) Binary images; (c) Projected images.





H(x, y) = \begin{cases} 1 & \text{if } I_1^H(x, y) = 1 \text{ and } I_2^H(x, y) = 1 \\ 0 & \text{otherwise} \end{cases}    (3)

where I_1^H(x, y) and I_2^H(x, y) are the projected images of cameras 1 and 2, respectively.
The common area between cameras 1 and 2, as shown in Figure 5c, can be presented as shown in Figure 6. The area is the object image on the two-dimensional map based on homography.
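A minimal sketch of Equation (3) is given below: the object footprint on the map is simply the pixel-wise AND of the two projected binary images. The array names and the toy footprints are assumptions for illustration.

```python
# Hedged sketch of Equation (3): keep only pixels that are foreground in BOTH projected
# binary images of cameras 1 and 2. The inputs here are synthetic placeholders.
import numpy as np

I1_H = np.zeros((480, 640), dtype=np.uint8)   # projected binary image of camera 1
I2_H = np.zeros((480, 640), dtype=np.uint8)   # projected binary image of camera 2
I1_H[200:300, 250:400] = 1                    # toy object footprint seen by camera 1
I2_H[220:320, 300:420] = 1                    # toy object footprint seen by camera 2

H_map = np.logical_and(I1_H == 1, I2_H == 1).astype(np.uint8)  # Equation (3)
```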

Figure 6. Common area of the projected images of cameras 1 and 2 in Figure 5c.

Now, the size and center position of the object image on the two-dimensional map can be calculated. Generally, labeling or a contour technique is used to detect the area shape of an object in an image. These techniques are suitable for detecting the area shape from an image such as Figure 6, which is a binary image. We apply the contour technique to rapidly determine the size and center position. Then, the moment of the area shape is calculated to obtain its center point and area. The moment is used to measure the size of the area shape. We obtain the size and center position of the object area from the edge information via the abovementioned process.
To compare the object area with its real position, the actual floor image is transformed onto a projected plane and the object area is detected by contour processing, as shown in Figure 7.
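The contour and moment computation described above can be sketched with OpenCV as follows; the synthetic footprint and variable names are assumptions, and the moment m00 and the centroid (m10/m00, m01/m00) stand in for the area size and center position.

```python
# Hedged sketch of the contour/moment step: detect the footprint contour in the binary
# map image (such as Figure 6) and compute its area and center from image moments.
import cv2
import numpy as np

H_map = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(H_map, (320, 250), 40, 255, -1)    # toy object footprint on the 2D map

contours, _ = cv2.findContours(H_map, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)  # the object area shape
m = cv2.moments(largest)
area = m["m00"]                               # size of the object area (pixels)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # center position on the map
print(area, (cx, cy))
```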


Figure 7. Two-dimensional map combined with the object area and the projected floor image.

2.3. Calibration of the Detected Object Area

When the size and center position of the object area obtained from contour processing are compared to the actual values of the object, considerable errors are revealed. We then measure the difference between the position of the real and visually detected object using a measurement grid, as shown in Figure 8.

Figure 8. Grid placard for measuring the position error of the visually detected object area: (a) camera 1; (b) camera 2.

In a calibration experiment, we used a cylindrical object (diameter, 20 cm). Figure 9 shows the position errors between the real (black line) and visually detected (red line) center position of the object.

Figure 9. First experimental results of the position errors between the real and the visually detected center position of the object.

There is a considerable difference between the real and visually detected center positions of the object, as shown in Figure 9. We consider that the error is caused by scale changes in the image projection and by the installation error of the measurement grid. After the initial measurement to detect the object area, error compensation was performed using homography. When the visually detected grids were mapped to the floor image in an air view, we obtained the position with an error bound of 7.1 cm on the two-dimensional map. Figure 10 shows a representation of the error-compensated results of the object area, as detected by homography.
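One way to realize this compensation step, consistent with the description above, is to fit a corrective homography from the visually detected grid intersections to their known floor positions and then apply it to every measured center. The point arrays below are placeholders, not the paper's grid data.

```python
# Hedged sketch of the error compensation by homography: map detected grid points onto
# their true positions and reuse that mapping for measured object centers. All
# coordinates are illustrative assumptions.
import cv2
import numpy as np

detected_grid = np.float32([[102, 98], [201, 97], [104, 199], [203, 201]])   # measured (assumed)
true_grid     = np.float32([[100, 100], [200, 100], [100, 200], [200, 200]]) # known grid layout

H_corr, _ = cv2.findHomography(detected_grid, true_grid)  # corrective homography

measured_centers = np.float32([[150, 150], [180, 120]]).reshape(-1, 1, 2)
compensated = cv2.perspectiveTransform(measured_centers, H_corr).reshape(-1, 2)
print(compensated)
```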


Figure 10. Error compensation results for the object area detected by homography.

3. Topology Map Building and Optimal Path

Path planning is required for a robot to move safely without colliding with any object placed on the two-dimensional map (Section 2). Herein, we employ the thinning algorithm [22]. The thinning algorithm repeatedly eliminates the contour of a region until only a single-pixel-wide line remains at its center. Thus, a path by which a robot can avoid obstacles and move safely is generated by the thinning algorithm.
After generating a moving path with the thinning algorithm, as shown in Figures 11 and 12, a movement indicator is required for the robot. Thus, a topological map is generated to create the path indicator. However, the algorithm can generate paths that are difficult for a robot to move along; thus, information about such paths must be eliminated by checking the area around the nodes. To make the robot move through nodes, an area around each node that is larger than the robot should be examined, and nodes and edges that cannot be traversed by the robot should be eliminated. Note that a robot cannot pass if there is an object in the search field around a node or if the search field extends beyond the boundary of the image around a node.

Figure 11. Example image of objects on a two-dimensional map before applying the thinning algorithm.
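As a sketch of the thinning step, the free-space region of the two-dimensional map can be reduced to a one-pixel-wide skeleton; here scikit-image's skeletonize is used as a stand-in for the thinning algorithm of [22], and the map content is synthetic.

```python
# Hedged sketch of topology-map thinning: reduce the free floor area of the 2D map to a
# one-pixel-wide skeleton of candidate driving paths. skeletonize() is a stand-in for
# the thinning algorithm of [22]; the input map is a synthetic placeholder.
import numpy as np
from skimage.morphology import skeletonize

free_space = np.ones((240, 320), dtype=bool)   # True = free floor, False = object area
free_space[80:160, 120:200] = False            # toy obstacle footprint from the 2D map

skeleton = skeletonize(free_space)             # one-pixel-wide candidate paths (cf. Figure 12)
```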


Figure 12. Topology map after applying the thinning algorithm.


Figure 13 shows an image formed after removing the searched paths by which a robot cannot pass. A path by which the robot can pass was generated on the topology map obtained by applying the thinning algorithm.

Figure 13. Modified driving path after path searching.
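The node check described above can be sketched as follows: a skeleton node is kept only if a robot-sized window around it lies fully inside the map and contains no object pixels. The window size and arrays are assumptions for illustration.

```python
# Hedged sketch of the node feasibility check: reject skeleton nodes whose robot-sized
# search field leaves the image or touches an object. Sizes and maps are placeholders.
import numpy as np

def node_is_passable(node, free_space, half_width):
    r, c = node
    h, w = free_space.shape
    # The search field must not extend beyond the image boundary around the node.
    if r - half_width < 0 or c - half_width < 0 or r + half_width >= h or c + half_width >= w:
        return False
    window = free_space[r - half_width:r + half_width + 1,
                        c - half_width:c + half_width + 1]
    return bool(window.all())                  # no object inside the robot-sized field

free_space = np.ones((240, 320), dtype=bool)
free_space[80:160, 120:200] = False            # toy obstacle
print(node_is_passable((120, 60), free_space, half_width=12))   # True: node is kept
```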

The A* algorithm is a graph exploring algorithm that calculates an optimal driving path with a given starting point and goal [23]. It uses a heuristic estimate at each node to estimate the shortest route to the target node with minimal calculation.
Figure 14 shows the shortest robot path estimated by the A* algorithm. The robot moves along the nodes on the estimated path.

Figure 14. Shortest robot path by the A* algorithm.
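A minimal A* sketch over a set of passable topology nodes is shown below, in the spirit of [23]; the 4-connected grid of nodes, the Euclidean heuristic, and the start/goal positions are illustrative assumptions.

```python
# Hedged sketch of A* path planning over passable topology nodes. The node set,
# 4-connectivity, heuristic, and start/goal below are placeholders for illustration.
import heapq
import math

def a_star(passable, start, goal):
    h = lambda p: math.hypot(p[0] - goal[0], p[1] - goal[1])  # heuristic estimate to goal
    open_set = [(h(start), start)]
    came_from = {start: None}
    g_score = {start: 0.0}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:                                      # reconstruct the shortest path
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            tentative = g_score[node] + 1.0
            if nb in passable and tentative < g_score.get(nb, float("inf")):
                g_score[nb] = tentative
                came_from[nb] = node
                heapq.heappush(open_set, (tentative + h(nb), nb))
    return None                                               # no path between the nodes

# Toy node set: a 10 x 10 grid of passable nodes with a partial wall at column 5.
nodes = {(r, c) for r in range(10) for c in range(10) if not (c == 5 and r <= 7)}
print(a_star(nodes, (0, 0), (9, 9)))
```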


4. Experimental Results

We performed a series of experiments to demonstrate the effectiveness of the proposed two-dimensional-map-based localization method using indoor surveillance cameras. The width and length of the floor viewed by the two neighboring cameras were 2.2 and 6 m, respectively. The two-dimensional map detected by homography represents the area of the floor viewed by the two cameras in an air view. We used a self-developed mobile robot with an omnidirectional wheel in the experiment. The surveillance cameras had a resolution of 320 × 240 pixels and three RGB (Red Green Blue) channels.
The accuracy of the two-dimensional map obtained with the proposed method was measured experimentally. Each position error in Figure 15 has two components, along the x- and y-axes of the plane. To represent the two error components as a single parameter at each position, we suggest the position error estimate shown in Figure 16. Here, x_real and y_real denote the real position, and x_measur and y_measur represent the visually detected object position on the two-dimensional map. x_e and y_e are obtained using Equation (4).

Figure 15. Position error estimate.

x_e = x_{real} - x_{measur},  y_e = y_{real} - y_{measur}    (4)

The position error v_error is composed of x_e and y_e from Equation (4) and is expressed by Equation (5). The magnitude of the position error estimate v_error is obtained using Equation (6).

v_{error}(x, y) = [x_e, y_e]    (5)

|v_{error}(x, y)| = \sqrt{x_e^2 + y_e^2}    (6)

We define the position error estimate E(x, y) as the absolute value of v_error using Equation (7):

E(x, y) = |v_{error}(x, y)|    (7)

Here, E(x, y) is the error plane of the difference between the real and visually detected object positions. To examine the effectiveness of the proposed error compensation by homography (Section 2), we compared E(x, y) before and after error compensation. The position error estimate before error compensation is shown in Figure 16. The x–y plane of Figure 16 is the x–y plane of the two-dimensional map, and the z-axis is the value of E(x, y). The maximum and average values of E(x, y) are 11.5 and 6.7 cm, respectively. After error compensation by homography, the maximum and average values of E(x, y) are 7.1 and 2.6 cm, respectively, as shown in Figure 17. The maximum position error was decreased by 38% through the proposed error compensation method. The accuracy of the two-dimensional map obtained by the two ceiling surveillance cameras is 7.1 cm, which is sufficient for mobile robot localization.
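Equations (4)–(7) amount to the Euclidean distance between the real and measured positions; a tiny numpy sketch is given below with illustrative sample positions rather than measured data.

```python
# Hedged sketch of Equations (4)-(7): error components, error vector, and the scalar
# error estimate E(x, y). The sample positions are placeholders, not measured data.
import numpy as np

real     = np.array([[100.0, 100.0], [200.0, 100.0], [100.0, 200.0]])  # (x_real, y_real)
measured = np.array([[104.0,  97.0], [195.0, 103.0], [102.0, 206.0]])  # (x_measur, y_measur)

v_error = real - measured                 # Equations (4) and (5): [x_e, y_e] per position
E = np.linalg.norm(v_error, axis=1)       # Equations (6) and (7): E(x, y) = |v_error(x, y)|
print(E.max(), E.mean())                  # maximum and average position error estimates
```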



Figure 16. Position error estimate before error compensation.


Figure 17. Position error estimate after error compensation.

Figure 18 shows two images from the two neighboring surveillance cameras. There are several objects on the floor. A mobile robot was controlled to move from one position to the opposite position using the proposed localization based on the two-dimensional map described in Section 2.2. We used the A* algorithm as the path planning method for the mobile robot. The objects on the floor were detected by homography as object areas in the projected plane, and the robot's moving path was planned considering the object areas in the two-dimensional map. The experimental results of the robot's path control are shown in Figure 19. The error bound between the planned and actual movement paths of the robot was ±5 cm. This means that the proposed localization method may be effective for indoor mobile robots.

Figure 18. Objects in the experimental environment.


Figure 19. Experimental results of the robot's path control using the proposed localization method.

To show that the proposed method can be applied to complex indoor environments, an experiment was carried out in a T-shaped indoor environment. As shown in Figure 20a, three surveillance cameras were used to build a two-dimensional map. Figure 20b–d show the images from the three cameras.


Figure 20. (a) Map of our experimental environment; (b) Image of camera 1; (c) Image of camera 2; (d) Image of camera 3.

Figure 21 shows the two-dimensional map obtained using the proposed method. The accuracy of the two-dimensional map by three ceiling surveillance cameras is 7 cm, which is satisfactory for mobile robot localization.


Figure 21. Experimental results of the two-dimensional map using the proposed localization method.

5. Conclusions

We have proposed a new vision-based approach for mobile-robot localization in an indoor environment using multiple remote ceiling-mounted cameras. The proposed approach uses a two-dimensional mapping technique between camera and ground-image plane coordinate systems. We used homography to transform the image planes. Two camera-image planes were combined into a single ground-image plane with an air view, which resulted in a two-dimensional map. The position error bound of the developed two-dimensional map was within 7.1 cm. We performed a series of experiments to demonstrate the effectiveness of the proposed two-dimensional-map-based localization method. Among several obstacles fixed on the floor, the mobile robot successfully maneuvered to its destination position using only the two-dimensional map, without the help of any other sensor. In future, we plan to extend the proposed method to the localization of multiple mobile robots in an indoor environment.

Acknowledgments: This work was supported by the National Research Fund of Korea under Grant NRF 20151D1A1A01061271.

Author Contributions: Young Im Cho proposed the research topic. Jae Hong Shim developed the algorithms and performed the experiments; Jae Hong Shim and Young Im Cho analyzed the experimental results and wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Hopper, A.; Harter, A.; Blackie, T. The active badge system. In Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems, Amsterdam, The Netherlands, 24–29 April 1993; pp. 533–534.
2. Ni, L.M.; Lau, Y.C.; Patil, A.P. Indoor location sensing using active RFID. Wirel. Netw. 2004, 10, 701–710. [CrossRef]
3. Subramanian, V.; Burks, T.F.; Arroyo, A.A. Development of machine vision and laser radar based autonomous vehicle guidance systems for citrus grove navigation. Comput. Electron. Agric. 2006, 53, 130–143. [CrossRef]
4. Barawid, O.C.; Mizushime, H.; Ishii, K. Development of an autonomous navigation system using a two-dimensional laser scanner in an orchard application. Biosyst. Eng. 2007, 96, 139–149. [CrossRef]
5. Guerrieri, J.R.; Francis, M.H.; Wilson, P.F.; Kos, T.; Miller, L.E.; Bryner, N.P.; Stroup, D.W.; Klein-Berndt, L. RFID-assisted indoor localization and communication for first responders. In Proceedings of the First European Conference on Antennas and Propagation, Nice, France, 6–10 November 2006; pp. 1–6.
6. Bahl, P.; Padmanabhan, V. RADAR: An in-building RF based user location and tracking system. In Proceedings of the Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies, Tel Aviv, Israel, 26–30 March 2000.
7. Lin, T.H.; Chu, H.H.; You, C.W. Energy-efficient boundary detection for RF-based localization system. IEEE Trans. Mob. Comput. 2009, 8, 29–40.
8. Patridge, K.; Arnstein, L.; Borriello, G.; Whitted, T. Fast intrabody signalling. In Proceedings of the Demonstration at Wireless and Mobile Computer Systems and Applications, Monterey, CA, USA, 7–8 December 2000.
9. Hills, A. Wireless Andrew. IEEE Spectr. 1999, 36, 49–53. [CrossRef]
10. Sungho, H. Ultrasonic interference reduction technique in indoor location sensing system. J. Korea Acad. Ind. Cooper. Soc. 2012, 13, 364–369.
11. Kim, J.B.; Jun, H.S. Vision-based location positioning using augmented reality for indoor navigation. IEEE Trans. Consum. Electron. 2008, 54, 954–962. [CrossRef]
12. Cheok, A.D.; Yue, L. A novel light-sensor-based information transmission system for indoor positioning and navigation. IEEE Trans. Instrum. Meas. 2011, 60, 290–299. [CrossRef]
13. Osugi, K.; Miyauchi, K.; Furui, N.; Miyakoshi, H. Development of the scanning laser radar for ACC system. JSAE Rev. 1999, 20, 549–554. [CrossRef]
14. Nakahira, K.; Kodama, T.; Morita, S.; Okuma, S. Distance measurement by an ultrasonic system based on a digital polarity correlator. IEEE Trans. Instrum. Meas. 2001, 50, 1748–1752. [CrossRef]
15. Tsai, C.Y.; Song, K.T. Dynamic visual tracking control of a mobile robot with image noise and occlusion robustness. Image Vis. Comput. 2007, 27, 1007–1022. [CrossRef]
16. Purarjay, C.; Ray, J. External cameras & a mobile robot: A collaborative surveillance system. In Proceedings of the Australasian Conference on Robotics and Automation, Sydney, Australia, 2–4 December 2009; pp. 1–10.
17. Kuscu, E.; Rababaah, A.R. Mobile robot localization via efficient calibration technique of a fixed remote camera. J. Sci. Inform. 2012, 2, 23–32.
18. Li, I.H.; Chen, M.H.C.; Wang, W.Y.; Su, S.F.; Lai, T.W. Mobile robot self-localization system using single webcam distance measurement technology in indoor environment. Sensors 2014, 14, 2089–2109. [CrossRef] [PubMed]
19. Philipp, A.; Henrik, C. Behaviour coordination for navigation in office environment. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 30 September–4 October 2002; pp. 2298–2304.
20. Huuiying, C.; Kohsei, M.; Jun, O. Self-calibration of environmental camera for mobile robot navigation. J. Robot. Auton. Syst. 2007, 55, 177–190.
21. Jaehong, S.; Youngim, C. A mobile robot localization using external surveillance cameras at indoor. In Proceedings of the International Workshop on Communication for Human, Agents, Robots, Machines and Sensors 2015 (CHARMS 2015), Belfort, France, 17–20 August 2015.
22. Gonzalez, R.; Woods, R. Digital Image Processing; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2007; pp. 671–675.
23. Hart, P.; Nilsson, N.; Raphael, B. A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107. [CrossRef]

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).