
Depth Edge Filtering Using Parameterized Structured Light Imaging

Ziqi Zheng, Seho Bae and Juneho Yi *

College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, Korea; [email protected] (Z.Z.); [email protected] (S.B.)
* Correspondence: [email protected]; Tel.: +82-031-290-7142

Academic Editor: Vittorio M.N. Passaro
Received: 8 March 2017; Accepted: 30 March 2017; Published: 3 April 2017
Sensors 2017, 17, 758; doi:10.3390/s17040758

Abstract: This research features parameterized depth edge detection using structured light imaging that exploits a single color stripes pattern and an associated binary stripes pattern. By parameterized depth edge detection, we refer to the detection of all depth edges in a given range of distances with depth difference greater than or equal to a specific value. While previous research has not properly dealt with shadow regions, which result in double edges, we effectively remove shadow regions using statistical learning based on effective identification of color stripes in the structured light images. We also provide a much simpler control of the involved parameters. We have compared the depth edge filtering performance of our method with that of the state-of-the-art method and with depth edge detection from the Kinect depth map. Experimental results clearly show that our method finds the desired depth edges most correctly while the other methods cannot.

Keywords: depth edge filter; depth detection; structured light

1. Introduction

The goal of this work, as illustrated in Figure 1, is to accurately find all depth edges that have the minimum depth difference, r_min, in the specified range of distances, [a_min, a_max], from the camera/projector. We call this "depth edge filtering". We propose a structured light based framework that employs a single color pattern of vertical stripes with an associated binary pattern. Owing to the accurate control of the parameters involved, the proposed method can be employed for applications where detection of object shape is essential, for example, when a robot manipulator needs to grasp an unknown object by identifying inner parts with a certain depth difference.

There has been a considerable amount of research on structured light imaging in the literature [1-5]. Here, we only mention some recent works. Barone et al. [1] presented a coded structured light technique for surface reconstruction using a small set of stripe patterns. They coded stripes in a De Bruijn sequence and decomposed the color stripes into binary stripes so that they could take advantage of a monochromatic camera. Ramirez et al. [2] provided a method to extract correspondences of static objects through structured light projection based on a De Bruijn sequence. To improve the quality of the depth map, Shen et al. [3] presented a scenario for depth completion and denoising. Most works have aimed at surface reconstruction, and there have been only a few works for the purpose of depth edge filtering. One notable technique was presented to create a depth edge map for non-photorealistic rendering [6]. They capture a sequence of images in which different light sources illuminate the scene from various positions. Then, they use the shadows in each image to assemble a depth edge map. However, this technique does not allow control of parameters such as the range of distances from the camera/projector and the depth difference.


A while ago, in [7,8], similar control of parameterized structured light imaging was presented. They employed structured light with a pattern comprising black and white horizontal stripes of equal width, and detected depth edges with depth difference r ≥ r_min in a specified range of distances. Since the exact amount of pattern offset along depth discontinuities in the captured image can be related to the depth value from the camera, they detected depth edges by finding detectable pattern offset through thresholding of the Gabor amplitude. They automatically computed the width of stripe by relating it with the amount of pattern offset.

A major drawback of the previous methods is that they did not address the issue of shadow regions. Regions that the projector light cannot reach create shadow regions and result in double edges. Figure 1d shows the result of the method in [8] for the given parameters, where double edges and missing edges appear in the shadow regions. Furthermore, due to the use of simple black and white stripes, the exact amount of pattern offset may not be measurable depending on the object location from the camera. This deficiency requires additionally employing several structured lights with width of stripe doubled, tripled, etc.

Input parameters for Figure 1: r_min = 5 cm, [a_min, a_max] = [130 cm, 190 cm].


Figure 1. Depth edges, which have the minimum depth difference of 5 cm in the specified range of distances [130 cm, 190 cm] from the projector/camera, are detected in (a). The camera's view is shown in (b). Depth edge filtering results are presented for (c) ground truth; (d) method in [8]; (e) Kinect; and (f) our method.

In this work, we present an accurate control of depth edge filtering by overcoming the disadvantages of the previous works [7,8]. We provide an overview of our method in Figure 2. We opt to use a single color pattern of vertical stripes with an associated binary pattern as shown in Section 3. The use of the binary pattern helps with recovering the original color of the color stripes accurately. We give the details in Section 3. Given the input parameters, [a_min, a_max] and r_min, the stripe width, w, is automatically computed to create the structured light patterns necessary to detect depth edges having depth difference greater than or equal to r_min. We capture structured light images by projecting the structured light patterns on the scene. We first recover the original color of the color stripes in the structured light images in order not to be affected by the textures on the scene. Then, for each region of homogeneous color, we use a Support Vector Machine (SVM) classifier to decide whether a given region is from shadow or not. After that, we obtain color stripes pattern images by filling in shadow regions using the color stripes that would otherwise have been projected there. We finally apply Gabor filtering to the pattern images to produce the depth edges with depth difference greater than or equal to r_min.


We have compared the depth edge filtering performance of our method with that of [8] and the Kinect sensor. Experimental results clearly show that our method finds the desired depth edges most correctly while the other methods cannot. The main contribution of our work lies in an accurate control of depth edge filtering using a novel method of effective identification of color stripes and shadow removal in the structured light image.

Figure 2. An overview of our method.

2. Parameterized Structured Light Imaging

By parameterized structured light imaging, we refer to structured light imaging technologies that can control the associated parameters. To the best of our knowledge, Park et al.'s work [7] was the first of its kind. In our case, the controlled parameters are the minimum depth difference, r_min, a target range of distances, [a_min, a_max], and the width of stripe, w. The basic idea in [7] to detect depth edges is to exploit pattern offset along depth discontinuities. To detect depth discontinuities, they consecutively project a white light and a structured light onto the scene and extract a binary pattern image by differencing the white light and structured light images. This differencing effectively removes texture edges. After removal of texture edges, they basically detected the locations where pattern offset occurs to produce depth edges. In contrast, we achieve the effect of texture edge removal by recovering the original color stripes in the color structured image. Details will be given in the next section.

The control of parameters developed in [7] can be seen in Figure 3a, where a_max and r_min are given as the input parameters; then, the width, w, and a_min are determined. However, it was awkward that a_min is found at a later step from the other parameters. A substantial improvement over this method was made in [8] so that [a_min, a_max] and r_min are given as the input parameters. Given the input parameters, the method provides the width of stripes, w, and the number of structured light images, n, as shown in Figure 3b. They also showed its application to the detection of silhouette information for visual hull reconstruction [9]. In our work, we achieve much simpler control of the key parameters by employing a color pattern, as can be seen in Figure 3c. While the methods in [7,8] need several structured light images, we use a single color pattern and an associated binary pattern.



Figure 3. Control of key parameters: (a) method in [7]; (b) method in [8]; (c) our method.


To better describe our method, let us revisit the key Equation (1) for the modelled imaging geometry of a camera, projector and object in [7,8]. This equation can easily be derived from the geometry in Figure 4 using similar triangles:

Δ_exact = f·d·(1/a − 1/b) = f·d·r / (a·(a + r)).  (1)

Here, a, b and f are the distances of object locations A and B from the projector/camera and of the virtual image plane from the camera, respectively. Δ_exact denotes the exact amount of pattern offset when the depth difference of object locations A and B is r.
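As a quick numerical illustration of Equation (1), the following minimal Python sketch evaluates the exact pattern offset; the focal length f, baseline d and distances used below are placeholder values for illustration only, not the calibration of the system described here.

```python
def exact_pattern_offset(f, d, a, r):
    """Equation (1): exact pattern offset between object points at distances
    a and b = a + r from the projector/camera, for focal length f (pixels)
    and camera-projector baseline d (same length unit as a and r)."""
    b = a + r
    return f * d * (1.0 / a - 1.0 / b)   # equals f*d*r / (a*(a + r))

# Placeholder values (not from the paper): f = 1000 px, d = 20 cm.
print(exact_pattern_offset(f=1000, d=20, a=150, r=5))   # offset for a 5 cm depth step at 150 cm
```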


Figure 4. Imaging geometry and pattern offset: (a) side view of the imaging geometry; (b) Δ_exact vs. Δ_visible.

Since, in [7,8], they used simple black and white stripes with equal width, Δ_exact may not be measurable depending on the object location from the camera. The observable amount of pattern offset, Δ_visible, is periodic as the distance of the object location from the camera is increased or decreased. With r and d fixed, the relation between Δ_exact and a shows that there are ranges of distances where detection of depth edges is difficult due to the lack of visible offset even though Δ_exact is significant. Refer to Figure 5. They set the minimum amount of pattern offset needed to reliably detect depth edges to 2w/3. In order to extend the detectable range, additional structured lights with width of stripe 2w, 4w, etc. are employed to fill the gap of Δ_exact in Figure 5, and the corresponding range, a, of object locations is extended. In contrast, because we use a color stripes pattern, Δ_exact is equivalent to Δ_visible. Thus, there is no need to employ several pattern images.

Figure 5. Pattern offset Δ_exact vs. detectable range [a_min, a_max].

3. Use of Color Stripes Pattern

We opt to use color stripe patterns by which we can extend the range of distances by filling in the gap of Δ_exact in Figure 5. We consider a discrete spatial multiplexing method a proper choice [10] because it shows negligible errors and only a simple matching algorithm is needed. We employ four colors: red, cyan, yellow and white. We also make use of two versions of each color: bright and dark. That is, their RGB (Red, Green and Blue) values are [L,0,0], [0,L,L], [L,L,0], and [L,L,L], L = 255 or 128, where L denotes the lightness intensity. To create a color pattern, we exploit De Bruijn sequences [11] of length 3; that is, any sequence of three color stripes is unique in a neighborhood. This property helps identify each stripe in the image captured by the camera.

Additionally, we use associated binary stripes whose RGB values can be represented as [L,L,L], L = 255 or 128. That is, we also make use of bright (L = 255) and dark (L = 128) versions for the binary stripes. We have designed the stripe patterns so that, in both the color stripes and the binary stripes, bright and dark stripes appear alternately. The color stripes are associated with the binary stripes so that bright stripes in the color pattern correspond to dark stripes in the binary pattern. Refer to Figure 6. This setting greatly facilitates recovering the original color of the color stripes in the color structured light image by referencing the lightness of the binary stripes in the binary structured light image.
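To make the stripe coding concrete, the sketch below generates a De Bruijn sequence of order 3 and maps it to stripe colors. It is a simplified illustration under our own assumptions: only the four bright colors are used and the bright/dark alternation described above is not enforced, so the actual pattern used in the paper may differ.

```python
def de_bruijn(k, n):
    """Standard construction of a De Bruijn sequence over the alphabet {0..k-1}
    in which every length-n window occurs exactly once (cyclically)."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

# Hypothetical mapping of symbols to the four stripe colors (bright versions only).
colors = {0: "red", 1: "cyan", 2: "yellow", 3: "white"}
stripes = [colors[s] for s in de_bruijn(k=4, n=3)]
print(len(stripes), stripes[:9])   # 64 stripes; any 3 consecutive stripes form a unique triple
```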


Figure 6. Color stripes pattern with an associated binary pattern: (a) color pattern; (b) binary pattern.

The most attractive advantage of employing a color stripes pattern of De Bruijn sequence is that Δ_exact is the same as the amount of visible pattern offset, Δ_visible. We can therefore safely set the minimum amount of pattern offset necessary for detecting depth edges to w/2, whereas 2w/3 was used in [7,8]. Thus, the stripe width, w, is computed using Equation (2) [8]:

w = 2·f·d·r_min / ((a_max + r_min)·(a_max + 2·r_min)).  (2)
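Equation (2) translates directly into code; f and d are again placeholders for the calibrated focal length and camera-projector baseline.

```python
def stripe_width(f, d, r_min, a_max):
    """Equation (2): stripe width w chosen so that the visible offset stays at
    least w/2 for depth differences r >= r_min up to the distance a_max."""
    return 2.0 * f * d * r_min / ((a_max + r_min) * (a_max + 2.0 * r_min))

# Placeholder values (not from the paper): f = 1000 px, d = 20 cm, r_min = 5 cm, a_max = 190 cm.
print(stripe_width(1000, 20, 5, 190))   # stripe width in pixels
```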


4. Recovery of the Original Color of Stripes

The problem of recovering the original color of color stripes in the structured light image is to determine the lightness L in each color channel. We exploit the associated binary image as a reference to avoid decision errors. Figure 7 shows the procedure of recovering the original color of color stripes. The procedure consists of two steps. For every pixel in the color structured image, we first decide whether it comes from a bright (L = 255) color stripe or a dark (L = 128) color stripe. Then, we recover the value of L in each color channel.

Let us denote a pixel in the color structured light image and its corresponding pixel in the binary structured light image by C and B, respectively. Ci and Bi, i = r, g, b, represent their RGB values. Since bright stripes in the color pattern correspond to dark stripes in the binary pattern, it is very likely that a pixel from a bright color stripe appears brighter than its corresponding pixel from the binary stripe when they are projected onto the scene. Thus, in most cases, we can make a correct decision simply by comparing the max value of Ci with the max value of Bi, i = r, g, b. However, since the RGB values of stripes in the captured images are affected by the object surface color, we may have decision errors, especially for pixels on object surfaces that have high values in one channel. For example, when the object surface color is pure blue [0,0,255] and the color stripe is bright red [255,0,0], the RGB values of a pixel on the object surface in the color and binary structured images can appear as [200,5,200] and [100,100,205], respectively. In this case, comparison of the max channel value alone gives a wrong answer. Hence, we employ an additional criterion that compares the average value of all three channels. Through numerous experiments, we have confirmed that this simple scheme achieves correct pixel classification into bright or dark ones.

Figure 7. Flow of recovering the original color of color stripes.
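A minimal per-pixel sketch of the bright/dark decision of the first step. The paper states that the max-channel comparison and the average-value comparison are used together; the exact way the two criteria are combined below is our assumption.

```python
import numpy as np

def is_bright_stripe_pixel(C, B):
    """Decide whether a pixel lies on a bright color stripe.
    C, B: RGB triples of the pixel in the color and binary structured light
    images. Bright color stripes are paired with dark binary stripes, so a
    bright-stripe pixel normally appears brighter than its binary counterpart."""
    C = np.asarray(C, dtype=float)
    B = np.asarray(B, dtype=float)
    max_vote = C.max() > B.max()       # comparison of the max channel values
    mean_vote = C.mean() > B.mean()    # comparison of the average of all three channels
    return max_vote or mean_vote       # combination rule assumed for illustration

print(is_bright_stripe_pixel([240, 10, 10], [120, 120, 130]))   # typical bright red stripe pixel -> True
```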

Next, we decide the value of each channel, L. Luminance and ambient reflected light can vary in different situations. We take an adaptive thresholding scheme to make the decision. In the case of a pixel in bright color stripes, Ci ∈ {0, 255} and Bi = 128. We decide that if Ci − Bi > thr_B, then Ci = 255; otherwise, Ci = 0. thr_B is determined as in Equation (3), and s is computed from training samples:

thr_B = s · max_{i=r,g,b} (Ci − Bi).  (3)

In the case of a pixel in dark color stripes, Ci ∈ {0, 128} and Bi = 255. We decide that if Bi − 2Ci < thr_D, then Ci = 128; otherwise, Ci = 0. thr_D is computed as in Equation (4), and b and t are estimated from training samples:

thr_D = −2b + t · min_{i=r,g,b} (Bi − 2Ci).  (4)

We set a bias b to ensure that most of the time thr_D is positive. This is necessary to deal with any positive {Bi − 2Ci} close to min{Bi − 2Ci} when min{Bi − 2Ci} is negative.

The relationship between the original color and the captured color is nonlinear. We seek to use a simple statistical method to determine the parameters s, b and t. We collect a series of images. Each set is comprised of three images, Mb, Mg and Mw, that are captured by projecting black [0,0,0], gray [128,128,128] and white [255,255,255] lights, respectively. Mb, Mg and Mw can be viewed as three image matrices that experimentally simulate the observed black, gray and white color. Note that we took every image in the same ambient environment. Usually, the more training samples we collect, the more representative the parameters we can get. However, hundreds of samples are sufficient for our estimation in practice. We use multifarious objects in different shapes and with various textures to build scenes. We estimate s as follows:

s = (1/N) Σ_i min[ minimum(Mwi − Mgi) / maximum(Mwi − Mgi) ].  (5)

N is the number of sets, minimum(·) and maximum(·) are element-wise functions, and i denotes the ith set. In bright stripes, we already know that Ci − Bi should be 127 when the channel value is assigned 255 in the patterns. Equation (5) is a sampling process about the relationship between the maximum and minimum of Ci − Bi when projected on the scene. thr_B gives the smallest Ci − Bi. If Ci − Bi is greater than thr_B in any channel, we take its value to be 255.

We initially model thr_D as t · min_{i=r,g,b} (Bi − 2Ci) as in Equation (6). This model shares the idea behind Equation (3). It makes t·min{Bi − 2Ci} the smallest value that min{Bi − 2Ci} could be. However, in dark stripes, Bi − 2Ci is close to zero. Simply scaling does not affect its sign, which might lead to an inappropriate decision. In order to alleviate external interference, we slightly adjust the threshold model above according to min{Bi − 2Ci}. We increase the threshold when min{Bi − 2Ci} is rather large or decrease it otherwise. Since Mb, Mg and Mw are the observed lightness values of L = 0, 128 and 255, respectively, M′gi = (Mwi − Mbi)/2 is an estimation of Mgi. Thus, we adjust the threshold value based on the difference between Mgi and M′gi. Hence, we approximate b as in Equation (7). The threshold for dark stripes is altered slightly in terms of the sign of min{Bi − 2Ci}. s = 0.65, t = 0.47 and b = 50 were used in our experiments:

t = (1/N) Σ_i mean[ minimum(Mwi − 2Mgi) / (Mwi − 2Mgi) ],  (6)

b = (1/N) Σ_i max[ Mgi − M′gi ].  (7)
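Putting Equations (3) and (4) together, the per-channel recovery can be sketched as follows, using the reported values s = 0.65, t = 0.47 and b = 50; the example pixel values are illustrative.

```python
import numpy as np

S, T, BIAS = 0.65, 0.47, 50.0   # parameter values reported above

def recover_channels(C, B, bright):
    """Recover the pattern lightness per channel for one pixel.
    C, B: observed RGB of the pixel in the color / binary structured images.
    bright: True if the pixel was classified as lying on a bright color stripe."""
    C = np.asarray(C, dtype=float)
    B = np.asarray(B, dtype=float)
    if bright:                                            # bright stripe: channels are 0 or 255
        thr_b = S * np.max(C - B)                         # Equation (3)
        return np.where(C - B > thr_b, 255, 0)
    thr_d = -2.0 * BIAS + T * np.min(B - 2.0 * C)         # Equation (4), dark stripe: 0 or 128
    return np.where(B - 2.0 * C < thr_d, 128, 0)

print(recover_channels([230, 40, 35], [120, 118, 125], bright=True))   # roughly a bright red stripe
```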

Lastly, we check whether the recovered color is one of the four colors we adopted. If not, we change it to the most probable of the four. We achieve this in two steps: (1) compare the recovered color with the four default colors to see how many channels match; (2) among the colors having the most matching channels, choose the color with the minimum threshold difference over the mismatching channels.

Figure 8 shows an experimental result on the recovery of the original color of color stripes. In Figure 8b, the gray and green areas in the lower part and the noisy areas around the main objects correspond to shadow regions. Because shadow regions are colorless, color assignment there is meaningless. As previously stated, we can ignore texture edges on object surfaces by considering the original color of the color stripes.
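The two-step correction can be sketched as below. The tie-break used here (smallest absolute difference over the mismatching channels) is our reading of "minimum threshold difference over mismatching channels"; the paper does not spell out the exact distance.

```python
import numpy as np

# The eight valid stripe colors: red, cyan, yellow, white, each in a bright (255) and dark (128) version.
VALID_COLORS = [np.array(rgb) for L in (255, 128)
                for rgb in ([L, 0, 0], [0, L, L], [L, L, 0], [L, L, L])]

def snap_to_valid(recovered):
    """Replace a recovered RGB triple by the most probable valid stripe color:
    (1) keep the candidates with the largest number of exactly matching channels,
    (2) among those, pick the one closest in the mismatching channels."""
    recovered = np.asarray(recovered)
    matches = [int(np.sum(recovered == v)) for v in VALID_COLORS]
    best = max(matches)
    candidates = [v for v, m in zip(VALID_COLORS, matches) if m == best]
    return min(candidates, key=lambda v: int(np.sum(np.abs(recovered - v))))

print(snap_to_valid([255, 0, 128]))   # two channels match bright red, so it snaps to [255, 0, 0]
```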



Figure 8. Recovery of the original color of color stripes: (a) image captured by the camera; (b) recovered color stripes.

Although empirically determined parameters are used, the scheme works well in non-shadow regions. However, the recovered color is meaningless in shadow regions, where the stripe patterns are totally lost. We detect shadow regions and extend the color stripes that would otherwise have been projected there. Details follow in the next section.

5. Removal of Shadow Regions

In structured light imaging, shadows are created in regions that the projector light cannot reach.

In shadow regions, stripe patterns are totally lost, and parameter-controlled depth edge detection is not possible there. In order to prevent double edges and missing edges in shadow regions, we proactively identify shadow regions and extend the color stripes that would otherwise have been projected.

There has been research on natural shadow removal [12,13]. Although these works do not deal with exactly the same scene as ours, some conclusions are valuable. When a region becomes shaded, it becomes darker and less textured. This indicates that colors and textures are important tools to detect shadow regions. After we have recovered the original color of the projected stripes, we divide the recovered color image into simply connected regions of homogeneous color and make a region-based decision whether a given region is from shadow or not. We employ the following features.

5.1. Color Feature

We convert the recovered color into Lab space and build a color histogram. As in Guo et al. [12], we set 21 bins in each channel. All the histograms are normalized by the region area. Eliminating parts of the non-shadow regions by thresholding the L channel beforehand saves a great deal of time on training and clustering data.

5.2. Texture Feature

Textons, a concept stated in [10], can help us build a texture histogram. They construct a series of filters derived from a normal 2D Gaussian filter. We apply their filter bank to a large number of experimental images and categorize the data using a k-means algorithm to form k clusters whose mean points are called textons. Every pixel is clustered around its closest texton. Texture histograms are also normalized by the region area.

5.3. Angle Feature

Shadow is colorless. We look at each pixel in a color pattern image and its corresponding pixel in a binary pattern image in RGB space. Let us denote them by C and B, respectively. We form two vectors, OC and OB, from the origin to C and B. The angle between OB and OC should be small for a pixel in shadow. Shadow probability can be estimated using the cosine value of this angle; however, this angle feature alone is not enough to correctly classify shadow regions.
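The angle feature reduces to one cosine value per pixel; a minimal sketch:

```python
import numpy as np

def shadow_angle_cosine(C, B):
    """Cosine of the angle between the RGB vectors OC and OB.
    Values close to 1 (a small angle) are expected for colorless shadow pixels;
    the cosine is used only as a feature, not as a final decision."""
    C = np.asarray(C, dtype=float)
    B = np.asarray(B, dtype=float)
    denom = np.linalg.norm(C) * np.linalg.norm(B)
    return float(np.dot(C, B) / denom) if denom > 0 else 1.0

print(shadow_angle_cosine([40, 38, 42], [55, 52, 58]))   # near-gray pair of pixels -> cosine close to 1
```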

5.4. Classifier Training

We use the color, texture, and angle features together as in Figure 9 to train an SVM classifier. We number all four colors with two different lightness values so that every pixel is marked as an integer between 1 and 8. We can easily segment the recovered color stripe regions into scraps and cluster them into shadow or non-shadow regions. Each scrap is a training example. We sampled roughly 3000 examples in our experiments. Because the camera is on the upper left side of the projector in our experiments, the shadows must be caused by the left or the top side of objects. This prior helps learn where to find the shadow regions in which we extend stripes. Figure 10 shows that shadow regions are accurately detected using our method. Generally, we fill each shadow region with the region above it. As for the shadow regions in the top position, we choose the regions on the right side to replace them.
Figure 9. Feature vector used to identify shadow regions.
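Once the per-region color histogram, texture histogram and angle feature have been concatenated as in Figure 9, training the classifier is a standard step. The sketch below uses scikit-learn and random placeholder features purely for illustration; the feature dimensions and SVM settings are our assumptions, not values reported in the paper.

```python
import numpy as np
from sklearn.svm import SVC

# X: one row per region scrap, columns = [color histogram | texture histogram | angle feature].
# y: 1 for shadow regions, 0 for non-shadow regions (labels from the training scenes).
rng = np.random.default_rng(0)
X = rng.random((3000, 21 * 3 + 64 + 1))        # placeholder feature dimensions
y = rng.integers(0, 2, size=3000)              # placeholder labels

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # SVM classifier over the region features
clf.fit(X, y)
print(clf.predict(X[:5]))                      # shadow / non-shadow decision per region
```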


Figure 10. Detection of shadow regions: (a) camera's view of the scene; (b) cosine value of the angle between OB and OC; (c) shadow regions detected using the feature vector in Figure 9.

6. Depth Edge Detection

We use Gabor filtering for depth edge detection as in [7,8]. They applied Gabor filtering to black and white stripe patterns to find where the spatial frequency of the stripe pattern breaks. Because depth edge detection using Gabor filtering can only be applied to a binary pattern, we consider bright stripe patterns and dark stripe patterns separately to create binary patterns, as can be seen in Figure 11c,d, respectively. Along the depth edge in Figure 11, the upper stripe is supposed to have a different color from the lower one. We therefore exploit color information to detect potential depth edge locations by applying Gabor filtering to binary pattern images in which the binary stripes for each color are deleted in turn. Note that, as long as there are pattern offsets in the original color pattern image, the amount of offset in the binary patterns obtained by deleting the binary stripe for each color becomes larger than the original offset amount. This makes the response of the Gabor filter more sensitive to changes in the periodic patterns. Similar to the previous work [7], we additionally make use of texture edges to improve localization of depth edges. In order to get texture edges, we synthesize a gray scale image of the scene without stripe patterns simply by averaging the max channel value of the color pattern image and the binary pattern image for each pixel.

Figure 11 illustrates the process of detecting depth edges for which the offset amount is 1.78w. A Gabor filter of size 2w × 2w is used. Figure 11e,f shows the pattern without dark cyan and gray stripes, respectively. Figure 11g,h shows their Gabor filter responses, which have been binarized. The regions of low Gabor amplitude, shown in black, indicate locations of potential depth edges. We process the bright stripes pattern in the same way. Figure 11i,p,q,r includes all the possible combinations of colors along the edges. Thus, the union of them yields the depth edges. We simply apply a thinning operation to the result of the union in order to get the skeleton.
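A minimal sketch of the Gabor step on one binary stripe image: build a quadrature pair of Gabor kernels tuned to the stripe period, filter, and mark low-amplitude pixels as potential depth edge locations. The kernel parameters and the relative threshold are illustrative choices, not the ones used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_pair(wavelength, sigma, size):
    """Even (cosine) and odd (sine) Gabor kernels tuned to vertical stripes."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    env = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return env * np.cos(2 * np.pi * x / wavelength), env * np.sin(2 * np.pi * x / wavelength)

def low_gabor_amplitude(binary_pattern, stripe_width, rel_threshold=0.25):
    """Boolean map of pixels whose Gabor amplitude is low, i.e. where the periodic
    stripe structure breaks (candidate depth edge locations)."""
    w = float(stripe_width)
    k_even, k_odd = gabor_pair(wavelength=2 * w, sigma=w, size=int(2 * w))   # filter of about 2w x 2w
    img = binary_pattern.astype(float)
    amp = np.hypot(fftconvolve(img, k_even, mode="same"), fftconvolve(img, k_odd, mode="same"))
    return amp < rel_threshold * amp.max()

# Toy vertical stripe image (stripe width 8 px) whose lower half is offset, as a depth edge would be.
pattern = np.tile(np.arange(256) // 8 % 2, (128, 1)).astype(float)
pattern[64:, :] = np.roll(pattern[64:, :], 5, axis=1)
edges = low_gabor_amplitude(pattern, stripe_width=8)
print(edges.shape, int(edges.sum()))
```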


Figure 11. The process of detecting depth edges from the recovered color stripes in a shadow region: (a) color stripe pattern image; (b) recovered color stripes in the shadow region in (a); (c) binary pattern for dark stripes; (d) binary pattern for bright stripes; (e) partial pattern without dark cyan stripes; (f) partial pattern without gray stripes; (g) Gabor response from (d); (h) Gabor response from (e); (i) depth edges between dark cyan and gray stripes; (j) partial pattern without white stripes; (k) partial pattern without cyan stripes; (l) partial pattern without red stripes; (m) Gabor response from (j); (n) Gabor response from (k); (o) Gabor response from (l); (p) depth edges between cyan and white stripes; (q) depth edges between red and cyan stripes; (r) depth edges between red and white stripes; and (s) depth edges detected as the union of (i,p,q,r).

7. Experimental Results

We have coded our method in Matlab (2015b, MathWorks, Natick, MA, USA) and the code has not been optimized. We have used a 2.9 GHz Intel Core i5 CPU, 8 GB 1867 MHz DDR3 memory (Santa Clara, CA, USA) and Intel Iris Graphics 6100 (Santa Clara, CA, USA).

Figure 12 shows an example of the experimental results. We have compared the performance of our method with those of the previous method [8] and of the Kinect sensor. To produce depth edges from a Kinect depth map, for every pixel, we scan the depth values in its circular region of radius 5 and output the pixel if any pixel within its circle has a depth difference of r ≥ r_min. The result clearly shows that our method finds the depth edges most correctly for the given parameters while the other methods cannot. Figure 12e shows the result of depth edge detection from the Kinect depth map, where straight depth edge segments are not detected as straight. This is because the depth values provided by the Kinect sensor along depth edges are not accurate due to interpolation.

Whether false positives or false negatives, errors have two main causes: inaccurate color recovery and inaccurate shadow detection; inaccurately recovered stripes result in false edges. However, color recovery errors are well contained because we check the De Bruijn constraint when identifying the original color of stripes. When a shadow region is not detected, false positives occur. On the other hand, when some non-shadow regions are treated as shadow near boundaries, incorrect depth edges are produced. Table 1 lists the computation time for each step of our method shown in Figure 2.
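For reference, the Kinect baseline described above can be sketched directly: scan a circular neighborhood of radius 5 around every pixel of the depth map and mark the pixel if any neighbor differs in depth by at least r_min. Border handling is simplified here (np.roll wraps around the image edges).

```python
import numpy as np

def kinect_depth_edges(depth, r_min, radius=5):
    """Mark pixels whose circular neighborhood of the given radius contains a
    depth value differing by at least r_min (same unit as the depth map)."""
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if 0 < dy * dy + dx * dx <= radius * radius]
    edges = np.zeros(depth.shape, dtype=bool)
    for dy, dx in offsets:
        shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)   # wrap-around at borders ignored
        edges |= np.abs(depth - shifted) >= r_min
    return edges

# Toy depth map (in cm): a 50 x 50 block that is 10 cm closer than the background.
depth = np.full((120, 120), 180.0)
depth[40:90, 40:90] = 170.0
print(int(kinect_depth_edges(depth, r_min=5).sum()))   # non-zero only around the block boundary
```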

Input parameters for Figure 12: r_min = 5 cm, [a_min, a_max] = [140 cm, 200 cm].


Figure 12. Depth edge filtering results for r ≥ 5 cm: (a) experimental setup; (b) front view; (c) ground truth; (d) method in [8]; (e) Kinect; (f) our method.

Table 1. Computation time.

Image Size                                    1280 × 800    720 × 576
Recover color stripes                         54.35 s       39.50 s
Detect shadow regions                         4.10 s        2.12 s
Extend color stripes in the shadow regions    4.94 s        3.45 s
Depth edges detection                         5.87 s        3.99 s
Total                                         69.26 s       49.06 s

Figure 13 depicts an additional experimental result where we find depth edges that satisfy the depth constraint r_min ≤ r ≤ r_max. We can achieve this by two consecutive applications of Gabor filtering to the pattern images: the first and second Gabor filters yield depth edges with r ≥ r_min and r ≥ r_max, respectively, and we remove the depth edges with r ≥ r_max. We can see that our method outperforms the others. While we have provided raw experimental results without any postprocessing operations, the results could easily be enhanced by employing simple morphological operations.

Input parameters for Figure 13: r_min = 4 cm, r_max = 9 cm, [a_min, a_max] = [90 cm, 160 cm].
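The band-pass variant is simply the difference of two runs of the same detector. In the sketch below, detect_depth_edges is a hypothetical stand-in for the full pipeline of Sections 3-6 that returns a boolean edge map for a given r threshold.

```python
import numpy as np

def depth_edges_in_band(scene_images, r_min, r_max, a_range, detect_depth_edges):
    """Depth edges satisfying r_min <= r <= r_max: run the detector for r >= r_min
    and for r >= r_max, then remove the edges that also satisfy r >= r_max."""
    edges_min = detect_depth_edges(scene_images, r_min, a_range)   # depth edges with r >= r_min
    edges_max = detect_depth_edges(scene_images, r_max, a_range)   # depth edges with r >= r_max
    return np.logical_and(edges_min, np.logical_not(edges_max))
```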


Figure 13. Depth band-pass filtering results for 4 cm ≤ r ≤ 9 cm: (a) experimental setup; (b) front view; (c) ground truth; (d) method in [8]; (e) Kinect; (f) our method.


8. Conclusions

We have presented a novel method that accurately controls depth edge filtering of the input scene using a color stripes pattern. For accuracy, we employed an associated binary pattern. Our method can be used for active sensing of specific 3D information about the scene. We think that if a task is to find accurate depth edges, our method provides a better solution. Further research is in progress to use the proposed method to create a sketch of 3D reconstruction by compiling depth edges with various depth differences.

Acknowledgments: This research was supported by the Basic Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2016R1D1A1B03930428).

Author Contributions: Ziqi Zheng conceived and designed the experiment; Ziqi Zheng and Seho Bae performed the experiments; and Ziqi Zheng and Juneho Yi wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results.

References

1. Barone, S.; Paoli, A.; Razionale, A.V. A coded structured light system based on primary color stripe projection and monochrome imaging. Sensors 2013, 13, 13802–13819. [CrossRef] [PubMed]
2. Ramirez, L.F.; Gutierrez, A.F.; Ocampo, J.L.; Madrigal, C.A.; Branch, J.W.; Restrepo, A. Extraction of correspondences in color coded pattern for the 3D reconstruction using structured light. In Proceedings of the 2014 IEEE Symposium on Image, Signal Processing and Artificial Vision, Armenia, Colombia, 17–19 September 2014; pp. 1–5.
3. Shen, J.; Cheung, S.C.S. Layer depth denoising and completion for structured-light RGB-D cameras. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1187–1194.
4. Zhang, L.; Curless, B.; Seitz, S. Rapid shape acquisition using color structured light and multi-pass dynamic programming. In Proceedings of the 1st International Symposium on 3D Data Processing Visualization and Transmission, Padova, Italy, 19–21 June 2002; pp. 24–36.
5. Salvi, J.; Forest, J. A new optimized de Bruijn coding strategy for structured light patterns. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004.
6. Raskar, R.; Tan, K.; Feris, R.; Yu, J.; Turk, M. Non-photorealistic camera: Depth edge detection and stylized rendering using multi-flash imaging. In Proceedings of the 2004 ACM Transactions on Graphics, Los Angeles, CA, USA, 8–12 August 2004; pp. 679–688.
7. Park, J.; Kim, C.; Na, J.; Yi, J.; Turk, M. Using structured light for efficient depth edge detection. Image Vis. Comput. 2008, 26, 1449–1465. [CrossRef]
8. Na, J.; Park, J.; Yi, J.; Turk, M. Parameterized structured light imaging for depth edge detection. Electron. Lett. 2010, 46, 349–360. [CrossRef]
9. Szeliski, R. Rapid octree construction from image sequences. CVGIP: Image Underst. 1993, 1, 23–32. [CrossRef]
10. Salvi, J.; Fernandez, S.; Pribanic, T.; Llado, X. A state of the art in structured light patterns for surface profilometry. Pattern Recognit. 2010, 43, 2666–2680. [CrossRef]
11. Fredricksen, H. The lexicographically least de Bruijn cycle. J. Comb. Theory 1970, 9, 509–510.
12. Guo, R.; Dai, Q.; Hoiem, D. Single-image shadow detection and removal using paired regions. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011.
13. Baba, M.; Mukunoki, M.; Asada, N. Shadow removal from a real image based on shadow density. In Proceedings of the 2004 ACM Transactions on Graphics, Los Angeles, CA, USA, 8–12 August 2004.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).