Adaptive Coded Aperture Photography

Oliver Bimber¹, Haroon Qureshi¹, Anselm Grundhöfer², Max Grosse³, and Daniel Danch¹

¹ Johannes Kepler University Linz, {firstname.lastname}@jku.at
² Disney Research Zurich, [email protected]
³ Bauhaus-University Weimar, {max.grosse}@uni-weimar.de

Abstract. We show how the intrinsically performed JPEG compression of many digital still cameras leaves margin for deriving and applying image-adapted coded apertures that support retention of the most important frequencies after compression. These coded apertures, together with subsequently applied image processing, enable a higher light throughput than corresponding circular apertures, while preserving adjusted focus, depth of field, and bokeh. Higher light throughput leads to proportionally higher signal-to-noise ratios and reduced compression noise or, alternatively, to shorter shutter times. We explain how adaptive coded apertures can be computed quickly, how they can be applied in lenses by using binary spatial light modulators, and how a resulting coded bokeh can be transformed into a common radial one.

Fig. 1. Camera prototype. A programmable liquid crystal array is integrated at the aperture plane of a consumer compound lens. A mask pattern of 7 × 7 binary bits is addressed by a microcontroller.

1 Introduction and Motivation

Many digital still cameras (compact cameras and interchangeable lens cameras) apply on-device JPEG compression. JPEG compression is lossy because it attenuates high image frequencies or even rounds them to zero. This is done by first transforming each 8 × 8 pixel image block to the frequency domain via the discrete cosine transform (DCT), and then quantizing the amplitudes of the frequency components by dividing them by component-specific constant values that are standardized in 8 × 8 quantization matrices. The compression ratio can be controlled by selecting a quality factor q, which is used to derive the coefficients of the quantization matrices. Visible compression artifacts (called compression noise) commonly result from quantization errors caused by too low quality factors, in particular in the presence of sensor noise. Therefore, sensor noise is usually filtered before compression to reduce such artifacts.

The aperture diameter in camera lenses controls the depth of field and the light throughput. For common 3D scenes, narrow apertures support a large depth of field and a broad band of imaged frequencies, but suffer from low light throughput. Wide apertures are limited to a shallow depth of field with an energy shift towards low frequencies, but benefit from high light throughput. If JPEG compression is applied intrinsically, then its attenuation of high image frequencies leaves margin for a wider aperture, since these frequencies need not be supported optically. A wider aperture results in a higher light throughput that translates either into a higher signal-to-noise ratio (SNR) or into shorter shutter times. It also reduces compression noise because the SNR is increased. Thus, we trade lower compression quality either for higher SNR or for shorter shutter times.

Our basic idea is the following: We allow the photographer to adjust focus and depth of field when sampling (i.e., capturing) the scene through a regular (circular) aperture. The captured sample image i_g is then JPEG-compressed to î_g with the selected quality factor (as would be done by the camera). Using î_g, we compute an optimized coded aperture that supports the frequencies which remain after JPEG compression and additional frequency masking. This coded aperture is then applied in the camera lens by using a spatial light modulator (SLM), and the scene is re-captured. The resulting image i_c is finally JPEG-compressed to î_c. The computed coded aperture and the subsequently applied image processing ensure that the focus, the depth of field, and the bokeh (i.e., the blurred appearance of out-of-focus regions) contained in î_g are preserved in î_c, and that î_c benefits from the higher light throughput of the coded aperture.

Sampling a scene with multiple shots is quite common in photography; bracketing and passive autofocus sampling are classical examples. The proposed two-shot approach should therefore not interfere with settled habits in photography.
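To make the mechanism we exploit concrete, the following sketch reproduces the block-wise quantization described above for a single 8 × 8 block. It is our own illustrative code, not the camera's implementation; the quality-factor scaling follows the common Independent JPEG Group convention, which we assume here.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table (Annex K of the JPEG specification).
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=np.float64)

def quantization_matrix(q):
    """Scale Q50 by quality factor q (1..100); IJG-style scaling (assumed)."""
    s = 5000 / q if q < 50 else 200 - 2 * q
    return np.clip(np.floor((Q50 * s + 50) / 100), 1, 255)

def compress_block(block, q):
    """DCT-transform, quantize, and reconstruct one 8 x 8 pixel block."""
    Qm = quantization_matrix(q)
    coeffs = dctn(block - 128.0, norm='ortho')   # forward DCT (level-shifted)
    quantized = np.round(coeffs / Qm)            # lossy step: high frequencies -> 0
    return idctn(quantized * Qm, norm='ortho') + 128.0

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(np.float64)
print(np.abs(block - compress_block(block, q=30)).mean())  # mean reconstruction error
```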

2 Related Work and Contribution

Coded aperture imaging has been applied in astronomy and medical imaging for many years. Recently, it has also been employed in the context of computational photography. Spatially coded static [1–5] or dynamic [6, 7] apertures have been used, implemented either as binary [1, 2, 4, 5, 7], intensity [1], or color [3] masks. The main applications of coded aperture photography are post-exposure refocusing [1, 6], defocus deblurring [1, 2, 4, 5, 7], depth reconstruction [2, 3, 5, 6], matting [1, 3], and light field acquisition [1, 6, 7]. Static binary masks have also been applied to compensate for projector defocus [8, 9]. In [10], concentric ring-mirrors were used to split the aperture into multiple paths that are imaged on the four quadrants of the sensor for depth-of-field editing, synthetic refocusing, and depth-guided deconvolution. Aperture masks are optimized, for example, for given noise models [4], to maximize zero crossings in the Fourier domain [2, 5], to maximize Fourier magnitudes [1, 4, 7], or are computed from a desired set of optical transfer functions (OTFs) or point-spread functions (PSFs) at different focal depths [11]. An analysis of various masks used for depth estimation is provided in [12]. None of the existing techniques compute and apply aperture masks that are optimized for the actual image content. This requires adapting the masks dynamically to each individual image.

Programmable coded apertures that utilize various types of SLMs are becoming increasingly popular. Liquid crystal arrays (LCAs) have been applied to realize dynamic binary masks for light field acquisition [6], and LCoS panels have been used for light field acquisition and defocus deblurring [7]. Although these approaches allow dynamic exchange of the aperture masks, the individual sub-masks are still precomputed and applied without any knowledge of the image content. Temporally coded apertures without spatial coding (e.g., implemented with high-speed ferroelectric liquid crystal shutters [13] or with conventional electronic shutters [14, 15]) have been used for coded exposure imaging to compensate for motion blur. Our approach requires spatially coded aperture masks and is not related to motion deblurring. Adaptive coded apertures have been introduced for enhanced projector defocus compensation [9]. Here, optimal aperture masks are computed in real time for each projected image, taking into account the image content and limitations of the human visual system. It has been shown that adaptive coded apertures always outperform regular (circular) apertures with equivalent f-numbers in terms of light throughput [9].

The main contribution of this paper is a first approach to adaptive coded aperture photography. Instead of pre-computing and applying aperture masks that are optimized for content-independent criteria such as noise models, Fourier magnitudes, Fourier zero-crossings, or given OTFs/PSFs, we compute and apply aperture masks that are optimized for the frequencies which are actually contained in a captured sample image after applying camera-intrinsic JPEG compression and frequency masking. We intend neither to increase depth of field nor to acquire scene depth or light fields. Instead, our masks maximize the light throughput. Increased light throughput leads either to a higher SNR with less compression noise or to shorter shutter times. We additionally present a method for transforming the coded bokeh that results from photographing with a coded aperture into the radial bokeh that was captured initially with the circular aperture. This is called bokeh transformation. Furthermore, we demonstrate how adaptive coded intensity masks can be approximated with binary SLMs using a combination of pulse width modulation and coded exposure imaging. The supplementary material provides additional details.

3 Proof-of-Concept Prototype

We used a Canon EOS 400D with a Canon EF 100 mm, f/2.8 USM macro lens. The original diaphragm blades of the lens had to be removed to allow insertion of a programmable LCA that realizes the adaptive coded aperture. The LCA is an Electronic Assembly EA DOGM132W-5 display with a native resolution of 132 × 32 pixels and a pixel size of 0.4 × 0.35 mm². A 3D-printed, opaque plastic cast allows a light-tight and precise integration of the LCA at the aperture plane of the lens. A USB-connected Atmel ATmega88 microcontroller is used to address a matrix of 7 × 7 binary bits, where each individual bit is composed of 3 × 3 LCA pixels. Our prototype is illustrated in figure 1. Since the mechanical diaphragm had to be removed, and to enable a fair assessment of the quality enhancement, regular (i.e., circular) aperture masks were also rasterized and displayed on the LCA for comparison with the coded aperture masks. Thus, the rasterized versions of the two smallest circular apertures we can display (with 2% and 10% aperture opening) correspond to a 1-bit square shape and a 5-bit cross shape, respectively (see figure 4 for examples). Although the EOS 400D supports a quality factor of q=90 for on-device JPEG compression, we applied JPEG compression ourselves (implemented after [16]) to consider additional quality factors in our evaluation. Furthermore, we carried out a subsequent de-blocking step (implemented after [17], using the default narrow quantization constraint of 0.3, as explained in [17]).
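The rasterization of the circular comparison masks mentioned above can be sketched as follows. The radius search below is our own illustrative choice, not necessarily the procedure used on the prototype.

```python
import numpy as np

def rasterize_circular_aperture(opening_fraction, n=7):
    """Rasterize a centered circular aperture onto an n x n binary bit grid so that
    the number of opened bits approximates opening_fraction of the full grid."""
    yy, xx = np.mgrid[0:n, 0:n]
    dist = np.hypot(xx - (n - 1) / 2, yy - (n - 1) / 2)  # distance of each bit to the center
    target = opening_fraction * n * n                     # desired number of open bits
    best, best_err = None, np.inf
    for r in np.linspace(0.1, n, 200):                    # pick the radius with smallest error
        mask = (dist <= r).astype(np.uint8)
        err = abs(mask.sum() - target)
        if err < best_err:
            best, best_err = mask, err
    return best

# 2% opening -> a single central bit, 10% -> a 5-bit cross (as on the LCA prototype).
print(rasterize_circular_aperture(0.02))
print(rasterize_circular_aperture(0.10))
```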

4 Adaptive Coded Aperture Computation

Our goal is to compute aperture masks that retain the most important frequencies while maximizing light throughput by dropping the less important higher frequencies that are, in any case, strongly attenuated by JPEG compression. Let I_g and Î_g be the Fourier transforms of the uncompressed and compressed versions of i_g, respectively. We define frequencies (f_x, f_y) to be important if their magnitudes before and after compression are similar, i.e., if their magnitude ratio Î_g(f_x, f_y)/I_g(f_x, f_y) ≥ τ. The larger the threshold τ, the more frequencies are dropped and the higher the light throughput gained with the resulting coded aperture. However, choosing an overly large τ results in ringing artifacts after transforming the remaining frequencies back to the spatial domain. We found that τ=0.97 (i.e., a magnitude similarity of 97%) represents a good balance between light throughput gain and image quality. For this threshold, the sum of absolute RGB pixel differences between the original compressed image î_g and the remaining important frequencies after thresholding and transformation to the spatial domain was less than 1% for all image contents, camera settings, lighting conditions, and compression quality factors with which we experimented.

We then construct a binary frequency mask m of the same resolution as Î_g and set all entries that correspond to retained frequencies to 1, while all others are set to 0. Using m, we adopt the method explained in [9] to compute an intensity aperture pattern a by minimizing the variance of its Fourier transform for all important frequencies:

M F a = e,   i.e.,   min_a ‖M F a − e‖₂²,     (1)

where M is the diagonal matrix containing the binary frequency mask values of m, F is the discrete Fourier transform matrix (i.e., the set of orthogonal Fourier basis functions in its columns), a is the unknown vector of the coded aperture pattern, and e is the vector of all ones. To minimize the variance of the Fourier transform of a for the retained frequencies while maximizing light throughput, this over-constrained system is solved in a least-squares sense with the additional constraint of minimizing ‖a‖₂² (this intrinsically maximizes the light throughput of the aperture: a small squared 2-norm of a, with aᵢ ≥ 0, also minimizes the variance of the normalized bit intensities in the spatial domain). As described in [9], this can be solved quickly with the pseudo-inverse:

a = (M F)∗ e = F∗ M∗ e = F∗ M e,     (2)

where the conjugate-transpose pseudo-inverse matrix F∗ is constant and can be pre-computed. Thus, for each new image, a simple matrix-vector multiplication of F∗ with M e is sufficient for computing the optimal coded aperture pattern a. Negative values in a are clipped, and the result is scaled such that the maximum value is 1. Since the LCA used can only display binary aperture patterns, a must be binarized: given the minimum and maximum LCA transmittances, t_min and t_max, values ≥ t_min + (t_max − t_min)/2 are set to 1, all others to 0. The resulting aperture shape is manually scaled to roughly match the depth of field in i_g. Section 5 explains how remaining depth-of-field variations are removed.

Results for various capturing situations (image content, compression quality, initial opening of the regular aperture, focus, lighting) are presented in figure 2. Compared to regular circular apertures, adaptive coded apertures always achieve a gain in light throughput. Note that their ideal shape is often not round, in order to support optimal coverage of asymmetric spectra. This is not possible with simple circular aperture shapes. The gained light throughput is directly proportional to the coded aperture opening divided by the corresponding regular aperture opening. It is not appropriate to compare f-numbers in this case, since the coded aperture masks are often irregular intensity patterns that cannot be described by a diameter. If we ignored other noise sources, such as dark noise and read noise, and considered shot noise only, then the gain in SNR would be proportional to the square root of the light throughput gain. Other multi-shot methods, such as averaging, are not an option if only two shots are available: averaging two images with identical settings reduces noise only by a factor of √2 [18]. The examples in figure 2 show that we achieve significantly higher gain factors in all situations. Furthermore, compression noise is also reduced with adaptive coded apertures (especially for low quality factors, as shown in figure 2, top row). Alternatively to increasing light throughput, the shutter time can be decreased proportionally. More results and examples are provided in the supplementary material.
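In a minimal sketch, the per-image computation reduces to the thresholding step described above and an inverse Fourier transform of the mask (with a unitary F, equation 2 is exactly that). The code below is our own simplification on grayscale inputs; centering by fftshift, resampling to the 7 × 7 bit grid, and the manual depth-of-field scaling are our assumptions or omitted.

```python
import numpy as np

def adaptive_coded_aperture(i_g, i_g_hat, tau=0.97, t_min=0.0, t_max=1.0):
    """Sketch: build the binary frequency mask m from the magnitude ratio of the
    compressed vs. uncompressed spectra, then evaluate a = F* M e (eq. 2)."""
    I_g = np.fft.fft2(i_g, norm='ortho')
    I_g_hat = np.fft.fft2(i_g_hat, norm='ortho')

    # Important frequencies: magnitude ratio after/before compression >= tau.
    eps = 1e-12
    m = (np.abs(I_g_hat) / (np.abs(I_g) + eps)) >= tau

    # a = F* M e: with unitary F this is the inverse transform of the mask vector.
    a = np.real(np.fft.ifft2(m.astype(np.float64), norm='ortho'))
    a = np.fft.fftshift(a)           # center the pattern (zero frequency in the middle)

    a = np.clip(a, 0.0, None)        # clip negative values
    a /= a.max()                     # scale so that the maximum value is 1

    # Binarize for a binary SLM; scaling to the 7 x 7 bit grid and to the adjusted
    # depth of field is done separately (manually, in the prototype).
    a_binary = (a >= t_min + (t_max - t_min) / 2).astype(np.uint8)
    return a, a_binary
```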



Fig. 2. Example results of adaptive coded aperture photography. Top row: decreasing JPEG compression quality factor (q=90,70,50,30). Corresponding images are brightness-matched. Second row: increasing regular aperture opening (2%,10%,27%) – controlling initial light throughput and depth of field. Third row: different focus settings (red arrow indicates the plane in focus). Bottom row: different lighting conditions. The photographs in rows 2-4 are compressed with q=70; the graphs on the right plot the light throughput gain for all quality factors. For rows 3 and 4, the standard opening of the regular aperture was 10%. The aperture patterns applied (before and after binarization) are depicted. The circular shape of the regular aperture is rasterized for the 7 × 7 resolution of the LCA.

5 Bokeh Transformation

An undesired difference between i_g and i_c (and consequently also between î_g and î_c) is the bokeh. The bokeh corresponds to the PSF of lens and aperture. Lenses with regular circular apertures result in a Gaussian PSF with a radial bokeh of defocused points. Coded apertures, however, lead to a bokeh that is associated with the coded aperture pattern and its PSF. To ensure that the bokeh in î_c matches that in î_g, we must transform it from the specific coded bokeh into a common radial bokeh. The scale of the bokeh pattern (i.e., its size in the image) depends on the amount of defocus, which is unknown in our case, since the scene depth is unknown. Note that this transformation also corrects for remaining depth-of-field variations between i_g and i_c. All results shown in figure 2 have been bokeh-transformed. Capturing an image through an aperture with a given PSF can be considered as a convolution. For our two cases (regular aperture and coded aperture), this would be (before compression):

i_g = i ⊗ g_s″ + η_g,   i_c = i ⊗ c_s′ + η_c,     (3)

where i is a perfectly focused image (scene point), g_s″ and c_s′ are the PSF kernels of the regular and the coded apertures at scales s″ and s′, respectively, and η_g and η_c represent the image noise. The convolution theorem allows a formulation in the frequency domain:

I_g = I · G_s″ + ζ_g,   I_c = I · C_s′ + ζ_c.     (4)

In principle, our bokeh transformation can be carried out as follows:

Ĩ_c = ((I_c − ζ_c) / C_s′) · G_s″ + ζ_g,     (5)

where the division represents deconvolution with the coded PSF, and the multiplication represents convolution with the Gaussian PSF. In practice, the scales are unknown since the scene depth is unknown, and equation 5 cannot be applied in a straightforward way. An easy solution is to test all possible scale pairs (s′ and s″) and to select the pair that leads to the best matching result when comparing the inverse Fourier transform of Ĩ_c with i_g. Note that the optimal scales may vary over the image because defocus is usually not uniform.

To find the optimal scale pairs, we carry out the following procedure (see the supplementary material for additional details): First, we register i_c and i_g using a homography derived from corresponding SIFT features as explained in [19], and match the brightness of i_c with the brightness of i_g. Then we carry out the bokeh transformation (i.e., equation 5) N × N times for combinations of N different scales in s′ and s″. Instead of a simple frequency-domain operation, as illustrated in equation 5, we apply iterative Richardson-Lucy deconvolution (with 8 iterations) to account for Poisson-distributed shot noise and Gaussian-distributed read noise. The standard deviation σ for read noise can be obtained from the sensor specifications, given that ISO sensitivity (gain) and average operating temperature are known (in our case: σ=0.003 for the EOS 400D, ISO 100, and room temperature). We also add shot noise and read noise after convolution. This leads to N × N bokeh-transformed sample images that are close approximations of i_g at the correct scales.
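A minimal frequency-domain sketch of equation 5 for a single scale pair is given below. Our actual implementation uses Richardson-Lucy deconvolution with the noise model described above; the Wiener-style regularization, the hard-coded kernel size, and the omitted PSF centering are simplifications, and all helper names are ours.

```python
import numpy as np

def gaussian_psf(sigma, size=15):
    """Gaussian PSF kernel of the regular (circular) aperture at scale sigma."""
    ax = np.arange(size) - (size - 1) / 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

def bokeh_transform(i_c, coded_psf, sigma_regular, reg=1e-3):
    """Deconvolve the coded-aperture image with the (scaled) coded PSF and
    re-convolve with the Gaussian PSF of the regular aperture (eq. 5),
    entirely in the frequency domain with simple regularization."""
    h, w = i_c.shape
    C = np.fft.fft2(coded_psf, s=(h, w))              # OTF of the coded aperture
    G = np.fft.fft2(gaussian_psf(sigma_regular), s=(h, w))
    I_c = np.fft.fft2(i_c)
    # Regularized (Wiener-like) division instead of a plain 1/C to avoid noise blow-up;
    # PSF centering/shift handling is omitted for brevity.
    I_tilde = I_c * np.conj(C) / (np.abs(C)**2 + reg) * G
    return np.real(np.fft.ifft2(I_tilde))

# In the full method this is evaluated for N x N combinations of coded/Gaussian scales,
# and the best scale pair is then chosen per 3 x 3 image patch by comparison with i_g.
```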

Fig. 3. Bokeh transformation applied to images with different focus. The bokeh of the regular aperture (rasterized circular, with 10% aperture opening) and that of the coded aperture are clearly visible at defocused highlights (defocused reflections of the truck). Note the low contrast of the LCA. The bokeh of the coded aperture also leads to artifacts (ripples in the defocused, twinkling forearm of the bowman). After bokeh transformation (bottom row) the bokeh of the regular aperture (rasterized) can be approximated. Corresponding images are brightness-matched for better comparison. The aperture patterns applied (before and after binarization) are depicted.

To find the correct scales for each image patch of size M × M pixels, we simply select the bokeh-transformed sample with the smallest (average) absolute RGB difference in that patch when compared to the same patch in i_g. This is repeated for all patches. We chose a patch size of M=3 to limit the influence of noise. Note that, since there is no robust correlation between matched scales and scene depth (due to the scale ambiguity, which can be attributed to the limitations discussed above), depth from defocus would be unreliable (although this is not necessary for bokeh transformation). After determining the scale pairs for each image patch, we bokeh-transform i_c again (patch by patch), but this time with the previously determined scales for each patch. The difference from the first bokeh transformation (which is only used to determine the correct scales) is that we now omit the step of adding shot noise and read noise after convolution, since we do not intend to artificially reduce the final image quality. Note that adding noise after convolution is necessary for the first bokeh transformation step in order to match i_g and i_c as closely as possible and thus find the correct scales. The final bokeh-transformed patches are stitched together into a new image ĩ_c, which is JPEG-compressed to î_c.

Figure 3 shows examples of the bokeh transformation. Note that the coded bokeh is transformed into the bokeh of the rasterized circular aperture. If i_g could be captured with the original diaphragm of the lens, the reconstructed bokeh would be smoothly radial (see supplementary material). Note again that this is only a limitation of our hardware prototype, as mechanical constraints in the lens housing prevented us from using the original diaphragm and the LCA at the same time.

6 Intensity Masks

In principle, our adaptive coded apertures are intensity masks. They are only binarized because the applied LCA is binary. This is a limitation of our prototype, as a grayscale LCA with a large pixel footprint (to avoid diffraction-induced artifacts) would have to be custom-manufactured, whereas adequate binary LCAs are off-the-shelf products. Binarization, however, always leads to quantization errors. Although proof-of-concept results can be achieved with our prototype, a much more efficient solution (also with respect to contrast, switching times, and light throughput) would be to replace the LCA with a digital micro-mirror array (DMA), such as a DMD chip. Switching times of typical DMAs are about 5 µs. Since they natively produce binary patterns, intensities are commonly generated via pulse width modulation (PWM), which is enabled by their fast switching times. Below, we demonstrate that PWM in combination with coded exposure imaging can also be applied to generate intensity masks for adaptive coded apertures.

Instead of applying a single aperture pattern during the entire exposure time t, we segment t into n temporal slots, t = Σᵢ tᵢ, possibly of different durations, and apply a different binary aperture pattern in each slot. With s = [t₁, ..., tₙ], we can compute a binary pulse series b(x, y) for each aperture bit at position (x, y) with intensity a(x, y) by solving

a(x, y) = (s · b(x, y)ᵀ) / t     (6)

for b(x, y). Here, b(x, y) is the vector of binary pulses [b₁, ..., bₙ] for position (x, y) that, when activated in the corresponding slots [t₁, ..., tₙ], reproduce the desired intensity a(x, y) with a precision that depends on n (n adjusts the desired tonal resolution of the intensity aperture mask). We solve for b(x, y) by sequentially summing the contributions tᵢ/t of selected slots (traversing from the longest to the shortest) while seeking a solution for which this sum approximates a(x, y) with minimum error. For each selected slot, bᵢ is set to 1, while it is set to 0 for unselected slots. Note that this is the same basic principle as applied for PWM in digital light processing (DLP) devices. The image that is integrated during the exposure time t (i.e., for the duration of all exposure slots with the corresponding binary pulse patterns) approximates the image that would be captured with the intensity mask during the same exposure time.
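The pulse selection can be written in a few lines. The sketch below greedily takes slots from longest to shortest without overshooting the target intensity; this never-overshoot rule and the tie handling are our own interpretation of the selection criterion.

```python
import numpy as np

def pulse_series(a_xy, slots):
    """Greedily select exposure slots (longest to shortest) whose summed
    contributions t_i / t approximate the desired bit intensity a(x, y)."""
    t = sum(slots)
    b = np.zeros(len(slots), dtype=np.uint8)
    accumulated = 0.0
    for i, t_i in sorted(enumerate(slots), key=lambda s: -s[1]):
        if accumulated + t_i / t <= a_xy + 1e-9:   # take the slot if it does not overshoot
            b[i] = 1
            accumulated += t_i / t
    return b, accumulated

# Slot durations as used with the LCA prototype (total exposure time of about 6.4 s).
slots = [3.2, 1.6, 0.8, 0.4, 1/5, 1/10, 1/20, 1/40]
b, approx = pulse_series(0.6, slots)
print(b, approx)   # binary pulses b_1..b_n and the reproduced intensity
```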

Fig. 4. Intensity masks with pulse width modulation and coded exposure imaging. Top row: images captured at exposure slots s = [3.2s, 1.6s, 0.8s, 0.4s, 1/5s, 1/10s, 1/20s, 1/40s] with corresponding binary pulse patterns (bi ). Bottom row: image captured through regular aperture (2% aperture opening), images captured with adaptive coded aperture (before and after bokeh transformation), and close-ups. The bokeh transformation was carried out with the intensity mask pattern.

Figure 4 presents an example realized with our LCA prototype. The shortest possible exposure time of t=6.4 s is limited by the fastest switching time of the LCA (supporting 1/40 s for the shortest exposure slot in our example), and the desired tonal resolution (n=8 in our example). For the much faster DMAs, however, a minimum exposure time of 1.28 ms is possible (assuming a minimum exposure slot of 5 µs) with the same tonal resolution and the same binary temporal segmentation. Note that because LCA and sensor are not synchronized in our prototype, the contribution of each exposure slot was captured in individual images which were integrated by summing them digitally instead of integrating them directly by the sensor. While this requires a linearized sensor response, the final result can be gamma-corrected to restore the camera’s non-linear transfer function. Because of the successively decreasing SNR in each exposure slot, however, a direct integration by the sensor leads to a higher SNR and should therefore be preferred over digital integration.
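The digital integration used with the unsynchronized prototype amounts to linearizing, summing, and re-encoding the per-slot captures. A minimal sketch, assuming 8-bit inputs and a simple gamma curve of 2.2 as a stand-in for the camera's actual transfer function:

```python
import numpy as np

def integrate_slot_images(slot_images, gamma=2.2):
    """Digitally integrate per-slot captures: linearize, sum, then re-apply a
    non-linear transfer function (approximated here by a gamma curve)."""
    linear = [np.power(img.astype(np.float64) / 255.0, gamma) for img in slot_images]
    integrated = np.sum(linear, axis=0)
    integrated /= integrated.max()            # normalize before re-encoding
    return np.power(integrated, 1.0 / gamma)  # gamma-correct the final result
```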

7 Limitations and Outlook

In this paper, we have shown how to trade compression quality for signal-to-noise ratio or for shutter time by means of adaptive coded apertures. This is beneficial if (i) image compression by the camera hardware is obligatory, or (ii) it is known at capture time that the image will be compressed later (e.g., for web content). Even a little compression can, depending on the image content, already lead to a substantial gain in light throughput (a gain factor of 3.6 in our experiments). Alongside compression, downsampling would be another (alternative or complementary) option if the full resolution of the sensor is not required for a particular application. In this case, we would trade resolution for signal-to-noise ratio or for shutter time. We will explore this in the future.

For evaluation, we captured and computed a total of 160 images of different scenes with different lighting conditions, focus and depth-of-field settings, and quality factors. Figure 2 illustrates representative examples. Our experiments showed that adaptive coded apertures are most efficient for large depth-of-field photographs of general 3D scenes, because JPEG compression is efficient for the high-frequency content imaged by narrow apertures and because the light throughput is initially low. The light throughput gain increases with decreasing quality factor. This behavior is widely invariant to focus settings and lighting conditions unless they strongly influence the image frequencies (e.g., for under- or over-exposed images, or for strong out-of-focus situations). Adaptive coded apertures are even robust to dynamic scene changes or slight movements of the camera during sampling and capturing, since the spectra of i_g and i_c will normally not differ significantly. They can be computed quickly when implemented in hardware (e.g., within 13 ms for a 1024 × 1024 image resolution using our CUDA implementation on an NVIDIA GeForce 8800 Ultra), while the more time-consuming bokeh transformation can be carried out off-line rather than during capturing.

In principle, i_c could also be captured with an appropriately widened circular aperture whose radius is derived from the remaining frequencies after JPEG compression. This would have the advantage that the original diaphragms of consumer lenses could be used directly without an additional SLM. However, as shown in [9], it would be less efficient, since adaptive coded apertures also optimally cover asymmetric spectra that do not follow a 1/f distribution.

The main limitations of our proof-of-concept prototype are the low light transmittance (only 30% when completely transparent) and the low contrast (7:1) of the LCA employed, which make it inapplicable in practice. It also introduces slight color shifts and interference effects on partially polarized light. However, this does not affect the advantage of adaptive coded apertures in general, and can easily be improved by using reflective DMAs or LCoS panels as explained in [5]. A higher aperture resolution (tonal and spatial) would also produce more precise results. Furthermore, the SLM and the sensor should be synchronized to allow direct sensor integration of the coded exposure slots that enable intensity masks. Improving our hardware prototype will be part of our future research. Currently, the applied coded aperture pattern is scaled manually to roughly match the depth of field in i_g, while remaining depth-of-field differences between i_c and i_g are removed by the bokeh transformation. Our future work also includes automating this scale estimation. Another interesting avenue for future research is to investigate temporally adaptive coded apertures (i.e., content-dependent exposure codes) for enhanced motion deblurring.


References

1. Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A., Tumblin, J.: Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing. ACM Trans. Graph. (Siggraph) 26 (2007) 69
2. Levin, A., Fergus, R., Durand, F., Freeman, W.T.: Image and Depth from a Conventional Camera with a Coded Aperture. ACM Trans. Graph. (Siggraph) 26 (2007) 70
3. Bando, Y., Chen, B.Y., Nishita, T.: Extracting Depth and Matte using a Color-Filtered Aperture. ACM Trans. Graph. (Siggraph Asia) 27 (2008) 1–9
4. Zhou, C., Nayar, S.K.: What are Good Apertures for Defocus Deblurring? In: IEEE International Conference on Computational Photography. (2009)
5. Zhou, C., Lin, S., Nayar, S.K.: Coded Aperture Pairs for Depth from Defocus. In: IEEE International Conference on Computer Vision (ICCV). (2009)
6. Liang, C.K., Lin, T.H., Wong, B.Y., Liu, C., Chen, H.: Programmable Aperture Photography: Multiplexed Light Field Acquisition. ACM Trans. Graph. (Siggraph) 27 (2008) 55:1–55:10
7. Nagahara, H., Zhou, C., Watanabe, T., Ishiguro, H., Nayar, S.K.: Programmable Aperture Camera using LCoS. In: Proceedings of the 11th European Conference on Computer Vision: Part VI. ECCV'10, Berlin, Heidelberg, Springer-Verlag (2010) 337–350
8. Grosse, M., Bimber, O.: Coded Aperture Projection. In: IPT/EDT. (2008) 1–4
9. Grosse, M., Wetzstein, G., Grundhöfer, A., Bimber, O.: Coded Aperture Projection. ACM Trans. Graph. 29 (2010) 22:1–22:12
10. Green, P., Sun, W., Matusik, W., Durand, F.: Multi-Aperture Photography. ACM Trans. Graph. (Siggraph) 26 (2007)
11. Horstmeyer, R., Oh, S.B., Raskar, R.: Iterative Aperture Mask Design in Phase Space using a Rank Constraint. Opt. Express 18 (2010) 22545–22555
12. Levin, A.: Analyzing Depth from Coded Aperture Sets. In: Proceedings of the 11th European Conference on Computer Vision: Part I. ECCV'10, Berlin, Heidelberg, Springer-Verlag (2010) 214–227
13. Raskar, R., Agrawal, A., Tumblin, J.: Coded Exposure Photography: Motion Deblurring using Fluttered Shutter. In: ACM SIGGRAPH 2006 Papers. SIGGRAPH '06, New York, NY, USA, ACM (2006) 795–804
14. Agrawal, A., Xu, Y.: Coded Exposure Deblurring: Optimized Codes for PSF Estimation and Invertibility. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2009) 2066–2073
15. Tai, Y.W., Kong, N., Lin, S., Shin, S.Y.: Coded Exposure Imaging for Projective Motion Deblurring. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2010) 2408–2415
16. Wallace, G.K.: The JPEG Still Picture Compression Standard. Commun. ACM 34 (1991) 30–44
17. Zhai, G., Zhang, W., Yang, X., Lin, W., Xu, Y.: Efficient Deblocking with Coefficient Regularization, Shape-Adaptive Filtering, and Quantization Constraint. IEEE Transactions on Multimedia 10 (2008) 735–745
18. Healey, G., Kondepudy, R.: Radiometric CCD Camera Calibration and Noise Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 16 (1994) 267–276
19. Hess, R.: An Open-Source SIFT Library. In: Proceedings of the International Conference on Multimedia. MM '10, New York, NY, USA, ACM (2010) 1493–1496