ENTERTAINMENT COMPUTING

Moving Beyond HD

Michael Macedonia

Technology’s rapid advance is already leaving high-definition television behind.

The digital-imaging revolution is rapidly changing our expectations of traditional media such as photography, television, and film. The ability to capture and generate high-definition images with digital cameras continues to advance. Meanwhile, increasing numbers of consumers are buying display technologies such as high-definition liquid-crystal-display monitors. Analysts at DisplaySearch, an industry research firm based in Texas, project that sales of HDTV sets will reach $37 billion in 2010, up from an estimated $24 billion this year.

Current consumer HD systems and computer monitors typically display at most 1,920 × 1,080 pixels, or roughly 50 dots per inch (dpi). Moreover, the cost of LCD and plasma displays quickly becomes a limiting factor. A 100-inch LCD display will likely cost $100,000, far above what the consumer market will pay. These displays also do not yet approach the resolution and dynamic range of film or magazine-quality print: 1,200 and 300 dpi, respectively. However, researchers and commercial innovators have made major advances over the past five years that will soon enable the low-cost deployment of large, immersive display systems for homes and businesses.
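The roughly 50 dpi figure follows directly from resolution and screen size. A quick sketch of the arithmetic (the 44-inch diagonal is an assumed, illustrative set size, not one quoted here):

```python
import math

def dots_per_inch(h_px: int, v_px: int, diagonal_in: float) -> float:
    """Pixel density of a display, from its resolution and diagonal size."""
    diagonal_px = math.sqrt(h_px ** 2 + v_px ** 2)
    return diagonal_px / diagonal_in

# A 1,920 x 1,080 panel at an assumed 44-inch diagonal lands near the
# roughly 50 dpi cited above; smaller panels score proportionally higher.
print(f"{dots_per_inch(1920, 1080, 44):.0f} dpi")  # 50 dpi
```

The same formula shows why such panels fall so far short of 300-dpi magazine print: at a fixed pixel count, density rises only as the screen shrinks.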

HOW MUCH CAN WE SEE?

The resolution capability of the human eye provides the ultimate limiting factor for displays. Unfortunately, past estimates of this value varied considerably because we lacked a good model of the human eye. In 1998, however, Michael Deering proposed that the resolution of each eye topped out at 16 million pixels ("A Photon Accurate Model of the Human Eye," www.michaelfrankdeering.com/Projects/EyeModel/EyeSIG05S.pdf).

Compare this value to what we see in a movie theater. Studios now regularly scan movies from film to digital intermediate formats for editing. The current state of the practice is 4K, a resolution of 4,096 × 2,160, or about 9 million pixels.

Roger Clark, a researcher for the US Geological Survey, points out that reaching a human's maximum visual acuity requires 530 pixels per inch for a 20 × 13.3-inch display viewed at 20 inches, a total of 74 million pixels. Clark also estimates that devices will need roughly 576 million pixels to display a 120-degree field of view at the limits of human vision (http://clarkvision.com/imagedetail/eye-resolution.html).

One device can already capture images at this resolution. The Gigapxl Project's ultra-high-resolution camera took digital imaging to the limit in 2001: "We concluded that, consistent with the largest practicable film roll format (9-inch × 18-inch), we could expect to achieve a resolution equivalent to 1,000 megapixels. Hence came the name Gigapxl. With recent developments, this figure approaches 4,000 megapixels, but the name remains unchanged" (www.gigapxl.org/project.htm). The results are stunning but as yet have been produced only on 96 × 192-inch photographic paper.
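Clark's two estimates, and the Gigapxl print figure, can each be checked with a few lines of arithmetic. This is a sketch under stated assumptions: the 0.3-arcminute-per-pixel value is an assumption chosen to be consistent with Clark's 530-ppi-at-20-inches figure, and the print calculation assumes the 4,000 megapixels are spread uniformly over the 96 × 192-inch paper.

```python
# Clark's display estimate: 530 pixels per inch across a
# 20 x 13.3-inch display viewed at 20 inches.
display_pixels = (530 * 20) * (530 * 13.3)
print(f"{display_pixels / 1e6:.1f} million pixels")  # 74.7 million

# Clark's field-of-view estimate: a 120-degree field in both
# dimensions at an assumed 0.3 arcminute per pixel.
pixels_per_side = 120 * 60 / 0.3  # degrees -> arcminutes -> pixels
print(f"{pixels_per_side ** 2 / 1e6:.0f} million pixels")  # 576 million

# Gigapxl: 4,000 megapixels spread over a 96 x 192-inch print.
print_ppi = (4_000e6 / (96 * 192)) ** 0.5
print(f"{print_ppi:.0f} pixels per inch")  # about 466
```

Even the 4K digital-intermediate standard, at about 9 million pixels, sits an order of magnitude below the first of these numbers.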

PUTTING IT ON DISPLAY

Several innovations achieved this decade have given developers the ability to display very-high-resolution images and graphics: the advancement of graphics processing units from Nvidia and ATI, the development of graphics clustering with software such as Chromium (http://graphics.stanford.edu/papers/cr/), and the enormous performance gains in network and system buses. Moreover, the GPU manufacturers developed hardware-based methods for using multiple GPUs for large-screen, high-pixel-count rendering. Although a single GPU can display only four million pixels, GPUs can scale higher when working in parallel. Developers need a graphics cluster to make real use of the ultimate desktop display.

IDTech's 9.2 million-pixel LCD is the world's highest-resolution display. It boasts a density of 204 pixels per inch and roughly 12 times the pixel count of a typical XGA monitor (1,024 × 768, or nearly 800,000 pixels). The $10,000 monitor has a 22.2-inch screen and offers the size and resolution required for high-end medical applications such as radiology, which demands resolutions that match film x-rays (www.idtech.co.jp/en/index.html).

Filling a room, however, has always required a projector, and this remains true in digital cinema. Texas Instruments' 2K Digital Light Processing Cinema technology, which has been deployed in projectors from Christie, Barco, and NEC, now claims 1,195 installations worldwide. The projector uses micromechanical mirrors to provide bright, high-contrast displays for large throw distances. In a joint venture, Christie and AccessIT plan to use the technology to fill roughly 2,000 US screens by next year. Meanwhile, Sony has developed a 4K system to challenge TI's movie-house dominance.

Figure 1. Panoramic virtual window. This Museum of Modern Art display used four overlapped projectors to create a 40 × 13-foot, 5-megapixel virtual window that displays a stunningly beautiful and realistic view of the Grand Canyon. Photo courtesy of Scalable Display Technologies (scalabledisplay.com).

Outside the digital cinema arena, the military simulation community, particularly the flight-simulation contingent, has spearheaded demand for very-high-resolution panoramic displays. The US Air Force found that current display systems do not provide a pilot with adequate visual definition to identify other aircraft, ground vehicles, roads, and bridges at realistic ranges, such as spotting a road intersection from five miles away in an aircraft moving at 200 miles per hour. To address this issue, the Air Force Research Lab Human Effectiveness Directorate has been working since the mid-1990s to develop ultrahigh-resolution projector technologies.

In June 2001, Evans and Sutherland, a computer graphics firm that develops products used primarily by the military and large industrial firms for training and simulation, delivered the directorate's first ultrahigh-resolution laser projector prototype. The directorate will use the 1,080-vertical-line × 5,000-horizontal-line, 60-Hz, noninterlaced, monochrome device to support visual-research activities.

E&S now produces the Digistar 3 Laser, a full-dome digital theater system that offers the world's highest-resolution video projection. The Digistar 3 Laser uses the E&S Laser Projector (ESLP) to bathe the dome with more than 16 million pixels of video. The ESLP also offers an expanded color space: 12 bits per color channel supplant the standard 8 bits, producing 36-bit color instead of 24-bit color. According to an E&S spokesperson, "Digistar 3 Laser's ability to reproduce color exceeds any other visual display medium, including motion picture film" (www.es.com/products/digital_theater/digistar3-laser.asp).
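The jump from 24-bit to 36-bit color amounts to 12 bits per channel across the three RGB channels, so the number of distinct representable colors grows by a factor of 2^12. A quick check of the arithmetic:

```python
channels = 3  # red, green, blue

standard_colors = 2 ** (channels * 8)   # 24-bit color, 8 bits/channel
eslp_colors = 2 ** (channels * 12)      # 36-bit color, 12 bits/channel

print(f"{standard_colors:,} colors")           # 16,777,216 colors
print(f"{eslp_colors:,} colors")               # 68,719,476,736 colors
print(f"{eslp_colors // standard_colors:,}x")  # 4,096x
```

The extra bits matter less for the count of colors than for smooth gradients: with 4,096 levels per channel instead of 256, banding in skies and dome-filling washes becomes far less visible.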

DO IT AT HOME

The E&S display reaches the resolution level Michael Deering speculated would be needed for immersive reality. But laser projectors will likely remain absent from the family living room for the foreseeable future. CRT and color-wheel projectors are ubiquitous and can cost as little as several hundred dollars for consumer models. Multiple projectors behaving as a single imaging device offer an intermediate alternative, one that has been used since the early 1980s to create large panoramic displays and walk-in synthetic environments such as Cave Automatic Virtual Environments, or CAVEs.

However, to avoid seams between images, the projections must overlap, which creates bright spots. "Display manufacturers use edge-blending to correct this, but to get it right, these need highly precise optical components, custom lenses, and expensive manual labor to set up and maintain optimal quality in geometry, intensity, and color balance. Typically, the projectors' analog drift, lamp shifts, or other time-varying nonlinearities make a perfect setup take as much as four hours per projector," observed Rajeev J. Surati, cofounder of Scalable Display Technologies.

In the late 1990s, researchers at the University of North Carolina, MIT, and Stanford started to explore how to exploit graphics computing and feedback from digital cameras to automatically compensate for these issues. According to UNC's researchers (www.cs.unc.edu/~welch/media/pdf/Vis99_multiproj.pdf), "The solution is to overlap the two images by a significant amount and modify the pixels in the overlap region to make the overlap as invisible as possible. The reason this works is that now any slight projector misalignment or lens aberration only reveals itself as a slight blurring of the image and not as a sharp seam or gap." This approach worked not only for flat surfaces but also in ordinary rooms with uneven surfaces or misaligned projectors.

The next challenge involved finding how to inexpensively compensate for the nonlinear brightness of consumer projectors from companies such as Proxima. UNC's Henry Fuchs and colleagues adapted the high-dynamic-range imaging method developed by Paul Debevec and Jitendra Malik, while using the same inexpensive camera for the geometry correction.

Scalable Display Technologies (www.scalabledisplay.com), an MIT spin-off, is commercializing the technology. The company demonstrated this capability at SIGGRAPH this year in Boston and at the Museum of Modern Art with a display that used four overlapped projectors to make the 40 × 13-foot, 5-megapixel virtual window shown in Figure 1.
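The overlap-and-blend idea UNC describes can be sketched as a pair of intensity ramps: across the shared strip, one projector's contribution fades out as the other's fades in, so total brightness stays roughly constant and any misalignment shows up only as slight blur. A minimal sketch in plain Python; a production system such as the camera-calibrated setups described above would also warp the imagery geometrically and correct each projector's nonlinear (gamma) response.

```python
def blend_weights(width_px: int, overlap_px: int):
    """Per-column intensity weights for two side-by-side projectors
    whose images share an overlap_px-wide strip of the screen."""
    left = [1.0] * width_px
    right = [1.0] * width_px
    for i in range(overlap_px):
        t = i / (overlap_px - 1)  # 0.0 at start of strip, 1.0 at end
        left[width_px - overlap_px + i] = 1.0 - t  # left fades out
        right[i] = t                               # right fades in
    return left, right

left, right = blend_weights(1024, 128)

# Inside the shared strip, the two contributions sum to ~1.0,
# hiding the seam instead of producing a double-bright band.
seam = [l + r for l, r in zip(left[-128:], right[:128])]
print(min(seam), max(seam))
```

A camera watching the wall closes the loop: it measures where the projected images actually land and how bright they are, then the ramps and warps are recomputed automatically rather than aligned by hand.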

Now all the pieces have come together to create the ultimate high-resolution video display: the digitization of media, accessibility of inexpensive computer systems, and creation of software such as Chromium that can generate large numbers of pixels; the manufacture of inexpensive projection systems that can display those pixels; and the availability of inexpensive graphics processors and Web cameras that provide automatic edge-blending. ■

Michael Macedonia is a member of Computer’s editorial board. Contact him at [email protected].

Computational Photography
IEEE CG&A, March/April 2007

The digital photography revolution has greatly facilitated the way in which we take and share pictures. However, it has mostly relied on a rigid imaging model inherited from traditional photography. Computational photography and video go one step further and exploit digital technology to enable arbitrary computation between the light array and the final image or video. Such computation can overcome limitations of the imaging hardware and enable new applications. It can also enable new imaging setups and postprocessing tools that empower users to enhance and interact with their images and videos. New visual media can therefore be invented, and tedious tasks that were once the domain of talented specialists can now be performed with a single mouse click. The field is by nature interdisciplinary and draws from computer graphics, machine vision, image processing, visual perception, optics, and traditional photography. This special issue will present innovative results in computational photography and video.

Guest Editors: Rick Szeliski, Microsoft Research; Frédo Durand, MIT-CSAIL

Computer, October 2006