Pictures Posing Questions: The next steps in photography could blur reality

Patrick L. Barry

When a celebrity appears in a fan-magazine photo, there's no telling whether the person ever wore the clothes depicted or visited that locale. The picture may have been "photoshopped," we say, using a word coined from the name of the popular image-editing software, Adobe Photoshop.

In one new aspect of computational photography, a dome contains hundreds of precisely positioned flash units. A high-speed camera captures a frame as each flash fires in sequence. Computers can then relight the scene as they reconstruct it. Debevec/University of Southern California

But today's image processing is just a prelude. Imagine photographs in which the lighting in the room, the position of the camera, the point of focus, and even the expressions on people's faces were all chosen after the picture was taken. The moment that the picture beautifully captures never actually happened. Welcome to the world of computational photography, arguably the biggest step in photography since the move away from film.

Digital photography replaced the film in traditional cameras with a tiny wafer of silicon. While that switch swapped the darkroom for far more-powerful image-enhancement software, the camera itself changed little. Its aperture, shutter, flash, and other components remained essentially the same.

Computational photography, however, transforms the act of capturing the image. Some researchers use curved mirrors to distort their camera's field of view. Others replace the camera lens with an array of thousands of microlenses or with a virtual lens that exists only in software. Some use what they call smart flashes to illuminate a scene with complex patterns of light, or set up domes containing hundreds of flashes to light a subject from many angles. The list goes on: three-dimensional apertures, multiple exposures, cameras stacked in arrays, and more.

In the hands of professional photographers and filmmakers, the creative potential of these technologies is tremendous. "I expect it to lead to new art forms," says Marc Levoy, a professor of computer science at Stanford University.

Medicine and science could also benefit from imaging techniques that transcend the limitations of conventional microscopes and telescopes. The military is interested as well. The Defense Advanced Research Projects Agency, for example, has funded research on camera arrays that can see through dense foliage.

For consumers, some of these new technologies could improve family snapshots. Imagine fixing the focus of a blurry shot after the fact, or creating group shots of your friends and family in which no one is blinking or making a silly face. Or posing your children in front of a sunset and seeing details of their faces instead of just silhouettes.

Since the late 1990s, inexpensive computing power and improvements in digital camera technology have fueled research in all these areas of computational photography. Levoy says that scientists "look around and see more and more everyday people using digital cameras, and they begin to think, 'Well, this is getting interesting.'"

Robots to superheroes

Computational photography has roots in robotics, astronomy, and animation technology. "It's almost a convergence of computer vision and computer graphics," says Shree Nayar, professor of computer science at Columbia University.

SUN AND SHADOWS. A conventional camera poorly captures scenes with both extreme brightness and dark shadows (top). Using computational photography techniques, it's possible to create an image that preserves more detail (bottom). Computer Vision Lab., Columbia Univ.

Attaching a video camera to a robot is easy, but it's difficult to get the robot to distinguish objects, faces, and walls and to compute its position in a room. "The recovery of 3-D information from [2-D] images is kind of the backbone of computer vision itself," Nayar says.

Other important optics and digital-imaging advances have come from astronomy. In that field, researchers have been pushing boundaries to view ever-fainter and more-distant objects in the sky. In one technique, for example, the telescope's primary mirror continuously adjusts its shape to compensate for the twinkling effect created by Earth's atmosphere (SN: 3/4/00, p. 156: Available to subscribers at http://www.sciencenews.org/articles/20000304/bob10.asp).

Rapid progress in computer animation during the 1980s and 1990s provided another cornerstone of the new photography. The stunning visual realism of modern animated movies such as Shrek and The Incredibles comes from accurately computing how light bounces around a 3-D scene and ultimately reaches a viewer's eye (SN: 1/26/02, p. 56: http://www.sciencenews.org/articles/20020126/bob10.asp). Those calculations can be run in reverse—starting from the light that entered the lens of a camera and tracing it back—to deduce something about the real scene.

Such calculations make it possible to decode the often-distorted images taken by these unconventional cameras. "What the computational camera does is it captures an optically coded image that's not ready for human consumption," Nayar explains. By unscrambling the raw images, scientists can extract extra information about a scene, such as the shapes of the photographed objects or the unique way in which those objects reflect and absorb light.

Photo fusion

One powerful way to do computational photography is to take multiple shots of a scene and mathematically combine those images. For example, even the best digital cameras have difficulty capturing extreme brightness and darkness at the same time. Just look at an amateur snapshot of a person standing in front of a sunlit window.

Compared with a single photo, a sequence of shots taken with different exposures can capture a scene with a wide range of brightness, called the dynamic range. Both a bright outdoor scene and the person in front of it can have good color and detail when the set of images is merged. The method was described by Nayar and others at a conference in 1999.
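To make the merging step concrete, here is a minimal sketch of that kind of exposure fusion in Python. It is not Nayar's published algorithm: it assumes a linear sensor response and a simple hat-shaped weighting, whereas real high-dynamic-range pipelines typically first recover the camera's nonlinear response curve. The function name and weighting are illustrative.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Fuse an exposure bracket into one high-dynamic-range image.

    images: list of float arrays scaled to [0, 1], all the same shape.
    exposure_times: relative shutter time for each frame.
    Assumes a linear sensor response, for simplicity.
    """
    radiance_sum = np.zeros_like(images[0])
    weight_sum = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        # Trust well-exposed pixels most; pixels near pure black or
        # pure white carry little reliable information.
        weight = 1.0 - np.abs(2.0 * img - 1.0)
        radiance_sum += weight * (img / t)  # per-frame radiance estimate
        weight_sum += weight
    return radiance_sum / np.maximum(weight_sum, 1e-6)
```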

In a similar way, a series of frames in which the focus varies can produce a single, sharp image of the entire scene. Both these types of mergers can be arduously performed with standard image-editing software, but computational photography automates the process.
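A focal-stack merge can be sketched the same way. The version below, again only illustrative, keeps each pixel from whichever frame is locally sharpest, judging sharpness by smoothed Laplacian energy; published methods use more robust sharpness measures and blend across seams.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(frames):
    """All-in-focus composite from a focal stack of grayscale frames."""
    stack = np.stack(frames)  # shape (n_frames, height, width)
    # Local sharpness: squared Laplacian response, locally averaged.
    sharpness = np.stack(
        [uniform_filter(laplace(f) ** 2, size=9) for f in frames]
    )
    best = np.argmax(sharpness, axis=0)  # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```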

A related technique fuses a series of family portraits into a single image that's free of blinking eyes and unflattering expressions. After using a conventional camera to take a set of pictures of a group of people, the photographer might feed the pictures into a program described during a 2004 conference on computer graphics by Michael Cohen and his colleagues at Microsoft Research in Redmond, Wash.

The user indicates the photos in which each face looks best, and the software then splices them into a seamless image that makes everyone attractive at the same time—even though the depicted moment never happened. This software is now being offered with a high-end version of Microsoft's Windows Vista.
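In the simplest possible terms, the compositing step amounts to copying user-selected regions from each frame into one image. The sketch below is a naive paste under that assumption; Cohen's system additionally optimizes the seams between regions so the splice is invisible.

```python
import numpy as np

def splice_group_shot(frames, masks):
    """Naive group-shot composite from a fixed-camera burst.

    frames: list of (H, W, 3) arrays shot from the same vantage point.
    masks: parallel list of boolean (H, W) arrays marking the region
    (e.g., a face) to take from each frame.
    """
    out = frames[0].copy()
    for frame, mask in zip(frames[1:], masks[1:]):
        out[mask] = frame[mask]  # paste the preferred region
    return out
```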

3-D FROM A DOUGHNUT. Photographing a person's face with a cone-shaped mirror in front of the lens creates a distorted, doughnut-shaped image (left). The cone provides two extra perspectives of the face on opposite sides of the center point, providing enough information to construct a 3-D model (right). Computer Vision Lab., Columbia Univ.

Want that family photo in 3-D? Nayar's group takes three-dimensional pictures with a normal camera by placing a cone-shaped mirror, like a cheerleader's megaphone, in front of the lens. Because some of the light from an object comes directly into the lens and the rest of the light first bounces off spots inside the cone, the camera captures images from multiple vantage points. From those data, computer software constructs a full 3-D model, as Nayar's group explained at the SIGGRAPH meeting last year in Boston.

A mirrored cone on a video camera might be especially useful to capture an actor's performance in 3-D, Nayar says.

Another alteration of a camera's field of view makes it possible to shoot a picture first and focus it later. Todor Georgiev, a physicist working on novel camera designs at Adobe, the San Jose, Calif.–based company that produces Photoshop, has developed a lens that splits the scene that a camera captures into many separate images.

Georgiev's group etched a grid of square minilenses into a lens, making it look like an insect's compound eye. Each minilens creates a separate image of the scene, effectively shooting the scene from 20 slightly different vantage points. Software merges the mini-images into a single image that the photographer can focus and refocus at will. The photographer can even slightly change the apparent vantage point of the camera. The team described this work last year in Cyprus at the Eurographics Symposium on Rendering.

In essence, the technique replaces the camera's focusing lens with a virtual lens.

Light motifs

The refocusing trick made possible by Georgiev's insect-eye lens can also be achieved by placing a tiny array of thousands of microlenses inside the camera body, directly in front of the sensor that captures images.

Conceptually, the microlens array turns the digital sensor into one in which each pixel has been replaced by a tiny camera. This enables the camera to record information about the incoming light that traditional cameras throw away. Each pixel in a normal digital camera receives light focused into a cone shape from the entire lens. Within that cone, the light varies in important ways, but normal cameras average the cone of light into a single color value for the pixel.

By replacing each pixel with a tiny lens, Levoy's research team developed a camera that can preserve this extra information. Mathematically, say the researchers, the change expands the normal 2-D image into a "light field" that has four dimensions. This light field contains all the information necessary to calculate a refocused image after the fact. Ren Ng, now at Refocus Imaging in Mountain View, Calif., explained the process at a 2005 conference.
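The refocusing computation itself can be summarized as "shift and add": each viewpoint recorded by the microlenses is shifted in proportion to its offset from the center of the lens, then all the viewpoints are averaged. Here is a bare-bones sketch of that idea; the names are illustrative, and the real system handles resampling and calibration far more carefully.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(light_field, alpha):
    """Shift-and-add refocusing of a 4-D light field.

    light_field: array of shape (U, V, H, W), one sub-aperture image
    per viewpoint (u, v). alpha sets the virtual focal plane; alpha = 0
    reproduces the focus chosen at capture time.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each viewpoint in proportion to its offset from
            # the aperture center, then average all viewpoints.
            du = u - (U - 1) / 2.0
            dv = v - (V - 1) / 2.0
            out += nd_shift(light_field[u, v], (alpha * du, alpha * dv))
    return out / (U * V)
```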

Capturing more information about incoming light waves can also create powerful new kinds of scientific and medical images. For example, Stephen Boppart and his colleagues at the University of Illinois at Urbana-Champaign create 3-D microscopic photos by processing the out-of-focus parts of an image.

The team devised software to examine how a tissue sample, for instance, bends and scatters light. In the February 2007 Nature Physics, the researchers describe how the device uses that information to discern the structure of the tissue. "What we've done is take this blurred information, descramble it, and reconstruct it into an in-focus image," Boppart says.

ARTIFICIAL LIGHTING. By filming a person inside a dome containing hundreds of flashes (left), a filmmaker can re-light the scene afterward using a computer to calculate how the person would look under any combination of colored lights (right). Debevec/University of Southern California

In computational photography, the flash becomes more than a simple pulse of light. For example, a room-size dome built by Paul Debevec of the University of Southern California in Los Angeles and his colleagues makes it possible to redo the lighting of a scene after it's been shot. Hundreds of flash units mounted on the dome fire one at a time in a precise sequence that repeats dozens of times per second. A high-speed camera captures a frame for every flash.

The result is complete information about how the subject reflects light from virtually every angle. Software can then compute exactly how the scene would look in almost any lighting environment, the researchers reported at the 2006 Eurographics Symposium on Rendering. This method is particularly promising for making films.
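The relighting step rests on the fact that light is additive: a photograph taken under several lights is the sum of photographs taken under each light alone. Given the dome's one-flash-at-a-time frames, any new lighting environment is just a weighted sum of those frames, as this illustrative sketch shows (the array shapes and names are assumptions, not Debevec's actual code).

```python
import numpy as np

def relight(basis_frames, light_colors):
    """Re-light a subject from one-flash-at-a-time photographs.

    basis_frames: array (n_lights, H, W, 3), one photo per flash.
    light_colors: array (n_lights, 3), desired RGB weight per flash.
    Because light transport is additive, scaling each frame by its
    flash's new color and summing approximates the subject under
    the new lighting environment.
    """
    return np.einsum('nhwc,nc->hwc', basis_frames, light_colors)
```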

What is reality?

With all this manipulative power come questions of authenticity. The more that photographs can be computed or synthesized instead of simply snapped, the less confident a viewer is that a picture can be trusted.

"Certainly, all of us have a certain emotional attachment to things that are real, and we don't want to lose that," Nayar says. For example, to get a perfect family portrait, one might prefer that nobody had blinked. But is a bad shot better than a synthesized moment?

Whether film or digital, photographic images have always departed from reality to some degree. "And every generation, I believe, will redefine how much you can depart," Nayar says. "What was completely unacceptable 20 years ago has become more acceptable today."

Perhaps 20 years from now, when a photographer changes a picture's vantage point, people will still consider the scene to be real. But using a computer to change the clothes that a person in the image is wearing might be going too far, Nayar proposes.

Often, the goal of computational photography isn't to depart from reality but to create a closer facsimile of it. For example, someone looking at people standing in front of a sunset can see the faces clearly and can focus on any part of the scene. A normal photograph, with its dark silhouettes and fixed focus, offers a viewer less than reality.

So, a manipulated image can be "closer, by some subjective argument, to what the real world is for a person looking at it," Levoy says.

It's difficult to say which of the many technologies under the umbrella of computational photography will ever reach the consumer market. The room-size dome containing hundreds of flash units will almost certainly remain in the realm of specialized photographers and movie studios. Other techniques may be suitable for everyday use, but whether and when they reach the market will depend on the vagaries of business and marketing.

In whatever form computational photography becomes commonplace, the people who adopt it over conventional image making will take pictures that capture more of what they actually see, and sometimes what never was at all.

Sources:

Stephen A. Boppart

University of Illinois, Urbana-Champaign

405 N. Mathews Avenue

Urbana, IL 61801

Michael F. Cohen

Microsoft Research

One Microsoft Way

Redmond, WA 98052

Paul Debevec

Institute for Creative Technologies

University of Southern California

Los Angeles, CA 90089

Todor Georgiev

Adobe Systems

345 Park Avenue

San Jose, CA 95110-2704

Marc Levoy

Stanford University

Gates Bldg 3B-366

Stanford, CA 94305-9035

Shree Nayar

Columbia University

2960 Broadway

New York, NY 10027-6902

David Salesin

Adobe Systems

801 N. 34th Street

Seattle, WA 98103