Saturday 28 January 2017

Shree Nayar Transformed Smartphone Photography, Now He's Looking to the Future

A look at an impressive 30-year career and the work that's helping to shape the camera of the future.


By Corinne Iozzio


Among the hundreds of projects Shree Nayar and his team at the Columbia Vision Lab have taken on is a rolling 360-degree camera. Developed in the late 1990s, the design is a precursor to many of the 360-degree VR cameras we see today. (Photo: Marius Brugge)
Standing in front of a projection screen in his office at Columbia University’s School of Engineering and Applied Science, Shree Nayar points to a close-up of a human eye. At first glance, it’s nothing remarkable: just a healthy brownish color with striations zigzagging between the edge of the iris and the pupil. But the cornea of the eye, Nayar explains, has a thin film of tears on it that makes it a reflective surface, a mirror. Straight on, that mirror is a circle; at an angle, it’s an ellipse; but it’s always a mirror.
Nayar clicks to the next slide, an inside-out image of that same eye. “You can go back to the picture and figure out exactly what’s falling on the full mirror, which is a wide-angle view of the world around the person.” The subject’s surroundings are apparent in the image, but algorithms developed by Nayar’s lab can isolate the specific thing the person is focusing on. This research, first published in 2004, is only one example of how Nayar believes we can develop new photographic technologies that will reveal our world in ways we’ve never seen before.
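A rough sense of the geometry behind this trick: model the cornea as a convex mirror of known shape and reverse-trace each camera pixel's ray off its surface. The sketch below does this for a simple sphere seen by an orthographic camera; the published work models the cornea as an ellipsoid and estimates its pose, so every name and number here is illustrative, not the lab's actual code.

```python
import numpy as np

CORNEA_RADIUS = 7.8e-3  # typical human corneal radius of curvature, in meters

def reflected_ray(px, py, center, radius=CORNEA_RADIUS):
    """Reverse-trace one pixel: return the direction toward the scene
    point whose light bounces off the corneal mirror into the camera."""
    dx, dy = px - center[0], py - center[1]
    r2 = dx * dx + dy * dy
    if r2 >= radius * radius:
        return None  # this pixel looks past the edge of the mirror
    dz = np.sqrt(radius * radius - r2)        # front (camera-facing) hemisphere
    normal = np.array([dx, dy, dz]) / radius  # outward surface normal
    view = np.array([0.0, 0.0, -1.0])         # camera looks down the -z axis
    # Mirror reflection: r = v - 2 (v . n) n
    return view - 2.0 * np.dot(view, normal) * normal

# Sweeping reflected_ray() over every pixel that sees the cornea yields an
# environment map: the wide-angle view of the world around the person.
```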
Nayar, 53, heads the Columbia Vision Laboratory, where he has been a pioneer in the discipline of computational imaging, or computational photography. Conventional digital photography largely emulates the original camera obscura, which uses a lens to project a 3-D scene onto a 2-D plane. Computational imaging uses digital processes and novel optics to capture light in ways that would be garbled or unrecognizable to our eyes; following capture, it’s the sensor or image processor’s job to unscramble the data to reveal a final image. The approach opens up features and functions that would not be possible using traditional photography.
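A toy instance of that capture-then-decode pipeline, assuming the "scrambling" is simply a known optical blur: a Wiener filter inverts it in software. Real computational cameras use far more elaborate coded optics and reconstruction algorithms, so treat this as a minimal sketch of the idea.

```python
import numpy as np

def wiener_deconvolve(coded, psf, noise_power=1e-3):
    """Recover an image from a capture blurred by a known point-spread
    function (assumed origin-centered, zero-padded to the image size)."""
    H = np.fft.fft2(psf, s=coded.shape)   # optical transfer function
    Y = np.fft.fft2(coded)                # the coded (blurred) capture
    # Wiener filter: H* / (|H|^2 + k) trades sharpness against noise gain.
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(W * Y))
```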
In an early prototype of a gigapixel camera, a sensor array captures light through a glass orb, collecting the light fragments that will make up a final image. (Photo: Marius Brugge)
Computational imaging is a technology that’s been trickling into the mainstream. Cameraphone sensors have done simple facial or object recognition and automatically corrected color and distortion for years, and HDR capabilities—many of which are based on a 15-year research collaboration between the Columbia Vision Lab and Sony—have quickly become the norm. Dual-sensor camera phones like the LG G5 and the iPhone 7 Plus capture depth information, which allows after-the-fact refocusing. And Canon’s new EOS 5D Mark IV features a novel dual-pixel Raw mode, which captures separate image info from two photodiodes on each pixel; the extra data allows photographers to subtly adjust focus and correct ghosting within Canon software after the picture is taken.
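The core of an HDR merge like the ones described above fits in a few lines, assuming linear (RAW-like) pixel values from a bracketed burst. The hat-shaped weighting below is one common, simple choice, not Sony's or the Columbia lab's actual method.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """images: list of HxW arrays scaled to [0, 1]; exposure_times: seconds.
    Returns a weighted estimate of scene radiance (the HDR image)."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # favor well-exposed mid-tones
        num += w * (img / t)               # radiance estimate per exposure
        den += w
    return num / np.maximum(den, 1e-8)
```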
But Nayar wants to develop technologies that capture all the image data a photographer might need “before the damage has been done,” he says—that is, before the image is stored to memory. Today’s features and tricks are just the tip of the iceberg. Ultimately, this vast volume of data will let us see the world in new ways.
Earning a reputation as one of the top imaging scientists in the world wasn’t the career Nayar planned for while growing up in New Delhi. Distracted by dreams of becoming a professional cricket player, he nevertheless shadowed his father, an engineer. Even as he pined to be on the cricket pitch with his friends, Nayar believes, the engineering bug caught him through osmosis alone. He enrolled in engineering college in India and moved to the U.S. in 1985, eventually earning a master’s degree from North Carolina State University and a Ph.D. from Carnegie Mellon.
With enough light, the prototype Eternal Camera can continuously power itself thanks to an image sensor that doubles as a solar panel. (Photo: Marius Brugge)
Nayar had planned to do his graduate work in robotics—projects he’d completed at college in India were a big hit—but he found that the more advanced American institutions had already tackled many of the ideas he’d intended to pursue. Considering his options, he identified a need in the field: He had often found it odd that the cameras researchers used with robots were the very same cameras people used to document their family vacations. Systems that went beyond the conventional camera could provide robots with useful information about the world—and perhaps, it occurred to him, they could hold information that’s useful to humans too. So he pivoted.
Over time, Nayar discovered that his passion for his research lay not in robotics per se but in light. Again he pivoted. “I realized the thing that excited me about vision was not necessarily the [machine] intelligence, but light—the aesthetics of light, the way light manifests in not just images, but in art and in life,” he says. “Light is just so beautiful. It does things that are just magical.”
At the lab, Nayar’s team of about 10 students and postdocs explores what we can do with light, what we can capture, and what we can uncover that is not immediately apparent. “He has the ability to see things in a way that’s quite different,” says P. Anandan, a vice president at Adobe and head of the company’s research lab in India, who has known Nayar for more than 20 years. “Images are made by sensors and light, and [Shree] really understands the process of sensing light, and light itself, at a deep level.”
The lab explores both the functional (features like HDR) and physical (smaller and novel camera designs) implications of computational imaging. “One of the reasons the camera has shrunk into what sits in your phone is that there are lots of computations happening which allow you to capture these kinds of coded images,” Nayar explains.
You can draw a straight line between the work that’s come from Nayar’s lab and now-commonplace technologies such as compact 360-degree cameras and smartphone HDR sensors. In fact, Richard Szeliski, director of computational photography at Facebook Research, regards the HDR research Nayar did with Sony as the “gold-standard technique and an incredibly elegant piece of work.”
The Eternal Camera’s image sensor doubles as a solar panel, providing continuous energy. (Photo: Marius Brugge)
One of Nayar’s most notable achievements in reimagining the physical camera was producing a compact gigapixel camera in 2011. Previous gigapixel prototypes were a half-meter or more in size, but, through clever optics and computational algorithms, the team was able to create a gigapixel shooter that fits in a shoebox. The camera has two main components: a hemispheric bowl lined with hundreds of off-the-shelf mobile image sensors, and a glass ball about the size of a grapefruit that serves as the lens. The ball nestles inside the sensor array. To fill in the tiny gaps between the sensors, the team fashioned goosebump-like protrusions on the rear of the ball that redirect light that would otherwise be missed, so the array receives a complete image of more than a billion pixels.
The massive resolution of these images allows photographers to find details that they never knew were there. Zooming in on one shot taken from Governors Island in New York Harbor reveals birds and planes flying miles in the distance. By reducing resolution slightly, the setup could shrink even further: You could capture one billion pixels with a lens the size of a tennis ball, 100 million pixels with one no bigger than a lime.
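Those sizes can be sanity-checked against the diffraction limit, which caps how many distinct points a lens of a given diameter can resolve. The back-of-envelope sketch below uses the Rayleigh criterion and assumed lens diameters and field of view; real ball-lens cameras land well below these ideal bounds because of aberrations and sensor gaps, so treat the outputs as upper bounds, not the lab's figures.

```python
import math

def diffraction_limited_pixels(lens_diameter_m, fov_deg=60.0, wavelength_m=550e-9):
    """Rough upper bound on resolvable image points for a simple lens."""
    theta = 1.22 * wavelength_m / lens_diameter_m  # Rayleigh criterion, radians
    spots_across = math.radians(fov_deg) / theta   # resolvable points per axis
    return spots_across ** 2

# Illustrative diameters: grapefruit ~10 cm, tennis ball ~6.7 cm, lime ~5 cm.
for name, d in [("grapefruit", 0.10), ("tennis ball", 0.067), ("lime", 0.05)]:
    print(f"{name}: ~{diffraction_limited_pixels(d):.1e} pixels (upper bound)")
```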
The lab’s current projects are finding novel ways to capture moments we might not otherwise see. A proof-of-concept called the Eternal Camera aims to create self-powered video cameras by building an image sensor that doubles as a solar panel. The prototype uses an array of 1,200 photoreceptors, which both measure light and harvest energy. For a well-lit indoor scene, the camera produces an image every second, forever, without requiring any external power. Nayar envisions this technology as a means to record wildlife without the burden of batteries or an obtrusive solar panel.
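The arithmetic behind a self-powered camera is a simple energy budget: the sustainable frame rate is whatever the harvested power can pay for. A toy model, with purely illustrative numbers rather than the lab's measurements:

```python
def sustainable_fps(harvest_power_w, energy_per_frame_j):
    """Frames per second the harvested power can sustain indefinitely."""
    return harvest_power_w / energy_per_frame_j

# Hypothetical figures: a small photodiode array harvesting ~1 microwatt
# under indoor lighting, and a frame (readout plus storage) costing
# ~1 microjoule, which pencils out to roughly the one frame per second
# the article describes.
fps = sustainable_fps(harvest_power_w=1e-6, energy_per_frame_j=1e-6)
print(f"~{fps:.1f} frame(s) per second, indefinitely")
```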
Nayar and his team are also developing flexible cameras that could wrap around lampposts, car fenders, and most other surfaces. These so-called Flexible Sheet Cameras consist of an elastic sheet of lenses, each paired with an image sensor. As the sheet bends, the lenses deform, widening their fields of view to cover the gaps that would open up if you bent an array of rigid lenses the same way; the result is a continuous image with no blind spots. Because the raw capture would appear horribly warped to the human eye, computational algorithms unscramble it to reveal the scene.
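The geometry that makes the elastic sheet work can be sketched directly: bending a sheet of lens pitch p to radius R splays neighboring optical axes apart by roughly p/R radians, so each lens must widen its field of view by about that angle to keep coverage continuous. The numbers below are illustrative, not the prototype's specifications.

```python
import math

def extra_fov_deg(lens_pitch_m, bend_radius_m):
    """Additional per-lens field of view needed at a given bend radius,
    approximating the splay between adjacent optical axes as pitch/radius."""
    return math.degrees(lens_pitch_m / bend_radius_m)

# A hypothetical 2 mm lens pitch wrapped to a 5 cm, lamppost-scale radius:
print(f"{extra_fov_deg(2e-3, 5e-2):.1f} degrees of extra coverage per lens")
```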
The lens array on the Flexible Sheet Camera deforms when bent, which could allow for cameras that wrap around anything from lampposts to car fenders. (Photo: Marius Brugge)
Ironically, photography isn’t Nayar’s passion. “I’m the last guy to buy a new camera,” he admits. But his self-described obsession with the aesthetics of light has cemented him as a vital part of the future of photography. And seeing research applied in the real world is a dream come true for an academic. “It’s just wonderful to say, ‘Oh, you know that phone you’re using? I’ll tell you a little secret...’”
And, as the eyes of the industry turn toward virtual reality and 360-degree immersive experiences, the transfer of technology from lab to real life will inevitably continue. For Nayar, the holy grail of computational imaging is a simple idea wrapped in a complex problem. “You’d like to take a snapshot from a single point in space, and after you’ve taken that snapshot, you would like to be able to see the world in its full glory—maybe billions of pixels,” he says. By better understanding light and how it bounces off, in, and around objects, computational imaging could turn that one simple photograph into a complete 3-D world in which the viewer can roam. That’s Nayar’s sweet spot.
The photo industry, which has exploded to include newer and broader categories of devices, is paying close attention. “Think about Shree’s work on physics-based models for computer graphics, studying how light interacts with objects, and you can see how this could impact emerging technologies, particularly in the virtual reality space,” says Dawn Airey, CEO of Getty Images.
As far as Nayar is concerned, even if his work isn’t applied to products, it still has value: It helps researchers figure out where to go next, what to try, and how traditional and computational photography may one day complement one another. “Spending time with Shree is like seeing into the future of photography,” Airey says. “Speaking with him sets your mind reeling with the possibilities his work can unlock.”
http://www.popphoto.com/shree-nayar-transformed-smartphone-photography-now-hes-looking-to-future#page-9
