Monday 7 August 2017

GOOGLE’S NEW ALGORITHM PERFECTS PHOTOS BEFORE YOU EVEN TAKE THEM

ELIZABETH STINSON


TAKING INSTAGRAM-WORTHY PHOTOS is one thing; editing them is another. Most of us just upload a pic, tap a filter, tweak the saturation, and post. If you want to make a photo look good without the instant gratification of the Reyes filter, enlist a professional. Or a really smart algorithm.
Researchers from MIT and Google recently showed off a machine learning algorithm capable of automatically retouching photos just like a professional photographer. Snap a photo and the neural network identifies exactly how to make it look better—increase contrast a smidge, tone down brightness, whatever—and applies the changes in about 20 milliseconds.
“That’s 50 times a second,” says Michael Gharbi, an MIT doctoral student and lead author of the paper. Gharbi’s algorithm transforms photos so fast you can see the edited version in the viewfinder before you snap the picture.
Gharbi started working with researchers from Google last year to explore how neural networks might learn to mimic specific photographic styles. The work follows similar research that German researchers completed in 2015, when they built a neural network that could imitate the styles of painters like Van Gogh and Picasso. The idea, Gharbi says, is to make it easier to produce professional-grade images without opening an editing app.
Think of the algorithm as an automatic filter, but with more nuance. Most filters apply editing techniques to the entire image, regardless of whether it needs it. Gharbi’s algorithm can pinpoint specific features within an image and apply the appropriate improvements. “Usually every pixel gets the same transformation,” he says. “It becomes more interesting when you have images that need to be retouched in specific areas.” The algorithm might learn, for example, to automatically brighten a face in a selfie with a sunny background. You could train the network to increase the saturation of water, or bump up the green in trees when it recognizes a landscape photo.
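For a rough sense of the difference, here is an illustrative sketch—not Gharbi's code—that contrasts a global filter, which applies one transformation to every pixel, with a local edit that brightens only a masked region. The face mask below is a hypothetical placeholder for whatever region the network would actually identify.

```python
# Illustrative only: global filter vs. local, content-aware edit.
import numpy as np

def global_filter(image, gain=1.2):
    """Apply the same brightness gain to every pixel."""
    return np.clip(image * gain, 0.0, 1.0)

def local_filter(image, mask, gain=1.4):
    """Brighten only the pixels selected by the mask (values in [0, 1])."""
    gain_map = 1.0 + (gain - 1.0) * mask[..., None]  # per-pixel gain
    return np.clip(image * gain_map, 0.0, 1.0)

# Toy data: a dark 64x64 RGB frame with a bright "sky" across the top half.
image = np.full((64, 64, 3), 0.25, dtype=np.float32)
image[:32, :, :] = 0.8

# Hypothetical face region in the lower half of the frame.
mask = np.zeros((64, 64), dtype=np.float32)
mask[40:60, 20:44] = 1.0

globally_edited = global_filter(image)      # the sky gets brightened too
locally_edited = local_filter(image, mask)  # only the "face" brightens
```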
Gharbi’s algorithm can parse those visual nuances because the researchers trained it with manually retouched images. The researchers fed the neural network more than 5,000 professionally edited photos, which taught it specific editing rules associated with “good” photos. If you fed the neural network your edited photos, it eventually could learn to reproduce your personal photographic style.
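A minimal sketch of what learning from examples can mean appears below; it is a heavily simplified assumption, not the team's actual training code. Instead of a deep network, it fits a single global color matrix to a few toy before-and-after pairs by least squares, then applies that learned rule to a new photo.

```python
# Simplified stand-in for learning a retouching style from edited examples.
import numpy as np

def fit_color_transform(originals, retouched):
    """Least-squares fit of a 3x3 matrix M with pixel_out ~= pixel_in @ M."""
    X = np.concatenate([img.reshape(-1, 3) for img in originals])
    Y = np.concatenate([img.reshape(-1, 3) for img in retouched])
    M, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    return M

def apply_color_transform(image, M):
    """Apply the learned color rule to every pixel of a new image."""
    flat = image.reshape(-1, 3) @ M
    return np.clip(flat.reshape(image.shape), 0.0, 1.0)

# Toy "training set": the retouching rule is a slight warming of the colors.
rng = np.random.default_rng(0)
originals = [rng.random((32, 32, 3)).astype(np.float32) for _ in range(5)]
true_rule = np.diag([1.0, 0.95, 0.85])          # keep red, trim green and blue
retouched = [img @ true_rule for img in originals]

M = fit_color_transform(originals, retouched)    # learn the rule from pairs
new_photo = rng.random((32, 32, 3)).astype(np.float32)
auto_edited = apply_color_transform(new_photo, M)  # reproduce the style
```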
That alone makes it pretty cool. But the real achievement is that Gharbi and his fellow researchers made the software lightweight enough to run on mobile phones. “The key to making it fast and run in real time is to not process all the pixels in an image,” he says. Instead of analyzing the millions of pixels in any given photo, Gharbi’s algorithm processes a low-resolution version of the photo and decides which parts to retouch. The algorithm estimates how to adjust the color, luminosity, saturation, and more based on rules established in the neural network; it makes the changes, then converts the image back to high resolution. Because it’s not processing a full image every time, the system can operate at speeds that would otherwise be beyond the computational abilities of today’s phones. “We’ve found a more efficient way to process an image,” he says.
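The shortcut can be sketched in a few lines, again as an illustration rather than the actual system: the expensive analysis happens on a small thumbnail, which yields a coarse grid of per-region adjustments (here just a brightness gain), and only a cheap upsample-and-multiply step touches every full-resolution pixel. The real method predicts richer local color transforms with a neural network, but the division of labor is the same.

```python
# Illustrative sketch of analyzing a low-res copy, then editing at full res.
import numpy as np

def downsample(image, factor):
    """Average-pool the image by an integer factor."""
    h, w, c = image.shape
    return image.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def predict_gain_grid(small, target_luma=0.5):
    """Stand-in for the learned model: per-region exposure correction."""
    luma = small.mean(axis=2)                       # coarse brightness map
    return np.clip(target_luma / (luma + 1e-3), 0.5, 2.0)

def apply_gain_grid(image, gains, factor):
    """Upsample the coarse gain grid (nearest neighbor) and apply it."""
    full_gains = np.repeat(np.repeat(gains, factor, axis=0), factor, axis=1)
    return np.clip(image * full_gains[..., None], 0.0, 1.0)

factor = 16
photo = np.random.default_rng(1).random((1024, 1024, 3)).astype(np.float32)

small = downsample(photo, factor)               # 64x64 thumbnail: cheap to analyze
gains = predict_gain_grid(small)                # decide the edit at low resolution
edited = apply_gain_grid(photo, gains, factor)  # apply it at full resolution
```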
The auto-editing feature remains in the research phase, but more practically, this model could make existing camera features faster. Gharbi says the algorithm could make the processing of HDR photos so fast that you no longer need to wait half a second to see your high-dynamic-range pic. That might seem like an incremental improvement, but it was enough for Google to get involved. Gharbi won't comment on whether this technology will appear in future versions of Android, but for the sake of my camera roll—and maybe yours, too—let's hope it does.
https://www.wired.com/story/googles-new-algorithm-perfects-photos-before-you-even-take-them/
