
Google explains Pixel 3's enhanced AI portraits

November 30, 2018

When the Pixel 2 launched, Google used a neural network together with the camera's phase-detection autofocus (specifically, the slight parallax it provides) to determine what's in the foreground. This doesn't always work when a scene offers little depth variation or when you shoot through a small aperture. Google addressed this on the Pixel 3 by training a neural network to predict depth from a variety of cues, such as the typical sizes of objects and how sharpness varies across the scene. You should see fewer of the glitches that can mar portrait shots, such as background objects that remain sharp when they should be blurred.
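For intuition, here is a minimal sketch, not Google's actual pipeline, of the stereo geometry that links parallax to depth; the baseline, focal length, and disparity values are illustrative assumptions, and it shows why a tiny baseline makes the estimate fragile:

```python
# A minimal sketch, not Google's pipeline: the pinhole-stereo relation
# between parallax (disparity) and depth. The baseline, focal length,
# and disparity values below are illustrative assumptions.

def depth_from_disparity(disparity_px: float, baseline_m: float, focal_px: float) -> float:
    """depth = baseline * focal / disparity (pinhole stereo model).

    Phase-detection autofocus offers only a ~1 mm effective baseline,
    so disparities are tiny and the resulting depth is noisy, which is
    why extra cues (object size, defocus) help the network."""
    if disparity_px <= 0:
        return float("inf")  # no measurable parallax: treat as very far away
    return baseline_m * focal_px / disparity_px

# Example: a sub-pixel disparity with an assumed ~1 mm baseline.
print(depth_from_disparity(disparity_px=0.5, baseline_m=0.001, focal_px=3000.0))
# 6.0 (metres); a 0.25 px measurement error alone would swing this to 4 m or 12 m
```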

Training this neural network required a creative technique, Google said. The company built a "Frankenphone" rig out of five Pixel 3 phones to capture synchronized images, yielding ground-truth depth data free of the aperture and parallax problems.
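As a rough illustration of why synchronized multi-view capture helps (again a hedged sketch; the helper and sample numbers are hypothetical), each camera pair in such a rig yields an independent depth estimate of the same instant, and a robust combination suppresses the noise any single small-baseline pair suffers from:

```python
# Illustrative sketch only: with five synchronized cameras, every pair gives
# an independent depth estimate for the same moment in time. Fusing them
# (here with a median, which is robust to outliers) recovers a stable value
# no single small-baseline pair could. The sample numbers are made up.
import statistics

def fuse_depths(per_pair_depths_m):
    """Combine per-camera-pair depth estimates for one scene point."""
    return statistics.median(per_pair_depths_m)

# Hypothetical estimates for one point from different camera pairs in the rig.
print(fuse_depths([2.1, 1.9, 2.0, 2.6, 1.95]))  # -> 2.0
```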

While you may not be thrilled that Pixel phones still rely on a single rear camera for photos (which limits your photographic options), this illustrates the advantage of leaning on AI: Google can improve image quality without tying it to hardware upgrades.


Published by Faela