How Google's Night Sight Works, and Why It's So Good

Reading all the gushing praise for Google's new Night Sight low-light shooting feature for Pixel phones, you'd be forgiven for thinking Google had just invented color film. In fact, night photography modes are not new, and many of the underlying technologies go back years. But Google has done a great job of combining its skill in computational imaging with its unmatched prowess in machine learning to push the capability past anything previously seen in a mobile device. We'll take a look at the history of multi-image photography, how it's likely used by Google, and speculate about what AI brings to the party.

The challenge of low-light photography

Long-exposure star shot in Joshua Tree National Park, taken with a Nikon D700. Photo by David Cardinal.

All cameras struggle in low-light scenes. Without enough photons per pixel arriving from the scene, noise can easily dominate an image. Leaving the shutter open longer to gather enough light for a usable image also increases the noise. Perhaps worse, it's hard to keep an image sharp without a sturdy tripod. Increasing gain (ISO) makes an image brighter, but it amplifies the noise at the same time.

Larger pixels, usually found in larger sensors, are the traditional strategy for addressing the problem. Unfortunately, phone camera sensors are tiny, resulting in small photosites (pixels) that work well in good lighting but fail quickly as light levels decrease.
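To make the photon-starvation point concrete, here is a small simulation (a sketch with made-up photon counts, not tied to any real sensor) showing that the signal-to-noise ratio of photon shot noise grows only as the square root of the photon count, which is why small photosites fall apart in dim light:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean photon counts per pixel, from bright light to deep shade.
for photons in (10000, 1000, 100, 10):
    # Photon arrival is Poisson-distributed: the variance equals the mean,
    # so SNR = mean / std = sqrt(mean).
    samples = rng.poisson(photons, size=100_000)
    snr = samples.mean() / samples.std()
    print(f"{photons:6d} photons -> SNR ~ {snr:.1f} (sqrt = {np.sqrt(photons):.1f})")
```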

That leaves camera designers two options for improving low-light images. The first is to use multiple images, which are then combined into a single version with less noise. An early implementation of this in a mobile device accessory was the SRAW mode of the DxO ONE add-on camera for the iPhone. It merged four RAW images to create a single improved version. The second is to use clever post-processing (with the latest versions often driven by machine learning) to reduce noise and enhance the subject. Google's Night Sight uses both of these.
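The first option works because independent noise averages out across frames while the underlying signal does not. A minimal sketch of that effect on synthetic data (this is the general principle, not the DxO or Google pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

scene = rng.uniform(0.0, 1.0, size=(480, 640))  # the "true" image
noise_sigma = 0.2                               # per-frame noise level

def noisy_frame():
    return scene + rng.normal(0.0, noise_sigma, size=scene.shape)

# Averaging N aligned frames cuts random noise by a factor of sqrt(N).
for n in (1, 4, 16):
    stack = np.mean([noisy_frame() for _ in range(n)], axis=0)
    residual = (stack - scene).std()
    print(f"{n:2d} frames -> residual noise {residual:.3f} "
          f"(predicted {noise_sigma / np.sqrt(n):.3f})")
```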

Multi-Image, Single-Capture

By now we're used to phones and cameras that combine multiple images into one, mostly to improve dynamic range. Whether it's a traditional bracketed set of exposures, as used by most companies, or Google's HDR+, which uses multiple short-exposure images, the result can be a superior final shot, as long as the artifacts caused by merging multiple images of a moving scene can be minimized. Typically that is done by choosing a base frame that best represents the scene and then merging useful portions of the other frames into it to improve the image. Huawei, Google, and others have also used this same approach to create better-resolution telephoto shots. We recently saw just how important choosing the right base frame is, as Apple explained its "BeautyGate" snafu as a bug where the wrong base frame was selected from the captured sequence.
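One common way to pick a base frame is to score each frame's sharpness, for example with the variance of the Laplacian. This sketch (using OpenCV, assuming frames are already roughly aligned, and with a deliberately naive blend) illustrates the idea; it is not Apple's or Google's actual selection or merge logic:

```python
import cv2
import numpy as np

def sharpness(gray):
    # Variance of the Laplacian: higher means more edge detail (less blur).
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def pick_base_frame(frames):
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    return int(np.argmax([sharpness(g) for g in grays]))

def merge_onto_base(frames):
    idx = pick_base_frame(frames)
    stack = np.stack(frames).astype(np.float32)
    # Naive merge: average the stack, weighting the base frame double.
    # A real pipeline would warp each frame to the base and reject
    # regions that moved (ghost suppression) before blending.
    weights = np.ones(len(frames), dtype=np.float32)
    weights[idx] = 2.0
    merged = np.tensordot(weights / weights.sum(), stack, axes=1)
    return merged.astype(np.uint8)
```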

So it's only logical that Google would essentially combine these uses of multi-image capture to create better low-light images. In doing so, it builds on a series of clever innovations in image processing. It's likely that Marc Levoy's Android app SeeInTheDark and his 2015 paper "Extreme Imaging Using Cell Phones" were the basis for this effort. Levoy was a pioneer in computational photography at Stanford and is now a Distinguished Engineer working on camera technology at Google. SeeInTheDark (a follow-on to his earlier SynthCam iOS app) used a standard phone to accumulate frames, warping each frame to match the accumulated image, and then performed a variety of noise reduction and image enhancement steps to produce a remarkable final low-light image. In 2017 a Google engineer, Florian Kainz, built on some of these concepts to show how a phone could be used to create professional-quality images even in very low light.
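SeeInTheDark's align-while-accumulating idea can be sketched with simple translational alignment via phase correlation. This toy version (pure NumPy, grayscale frames, global shift only) only hints at the real app, which also handles rotation, local motion, and far more sophisticated denoising:

```python
import numpy as np

def estimate_shift(ref, frame):
    # Phase correlation: the normalized cross-power spectrum peaks at the
    # translation that best aligns `frame` with `ref`.
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(frame)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Map large positive shifts back to small negative ones (FFT wraparound).
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def accumulate(frames):
    acc = frames[0].astype(np.float64)
    for f in frames[1:]:
        dy, dx = estimate_shift(acc, f.astype(np.float64))
        acc += np.roll(f, (dy, dx), axis=(0, 1))  # align, then accumulate
    return acc / len(frames)
```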

Stacking several low-light images is a well-known technique

Photographers have been stacking multiple frames together to improve low-light performance since the beginning of digital photography (and I suspect some even did it with film). In my case, I started by doing it by hand, and later used a handy tool called Image Stacker. Because early DSLRs were useless at high ISO values, the only way to get good night shots was to take multiple frames and stack them. Some classic shots, like star trails, were originally best captured that way. These days the practice isn't very common with DSLRs and mirrorless cameras, as current models have excellent built-in high-ISO and long-exposure noise reduction. I can leave the shutter open on my Nikon D850 for 10 or 20 minutes and still get some very usable shots.
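For star shots the choice of stacking operator matters: averaging suppresses noise in a static sky, while a per-pixel maximum ("lighten" blending, the classic star-trail trick) preserves every frame's star positions so motion draws trails. A tiny illustration of the two, assuming an already-loaded stack of exposures:

```python
import numpy as np

def stack_mean(frames):
    # frames: (N, H, W[, 3]) array of identical exposures.
    # Averaging cuts random noise by ~sqrt(N); the stars must not move.
    return frames.mean(axis=0)

def stack_lighten(frames):
    # Per-pixel maximum: each star's position in every frame survives,
    # so stars that drift between frames draw trails instead of blurring.
    return frames.max(axis=0)
```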

So it's natural that phone makers would follow with similar techniques. But unlike patient photographers shooting star trails from a tripod, the average phone user wants instant gratification and will almost never use a tripod. So the phone has the added challenges of making low-light capture happen fairly quickly, and of minimizing blur from camera shake, and even from subject motion. Even the optical image stabilization found on many high-end phones has its limits.

I'm not positive which phone first employed multiple images to improve low light, but the first one I used was the Huawei Mate 10 Pro. Its Night Shot mode takes a series of images over 4-5 seconds and then merges them into a single photo. Since Huawei leaves the real-time preview active, we can see that it uses several different exposures during that time, essentially creating a set of brackets.
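Merging a bracketed burst like Night Shot's can be done with exposure fusion: weight each frame per pixel by how well-exposed that pixel is, then blend. A bare-bones sketch of that general idea (not Huawei's actual algorithm), for a grayscale stack:

```python
import numpy as np

def fuse_brackets(frames):
    """frames: (N, H, W) grayscale stack in [0, 1] at varying exposures."""
    # Weight pixels near mid-gray highest; clipped shadows and blown
    # highlights get the least say in the blend.
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-9
    return (weights * frames).sum(axis=0)
```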

In his original HDR+ paper, Levoy notes that differing exposures are harder to align (which is why HDR+ uses many identically exposed frames), so Google's Night Sight, like SeeInTheDark, probably uses a series of frames with identical exposures. But Google (at least in the pre-release version of the app) doesn't leave the real-time image on the phone's screen, so that's just speculation on my part. Samsung has taken a different tack with the Galaxy S9 and S9+, using a dual-aperture main camera lens. It can switch to an impressive f/1.5 in low light to improve image quality.
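The gain from that wider aperture is easy to quantify, since light gathered scales with the square of the aperture ratio. Taking the S9's two published stops of f/1.5 and f/2.4:

```python
import math

f_fast, f_slow = 1.5, 2.4          # the S9's two aperture settings
gain = (f_slow / f_fast) ** 2      # light gathered scales with lens area
stops = math.log2(gain)            # ~1.4 stops brighter at f/1.5
print(f"f/{f_fast} gathers {gain:.2f}x the light of f/{f_slow} ({stops:.2f} stops)")
```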

Comparing Huawei's and Google's low-light camera features

I don't have a Pixel 3 or a Mate 20 yet, but I do have access to a Mate 10 Pro with Night Shot and a Pixel 2 running a pre-release version of Night Sight. So I decided to compare them myself. In a series of tests, Google clearly outperformed Huawei, with lower noise and sharper images. Here is one test sequence that illustrates:

The test scene in daylight, shot with the Huawei Mate 10 Pro.

The test scene in daylight, shot with the Google Pixel 2.

Without a night shot mode, here's what you get photographing the same scene in near-darkness with the Mate 10 Pro. It chose a 6-second shutter speed, which shows in the blur.

A version shot in near-darkness with Night Shot on the Huawei Mate 10 Pro. EXIF data shows ISO 3200 and 3 seconds of total exposure time.

The same scene using Night Sight on a Pixel 2. More accurate color and slightly sharper. EXIF data shows ISO 5962 and a 1/4s shutter speed (presumably for each of many frames). Both images were recompressed to a smaller overall size for use on the web.
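Those EXIF numbers allow a rough back-of-the-envelope comparison, assuming similar apertures and that the Pixel merges multiple 1/4s frames (the frame count is a guess, since the EXIF doesn't report it):

```python
# Relative exposure ~ ISO x shutter time (apertures assumed equal).
huawei = 3200 * 3.0            # Night Shot: ISO 3200, 3 s total
pixel_per_frame = 5962 * 0.25  # Night Sight: ISO 5962, 1/4 s per frame
frames_needed = huawei / pixel_per_frame
print(f"~{frames_needed:.1f} Night Sight frames match the Mate's total exposure")
```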

Is machine learning part of Night Sight’s Secret Sauce?

Given how long image stacking has been around and how many camera makers have employed some version of it, it's fair to ask why Google's Night Sight seems so much better than anything else out there. First, the technology in Levoy's original paper is quite complex, so the years Google has had to keep improving on it should give it a decent head start over anyone else. But Google has also said that Night Sight uses machine learning to decide on the right colors for a scene based on its content.

That's pretty cool, but also quite vague. It isn't clear whether it segments individual objects so it knows they should be a consistent color, or colors known objects appropriately, or globally recognizes a scene type the way intelligent auto-exposure algorithms do and decides how such scenes should generally look (green foliage, white snow, and blue skies, for example). I'm sure once the final version rolls out and photographers get more experience with it, we'll learn more about this use of machine learning.
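For context, the classical non-ML baseline for deciding scene colors is a heuristic like gray-world white balance, which simply assumes the scene averages out to neutral. Whatever Google's content-aware approach actually does, it is in effect trying to beat heuristics like this sketch:

```python
import numpy as np

def gray_world_wb(img):
    """img: (H, W, 3) float RGB in [0, 1]. Scale channels so means match."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / (means + 1e-9)   # push each channel toward gray
    return np.clip(img * gains, 0.0, 1.0)
```

The weakness of gray-world is exactly the failure ML could fix: a scene that is legitimately dominated by one color (green foliage, say) gets wrongly neutralized.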

Another place machine learning may have been helpful is the initial calculation of exposure. The core HDR+ technology behind Night Sight, as documented in Google's SIGGRAPH paper, relies on a hand-labeled dataset of thousands of sample scenes to help determine the right exposure to use. That seems like an area where machine learning might yield some improvements, particularly in extending exposure calculation to very low light, where the objects in the scene are noisy and hard to distinguish. Google has also been experimenting with using neural networks to improve phone image quality, so it wouldn't be surprising to start seeing some of those techniques deployed.
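Conceptually, matching against a labeled dataset of example scenes amounts to a nearest-neighbor lookup over scene statistics. A toy sketch of that idea (the features, values, and data here are invented purely for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical labeled examples: scene features -> hand-tuned exposure boost.
features = np.array([[0.02, 0.8],   # [mean luminance, fraction near black]
                     [0.10, 0.5],
                     [0.45, 0.2]])
exposures = np.array([+3.0, +1.5, 0.0])  # labeled EV adjustments

def predict_exposure(scene_feature):
    # 1-nearest-neighbor against the labeled examples.
    idx = np.argmin(np.linalg.norm(features - scene_feature, axis=1))
    return float(exposures[idx])

print(predict_exposure(np.array([0.03, 0.7])))  # very dark scene -> +3.0 EV
```

A learned model would replace the lookup with a regressor trained on those labels, which should generalize better in the noisy, hard-to-distinguish scenes the author describes.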

Whatever combination of these technologies Google has used, the result is certainly the best low-light camera mode on the market today. It will be interesting, as the Huawei Mate 20 family rolls out, to see whether Huawei has been able to push its own Night Shot capability closer to what Google has done.

Now read: Best Android Phones for Photographers 2018, Mobile Photography Workflow: Pushing the Envelope with Lightroom and Pixel, and LG V40 ThinQ: How 5 cameras push the limits of phone photography
