Google engineers explain the science behind the Pixel 3's incredible camera

There are impressive details about the lengths the Pixel 3 camera goes to in making the user a better photographer.

Google's recently announced Pixel 3 smartphone managed to wow audiences with its smarts at the launch event. While the photos we have taken with our review unit do look impressive, they also raise the question of how a smartphone with a single camera is capable of so much more than what we have seen so far not just from dual-camera (iPhone X, Note 9) but even triple-camera (Galaxy A7, P20 Pro) setups.

The team at DPReview managed to interview the brains behind these path-breaking techniques, which essentially take old or standard digital photography methods and give them a modern and, at times, unusual twist.

Google Pixel 3

The interview with Isaac Reynolds, product manager for camera on Pixel, and Marc Levoy, engineer and computational photography lead at Google, reveals plenty of details about the machine learning mechanisms and other rather odd techniques used to deliver better camera quality than the underlying sensor is capable of.

Synthetic Fill Flash

While the presenters on stage barely shed any light on this thanks to time constraints, Synthetic Fill Flash works in the same way that Google creates Portrait imagery, using a mix of edge detection and machine learning (ML).

Once the photo is taken, the camera selectively raises the exposure on the person and their face in the photograph, which gives the subject a lively glow. In traditional photography, this usually requires reflectors and other equipment to add more light on a subject shot against a brighter sunset or in dim lighting. The Pixel's processing lets you capture details of both the subject (or even subjects) and the background, making for a vibrant, lively and well-exposed photograph.
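To picture what "selectively raising the exposure" means in practice, here is a minimal sketch in Python with NumPy. It assumes a soft subject mask has already been produced by some segmentation model; the function name, the boost value and the mask itself are illustrative assumptions, not Google's actual pipeline.

```python
import numpy as np

def synthetic_fill_flash(image, subject_mask, boost=1.6):
    """Brighten the masked subject region, leaving the background as-is.

    A minimal sketch of the general idea only; Google's pipeline uses
    ML segmentation and far more careful tone mapping.
    image: float32 array in [0, 1], shape (H, W, 3)
    subject_mask: float32 array in [0, 1], shape (H, W), a soft mask
    """
    # Expand the soft mask so it broadcasts over the colour channels.
    mask = subject_mask[..., None]
    # Per-pixel gain interpolates between 1.0 (background) and `boost`
    # (subject), so the brightening fades out smoothly at mask edges.
    gain = 1.0 + (boost - 1.0) * mask
    return np.clip(image * gain, 0.0, 1.0)
```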

Night Sight feature on the Google Pixel 3. Image: YouTube / Made by Google

Night Sight

According to the camera team, the Pixel 3 takes better pictures in the standard low-light mode itself compared to what you get on a Pixel or Pixel 2.

If there is almost no light (say, a street light 40 feet away) or only very dim sources of light (like a sunset), the Night Sight mode comes to your rescue. You will get a bit of shutter delay (up to 4-5 seconds), but the results are reportedly worth the wait. Wide-angle selfies also get Night Sight, with machine learning keeping colours accurate even in low light.
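The general principle behind this kind of low-light burst photography is that averaging N aligned frames cuts random sensor noise by roughly a factor of sqrt(N). The toy demo below (plain NumPy, not Google's implementation) simulates a dim scene, adds noise to 15 "frames" and shows how much cleaner the merged result is.

```python
import numpy as np

def merge_night_frames(frames):
    """Average a burst of aligned low-light frames to reduce noise.

    Averaging N frames suppresses random sensor noise by about sqrt(N),
    which is why a few seconds of capture can recover a clean image.
    This sketch assumes the frames are already motion-aligned; a real
    pipeline also handles alignment, white balance and tone mapping.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0)

# Simulated demo: one dim, flat scene observed through heavy noise.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 0.2, dtype=np.float32)
burst = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(15)]
print(np.abs(burst[0] - scene).mean())                   # single noisy frame
print(np.abs(merge_night_frames(burst) - scene).mean())  # ~1/sqrt(15) of that
```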

Raw Capture

As per the team, the DNG files produced by aligning 10-15 frames and merging them are a far cry from your typical smartphone's RAW capture. And thanks to this super accurate and smart frame-alignment algorithm, there is no ghosting even if the subject happens to be moving.
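Here is a deliberately simplified sketch of what "aligning frames and merging them" involves, using a brute-force integer-pixel search in NumPy. The real merge works per tile at sub-pixel precision and rejects moving content to avoid ghosting, so treat this purely as an illustration of the concept.

```python
import numpy as np

def estimate_shift(ref, frame, search=2):
    """Find the integer (dy, dx) shift that best aligns `frame` to `ref`.

    Brute-force search over a small window, scoring each candidate
    shift by mean squared error against the reference frame.
    """
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = np.mean((ref - np.roll(frame, (dy, dx), axis=(0, 1))) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def merge_burst(frames):
    """Align every frame to the first one, then average into one image."""
    ref = frames[0].astype(np.float32)
    aligned = [ref]
    for f in frames[1:]:
        dy, dx = estimate_shift(ref, f.astype(np.float32))
        aligned.append(np.roll(f.astype(np.float32), (dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)
```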

Super Res Zoom

The new digital zoom mode, branded Super Res Zoom, uses a new kind of burst photography that actually increases the detail in photographs. The technology relies entirely on software algorithms to deliver image quality that is far better than what the underlying sensor is capable of.

The system relies on small micro-shifts between burst frames to gather this data, and will even go as far as deliberately moving the OIS system to create those shifts (if the user's hands are too steady), then aligns the photos back in software to sub-pixel precision.
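A toy "shift-and-add" sketch of that idea follows: if each burst frame is offset by a known sub-pixel amount, its samples can be scattered onto a finer output grid, which is how detail beyond the sensor's native grid can be recovered. The shifts are assumed to be known here; in reality, estimating them robustly is the hard part.

```python
import numpy as np

def super_res_merge(frames, shifts, scale=2):
    """Scatter sub-pixel-shifted frames onto a finer output grid.

    Each input frame samples the scene at a slightly different offset
    (from hand shake, or deliberate OIS motion), so its pixels land on
    different cells of a `scale`x finer grid. `shifts` are the assumed
    known (dy, dx) offsets of each frame, in input-pixel units.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale), dtype=np.float32)
    weight = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Place every sample at its sub-pixel position on the fine grid.
        fy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        fx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (fy, fx), frame.astype(np.float32))
        np.add.at(weight, (fy, fx), 1.0)
    # Average all samples that landed in each fine-grid cell.
    return acc / np.maximum(weight, 1e-6)
```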

Top Shot camera feature on the Google Pixel 3. Image: YouTube / Made by Google

Top Shot

The photos recommended by the Top Shot feature also get better processing, with not just better tone mapping but a higher resolution than the rest of the buffered photos. One detail to note here is that the recommended photos in the Top Shot selection are not of the standard 12 MP resolution.

With the Motion feature (which lets you take small GIF-like videos) set to On, the camera captures frames before and after the clicked photo; set to Auto, it works in a smarter way. The camera actually scans through the buffered frames to check whether one of them would have made a better shot than the one you clicked, and prompts you that a better shot is available, one the smartphone captured instead of the user.

Most impressive of all is that all of this happens offline, with no connection to any server, just on the smartphone itself. It is almost as if the Pixel 3 camera has a brain, and it seems to use it pretty well.
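As a rough illustration of how a frame picker could rank a buffer on-device, here is a toy scorer that simply prefers the sharpest frame (variance of a Laplacian response). Google's actual Top Shot model scores semantic cues such as open eyes and smiles with on-device ML, so sharpness here is only a stand-in.

```python
import numpy as np

def sharpness(gray):
    """Variance of a Laplacian response: a crude blur/sharpness score."""
    lap = (-4 * gray
           + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
           + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1))
    return lap.var()

def suggest_best_frame(buffered_frames):
    """Return the index of the buffered frame this toy picker would suggest.

    A stand-in for a Top-Shot-style selector: score every frame in the
    buffer, then recommend the highest-scoring one to the user.
    """
    scores = [sharpness(f.astype(np.float32)) for f in buffered_frames]
    return int(np.argmax(scores))
```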

Portrait mode

The impressive Portrait mode has also been improved on the new Pixel 3. It now looks not just at the stereo depth map coming from the sensor's split dual-pixel system, but also at a learning-based depth map that delivers better edge detection and background defocus, both of which now also work well at mid-distance.
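To see why a depth map makes background defocus possible at all, here is a toy depth-driven blur in NumPy. Everything about it (the box blur, the threshold, the parameter names) is an illustrative simplification; the real pipeline renders a far more realistic, depth-varying bokeh from its combined dual-pixel and learned depth.

```python
import numpy as np

def portrait_blur(image, depth, focus_depth, tolerance=0.1, radius=4):
    """Blur pixels whose depth is far from the subject's depth.

    image: (H, W, 3) float32 in [0, 1]; depth: (H, W) float32 in [0, 1],
    where `focus_depth` is the depth of the in-focus subject.
    """
    # Separable box blur over rows then columns (a cheap bokeh stand-in).
    kernel = np.ones(2 * radius + 1, dtype=np.float32) / (2 * radius + 1)
    blurred = image.copy()
    for axis in (0, 1):
        blurred = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, blurred)
    # Keep pixels near the subject's depth sharp; defocus everything else.
    in_focus = (np.abs(depth - focus_depth) < tolerance)[..., None]
    return np.where(in_focus, image, blurred)
```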

Keep checking our reviews section for our detailed review of the Google Pixel 3 XL that should be out soon.
