At its Made by Google launch event in New York, Google made one thing clear: it has mastered the art of doing more with less. And that's a good thing.
Google unveiled the successors to its Pixel smartphone line-up: the standard-sized Pixel 3 (for normal humans) and the bigger Pixel 3 XL, which gets you the same core features with a bigger battery and a gigantic display notch (more like a notch with a display attached to it) for those who are into that sort of thing.
But as the presentation at the launch event showed, Google didn't talk specifications. In fact, Liza Ma, a Google product manager, and Brian Rakowski, vice president of product management, barely touched upon hardware, save for design (and a mention of the new security chip). This again shows Google's reliance on software innovation, while other manufacturers battle it out to deliver smartphones with triple and even quad cameras on the back of their devices.
Less hardware means lower component costs for Google, and it also frees up space inside the device for other critical components, like the new security chip called Titan M, while keeping the design slim. In short, with just one camera at the back, your smartphone will look less intimidating to your subjects as well.
So, as Huawei (P20 Pro) and Samsung (Galaxy A7) keep adding cameras at the back, Google makes them look like fools by delivering similar or even better image quality from a single camera, using the power of machine learning, computer vision and software.
Google’s Pixel 3 camera innovations indeed make the new cameras on Apple’s iPhone XS look drab and old school. While we have yet to pit the two camera behemoths against each other in a detailed comparison, let’s have a look at what the new Pixel 3 delivers using just software and a single camera.
Top Shot

This new feature reduces the need to retake a picture that came out blurry, caught someone with their eyes closed, or missed the moment because the shot wasn’t timed right. Top Shot is built into the default camera mode; it does not need to be activated and works out of the box.
Take a picture with Motion set to ON or AUTO and, if someone blinked, simply tap to open the image you just shot. Top Shot will offer a number of alternative frames from both before and after your capture, pulled from the camera buffer. These are high-quality HDR images, almost giving you the ability to time travel. The cool bit is that since all the frames are stored, you can also hand-pick a better shot at a later time. This will, however, take up some storage space.
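The buffering idea described above can be sketched in a few lines. This is a hypothetical illustration, not Google's actual implementation: a rolling window keeps the most recent frames alive, and the best alternatives are ranked by some quality score (sharpness, eyes open, and so on).

```python
from collections import deque

class TopShotBuffer:
    """Toy sketch of Top Shot-style frame buffering (hypothetical):
    keep a rolling window of recent frames so that alternatives from
    before and after the shutter press survive for later picking."""

    def __init__(self, capacity=6):
        # Oldest frames fall off automatically once capacity is reached.
        self.frames = deque(maxlen=capacity)

    def on_frame(self, frame):
        self.frames.append(frame)

    def alternatives(self, score):
        # Rank buffered frames by a quality score, best first.
        return sorted(self.frames, key=score, reverse=True)

# Usage: frames here are just (timestamp, quality) tuples for illustration.
buf = TopShotBuffer(capacity=3)
for frame in [(0, 0.4), (1, 0.9), (2, 0.6), (3, 0.8)]:
    buf.on_frame(frame)
best = buf.alternatives(score=lambda f: f[1])[0]
print(best)  # → (1, 0.9): frame 0 fell out of the buffer, frame 1 scores highest
```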
Super Res Zoom
Aptly named, this is the Pixel’s version of digital zoom that lets you take clearer photos using the same single camera at the back (there’s only one here, to begin with). But instead of just cropping the output from the sensor as most other smartphones do, the Pixel 3 shoots a burst of photos, exploiting the way your hand naturally moves back and forth while composing. It then merges data from all of those slightly offset frames to deliver more detail when zoomed in than a regular digital zoom mode can. As mentioned by Brian Rakowski, the same technology is used to image the surface of Mars. So yes, the inspiration is out of this world indeed.
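The principle behind merging a burst of slightly shifted frames can be shown with a toy example. This is an illustrative sketch only, nowhere near Google's actual robust alignment and merging: each low-resolution frame is dropped onto a finer grid at its (assumed known) sub-pixel offset, and overlapping samples are averaged.

```python
import numpy as np

def merge_burst(frames, offsets, scale=2):
    """Toy multi-frame merge (illustrative, not Google's algorithm):
    place each low-res frame onto a finer grid at its sub-pixel offset,
    then average whatever samples land in each cell."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Snap the sub-pixel shift to the nearest cell of the fine grid.
        oy = int(round(dy * scale)) % scale
        ox = int(round(dx * scale)) % scale
        acc[oy::scale, ox::scale] += frame
        hits[oy::scale, ox::scale] += 1
    # Average where samples landed; untouched cells stay zero.
    return np.divide(acc, hits, out=acc, where=hits > 0)

# Usage: two 2x2 frames half a pixel apart fill alternating cells
# of a 4x4 grid, recovering detail neither frame holds alone.
frames = [np.ones((2, 2)), 2 * np.ones((2, 2))]
out = merge_burst(frames, offsets=[(0.0, 0.0), (0.5, 0.5)])
```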
Group Selfie Cam
Indeed, this is the one area where Google did add an extra camera, on the front. Branded the Group Selfie Cam, the second, wide-angle front camera widens the field of view and helps you get more people in the frame. While other manufacturers like Oppo have done this in the past, Google’s implementation in the camera UI makes the transition from standard to wide smooth and seamless, using just a scrubber at the bottom to zoom out of the frame while shooting with the selfie camera.
Night Sight

Night Sight is an upcoming feature that again uses the power of machine learning to produce vibrant photographs in dim or low-light conditions, determining the right colours in a dimly lit scene. Google claimed at the presentation that this almost eliminates the need for a flash, which often results in blown-out highlights in low-light images. Night Sight arrives on the Google Pixel 3 and older Pixel devices in November.
Motion Auto Focus
If you have subjects (kids, pets) that are too active to stay still and pose for you, the Pixel’s Motion Auto Focus may be the solution. All you have to do is tap on your subject, and the camera will track it and keep it in focus the whole time.
Advanced Portrait Mode
While the single-lens Portrait mode blew us away last year, Google has now added editing controls on top of it. The Portrait mode lets you edit the depth of field, change the focal subject of the photograph and even adjust the colours, much like on an iPhone.
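Editing depth of field after the fact relies on having a per-pixel depth map saved with the photo. As a hypothetical sketch of the idea (assumed mechanics, not Google's actual pipeline), each pixel can be blended toward a blurred copy of the image in proportion to how far its depth sits from the chosen focal plane:

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box blur with edge padding (helper for the sketch below)."""
    pad = k // 2
    out = np.pad(img, pad, mode="edge").astype(float)
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda v: np.convolve(v, np.ones(k) / k, mode="valid"), axis, out)
    return out

def refocus(image, depth, focal_depth, falloff=1.0):
    """Hypothetical depth-of-field edit: blend each pixel toward a
    blurred copy in proportion to its distance from the focal plane."""
    weight = np.clip(np.abs(depth - focal_depth) / falloff, 0.0, 1.0)
    return (1.0 - weight) * image + weight * box_blur(image)
```

Changing the focal subject then amounts to re-running the blend with a different `focal_depth`, so regions at that depth stay sharp while everything else falls off into blur.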
Google has indeed turned smartphone photography into a science, using complicated algorithms that process the minutest differences between photographs to deliver better-quality images from minimal camera hardware. Thankfully, all of those calculations happen behind the scenes, so you can focus on capturing the moment.