Google I/O 2019: Inching closer to replacing touch interactions with your voice

Google is pushing the boundaries of what the Assistant can do, fundamentally changing our phone usage.

I’m sure we’ve all envied characters from sci-fi flicks who have a virtual assistant to do things for them. It’s always around, waiting for your command, and demands nearly zero physical effort from you. It’s the ultimate dream — one that often surfaces whenever there’s talk of a more contextually aware ecosystem of gadgets. A glimpse of a world inspired by these plots has already begun to appear in fits and starts with all-screen and foldable phones, smart homes, and more.


Google Assistant was the star of the show at I/O 2019.

Google, the company many might believe is the only one capable of accomplishing that goal today, has been making remarkable headway on that front. At its annual developer conference for 2019, it made that even clearer by pushing the boundaries of what the Google Assistant can do and fundamentally rethinking how we operate our phones. The company announced a series of groundbreaking advancements which, above all, inch us closer to replacing touch-based input with voice.

The voice assistants we have at our disposal today are far from the ones we’ve been fed in movies. Arguably their biggest obstacle is the number of hoops you have to jump through to perform a task as simple as adding a new to-do. You have to invoke the assistant with its custom phrase, speak in a language it understands, wait for a response, confirm the details, and review them before your to-do list is finally updated.

Tackling the issues hindering voice assistants’ progress today

The most obvious reason for this heap of steps is to hide latency. Even though processors and internet connections have come a long way, they’re still far from what you would need for an entirely hands-free experience. Therefore, the foremost job Google had to take care of was to reduce that lag to the point where it’s almost non-existent. And it did just that.


Google Assistant. Image: Omkar G

“What if we could bring the AI that powers the assistant right onto your phone? What if the assistant was so fast at processing your voice that tapping to operate your phone would almost feel slow? It opens up many new use cases,” said Scott Huffman, VP of Engineering at Google, while speaking at the keynote.

At I/O 2019, Google claimed that after years of effort, it has managed to shrink the AI models responsible for predicting your actions from 100 GB to a mere 0.5 GB. Thanks to this, the Google Assistant will soon be able to process your queries locally on the device instead of going back and forth between your phone and Google’s servers.

The results are impressive too: Google says these developments enable the Assistant to respond ten times faster. The demos Google walked us through looked promising, showing the Assistant transitioning between actions in a matter of seconds.

(Also read: Google I/O 2019: Google presents its plan to safeguard privacy in an AI-first future)

"By moving these powerful AI models right onto to your phone, we are envisioning a paradigm shift. This next-gen assistant will let you instantly operate your phone with your voice, multi-task across apps, and complete complex actions, all with nearly zero latency.”, added Hoffman.

Building the next-gen assistant

With this obstacle out of the way, Google is now building what it calls the next-gen Assistant. Once it rolls out, you will be able to speak to the Assistant without any friction. In addition, Google will be hooking the Assistant into much deeper levels of Android, letting you effortlessly ask it to execute in-app actions — an approach similar to what Samsung offers with Bixby. Plus, the next-gen Assistant will always be on the lookout for possible actions and will let you command it continuously. So, for instance, if a text message arrives, you can simply say “Reply, let’s meet tomorrow” and the Assistant will instantly send it. You won’t have to wake it up first, tap the notification, or touch your phone at all for that matter.

(Also read: Google I/O 2019: All you need to know about the new features coming to Android Q)

The other key piece of the puzzle in Google’s quest to make voice your primary means of interaction is Duplex on the Web. Last year, the company demonstrated how it would soon be possible to ask the Assistant to book appointments by actually calling the business. It was a revolutionary move, and despite the scepticism, Google did successfully ship it to a handful of regions in the United States. This time, Google took things a step further and extended Duplex to tasks on the web.

What that means is that, in the near future, the Assistant will be able to handle bookings on the internet for you. In a demo, Google previewed the Assistant crawling through a bunch of web pages and automatically filling in the form fields on a car rental website. While you can obviously tell it the specifics, the Assistant will be smart enough to browse your emails and calendar for the dates, your preferences, whether you need a baby seat; you get the idea. All you will have to do is ask.

Ambient computing: closer than you think

The idea of ambient computing, an environment where your devices figure out what you want and perform actions on their own, has been around for a while. It’s often called a pipe dream, and to an extent, that is true. For a robot-operated lifestyle, a lot of factors need to fall into place together. But unlike a couple of years ago, it doesn’t seem all that far-fetched anymore.

Google underlined that even further at I/O 2019. Its voice assistant has grown dramatically more intelligent over just the past year. Pair that with how much Google’s algorithms know about you, and you should be able to imagine the future the company is promising.

Google Assistant Continued Conversations.


Of course, touch-based input isn’t going anywhere soon. You won’t be talking to the Assistant to use your phone for at least a couple of years. And even when Google widely rolls out its so-called next-gen Assistant, you surely won’t be using it out in public. There are only a handful of scenarios where I can see people operating their phones with their voice.

But that’s not quite the point. These advancements are stepping stones toward a world where products like the Google Assistant will act as a true assistant to you.

For now, though, you will have to make do with the voice assistant available on your phone today and wait a little longer for the ones people have in sci-fi movies.

The author is a freelance technology journalist from Ahmedabad. He tweets from @phonesoldier






