Apple is ready to launch the HomePod, its Siri-based smart speaker, on 9 February. The HomePod was unveiled at the Worldwide Developers Conference in June 2017. Phil Schiller, Apple’s senior vice president of worldwide marketing, spoke to Sound and Vision about the $349 smart speaker, which will first launch in the US, UK and Australia.
Asked why the HomePod was being released now, Schiller said that the company had the knowledge and technology to deliver something special in the home segment. He added, “We’ve applied machine learning so that HomePod can sense the environment it’s in and sounds great wherever it’s placed. HomePod uses advanced machine learning techniques including deep neural networks (DNNs) optimised for the hardware to detect ‘Hey Siri’ in challenging environments.”

Schiller touched upon the expertise Apple has developed over the years in the music department; pairing that with Siri intelligence seemed like the next logical step. The audio innovations on the iPad Pro and the AirPods, as well as the work on making Apple Music more intelligent, were also discussed. “We use our A8 chip to run advanced software on the device for real-time acoustic modelling, audio beam-forming, echo cancellation, and more,” said Schiller.

The smart speaker category is expected to see a boom in the coming years, if this year’s CES was any indication. While Schiller did not say how Apple saw the future playing out, he did say that Apple could create a new kind of musical experience and also look at making the HomePod a smart home hub. Siri’s integration with HomeKit allows users to turn on the lights and set scenes before going to bed through the smart speaker.

Amazon’s Echo line of products is in its second generation and has the lion’s share of the market, and the Echo Plus comes with a Zigbee smart home controller on board, so Amazon definitely has a first-mover advantage in this space. Apple’s advantage lies in the fact that its Siri voice assistant handles over 2 billion requests each week thanks to the widespread use of Apple devices, which helps Apple understand how people interact via voice and what kinds of questions to expect.
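The HomeKit integration Schiller mentions is the same plumbing Apple exposes to developers through the HomeKit framework. As a rough illustration only (this is not Apple’s own code, and the “Good Night” scene name and controller class are assumptions for the example), a minimal Swift sketch for switching a lightbulb on and running a bedtime scene might look like this:

```swift
import HomeKit

// Minimal sketch: turn on lightbulbs and run a "Good Night" scene via HomeKit.
// Names are hypothetical; error handling is reduced to prints for brevity.
final class GoodNightController: NSObject, HMHomeManagerDelegate {
    private let homeManager = HMHomeManager()

    override init() {
        super.init()
        homeManager.delegate = self
    }

    // HomeKit loads the home database asynchronously; wait for this callback.
    func homeManagerDidUpdateHomes(_ manager: HMHomeManager) {
        guard let home = manager.primaryHome else { return }

        // 1. Flip the power-state characteristic of every lightbulb service.
        for accessory in home.accessories {
            for service in accessory.services where service.serviceType == HMServiceTypeLightbulb {
                let power = service.characteristics.first {
                    $0.characteristicType == HMCharacteristicTypePowerState
                }
                power?.writeValue(true) { error in
                    if let error = error { print("Light error: \(error)") }
                }
            }
        }

        // 2. Run a user-defined scene (HomeKit calls scenes "action sets").
        if let scene = home.actionSets.first(where: { $0.name == "Good Night" }) {
            home.executeActionSet(scene) { error in
                if let error = error { print("Scene error: \(error)") }
            }
        }
    }
}
```

A real app would also need the HomeKit capability and an NSHomeKitUsageDescription entry in its Info.plist before any of these calls succeed.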
Schiller went deeper into how the spatial awareness technology works on the HomePod. He said that from the moment you plug in HomePod and start listening to music, it will try to sense its location in the room and tweak settings so that it takes full advantage of the environment it is in. “The microphone array in HomePod listens to the reflection of the music off neighbouring surfaces, senses where the bookshelf is, or if it’s in the corner of a room or against a wall, and then uses machine learning to understand what it’s hearing, interpret the sound, and adjust the audio,” said Schiller. This ability, paired with the software and the underlying Apple A8 chip, lets HomePod smartly beam centre vocals and direct energy away from the wall. The process begins within minutes of the first setup and is repeated every time the HomePod is moved.

On HomePod’s voice recognition chops, Schiller said that it uses advanced machine learning techniques, including deep neural networks optimised to detect ‘Hey Siri’ in the most challenging of environments, through a combination of hardware and software. Once ‘Hey Siri’ is detected, requests are sent to Apple using a Siri ID, which is encrypted.
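Apple has not published how HomePod’s beam-forming is implemented, but the underlying idea Schiller is describing is the classic delay-and-sum approach: delay the signal from (or to) each element of an array so that sound from a chosen direction adds in phase while sound from other directions partially cancels. The Swift sketch below is only that textbook illustration; the element spacing, steering angle and sample rate are arbitrary assumptions, not HomePod parameters.

```swift
import Foundation

/// Textbook delay-and-sum beamformer for a uniform linear array.
/// `channels` holds one sample buffer per microphone.
/// Illustration only -- not HomePod's actual DSP.
func delayAndSum(channels: [[Float]],
                 spacing: Float = 0.03,          // assumed 3 cm element spacing
                 steeringAngle: Float = .pi / 6, // assumed 30 degrees off broadside
                 sampleRate: Float = 48_000,
                 speedOfSound: Float = 343) -> [Float] {
    guard let length = channels.first?.count, length > 0 else { return [] }

    // Steering delay (in whole samples) for each element, relative to element 0.
    let sinTheta = Float(sin(Double(steeringAngle)))
    let delays: [Int] = channels.indices.map { m in
        let seconds = Float(m) * spacing * sinTheta / speedOfSound
        return Int((seconds * sampleRate).rounded())
    }
    let maxDelay = delays.max() ?? 0

    var output = [Float](repeating: 0, count: length)
    for n in 0..<length {
        var acc: Float = 0
        for (m, channel) in channels.enumerated() {
            // Delay each channel so a wavefront from the steered direction lines up;
            // samples that fall outside the buffer are treated as silence.
            let idx = n - (maxDelay - delays[m])
            if idx >= 0 && idx < length { acc += channel[idx] }
        }
        output[n] = acc / Float(channels.count)  // average to keep overall gain comparable
    }
    return output
}
```

Real systems add fractional delays, per-band weighting, echo cancellation and, as Schiller describes, machine-learning models on top, but the add-in-phase / cancel-out-of-phase intuition is the same.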
Schiller also spoke about how Apple has been building up its audio team, which has been around since the time of the iPod. He gave the quad-speaker systems of the iPad Pro as well as the wireless AirPods as examples of how Apple is innovating on the audio front. With the HomePod, the brief given to the team was to get great audio in a small cylindrical form factor: “They developed a beam-forming array made up of seven tweeters, each with its own individual amplifier.” You can read the complete interview here.