Apple Executives Talk About Siri’s iOS 10 Improvements, and Benefits of Machine Learning

BY Evan Selleck

Published 24 Aug 2016


When Apple introduced iOS 10 earlier this year, one of the tentpole additions in the software was a broad set of improvements to the company’s digital personal assistant, Siri.

Siri has been improving steadily over the years, but thanks to a focus on machine learning, along with new features baked into the mobile operating system that give third-party developers access to more stock apps and their features, the assistant is getting a big boost when iOS 10 arrives later this year.

As a result, there’s a lot of attention being paid to Siri, the improvements it’s receiving, and how it all impacts Apple as a whole moving forward. Steven Levy over at Backchannel has put together an impressive interview with several Apple executives, including Eddy Cue, Apple’s Senior Vice President of Internet Software and Services, along with Phil Schiller and Craig Federighi.

The interview doesn’t actually open with this passage about Siri, but it articulates why the assistant is so important for Apple moving forward, how much better it has become over the years, and why iOS 10 will be such a big step beyond the upgrades that came before it:

“With iOS 10, scheduled for full release this fall, Siri’s voice becomes the last of the four components to be transformed by machine learning. Again, a deep neural network has replaced a previously licensed implementation. Essentially, Siri’s remarks come from a database of recordings collected in a voice center; each sentence is a stitched-together patchwork of those chunks. Machine learning, says Gruber, smooths them out and makes Siri sound more like an actual person.”
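To make the concatenative approach Gruber describes a bit more concrete, here is a minimal, hypothetical sketch of unit selection: a sentence is stitched together from prerecorded chunks, and a cost function — reportedly a deep neural network in Siri’s case, but a simple pitch-matching stand-in here — decides which chunks join together most smoothly. None of the names or values below come from Apple.

```python
# Illustrative sketch only -- not Apple's implementation. A target sentence is
# assembled from prerecorded chunks ("units"), and a cost model decides which
# recorded takes stitch together with the least audible seam.

from itertools import product

# Hypothetical database: each phoneme maps to several recorded takes,
# represented here only by the pitch at the chunk's edges (Hz).
unit_db = {
    "HH": [{"take": 0, "edge_pitch": 118}, {"take": 1, "edge_pitch": 131}],
    "AY": [{"take": 0, "edge_pitch": 120}, {"take": 1, "edge_pitch": 145}],
}

def join_cost(left, right):
    """Stand-in for a learned smoothness model: penalise pitch jumps
    at the seam between two recorded chunks."""
    return abs(left["edge_pitch"] - right["edge_pitch"])

def select_units(phonemes):
    """Pick one take per phoneme so the stitched audio has the cheapest
    total seam cost (brute force is fine for a toy example)."""
    best_seq, best_cost = None, float("inf")
    for seq in product(*(unit_db[p] for p in phonemes)):
        cost = sum(join_cost(a, b) for a, b in zip(seq, seq[1:]))
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq

print(select_units(["HH", "AY"]))  # chooses the takes whose edges line up best
```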

As for how Apple started down this path, it goes back to 2014, when the company moved the digital personal assistant’s voice recognition to a “neural-net based system.”

“So Apple moved Siri voice recognition to a neural-net based system for US users on that late July day (it went worldwide on August 15, 2014.) Some of the previous techniques remained operational — if you’re keeping score at home, this includes “hidden Markov models” — but now the system leverages machine learning techniques, including deep neural networks (DNN), convolutional neural networks, long short-term memory units, gated recurrent units, and n-grams. (Glad you asked.) When users made the upgrade, Siri still looked the same, but now it was supercharged with deep learning.”
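For readers wondering what replacing hidden Markov model-era machinery with deep neural networks actually means in practice, here is a toy, hypothetical sketch of the piece that changed: an acoustic model that takes a frame of audio features and outputs phoneme probabilities. In older systems that distribution typically came from Gaussian mixtures tied to HMM states; in a neural system it comes from a network like the deliberately tiny one below. The shapes and weights are invented for illustration and have nothing to do with Apple’s models.

```python
# Toy acoustic model: one frame of audio features in, a probability
# distribution over phoneme classes out. Not Apple's architecture.

import numpy as np

rng = np.random.default_rng(0)

n_features, n_hidden, n_phonemes = 40, 64, 8   # e.g. 40 filterbank energies per frame
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_phonemes))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def acoustic_model(frame):
    """Map a single audio frame to phoneme probabilities. In an HMM-era
    recogniser this distribution usually came from Gaussian mixtures
    rather than a neural network."""
    hidden = np.maximum(0, frame @ W1)          # ReLU hidden layer
    return softmax(hidden @ W2)

frame = rng.normal(size=n_features)             # stand-in for real audio features
probs = acoustic_model(frame)
print(probs.argmax(), probs.max())              # most likely phoneme class and its probability
```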

The piece details that Apple didn’t make a big deal out of the upgrade simply because it wanted to keep tweaking the system, making sure that everything not only worked as it should, but that the process and end result genuinely improved on how things used to work. Judging by the improvements coming to Siri in iOS 10, that approach appears to have paid off.

Everything is getting smarter at Apple, and Schiller pointed out that one of the areas that has improved the most is the company’s A-series processors, which ship inside its mobile products, like the iPhone and iPad lineups.

“We’ve been seeing over the last five years a growth of this inside Apple,” says Phil Schiller. “Our devices are getting so much smarter at a quicker rate, especially with our Apple design A series chips. The back ends are getting so much smarter, faster, and everything we do finds some reason to be connected. This enables more and more machine learning techniques, because there is so much stuff to learn, and it’s available to [us].”

But it seems Apple is deliberate about how it doles out these capabilities to internal teams, providing access to technologies like machine learning only to the teams that need them:

“We don’t have a single centralized organization that’s the Temple of ML in Apple,” says Craig Federighi. “We try to keep it close to teams that need to apply it to deliver the right user experience.”

How many people at Apple are working on machine learning? “A lot,” says Federighi after some prodding.”

One team that needed access to machine learning, for something as straightforward as “palm rejection,” was the team developing the Apple Pencil:

“One example of this is the Apple Pencil that works with the iPad Pro. In order for Apple to include its version of a high-tech stylus, it had to deal with the fact that when people wrote on the device, the bottom of their hand would invariably brush the touch screen, causing all sorts of digital havoc. Using a machine learning model for “palm rejection” enabled the screen sensor to detect the difference between a swipe, a touch, and a pencil input with a very high degree of accuracy. “If this doesn’t work rock solid, this is not a good piece of paper for me to write on anymore — and Pencil is not a good product,” says Federighi. If you love your Pencil, thank machine learning.”
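As a rough illustration of the kind of classification problem Federighi is describing — and emphatically not Apple’s model — here is a toy sketch that labels a touch contact as a Pencil tip, a fingertip, or a resting palm from a few made-up features. A production system would learn this boundary from large amounts of data rather than hand-tuned thresholds.

```python
# Hypothetical palm-rejection sketch: classify each contact so that
# resting palms can be ignored while Pencil and finger input go through.
# Feature names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Contact:
    area_mm2: float      # size of the contact patch
    eccentricity: float  # 0 = circular, closer to 1 = elongated
    pressure: float      # normalised stylus/finger pressure

def classify(c: Contact) -> str:
    if c.area_mm2 < 3 and c.pressure > 0.2:
        return "pencil"          # tiny, firm contact
    if c.area_mm2 > 150 or c.eccentricity > 0.8:
        return "palm"            # large or elongated blob -> reject
    return "finger"

events = [Contact(1.5, 0.1, 0.6), Contact(220.0, 0.9, 0.3), Contact(40.0, 0.3, 0.4)]
print([classify(e) for e in events])  # ['pencil', 'palm', 'finger']
```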

The full interview is certainly worth checking out, and you can do so through the source link below.

[via Backchannel]