Nathan and I work in a Chicago area private school as administrators of many hundreds of Macs, Windows PCs, iPads, and Chromebooks. That gives us some insight on how such devices are used by a variety of people.
The mouse has begun to feel like a relic of 1999, thanks to the trend toward notebooks whose trackpads carry on the time-honored point-and-click interface. What we are beginning to see, however, is a different way to manipulate and control devices, starting with mobile.
Talk To Me, Baby
We are convinced that Siri is Apple’s future interface as we see more iPhone and iPad users getting into Siri, Siri Shortcuts, and talking vs. flipping windows to find an app. Even Android device users in the school have figured out that Google’s Assistant can be useful.
To be fair, all these talking digital assistants have one thing in common: they’re mostly stupid, about as literate as a 5-year-old. Even so, we see the change taking place, slowly, inexorably, and in sufficient numbers to reveal not just a trend, but the future.
Some call it voice computing, perhaps Star Trek style. From James Vlahos’ new book:
The titans of Silicon Valley are racing to build the last, best computer that the world will ever need. They know that whoever successfully creates it will revolutionize our relationship with technology—and make billions of dollars in the process. They call it conversational AI.
Isn’t that what Siri and Alexa and Assistant are all about?
What’s the process we work through to make it happen? Vlahos:
First, the sound waves of your voice have to be converted into words, so that’s automatic speech recognition, or ASR. Those words then have to be interpreted by the computer to figure out the meaning, and that’s NLU, or natural language understanding. If the meaning has been understood in some way, then the computer has to figure out something to say back, so that’s NLG, or natural language generation.
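The three-stage loop Vlahos describes (ASR, then NLU, then NLG) can be sketched as a toy pipeline. Everything below is illustrative: the function names are made up, and the keyword-matching “understanding” merely stands in for what real assistants do with large speech and language models.

```python
# Toy sketch of the ASR -> NLU -> NLG loop. All names and logic
# here are illustrative, not any real assistant's API.

def automatic_speech_recognition(audio: str) -> str:
    # A real ASR stage converts sound waves into words; here we
    # pretend the "audio" is already a transcript and just normalize it.
    return audio.strip().lower()

def natural_language_understanding(text: str) -> dict:
    # A real NLU stage infers intent and entities from the words;
    # this toy version keyword-matches a couple of intents.
    if "weather" in text:
        return {"intent": "get_weather"}
    if "timer" in text:
        return {"intent": "set_timer"}
    return {"intent": "unknown"}

def natural_language_generation(meaning: dict) -> str:
    # A real NLG stage composes something to say back.
    replies = {
        "get_weather": "It looks sunny today.",
        "set_timer": "OK, your timer is set.",
    }
    return replies.get(meaning["intent"], "Sorry, I didn't catch that.")

def assistant(audio: str) -> str:
    text = automatic_speech_recognition(audio)      # ASR
    meaning = natural_language_understanding(text)  # NLU
    return natural_language_generation(meaning)     # NLG

print(assistant("What's the weather like?"))
```

Each stage in a shipping assistant is a hard machine-learning problem in its own right; the sketch only shows how the three hand off to one another.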
OK, that’s the process; now what about how it works in our everyday lives?
We’ve always been toolmakers and tool users. There are always things we hold or grab or touch or swipe. So when you imagine that all just fading away and your computing power is effectively invisible because we’re speaking to tiny embedded microphones in the environment that are connected to the cloud — that’s a profound shift.
That’s exactly what we see taking place in our school. Younger students and teachers have adapted to and adopted Siri, Alexa, and Google Assistant to handle specific chores and actions on their devices.
We also notice a gap that seems to be widening between those who are actively interested in adding voice control and those who remain wedded to point-and-click and touch.
Siri, however, is the best known and, in our experience, the most used of the talking intelligent assistants.
Google and Amazon and Apple just want to be liked by the most number of people so they’re going pretty broad, but [I think they will develop] the technology so my assistant is not the same as your assistant is not the same as your co-worker’s assistant. I think they’ll do that because it would be appealing. With every other product in our lives we don’t have a one-size-fits-all, so I don’t see why we would do that with voice assistants.
This change will not happen as quickly as the iPhone’s touchscreen took over the smartphone industry, but the march is inexorable. Try Siri Shortcuts yourself; you control what Siri will do, including the request itself. That’s the tip of the iceberg.
Apple’s future interface won’t be point and click, or a tap on a touchscreen. It will be voice and Siri.