As part of CNET’s “The Next Big Thing” conference series at CES 2016, the “Is Typing Dead?” session on Wednesday afternoon caught our attention with its debate over what’s next in human-machine interaction.
The actual voice behind Siri, Susan Bennett, took the stage at the beginning of the session, and hearing her speak in her highly recognizable “Siri voice” made for an oddly fascinating experience. She gave the audience a brief but funny recap of her involvement in the creation of Siri, kicking off this stimulating session on what will come after typing and touchscreens in the continued evolution of digital interfaces.
The four-person panel consisted of industry thought leaders on interface design, including Wendy Ju from Stanford University’s Interaction Design Research unit, Pattie Maes from the MIT Media Lab, Marcus Behrendt from BMW’s user experience department, and Vlad Sejnoha, CTO of Nuance Communications. Together, they discussed the state of voice command and gesture control, and cast their predictions for the future of user interfaces.
Voice command has been taking off in recent years with the likes of Siri and Amazon’s Alexa, and as we have seen at this year’s CES, more and more devices have added support for voice command and will start talking with users. But because of the inherent ambiguity of natural language, as MIT’s Maes pointed out, speech is not always the most efficient means of communication, and will therefore be relegated to controlling only certain applications.
Moreover, the panelists agreed that voice command can sometimes misunderstand user intent because it does not pick up on all the non-verbal cues we use in conversation. It would become a much more powerful tool for human-computer interaction if combined with personal data to learn users’ preferences and interests.
Gesture control is another UI trend growing in popularity, whether it’s the Kinect gaming features on Xbox or the in-car gesture control that Volkswagen just added to its electric Golf model. BMW’s Behrendt sees gesture control mostly as a communication enhancement, while also reminding everyone that some gestures vary from culture to culture, which hinders universal adoption. The panelists agreed that the bottom line is that gesture control should be intuitive and shouldn’t be like a sign language that users have to learn.
In addition, the panelists quickly ran through some emerging technologies that may one day power mainstream digital interfaces, such as gaze control (commanding with sight), proximity-based control (such as beacons triggering actions), and biometric-based control that responds to changes in your physiological signals. While all of these may still be decades away from mass adoption, they nevertheless point to a future where our devices will no longer just passively wait for our commands, but will actively use contextual data to anticipate our needs and serve us before we even lift a finger.
For more of the Lab’s CES coverage, click here.