Live Photos is one of the more interesting and talked-about features of the iPhone 6s. It makes me wonder if it’s only a matter of time before all photos are automatically sandwiched between a couple of seconds of high-definition video. If the processing power is there to do it for photos, why not do the same with audio and Siri?
On the iPhone 6s, Siri can be activated hands-free by saying ‘Hey Siri’. On the iPhone 6, this feature was only available while the phone was connected to power, presumably because the processing required would be a significant drain on battery life; the 6s lifts that restriction by listening for the trigger phrase on the always-on, low-power M9 coprocessor.
I believe sometime in the future our mobile devices will always be listening to us, and not just for a ‘Hey Siri’ trigger. It may begin with the device constantly buffering the last few seconds of audio, which the user can then refer back to. For example, a user may have a conversation about setting up a meeting and could then say ‘Hey Siri, put that on my calendar’. The device would look back at the buffered conversation and create a calendar appointment.
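To make the buffering idea concrete, here is a minimal sketch in Swift of the kind of rolling buffer this would require: a fixed window of recent audio that is continuously overwritten, with the oldest samples handed to a recognizer when the user invokes the trigger. Everything here is illustrative (the type name, the sample format, the five-second window); it is not how Apple actually implements this.

```swift
import Foundation

// Illustrative only: a fixed-size ring buffer holding the last few
// seconds of audio, overwriting the oldest samples as new ones arrive.
struct AudioRingBuffer {
    private var samples: [Float]
    private var writeIndex = 0
    private var isFull = false

    // Assumption: 16 kHz mono Float samples, 5-second window.
    init(seconds: Int = 5, sampleRate: Int = 16_000) {
        samples = [Float](repeating: 0, count: seconds * sampleRate)
    }

    // Called continuously as the microphone delivers audio.
    mutating func append(_ incoming: [Float]) {
        for sample in incoming {
            samples[writeIndex] = sample
            writeIndex = (writeIndex + 1) % samples.count
            if writeIndex == 0 { isFull = true }
        }
    }

    // Called when the user says ‘Hey Siri, put that on my calendar’:
    // returns the buffered audio in chronological order for recognition.
    func snapshot() -> [Float] {
        if !isFull { return Array(samples[0..<writeIndex]) }
        return Array(samples[writeIndex...]) + Array(samples[..<writeIndex])
    }
}
```

The appeal of a ring buffer is that memory use stays constant no matter how long the device listens, which is exactly the property an always-on feature would need.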
Further in the future, the device may be able to do this itself, without a trigger from the user. Again using the example of setting up a meeting: Siri would analyze the conversation in real time, understand the context, and take the relevant action, all without direct interaction from the user.
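The hard, speculative part of that scenario is the language understanding. The final step, actually creating the appointment, is already straightforward with today’s EventKit framework. As a rough sketch, assuming some future pipeline has already extracted a title and start time from the conversation (the scheduleMeeting helper and the one-hour default duration are my inventions):

```swift
import EventKit

// Sketch: turn an already-extracted meeting title and date into a
// calendar event. Assumes calendar access has been granted via
// EKEventStore.requestAccess(to:completion:).
func scheduleMeeting(title: String, startDate: Date,
                     in store: EKEventStore) throws {
    let event = EKEvent(eventStore: store)
    event.title = title
    event.startDate = startDate
    event.endDate = startDate.addingTimeInterval(60 * 60) // assumed 1-hour meeting
    event.calendar = store.defaultCalendarForNewEvents
    try store.save(event, span: .thisEvent)
}
```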
Obviously, this is a ways off, but research in this field has been going on for many years. Recently, Apple acquired VocalIQ, a UK company that works on natural language processing, which would be a key component of making this vision a reality. The other key component is low-power processing and voice recognition, which Apple has been iterating on for years; the results can be seen in Siri’s evolution.