The smartphone supply chain has given rise to powerful, cheap computing components that can
augment our bodies, our homes, and our cities, gathering new kinds of data and providing
hyper-contextual intelligence. We talk about these things as IoT devices or wearables, but when
backed by machine learning in the cloud and stitched together with other devices, they create totally new
kinds of inputs and outputs for media.
The smartphone has become the universal product of the 21st century — one day soon, everyone in
the world will have access to the real-time information and communication a smartphone affords.
But just as importantly, everyone will also have a high-quality networked camera, a highly tuned
microphone, and a precise record of where they are in the world. These sensors will increasingly
be used not just to capture media, but as their own full-fledged sources of data.
Text has always been the lingua franca of computing: it’s how we create software, and how
software has received our inputs about the world – until now. Over the past few years, machine
learning has progressed to the point of rendering images and videos as transparent to computers
as text has always been: Google can find all your photos that include your pets, or that volcano
you toured in Hawaii, and can translate street signs in real time when you point your phone at
them. And the dual-camera system that first shipped in the iPhone 7 Plus can capture depth, making it possible to record scenes in 3D space.
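To make that idea concrete, here is a minimal sketch, assuming a generic pretrained classifier from torchvision (the model choice, file path, and returned labels are illustrative assumptions, not the systems Google or Apple actually run), of how a single photo can be reduced to text labels that a search index can treat like any other text:

```python
# Minimal sketch: turning a photo into searchable text labels with a
# pretrained image classifier. Model, path, and outputs are illustrative;
# consumer photo search relies on far larger proprietary models.
import torch
from PIL import Image
from torchvision import models

WEIGHTS = models.ResNet50_Weights.IMAGENET1K_V2
MODEL = models.resnet50(weights=WEIGHTS).eval()
PREPROCESS = WEIGHTS.transforms()
CATEGORIES = WEIGHTS.meta["categories"]

def label_photo(path: str, top_k: int = 3) -> list[str]:
    """Return the top-k text labels for an image, so it can be indexed
    and searched the same way text has always been."""
    image = Image.open(path).convert("RGB")
    batch = PREPROCESS(image).unsqueeze(0)      # shape: (1, 3, H, W)
    with torch.no_grad():
        probs = MODEL(batch).softmax(dim=1)[0]  # class probabilities
    top = probs.topk(top_k)
    return [CATEGORIES[int(i)] for i in top.indices]

# Hypothetical usage: label_photo("hawaii/IMG_0042.jpg") might return
# ["volcano", "seashore", "promontory"], which a photo app can then match
# against a text query like "that volcano you toured in Hawaii".
```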
And while we’ve been talking to our phones, in the form of Siri and Google, for a while now,
Amazon’s placement of Alexa outside the context of the phone – as an assistant that’s tied to a
place, rather than a person – has supercharged the use of voice interfaces. Taken together, the
camera and microphone as discrete inputs will provide additional context, and greatly increase our virtual assistants’ ability to respond as a human would. It’s time to stop thinking about
cameras and microphones as digital updates to 20th century devices, and start thinking about
them as the eyes and ears of our software and services.
“Is this the right office for my 1pm meeting?”
“Looks like you’re on the right block, but let me see the building. Yes, you can enter at the red door on the right.”
We’ve already seen the first company founded on this principle: when the company rebranded from Snapchat to Snap, CEO Evan Spiegel stressed that it was not a social network but a camera company. And while the declaration was made alongside the announcement of Snap’s first hardware camera, Spectacles, the implications go far deeper. Snap is built on the notion that our modern, networked cameras are fundamentally different from their analog ancestors. We’ve written extensively about how Snapchat is preparing consumers for the
augmented reality future, but along with that will come incredible amounts of data, which Snap
will use to power its advertising platform. Spectacles won’t have to ask to see where you are or what you’re looking at, because they’ll already know. World Lenses look like a toy today, but they are already providing Snap with contextual data about where its users are and what they’re interested in.
Along with these new types of inputs come entirely new types of media. Virtual reality made waves last year with the launch of major new consumer platforms, while Pokémon Go and Snapchat are preparing consumers for the coming onslaught of augmented reality. The two are sides of the same coin: virtual reality is the purest media experience we’ve ever created, fully immersing us in the content, while augmented reality is the inverse, pushing media out to every corner of our physical world. Both are still nascent technologies, but together they represent the future of media, and they are fertile playgrounds for learning how to speak to the consumer of the future.
For brands looking to capitalize on these new interfaces, the work must start from the data: what information about your consumers would allow you to deliver a more personalized, tailored experience with your product? That customization will extend beyond the product itself, but it’s important that it begin from a position of improving the experience for the consumer. Once that data strategy is identified, we can partner with the relevant platforms to create experiences that access that data. In some cases, it might make sense for a brand to create its own Advanced Interface, if it can authentically create value. But that authenticity is key: you must offer a compelling use case, and you must carefully protect the data you do capture. An inauthentic experience will be a waste of resources, and badly secured user data will set back your customers’ trust by years. Improve both the experience and that trust, and you’ll generate a wealth of proprietary data, building a moat of brand loyalty.