Dunkin’ Donuts Tries Out Branded Selfie Lenses On Snapchat & Kik

What Happened
In honor of National Donut Day, Dunkin’ Donuts became the latest brand to experiment with sponsored selfie lenses, a novel, camera-based ad unit that has been gaining traction among brands seeking to reach younger consumers on messaging platforms. To celebrate the unofficial holiday, the Boston-based brand is running its first sponsored selfie lens on Snapchat, which turns a user’s face into a donut. Along with the lens, the brand is placing sponsored Geofilters in various locations around the country and running Snap Ads to promote a frozen coffee drink.

Meanwhile on Kik, Dunkin’ Donuts will be the first brand to try out the branded video sticker, which, similar to a selfie lens, overlays a sticker on a user’s face during video calls (although it does not integrate facial features the way lenses do). The brand created three different donut-themed video stickers that Kik users can use to goof up their video chats.

What Brands Need To Do
As face-altering lens features proliferate across messaging and social platforms, mainstream consumers are growing accustomed to these camera-powered AR features. This lays the groundwork for mobile-powered augmented reality to take off, which will allow brands to enter their target audience’s photos and videos via sponsored lenses or branded AR objects.

This is also a good time to think about ways for augmented reality to drive new opportunities for your brand. AR can, for example, be a great way for customers to envision your products in their lives, or to launch digital experiences from signage or product packaging. What we can do now through a smartphone is just the beginning. As Microsoft’s HoloLens, Magic Leap, and the rumored Apple glasses roll out over the next few years, a lot more will become possible.

 


Source: AdWeek

Microsoft Overhauls Skype In The Camera-Centric Mold Of Snapchat

What Happened
Microsoft released the latest update to its IM and video call app Skype on Thursday, bringing a complete overhaul to the app. Most significantly, the revamped app now puts the camera just one swipe away from chats, encouraging users to snap more pictures to share with each other. The app also added a Highlights section, which functions very similarly to the Stories feature that was popularized by Snapchat and imitated by Facebook’s messaging platforms.

What Brands Need To Do
It seems unlikely this overhaul alone is enough to put Skype back into competition with the other popular consumer messaging and social apps, given that it has neither the user base nor the engagement that its competitors enjoy. Last year, Microsoft shared that Skype had 300 million monthly active users when it introduced bots to the chat platform, an initiative that has gained little traction over the past year. In comparison, Facebook Messenger recently hit 1.2 billion monthly users, while Snapchat, an app much younger than Skype, now has over 166 million daily active users.

Nevertheless, this update underlines Microsoft’s intention to bring Skype up to speed in the messaging space and better cater to the shifting consumer preference toward ephemeral sharing. It also points to a larger trend in mobile UX design, in which the camera starts to take center stage as it increasingly becomes an input source for capturing content and understanding user intent.

For more information on how brands may tap into the rapid development in camera-based mobile AR features to create engaging customer experiences, please check out the Advanced Interfaces section of our Outlook 2017.

 


Source: TechCrunch

Header image courtesy of Skype

Why Smart Home Appliance Maker Whirlpool Is Buying Recipe Site Yummly

What Happened
Whirlpool, a leading manufacturer of connected home appliances, announced on Thursday that it is acquiring Yummly, a recipe site with personalized recommendations and search. Beyond recipes, Yummly has also partnered with Instacart for one-hour grocery delivery in select cities.

This acquisition will allow Whirlpool to tap into Yummly’s recipe database to enhance its products, for example by creating better Alexa skills for its smart kitchen appliances, which added support for Amazon’s digital voice assistant earlier this year. With the vast library of personalized recipes, the company could build a smart fridge that offers suggestions for home-cooked meals.
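To make the idea concrete, here is a minimal, hypothetical sketch of what a recipe-suggestion skill handler might look like. The recipe data, matching logic, and intent shape are illustrative assumptions, not Whirlpool’s or Yummly’s actual API; only the Alexa-style JSON response envelope follows the standard skill response format.

```python
# Hypothetical sketch: an Alexa-style skill handler that suggests a recipe
# based on ingredients a smart fridge reports as available. The recipe data
# and matching logic are illustrative only.

RECIPES = {
    "veggie omelette": {"eggs", "cheese", "peppers"},
    "pancakes": {"eggs", "flour", "milk"},
    "grilled cheese": {"bread", "cheese", "butter"},
}

def suggest_recipe(available_ingredients):
    """Return the recipe whose ingredient list best overlaps what's on hand."""
    available = set(available_ingredients)
    return max(RECIPES, key=lambda name: len(RECIPES[name] & available))

def handle_intent(request):
    """Build an Alexa-style JSON response for a recipe-suggestion intent."""
    recipe = suggest_recipe(request.get("ingredients", []))
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": f"With what you have, you could make {recipe}.",
            },
            "shouldEndSession": True,
        },
    }

resp = handle_intent({"ingredients": ["eggs", "milk", "flour"]})
print(resp["response"]["outputSpeech"]["text"])
# → With what you have, you could make pancakes.
```

The overlap-counting heuristic stands in for the personalized recommendations Yummly actually provides; a production skill would rank against user preference data rather than a hard-coded table.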

More important, perhaps, is the vast amount of user data on food and grocery preferences, as well as cooking-related behavioral data, that Whirlpool will gain with this acquisition. Whirlpool can apply insights from this data to better cater to customers’ kitchen habits and improve its products accordingly.

What Brands Need To Do
This acquisition underscores Whirlpool’s commitment to creating connected home appliances with a superior user experience. As one of the world’s largest digital recipe platforms, Yummly boasts more than 20 million registered users, and the user data and cooking-related information that Whirlpool will now have access to will give the company a nice edge as competition in the smart home space heats up. More brands should be thinking about how they can gain access to relevant consumer data and use it to supercharge their products or services.

 


Source: TechCrunch

Amazon Adds Computer Vision To Alexa With New Echo Look Device

What Happened
On Wednesday, Amazon unveiled a new Echo device, and it is a big departure from the smart speakers introduced in the Echo lineup so far. Named Echo Look, the newest hardware product from Amazon looks more like a desktop security camera than a connected speaker. But Amazon is actually positioning the device as a hands-free selfie camera and style assistant. Equipped with built-in LED lighting and a computer vision-based background blur feature, it promises to capture the best full-length pictures and videos of you in different outfits for review, comparison, and style recommendations.

Echo Look comes with a companion app that has a Style Check feature, which compares two outfits and rates which is better based on machine learning algorithms. The feature was first added to the iOS Amazon app last month as Outfit Compare and does not require an Echo Look to work. And because it is powered by Alexa, you can ask Echo Look to read you the news, play music, or access any of the more than ten thousand third-party Alexa skills, just as you would with the other Echo speakers. The product is available by invitation only for now, and Amazon has not announced when it will become widely available.

What Brands Need To Do
By introducing computer vision into the Echo lineup, Amazon is making a strong push to enhance Alexa’s capabilities. Positioning the new device as a “hands-free camera and style assistant,” as Amazon’s product page reads, is a strong reaffirmation of Amazon’s ambition to conquer the fashion industry. Echo Look will give Amazon access to millions of its customers’ wardrobes, allowing it to glean from the user-generated pictures a huge amount of data on what its customers like to wear and would most likely buy.

Moreover, it seems safe to assume that this is merely the first step in Alexa’s evolution. By adding cameras to Alexa-powered devices, the voice assistant now has “eyes” and no longer has to rely solely on voice commands for input. With the camera as an input source, combined with machine learning and object recognition, Alexa will grow much more powerful over time.

Once Amazon allows developers using the Alexa Voice Service (AVS) to incorporate visual input into their Alexa-powered products, the new kinds of smart home devices that result will present an exciting opportunity for brands to engage with customers. For example, when a smart fridge can see that you are about to run out of milk and trigger Alexa to remind you, CPG brands and food retailers will have to rethink their marketing strategies and product design to accommodate these conversational smart home devices and the new shopper behaviors they engender.

If you’d like to learn more about how to effectively reach consumers on conversational interfaces, or to leverage the Lab’s expertise to take on related client opportunities within the IPG Mediabrands, please contact our Client Services Director Samantha Barrett ([email protected]) to schedule a visit to the Lab.

 


Source: Amazon

Images courtesy of Amazon’s product demo video

 

Facebook Rolled Out Special Camera Frame For Earth Day

What Happened
Facebook celebrated Earth Day this past Saturday with a themed camera frame, marking the latest seasonal frame to debut on the social network. The camera frames, which are static overlays for photos and videos taken in Facebook’s main app and Messenger app, are now open to third-party developers as part of the Camera Effects platform that the company launched last week at its annual developer conference F8. Facebook said the first batch of camera frames created by third-party developers will start to roll out in May, with more selfie lenses and face-tracking effects coming in June.

What Brands Need To Do
The arrival of new camera frames is but one result of Facebook’s launch of the new Camera Effects platform, which enables developers to create third-party mobile AR experiences inside its main apps. While static picture frames may not seem like the most high-tech thing as far as digital marketing experiences go, the fact that brands can use Facebook’s platform tools to easily create branded frames that could reach Facebook’s nearly 2 billion active users worldwide is a big deal. And that is just the starting point for brands to explore the camera as the newest platform. To properly insert their brands into the media that Facebook users share and earn organic impressions, brands need to work with developers to create fun, interesting, and engaging camera effects that users will want to use and share.

For a more in-depth analysis on Facebook’s announcements at this year’s F8, check out our latest Fast Forward here.

 


Source: AdWeek

The Next Digital Interface Could Be Controlled By Your Facial Expressions

What Happened
Following the rise of voice-activated conversational interfaces powered by personal AI assistants like Amazon’s Alexa, Microsoft’s Cortana, and Google Assistant, the next big thing in digital interaction could be hands-free devices controlled by facial expressions.

Denys Matthies, a computer interaction researcher, created prototype earbuds that can detect the wearer’s facial expressions and turn them into commands to control their phone. As The Verge explains:

The earbuds come with special electrodes that recognize the shape of the user’s ear canal using an electrical field. This bends and flexes in a consistent fashion when people pull certain faces, allowing the earbuds to detect specific expressions. So far, they can pick up on five separate movements with 90 percent accuracy: smiling, winking, making a “shh” sound, opening your mouth, and turning your head to the side. With these hooked up to your phone, you could open texts, play and pause your music, and so on, all without using your hands.
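To make that interaction concrete, here is a toy sketch of how detected expressions might be dispatched to phone actions. The five expression labels come from the article, but the action names, dispatch table, and confidence gating are hypothetical assumptions, not the prototype’s actual API.

```python
# Toy sketch: dispatch the five expressions the prototype recognizes
# (per The Verge) to phone actions. The action names and the dispatch
# mechanism are illustrative assumptions only.

EXPRESSION_ACTIONS = {
    "smile": "open_texts",
    "wink": "play_music",
    "shh": "mute_notifications",
    "open_mouth": "pause_music",
    "head_turn": "dismiss_alert",
}

def dispatch(expression, confidence, threshold=0.9):
    """Trigger an action only when the classifier is confident enough
    (the prototype reports roughly 90 percent accuracy)."""
    if confidence < threshold or expression not in EXPRESSION_ACTIONS:
        return None  # ignore low-confidence or unknown detections
    return EXPRESSION_ACTIONS[expression]

print(dispatch("wink", 0.95))  # → play_music
print(dispatch("smile", 0.6))  # below threshold → None
```

The confidence threshold matters in practice: with a hands-free interface, a false positive means the phone acts on a face the user never meant to pull, so gating on classifier confidence trades responsiveness for fewer accidental commands.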

What Brands Need To Do
While still in the early stages of development, this earbud prototype offers a glimpse into the future of digital interactions, enabling a more intuitive and private user experience. It would be especially suitable for hearable devices, allowing for non-verbal feedback. Moreover, this kind of facial-expression-powered interface could theoretically also be achieved with face-tracking technologies, making it viable for devices other than earbuds. If realized and mass-commercialized, such devices could usher in a brand-new era of brand-customer interactions.

In the larger picture, the earbud is but one example of the kind of emerging advanced interfaces that are changing the way brands will deliver their customer experiences and marketing messages. The smartphone supply chain has given rise to powerful, cheap computing components that can augment many parts of our bodies, our homes, and our cities with intelligence. We think of these things as IoT devices or wearables, but when backed by machine learning in the cloud and linked up with other devices, they create totally new kinds of advanced interfaces for media. For more on this topic, check out the first section in our Outlook 2017 report.

 


Source: The Verge