A month after the initial reports about Google planning to integrate ad-blocking features into its Chrome web browser, the search giant confirmed the plan on Thursday, announcing that it will preinstall an ad filter in Chrome starting in early 2018. Chrome is the most used web browser in the world, holding over 54% of the global browser market share as of this month.
As previously reported, the ad-blocker will aim to filter out ads deemed “annoying” by the standards of the Coalition for Better Ads, a group that Google formed with advertising companies to improve online ad experiences. Google says it will give publishers at least six months to prepare for the ad filter coming next year, offering a self-service tool for analyzing ads on their sites to make sure they meet the standards set by the Coalition.
What Brands Need To Do
Back in 2015, when ad blockers were taking off and becoming a hot topic in the media industry (partly thanks to Apple’s decision to enable ad-blocking extensions in Safari in iOS 9), we advised brands to counter the trend of Ad Avoidance, in which subpar online ad experiences and increasingly ad-free options drive online users to actively avoid ads, by trying new ad formats, experimenting with sponsored and branded content, and generally improving their online ad experiences. Two years later, new unblockable ad formats, such as sponsored selfie lenses and VR product placements, have emerged or become viable for brands to explore.
Since then, ad-blocker adoption has continued to grow worldwide, particularly on mobile. The 2017 Internet Trends report by Mary Meeker pointed out that nearly 400 million mobile devices ran ad blockers last year. The upcoming Chrome ad-blocker will only serve to accelerate the mainstream adoption of ad-blockers, further pushing publishers to optimize their ad experiences. For brands, this should be an opportunity to start exploring digital ad formats that are better integrated into the user experience.
Earlier this week, Google rolled out a new measurement tool to help advertisers measure the offline impact of their digital ads by tracking offline purchases people make in merchants’ brick-and-mortar stores after clicking on their digital ads. Through partnerships with consumer data and credit card companies, Google has access to roughly 70% of U.S. credit and debit card transactions, which it now matches with ad clicks to automatically inform merchants when their digital ads translate into sales at a physical store.
What Brands Need To Do
In October, Google launched a Conversions API for its display ad network DoubleClick, which enables advertisers to connect offline activities like in-store purchases and phone bookings to their online campaigns. Measuring offline sales and store visits driven by digital ads is especially important for retailers and CPG brands, as those metrics are the typical KPIs for measuring their campaigns. As Google builds out their attribution tools, brands should learn to leverage the data these tools provide to better inform their campaigns.
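To make the matching mechanics concrete, here is a minimal sketch of offline attribution, assuming a shared, hashed customer identifier links ad clicks to in-store transactions. All names, record layouts, and the 30-day window are hypothetical illustrations, not Google’s actual implementation:

```python
from datetime import datetime, timedelta

# Assumed attribution window -- for illustration only.
ATTRIBUTION_WINDOW = timedelta(days=30)

def attribute_offline_sales(ad_clicks, transactions):
    """Match each in-store transaction to an earlier ad click that shares
    a (hashed) customer identifier and falls within the attribution window."""
    attributed = []
    for txn in transactions:
        for click in ad_clicks:
            elapsed = txn["time"] - click["time"]
            if (click["customer_id"] == txn["customer_id"]
                    and timedelta(0) <= elapsed <= ATTRIBUTION_WINDOW):
                attributed.append({"campaign": click["campaign"],
                                   "amount": txn["amount"]})
                break  # credit the first matching click only
    return attributed

clicks = [{"customer_id": "a1", "campaign": "spring_sale",
           "time": datetime(2017, 5, 1)}]
txns = [{"customer_id": "a1", "amount": 42.5,
         "time": datetime(2017, 5, 10)}]
print(attribute_offline_sales(clicks, txns))
# [{'campaign': 'spring_sale', 'amount': 42.5}]
```

The real system presumably operates on aggregated, anonymized card data at far larger scale, but the core join-by-identifier-within-a-window logic is the same idea.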
This is a special edition of our Fast Forward newsletter, bringing you a summary of the major announcements from Google’s 2017 I/O developer conference. A fast read for you and a forward for your clients and team.
- Google Lens brings computer vision to Google Assistant and Photos
- Google Assistant receives major upgrades & branches out into connected cars
- Expansion of the Daydream VR platform propels VR development forward
- Android O brings a more fluid user experience, with Android Go targeting the “next billion mobile users”
On Wednesday, Google kicked off its annual I/O developer conference at the Shoreline Amphitheater in Mountain View, CA. CEO Sundar Pichai took the stage to lead the main keynote address, where he laid out the key developments in several of Google’s areas of interest, including AI, voice assistants, virtual reality, and more. TechCrunch has a comprehensive round-up of everything that Google announced, but we have an exclusive take on what it means for brands.
Google Lens Adds Computer Vision To Google Services
The most significant announcement coming out of this year’s Google I/O conference is the debut of Google Lens, a set of computer vision features that allow Google services to identify what the camera captures and collect contextual data via images. Google has been using similar technology in the Google Translate app (built off its 2014 acquisition of Word Lens maker Quest Visual) to automatically translate words that the camera captures in real time. Now, Google is adding this capability to Google Assistant and, later this year, to Google Photos as well.
Equipped with computer vision capabilities, Google Assistant gains the “eyes” it needs to see what users are looking at and understand their intent. Google demoed several such scenarios on stage: pointing the camera at a restaurant’s storefront to receive standard business information and reviews surfaced via Zagat and Google Maps, pointing it at an unidentified flower to ask Google Assistant to identify it, or pointing it at a concert poster to prompt the Assistant to find out how to buy tickets for the event. Lens allows Google Assistant to tap the smartphone camera as an input source, informing user intent and creating a more frictionless user experience.
For Google Photos, the addition of Google Lens’ computer vision capabilities makes the cloud photo storage service better at identifying the people in your photos and picking out the best shots in your photo library. This facilitates a new feature called Suggested Sharing, in which Google Photos will prompt you, with a single tap, to share AI-selected photos with the people who appear in them. Users on the receiving end of the shared albums will also be prompted to add the pre-selected photos to the mix.
One additional feature powered by Google Lens is the Visual Positioning Service (VPS), which works like an indoor GPS, allowing Android devices to map out a specific indoor location and guide users to a specific store in a mall or a specific item in a grocery store with turn-by-turn navigation. VPS is already working in select partner museums and Lowe’s home improvement stores if you happen to have one of the two Tango-enabled devices currently on the market. This advanced AR feature will also appear in the next Tango device, the ASUS ZenFone AR, due out this summer.
The introduction of Google Lens brings the search giant up to speed in consumer-facing AR development. Two of Google’s biggest competitors, Facebook and Amazon, recently unveiled their own takes on the “camera-as-input” trend with the launches of the Camera Effects Platform and the Echo Look, respectively. For Google, the launch of Lens is all the more significant, as it officially extends Google’s core function, search, into the physical world and opens the door to more offline use cases. That, in turn, massively increases the addressable market of searchable data and creates a virtuous cycle in which Google can leverage that image data to fuel its AR and machine learning initiatives.
Google Assistant Grows More Capable With New Features
Beyond the major addition of computer vision capabilities, Google Assistant is getting some other new features to help it stay competitive against Amazon’s Alexa and other digital voice assistants. Among the slew of new features announced on stage, two stood out to us for their versatile use cases and accessibility for developers.
First up, Actions, Google’s version of ‘skills’ or ‘apps’ for Google Assistant, added support for digital transactions. This allows Google Home and some Android phone users to shop online by conversing with Google Assistant, which will access payment methods and delivery addresses stored in Android Pay for a seamless checkout experience. The feature will launch first with Panera as a third-party partner.
This crucial update will allow more businesses to build mobile ordering and online shopping features into their Google Actions. Previously, Google Assistant could only place orders with partnering Google Express retailers, such as Costco, Whole Foods Market, Walgreens, PetSmart, and Bed Bath & Beyond. The Assistant also gained the ability to check inventory at local stores for product availability before users take a trip to the store.
Second, Google Assistant can now respond by sending visuals to your smartphone or TV via Chromecast. Dubbed “Visual Responses,” this important addition enables developers to surface text, images, videos, and map navigation in response to user requests. Allowing for a variety of responses helps diversify Google Assistant’s replies beyond voice and adds texture to the user experience. Supporting multiple displays extends Google Assistant to more platforms, allowing users to choose the optimal screen to engage with. This new feature comes just a week after Amazon unveiled the Echo Show, which also introduced a visual component to Alexa’s voice-based conversational interface.
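To illustrate the concept, here is a hypothetical sketch of a response that pairs a spoken reply with a visual card destined for a screen. The field names and structure are illustrative assumptions only, not the actual Actions on Google schema:

```python
# Hypothetical response structure -- illustrative only, not the real
# Actions on Google payload format.
def build_visual_response(speech, card_title, image_url):
    """Pair a spoken reply with a visual card that a phone screen or a
    Chromecast-connected TV could render alongside the voice answer."""
    return {
        "speech": speech,            # what the Assistant says aloud
        "visual": {                  # what gets pushed to a display
            "type": "card",
            "title": card_title,
            "image_url": image_url,
        },
    }

resp = build_visual_response(
    "Here's the route to the store.",
    "Store directions",
    "https://example.com/map.png",
)
print(resp["visual"]["title"])  # Store directions
```

The key design idea is that voice and visuals travel together in one response, so the Assistant can degrade gracefully to speech-only on a screenless device like Google Home.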
Beyond these two key updates, Google Assistant is also gaining several other features that make it smarter and more useful. They include:
- A “proactive assistance” feature that allows Google Assistant to automatically alert you about travel, weather, and calendar updates by silently showing a spinning light-up ring on Google Home. Users can hear the updates by asking “OK Google, what’s up?” It is unclear when this notification-lite feature will roll out.
- Hands-free phone calls to U.S. and Canadian numbers. It works similarly to Amazon’s recently released Alexa voice calling, but with the added ability to dial real phone numbers. Unlike Amazon’s service, it supports only outbound calls for now, as Google says it wants to be “mindful of customer privacy.”
- New entertainment integrations including the free tier of Spotify, SoundCloud, HBO, Hulu, CBS All Access, and some other popular music and video content streaming services. This allows users to ask Google Assistant to play a specific show or song, provided they have installed the corresponding apps on their devices.
- Text input for Google Assistant, which allows users to interact with the Assistant on Android devices by typing out their requests instead of speaking them out loud.
- Google also reminded the audience that Google Assistant will be coming to connected cars, as the company announced on Monday that Volvo and Audi are building new models that will run on Android systems.
Beyond these new features, Google is also aggressively expanding the Assistant to more platforms, announcing it will become accessible on the Android TV OS later this year as well as on iPhones and iPads via Google’s iOS app. The update to the Android TV platform will be accompanied by a brand-new launcher, allowing users to use voice commands to access the over 3,000 Android TV apps available in the Play Store. According to Google, the Assistant is currently available on over 100 million devices. Notably, that’s a fraction of the 2 billion Android devices on the market, and doesn’t reflect user adoption. (For comparison, Apple’s Siri is currently available on 1 billion devices.)
In addition, Google is also following Apple’s lead by processing AI-powered features locally on mobile devices as well as in the cloud. This improves app performance and security, and also enables Google Assistant to adapt to a user’s specific preferences more quickly.
Standalone Daydream VR Headsets Aim To Broaden Consumer Appeal
It’s been a full year since Google unveiled its VR platform, Daydream, and so far, only a handful of compatible handsets have been released. Facing mounting competition in the VR space, Google is taking another stab at virtual reality with new Daydream-enabled phones from partners and a new standalone headset form factor.
On the handset front, Google announced that Daydream will be supported by the new Samsung Galaxy S8 phones later this summer. Since the Galaxy S8 is the best-selling line of Android phones, this is a big win for Google, even though Samsung continues to support its own platform, Gear VR, which is powered by rival Facebook’s Oculus. Plus, the upcoming flagship phone from LG will also support Daydream VR, making the platform considerably more accessible for mainstream users.
Google is teaming up with HTC Vive and Lenovo to build an untethered, standalone VR headset, allowing an immersive experience without additional phone or PC hardware. The headsets will support inside-out tracking, using the “WorldSense” technology from Google’s Tango AR platform to track virtual space and make sure your view in VR matches up with your movements in the real world, without the need for external cameras or sensors. This move puts Google in the company of Oculus and Intel, both of whom have shown off early standalone headsets with self-contained tracking systems.
Fluid UI Design For Android O & Android Go For Emerging Markets
Near the end of the opening keynote, Google turned its attention to the next Android mobile OS, Android O. The preview highlighted a more fluid UI design, which includes features such as a Picture-in-Picture mode for multitasking while watching videos or during video calls, a customizable notification dot system, and machine learning-powered smart text selection that makes it easier to select text to copy and paste.
In addition, Google also launched a new data-conscious version of Android O named Android Go, targeting emerging global markets where mobile connectivity is still developing. Android Go is a modified version of Android for lower-end handsets, complete with apps optimized for low bandwidth and memory. Google says Android devices with less than 1GB of RAM will automatically get Android Go starting with Android O, and it is committing to releasing an Android Go variant for every future Android OS release. Google previously created a similar low-cost Android OS for emerging markets called Android One, which initially rolled out in India, Pakistan, Bangladesh, Nepal, Indonesia, and other countries in 2014.
What Brands Need To Do
Google’s announcements at this year’s I/O event map closely to two trends emphasized in our Outlook 2017. The introduction of Google Lens marks Google’s official entry into camera-based mobile AR (the Tango AR platform is too inaccessible to count), a leading element of the Advanced Interfaces trend. The notable updates that Google Assistant received, in particular the computer vision capabilities that Google Lens brings, make the voice assistant a more helpful and intuitive Augmented Intelligence service for users. And the expansion of the Daydream VR platform shows Google’s continued investment in virtual reality, another facet of the evolution of advanced digital interfaces.
The integration of Google Lens into Google Assistant opens some exciting new opportunities for brands to explore. For example, CPG brands may consider working with Google to make sure that Android users can use Lens to correctly identify their products and receive accurate information. For retailers, the addition of the VPS feature holds great potential for in-store navigation and AR promotions once it becomes available on a greater number of mobile devices.
The new features coming to Google Assistant make it a more capable contender in the fight against Amazon’s Alexa. In particular, the support for handling transactions and the “Visual Responses” feature should offer brands great opportunities to drive direct sales and engage customers with a multimedia experience. For auto brands in particular, the integration of Google Assistant into some of the upcoming connected cars brings new use cases for engaging with car owners via conversational experiences. The addition of Visual Responses means it is now possible to deliver additional content about your products, be it videos or images, via Google Assistant, adding a visual component that is crucial for fashion and beauty brands.
In terms of VR, Google’s initiatives should help expand the accessibility of its VR platform and get more users to watch the 360-degree and VR content available on YouTube and other Google platforms. For brands, this means increased opportunities to reach consumers with immersive content on Google-owned platforms. As more mainstream tech and media companies rush into VR to capitalize on the booming popularity of the emerging medium, brand marketers should start developing VR content that enhances their brand messaging and contributes to their campaign objectives.
How We Can Help
While mobile AR technologies and standalone VR devices are still in early stages of development, brands can greatly benefit by starting to develop strategies for these two emerging areas. If you’re not sure where to start, the Lab is here to help.
The Lab has always been fascinated by the enormous potential of AR and its ability to transform our physical world. We’re excited that Google is bringing computer vision to Android devices, which will allow AR experiences delivered through Google Assistant to reach millions of users. If you’d like to discuss how your brand can harness the power of AR to engage your customers and create extra value, please reach out and get in touch with us.
The Lab has extensive experience in building Alexa Skills and other conversational experiences to reach consumers on smart home devices. So much so that we’ve built a dedicated conversational practice called Dialogue. The Zyrtec AllergyCast Alexa skill that we collaborated with J3 to create is a good example of how Dialogue can help brands build a voice customer experience, supercharged by our stack of technology partners with best-in-class solutions and an insights engine that extracts business intelligence from conversational data.
As for VR, our dedicated team of experts is here to guide marketers through the distribution landscape. We work closely with brands to develop sustainable VR content strategies to promote branded VR and 360 video content across various apps and platforms. With our proprietary technology stack powered by a combination of best-in-class VR partners, we offer customized solutions for distributing and measuring branded VR content that truly enhance brand messaging and contribute to the campaign objectives.
If you’d like to know how the Lab can help your brand tap into the tech trends coming out of Google I/O this year to supercharge your marketing efforts, please contact our Client Services Director Samantha Barrett (firstname.lastname@example.org) to schedule a visit to the Lab.
Google’s Daydream VR platform is getting its first major software revamp later this year, Google revealed during a keynote focused on VR and AR on Day 2 of its I/O developer conference. The update, codenamed Daydream Euphrates, will roll out to all phones with Daydream support as well as the standalone VR headset it is making with HTC and Lenovo.
One of the biggest new features is Chrome VR, which will let Daydream owners browse the web inside VR and launch WebVR-based content when it rolls out this summer. All bookmarks and other personalizations will also be synced to it once you sign in with your Google account.
In addition, Google is adding a cast option that lets you mirror the VR screen on a TV via Chromecast, so other people can see what you’re seeing in VR. New screenshot and screen-capture features are also being added to facilitate sharing. YouTube is also getting a VR space where you can connect with friends and watch videos as if you were in the same room.
What Brands Need To Do
Altogether, this update brings some new features that make Daydream VR more user-friendly and functional, helping the platform stay competitive as the race to bring VR to the mass market heats up. The launch of a standalone Daydream VR headset that works without an Android phone can be a great way for Google to attract iPhone users. As VR platforms continue to mature, it is time for most brands to come up with a VR marketing strategy and start thinking about how VR content may help strike a deeper connection with their customers.
How We Can Help
Our dedicated team of VR experts is here to guide marketers through the distribution landscape. We work closely with brands to develop sustainable VR content strategies to promote branded VR and 360 video content across various apps and platforms. With our proprietary technology stack powered by a combination of best-in-class VR partners and backed by the media fire-power of IPG Mediabrands, we offer customized solutions for distributing and measuring branded VR content that truly enhance brand messaging and contribute to the campaign objectives.
If you’d like to learn more about how the Lab can help you tap into the immersive power of VR content to engage with customers, please contact our Client Services Director Samantha Barrett (email@example.com) to schedule a visit to the Lab.
Source: 9to5Google
Google is set to kick off its 2017 I/O Developer Conference on Wednesday to announce some of its latest software and hardware news. As in years past, the Lab has been keeping close tabs on Google, with special interest in the development of Google Assistant and Google Home. Here’s a round-up of all the news Google has announced so far, along with what we expect to see from this year’s Google I/O event.
Android-Powered Connected Cars
Google is teaming up with Audi and Volvo to ship car systems running on the Android operating system. This means cars running an Android infotainment system will also include Google Assistant, allowing car owners to use voice commands to carry out various tasks such as searching on the go, asking for directions, and making phone calls. Google is expected to show off live demonstrations of the operating system running on the Audi Q8 and Volvo V90 at the I/O event.
Conversational Interfaces And Voice Assistant
Google has also updated Allo, the messaging app it introduced last year that has yet to gain much traction among mobile users, adding selfie-generated stickers. Google is also making it easier for Allo users to add people to group chats by supporting QR codes for groups.
Speaking of Google Assistant, the company is also reportedly working on bringing the voice assistant to iOS devices by adding it to the Google Search iOS app. This mirrors the tactic Amazon deployed to get Alexa onto iOS, and although it likely won’t guarantee a big jump in usage, it does significantly boost the accessibility of Google’s AI-powered assistant for iOS users.
For this year’s event, we expect to see major updates to Google Assistant as well as new hardware partners, as Google continues to duke it out with Amazon in the smart speaker space. So far, Amazon is leading that emerging market with a 70% market share, thanks to the first-mover advantage it scored with the Echo products. Google Home is a distant second with a 23.8% share, which means Google still has a lot of catching up to do.
Standalone Daydream VR Headset
Outside the conversational assistant and smart home space, we also hope to see some updates regarding Google’s Daydream VR. First launched at last year’s Google I/O event, the Daydream VR system has not gained much momentum in consumer adoption, largely hindered by the limited number of mobile handsets supporting it. Google is reportedly going to demo a “standalone Daydream VR headset” at this year’s I/O event, according to Variety.
Beyond these key areas of interest, we also expect to see more announcements on the next generation of the Android OS, Chrome OS, Instant Apps, Android Wear, and Android TV.
Please check back later this week for the Lab’s in-depth analysis of all the things marketers need to know from Google’s I/O conference event this year. Follow us on Twitter @ipglab for our live updates.
Sources: As linked in the post
Google plans to take on online ads that cause subpar user experiences by adding a built-in ad-blocker to its Chrome browser, the most popular web browser in the world. The ad-blocker will target “unacceptable ads” as defined by the Coalition for Better Ads, an online ad regulation group that Google is a member of. The Coalition’s Better Ads Standard, released last month, calls out pop-ups, autoplay video ads with sound, interstitial ads with countdowns, and large “sticky” ads as “below the threshold of consumer acceptability.” In addition, Google is reportedly also considering blocking all ads on sites that run ads failing to meet those standards.
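As a toy illustration of how such a filter might work, the sketch below checks ads against a simplified list of the disallowed formats named above. The labels and logic are hypothetical, not Chrome’s actual filtering code:

```python
# Simplified labels for the formats the Better Ads Standard calls out;
# the real standard distinguishes desktop vs. mobile format lists.
DISALLOWED_FORMATS = {
    "popup",
    "autoplay_video_with_sound",
    "countdown_interstitial",
    "large_sticky",
}

def is_acceptable(ad):
    """Toy check: an ad passes only if its format is not disallowed."""
    return ad["format"] not in DISALLOWED_FORMATS

ads = [
    {"id": 1, "format": "static_banner"},
    {"id": 2, "format": "autoplay_video_with_sound"},
]
print([ad["id"] for ad in ads if is_acceptable(ad)])  # [1]
```

The reported site-wide blocking would be a second layer on top of this: if any ad on a page fails the check, every ad on that site could be suppressed.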
What Brands Need To Do
We first called out the trend of Ad Avoidance, in which subpar online ad experiences and increasingly ad-free options drive online users to actively avoid ads, back in 2015, when ad-blockers started taking off among Internet users, partly triggered by Apple allowing ad-blocking extensions in its Safari browser on iOS devices. This upcoming Chrome ad-blocker will only serve to accelerate the mainstream adoption of ad-blockers, further pushing ad servers and publishers to optimize their on-site ad experiences. For brands, this should be an opportunity to start exploring newer digital ad formats that are better integrated into the user experience, such as in-feed social ads or branded content.
Source: Wall Street Journal
Following the addition of “Similar Items” in image search earlier this week, Google is doubling down on surfacing related fashion products with a new “Style Ideas” feature. For fashion search results in the Google Android app and on mobile browsers, this feature will present a set of visually similar items, outfit montages, as well as real-life images featuring that item. The end result is somewhat akin to a Pinterest board created around that specific fashion product. Google says the “Style Ideas” images are algorithmically selected without human involvement, using Google’s machine learning capabilities to identify images featuring the product in question.
What Brands Need To Do
As the follow-up to “Similar Items,” this new feature signals Google’s ambition to turn image search into a fashion shopping tool. While “Similar Items” only covers fashion accessories and shoes, “Style Ideas” will show up in image search results for apparel. Together, these two features revitalize image search as a viable product discovery channel that fashion brands will need to pay attention to. While the feature currently runs on machine learning algorithms and is not monetized in any way, it is not hard to imagine it becoming a new ad product where fashion retailers selling the same items can bid to appear first.
Source: The Verge
Google has quietly rolled out a new “Similar Items” feature within Google Image Search, on both the mobile web and Android’s Google app, which uses machine learning to surface similar-looking products matching what users are looking for. So far, this is limited to fashion and lifestyle products, such as sunglasses, handbags, or shoes, but Google says it will expand in the coming months to cover other apparel and home & garden categories. The feature will not only recognize what the objects in the image are, down to their brand and model, but also surface their price and availability. The feature also includes links to buy those similar-looking items on ecommerce sites.
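A common way to implement this kind of visual similarity, and plausibly the idea behind it (Google hasn’t published details), is nearest-neighbor search over image embeddings: each product photo is mapped to a vector, and the catalog items whose vectors sit closest to the query’s are returned. A minimal sketch with toy vectors:

```python
import numpy as np

def most_similar(query_vec, catalog_vecs, k=2):
    """Rank catalog items by cosine similarity to the query embedding
    and return the indices of the top-k matches."""
    q = query_vec / np.linalg.norm(query_vec)
    c = catalog_vecs / np.linalg.norm(catalog_vecs, axis=1, keepdims=True)
    scores = c @ q                     # cosine similarity per catalog item
    return np.argsort(-scores)[:k]     # best matches first

# Toy 3-D "embeddings" for a query handbag and four catalog products.
query = np.array([0.9, 0.1, 0.0])
catalog = np.array([
    [0.8,  0.2,  0.0],   # a similar handbag
    [0.0,  1.0,  0.0],   # shoes
    [0.95, 0.05, 0.0],   # a near-identical handbag
    [0.0,  0.0,  1.0],   # sunglasses
])
print(most_similar(query, catalog))  # the two handbags rank first
```

Real systems use embeddings with hundreds of dimensions produced by a trained vision model and approximate nearest-neighbor indexes for scale, but the ranking principle is the same.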
What Brands Need To Do
This is an interesting play on Google’s part to build out its shopping-related features and explore how computer vision may help it breathe new life into its search ads. This new feature revitalizes image search as a viable product discovery channel that brands will need to pay attention to. As users can also search by images, this can easily become a way for consumers to snap a picture of a physical product and quickly find similar items to buy online. While this feature currently runs on machine learning algorithms and is not monetized in any way, it is not hard to imagine how this could easily become a new ad product where online retailers selling the same items can bid to appear at the front of the carousel.
Source: Marketing Land
Google is ramping up brand integrations on its popular in-car navigation app Waze, starting with the addition of an “order-ahead” feature that allows drivers to order from certain partnered QSR brands and retailers straight from the Waze app. At launch, Dunkin’ Donuts was announced as the first partner, and Google says it plans to team up with other merchants if the test goes well. Outside Waze, mobile order-ahead apps have also been gaining traction with other QSR brands such as Taco Bell, Starbucks, and more recently, McDonald’s.
What Brands Need To Do
The goal behind app integrations like this is to create a seamless mobile experience by allowing users to complete several tasks without having to jump between apps, such as ordering coffee and doughnuts for pickup while navigating through their morning commute. A similar example would be when Starwood Hotels integrated Uber services into their own mobile app to enable quick ride-hailing for people arriving or checking out of the hotel. From a brand perspective, choosing the right partner to integrate with can help boost brand loyalty, and integrations with popular apps serve as a valuable marketing channel for brands to reach new customers on mobile.