Stupeflix: User-Friendly Web-Based Video-Creation

I know there is typically no hyphen between “video” and “creation”, but hey, I was on a roll.

This afternoon we played around with the web tool from Stupeflix, a Paris-based company that is breaking new ground in user-generated content creation. Using their web-app, I was able to create a snappy photo-montage video in about 10 minutes. You can pick one of their pre-defined templates (I picked “scrapbook”), and the tool lets you upload your own soundtrack or choose from their royalty-free options. You can also upload your own videos, which it plays as part of the slideshow.

Creating the video was very fast, and it would have been even faster if I’d just pulled in pictures from Facebook or Instagram rather than digging through our server for CES pictures. Stupeflix was recently included in a Google-curated selection of web-apps for Chromebooks aimed at the educational market.

Check out our video here!

Cool, but perhaps not the best setting for archery

Here at the lab we’re very interested in Microsoft Kinect and all of the possibilities it opens up for marketers. We also keep an eye on its more traditional use, as a peripheral for Xbox. Lately we’ve been fond of the Official 2012 Olympics game, especially the archery event powered by Kinect. I think some of us may actually be good enough at it to try real archery. Maybe?

But I digress. We recently saw an interesting article about Microsoft potentially incorporating Kinect technology into automobiles. Obviously not for playing the Olympics games, but there are all sorts of interesting things you can do with an inward-facing 3D camera: you can know how many people are in the vehicle and where they’re sitting, and you can control things, including media, using gesture and voice. Hopefully this can help reduce distracted driving while broadening the interactive possibilities for reaching people in the in-car environment. So although it may not save our archery score, it might save some lives – or at least be really fun to use.

via CNET

 

The New Best Way To Annotate a Clothespin

A story on Engadget that caught our eye today covered Second Story’s “Sightlines” project. Those of you who have seen our Stratacache unit in the Lab have seen some of the potential of transparent LCD technology: in effect, you have a video screen where the color white is keyed out as clear, so you can see through it.

In this project, a Kinect camera determines where someone is standing, and the screen then displays graphics that hover in front of the objects inside the case, lined up properly from the viewer’s perspective. Check out the video for the project, where they use their test setup to sell the benefits of an oversized clothespin.
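For the technically curious, the core trick is simple parallax math: given the viewer’s head position from the Kinect and the position of an object inside the case, you draw the graphic where the line from eye to object crosses the screen. Here’s a minimal sketch of that projection (our guess at the general approach, not Second Story’s actual code):

```python
# A minimal sketch of the parallax math behind lining up on-screen graphics
# with an object behind a transparent display, given a head position from
# Kinect. Coordinates are in meters, with the screen on the plane z = 0,
# the viewer at z > 0, and the object inside the case at z < 0.
def screen_point(eye, obj):
    """Intersect the eye->object ray with the screen plane z = 0."""
    ex, ey, ez = eye
    ox, oy, oz = obj
    t = ez / (ez - oz)              # fraction of the way from eye to object
    return (ex + t * (ox - ex), ey + t * (oy - ey))

# Example: viewer standing 1.5 m in front of the glass, slightly to the left.
print(screen_point(eye=(-0.2, 1.6, 1.5), obj=(0.0, 1.2, -0.3)))
```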

 

IPG Media Lab at Cannes Lions 2012

As you might have heard, IPG Mediabrands rented a magnificent villa in Cannes, France last week for the Cannes Lions festival. In the villa, there were fascinating talks, interesting meetings and amazing parties. As part of this endeavor, the IPG Media Lab was invited to showcase some technologies we’ve been exploring. So we packed up a comically large Pelican trunk with awesome lab gear and set off for France.

We operated a couple of screens within the villa itself and then ran an informal experiment in one of the Tuk-Tuks rented for the occasion.

Our main feature was a Kinect-based OOH experience from Seattle startup Freak’n Genius. When you approach the screen, you take control of one of the cartoon characters and can pose it any way you like. You’re encouraged to bring a friend over to join you, and when they do, they take control of the other character and can pose it too. Once this happens, a snapshot is taken and posted online, where you can retrieve and share your photo. The photo itself is branded, so as users share their image across social media, a brand (in this case the Lab) goes along for the ride.
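Under the hood, experiences like this boil down to mapping Kinect skeleton joints onto a character rig every frame. Here’s a rough sketch of that mapping; the joint names follow the Kinect SDK convention, but the character object and its methods are hypothetical stand-ins, not Freak’n Genius’s actual API:

```python
# Hypothetical sketch: drive a cartoon character from Kinect skeleton data.
JOINT_TO_BONE = {
    "Head": "head",
    "HandLeft": "left_hand",
    "HandRight": "right_hand",
    "KneeLeft": "left_knee",
    "KneeRight": "right_knee",
}

def update_character(character, skeleton):
    """skeleton: dict mapping Kinect joint names to (x, y, z) positions."""
    for joint, bone in JOINT_TO_BONE.items():
        if joint in skeleton:
            # Move the corresponding bone of the cartoon rig to the
            # tracked joint position (the character API is hypothetical).
            character.set_bone_position(bone, skeleton[joint])
```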

Below is a picture of the installation all set up in Cannes:

And here is a photo someone took during a party at the villa one evening:

In addition to this screen, we showcased social media activity around Cannes with a dashboard screen we pulled together. We had a Google Map showing the latest geo-tagged Twitter and Instagram posts from around Cannes, along with Foursquare data for select venues, including recent check-ins. We also had deep-dive Foursquare data on the villa itself as a venue, and the latest news from our virtual lab. Here’s what it looked like:

One other thing we added was a visualization of mobile-device concentration, powered by a device from Euclid. Their gadget, which we will stress requires 12V DC power from a power supply that can handle the wall voltage of your location (not that we learned that the hard way), listens for mobile devices that are searching for Wi-Fi. Using this technology, we were able to monitor activity levels at the villa; the chart below shows the result.
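The general technique here is passive Wi-Fi monitoring: phones periodically broadcast probe requests looking for known networks, and each request carries the device’s MAC address, so counting unique MACs gives a rough device count. Below is a minimal sketch of that idea using the scapy library; it illustrates the principle only, not Euclid’s actual implementation:

```python
# Minimal probe-request counter: the general technique behind presence
# sensors like Euclid's (not their actual implementation).
from scapy.all import sniff, Dot11ProbeReq

seen = set()  # unique device MAC addresses observed so far

def handle(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        seen.add(pkt.addr2)  # addr2 is the transmitting device's MAC
        print("%d unique devices so far" % len(seen))

# Requires a wireless interface in monitor mode (e.g. mon0).
sniff(iface="mon0", prn=handle, store=False)
```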

 

Finally, we ran an experiment with technology from Immersive Labs. Their product is an OOH solution that controls which video is shown on a screen depending on the demographics of that screen’s audience and/or the weather. We decided to try to install this technology in one of the Tuk-Tuks that had been rented for the event. Here’s what the vehicle in question looked like:

 

And here’s what the experiment looked like in action:

It’s showing me a trailer for Ghost Rider (see upper right) because I’m a guy. The experiment worked out pretty well, although we learned a lot about the practical difficulties of running a Windows box off the power supply of a small vehicle whose engine turns on and off frequently. We were still able to gather some analytics, which was icing on the cake.
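Conceptually, the selection logic is a lookup from detected audience attributes (and conditions like weather) to a playlist entry. Here’s a toy version; the rules and file names are made up for illustration, and Immersive Labs’ real decision engine is surely more sophisticated:

```python
# A toy version of demographic-triggered content selection, in the spirit of
# what Immersive Labs' product does (their actual rules engine is unknown to us).
PLAYLIST = {
    ("male", "18-34"):   "ghost_rider_trailer.mp4",
    ("female", "18-34"): "fashion_spot.mp4",
}
DEFAULT = "generic_brand_spot.mp4"

def pick_video(gender, age_band, raining=False):
    if raining:
        return "umbrella_ad.mp4"   # weather can override demographics
    return PLAYLIST.get((gender, age_band), DEFAULT)

print(pick_video("male", "18-34"))  # -> ghost_rider_trailer.mp4
```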

In summary, Cannes was not only a great success for Mediabrands but also a great opportunity for the lab to try our technologies out in the real world, away from the comforts of 110V power outlets, advanced AV infrastructure, and air conditioning.

A sunny northwestern afternoon of gesture control

Last night my colleague Eytan and I left the grey skies and pouring rain of New York City for the bright sunshine of Seattle, so that today we could go visit the Microsoft Kinect Accelerator and meet with some of the startups that are participating in the program. We had a great time and met lots of interesting folks.

The startups are all set up in one big room with small conference rooms along the side walls. There are Kinects everywhere, both the original model and the new one for Windows. Lots of great energy, and on Wednesdays they get bagels. Each startup has its own little set of cubes, and within those are islands of in-progress technology. Here’s a picture I took of the space:

We met with five exciting startup teams. Below is a short summary of each one, but with time we hope to cover them in more depth as they grow and launch exciting stuff.

  • Freak’n Genius – You either approach a public screen or launch a downloadable game. As you do, via Kinect, you (and a friend) take control of animated characters on the screen: as you move, they move. One branded demo they have lets you create a video where you move the character and provide its voice, and then save the completed video for sharing elsewhere. Lots of fun and entertaining possibilities here.

 

  • Ubi – A new approach to creating touchscreens on any surface. A screen is projected onto a wall, and a Kinect is placed atop the projector. Via a simple calibration program, the Kinect can sense where on the wall you are touching and translate those actions to the connected computer (a rough sketch of that calibration step follows this list). The resulting interface supports multi-touch interactions.


  • Styku – A really interesting take on the elusive problem of finding clothing online that fits. They have a 4-Kinect rig that can do a full-body scan of you in a matter of moments and then generate an approximate avatar of your body. You can then apply different clothing items to the avatar. The program uses the original CAD drawings of the clothing to make the representation as precise as possible. There’s also a one-Kinect version in the works for home use.


  • Kimetric – A combination of capabilities we’ve seen in some technologies already in the lab, this startup from Argentina aims to take retail analytics to a new level. A Kinect camera not only gathers information about the age and gender of whoever is standing in front of it, but also tracks which products they pick up and controls which content they see on an attached screen. Moreover, they have a nice HTML5 analytics interface to represent the large amounts of data they collect on the retail experience.


  • Nconnex – This team is developing a tool that will allow you to make 3D scans of a room and/or furniture items using a Kinect sensor, and then manipulate them in a 3D environment. As an example, a furniture store could lend you a Kinect so you could scan your room; they would then provide you with 3D models of their furniture, and you could arrange the pieces in the room to see how they fit (a toy version of that fit check also follows this list). Not only does this save you the trouble of remembering exactly how wide your bedroom is, or, worse, measuring your whole house, but it also captures color and texture information.
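As promised in the Ubi item above, here’s a rough sketch of the calibration step: touch four known points, compute a homography from Kinect image space to projected-screen space with OpenCV, and use it to convert every subsequent touch. The coordinates below are made-up example values:

```python
# Minimal sketch of projector/Kinect touch calibration via a homography.
import numpy as np
import cv2

# Four calibration touches as seen in Kinect image space (example values)...
kinect_pts = np.float32([[102, 80], [540, 75], [548, 410], [96, 415]])
# ...and the screen corners they correspond to (a 1280x800 projection).
screen_pts = np.float32([[0, 0], [1280, 0], [1280, 800], [0, 800]])

H, _ = cv2.findHomography(kinect_pts, screen_pts)

def to_screen(x, y):
    """Map a touch detected in Kinect space to projected-screen pixels."""
    pt = np.float32([[[x, y]]])
    sx, sy = cv2.perspectiveTransform(pt, H)[0][0]
    return int(sx), int(sy)

print(to_screen(320, 240))
```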
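And for the Nconnex item, here’s the toy fit check mentioned above: once the room and a furniture piece are both models with real dimensions, checking fit is just geometry. This sketch reduces everything to 2D footprints in meters, which is a big simplification of what their tool does:

```python
# Toy version of a furniture fit check using axis-aligned footprints.
def fits(room_dims, item_dims, position):
    """Does an item placed at `position` (x, y) fit inside the room footprint?"""
    rw, rd = room_dims            # room width, depth
    iw, id_ = item_dims           # item width, depth
    x, y = position               # item's near-left corner
    return x >= 0 and y >= 0 and x + iw <= rw and y + id_ <= rd

print(fits(room_dims=(4.0, 3.5), item_dims=(2.1, 0.9), position=(1.0, 2.8)))
# -> False: the 0.9 m deep sofa sticks out past the 3.5 m wall
```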

 

Tweet of the Living Dead

So you want to live forever as a superstar hologram, but you’re not as good at rapping as you would like? There may be a way for you to engage with your fan base for years after you die.

Last week a UK start-up named DeadSoci.al made a big splash at The Next Web conference in Amsterdam. Essentially, you sign up for a free account, set up as many Facebook posts, Google+ updates, tweets, etc. as you would like, and schedule each one to go out a specified amount of time after your demise. Once a person you designate logs into the site and indicates you’ve died, your automated social media posts kick in, and they continue for as long as you have them scheduled. See the short video below for more info.

 

[youtube http://www.youtube.com/watch?v=9-V5GwJRkVI]
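Mechanically, a service like this needs little more than posts stored with offsets relative to a death date that is unknown until a designated person confirms it. Here’s a minimal sketch of that data model (our own illustration, not DeadSoci.al’s code):

```python
# Minimal sketch of a posthumous post queue.
from datetime import date, timedelta

class PosthumousQueue:
    def __init__(self):
        self.posts = []          # (offset, network, text)
        self.death_date = None

    def schedule(self, days_after_death, network, text):
        self.posts.append((timedelta(days=days_after_death), network, text))

    def confirm_death(self, on_date):
        """Called once the designated person reports the account owner's death."""
        self.death_date = on_date

    def due_today(self, today):
        if self.death_date is None:
            return []
        return [(n, t) for off, n, t in self.posts
                if self.death_date + off == today]

q = PosthumousQueue()
q.schedule(365, "twitter", "Still tweeting, one year on.")
q.confirm_death(date(2012, 6, 1))
print(q.due_today(date(2013, 6, 1)))
```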

Quantified Lab

Here at the lab, we talk about data a lot and we talk about social a lot. And we have gadgets from Fitbit and Withings that socially share data. So we’ve been inspired to see if we can expand that concept a bit. An initial step was our setup of Botanicalls, which inaugurated our tweeting plant. Since then, we’ve hooked in data from vendors such as Immersive Labs and InMotion to post to our Quantified Self Twitter feed. Below you can see a couple of examples of lab data that is being tweeted out automatically on a regular basis:

It’s become a bit of an interesting extension of our Quantified Self theme, except instead of gadgets tracking an individual person’s activity, we have interesting technologies tracking the activity of the lab itself. Thus the concept “Quantified Lab.”
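The pattern behind all of these feeds is the same: poll a data source on a schedule and post a formatted reading. Here’s a bare-bones sketch using the tweepy library; read_foot_traffic is a hypothetical stand-in for whatever a given vendor’s API exposes:

```python
# Bare-bones auto-tweeting stats bot: poll a sensor, post a reading.
import time
import tweepy

def read_foot_traffic():
    """Hypothetical stand-in: return today's visitor count from a vendor API."""
    return 42

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

while True:
    count = read_foot_traffic()
    api.update_status(status="Lab foot traffic today: %d visitors" % count)
    time.sleep(60 * 60 * 24)   # once a day
```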

So follow the Twitter feed at http://twitter.com/Quantified_Self and keep an eye out for many more interesting lab stats.

In-game micro-payments for the greater-good

I’m not sure you’re supposed to hyphenate “greater good” but I was on a roll.

Giverboard is a project by PlayMob that allows game developers to add a charity layer to their games. As TechCrunch reported recently, the platform lets developers incorporate virtual goods that represent donations to various charities. In their hypothetical example, a tractor purchase in FarmVille could fund a real farm in Africa.

What’s also particularly interesting from a marketing perspective is that brands can play a role in this ecosystem too: they can sponsor virtual goods for charity with matching donations and similar mechanisms. The underlying idea of Giverboard is that small, frequent micro-payments may be a more effective revenue source for charities than large, infrequent donations, which carry more friction in the giving process (logging in, entering a credit card number, etc.). And to come back to brands: if Widgets Inc. sponsors an “End World Hunger” plush toy bear that you can buy to support the cause, that is more often than not a one-off interaction between consumer and brand. But with small in-game micro-payments for virtual toy bears, the brand gets exposure on each of these more frequent transactions.
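To make the mechanics concrete, here’s a toy model of that sponsored-virtual-good flow: each in-game purchase books a small donation, and a sponsoring brand matches it. The names and rates are illustrative, not Giverboard’s actual API:

```python
# Toy model of sponsored in-game micro-donations with brand matching.
def record_purchase(price, charity, brand_match_rate=1.0):
    donation = price                      # the virtual good's price goes to charity
    match = price * brand_match_rate      # the brand's matching contribution
    return {"charity": charity, "donation": donation + match}

# 10,000 players buy a $0.99 "End World Hunger" bear, matched 1:1 by a sponsor.
total = sum(record_purchase(0.99, "End World Hunger")["donation"]
            for _ in range(10000))
print("$%.2f raised" % total)   # -> $19800.00
```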

Below is a diagram of the user experience from TechCrunch:

 

The Waiting Is The Hardest Part

We all know the drill. You squeeze your way up to the little wooden podium and, with dread, ask the hostess how long your wait will be. She writes your name down at the bottom of a ruffled piece of paper on a clipboard and hands you a plastic gadget. You’re told the gadget will light up and vibrate, but you’re secretly concerned that you might wander too far from the podium and it won’t ring. You’ll be skipped over, and you’ll die hungry and alone.

Fear not! No Wait Inc. comes to the rescue. They offer restaurants a convenient iOS app that lets hosts and hostesses toss those clipboards away. Here’s how it works (a minimal sketch of the underlying queue logic follows the list):

  • Restaurants sign up for the No Wait service and download the app, preferably to an iPad.
  • When a customer walks up, the host or hostess enters their info into the app: basic items such as name, size of party, estimated wait time, and a phone number for the party. All but the last two are typically written down on paper, but in this case they’re stored in the app.
  • Eventually, a queue of waiting customers accumulates.
  • When added to the queue, the guest gets a confirmation message, which includes a link to host-updatable information on the current wait time.
  • When a table frees up and a party is ready to be seated, the host or hostess notifies them by tapping the notify button near their name. The guest gets another text message letting them know their table is ready.
  • Lastly, the host taps the seating button by the guest’s name to indicate the party has been seated and can be removed from the queue.
  • The app also lets the host or hostess track “walk-outs”, including how long a wait they had been quoted. All of this, I’m sure, is valuable info for later analysis by restaurant management.
  • The app can manage reservations as well, completely replacing the podium paperwork.
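As promised above, here’s a minimal sketch of the waitlist logic such an app replaces the clipboard with. The send_sms function is a hypothetical stand-in for whatever SMS gateway No Wait actually uses:

```python
# Minimal sketch of clipboard-replacing waitlist logic.
from datetime import datetime

def send_sms(phone, message):
    print("SMS to %s: %s" % (phone, message))   # hypothetical gateway

class Waitlist:
    def __init__(self):
        self.queue = []    # party dicts, in arrival order

    def add_party(self, name, size, quoted_minutes, phone):
        party = {"name": name, "size": size, "quoted": quoted_minutes,
                 "phone": phone, "added": datetime.now(), "status": "waiting"}
        self.queue.append(party)
        send_sms(phone, "Hi %s, you're on the list. Quoted wait: %d min."
                 % (name, quoted_minutes))

    def notify(self, name):
        party = self._find(name)
        party["status"] = "notified"
        send_sms(party["phone"], "Your table is ready!")

    def seat(self, name):
        self._find(name)["status"] = "seated"

    def walk_out(self, name):
        # Keep the record (including the quoted time) for later analysis.
        self._find(name)["status"] = "walked_out"

    def _find(self, name):
        return next(p for p in self.queue if p["name"] == name)
```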

Besides the convenience factor, what’s interesting about this app is its ability to gather data over time that could prove useful in the overall management and planning of the restaurant. In addition, it can serve as an interesting way to send links, information, and offers to patrons. Imagine if that initial welcome message included a link to an interesting Internet item to read, like this.

UPDATE (4/12): Every day, you get a handy summary e-mail that looks like this:

Hailo looks to compete with Uber in the US

UK taxi-hailing startup Hailo has set its sights on the US, having raised money to enter the US market and compete with Uber.

While Uber uses their mobile app (and SMS) to connect riders with private “black cars”, Hailo is deployed to regular taxicabs in London. When you launch the iOS or Android app, you can see free taxis near you on a map. You can hail one, and it comes to you. You can then pay for the ride within the app via a stored credit card.

Interestingly, there’s no special equipment involved, nor even an enterprise-level deal with the taxi companies: taxi drivers can adopt the system on their own, with just their smartphones. Another differentiating point is that your fare is the standard taxi fare, rather than the premium pricing model employed by Uber.
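At its core, the dispatch step is a nearest-neighbor problem: find the closest available cab to the rider. Here’s a back-of-the-envelope version using the haversine distance; Hailo’s real matching logic is of course more involved:

```python
# Back-of-the-envelope cab matching via great-circle (haversine) distance.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_cab(rider, free_cabs):
    return min(free_cabs, key=lambda cab: haversine_km(rider, cab["pos"]))

cabs = [{"id": "cab1", "pos": (51.514, -0.142)},
        {"id": "cab2", "pos": (51.508, -0.128)}]
print(nearest_cab((51.507, -0.127), cabs)["id"])   # -> cab2
```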

There are obviously pluses and minuses to each approach, and it will be interesting to see how this space evolves.