So, Augmented Reality (AR) is a concept we throw around to describe a very diverse set of things. We talk about it as if it were some future event; it's not. Sporting events were among the first major adopters of augmented reality. Those Madden-esque circles drawn during all the plays in last week's Super Bowl? AR.
Let's say you are watching Wimbledon tennis matches online. You hover over a player and see a data stream of facts that you've customized – such as tournament performance, overall record, pictures of best shots, and upcoming matches. This combination of numeric data, video, static images, GPS positioning, and JPEG files was available a few weeks ago for the Australian Open, and for a number of other high-profile sporting events, including the Masters.
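Under the hood, a hover overlay like this is just a structured payload keyed to whatever the viewer points at, filtered by their preferences. A minimal sketch in Python; the function, field names, and sample values are all hypothetical, not from any real broadcast API:

```python
# Hypothetical shape of a per-player overlay payload for a hover interaction.
# Field names and sample values are illustrative only.

def build_overlay(player_id, preferences, stats_db):
    """Assemble only the facts this viewer asked to see for one player."""
    record = stats_db[player_id]
    # Filter the full record down to the viewer's customized fact list.
    return {field: record[field] for field in preferences if field in record}

stats_db = {
    "federer": {
        "tournament_performance": "Quarterfinal",
        "overall_record": "45-7",
        "best_shot_images": ["shot1.jpg", "shot2.jpg"],
        "next_match": "Saturday 14:00",
    }
}

# A viewer who only cares about the record and upcoming matches:
overlay = build_overlay("federer", ["overall_record", "next_match"], stats_db)
print(overlay)  # {'overall_record': '45-7', 'next_match': 'Saturday 14:00'}
```

The interesting design choice is that the customization lives with the viewer, not the broadcaster: the same record serves every fan, sliced differently per request.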
It's not far-fetched to imagine that my bi-weekly conference call, which includes 60 participants from Australia to San Francisco, could pop up on the monitor in my office, displaying a mash-up of where each person is, with their pictures in a row on top and the ability to hover over each person to see what they're working on (synced to my email, telling me what I have not done for or with them). I can see who is still twittering in the live feed on the right, and who has recently blogged, via a connected version of my Google Reader. The call documents sit neatly in a clickable file that arrived when the call started.
INDUSTRY: While everyone is playing with AR – from HVAC and automobile designers to retailers and physicians – there are areas likely to get the greatest value out of it. The sky is the limit, but some people will get there sooner rather than later.
Retail is ripe for it, with uses ranging from improving customer interactions to detecting stocking levels on the sales floor.
CPG can make use of it too. For instance, I could hover over the Barilla pasta and see that I have a pasta dish planned on my weekly menu list, and even have the recipe pop up on my device while I'm still in the aisle.
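The aisle-side lookup behind that scenario is a simple cross-reference between a recognized product and the shopper's own menu data. A sketch, assuming a hypothetical menu-planner source; the product identifier would presumably come from image recognition or a barcode scan:

```python
# Hypothetical cross-reference between a scanned product and a weekly menu.
# All data, names, and structure are illustrative.

weekly_menu = {
    "monday": {"dish": "penne arrabbiata",
               "ingredients": ["barilla penne", "tomatoes", "chili"]},
    "thursday": {"dish": "chicken stir fry",
                 "ingredients": ["chicken", "peppers", "soy sauce"]},
}

def dishes_using(product, menu):
    """Return the planned dishes that call for this product."""
    return [
        (day, plan["dish"])
        for day, plan in menu.items()
        if any(product in ingredient for ingredient in plan["ingredients"])
    ]

print(dishes_using("penne", weekly_menu))  # [('monday', 'penne arrabbiata')]
```

From there, popping the recipe onto the device is just a second lookup keyed on the matched dish.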
Tourism gains the ability to overlay unique data – such as a view of Pompeii as it existed before the volcano destroyed it, alongside how it looks today.
Gaming is way too obvious; anyone who's even seen a Wii commercial gets the point, so I won't belabour it.
DEVICES AND CARRIERS: It all begins with mobile carriers and mobile device providers coming together with the core technology providers. AR ties well to the rise of the mobile device and will drive its extension. I didn't have time to check whether there is anything in the iPhone App Store for this yet – there might be. The carriers have the most to gain if they can figure out how to monetize the data involved in AR experiences and package it for sale appropriately.
However, what type of device will be able to do all this? Obviously devices with better displays are favored for these interactions – ones that have some degree of touchscreen to add detail. The device cannot be tethered – we have become far too used to being able to go anywhere. The technology must succeed in giving the right amount of information at the right time. We will expect certain pieces to be part of the basic construct.
Even the optical companies and GPS manufacturers could bring significant offerings to bear – Olympus, Canon, Kodak, Garmin, Mio. Philips, the electronics behemoth, has already released an AR-like interactive TV offering in Europe.
AUGMENTED REALITY RETAILING: So, I can look at the front of my house through the camera on my Nokia phone and easily apply a directional measure to the light fixture displaced in the recent snowstorm. I can go to Lowes and share the picture with the salesman, who can load it into his system and let me try different lighting styles on the front of the house. These display on the screen, and I can redirect them to my boyfriend, who is away on business; he can send back the same experience with his pencil markings drawn in to show where the attachment brackets go. My camera doesn't just need to capture the picture, but also the metadata attached to it – in real time. This means the data can't go pinging back to databases that may not even wish to store it. Until I make a decision on what type of lights to purchase, should Lowes care about having this data?
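What that metadata travelling with the photo might look like: a minimal sketch, assuming EXIF-style position and heading fields serialized alongside the image. Every field name here is hypothetical:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical capture metadata that travels with the photo in real time,
# so the receiving system needs no round trip to a central database.
@dataclass
class CaptureMetadata:
    latitude: float     # degrees north
    longitude: float    # degrees east
    heading_deg: float  # compass direction the camera was facing
    timestamp: str      # ISO 8601 capture time

shot = CaptureMetadata(35.7796, -78.6382, 270.0, "2009-02-20T10:15:00Z")

# Serialize for transfer alongside the image file itself.
payload = json.dumps(asdict(shot))
print(payload)
```

Because the position and heading ride along with the picture, the salesman's system can place the trial fixtures in the right spot without the data ever "pinging back" anywhere.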
However, what if Lowes, for a nominal fee, could take the same picture and overlay installation instructions on it? The step-by-step directions could come up sequentially after each task is completed, with the option to revert to a remedial video when needed, or even to initiate a voice call to Lowes if things really became problematic. Last, it could provide psychological encouragement along the way – telling me I'm doing great, which invariably I will need.
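That sequential flow is essentially a tiny state machine: advance on confirmed completion, branch to video or a call when stuck. A minimal sketch; the class, step text, and messages are all invented for illustration:

```python
# Hypothetical stepper for overlaid install instructions.
class InstallGuide:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    def current(self):
        return self.steps[self.index]

    def complete_step(self):
        """Advance only after the user confirms the current task is done."""
        if self.index < len(self.steps) - 1:
            self.index += 1
            return f"Great job! Next: {self.current()}"
        return "All done!"

    def need_help(self, escalate=False):
        """Fall back to a remedial video, or escalate to a voice call."""
        if escalate:
            return "Connecting you to a Lowes associate..."
        return f"Playing remedial video for: {self.current()}"

guide = InstallGuide(["Turn off breaker", "Remove old fixture", "Attach brackets"])
print(guide.complete_step())  # Great job! Next: Remove old fixture
print(guide.need_help())      # Playing remedial video for: Remove old fixture
```

Keeping the state on the device means the encouragement and the escalation path work even when connectivity is patchy; only the voice call needs the network.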
Once done with my install, could I then migrate the whole experience to the web – either on Lowes' site or on Flickr – for others to use, and add my notes about what I found challenging? Can I add my "after" pictures in a natural sequence for people? How much more loyal would this sort of capability make me to the early providers who embrace this path?
ON OTHER FRONTS: In some cases, directionality will count – such as the view from your room at a resort – as will distance, for instance to a nearby nature preserve on one side and the mountain range on another. Thus, advanced solutions must be able to redraw the view as you move. A mountain range is one thing, but what happens when you're trying to pinpoint a wine in a grocery store because you are late to a dinner party and need to get in and out quickly? This does not even cover the ability to pay for the item, which I think we all now expect to be able to do shortly.
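Directionality and distance are the readily computable parts of all this. A sketch of the two standard calculations from a viewer's GPS fix to a point of interest: great-circle distance (the haversine formula) and initial compass bearing.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points (degrees)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Compass bearing (0-360, clockwise from north) from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    x = math.sin(dlmb) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlmb)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

# One degree of longitude along the equator is roughly 111 km, due east:
print(round(haversine_km(0, 0, 0, 1), 1))        # 111.2
print(round(initial_bearing_deg(0, 0, 0, 1), 1))  # 90.0
```

Redrawing the view as you move is then a matter of re-running these against your latest fix and comparing the bearing with the device's compass heading.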
I think what we end up with is a series of questions that look like this:
1. What core profile features are defined by the individual and always present – and what becomes context sensitive?
2. What data must be present near the application and device, because its latency would be unacceptable?
3. What data should be stored/saved after the interaction because it has additional value to the company?
4. What data should be saved/stored after the interaction because it has additional value to the individual?
5. What are the current definitions of long range and short range – how much distance can we detect in the near future?
6. What type of data is this stored as? This concept of a rich interaction implies multiple associated data types, each resident in its own place – but does it need to be reflected in aggregate?
7. Where would you store it – both for the company and the individual?
8. How would you charge for this type of service? I think it's safe to say that charge-per-use will not work in early applications (except perhaps medical, because they will pay for the infrastructure anyway and will need to pass that cost back somehow – most likely via pay-per-use).
9. Who is likely to be an early adopter – both from the business and the individual perspectives? (medical, gaming and automotive aside, from the industry context)
10. When does voice – as in real honest everyday speech – become part of AR?
So I am going to poke around the expert and research communities to see what I can learn for us all and come back to you in a week or two with whatever I find. Wish me luck!
PS – IMHO, who has the coolest AR stuff out there? http://www.gesturetek.com/