App vs Experience

exp_app_venn

In a recent talk (captured on YouTube) a Microsoft employee explained the difference between an “app” and an “experience” this way: “We have divas in our development group and they want to make special names for things.” He expressed an opinion many developers in the Microsoft stack probably share but do not normally say out loud. These appear to be two terms for the same thing, to wit, a unit of executable code, yet some people use one and some people use the other. In fact, the choice of terms tends to reveal more about the people who are talking about the “unit of code” than about the code itself. To add a further linguistic twist, we haven’t even always called apps “apps.” We used to call them “applications” and switched over to the abbreviated form, it appears, following the success of Apple’s “App Store,” which contained “apps” rather than “applications.” There is even an obvious marketing connection between “Apple” and “app” that goes back at least as far as the mid-’80s.

I am currently a Microsoft MVP in a sub-discipline of the Windows Development group called “Emerging Experiences.” As an Emerging Experiences MVP for Microsoft, the distinction between “app” and “experience” is particularly pertinent for me. The “emerging” aspect of our group’s name is fairly evident. EE MVPs specialize in technologies like the Kinect, the Surface Hub and other large-screen devices, augmented reality devices, face recognition, Ink, wearables, and other “More Personal Computing” capabilities. “Experiences” is the problematic part, however, because using that term to describe our group immediately raises the question of what the group is about. “Experiences” is a term that is not native to the Microsoft developer ecosystem but instead is transplanted from the agency and design world, much like the phrase “creative technologist,” which is more or less interchangeable with “developer” but also carries a set of presuppositions, assumptions, and a background in agency life – in other words, it assumes a specific set of prior experiences.

postit_elvis

As a Microsoft Emerging Experiences MVP, I have an inherent responsibility to explain what an “experience” is to the wider Microsoft ecosystem. If I am successful in this, you will start to see the appeal of this term also and will be able to use it in the appropriate situations. Rather than try to go at it head-on, though, I am going to do something the philosopher Daniel Dennett calls “nudging our intuitions.” I will take you through a series of examples and metaphors that will provide the necessary background for understanding how the term “experience” came about and why it is useful.

sticky-note-elvis

The obvious thing to do at this point is to show how these two terms overlap and diverge with a convenient Venn diagram. As you can see from the diagram above, however, all apps are automatically experiences, which is why “experience” can always be substituted for “app.” The converse, however, does not hold. Not all experiences are apps.

postit

Consider the Twitter wall created for a conference a few years ago that was distinctively non-digital. Although this wall did use the Twitter API, it involved many volunteers hand-writing tweets that included hashtags about the conference onto Post-it notes and then sticking them along one of the conference hallways. Conceptually this is a “thing” that takes advantage of a social networking phenomenon less than a decade old. While the Twitter API sits at the heart of it, the interface with the API is completely manual: volunteers are required to evaluate and cull the Twitter data stream for relevant information. The display technology is also manual and relies on weak adhesive and paper, an invention from the mid-’70s. Commonly we would write an app to perform this process of filtering and displaying tweets, but that’s not what the artists involved did. Since there was no coding involved, it does not make sense to call the whole thing an app. It is a quintessential “experience.” The Post-it Twitter wall provides us with a provisional understanding of the difference between an app and an experience. An app involves code, while an experience can involve elements well beyond code.

tv_helmet(530)

Why do we go to movie theaters when we can often stay at home and enjoy the same movie? This has become a major problem for the film industry and movie theaters in general. Theater marketing over the past decade or so has responded by talking about the movie-going experience. What is this? At a technology level, the movie-going experience involves larger screens and high-end audio equipment. At an emotional level, however, it also includes sticky floors, fighting with a neighbor for the arm-rest, buying candy and an oversized soda, watching previews and smelling buttered popcorn. All of these things, especially the smell of popcorn, which has a peculiar ability to evoke memories, remind us of past enjoyable experiences that in many cases we shared with friends and family members.

siskelandebert

If you go to the movies these days, you will have noticed a campaign against men in hoodies videotaping movies. This, in fact, isn’t an ad meant to discourage you from pulling out your smart phone and recording the movie you are about to watch. That would be rather silly. Instead, it’s an attempt to surface all those good, communal feelings you have about going to the movies and subtly distinguishing them, sub rosa, in your mind from bad feelings about downloading movies from the Internet over peer-to-peer networks, which the ad associates with unsociable and creepy behavior. The former is a cluster of good experiences while the latter is presented as a cluster of bad experiences, and the ad cleverly never accuses you, personally, of improper behavior. In the best marketing fashion, it goes after the pusher rather than the user.

popcorn-machine

Returning to those happy experiences, it’s clear that an experience goes well beyond what we see in the foreground – in this case, the movie itself. A movie-going experience is mostly about other things than the movie. It’s about all the background activities we associate with movie-going as well as memories and memories of memories that are evoked when we think about going to the movies.

We can think about the relationship between an app and an experience in a similar way. When we speak about an app, we typically think of two things: the purpose of the app and the code used to implement it. When we talk about an experience, we include these but also forecasts and expectations about how a user will feel about the app, the ease with which they’ll use it, what other apps they will associate with this app and so on. In this sense, an experience is always a user experience because it concerns the app’s relationship to a user. Stripped down, however, there is also a pure app which involves the app’s code, its performance, its maintainability, and its extensibility. The inner app is isolated, for the most part, from concerns about how the intended user will use it.

google

At this point we are about at the place in this dialectic where we can differentiate experiences as something designers do and apps as something that developers make. While this is another good provisional explanation, it misses a bigger underlying truth: all apps are also experiences, whether you plan them to be or not. Just because you don’t have a designer on your project doesn’t mean users won’t be involved in judging it at some point. Even by not investing in any sort of overall design, you have sent out a message about what you are focused on and what you are not. The great exemplar of this, an actual exception that proves the rule, is the Google homepage. A lot of design thinking has gone into making the Google homepage as simple and artless as possible in order to create a humble user experience. It is intentionally unassuming.

Méret_Oppenheim_Object

What this tells us is that a pure app will always be an abstraction of sorts. You can ignore the overall impression that your app leaves the user with, but you can’t actually remove the user’s experience of your app. The experience of the app, at this provisional stage of explanation, is the real thing, while the notion of an app is an abstraction we perform in order to concentrate on the task of coding the experience. In turn, all the code that makes the app possible is something that will never be seen or recognized by the user. The user only knows about the experience. They will only ever be aware of the app itself when something breaks or fails to work.

What then do we sell in an app store?

L.H.O.O.Q

We don’t sell apps in an App Store any more than we sell windows in the Windows Store. We sell experiences that rely heavily on first impressions in order to grab people’s attention. This means the iconography, the description and ultimately the reviews are the most important things that go into making experiences in an app store successful. Given that users have limited time to devote to learning your experience, making the purpose of the app self-evident and making it easy to use and master are two additional concerns. If you are fortunate, you will have a UX person on hand to help you make the application easy to use, as well as a visual designer to make it attractive and a product designer or creative director to make the overall experience appealing.

Which gets us to a penultimate, if still provisional, understanding of the difference between an “app” and an “experience.”  These are two ways of looking at the same thing and delineate two different roles in the creation of an application, one technical and one creative. The coder on an application development team will need to primarily be concerned with making [a thing] work while the creative lead will be primarily concerned with determining what [it] needs to do.

the-art-of-living

There’s one final problem with this explanation, however. It requires a full team where the coding role and the various design roles (creative lead, user experience designer, interactive designer, audio designer, etc.) are clearly delineated. Unless you are already working in an agency setting or at least a mid-sized gaming company, the roles are likely going to be much more blurred. In which case, thinking about “apps” is an artificial luxury while thinking about “experiences” becomes everyone’s responsibility. If you are working on a very small team of one to four people, then the problem is exacerbated. On a small team, no one has the time to worry about “apps.” Everyone has to worry about the bigger picture.

Everyone except the user, of course. The user should only be concerned with things they can buy in an app store with a touch of the thumb. The user shouldn’t know anything about experiences. The user should never wonder about who designed the Google homepage. The user shouldn’t be tasked with any of these concerns because the developers of a good experience have already thought this all out ahead of time.

baigneuses

So here’s the final, no longer provisional explanation of the difference between an app and an experience. An app is for users; an experience is something makers make for users.

This has a natural  corollary: if as a maker you think in terms of apps rather than experiences, then you are thinking too narrowly. You can call it whatever you want, though.

I wrote earlier in this article that the term emerging experiences constantly requires explanation and clarification. The truth, however, is that “experience” isn’t really the thing that requires explanation – though it’s fun to do so. “Emerging” is actually the difficult concept in that dyad. What counts as emerging is constantly changing – today it is virtual reality head-mounted displays. A few years ago, smart phones were considered emerging but today they are simply the places where we keep our apps. TVs were once considered emerging, and before that radios. If we go back far enough, even books were once considered an emerging technology.

“Emerging” is a descriptor for a certain feeling of butterflies in the stomach about technology and a contagious giddy excitement about new things. It’s like the new car smell that captures a sense of pure potential, just before what is emerging becomes disappointing, then re-evaluated, then old hat and boring. The sense of the emerging is that thrill holding back fear which children experience when their fathers toss them into the air; for a single moment, they are suspended between rising and falling, and with eyes wide open they have the opportunity to take in the world around them.

I love the emerging experience.

The Knowledge Bubble

code

Coding is hard and learning to code is perhaps even harder.

The software community is in a quandary these days about how to learn … much less why we must learn. It is now acknowledged that a software developer must constantly retool himself (as an actor must constantly rebrand herself) in order to remain relevant. There is a lingering threat of sorts as we look around and realize that developers are continually getting younger and younger while the gray hairs are ne’er to be seen.

Let me raise a few problems with why we must constantly learn and relearn how to code before addressing the best way to relearn to code. Let me be “that guy” for a moment. First, the old way of coding (from six months ago) was probably perfectly fine. Nothing is gained for the product by finding a new framework, a new platform or, God forbid, a new paradigm. Worse, bad code is introduced while trying to implement code that is only half-understood, and time is lost as developers spend their time learning it. Even worse worse, the platform you are switching to probably isn’t mature (oh, you’re going to break AngularJS in the next major release because you found a better way to do things?) and you’ll be spending the next two years trying to fix those problems.

Second, you’re creating a maintenance nightmare because there are no best practices for the latest and greatest code you are implementing (code examples you find on the Internet written by marketing teams to show how easy their code is don’t count) and, worse and worser, no one wants to get stuck doing maintenance while you are off doing the next latest and greatest thing six months from now. Everybody wants to be doing the latest and greatest, it turns out.

Third, management is being left behind. The people in software management who are supposed to be monitoring the code and guiding the development practices are hanging on for dear life, trying to give the impression that they understand what is going on when they do not. And the reason they do not is that they’ve been managers for the past six cycles while best practices and coding standards have been flipped on their heads multiple times. You, the developers, are able to steamroll right over them with shibboleths like “decoupling” and “agility.” Awesome, right? Wrong. Managers actually have experience and you don’t – but in the constantly changing world of software development, we are able to get away with “new models” for making software that no one has heard of, that sound like BS, and that everyone will subscribe to just because it is the latest thing.

Fourth, when everyone is a neophyte there are no longer any checks and balances. Everyone is constantly self-promoting and suffering from imposter syndrome. They become paranoid that they will get caught out – which has a destructive impact on the culture – and the only way out of it is to double down on even newer technologies, frameworks and practices that no one has ever heard of so they can’t contradict you when you start spouting it.

Fifth, this state of affairs is not sustainable. It is the equivalent of the housing and credit bubble of 2008, except instead of money or real estate it concerns knowledge. Let’s call it a knowledge bubble. The signs of a knowledge bubble are 1) the knowledge people possess is severely over-valued, 2) there are no regulatory systems in place (independent experts who aren’t consultants or marketing shills) to distinguish properly valued knowledge from BS, and 3) the people with experience in these matters, having lived through similar situations in the past, are de-valued, depreciated and not listened to. This is why they always seem to hate everyone.

Sixth, the problem that the knowledge industry in coding is trying to solve has not changed for twenty-plus years. We are still trying to gather data, entered using a keyboard, and store it in a database. Most efficiencies introduced over the past twenty years have come primarily from improved hardware speeds, improved storage and lower prices for both. The improvements to moving data from location A to location B and storing it in database C that frameworks and languages have supposedly delivered over the past twenty years have been minimal – in other words, these supposed improvements have simply inflated the knowledge bubble. Unless we as individuals are doing truly innovative work with machine learning or augmented reality or NUI input, we are probably just moving data from point A to point B and wasting everyone’s time searching for more difficult and obscure ways to do it.

So why do we do it? A lot of this is due to the race for higher salaries. In olden days – which we laugh at now – coders were rewarded and admired for writing lines of code. The more you wrote, the more kung fu you obviously knew. Over time, it became apparent that this was foolish, but the problem of determining who had the best kung fu was still extant, so we came up with code mastery. Unfortunately, there’s only so much code you can master – unless we constantly create new code to master! Which is what we all collectively did. We blame the problems of the last project on faulty frameworks and faulty processes, go hunting for new ones, and embrace the first ones we find uncritically because, well, it’s something new to master. This, in turn, provides us with more ammunition to bring back to our gray-haired, two-years-behind bosses (who are no longer coders but also not trained managers) when we ask for more titles and more money. (Viceroy of Software Development sounds really good, doesn’t it? Whatever it means, it’s going to look great on my business card.)

On the other hand, constantly learning also keeps us fresh. All work and no play makes Jack a dull boy, after all. There have been studies that demonstrate that an active mental life will keep us younger, put off the symptoms of Alzheimer’s and dementia, and generally allow us to live longer, happier lives. Bubbles aren’t all bad.

Ian-McKellen

So on to the other problem. What is the best way to learn? I personally prefer books. Online books, like Safari Books Online, are alright, but I really like the kind I can hold in my hands. I’m certainly a fan of videos like the ones Pluralsight provides, but they don’t totally do it for me.

I actually did an audition video for Pluralsight a while back about virtual reality which didn’t quite get me the job. That’s alright since Giani Rosa Gallini ended up doing a much better job of it than I could have. What I noticed when I finished the audition video was that my voice didn’t sound the way I thought it did. It was much higher and more nasally than I expected. I’m not sure I would have enjoyed listening to me for three hours. I’ve actually noticed the same thing with most of the Pluralsight courses – people who know the material are not necessarily the best people to present the material. After all, in movies and theater we don’t require that actors write their own lines. We have a different role, called the writer, that performs that duty.

Not that the voice acting on Pluralsight videos is bad. I’m actually very fond of listening to Iris Classon’s voice – she has a lilting, non-specific European accent that is extremely cool – as well as Andras Velvart’s charming Hungarian drawl. In general, though, meh to the Pluralsight voice actors. I think part of the problem is that the general roundness of American vowels gets further exaggerated when software engineers attempt to talk folksy in order to create a connection with the audience. It’s a strange, false Americanism I find jarring. On the other hand, the common-man approach can work amazingly well when it is authentic, as Jamie King demonstrates in his YouTube videos on C++ pointers and references – it sounds like Seth Rogen is teaching you pointer arithmetic.

Wouldn’t it be fun, though, to introduce some heavyweight voice acting as a premium Pluralsight experience? The current videos wouldn’t have to change at all. Just leave them be. Instead, we would have someone write up a transcript of the course and then hand it over to a voice actor to dub over the audio track. Voila! Instantly improved learning experience.

Wouldn’t you like to learn AngularJS Unit Testing in-depth, Using ngMock with Sir Ian McKellen? Or how about C# Fundamentals with Patrick Stewart? MongoDB Administration with Helen Mirren. Finally, what about Ethical Hacking voiced by Angelina Jolie?

It doesn’t even have to be big movie actors, either. You can learn from Application Building Patterns with Backbone.js narrated by Steve Downes, the voice behind Master Chief, or Scrum Master Skills narrated by H. Jon Benjamin, the voice of Archer.

Finally, the voice actors could even do their work in character for an additional platinum experience for diamond members – would you prefer being taught AngularJS Unit Testing by Sir Ian McKellen or by Magneto?

For a small additional charge, you could even be taught by Gandalf the Grey.

Think of the sweet promotion you’d get with that on your resume.

Virtual Reality Device Showdown at CES 2016

WP_20160108_10_04_05_Pro

Virtual Reality had its own section at CES this year in the Las Vegas Convention Center South Hall. Oculus had a booth downstairs near my company’s booth, while the OSVR (Open Source Virtual Reality) device was being demonstrated upstairs in the Razer booth. Project Morpheus (now PlayStation VR) was being demoed in the large Sony section of North Hall. The HTC Vive Pre didn’t have a booth but instead opted for an outdoor tent up the street from North Hall as well as a private ballroom in the Wynn Hotel to show off the device.

WP_20160105_18_24_03_Pro

It would be convenient to be able to tell you which VR head mounted display is best, but the truth is that they all have their strengths. I’ll try to summarize these pros and cons first and then go into details about the demo experiences further down.

  • HTC Vive Pre and Oculus Rift have nearly identical specs
  • Pro: Vive currently has the best peripherals (Steam controllers + Lighthouse position tracking), though this can always change
  • Pro: Oculus is first out of the gate with price and availability of the three major players
  • Con: Oculus and Vive require expensive latest gen gaming computers to run in addition to the headsets ($900 US +)
  • Pro: PlayStation VR works with a reasonably priced PlayStation
  • Pro: PlayStation Move controllers work really well
  • Pro: PlayStation has excellent relationships with major gaming companies
  • Con: PlayStation VR has lower specs than Oculus Rift or HTC Vive Pre
  • Con: PlayStation VR has an indeterminate release date (maybe summer?)
  • Pro: OSVR is available now
  • Pro: OSVR costs only $299 US, making it the least expensive VR device
  • Con: OSVR has the lowest specs and is a bit DIY
  • Pro: OSVR is a bit DIY

You’ll also probably want to look at the numbers:

                 Oculus Rift     HTC Vive Pre      PlayStation VR    OSVR          Oculus DK2
Resolution       2160 x 1200     2160 x 1200       1920 x 1080       1920 x 1080   1920 x 1080
Res per eye      1080 x 1200     1080 x 1200       960 x 1080        960 x 1080    960 x 1080
Refresh rate     90 Hz           90 Hz             120 Hz            60 Hz         60 / 75 Hz
Horizontal FOV   110 degrees     110 degrees       100 degrees       100 degrees   100 degrees
Headline game    Eve: Valkyrie   Elite: Dangerous  The London Heist  –             Titans of Space
Price            $600            ?                 ?                 $299          $350 / sold out

 

WP_20160105_19_34_23_Pro

Let’s talk about Oculus first because they started the current VR movement and really deserve to be first. Everything follows from that amazing initial Kickstarter campaign. The Oculus installation was an imposing black fortress in the middle of the hall, with lines winding around it full of people anxious to get a seven-minute demo of the final Oculus Rift. This was the demo everyone at CES was trying to get into. I managed to get into line half an hour early one morning because I was working another booth. Like at most shows, all the Oculus helpers were exhausted and frazzled but very nice. After some hectic moments of being handed off from person to person, I was finally led into a comfortable room on the second floor of Fortress Oculus and got a chance to see the latest device. I’ve had the DK2 for months and was pleased to see all the improvements that have been made to the gear. It was comfortable on my head and easy to configure, especially compared to the developer kit model, which I need a coin to adjust. I was placed into a fixed-back chair and an Xbox controller was put into my hand (which I think means Oculus Rift is exclusively a PC device until the Oculus Touch is released in the future), and I was given the choice of eight or so games, including a hockey game in which I could block the puck and some pretty strange-looking games. I was told to choose carefully, as the game I chose would be the only game I would be allowed to play. I chose the space game, Eve: Valkyrie, and until my ship exploded I flew 360 degrees through the void, fighting off an alien armada while trying to protect the carriers in my space fleet.

WP_20160109_15_41_38_Pro

What can one say? It was amazing. I felt fully immersed in the game and completely forgot about the rest of the world, the marketing people around me, the black fortress, the need to get back to my own booth, and so on. If you are willing to pay $700 – $800 for your phone, then paying $600 for the Oculus Rift shouldn’t be such a big deal. And then you need to spend another $900 or more for a PC that will run the Rift for you, but at least you’ll also have an awesome gaming machine.

Or you could just wait for the HTC Vive Pre, which has identical specs, feels just as nice and even has its own space game at launch, Elite: Dangerous. While the Oculus booth was targeted largely at fans, the Vive was shown in two different places to two different audiences. A traveling HTC Vive bus pulled out tents and set up on the corner opposite Convention Hall North. This was for fans to try out the system and involved an hour-long wait for outdoor demos, while demos inside the bus required signing up. I went down the street to the Wynn Hotel, where press demos run by the marketing team were being organized in one of the hotel ballrooms. No engineers to talk to, sadly.

Whereas Oculus’s major announcement was about pricing, availability and the opening of pre-orders, HTC’s announcement was about a technology breakthrough that didn’t really seem like much of a breakthrough. A color camera was placed on the front of the HMD that outlines real-world objects around the player in order, among other things, to help the player avoid bumping into things when using the Vive Pre with the Lighthouse peripherals to walk around a VR experience.

vive pre

The Lighthouse experience is cool, but the experience I most enjoyed was playing Elite: Dangerous with two mounted joysticks. This is a game I played on the DK2 until it stopped working following my upgrade to Windows 10 (which, as a Microsoft MVP, I’m pretty much required to do), so I was pretty surprised to see the game in the HTC press room and even more surprised when I spent an hour chatting away happily with one of ED’s marketing people.

So this is a big tangent, but here’s what I think happened and why ED’s Oculus support became rocky a few months ago. Oculus appears to have started courting Eve: Valkyrie a while back, even though Elite: Dangerous was the more mature game. Someone must have decided that you don’t need two space games for one device launch, and so ED drifted over to the HTC Vive camp. Suddenly, support for the DK2 went on the back burner at ED while Oculus made breaking changes in their SDK releases, and many people who had gotten ED to play with the Rift, or gotten the Rift to play with ED, were sorely disappointed. At this point, you can make Elite: Horizons (the upgrade from ED) work in VR with the Oculus, but it is tricky and not documented. You have to download SteamVR, even if you didn’t buy Elite: Horizons from Steam, and jury-rig your monitor settings to get everything running well in the Oculus direct mode. Needless to say, it’s clear that Elite’s games are going to run much more nicely if you buy Steam’s Vive and run it through Steam.

As for comparing Oculus Rift and HTC Vive Pre, it’s hard to say. They have the same specs. They both will need powerful computers to play on, so the cost of ownership goes beyond simply buying the HMD. Oculus has the touch controllers, but we don’t really know when they will be ready. HTC Vive has the Lighthouse peripherals that allow you to walk around and the specialized Steam controllers, but we don’t know how much they will cost.

For the moment, then, the best way to choose between the two VR devices comes down to which space flying game you think you would like more. Elite: Dangerous is mainly a community exploration game with combat elements. Eve: Valkyrie is a space combat game with exploration elements. Beyond that, Palmer Luckey did get the ball rolling on this whole VR thing, so all other things being equal, mutatis mutandis, you should probably reward him with your gold. Personally, though, I really love Elite: Horizons and being able to walk around in VR.

WP_20160109_09_39_20_Pro

But then again, one could always wait for PlayStation VR (the head-mounted display formerly known as Project Morpheus). The PlayStation VR demo was hidden at the back of the PlayStation demos, which in turn were at the back of the Sony booth, which was at the far corner of the Las Vegas Convention Center North Hall. In other words, it was hard to find and a hike to get to. Once you got to it, though, it became clear that this was, in the scheme of things, a small play for the extremely diversified Sony. There wasn’t really enough room for the four demos Sony was showing, and the lines were extremely compressed.

Which is odd because, for me at least, the PlayStation VR was the only thing I wanted to see. It’s by far the prettiest of the four big VR systems. While the resolution is slightly lower than that of the Oculus Rift or HTC Vive Pre, the frame rate is higher. Additionally, you don’t need to purchase a $900 computer to play it. You just need a PlayStation 4. The PlayStation Move controllers, as a bonus, finally make sense as VR controllers.

Best of all, there’s a good chance that PlayStation will end up having the best VR games (including Eve: Valkyrie) because those relationships already exist. Oculus and HTC Vive will likely clean up on the indie-game market since their dev and deployment story is likely going to be much simpler than Sony’s.

WP_20160108_10_04_34_Pro__highres

I waited forty minutes to play the newest The London Heist demo. In it, I rode shotgun in a truck next to a London thug as motorcycles and vans with machine gun wielding riders passed by and shot at me. I shot back, but strangely the most fascinating part for me was opening the glove compartment with the Move controllers and fiddling with the radio controls.

Prepare for another digression, or just skip ahead if you like. While I was using PlayStation Move controllers (those two lit-up things in the picture above that look like neon ice-cream cones) in the Sony booth to change the radio station in my virtual van, BMW had a tent outside the convention center where they demoed a radio tuner in one of their cars that responded to hand gestures. You spun your finger clockwise to scan through the radio channels. Two fingers pressed forward would pause a track. A wave would dismiss. Having worked with Kinect gestures for the past five years, I was extremely impressed with how good and intuitive these gestures were. They can even be re-programmed, by the way, to perform other functions. One night, I watched my boss close his eyes and perform these gestures from memory in order to lock them into his motor memory. They were that good. So if you have a lot of money, go buy all four VR sets as well as a BMW 7 Series so you can try out the radio.

But I digress. The London Heist is a fantastic game and the PlayStation VR is pretty great. I only wish I had a better idea of when it is being released and how much it will cost.

Another great thing about the Sony PlayStation VR area was that it was out in the open unlike the VR demos from other companies. You could watch (for about 40 minutes, actually) as other people went through their moves. Eventually, we’ll start seeing a lot of these shots contrasting what people think they are doing in VR with what they are really doing. It starts off comically, but over time becomes very interesting as you realize the extent to which we are all constantly living out experiences in our imaginations and having imaginary conversations that no one around us is aware of – the rich interior life that a VR system is particularly suited to reveal to us.

WP_20160106_12_33_29_Pro

I found the OSVR demo almost by accident while walking around the outside of the Razer booth. There was a single small room with a glass window in the side through which I could spy a demo going on. I had to wait for Tom’s Hardware to go through first, and also someone from Gizmodo, but after a while they finally invited me in and I got to talk to honest-to-goodness engineers instead of marketing people! OSVR demoed a 3D cut scene rather than an actual game, and there was a little choppiness, which may have been due to IR contamination from the overhead lights. I don’t really know. But for $299 it was pretty good and, if you aren’t already the proud owner of an Oculus DK2, which has the same specs, it may be the way to go. It also has upgradeable parts, which is pretty interesting. If you are a hobbyist who wants to get a better understanding of how VR devices work – or if you simply want a relatively inexpensive way to get into VR – then this might be a great solution.

You could also go even cheaper, down to $99, and get a Samsung Gear VR (or one of a dozen or so similar devices) if you already have a $700 phone to fit into it. Definitely demo a full VR head-mounted display first, though, to make sure the more limited Gear VR-style experience is what you really want.

I also wanted to make quick mention of AntVR, which is an indie VR solution and Kickstarter that uses fiducial markers instead of IR emitters/receivers for position tracking.  It’s a full walking VR system that looked pretty cool.

If walking around with VR goggles seems a bit risky to you, you could also try a harness rig like Omni’s. Ignoring the fact that it looks like a baby’s jumporee, the Omni now comes with custom shoes so running inside it is easier. With practice, it looks like you can go pretty fast in one of these things and maybe even burn some serious calories. There were lots of discussions about where you would put something like this. It should work with any sort of VR setup: the demo systems were using Oculus DK2. While watching the demo I kept wanting to eat baby carrots for some reason.

jumporee

According to various forecasters, virtual reality is going to be as important a cultural touchstone for children growing up today as the Atari 2600 was for my generation.

To quickly summarize (or at least generalize) the benefits of each of the four main VR systems coming to market this year:

1. Oculus Rift – first developed and first to release a full package

2. HTC Vive Pre – best controllers and position tracking

3. PlayStation VR – best games

4. OSVR – best value

Website Update 01-13-16

I have updated this WordPress blog to version 4.4.1.

I also moved my database from ClearDB, which typically hosts MySQL for Microsoft Azure, to a MySQL Docker container running in Azure.

After wasting a lot of time trying to figure out how to do this, I found a brilliant post by Morten Lerudjordet that took me by the hand and led me through all the obscure but necessary steps.

You might be a HoloLens developer if

obialex

You can currently sign up to be selected to receive a HoloLens dev kit sometime in the first quarter of 2016. The advertised price is $3,000 and there’s been lots of kerfuffle over this online, both pro and con. On the one hand, a high price tag for the dev kit ensures that only those who are really serious about this amazing technology will be jumping in. On the other hand, there’s the justifiable concern that only well-heeled consulting companies will be able to get their hands on the hardware at this entry price, keeping it out of the hands of indie developers who may (or may not) be able to do the most innovative and exciting things with it.

I feel that both perspectives have an element of truth behind them. Even with the release of the Kinect a few years ago (which had a much, much lower barrier to entry) there were similar conversations concerning price and accessibility. All this comes down to a question of who will do the most with the HoloLens and have the most to offer. In the long run, after all, it isn’t the hardware that will be expensive but instead the amount of time garage hackers as well as industry engineers are going to invest in organizing, designing and building experiences. At the end of the day (again, from my experience with the Kinect), 80 percent of these would-be bleeding-edge technologists will end up throwing up their hands, while the truly devoted, it will turn out, never even blinked at the initial price tag.

Concerning the price tag, however, I feel like we are underestimating. For anyone currently planning out AR experiences, is only one HoloLens really going to be enough? I can currently start building HoloLens apps using Unity 3D and have a pretty good idea of how it will work out when (if) I eventually get a device in my hands. There will be tweaking, obviously, and lots of experiential, UX, and performance revelations to take into account, but I can pretty much start now. What I can’t do right now — or even easily imagine – is how to collaborate and share experiences between two HoloLenses. And for me, this social aspect is the most fascinating and largely unexplored aspect of augmented reality.

Virtual reality will have its own forms of sociality that largely revolve around using avatars for interrelations. In essence, virtual reality is always a private experience that we shim social interactions into.

Augmented reality, on the other hand, is essentially a social technology that, for now, we are treating as a private one. Perhaps this is because we currently take VR experiences as the template for our AR experiences. But this is misguided. An inherently and essentially social technology like HoloLens should have social awareness as a key aspect of every application written for it.

Can you build a social experience with just one HoloLens? This leaves me wondering whether the price tag for the HoloLens Development Edition is really just $3,000 as advertised, or whether it is actually $6,000.

Finally, what does it take to be the sort of person who doesn’t blink at coughing up 3K – 6K for an early HoloLens?

You might be a HoloLens developer if:

  1. Your most prized possession is a notebook in which you are constantly jotting down your ideas for AR experiences.
  2. You are spending all your free time trying to become better with Unity, Unreal and C++.
  3. You are online until 3 in the morning comparing Microsoft and Magic Leap patents.
  4. You’ve narrowed all your career choices down to what gives you skills useful for HoloLens and what takes away from that.
  5. You’ve subscribed to Clemente Giorio’s HoloLens Developers group and Gian Paolo Santapaolo’s HoloLens Developers Worldwide group on Facebook.
  6. You know the nuanced distinctions between various waveguide displays.
  7. You don’t get “structured light” technology and “light field” technology confused.
  8. You practice imaginary gestures with your hands to see what “feels right”.
  9. You watch the Total Recall remake to laugh at what they get wrong about AR.
  10. You are still watching the TV version of Minority Report to try to see what they are getting right about AR.

Please add your own “You might be a HoloLens developer if” suggestions in the comments. 🙂

How HoloLens Sensors Work

kinect_sensors

[hardware specs were released this week. This post is now updated to reflect the final specs.]

In addition to a sophisticated AR display, the Microsoft HoloLens contains a wide array of sensors that constantly collect data about the user’s external and internal environments. These sensors are used to synchronize the augmented reality world with the real world as well as to respond to commands. The HoloLens’s sensor technology can be thought of as a combination of two streams of research: one from the evolution of the Microsoft Kinect and the other from developments in virtual reality positioning technology. While what follows is almost entirely well-informed guesswork, we can have a fair degree of confidence in these guesses based on what is already known publicly about the tech behind the Kinect and well-documented VR gear like the Oculus Rift.

While this article will provide a broad survey of the HoloLens sensor hardware, the reader can go deeper into this topic on her own through resources like the book Beginning Kinect Programming by James Ashley and Jarrett Webb, Oliver Kreylos’s brilliant Doc-OK blog, and the perpetually enlightening Oculus blog.

Let’s begin with a list of the sensors believed to be housed in the HoloLens HMD:

  1. Gyroscope
  2. Magnetometer
  3. Accelerometer
  4. Inward-facing eye-tracking cameras (?)
  5. Ambient Light Detector (?)
  6. Microphone Array (4 (?) mics)
  7. Grayscale cameras (4)
  8. RGB camera (1)
  9. Depth sensor (1)

The first three make up an Inertial Measurement Unit of the sort often found in head-mounted displays for AR as well as VR. Eye tracking is a technology that was commercialized by third parties like Eye Tribe following the release of the Kinect but has not previously been used in Microsoft hardware – though it isn’t completely clear that any sort of eye tracking is being used here. There is a small sensor at the front that some people assume is an ambient light detector. The last three are similar to technology found in the Kinect.

microphone array
copyright Adobe Stock

I want to highlight the microphone array first because it has always been the least understood and most overlooked feature of the Kinect. The microphone array is extremely useful for speech recognition because it can distinguish between vocal commands from the user and ambient noise. Ideally, it should also be able to amplify speech from the user so commands can be heard even in a noisy room. Speech commands will likely be lit up by integrating the mic array with Microsoft’s cloud-based Cortana speech recognition technology rather than something like the Microsoft Speech SDK. Depending on how the array is oriented, it may also be able to identify the direction of external sounds. In future iterations of HoloLens, we may be able to marry the microphone array’s directional capabilities with the RGB camera and face recognition to amplify speech from our friends through the binaural audio speakers built into HoloLens.
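For developers, the payoff of all that microphone hardware is that voice commands become nearly a one-liner. As a rough illustration, here is the sort of keyword recognition you can already wire up in Unity on Windows 10 through UnityEngine.Windows.Speech; whether HoloLens ends up exposing exactly this API is my assumption, and the command phrases are invented:

```csharp
// Hypothetical HoloLens voice-command hookup, sketched with Unity's existing
// Windows 10 speech APIs (UnityEngine.Windows.Speech). The keywords are invented.
using UnityEngine;
using UnityEngine.Windows.Speech;

public class VoiceCommands : MonoBehaviour
{
    KeywordRecognizer recognizer;

    void Start()
    {
        // The mic array and speech stack do the hard work; we just list the phrases.
        recognizer = new KeywordRecognizer(new[] { "place hologram", "reset scene" });
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        Debug.Log("Heard: " + args.text);   // route to a command handler here
    }

    void OnDestroy()
    {
        recognizer.Dispose();
    }
}
```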

hololens-menu
copyright Microsoft

Eye tracking cameras are part of a complex mechanism allowing the human gaze to be used to manipulate augmented reality menus. When presented with an AR menu, the user can gaze at buttons in the menu in order to highlight them. Selection then occurs either by maintaining the gaze or by introducing an alternative selection mechanism like a hand press – which would in turn use the depth camera combined with hand-tracking algorithms. Besides being extremely cool, eye tracking is a NUI solution to a problem many of us have likely encountered with the Kinect on devices like the Xbox. As responsive as hand tracking can be using a depth camera, it still has lag and jitteriness that make manipulation of graphical user interface menus tricky. There’s certainly an underlying problem in trying to transpose one interaction paradigm, menu manipulation, into another paradigm based on gestures. Similar issues occur when we try to put an interaction paradigm like a keyboard on a touch screen — it can be made to work, but it isn’t easy. Eye tracking is a way to remove friction when using menus in augmented reality. It’s fascinating, however, to imagine what else we could use it for in future HoloLens iterations. It could be used to store images and environmental data whenever our gaze dwells for a threshold amount of time on external objects. When we want to recall something we saw during the day, the HoloLens could bring it back to us: that book in the book store, that outfit the guy in the coffee shop was wearing, the name of the street we passed on the way to lunch. As we sleep each night, perhaps these images could be analyzed in the cloud to discover patterns in our daily lives of which we were previously unaware.
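Whether or not the final hardware tracks the eyes, the gaze-plus-dwell pattern described above is easy to prototype today. The sketch below stands in for gaze with a forward ray from the head, the way Kinect-era demos and most current HMDs do, and “clicks” whatever the user stares at for two seconds. The class, field and message names are mine, not a HoloLens API:

```csharp
// Gaze-and-dwell selection sketch (names are illustrative, not a HoloLens API).
// A ray is cast from the head position along the view direction; if it keeps
// hitting the same object for dwellSeconds, that object is "selected".
using UnityEngine;

public class GazeDwellSelector : MonoBehaviour
{
    public float dwellSeconds = 2f;
    GameObject gazedAt;
    float dwellTimer;

    void Update()
    {
        Transform head = Camera.main.transform;
        RaycastHit hit;
        GameObject current = null;

        if (Physics.Raycast(head.position, head.forward, out hit, 10f))
            current = hit.collider.gameObject;

        if (current != null && current == gazedAt)
        {
            dwellTimer += Time.deltaTime;
            if (dwellTimer >= dwellSeconds)
            {
                current.SendMessage("OnGazeSelected", SendMessageOptions.DontRequireReceiver);
                dwellTimer = 0f;   // reset so we don't re-fire every frame
            }
        }
        else
        {
            gazedAt = current;     // gaze moved to a new target; restart the dwell clock
            dwellTimer = 0f;
        }
    }
}
```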

Kinect has a feature called coordinate mapping which allows you to compare pixels from the depth camera and pixels from the color camera. Because the depth stream contains information about which pixels belong to human beings and which do not, the coordinate mapper can be used to identify people in the RGB image. The RGB image in turn can be manipulated to do interesting things with the human-only pixels, such as background subtraction and the selective application of shaders, so that these effects appear to follow the player around. HoloLens must do something similar but on a vastly grander scale. The HoloLens must map virtual content onto 3D coordinates in the world and make it persist in those locations even as the user twists and turns his head, jumps up and down, and moves freely around the virtual objects that have been placed in the world. Not only must these objects persist, but in order to maintain the illusion of persistence there can be no perceivable lag between user movements and the redrawing of the virtual objects on the HoloLens’s two stereoscopic displays – perhaps no more than 20 ms of delay.
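To make the Kinect half of that comparison concrete, here is a condensed sketch of the background-subtraction trick, in the spirit of the Kinect v2 SDK’s coordinate-mapping samples: every color pixel is mapped into depth space and kept only if the body index frame says a player was seen there. Frame acquisition and locking are omitted, and the class and field names are mine:

```csharp
// Condensed background-subtraction sketch using the Kinect v2 coordinate mapper.
// Assumes the Microsoft.Kinect (Kinect for Windows SDK 2.0) assembly and that
// depthData, bodyIndexData and colorData are filled each frame from a
// MultiSourceFrameReader (omitted here for brevity).
using Microsoft.Kinect;

public class GreenScreen
{
    const int BytesPerPixel = 4; // BGRA

    readonly KinectSensor sensor = KinectSensor.GetDefault();
    ushort[] depthData;
    byte[] bodyIndexData;
    byte[] colorData;
    byte[] output;                    // color pixels with the background knocked out
    DepthSpacePoint[] colorToDepth;   // one mapped point per color pixel
    int depthWidth, depthHeight;

    public void Init()
    {
        sensor.Open();
        FrameDescription depthDesc = sensor.DepthFrameSource.FrameDescription;
        FrameDescription colorDesc = sensor.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Bgra);

        depthWidth = depthDesc.Width;
        depthHeight = depthDesc.Height;
        depthData = new ushort[depthWidth * depthHeight];
        bodyIndexData = new byte[depthWidth * depthHeight];
        colorData = new byte[colorDesc.Width * colorDesc.Height * BytesPerPixel];
        output = new byte[colorData.Length];
        colorToDepth = new DepthSpacePoint[colorDesc.Width * colorDesc.Height];
    }

    public void Compose()
    {
        // Ask the coordinate mapper which depth pixel saw the same point as each color pixel.
        sensor.CoordinateMapper.MapColorFrameToDepthSpace(depthData, colorToDepth);

        for (int i = 0; i < colorToDepth.Length; i++)
        {
            DepthSpacePoint p = colorToDepth[i];
            bool isPlayer = false;

            // Unmappable color pixels come back as negative infinity.
            if (!float.IsNegativeInfinity(p.X) && !float.IsNegativeInfinity(p.Y))
            {
                int dx = (int)(p.X + 0.5f);
                int dy = (int)(p.Y + 0.5f);
                if (dx >= 0 && dx < depthWidth && dy >= 0 && dy < depthHeight)
                {
                    // In the body index frame, 255 means "no tracked body here."
                    isPlayer = bodyIndexData[dy * depthWidth + dx] != 255;
                }
            }

            // Keep player pixels; make everything else transparent.
            for (int b = 0; b < BytesPerPixel; b++)
                output[i * BytesPerPixel + b] = isPlayer ? colorData[i * BytesPerPixel + b] : (byte)0;
        }
    }
}
```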

This is a major problem for both augmented and virtual reality systems. The problem can be broken up into two related issues: orientation tracking and position tracking. Orientation tracking determines where we are looking when wearing a HMD. Position tracking determines where we are located with respect to the external world.

head orientation tracking
copyright Adobe Stock: Sergey Niven

Orientation tracking is accomplished through a device known as an Inertial Measurement Unit, which is made up of a gyroscope, a magnetometer and an accelerometer. The inertial unit of measure for an Inertial Measurement Unit (see what I did there?) is radians per second (rad/s), which expresses the angular velocity of any head movement. Steven LaValle provides an excellent primer on how the data from these sensors are fused together on the Oculus blog. I’ll just provide a digest here as a way of explaining how HoloLens is doing roughly the same thing.

The gyroscope is the core head-orientation tracking device. It measures angular velocity. Once we have the values for the head at rest, we can repeatedly check the gyroscope to see whether the head has moved and in which direction. By combining the velocity and direction of that movement with the amount of time that has passed, we can determine how the head is currently oriented compared to its previous orientation. In fact, the Oculus does this one thousand times per second, and we can assume that HoloLens is collecting data at a similarly furious rate.
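In code, that accumulation step is only a few lines: convert each angular velocity sample into a small incremental rotation and compose it with the running orientation estimate. Here is a toy version in Unity-flavored C#; the method and parameter names are mine and this is not taken from any Oculus or HoloLens SDK:

```csharp
// Toy dead-reckoning of orientation from gyroscope samples. Real IMU fusion
// also corrects drift with the accelerometer and magnetometer (next section).
using UnityEngine;

public static class GyroIntegrator
{
    // orientation: current estimate; angularVelocity: rad/s in sensor axes; dt: seconds.
    public static Quaternion Integrate(Quaternion orientation, Vector3 angularVelocity, float dt)
    {
        float speed = angularVelocity.magnitude;          // rad/s
        if (speed < 1e-8f) return orientation;            // no measurable rotation this sample

        Vector3 axis = angularVelocity / speed;           // unit rotation axis
        float angleDeg = speed * dt * Mathf.Rad2Deg;      // how far we turned during dt

        // Apply the incremental rotation in the sensor's local frame.
        return orientation * Quaternion.AngleAxis(angleDeg, axis);
    }
}
```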

Over time, unfortunately, the gyroscope’s data loses precision – this is known as “drift.” The two remaining orientation sensors are used to correct for this drift. The accelerometer performs an unexpected function here by determining the acceleration due to the force of gravity. The accelerometer provides the true direction of “up” (gravity pulls down, so the acceleration we feel is actually upward, as in a rocket ship flying straight up), which can be used to correct the gyroscope’s misconstrued impression of the real direction of up. “Up,” unfortunately, doesn’t provide all the correction we need. If you turn your head right and left to make the gesture for “no,” you’ll notice immediately that knowing up in this case tells us nothing about the direction in which your head is facing. In this case, knowing the direction of magnetic north provides the additional data needed to correct for yaw error – which is why a magnetometer is also a necessary sensor in HoloLens.
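A common way to apply that correction is a complementary filter: keep trusting the gyroscope for fast motion, but gently pull the estimate toward the accelerometer’s notion of “up” on every update. The following is a minimal sketch of the tilt-correction half only (yaw correction from the magnetometer would work the same way against magnetic north); all names are illustrative:

```csharp
// Toy complementary-filter step: nudge the gyro-integrated orientation toward
// the "up" direction reported by the accelerometer to cancel pitch/roll drift.
// (Yaw drift would still need the magnetometer.)
using UnityEngine;

public static class DriftCorrection
{
    // orientation: gyro estimate; accel: raw accelerometer reading in the sensor frame;
    // blend: small factor like 0.02f so the correction stays gentle.
    public static Quaternion CorrectTilt(Quaternion orientation, Vector3 accel, float blend)
    {
        // At rest the accelerometer measures the reaction to gravity, i.e. which way is up.
        Vector3 measuredUp = orientation * accel.normalized;   // up in world space, per the sensor
        Vector3 trueUp = Vector3.up;                            // up, per the world

        // Rotation that would swing the measured up onto the true up...
        Quaternion correction = Quaternion.FromToRotation(measuredUp, trueUp);

        // ...applied only fractionally, so the gyroscope still dominates short-term motion.
        return Quaternion.Slerp(Quaternion.identity, correction, blend) * orientation;
    }
}
```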

position tracking
copyright Adobe Stock

Even though the IMU, made up of a gyroscope, magnetometer and accelerometer, is great for determining the deltas in head orientation from moment to moment, it doesn’t work so well for determining diffs in head position. For a beautiful demonstration of why this is the case, you can view Oliver Kreylos’s video Pure IMU-Based Positional Tracking is a No-Go. For a very detailed explanation, you should read Head Tracking for the Oculus Rift by Steven LaValle and his colleagues at Oculus.

The Oculus Rift DK2 introduced a secondary camera for positional tracking that sits a few feet from the VR user and detects IR markers on the Oculus HMD. This is known as outside-in positional tracking because the external camera determines the location of the goggles and passes it back to the Oculus software. This works well for the Oculus mainly because the Rift is a tethered device. The user sits or stands near the computer that runs the experience and cannot stray far from there.

There are some alternative approaches to positional tracking which allow for greater freedom of movement. The HTC Vive virtual reality system, for instance, uses two stationary devices in a setup called Lighthouse. Instead of stationary cameras like the ones the Oculus Rift uses, these Lighthouse boxes are stationary emitters of infrared light that the Vive uses to determine its position in a room with respect to them. This is sometimes called an inside-out positional tracking solution because the HMD determines its location relative to known external fixed positions.

Google’s Project Tango is another example of inside-out positional tracking, one that uses the sensors built into handheld devices (smart phones and tablets) to add AR and VR functionality to applications. Because these devices aren’t packed with high-end IMUs, they can be laggy. To compensate, Project Tango uses data from the onboard device cameras to determine the orientation of the room around the device. These reconstructions are constantly compared against previous reconstructions in order to determine both the device’s position and its orientation with respect to the room surfaces around it.

It is widely assumed that HoloLens uses a similar technique to correct for positional drift from the Inertial Measurement Unit. After all, HoloLens has four grayscale cameras built into it. The IMU, in this supposition, would provide fast but drifty positional data, while the combination of data from the four grayscale cameras and an RGB camera provides possibly slower (we’re talking in milliseconds, after all) but much more accurate positional data. Together, this configuration provides inside-out positional tracking that is truly tetherless. This is, in all honesty, a simply amazing feat and one almost entirely overlooked in most overviews of the HoloLens.
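The fusion itself can be pictured very simply: dead-reckon position at IMU rates, then blend toward the camera-based fix whenever one arrives. The sketch below is purely my own illustration of that idea under the assumptions above, not anything from a HoloLens API:

```csharp
// Toy fusion of a fast-but-drifty IMU position estimate with a slower,
// more accurate camera-based estimate, in the spirit of the inside-out
// tracking described above. Illustrative only.
using UnityEngine;

public static class PositionFusion
{
    // Called at IMU rate: dead-reckon position from the current velocity estimate.
    public static Vector3 Predict(Vector3 position, Vector3 velocity, float dt)
    {
        return position + velocity * dt;
    }

    // Called whenever a (slower) camera-based fix arrives: blend toward it so the
    // drift accumulated since the last fix is gradually washed out.
    public static Vector3 Correct(Vector3 predicted, Vector3 cameraFix, float blend = 0.2f)
    {
        return Vector3.Lerp(predicted, cameraFix, blend);
    }
}
```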

The secret sauce that integrates camera data into an accurate and fast reconstruction of the world to be used, among other things, for position tracking is called the Holographic Processing Unit – a chip the Microsoft HoloLens team is designing itself. I’ve heard from reliable sources that fragments from Stonehenge are embedded in each chip to make this magic work.

AR wordart

On top of this, the depth sensor, grayscale cameras, and RGB camera will likely be accessible as independent data streams that can be used for the same sorts of functionality they have provided in Kinect applications over the past four years: art, research, diagnostics, medicine, architecture, and gaming. Though it has not been discussed publicly, I would hope that the complex functionality we have become familiar with from Kinect development, like skeleton tracking and raw hand tracking, will also be made available to HoloLens developers.

Such a continuity of capabilities and APIs between Kinect and HoloLens, if present, would make it easy to port the thousands of Kinect experiences the creative and software communities have developed over the years leading up to HoloLens. This sort of continuity was, after all, responsible for the explosion of online hacking videos that originally made the Kinect such an object of fascination. The Kinect hardware used a standard USB connector that developers were able to quickly hack and then pass on to, for the most part, pre-existing creative applications that had previously relied on less well-known, less available and non-standard depth and RGB cameras. The Kinect connected all these different worlds of enthusiasts by using common parts and common paradigms.

It is my hope and expectation that HoloLens is set on a similar path.

[This post has been updated 11/07/15 following opportunities to make a closer inspection of the hardware while in Redmond, WA. for the MVP Global Summit. Big thanks to the MPC and HoloLens groups as well as the Emerging Experiences MVP program for making this possible.]

[This post has been updated again 3/3/16 following the release of the final specs.]

Augmented Reality without Helmets

elementary

Given current augmented reality technologies like Magic Leap and HoloLens, it has become a reflexive habit to associate augmented reality with head-mounted displays.

This tendency has always been present and has had to undergo constant correction, as in this 1997 paper by the legendary Ron Azuma that provides a survey of AR:

“Some researchers  define AR in a way that requires the use of Head-Mounted Displays (HMDs). To avoid limiting AR to specific technologies, this survey defines AR as systems that have the following three characteristics:

 

1) Combines real and virtual

2) Interactive in real time

3) Registered in 3-D

 

“This definition allows other technologies besides HMDs while retaining the essential components of AR.”

Azuma goes on to describe the juxtaposition of real and virtual content from Who Framed Roger Rabbit as an illustrative example of AR as he has defined it. Interestingly, he doesn’t cite the holodeck from Star Trek as an example of HMD-less AR – probably because it is tricky to use fantasy future technology to really prove anything.

Nevertheless, the holodeck is one of the great examples of the sort of tetherless AR we all ultimately want. It often goes under the name “hard AR” and finds expression in Vernor Vinge’s Hugo-winning Rainbows End.

The Star Trek TNG writers were always careful not to explain too much about how the holodeck actually worked. We get a hint of it, however, in the 1988 episode Elementary, Dear Data in which Geordi, Data and Dr. Pulaski enter the holodeck in order to create an original Sherlock Holmes adventure for Data to solve. This is apparently the first time Dr. Pulaski has seen a state-of-the-art holodeck implementation.

Pulaski:  “How does it work? The real London was hundreds of square kilometers in size.”

 

Data:  “This is no larger than the holodeck, of course, so the computer adjusts by placing images of more distant perspective on the holodeck walls.”

 

Geordi:  “But with an image so perfect that you’d actually have to touch the wall to know it was there. And the computer fools you in other ways.”

What fascinates me about this particular explanation of holodeck technology is that it sounds an awful lot like the way Microsoft Research’s RoomAlive project works.

RoomAlive

RoomAlive uses a series of coordinated projectors, typically calibrated using Kinects, to project real-time, interactive content onto the walls of the RoomAlive space using a technique called projection mapping.
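The mechanics of projection mapping are easier to picture with a little code. The heart of it is simply rendering the virtual scene from the projector’s point of view, using the pose and lens intrinsics recovered during the Kinect/projector calibration step. This Unity-flavored sketch is my own illustration of that idea, not code from RoomAlive, and the field names and numbers are placeholders:

```csharp
// Toy projection-mapping setup: park a Unity camera at the physical projector's
// calibrated pose and give it the field of view implied by the projector's lens,
// so whatever it renders lands on the real walls in registration. All values
// below are placeholders a calibration step would normally supply.
using UnityEngine;

public class ProjectorCamera : MonoBehaviour
{
    public Camera cam;                       // stands in for the physical projector
    public Vector3 projectorPosition;        // extrinsics from calibration (placeholder)
    public Quaternion projectorRotation = Quaternion.identity;
    public float focalLengthY = 1000f;       // intrinsics, in pixels (placeholder)
    public float imageWidth = 1024f;
    public float imageHeight = 768f;

    void Start()
    {
        // Vertical field of view implied by the projector's focal length.
        cam.fieldOfView = 2f * Mathf.Atan(imageHeight / (2f * focalLengthY)) * Mathf.Rad2Deg;
        cam.aspect = imageWidth / imageHeight;

        // Render from exactly where the projector sits, facing the way it faces.
        cam.transform.position = projectorPosition;
        cam.transform.rotation = projectorRotation;
    }
}
```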

By this point you may also notice some similarities between the demos for RoomAlive and the latest gaming demos for HoloLens.

microsoft-hololens-shooter

These experiences are cognates rather than derivations of one another. The reason so many AR experiences end up looking similar, regardless of the technology used to implement them (and, more importantly, regardless of the presence or absence of HMDs), is that AR applications all tend to solve the same sorts of problems that other technologies, like virtual reality, do not.

holo_targets_tng

According to an alternative explanation, however, all AR experiences end up looking the same because all AR experiences ultimately borrow their ideas from the Star Trek holodeck.

By the way, if you would like to create your own holodeck-inspired experience, the researchers behind RoomAlive have open-sourced their code under the MIT license. Might I suggest a Sherlock Holmes-themed adventure?

How HoloLens Displays Work

HoloLens-displays

There’s been a lot of debate concerning how the HoloLens display technology works. Some of the best discussions have been on reddit/hololens but really great discussions can be found all over the web. The hardest problem in combing through all this information is that people come to the question at different levels of detail. A second problem is that there is a lot of guessing involved and the amount of guessing going on isn’t always explained. I’d like to correct that by providing a layered explanation of how the HoloLens displays work and by being very up front that this is all guesswork. I am a Microsoft MVP in the Kinect for Windows program but do not really have any insider information about HoloLens I can share and do not in any way speak for Microsoft or the HoloLens team. My guesses are really about as good as the next guy’s.

High Level Explanation

view_master

The HoloLens display is basically a set of transparent screens placed just in front of the eyes. Each eyepiece or screen lets light through and also shows digital content the way your monitor does. Each screen shows a slightly different image to create a stereoscopic illusion, the way a View-Master toy or 3D glasses at the movies do.
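The exact rendering pipeline HoloLens uses has not been published, but the basic stereoscopy trick is easy to sketch: project the same virtual point for two eye positions separated by the interpupillary distance and let the horizontal offset between the two projections (the disparity) carry the depth cue. The IPD and focal length below are illustrative stand-ins, not device specifications.

```python
# Toy stereoscopy sketch: the same virtual point is projected for a left and a
# right eye separated by the interpupillary distance (IPD). The horizontal
# disparity between the two projections is what creates the depth illusion.
IPD = 0.063        # meters; a typical adult interpupillary distance (assumed)
FOCAL = 1200.0     # per-eye focal length in pixels (made-up value)

def per_eye_pixels(x, z):
    """Horizontal pixel coordinates of a point at (x, z) meters for the left and
    right eyes, with the eyes sitting at -IPD/2 and +IPD/2 on the x axis."""
    left = FOCAL * (x + IPD / 2) / z
    right = FOCAL * (x - IPD / 2) / z
    return left, right

for z in (0.5, 1.0, 2.0, 5.0):
    left, right = per_eye_pixels(0.0, z)
    print(f"point at {z:4.1f} m -> disparity {left - right:6.1f} px")
# Disparity shrinks with distance, so nearby holograms "pop" more than far ones.
```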

A few years ago I worked with transparent screens created by Samsung that were basically just LCD screens with their backings removed. LCDs work by suspending liquid crystals between layers of glass. There are two factors that make them bad candidates for augmented reality head mounts. First, they require soft backlighting in order to be reasonably useful. Second, and more importantly, they are too thick.

At this level of granularity, we can say that HoloLens works by using a light-weight material that displays color images while at the same time letting light through the displays. For fun, let’s call this sort of display an augmented reality combiner, since it combines the light from digital images with the light from the real world passing through it.
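One practical consequence of a see-through combiner is that it can only add digital light to whatever light is already arriving from the real world; it cannot subtract any. Here is a minimal sketch of that additive behavior, in which pure black in the digital image ends up effectively transparent:

```python
import numpy as np

def combine(real_scene, digital_image):
    """Additive combiner sketch: digital light is added on top of the light
    already coming from the real world, then clipped to the displayable range.
    Black digital pixels contribute nothing, which is why black reads as
    transparent on see-through displays."""
    total = real_scene.astype(np.float32) + digital_image.astype(np.float32)
    return np.clip(total, 0, 255).astype(np.uint8)

real = np.full((2, 2, 3), 200, dtype=np.uint8)    # a bright real-world wall
hologram = np.zeros((2, 2, 3), dtype=np.uint8)    # black everywhere -> invisible
hologram[0, 0] = (0, 80, 255)                     # one bright blue pixel

print(combine(real, hologram)[0, 0])   # the blue pixel adds to the wall's light
print(combine(real, hologram)[1, 1])   # the black pixel leaves the wall unchanged
```

This is also why designers of see-through experiences are warned off large dark surfaces: there is no light the combiner can add that will read as black.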


Intermediate Level Explanation

Light from the real world passes through two transparent pieces of plastic. That part is pretty easy to understand. But how does the digital content get onto those pieces of plastic?

Optical-Fibers

The magic concept here is that the displays are waveguides. Optical fiber is an instance of a waveguide we are all familiar with. Optical fiber is a great method for transferring data over long distances because it is nearly lossless, bouncing light back and forth between its reflective internal surfaces.

hl_display_diagram

The two HoloLens eye screens are basically flat optical fibers or planar waveguides. Some sort of image source at one end of these screens sends out RGB data along the length of the transparent displays. We’ll call this the image former. This light bounces around the internal front and back of each display and in this manner traverses down its length. These light rays eventually get extracted from the displays and make their way to your pupils. If you examine the image of the disassembled HoloLens at the top, it should be apparent that the image former is somewhere above where the bridge of your nose would go.
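To get a feel for what bouncing down the waveguide means, here is a toy sketch that computes the critical angle for total internal reflection from an assumed refractive index and counts how many bounces a ray makes between the image former and the extraction area. The index, length and thickness are my own illustrative numbers, not HoloLens specifications.

```python
import math

N_GLASS = 1.7         # assumed refractive index of the waveguide material
N_AIR = 1.0
LENGTH_MM = 50.0      # assumed distance from the image former to the extraction area
THICKNESS_MM = 1.0    # assumed waveguide thickness

# Total internal reflection only happens above the critical angle.
CRITICAL = math.degrees(math.asin(N_AIR / N_GLASS))

def bounces(angle_deg):
    """Number of internal reflections a ray makes while traversing the guide.
    Each bounce advances the ray THICKNESS * tan(angle) along the guide."""
    if angle_deg <= CRITICAL:
        return None   # below the critical angle the ray refracts out and is lost
    advance_per_bounce = THICKNESS_MM * math.tan(math.radians(angle_deg))
    return int(LENGTH_MM / advance_per_bounce)

print(f"critical angle: {CRITICAL:.1f} degrees")
for angle in (30, 45, 60, 75):
    n = bounces(angle)
    label = "escapes (below the critical angle)" if n is None else f"about {n} bounces"
    print(f"ray at {angle} degrees -> {label}")
```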


Low Level Explanation

The lowest level is where much of the controversy comes in. In fact, it’s such a low level that many people don’t realize it’s there. And when I think about it, I pretty much feel like I’m repeating dialog from a Star Trek episode about dilithium crystals and quantum phase converters. I don’t really understand this stuff. I just think I do.

In the field of augmented reality, there are two main techniques for extracting light from a waveguide: holographic extraction and diffractive extraction. A holographic optical element has holograms inside the waveguide which route light into and out of it. Two holograms can be used, one at either end of the waveguide: the first turns the originating image 90 degrees from the source and sends it down the length of the waveguide; the second intercepts the light rays and turns them another 90 degrees toward the wearer’s pupils.

A company called TruLife Optics produces these types of displays and has a great FAQ to explain how they work. Many people, including Oliver Kreylos, who has written quite a bit on the subject, believe that this is how the HoloLens microdisplays work. One reason for this is Microsoft’s emphasis on the terms “hologram” and “holographic” to describe their technology.

On the other hand, diffractive extraction is a technique pioneered by researchers at Nokia, for which Microsoft currently owns the patents and research. For a variety of reasons, this technique falls under the semantic umbrella of a related technology called Exit Pupil Expansion. EPE literally means making an image bigger (expanding it) so that it covers as much of the exit pupil as possible, meaning your pupil plus every position it might move to as you rotate your eyeball to take in your field of view (roughly a 10mm x 8mm rectangle, or eye box). This, in turn, is probably why measuring the interpupillary distance is a large part of fitting people for the HoloLens.
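A back-of-the-envelope sketch of why IPD fitting matters: if the optics are centered for one interpupillary distance and the wearer has a different one, each pupil sits off-center by half the difference, which eats into the roughly 10mm-wide eye box quoted above. The eye-box width comes from the text; the pupil diameter, reference IPD and the margin calculation itself are my own illustrative assumptions, not how the actual HoloLens fitting process works.

```python
# Rough eye-box sketch: each eye's exit pupil region is about a 10mm x 8mm box.
# If the optics are aligned for one IPD and the wearer has another, each pupil
# sits off-center by half the difference, shrinking the margin before the image
# starts to cut off at the edges. Purely illustrative numbers.
EYE_BOX_WIDTH_MM = 10.0
PUPIL_DIAMETER_MM = 4.0      # a typical indoor pupil size (assumed)
DESIGNED_IPD_MM = 63.0       # the IPD the optics are nominally centered for (assumed)

def lateral_margin(wearer_ipd_mm):
    """Millimeters of slack left before the wearer's pupil leaves the eye box."""
    per_eye_offset = abs(wearer_ipd_mm - DESIGNED_IPD_MM) / 2.0
    return EYE_BOX_WIDTH_MM / 2.0 - PUPIL_DIAMETER_MM / 2.0 - per_eye_offset

for ipd in (58, 63, 68, 74):
    margin = lateral_margin(ipd)
    print(f"IPD {ipd} mm -> margin {margin:+.1f} mm"
          + ("  (image starts to vignette)" if margin < 0 else ""))
```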

ASPEimage002

Nanometer-wide structures, or gratings, are placed on the surface of the waveguide at the location where we want to extract an image. The grating effectively creates an interference pattern that diffracts the light out and even enlarges the image. This is known as surface relief grating (SRG), as shown in the above image from holographix.com.
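The physics behind a surface relief grating is captured by the standard grating equation. The sketch below, using an assumed refractive index and grating pitch, normal incidence and the first diffraction order only, computes the angle at which each color is diffracted into the glass and checks whether it clears the critical angle, i.e. whether it stays trapped in the waveguide until the exit grating bends it back out.

```python
import math

N_GLASS = 1.7       # assumed refractive index of the waveguide
PITCH_NM = 400.0    # assumed grating pitch (period)
CRITICAL = math.degrees(math.asin(1.0 / N_GLASS))   # TIR critical angle in the glass

def coupled_angle(wavelength_nm, order=1):
    """First-order diffraction angle inside the glass for light arriving at normal
    incidence, from the grating equation n * sin(theta) = m * wavelength / pitch."""
    s = order * wavelength_nm / (N_GLASS * PITCH_NM)
    if abs(s) > 1.0:
        return None   # that order is evanescent and carries no light
    return math.degrees(math.asin(s))

print(f"critical angle inside the glass: {CRITICAL:.1f} degrees")
for name, wavelength in (("blue", 450), ("green", 530), ("red", 620)):
    angle = coupled_angle(wavelength)
    trapped = angle is not None and angle > CRITICAL
    print(f"{name:5} ({wavelength} nm) -> {angle:.1f} degrees, trapped by TIR: {trapped}")
```

Note that the three colors end up at noticeably different angles, which is one reason diffractive waveguide designs often stack separate plates or gratings per color band.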

Reasons for believing HoloLens uses SRG as its way of doing EPE include the Nokia connection as well as this post from Jonathan Lewis, the CEO of TruLife, in which Lewis states, following the original HoloLens announcement, that it isn’t the holographic technology he’s familiar with and is probably EPE. There’s also the second edition of Woodrow Barfield’s Wearable Computers and Augmented Reality, in which Barfield seems pretty adamant that diffractive extraction is used in HoloLens. As a professor at the University of Washington, which has a very good technology program as well as close ties to Microsoft, he may know something about it.

On the other hand, neither technique is favored or disfavored in this Microsoft patent, which is clearly talking about HoloLens and ends up discussing both volume holograms (VH) and surface relief gratings (SRG). I think HoloLens is more likely to be using diffractive extraction than holographic extraction, but it’s by no means a sure thing.


Impact on Field of View

An important aspect of these two technologies is that they both involve a limited field of view, based on the ways we are bouncing and bending light in order to extract it from the waveguides. As Oliver Kreylos has eloquently pointed out, “the current FoV is a physical (or, rather, optical) limitation instead of a performance one.” In other words, any augmented reality head-mounted display (HMD) or near-eye display (NED) is going to suffer from a small field of view when compared to virtual reality devices. This is equally true of the currently announced devices like HoloLens and Magic Leap, the currently available AR devices like those by Vuzix and DigiLens, and the expected but unannounced devices from Google, Facebook and Amazon. Let’s call this the keyhole problem (KP).

keyhole
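Kreylos’s point can be made with a single-color, back-of-the-envelope calculation. A diffractive waveguide can only carry rays whose internal angle sits between the TIR critical angle and some practical grazing limit, and because the in- and out-coupling gratings shift every ray by the same amount, that internal range caps the field of view you can ever get back out into air. The refractive index and grazing limit below are assumptions of mine, but the resulting number lands in the same ballpark as the narrow fields of view people complain about.

```python
import math

# Simplified, single-color estimate of the angular "keyhole" of a diffractive
# waveguide. Only rays between the TIR critical angle and a practical grazing
# limit can travel down the guide, and that range caps the FOV in air.
N_GLASS = 1.7            # assumed refractive index
THETA_MAX_DEG = 75.0     # assumed practical grazing limit inside the glass

theta_c = math.asin(1.0 / N_GLASS)   # TIR critical angle (radians)
bandwidth = N_GLASS * (math.sin(math.radians(THETA_MAX_DEG)) - math.sin(theta_c))

# For a field of view symmetric about the straight-ahead direction,
# 2 * sin(FOV / 2) cannot exceed that internal bandwidth.
fov = 2 * math.degrees(math.asin(bandwidth / 2.0))
print(f"estimated maximum horizontal FOV: {fov:.1f} degrees")   # roughly 37 degrees
```

Higher-index glass widens the keyhole somewhat, which is why progress on field of view in this class of device tracks materials and optics rather than processing power.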

The limitations posed by KP are a direct result of the need to use transparent displays that are actually wearable. Given this, I think it is going to be a waste of time to lament the fact that AR FOVs are smaller than we have been led to expect from the movies we watch. I know Iron Man has already had much better AR for several years, with a 360-degree field of view, but hey: he’s a superhero living in a comic book world, and the physical limitations of our world don’t apply to him.

Instead of worrying that tech companies for some reason are refusing to give us better augmented reality, it probably makes more sense to simply embrace the laws of physics and recognize that, as we’ve been told repeatedly, hard AR is still several years away and there are many technological breakthroughs still needed to get us there (let’s say five years or even “in the Windows 10 timeframe”).

In the meantime, we are being treated to first-generation AR devices, with all that the term “first generation” entails. This is really just as well, because once we get beyond the initial romantic phase it’s going to take us a lot of time to figure out what we want to do with AR gear, and even longer to figure out how to do these experiences well. After all, that’s where the real fun comes in. We get to take the next couple of years to plan out what kinds of experiences we are going to create for our brave new augmented world.

Terminator Vision

terminator70

James Cameron’s film The Terminator introduced an interesting visual effect that allowed audiences to get inside the head and behind the eyes of the eponymous cyborg. What came to be called terminator vision is now a staple of science fiction movies from Robocop to Iron Man. Prior to The Terminator, however, the only similar robot-centric perspective shot seems to have been in the 1973 Yul Brynner thriller Westworld. Terminator vision is basically a scene filmed from the T-800’s point-of-view. What makes the terminator vision point-of-view style special is that the camera’s view is overlaid with informatics concerning background data, potential dialog choices, and threat assessments.

termdialog

But does this tell us anything about how computers actually see the world? With the suspension of disbelief typically required to enjoy science fiction, we can accept that a cyborg from the future would need to identify threats and then have contingency plans in case the threat exceeds a certain threshold. In the same way, it makes sense that a cyborg would perform visual scans and analysis of the objects around him. What makes less sense is why a computer would need an internal display readout. Why does the computer that performs this analysis need to present the data back to itself to read on its own eyeballs?

terminator_vision

Looked at another way, we might wonder how the T-800 processes the images and informatics it is displaying to itself inside the theater of its own mind. Is there yet another terminator inside the head of the T-800 that takes in this image and processes it? Does the inner terminator then redisplay all of this information to yet another terminator inside its own head, an inner-inner terminator? Does this epiphenomenal reflection and redisplaying of information go on ad infinitum? Or does it make more sense simply to reject the whole notion of a machine examining and reflecting on its own visual processing?

robocop

I don’t mean to set up terminator vision as a straw man in this way just so I can knock it down. Where terminator vision falls somewhat short in showing us how computers see the world, it excels in teaching us how we human beings see computers. Terminator vision is so effective as a storytelling trope because it fills in for something that cannot exist. Computers take in data, follow their programming, perform operations and return results. They do not think, as such. They are on the far side of an uncanny valley, performing operations we might perform but more quickly and without hesitation. Because of this, we find it reassuring to imagine that computers deliberate in the same way we do. It gives us pleasure to project our own thinking processes onto them. Far from being jarring, seeing dialog options appear on Arnold Schwarzenegger’s inner vidscreen like a 1990s text-based computer game is comforting because it paves over the uncanny valley between humans and machines.

Virtual Names for Augmented Reality (Or Why “Mixed-Reality” is a Bad Moniker)

dog_tview

It’s taken about a year but now everyone who’s interested can easily distinguish between augmented reality and virtual reality. Augmented reality experiences like the one provided by HoloLens combine digital and actual content. Virtual reality experiences like that provided by Oculus Rift are purely digital experiences. Both have commonalities such as stereoscopy, head tracking and object positioning to create the illusion that the digital objects introduced into a user’s field of view have a physical presence and can be walked around.

Sticklers may point out that there is a third kind of experience, called a head-up display, in which informatics are displayed at the top corner of a user’s field of view to provide digital content and text. Because head-up display devices like the now passé Google Glass do not overlay digital content on top of real-world content, but instead display it more or less side by side, they are not considered augmented reality.

Even with augmented reality, however, a distinction can be drawn between informational content and digital content made up of 3D models. The informational type of augmented reality, as in the picture of my dog Marcie above, is often called the Terminator view, after the first-person (first-cyborg?) camera perspective used as a storytelling device in the eponymous movie. The other type of augmented reality content has variously been described, inaccurately, as holography by marketers or, more recently, as mixed reality.

The distinction is being drawn largely to separate what might be called hard AR from the more typical 2D overlays on smartphones that help you find a pizza restaurant. Mixed reality is a term intended to emphasize the point that not all AR is created equal.

Abandoning the term “augmented reality” in favor of “mixed reality” to describe HoloLens and Magic Leap, however, seems a bit drastic and recalls Gresham’s Law, the observation that bad money drives out good money. When the principle is generalized, as Albert Jay Nock generalized it in his brilliant autobiography Memoirs of a Superfluous Man, it simply means that counterfeit and derivative concepts drive out authentic ones.

This is what appears to be happening here. Before the advent of the iPhone, researchers were already working on augmented reality. The augmented reality experiences they were building, in turn, were not in the Terminator vision style. Early AR projects like KARMA from 1992 were the type of experiences that are now being made possible in Redmond and Hollywood, Florida. Terminator vision apps came only later, with the mass distribution of smartphones, because flat AR experiences are the only type of AR those devices can support.

I prefer the term augmented reality because it contains within itself a longer perspective on these technologies. Ultimately, the combination of digital and real content is intended to uplift us and enhance our lives. If done right, it has the ability to re-enchant everyday life. Compared to those aspirations, the term “mixed reality” seems overly prosaic and fatally underwhelming.

I will personally continue to use the moniker “mixed reality” as a generic term when I want to talk about both virtual reality and augmented reality as a single concept. Unless the marketing juggernaut overtakes me, however, I will give preference to the more precise and aspirational term “augmented reality” when talking about HoloLens, Magic Leap and cool projects like RoomAlive.