I’ve been playing the Kickstarted space simulation game Elite: Dangerous for the past several weeks with the Oculus Rift DK2. Totally work related, of course.
Basically I’ve had the DK2 since Christmas and had been looking for a really good game to go with my device (rather than the other way around). After shelling out $350 for the goggles, $60 more for a game didn’t seem like such a big deal.
In fact, playing Elite: Dangerous with the Oculus and an Xbox One gamepad has been one of the best gaming experiences I have ever had in my life – and I’m someone who played E.T. on the Atari 2600 when it first came out, so I know what I’m talking about, yo. It is a fully realized Virtual Reality environment that allows me to fly through a full simulation of the galaxy based on current astronomical data. When I am in the simulation, I objectively know that I am playing a game. However, all of my peripheral awareness and background reactions seem to treat the simulation as if it were real. My sense of space changes and my awareness expands into the virtual space of the simulation. Even though I never mistake the VR experience for reality, I nevertheless experience a strong suspension of disbelief whenever I am inside it.
One of the things I’ve found fascinating about this Virtual Reality simulation is that it is full of Augmented Reality objects. For instance, the two menu bars at the top of the screencap above, to the top left and the top right, are full holograms. When I move my head around, parallax effects demonstrate that their positions are anchored to the cockpit rather than to my personal perspective. If the VR goggles allowed it, I could even lean forward and look at the back side of those menus. Interestingly, when the game is played in normal 3D first-person mode rather than in VR with the Oculus, those menus are rendered as head-up displays anchored to my point of view as I use the mouse to look around the cockpit – in much the same way that Google Glass anchored menus to the viewer instead of the viewed.
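To make the distinction concrete, here is a minimal sketch of the two anchoring strategies in plain Python: no game engine attached, and every name in it is my own invention. The cockpit-anchored menu moves in view space as the head turns (that is the parallax I’m describing), while the head-locked menu, by construction, never does:

```python
import numpy as np

def yaw_matrix(theta):
    """Rotation about the vertical (y) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def to_view_space(point_world, head_pos, head_yaw):
    """Express a world-space point in the viewer's head-relative frame."""
    return yaw_matrix(-head_yaw) @ (point_world - head_pos)

# A cockpit-anchored menu lives at a fixed position in world space.
menu_cockpit = np.array([0.3, 0.1, -1.0])  # up and left, 1 m ahead of the seat

# A head-locked (HUD-style) menu lives at a fixed offset in view space,
# so by definition it never moves relative to the viewer.
menu_hud = np.array([0.3, 0.1, -1.0])

for head_yaw in (0.0, 0.4):  # look straight ahead, then turn ~23 degrees
    in_view = to_view_space(menu_cockpit, head_pos=np.zeros(3), head_yaw=head_yaw)
    print(f"yaw={head_yaw:.1f}  cockpit-anchored: {in_view.round(2)}  head-locked: {menu_hud}")
```

Running it shows the cockpit-anchored menu sliding across the view-space frame as the head turns, which is exactly the cue that makes it read as an object in the world rather than a sticker on your eyeball.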
The navigation objects on the dashboard in front of me are also AR holograms. Their locations are anchored to the cockpit rather than to me, and when I move around I can see them from different angles. At the same time, they exhibit a combination of glow and transparency that isn’t common in real-world objects and that we have come to recognize, from sci-fi movies, as the inherent characteristics of holograms.
I realized at about the 60-hour mark of my gameplay/research that one of the current opportunities, as well as problems, with AR devices like the Magic Leap and HoloLens is that not many people know how to develop UX for them. This was actually one of the points of a panel discussion concerning HoloLens at the recent BUILD conference. The field is wide open. At the same time, UX research is clearly already being done inside VR experiences like Elite: Dangerous. The hologram-based control panel at the front of the cockpit is a working example of how to design navigation tools using augmented reality.
Another remarkable feature of the HoloLens is the use of gaze as an input vector for human-computer interactions. Elite: Dangerous, however, has already implemented it. When the player looks at certain areas of the cockpit, complex menus like the one shown in the screencap above pop into existence. When one removes one’s direct gaze, the menu vanishes. If this were a usability test for gaze-based UI, Elite: Dangerous would already have collected hours of excellent data from thousands of players to verify whether this is an effective new interaction (in my experience, it totally is, btw). This is also exactly the sort of testing that we know will need to be done over the next few years in order to firm up and conventionalize AR interactions. By happenstance, VR designers are already doing this for AR before AR is even really on the market.
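I have no idea how Frontier actually implements this under the hood, but the basic mechanic is easy to sketch: compare the head’s forward direction with the direction to the menu’s anchor point and show the menu when the angle gets small enough, with a bit of hysteresis so it doesn’t flicker at the boundary. Everything below is my own guesswork in plain Python, not anything taken from the game:

```python
import numpy as np

SHOW_ANGLE = np.radians(15)  # gaze within 15 degrees: menu appears
HIDE_ANGLE = np.radians(25)  # gaze beyond 25 degrees: menu vanishes
# The gap between the two thresholds is the hysteresis: it keeps the
# menu from strobing when the player's gaze hovers right at the edge.

def gaze_angle(head_forward, head_pos, anchor_pos):
    """Angle between where the head points and the menu anchor."""
    to_anchor = anchor_pos - head_pos
    to_anchor = to_anchor / np.linalg.norm(to_anchor)
    head_forward = head_forward / np.linalg.norm(head_forward)
    return np.arccos(np.clip(head_forward @ to_anchor, -1.0, 1.0))

def update_menu(visible, head_forward, head_pos, anchor_pos):
    """One frame of the show/hide state machine."""
    angle = gaze_angle(head_forward, head_pos, anchor_pos)
    if not visible and angle < SHOW_ANGLE:
        return True
    if visible and angle > HIDE_ANGLE:
        return False
    return visible

# Tiny demo: glance away, look at the anchor, then drift slightly off it.
anchor = np.array([0.5, 0.0, -1.0])
visible = False
for fwd in ([0.0, 0.0, -1.0], [0.45, 0.0, -0.9], [0.2, 0.0, -1.0]):
    visible = update_menu(visible, np.array(fwd), np.zeros(3), anchor)
    print(visible)  # False, True, True: drifting just past 15 degrees doesn't hide it
```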
The other place augmented reality interaction design research is being carried out is in Japanese anime. The image above is from a series called Sword Art Online. When I think of VR movies, I think of The Matrix. When I put my children into my Oculus, however, they immediately connected it to SAO. SAO is about a group of beta testers of a new MMORPG, played through virtual reality goggles, who become trapped inside the game due to the evil machinations of one of its developers. While the setting of the VR world is medieval, players still interact with in-game AR control panels.
Consider why this makes sense when we ask the hologram versus head-up display question. If the menu is anchored to our POV, it becomes difficult to actually touch menu items. They will move around and jitter as the player looks around. In this case, a hologram anchored to the world rather than to the player makes a lot more sense. The player can process the consistent position of the menu and anticipate where she needs to place her fingers in order to interact with it. Sword Art Online effectively provides what Bill Buxton describes as a UX sketch for interactions of this sort.
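To see why the stable anchor matters for touch, consider the simplest possible hit test (again a sketch of my own, not anything from SAO or a real engine). Because the buttons never move in world space, a tracked fingertip only has to come within a small radius of a fixed point, and muscle memory can do the aiming:

```python
import numpy as np

# Buttons of a world-anchored menu: fixed positions that never change
# as the player looks around, so the player can target them blind.
buttons = {
    "inventory": np.array([0.10, 1.20, -0.50]),
    "map":       np.array([0.10, 1.10, -0.50]),
    "logout":    np.array([0.10, 1.00, -0.50]),
}

def hit_test(fingertip, radius=0.03):
    """Return the button the fingertip is touching, if any."""
    for name, pos in buttons.items():
        if np.linalg.norm(fingertip - pos) < radius:
            return name
    return None

print(hit_test(np.array([0.11, 1.19, -0.49])))  # -> 'inventory'
```

With a head-locked menu, those target positions would be a moving function of head pose, and every twitch of the neck would turn a button press into a game of whack-a-mole.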
On an intellectual level, consider how many overlapping interaction metaphors are involved in the SAO sketch. We have 1) a GUI-based menu system transposed to 2) touch interactions (no right-clicking), then expressed as 3) an augmented reality experience placed inside of 4) a virtual reality experience (and communicated inside a cartoon).
Why is all of this possible? Why are the best augmented reality experiences inside of virtual reality experiences and cartoons? I think it has to do with cost of execution. Illustrating an augmented reality experience in an anime is not really any more difficult than illustrating a field of grass or a cute yellow gerbil-like character. The labor costs are the same. The difficulty is only in the conceptualization.
Similarly, throwing a hologram into a virtual reality experience is not going to be any more difficult than throwing a tree or a statue into the VR world. You just add some shaders to create the right transparency-glowy-pulsing effect and you have a hologram. No additional work has to be done to marry the stereoscopic convergence of hologram objects to the focal position of real-world locations, as is required for really good AR. In the VR world, these two things – the hologram world and the VR world – are collapsed into one thing.
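For what it’s worth, the hologram look really is mostly a handful of cheap tricks. Here is a rough sketch of the per-frame math, with plain Python standing in for a fragment shader and constants that are nothing more than plausible guesses:

```python
import math

def hologram_tint(base_color, t,
                  base_alpha=0.45, pulse_amp=0.15, pulse_hz=1.5,
                  glow_strength=1.8):
    """Compute the pulsing, semi-transparent tint of a fake hologram.

    base_color -- (r, g, b) in 0..1, usually a cyan or amber
    t          -- elapsed time in seconds
    """
    # Alpha oscillates around a translucent baseline: the "pulsing" part.
    alpha = base_alpha + pulse_amp * math.sin(2 * math.pi * pulse_hz * t)
    # Glow is approximated by over-driving the emissive color; a real
    # shader would add fresnel edge brightening and a bloom pass.
    emissive = tuple(min(1.0, c * glow_strength) for c in base_color)
    return emissive, alpha

print(hologram_tint((0.2, 0.8, 1.0), t=0.25))
```

That is the whole trick: pulsing translucency plus an over-bright emissive already reads as “hologram” to anyone raised on those sci-fi movies.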
There has been a tendency to see virtual reality and mixed reality as opposed technologies. What I have learned from playing with both, however, is that they are actually complementary. While we wait for AR devices to be released by Microsoft, Magic Leap, etc., it makes sense to jump into VR as a way to start understanding how humans will interact with digital objects and how we must design for those interactions. Additionally, because creating AR for VR is so much simpler than creating AR for reality, VR will likely keep a place in the design workflow for prototyping our AR experiences even years from now, when AR becomes not only a reality but an integral thread in the fabric of reality.