HoloLens Surface Reconstruction XEF File Format

[Update 4/23 – this turns out to be just a re-appropriation of an extension name. Kinect Studio doesn’t recognize the HoloLens XEF format and vice versa.]

The HoloLens documentation reveals interesting connections with the Kinect sensor. As most people by now know, the man behind the HoloLens, Alex Kipman, was also behind the Kinect v1 and Kinect v2 sensors.

[Image: Kinect Studio XEF recording]

One of the more interesting features of the Kinect was its ability to perform a scan and then play that scan back later like a 3D movie. The Kinect v2 even came with a recording and playback tool for this called Kinect Studio. Kinect Studio v2 serialized recordings in a file format known as the eXtended Event File (XEF) format. This basically recorded depth point information over time – along with audio and color video if specified.

Now, a few years later, we have HoloLens. Just as the Kinect included a depth camera, the HoloLens also has a depth camera that it uses to perform spatial mapping of the area in front of the user. These spatial maps (“spatmaps”) are then combined with application code so that, in the final visualization, 2D apps appear to be pinned to fixed positions in the world while 3D objects and characters seem to be aware of physical objects in the room and interact with them appropriately.

[Image: xef file]

Deep in the documentation on the HoloLens emulator is fascinating information about the ability to play back previously scanned rooms in the emulator. If you have a physical headset, it turns out you can also record surface reconstructions using the Windows Device Portal.

The serialization format, it turns out, is the same one that is used in Kinect Studio v2: *.xef .

Interestingly, Microsoft never released any documentation describing what the XEF format actually looks like internally. When I open up a saved XEF file in Notepad++, this is what I see:

[Image: XEF file opened in Notepad++]

Microsoft also never released a library to deserialize depth data from the xef format, which forced many people trying to make recordings to come up with their own, idiosyncratic formats for saving depth information.
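
To give a sense of what those home-grown formats tend to look like, here is a minimal sketch in Python of a hypothetical depth recording container. To be clear, this is not the XEF layout (which remains undocumented) – just the kind of timestamp-plus-raw-frames scheme people ended up inventing for themselves:

import struct
import time

# Hypothetical home-grown depth recording format (NOT the undocumented XEF
# layout): each frame is a small header followed by raw 16-bit depth values.
FRAME_HEADER = struct.Struct("<dHH")  # timestamp (double), width, height

def write_depth_frame(out_file, depth_values, width, height):
    # Append one frame: header, then width*height little-endian uint16 depths (mm).
    out_file.write(FRAME_HEADER.pack(time.time(), width, height))
    out_file.write(struct.pack("<%dH" % (width * height), *depth_values))

def read_depth_frames(path):
    # Yield (timestamp, width, height, depth_values) for every frame in the file.
    with open(path, "rb") as f:
        while True:
            header = f.read(FRAME_HEADER.size)
            if len(header) < FRAME_HEADER.size:
                break
            timestamp, width, height = FRAME_HEADER.unpack(header)
            depths = struct.unpack("<%dH" % (width * height), f.read(width * height * 2))
            yield timestamp, width, height, depths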

Hopefully, now that the same format is being used across devices, Microsoft will finally be able to release a library for general use – and if not that, then at least a spec describing how XEF files are structured.

Rethinking VR/AR Launch Dates

The HTC Vive, Oculus Rift and Microsoft HoloLens all opened for pre-orders in 2016 with plans to ship in early April (or late March in the case of the Oculus). All have run into fulfillment problems, creating general confusion for their most ardent fans.

I won’t try to go into all the details of what each company originally promised and then what each company has done to explain their delays. I honestly barely understand it. Oculus says there were component shortages and is contacting people through email to update them. Oculus also refunded some shipping costs for some purchasers as compensation. HTC had issues with their day one ordering process and is using its blog for updates. Microsoft hasn’t acknowledged a problem but is using its developer forum to clarify the shipping timeline.

Maybe it’s time to acknowledge that spinning up production for expensive devices in relatively small batches is really, really hard. Early promises from 2015 followed by CES in January 2016 and then GDC in March probably created an artificial timeline that was difficult to hit.

On top of this, internal corporate pressure has probably also driven each product group to hype their devices to the point that it is difficult to meet production goals. HTC probably has the most experience with international production lines for high-tech gear, and even they stumbled a bit.

Maybe it’s also time to stop blaming each of these companies as they reach out for the future. All that’s happened is that some early adopters aren’t getting to be as early as they want to be (including me, admittedly).

As William Gibson said, “The future is already here — it’s just not very evenly distributed.”

Sketching Holographic Experiences

[Sketch 1: augmented reality D&D]

These experience sketches are an initial concept exploration for a pen-and-paper role playing game like Dungeons & Dragons, augmented by mixed reality devices. The first inklings for this were worked out on the HoloLens forums, and I want to thank everyone who was kind enough to offer their creative suggestions there.

I’ve always felt that the very game mechanics that make D&D playable are also one of the major barriers to getting a game going. The D&D mechanics were eventually translated into a variety of video games that made progressing through an adventure much easier. Where players might spend half an hour or more working out all the math for a battle, a computer can do it in a fraction of the time and throw in nice particle effects to boot.
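
As a toy illustration of the kind of bookkeeping a computer can absorb, here is a small Python sketch that resolves a single attack with a simplified d20-versus-armor-class rule – the rule set and the numbers are generic stand-ins rather than any particular edition's exact mechanics:

import random

def roll(dice, sides, modifier=0):
    # Roll `dice` dice with `sides` sides each and add a flat modifier.
    return sum(random.randint(1, sides) for _ in range(dice)) + modifier

def resolve_attack(attack_bonus, armor_class, damage_dice, damage_sides, damage_bonus):
    # Toy rule: a d20 plus the attack bonus must meet the armor class to hit.
    attack_roll = roll(1, 20, attack_bonus)
    if attack_roll < armor_class:
        return {"hit": False, "attack_roll": attack_roll, "damage": 0}
    return {"hit": True, "attack_roll": attack_roll,
            "damage": roll(damage_dice, damage_sides, damage_bonus)}

# Example: +5 to hit against AC 15, dealing 1d8+3 on a hit.
print(resolve_attack(5, 15, 1, 8, 3))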

What gets lost in the process is the storytelling element as well as the social aspect of playing. So how do we reintroduce the dungeon master and the socialization elements of role playing without having to deal with all the bookkeeping?

[Sketch 2]

With augmented reality, we can do much of the math and bookkeeping in the background, allowing players to spend more time talking to each other, making bad puns and getting into their characters. Instead of physical playing pieces, I think we could use flat player bases imprinted with QR codes that identify the characters. 3D animated models can be virtually projected onto the bases. Players can then move their bases around the underlying (virtual) hex maps and the holograms will remain correctly oriented on their bases.
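
A rough Python sketch of how the base idea might work, with hypothetical QR payloads and model paths: decode the base's QR code to pick the character model, then have the hologram copy the base's position and heading so it stays correctly seated as the base slides around the hex map.

import math

# Hypothetical registry mapping a base's QR payload to a character model asset.
CHARACTER_MODELS = {
    "char:ranger-042": "models/ranger.glb",
    "char:wizard-007": "models/wizard.glb",
}

class PlayerBase:
    # A physical base identified by its QR code; the projected hologram
    # simply copies the base's pose every frame.
    def __init__(self, qr_payload, x, y, heading_degrees):
        self.model = CHARACTER_MODELS[qr_payload]
        self.x, self.y = x, y
        self.heading = math.radians(heading_degrees)

    def hologram_pose(self):
        # Position on the (virtual) hex map plus the facing of the base, so
        # sliding or rotating the base carries the character with it.
        return {"model": self.model, "position": (self.x, 0.0, self.y), "yaw": self.heading}

base = PlayerBase("char:ranger-042", x=0.30, y=0.45, heading_degrees=90)
print(base.hologram_pose())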

[Sketch 3]

All viewpoints are calibrated for the different players so everyone sees the same things happening – just from different POVs. The experience can also be enhanced with voice commands, so when your magic user says “magic missile,” everyone gets to see appropriate particle effects on the game table shooting from the magic user’s hands toward the target.

[Sketch 4]

I feel that the dice should be physical objects. The feel of the various dice and the sound they make are an essential component of pen and paper role playing. Rather than simulating the rolls, I want to use computer vision to read the outcomes, present digital visualizations of successful and unsuccessful rolls over the physical dice, and then perform the calculations automatically in the background based on the outcome.
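
As a very rough sketch of the computer vision piece, here is how a six-sided die might be read by counting pips with OpenCV's blob detector. Reading the numerals on a d20 would need OCR or a trained classifier, and the camera setup, filter thresholds and image name below are all assumptions on my part:

import cv2

def count_pips(die_image_bgr):
    # Rough d6 reader: count dark, roughly circular pips on a
    # top-down view of the die face.
    gray = cv2.cvtColor(die_image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 30
    params.filterByCircularity = True
    params.minCircularity = 0.7
    detector = cv2.SimpleBlobDetector_create(params)
    return len(detector.detect(gray))

frame = cv2.imread("d6_top_down.png")  # hypothetical captured frame
print("Rolled a", count_pips(frame))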

[Sketch 5]

The player character’s stats and relevant info should hover over him on the game table. As rolls are made, the health points and stats should update automatically.

[Sketch 6]

While some of the holograms on the table are shared between all players, some are only for the dungeon master. As players move from area to area, opening them up through a visual fog of war, the dungeon master will be able to see secret content like the location of traps and the stats for NPCs. It may also be cool to enable remote DMs who appear virtually to host a game. The thought here is that a good DM is hard to find and in high demand. It would be interesting to use AR technology to invite celebrity DMs or paid DMs to help get a regular game going.

[Sketch 7]

When I started planning out how a holographic D&D game would work, there was still some confusion over the visible range of holograms with HoloLens. I was concerned that digital D&D pieces would fade or blur at close ranges – but this turns out not to be true. The main concern seems to be that looking at a hologram less than a meter away for extended periods of time will trigger vergence-accommodation mismatch for some people. In a typical D&D game, however, this shouldn’t be a problem since players can lean forward to move their pieces and then recline again to talk through the game.

[Sketch 8]

AR can also be used to help with calorie control for that other important aspect of D&D – snack foods and sodas.

Please add your suggestions, criticisms and observations in the comments section. The next step for me is creating some gameplay prototypes in Unity. I’ll post these as they become ready.

And just in case it’s been a long time and you don’t remember what’s so fun about role playing games, here’s an episode of the web series TableTop with Wil Wheaton, Chris Hardwick and Sam Witwer playing the Dragon Age role playing game.

Understanding HoloLens Spatial Mapping and Hologram Ranges

There seems to be some confusion over what the HoloLens’s depth sensor does, on the one hand, and what the HoloLens’s waveguides do, on the other. Ultimately the HoloLens HPU fuses these two streams into a single image, but understanding the component pieces separately is essential for creating good holographic UX.

[Image: spatial mapping mesh]

From what I can gather (though I could be wrong), HoloLens uses a single depth camera, similar to the Kinect’s, to perform spatial mapping of the surfaces around a HoloLens user. If you are coming from Kinect development, this is the same thing as surface reconstruction with that device. Different surfaces in the room are scanned by the depth camera. Multiple passes of the scan are stitched together and merged over time (even as you walk through the room) to create a 3D mesh of the many planes in the room.
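
The first step in this kind of surface reconstruction is back-projecting each depth pixel into a 3D point using the depth camera's intrinsics; successive point clouds are then aligned and fused into the mesh. Here is a minimal Python sketch of just that first step, with made-up intrinsic values roughly in line with a Kinect-class sensor:

import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    # Back-project a depth image (in meters) into camera-space 3D points
    # using a pinhole model; fx, fy, cx, cy are the depth camera intrinsics.
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)
    return points[z > 0]  # drop pixels with no depth reading

# Toy example: a 512x424 frame (Kinect v2-sized) filled with 2-meter readings,
# using hypothetical intrinsics.
depth = np.full((424, 512), 2.0)
print(depth_to_points(depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0).shape)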

[Image: things]

These surfaces can then be used to provide collision detection information for 3D models (holograms) in your HoloLens application. The range of the spatial mapping depth camera is 0.85 meters to 3.1 meters. This means that if a user wants to include a surface that is closer than 0.85 m in their HoloLens experience, they will need to lean back.

The functioning of the depth camera shouldn’t be confused with the functioning of the HoloLens’s four “environment aware” cameras. These cameras are used to help nail down the orientation of the HoloLens headset in what is known as inside-out positional tracking. You can read more about that in How HoloLens Sensors Work. It is probably the case that the depth camera is used for finger tracking as well as spatial mapping, while the four environment aware cameras are devoted to positional tracking.

The depth camera spatial mapping in effect creates a background context for the virtual objects created by your application. These holograms are the foreground elements.

Another way to make the distinction – based on technical functionality rather than on the user experience – is to think of the depth camera’s surface reconstruction data as input, and holograms as output. The camera is a highly evolved version of the keyboard, while the waveguide displays are the modern descendants of CRT monitors.

It has been misreported that the minimum distance for virtual objects in HoloLens is also 0.85 meters. This is not so.

The minimum range for hologram placement is more like 10 centimeters. The optimal range for hologram placement, however, is 1.25 m to 5 m. In UWP, the ideal range for placing a 2D app in 3D holographic space is 2 m.

Microsoft also discusses another range they call the comfort zone for holograms. This is the range where vergence-accommodation mismatch doesn’t occur (one of the causes of VR sickness for some people). The comfort zone extends from 1 m to infinity.

Range Name          | Minimum Distance  | Maximum Distance
Depth Camera        | 0.85 m (2.8 ft)   | 3.1 m (10 ft)
Hologram Placement  | 0.1 m (4 inches)  | infinity
Optimal Zone        | 1.25 m (4 ft)     | 5 m (16 ft)
Comfort Zone        | 1.0 m (3 ft)      | infinity
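
Expressed as a simple check an application might run before placing a hologram (the thresholds come straight from the table above; the function itself is just my own illustration in Python):

def placement_feedback(distance_m):
    # Classify a proposed hologram distance against the ranges in the table above.
    if distance_m < 0.1:
        return "too close: inside the ~10 cm minimum placement range"
    if distance_m < 1.0:
        return "placeable, but inside the comfort minimum (risk of vergence-accommodation mismatch)"
    if 1.25 <= distance_m <= 5.0:
        return "inside the optimal zone"
    return "comfortable, but outside the optimal 1.25-5 m zone"

for d in (0.05, 0.5, 2.0, 8.0):
    print(d, "->", placement_feedback(d))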

The most interesting zone, right now, is of course that short range inside of 1 meter. That 1 meter minimum comfort distance basically prevents any direct interaction between the user’s arms and holograms. The current documentation even says:

Assume that a user will not be able to touch your holograms and consider ways of gracefully clipping or fading out your content when it gets too close so as not to jar the user into an unexpected experience.

When a user sees a hologram, they will naturally want to get a closer look and inspect the details. When a human being looks at something up close, he typically wants to reach out and touch it.

[Image: Sistine Chapel]

Tactile reassurance is hardwired into our brains and is a major tool human beings use to interact with the world. Punting on this interaction, as the HoloLens documentation does, may be a reasonable way to sidestep this basic psychological need in the early days of designing for augmented reality.

We can pretend for now that AR interactions are going to be analogs to mouse click (air-tap) and touch-less display interactions. Eventually, though, that 1 meter range will become a major UX problem for everyone.

[Updated 4/6 after realizing I probably have the functioning of the environment aware cameras and the depth camera reversed.]

[Updated 4/21 – nope. Had it right the first time.]

HoloLens Occlusion vs Field of View

[Image: Prometheus movie]

[Note: this post is entirely my own opinion and purely conjectural.]

Best current guesses are that the HoloLens field of view is somewhere between 32 degrees and 40 degrees diagonal. Is this a problem?

We’d definitely all like to be able to work with a larger field of view. That’s how we’ve come to imagine augmented reality working. It’s how we’ve been told it should work, from Vernor Vinge’s Rainbows End to Ridley Scott’s Prometheus to the Iron Man trilogy – in fact going back as far as Star Wars in the late ’70s. We want and expect a 160-180 degree FOV.

So is the HoloLens’ field of view (FOV) a problem? Yes it is. But keep in mind that the current FOV is an artifact of the waveguide technology being used.

What’s often lost in the discussions about the HoloLens field of view – in fact the question never asked by the hundreds of online journalists who have covered it – is what sort of trade-off was made so that we have the current FOV.

A common internet rumor – likely inspired by a video recorded by tech evangelist Bruce Harris a few months ago – is that it has to do with production cost and production consistency. The argument is borrowed from chip manufacturing and, while there might be some truth in it, it is mostly a red herring. An amazingly comprehensive blog post by Oliver Kreylos in August of last year went over the evidence as well as the related patents and argued persuasively that while more expensive waveguide material might improve the FOV marginally, the cost would be prohibitive and the explanation ultimately doesn’t hold up. At the end of the day, the FOV of the HoloLens developer unit is a physical limitation, not a manufacturing limitation or a power limitation.

[Image: Haunted Mansion]

But don’t other AR headset manufacturers promise a much larger FOV? Yes. The Meta 2 (shown below) has a 90 degree field of view. The way the technology works, however, involves two LED screens that are viewed through plastic positioned at 45 degrees to the screens (technically known as a beam splitter, informally known as a piece of glass) that reflects the image into the user’s eyes at approximately half the original brightness while also letting in the real world in front of the user (though half of that light is lost as well). This is basically the same technique used to create ghostly images in the Haunted Mansion at Disneyland.

[Image: brain]

The downside of this increased FOV is that you are losing a lot of brightness through the beam splitter. You are also losing light over the distance the light has to travel through the plastic to reach your eyes. The result is a see-through “hologram”.
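
Some rough arithmetic shows why. Assuming an idealized 50/50 beam splitter and ignoring absorption in the plastic (the brightness numbers below are placeholders), both the virtual image and the real world arrive at the eye at roughly half strength, so the hologram is never much brighter than what is behind it:

# Idealized 50/50 beam splitter, ignoring absorption and coatings -- real
# optics differ, but the shape of the trade-off is the same.
panel_brightness = 400.0   # hypothetical brightness of the LED panels
world_brightness = 250.0   # hypothetical brightness of the room behind the glass
hologram_at_eye = 0.5 * panel_brightness  # half the image is reflected toward the eye
world_at_eye = 0.5 * world_brightness     # half the room light is transmitted through
print(hologram_at_eye, world_at_eye)
# The virtual image ends up less than twice as bright as the background,
# which is why it reads as a translucent "ghost" rather than a solid object.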

[Image: Iron Man AR]

But is this what we want? See-through holograms? The visual design team for Iron Man decided that this is indeed what they wanted for their movies. The translucent holograms provide a cool ghostly effect, even in a dark room.

[Image: Princess Leia hologram]

The Princess Leia hologram from the original Star Wars, on the other hand, is mostly opaque. That visual design team went in a different direction. Why?

[Image: Princess Leia hologram 2]

My best guess is that it has to do with the use of color. While the Iron Man hologram has a very limited color palette, the Princess Leia hologram uses a broad range of facial tones to capture her expression – and also so that, dramatically, Luke Skywalker can remark on how beautiful she is (which obviously gets messed up by Return of the Jedi). Making her transparent would simply wash out the colors and destroy much of the emotional content of the scene.

[Image: Star Wars chess scene]

The idea that opacity is a prerequisite for color holograms is confirmed by the Star Wars chess scene on the Millennium Falcon. Again, there is just enough transparency to indicate that the chess pieces are holograms and not real objects (digital rather than physical).

[Image: dude]

So what kind of holograms does the HoloLens provide, transparent or near-opaque? This is something that is hard to describe unless you actually see it for yourself, but the HoloLens “holograms” will occlude physical objects when they are placed in front of them. I’ve had the opportunity to experience this several times over the last year. This is possible because these digital images use a very large color palette and, more importantly, are extremely intense. In fact, because the HoloLens display technology is currently additive, this occlusion effect actually works best with bright colors. As areas of the screen become darker, they actually appear more transparent.
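
A toy model of an additive display makes the effect easy to see: the eye receives roughly the sum of the light from the world and the light from the display, so a bright hologram pixel can swamp the background while a dark pixel adds almost nothing and reads as transparent. The RGB values in this Python sketch are arbitrary:

import numpy as np

def additive_composite(world_rgb, hologram_rgb):
    # Approximate an additive see-through display: light from the world and
    # light from the display simply sum at the eye (clipped for 8-bit display).
    return np.clip(world_rgb.astype(int) + hologram_rgb.astype(int), 0, 255)

world = np.array([120, 110, 100])         # a beige wall behind the hologram
bright_pixel = np.array([230, 230, 240])  # intense hologram pixel
dark_pixel = np.array([10, 10, 15])       # dark hologram pixel
print(additive_composite(world, bright_pixel))  # saturates: reads as a solid, occluding color
print(additive_composite(world, dark_pixel))    # barely differs from the wall: reads as transparent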

Bigger field of view = more transparent, duller holograms. Smaller field of view = more opaque, brighter holograms.

I believe Microsoft made the bet that, in order to start designing the AR experiences of the future, we actually want to work with colorful, opaque holograms. The trade-off the technology seems to make in order to achieve this is a more limited field of view in the HoloLens development kits.

At the end of the day, we really want both, though. Fortunately we are currently only working with the Development Kit and not with a consumer device. This is the unit developers and designers will use to experiment and discover what we can do with HoloLens. With all the new attention and money being spent on waveguide displays, we can optimistically expect to see AR headsets with much larger fields of view in the future. Ideally, they’ll also keep the high light intensity and broad color palette that we are coming to expect from the current generation of HoloLens technology.

HoloLens Hardware Specs

[Image: Visual Studio]

Microsoft is releasing an avalanche of information about HoloLens this week. Within that heap of gold is, finally, clearer information on the actual hardware in the HoloLens headset.

I’ve updated my earlier post on How HoloLens Sensors Work to reflect the updated spec list. Here’s what I got wrong:

1. Definitely no internal eye tracking camera. I originally thought this is what the “gaze” gesture was. Then I thought it might be used for calibration of interpupillary distance. I was wrong on both counts.

2. There aren’t four depth sensors. Only one. I had originally thought these cameras would be used for spatial mapping. Instead, just the one depth camera is, and it maps a 75 degree cone out in front of the headset, with a range of 0.8 m to 3.1 m.

3. The four cameras I saw are probably just grayscale cameras – and it’s these cameras, together with the IMU and some clever algorithms, that are used to do inside-out positional tracking.

Here are the final sensor specs:

  • 1 IMU
  • 4 environment understanding cameras
  • 1 depth camera
  • 1 2MP photo / HD video camera
  • Mixed reality capture
  • 4 microphones
  • 1 ambient light sensor

The mixed reality capture is basically a stream that combines digital objects with the video stream coming through the HD video camera. It is different from the on-stage rigs we’ve seen, which can calculate the mixed reality scene from multiple points of view. The mixed reality capture is from the user’s point of view only. It can be used for streaming to additional devices like your phone or TV.
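
Conceptually, the compositing step is straightforward: render the holograms into an RGBA layer from the camera's point of view and blend it over the video frame. Here is a minimal sketch of that blend in Python – not Microsoft's implementation, just the general idea, with placeholder frame sizes:

import numpy as np

def compose_mixed_reality(camera_frame, hologram_rgba):
    # Alpha-blend a rendered hologram layer (HxWx4, straight alpha) over the
    # photo/video camera frame (HxWx3), both uint8.
    rgb = hologram_rgba[..., :3].astype(float)
    alpha = hologram_rgba[..., 3:4].astype(float) / 255.0
    blended = alpha * rgb + (1.0 - alpha) * camera_frame.astype(float)
    return blended.astype(np.uint8)

camera = np.zeros((720, 1280, 3), dtype=np.uint8)     # placeholder HD video frame
holograms = np.zeros((720, 1280, 4), dtype=np.uint8)  # placeholder rendered hologram layer
print(compose_mixed_reality(camera, holograms).shape)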

Here are the final display specs:

  • See-through holographic lenses (waveguides)
  • 2 HD 16:9 light engines
  • Automatic pupillary distance calibration
  • Holographic Resolution: 2.3M total light points
  • Holographic Density: >2.5k radiants (light points per radian)

I’ll try to explain “light points” in a later post – if I can ever figure it out.
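
In the meantime, some back-of-the-envelope arithmetic is possible if we assume that “light points” behave like pixels, that the 2.3M total is split evenly across the two eyes, and that the display has a 16:9 aspect ratio – all guesses on my part:

import math

total_points = 2.3e6        # "2.3M total light points"
points_per_radian = 2.5e3   # ">2.5k light points per radian"
per_eye = total_points / 2                        # assume the total covers both eyes
solid_angle_sr = per_eye / points_per_radian**2   # points / (points per radian)^2
area_deg2 = solid_angle_sr * (180 / math.pi)**2   # square degrees per eye
height = math.sqrt(area_deg2 * 9 / 16)            # assume a 16:9 aspect ratio
width = height * 16 / 9
print("~%.0f x %.0f degrees, ~%.0f diagonal" % (width, height, math.hypot(width, height)))
# Comes out in the mid-to-high 30s diagonally, which at least is consistent
# with the 32-40 degree estimates floating around.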

The Timelessness of Emerging Technology

In the previous post I wrote about “experience.” Here I’d like to talk about the “emerging.”

[Image: Flash Gordon]

There are two quotations everyone who loves technology or suffers under the burden of technology should be familiar with. The first is Arthur C. Clarke’s quote about technology and magic. The other is William Gibson’s quote about the dissemination of technology. It is the latter quote I am initially concerned with here.

The future has arrived – it’s just not evenly distributed yet.

The statement may be intended to describe the sort of life I live. I try to keep on top of new technologies and figure out what will be hitting markets in a year, in two years, in five years. I do this out of the sheer joy of it but also – perhaps to justify this occupation – in order to adjust my career and my skills to get ready for these shifts. As I write this, I have about fifteen depth sensors of different makes, including five versions of the Kinect, sitting in boxes under a table to my left. I have three webcams pointing at me and an IR reader fifteen inches from my face attached to an Oculus Rift. I’m also surrounded by mysterious wires, soldering equipment and various IoT gadgets that constantly get misplaced because they are so small. Finally, I have about five moleskine notebooks and an indefinite number of sticky pads on which I jot down ideas and appointments. As emerging technology is disseminated from wherever it is this stuff comes from – probably somewhere in Shenzhen, China – I make a point of getting it a little before other people do.

[Image: Astra-Gnome]

But the Gibson quote could just as easily describe the way technology first catches on in certain communities – hipsters in New York and San Francisco – then goes mainstream in the first world and then finally hits the after markets globally. Emerging markets are often the last to receive emerging technologies, interestingly. When they do, however, they are transformative and spread rapidly.

[Image: robot]

Or it could simply describe the anxiety gadget lovers face when they think someone else has gotten the newest device before they have, the fear that the future has arrived for others but not for them. In this way, social media has become a boon to the distressed broadcast industry by providing a reason to see a new TV episode live before it is ruined by your Facebook friends. It is about making the future a personal experience, rather than a communal one, for a moment so that your opinions and observations and humorous quips about it — for that oh-so-brief period of time — are truly yours and original. No one likes to discover that they are the fifth person responding to a Facebook post, the tenth on instagram, or the 100th on twitter to make the same droll comment.

[Image: shape]

The quote captures the fact that the future is unevenly distributed in space. What it leaves out is that the future is also unevenly distributed in time. The future I see from 2016 is different from the one I imagined in 2001, and different from the horizon my parents saw in 1960, which in turn is different from the one H. G. Wells gazed at in 1933. Our experience of the future is always tuned to our expectations at any point in time.

[Image: Let That Be Your Last Battlefield]

This is a trope in science fiction, and the original Star Trek pretty much perfected it. The future was less about futurism in that show than about projecting the issues of the ’60s into the future in order to instruct a ’60s audience about diversity, equality, and the anti-war movement. In that show the future horizon was a mirror that tried to show us what we really looked like through exaggerations, inversions and a bit of face paint.

[Image: kitchen]

I’m grateful to Josh Rothman for introducing me to the term retrofuturism in The New Yorker, which perfectly describes this phenomenon. Retrofuturism – whose exemplars include Star Trek as well as the steampunk movement – is the quasi-historical examination of the future through the past.

[Image: flight]

The exploration of retrofuturism highlights the observation that the future is always relative to the present — not just in the sense that the tortoise will always eventually catch up to the hare, but in the sense that the future is always shaped by our present experiences, aspirations and desires. There are many predictions about how the world will be in ten or twenty years that do not fit into the experience of “the future.” Global warming and scarce resources, for instance, aren’t a part of this sense. They are simply things that will happen. Incrementally faster computers and cheaper electronics, also, are not what we think of as the future. They are just change.

[Image: Modern Mechanix]

What counts as the future must involve a big and unexpected change (though the fact that we have a discipline called futurism and make a sport of future predictions crimps this unexpectedness) rather than a gradual advancement and it must improve our lives (or at least appear to; dystopic perspectives and knee-jerk neo-Ludditism crimps this somewhat, too).

[Image: balloons]

Most of all, though, the future must include a sense that we are escaping the present. Hannah Arendt captured this escapist element in her 1958 book The Human Condition in which she remarks on a common perspective regarding the Sputnik space launch a year previous as the first “step toward escape from men’s imprisonment to the earth.” The human condition is to be earth-bound and time-bound. Our aspiration is to be liberated from this and we pin our hopes on technology. Arendt goes on to say:

… science has realized and affirmed what men anticipated in dreams that were neither wild nor idle … buried in the highly non-respectable literature of science fiction (to which, unfortunately, nobody yet has paid the attention it deserves as a vehicle of mass sentiments and mass desires).

As a desire to escape from the present, our experience of the future is the natural correlate to a certain form of nostalgia that idealizes the past. This conservative nostalgia is not simply a love of tradition but a totemic belief that the customs of the past, followed ritualistically, will help us to escape the things that frighten us or make us feel trapped in the present. A time machine once entered, after all, goes in both directions.

[Image: flying car]

This sense of the future, then, is relative to the present not just in time and space but also emotionally. It is tied to a desire for freedom and in this way even has a political dimension – if only as a tonic to despair and cynicism concerning direct political action. Our love-hate relationship to modern consumerism means we feel enslaved to consumerism yet also are waiting for it to save us with gadgets and toys in what the philosopher Peter Sloterdijk calls a state of “enlightened false consciousness”.

[Image: living room]

And so we arrive at the second most important quote for people coping with emerging technologies, also known as Clarke’s third law:

Any sufficiently advanced technology is indistinguishable from magic.

What counts as emerging changes over time. It changes based on your location. It changes based on our hopes and our fears. We always recognize it, though, because it feels like magic. It has the power to change our perspective and see a bigger world where we are not trapped in the way things are but instead feel like anything can happen and everything will work out.

[Image: city]

Everything will work out not in the end, which is too far away, but just a little bit more in the future, just around the corner. I prepare for it by surrounding myself with gadgets and screens and diodes and notebooks, like a survivalist waiting for civilization not to crumble, but to emerge.

Kinect developer preview for Windows 10

 

[Image: unicorn]

With kind permission from the Product Group, I am reprinting this from an email that went out to the Kinect developers mailing list.

Please also check out Mike Taulty’s blog for a walkthrough of what’s provided. The short version, though, is that you can now access the perception APIs to work with Kinect v2 in a UWP app, as well as use your Kinect v2 for face recognition as a replacement for your password. Please bear in mind that this is a public preview rather than a final release.

Snip >>

We are happy to announce our public preview of Kinect support for Windows 10.

 

This preview adds support for using Kinect with the built-in Windows.Devices.Perception APIs, and it also provides beta support for using Kinect with Windows Hello.

 

Getting started is easy. First, make sure you already have a working Kinect for Windows V2 attached to your Windows 10 PC.  The Kinect Configuration Verifier can make sure everything is functioning okay. Also, make sure your Kinect has a good view of your face –  we recommend centering it as close to the top or bottom of your monitor as possible, and at least 0.5 meters from your face.

 

Then follow these steps to install the Kinect developer preview for Windows 10:

 

1. The first step is to opt-in to driver flighting. You can follow the instructions here to set up your registry by hand, or you can use the following text to create a .reg file to right-click and import the settings:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\DriverFlighting\Partner]

"TargetRing"="Drivers"

 

2. Next, you can use Device Manager to update to the preview version of the Kinect driver and runtime:

 

   1. Open Device Manager (Windows key + x, then m).
   2. Expand “Kinect sensor devices”.
   3. Right-click on “WDF KinectSensor Interface 0”.
   4. Click “Update Driver Software…”
   5. Click “Search automatically for updated driver software”.
   6. Allow it to download and install the new driver.
   7. Reboot.

 

Once you have the preview version of the Kinect for Windows V2 driver (version 2.1.1511.11000 or higher), you can start developing sensor apps for Windows 10 using Visual Studio 2015 with Windows developer tools. You can also set up Windows Hello to log in using your Kinect.

 

1. Go to “Settings->Accounts->Sign-in options”.

2. Add a PIN if you haven’t already.

3. In the Windows Hello section, click “Set up” and follow the instructions to enable Windows Hello!

 

That’s it! You can send us your feedback at k4w@microsoft.com.

Hitchhiking the Backroads to Augmented Reality

To date, Microsoft has been resistant to sharing information about the HoloLens technology. Instead, they have relied on shock-and-awe demos to impress people with the overall experience rather than getting bogged down in the nitty-gritty of the software and hardware engineering. Even something as simple as the field of view is never described in mundane numbers but rather in circumlocutions about TV screens held at X distance from the viewer. It definitely builds up mystery around the product.

Given the lack of concrete information, lots of people have attempted to fill in the gaps, with varying degrees of success, which in its own way makes it difficult to navigate the technological true-true. In an effort to simplify the research one typically has to do on one’s own in order to understand HoloLens and AR, I’ve made a sort of map for those interested in making their way. Here are some of the best resources I’ve found.

1. You should start with the Oculus blog, which is obviously about the Oculus and not about HoloLens. Nevertheless, the core technology that makes the Oculus Rift work is also in the HoloLens in some form. Moreover, the Oculus blog is a wonderful example of sharing and successfully explaining complicated concepts to the layman. Master these posts about how the Rift works and you are halfway to understanding how HoloLens works:

2. Next, you should really read Oliver Kreylos’s (Doc OK) brilliant posts about the HoloLens field of view and waveguide display technology. Many disagreements around HoloLens would evaporate if people would simply invest half an hour into reading OK’s insights:

3. If you’ve gone through these, then you are ready for Dr. Michael J. Gourlay’s YouTube discussion of surface reconstruction, occlusion, tracking and mapping. Sadly the audio drops out at key moments and the video drops out for the entire Q&A, but there’s lots of gold for everyone in this mine. Also check out his audio interview at Georgia Tech:

4. There have been lots of first-impression blog posts concerning the HoloLens, but Jasper Brekelmans provides far-and-away the best of these by following a clear just-the-facts-ma’am approach:

5. HoloLens isn’t only about learning new technology but also discovering a new design language. Mike Alger’s video provides a great introduction into the problems as well as some solutions for AR/VR interface and usability design:

6. Oculus, Leap Motion and others who have been designing VR experiences provide additional useful tips about what they have discovered along the way in articles like the now famous “Swayze Effect” (yes, that Swayze):

7. Finally, here are some video parodies and inspirational videos of VR and AR from the TV show Community and others:

I know I’ve left a lot of good material out, but these have been some of the highlights for me over the past year while hitchhiking on the backroads leading to Augmented Reality. Drop them in your mental knapsack, stick out your thumb and wait for the future to pick you up.