In the past year I’ve had the opportunity to participate in two visionOS hackathons and two XR conferences – indications that mixed reality is still going strong in 2024.
VisionDevCamp happened at the end of March in Santa Clara, CA, and was my favorite for a variety of reasons, including the chance to reconnect with my friend Raven Zachery, who ran the event with Dom Sagolla. I also got to meet a ton of Unity and iOS developers from around the country who are now the foundation of the visionOS community.
Coming out of that, Raven was asked to organize a workshop around visionOS for AWE in mid-June in Long Beach, CA. I had lots of plans about various rabbit holes I could go down about visionOS development but Raven encouraged me to keep things high level and strategic and I’m glad he did.
In September I had my second hackathon of the year at Vision Hack, an online event around visionOS organized by Matt Hoerl, Cosmo Scharf and Brian Boyd, Jr. I basically spent three days online mentoring and helping anyone I could. That was a blast, and I really loved all the skills and enthusiasm everyone brought to the event.
Then this past week I participated in the Augmented Enterprise Summit in Dallas, TX. I was on a panel there with Mitch Harvey, Andy Hung, Lorna Jean Marcuzzo and Mark Sage.
In that time, I also did a short course for XR Bootcamp to help Unity developers become familiar with Xcode and visionOS and participated in putting together a couple of proposals for the Meta Quest Lifestyle accelerator program.
It’s been a busier year than I really expected. I’m very happy to be closing in on almost a decade of AR headset development.
I’ve been developing for augmented reality head-mounted displays for about eight years. I first tried the original HoloLens in 2015. Then I got to purchase a developer unit in 2016 and started doing contract work with it. Later I was in the early pre-release dev program for Magic Leap’s original device, which led to more work on the HoloLens 2, Magic Leap 2, and Meta Quest Pro. I continue to work in AR HMDs today.
The original community around the HoloLens was amazing. We were all competing for the same work, but at the same time, we only had each other to turn to when we needed to talk to someone who understood what we were going through. So we were all sort of frenemies, except that because Microsoft was notoriously tight-lipped with their information about the device, we helped each other out on difficult programming tricks and tricky AR UI concepts – and this made us friends as well.
In those early days, we all thought AR, and our millions in riches (oh what a greedy lot we were), were just around the corner. But we never quite managed to turn that corner. Instead we had to begin devising theories around what that corner was going to look like, what the signs would be as we approached it, and what would happen after we rounded it. Basically, we had to become more stringent in our analyses.
Out of this, one big idea that came to the fore was “AR Ubiquity”. This comes out of the observation that monumental technological change happens slowly and incrementally, until it suddenly happens all at once. So at some point, we believe, everyone will just be wearing AR headsets instead of carrying smartphones. (This is also known, in some quarters, as the “Inflection Point”.)
Planning, consequently, should be based less on how we get to that point, or even when it will happen; and more about how to prepare for “AR Ubiquity” and what we will do afterwards. So AR Ubiquity, in this planning model, can come in 3 years, or in 7 years, or maybe for the most skeptical of us in 20 years. It doesn’t really matter because the important work is not in divining when it will happen (or even who will make it happen) but instead in 1) what it will look like and 2) what we can do — as developers, as startups, as corporations — once it arrives.
Once we arrive at a discourse about the implications of “AR Ubiquity” rather than trying to forecast when it will happen, we are engaging with a grand historical narrative about the transformative power of tech – which is a happy place for me because I used to do research on philosophical meta-history in grad school – though admittedly I wasn’t very good at it.
“AR Ubiquity”, according to the tenets of meta-history, can at the same time be both a theory about how the world works and a motif in a story about how we fit into the technological world. Both ways of looking at it can provide valuable insights. As a theory, we want to know how we can verify (or falsify) it. As a story element, we want to know what it means. In order to discover what it means, in turn, we can excavate it for other mythical elements it resembles and draws upon. (Meta-history, it should be acknowledged, can lead to bad ideas when done poorly and probably worse ideas when it is done well. So please take this with a grain of salt – a phrase which itself has an interesting history, it is worth noting.)
I can recall three variations on the theme of disruptive (or revolutionary) historical change. There’s the narrative of the apocalyptic event that you only notice once it has already happened. There’s the narrative of the prophesied event that never actually happens but is always about to. And then there’s the heralded event, which has two beats: one to announce that it is about to happen, and another when it does happen. We long thought AR would follow model A; it currently looks like it is following model B; and I hope it will turn out that we are living through storyline C. Let’s unpack this a bit.
Model A
Apocalyptic histories, as told in zombie movies and TV shows, generally have unknown origins. The hero wakes up after the fateful event has already happened, often in a hospital room, and over the course of the narrative, she may or may not discover whether it was caused by a sick monkey that escaped a virology lab, or by climate change, or by aliens. There’s also the version of apocalyptic history that circulates in Evangelical Christian eschatology known as The Rapture. In the Book of Revelation (which New Yorker writer Adam Gopnik calls the most cinematic, Michael Bay-ready book of the Bible), St. John of Patmos has a vision in which 144,000 faithful are taken into heaven. In popular media, people wake up to find that millions of the virtuous elect have suddenly disappeared while they have been left behind to try to pick up the pieces and figure out what to do in a changed world.
In the less dramatic intellectual upheavals described in Thomas Kuhn’s The Structure of Scientific Revolutions, you can start your scientific career believing that an element known as phlogiston is released during combustion, then end it believing that phlogiston doesn’t exist and instead oxygen is added to a combusted material when it is burned (it is oxidized). Or you might start believing that the sun revolves around the earth and end up a few decades later laughing at such beliefs, to the point that it is hard to understand how anyone ever believed such outlandish things in the first place.
It’s a bit like the way we try to remember pulling out paper maps to drive somewhere new in our car, or shoving cassette tapes into our Walkmans, or having to type on physical keys on our phones – or even recalling phones that were used just for phone calls. It seems like a different age. And maybe AR glasses will be the same way. One day it seems fantastical and the next we’ll have difficulty remembering how we got things done with those quaint “smart” phones before we got our slick augmented reality glasses.
Model B
The history of waiting might best be captured by the term Millennialism, which describes Jewish and Christian beliefs in a coming Messiah and an accompanying thousand-year reign. The study of millennialist beliefs covers the social changes that occur in anticipation of a millennialist event, the recalculation of calendars that occurs when an anticipated date has passed, and finally the slow realization that nothing is going to happen after all.
But there are non-theistic analogs to Millennialism that share some common traits, such as the cargo cults of Melanesia or later UFO cults like the Heaven’s Gate movement in the ’90s. Marxism could also be described as a sort of millenarian cult that promised a Paradise its adherents came to learn would never arrive. One wonders at what point, in each of these belief systems, people first began to lose faith and then decided to simply play along while lacking actual conviction. The analogy can be stretched to tulip mania, NFTs, bitcoin, and other bubble economies in which conviction eventually becomes less important than the realization that everyone else is equally cynical. In the end, it is cynicism rather than faith that sustains economic bubbles and millenarian belief systems.
I don’t think belief in AR Ubiquity is a millenarian cult, yet. It certainly hasn’t reached the stage of widespread cynicism, though it has been in a constant hype cycle over the past decade as new device announcements serve to refresh excitement about the technology. But even this excitement is making way, in a healthy manner, for a dose of skepticism over the latest announcements from Meta and Apple. There’s a hope-for-the-best-but-expect-the-worst attitude in the air that I find refreshing, even if I don’t subscribe to it myself.
Model C
The last paradigm for disruptive history comes in the form of a herald and the thing he is the herald for. St. John the Baptist is the herald of the Messiah, Jesus Christ, for example. And Silver Surfer is the herald for Galactus. One is a forerunner for good tidings while the other is a harbinger of doom.
The forerunner isn’t necessary for the revolutionary event itself. That will happen in any case. The forerunner is there to let us know that something is coming and to point out where we should be looking for it. And there is just something very human about wanting to be aware of something before it happens so we can more fully savor its arrival.
Model C is the scenario I find most likely and how I imagine AR Ubiquity actually happening.
First we’ll have access to a device that demonstrates an actually usable AR headset with actually useful features. This will be the “it’s for real” moment that dispels the millenarian anxiety that we’re all being taken for a ride.
The “it’s real” moment will then set a bar for hardware manufacturers to work against. The forerunner device becomes the target all AR HMD companies strive to match, once someone has shown them what works, and within a few years we will have the actual arrival of AR Ubiquity.
At this time, reviews of the Apple Vision Pro and the Meta Quest 3 suggest that either could be this harbinger headset. I have my doubts about the Meta Quest 3 because I’m not sure how much better it can be than the MQ2 and the Meta Quest Pro, especially since it has removed eye tracking, which was a key feature of MQP and made the hand tracking more useful.
The AVP, on the other hand, has had such spectacular reviews that one begins to wonder if it isn’t too good to be true.
But if the reviews can be taken at face value, then AR Ubiquity, or at least a herald that shows us it is possible, might be closer than we think.
I’m just proposing a small tweak to the standard model of how augmented reality headsets will replace smartphones. We’ve been assuming that the first device that convinces consumers to purchase AR headsets will also immediately set off this transition from one device category to the other. But maybe this is going to occur in two steps. First a headset will appear that validates the theory that headsets can replace handsets. Then a second device will rotate the gears of history and lead us into this highly anticipated new technological age.
My colleague, Astrini Sie, and I delivered a talk called Porting HoloLens Apps to Other Platforms at AWE 2023. Astrini is an AI researcher, so we threaded AI and AR together to see what AI can teach AR.
“Although Microsoft has substantially withdrawn from its Mixed Reality and Metaverse ambitions, this left behind a sizable catalog of community built enterprise apps and games, as well as a toolset, the MRTK, on which they were developed. In this talk, we will walk you through the steps required to port a HoloLens app built on one of the MRTK versions to other platforms such as the Magic Leap 2 and the Meta Quest Pro. We hope to demonstrate that, due to clever engineering, whole ecosystems can be moved from one platform to other newer platforms, where they can continue to evolve and thrive.”
One of the fights that I thought we had put behind us is over the question “which interface is better?” For instance, this question was frequently brought up in comparisons of the mouse to the keyboard, its putative precursor. The same disputes came along again with the rise of natural user interfaces (NUI), when people began to ask if touch would put the mouse out of business. The answer has always been no. Instead, we use all of these input modes side by side.
As Bill Buxton famously said, every technology is the best at something and the worst at something else. We use the interface best adapted to the goal we have in mind. In the case of data entry, the keyboard has always been the best tool. For password entry, on the other hand, while we have many options, including face and speech recognition, it is remarkable how often we turn to the standard keyboard or keypad.
Yet I’ve found myself sucked into arguments about which is the best interaction model, the HoloLens v1’s simple gestures, the Magic Leap One’s magnetic 6DOF controller, or the HoloLens v2’s direct manipulation (albeit w/o haptics) with hand tracking.
Ideally we would use them all. A controller can’t be beat for precision control. Direct hand manipulation is intuitive and fun. To each of these I can add a Bluetooth Xbox controller for additional freedom. And the best replacement for a keyboard turns out to be a keyboard (this is known as the QWERTY universal constant).
It was over two years ago at the Magic Leap conference that James Powderly, a spatial computing UX guru, set us on the direction of figuring out ways to use multiple input modalities at the same time. Instead of thinking of the XOR scenario (this or that but not both), we started considering the AND scenario for inputs. We had a project at the time – VIM, an architectural visualization and data reporting tool for spatial computing – to try it out with. Our main rule in doing this was that it couldn’t be forced. We wanted to find a natural way to do multi-modal input that made sense and hopefully would also be intuitive.
We found a good opportunity as we attempted to refine the ability to move building models around on a table-top. This is a fairly universal UX issue in spatial computing, which made it even more fascinating to us. There is usually a combination of transformations that can be performed on a 3D object at the same time for ease of interaction: translation (moving from position x1 to position x2), scaling the object, and rotating it. A common solution is to make each of these a different interaction mode, triggered by clicking a virtual button or something.
But we went a different way. As you move a model in space by pointing the Magic Leap controller in different directions like a laser pointer with the building hanging off the end, you can also push it away by pressing on the top of the touch pad or rotate it by spinning your thumb around the edge of the touch pad.
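In Unity terms, the core of that interaction looks something like the sketch below – a minimal illustration rather than our production code, with the Get* methods as hypothetical stand-ins for the Magic Leap controller APIs (which have changed across SDK versions):

```csharp
using UnityEngine;

// Sketch of the AND (not XOR) input model: translate and rotate at once.
// The Get* methods are hypothetical stand-ins for the controller SDK calls.
public class MultiModalMover : MonoBehaviour
{
    public Transform model;         // the building model being manipulated
    public float pushSpeed = 1.0f;  // meters/second at full touchpad deflection
    float distance = 2.0f;          // model’s current distance along the pointer ray

    void Update()
    {
        // Translation: the model hangs off the end of the controller’s ray,
        // and pressing the top/bottom of the touchpad pushes/pulls it.
        distance += GetTouchpadPush() * pushSpeed * Time.deltaTime;
        Ray pointer = GetControllerRay();
        model.position = pointer.origin + pointer.direction * distance;

        // Rotation: sweeping the thumb around the touchpad edge yaws the model.
        model.Rotate(0f, GetTouchpadAngularDelta(), 0f, Space.World);
    }

    // --- hypothetical stubs; swap in the real device input API ---
    Ray GetControllerRay() { return new Ray(transform.position, transform.forward); }
    float GetTouchpadPush() { return 0f; }          // -1..1 vertical deflection
    float GetTouchpadAngularDelta() { return 0f; }  // degrees swept this frame
}
```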
This works great for accomplishing many tasks at once. A side effect, though, was that while users rotated a 3D building with their thumbs, they also had a tendency to shake the controller wildly, so that the model seemed to get tossed around the room. It took an amazing amount of dexterity and practice to rotate the model while keeping it in one spot.
To fix this, we added a hand gesture to hold the model in place while the user rotated it. We called this the “halt” gesture because it just required the user to put up their off hand with the palm facing out. (Luke Hamilton, our Head of Design, also called this the “stop in the name of love” gesture.)
But we were on a gesture-inventing roll and didn’t want to stop. We started thinking about how the keyboard is more accurate and faster than a mouse in data entry scenarios, while the mouse is much more accurate than a game controller or hand tracking for pointing and selecting.
We had a similar situation here where the rotation gesture on the Magic Leap controller was intended to make it easy to spin the model in a 360 degree circle, but consequently was not so good for very slight rotations (for instance the kind of rotation needed to correctly orient a life-size digital twin of a building).
We got on the phone with Brian Schwab and Jay Juneau at Magic Leap and they suggested that we try to use the controller in a different way. Rather than simply using the thumb pad, we could instead rotate the controller on its Z-axis (a bit like a screwdriver) as an alternative rotational gesture. Which is what we did, making this a secondary rotation method for fine-tuning.
And of course we combined the “halt / stop in the name of love” gesture with this “screwdriver” gesture, too. Because we could; but more importantly, because it made sense; and most importantly, because it allows the user to accomplish her goals with the least amount of friction.
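Put together, the behavior looks roughly like the following sketch (again with hypothetical input stubs; Mathf.DeltaAngle handles the wrap-around when the controller’s roll crosses 360 degrees):

```csharp
using UnityEngine;

// Sketch: coarse touchpad rotation, fine "screwdriver" twist rotation, and
// the "halt" gesture freezing translation. All Get*/Is* stubs are
// hypothetical stand-ins for the real device input APIs.
public class HaltAndScrewdriver : MonoBehaviour
{
    public Transform model;
    public float fineScale = 0.25f;  // twist is geared down for fine-tuning

    float lastRoll;                  // controller roll (degrees) last frame

    void Start()
    {
        lastRoll = GetControllerRoll();
    }

    void Update()
    {
        // While the palm-out "stop in the name of love" gesture is up,
        // freeze translation so a shaking controller can't toss the model.
        if (!IsHaltGestureDetected())
            TranslateWithPointer();

        // Coarse rotation: full 360-degree spins via the touchpad edge.
        model.Rotate(0f, GetTouchpadAngularDelta(), 0f, Space.World);

        // Fine rotation: twist the controller on its Z axis like a
        // screwdriver; only a fraction of the roll is applied.
        float roll = GetControllerRoll();
        model.Rotate(0f, Mathf.DeltaAngle(lastRoll, roll) * fineScale, 0f, Space.World);
        lastRoll = roll;
    }

    // --- hypothetical stubs; swap in the real SDK calls ---
    void TranslateWithPointer() { /* as in the earlier sketch */ }
    bool IsHaltGestureDetected() { return false; }
    float GetTouchpadAngularDelta() { return 0f; }
    float GetControllerRoll() { return transform.eulerAngles.z; }
}
```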
Simon is one of the main contributors to the Microsoft MRTK framework for HoloLens and also to the XRTK framework for cross-platform mixed reality development. He is the author of several technical books on Unity. He is keeper of the flame on the Unity-UI Extensions source code.
Simon, quite frankly, intimidates me. He knows the Microsoft coding stack as well as the Unity stack, which makes him formidable. He’s currently working on extending the XRTK framework to support the Oculus Quest, which means that if you have built your HoloLens or Magic Leap app on the XRTK, your app will automagically also run on the Quest, thanks to Simon. That’s some seriously cool stuff.
He also happens to be a very nice person who is genuinely concerned about the well-being of the people around him – which I found out the easy way over many online and in-person interactions. I’m not totally sure why he promotes himself as of the Dark Side, since he is clearly more of a Gray Jedi – but that’s not one of the 10 questions, so we may never know. Without further ado, here are Simon’s answers to the 10 Questions:
What movie has left the most lasting impression on you?
“The Matrix, it shows us how to stand tall, to face adversity with strength and uncover meaning in this world we call life.”
What is the earliest video game you remember playing?
“Given I have to recognise I’m getting old, my earliest game I recall was Pong on the Atari 2600. First game console our family owned. First games would be the penny shuffle machines in the arcades of old.”
Who is the person who has most influenced the way you think?
“William Shatner, for showing us how to boldly go and give us a glimpse of the world I’d like to see us aspire to.”
When was the last time you changed your mind about something?
“Whenever the wife decides something and I have no other option but to agree.”
What’s a skill people assume you have but that you are terrible at?
“Recruiters are constantly sending me offers for jobs developing in JavaScript or Java, which I’ve avoided for most of my developer life.”
What inspires you to learn?
“My life’s goal is to always learn something new each and every day, to grow and develop. If we no longer aspire to develop ourselves we cease to be.”
What do you need to believe in order to get through the day?
“I have to believe the coffee will not run out, else the world becomes a much more vicious place. I also hope to defeat ignorance, but ignorance always finds new ways to baffle me.”
What’s a view that you hold but can’t defend?
“I have long held the belief that humankind will eventually realise its insignificance and start to work towards the betterment of ourselves and the planet we live on. However, I’m proven wrong each and every day (for now). Basically, I want the world of Star Trek, not the world of Star Wars.”
What will the future killer Mixed Reality app do?
“Once mixed reality technology finally becomes affordable enough and cool enough to wear all day long, I believe the killer experience will be something that integrates with our everyday. An app/experience that will enrich the world around us, show us new sights and experiences, and offer us new ways to interact. Be it a simple experience that adds wonder to a shopping centre visit, or uses geo location whilst visiting historic sights and completely immerses us whilst learning (instead of just reading signs as we do now).”
What book have you recommended the most?
“Snow Crash by Neal Stephenson, it opens up so many new possibilities and levitates towards the dangers of being “plugged in” too much. Giving us a sense of wonder and danger in equal measure, leading us to live in a world augmented by technology but not driven by it.”
And then Simon volunteered two more unsolicited questions:
Favourite quote?
“The definition of insanity is doing the same thing over and over again and expecting a different result. — Albert Einstein (as well as others).”
The Magic Leap store (aka Magic Leap “World”) has published its first paid app, at $9.99. The app was created by Insomniac but published by Magic Leap itself, so in some sense it is a trial run for the store. “Seedling” made its first appearance at the Leap conference in Los Angeles in October.
$9.99 is also an interesting price, perhaps signaling a target for apps on Magic Leap devices. Back in the day, $0.99 was the target price for apps on Apple’s App Store. When Microsoft came out with Windows Phone, they marketed the idea that apps should sell for more than that on their platform (more toward $1.49 or $1.99). On Steam, the magic price point for games seems to be $20 to $60.
For the HoloLens, which uses the online Windows Store as its distribution channel, the most frequent price point seems to be free. This makes sense since even with a purported 50K HoloLens devices currently in the world, the total market is still too small to support a reasonably priced game. Trimble initially went the other way with their SketchUp Viewer, which lists for about $1.5K, apparently trying to recoup their investment with a high price tag. Their subsequent HoloLens offering, part of a collaboration service, is free.
In order to buy Seedling, I had to go into my online Magic Leap creator account and add a payment method. This is an interesting aspect of all current VR and AR devices: entering data is rarely – and entering financial data is never – done through the actual device. We still live in a world where one must switch to either a phone or a computer to establish the credentials that will be used on the device.
This is ultimately a pre-NUI UX problem involving the difficulty of doing data entry without a keyboard and mouse (though we are finally getting comfortable with doing this on our smartphones, thanks to the rising comfort level with web apps on tiny screens). This will be an ongoing problem for developing apps for the enterprisey market, where the exchange of data is pretty key.
Who knows, maybe solving this UX dilemma for the enterprise will end up being the killer app we’ve all been waiting for. I wonder how much someone would charge for it?
At the beginning of October I was invited to deliver two sessions at Techorama Netherlands: one on Cognitive Services Custom Vision and one about the HoloLens and the Magic Leap One. This is one of the best organized conferences I’ve been to and the hosts and attendees were amazing. I can’t say enough good things about it.
The lineup was also great, with Scott Guthrie, Laurent Bugnion, Giorgio Sardo, Shawn Wildermuth, Pete Brown, Jeff Prosise, etc. It is what is known as a first-tier tech conference. What was especially impressive is that this was also the first time Techorama Netherlands had been convened.
I want to also thank my friend Dennis Vroegop for hosting me and showing me around on my first trip to the Netherlands. He and Jasper Brekelmans took a weekday off to give me the full Amsterdam experience. It was also great to have beers with Roland Smeenk, Alexander Meijers and Joost van Schaik. I’m not sure why there is so much mixed reality talent in the Netherlands but there you go.
I’m currently sitting in my room at the L.A. Grand Hotel waiting for the L.E.A.P. conference to start. I’ve been holding off on this comparison post because I had promised Dennis Vroegop I would give it first as a talk at the Techorama Netherlands conference – which I did last week. I will do a feature comparison based on publicly available information, then highlight features unique to the Magic Leap, and then distinguish subtle but important differences that only become apparent from spending months with these devices at the developer level. Finally I want to point out design improvements in the Magic Leap that are so good for Mixed Reality that I predict they will be incorporated into the next version of HoloLens.
Keep in mind that this is a comparison of two different generations of devices. The Magic Leap One is coming out two years after the HoloLens and would be expected to be better. At the same time, the HoloLens v2 is being released some time in 2019 and can be expected to be better still.
1. Field of View
In raw numbers, the Magic Leap One’s field of view is substantially larger than the HoloLens’s. The HoloLens field of view is estimated to be about 29-30 degrees wide and 17 degrees high; the Magic Leap One’s is 40 degrees wide by 30 degrees high – roughly a third wider and more than twice the angular area. There is a corresponding difference in resolution, with the HoloLens offering 1268 by 720 per eye and the Magic Leap One providing 1280 by 960 per eye.
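A quick back-of-the-envelope check on those numbers (my own arithmetic, not from either spec sheet): treating each FOV as a rectangle, the diagonals work out to

$$\sqrt{30^2 + 17^2} \approx 34.5^\circ \;\text{(HoloLens)} \qquad \sqrt{40^2 + 30^2} = 50^\circ \;\text{(Magic Leap One)}$$

so the diagonal grows by about 45 percent, while the angular area goes from roughly 30 × 17 = 510 to 40 × 30 = 1200 square degrees.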
The Magic Leap One uses the same waveguide display technology that the HoloLens does, however, so how did they pump up the FOV? First, the ML1 has a more powerful battery than the HoloLens does, and Microsoft has often claimed that FOV is largely dependent on the power of the projection. This is probably offset, though, by the fact that the ML1 is using more power to project in two planes instead of only one like the HoloLens does (with six waveguide layers compared to four in the HoloLens).
Another trick is that the waveguides in the Magic Leap are closer to the wearer’s eyes than they are in the HoloLens. As a consequence, you can wear glasses underneath the HoloLens while you cannot do so comfortably under the Magic Leap device.
In addition to this, Jasper Brekelmans and Dennis Vroegop suggested over coffees along the Amstel River (in a conversation about David Copperfield) that because one’s peripheral vision is closed off in the ML1, the perceived FOV may seem even larger than it actually is. The theory behind this is that, due to the widespread use of glasses, we have become used to not paying much attention to our peripheral vision and consequently are comfortable with this tunneling of our vision.
Blocking off the peripheral field of view might cause issues in certain industrial settings, but the general effect is that what you can see as a proportion of your overall FOV is much larger in the ML1 than it is in the HoloLens. Or, to put it another way, the empty areas of your FOV, as a proportion of your available FOV, are much smaller than they are in the HoloLens.
On top of this, the aspect ratio of the FOV in the ML1 is much taller than in the HoloLens, which may end up doing a better job of accommodating vertical saccadic movements of the eyes.
Because of the narrower gap between the device and the wearer’s eyes, the Magic Leap can’t accommodate glasses as the HoloLens can. To compensate, Magic Leap is developing relationships with online eyeglass manufacturers to provide prescription inserts that can be placed in front of the waveguides and magnetically lock into place. There’s some controversy over whether this is a good or a bad thing. Some developers have expressed concern that this will make demoing Magic Leap at events more difficult than demoing HoloLens, since those with poor vision will either not be able to participate or, alternatively, we will be forced to carry around a large suitcase of prescription inserts to every event.
On the other hand, when I think of what MR will be like in the future, I tend to think of them resembling real glasses (and not electronic contacts, which simply scare me). When they reach the size and ubiquity of modern glasses, it will make sense for each person to have their own personalized device with their appropriate prescription. Magic Leap is on the right track in this case. It’s just in the intervening period that we have to figure out how to share our limited, expensive devices with others.
2. Specs

Magic Leap One:
- CPU: NVIDIA® Tegra X2 SOC – 2 Denver 2.0 64-bit cores + 4 ARM Cortex A57 64-bit cores (2 A57s and 1 Denver accessible to applications)
- GPU: NVIDIA Pascal™, 256 CUDA cores; graphics APIs: OpenGL 4.5, Vulkan, OpenGL ES 3.3+
- RAM: 8GB
- Storage: 128GB

HoloLens:
- CPU: Intel (reports itself as vendor ID 8086h)
- RAM: 2GB
- Storage: 64GB
The Magic Leap One is overall a much beefier machine than the current HoloLens. While both the HoloLens and the Magic Leap One advertise a 3-hour battery life, these can mean vastly different things. In order to drive all of its extra hardware, the Magic Leap One needs a much beefier battery: it is powered by a twin-cell battery providing 36.77 Wh at 3.83 V, while the HoloLens has a 16.5 Wh battery.
For overall performance, the larger battery means the world meshes (i.e. surface reconstruction, world mapping) are much denser and more frequently updated on the Magic Leap than on the HoloLens. The Time-of-Flight depth camera can fire off more frequently and for longer periods.
The larger battery and beefier specs also translate to much better 3D performance. The HoloLens is able to run 30,000 polygons at 60 fps. Beyond that, the fps begins to drop. The Magic Leap runs upwards of 1 million polygons at 60 fps.
On the downside, that more powerful battery rig needs a fan to cool it whereas the HoloLens is passively cooled. In laboratory and medical scenarios where a sterile environment must be maintained, active cooling with a fan could be a problem.
3. The HoloLens and Tracking
The HoloLens uses 4 monochrome cameras (“environment aware sensors”), an accelerometer, magnetometer and gyroscope in a sensor fusion configuration, and a custom HPU to perform head tracking. The Magic Leap One has a similar setup, minus the HPU.
The HoloLens tracking is still somewhat better than the ML1’s. It loses tracking less frequently and digital content is less jittery when seen up close or while the wearer is in motion.
Overall, though, tracking performance is fairly close between the two devices.
4. Magic Leap Extras
The ML1 has a couple of features that are simply outside the box. One is eye tracking: inward-facing cameras track the wearer’s eye movements using invisible IR flashes.
The tracking is not continuous and is captured at a much lower resolution than the displays. While it shouldn’t be used for direct user interactions, it is great for providing context for other interactions. It would be great if someone would write a keyboard that uses eye tracking to select keys. In the meantime, I wrote this heat vision demo that uses eye tracking to burn the walls of my house – I think of it as “Superman with a Migraine”. Note the eye-blink tracking.
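The demo itself boils down to a raycast along the gaze direction. Here is a minimal sketch, with TryGetGazeRay as a hypothetical stand-in for the SDK’s eye-tracking call (the stub body just falls back to head gaze):

```csharp
using UnityEngine;

// "Superman with a Migraine": raycast along the eye gaze and scorch
// whatever the user is looking at. TryGetGazeRay is a hypothetical
// stand-in for the device SDK's eye-tracking (fixation) API.
public class HeatVision : MonoBehaviour
{
    public ParticleSystem burnEffect;  // smoke/embers shown at the gaze point

    void Update()
    {
        if (TryGetGazeRay(out Ray gaze) &&
            Physics.Raycast(gaze, out RaycastHit hit, 10f))
        {
            // Move the burn effect to the wall (or whatever was hit).
            burnEffect.transform.position = hit.point;
            burnEffect.transform.rotation = Quaternion.LookRotation(hit.normal);
            if (!burnEffect.isPlaying) burnEffect.Play();
        }
        else if (burnEffect.isPlaying)
        {
            burnEffect.Stop();  // e.g., on blink, when no fixation is reported
        }
    }

    // Hypothetical stub: replace with the real eye-tracking call.
    bool TryGetGazeRay(out Ray gaze)
    {
        gaze = new Ray(Camera.main.transform.position,
                       Camera.main.transform.forward);  // head-gaze fallback
        return true;
    }
}
```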
The other cool extra in the Magic Leap is two planes of focus. Most VR devices have a single plane of focus at infinity. The HoloLens has a single plane of focus set at two meters.
In the Magic Leap One, when you look at near objects, objects further away (on the outer plane) seem to go out of focus; when you look at distant objects, the near ones do. I would guess that the close plane is around a meter and the outer one about 3 meters, but I’m not really sure. In Lumin OS 0.91, there is also a sporadic green shift in the near plane (which I expect will be fixed soon).
5. The Tether
The Magic Leap One is made up of two parts, the Light Pack and the Light Wear, connected by a cable. The Light Wear is the headset and contains all the sensors, projectors and displays, while the Light Pack, worn at the hip, contains all the computer bits and the battery.
This is an engineering choice that allows for a much larger power source. Without the tether solution, a large battery would not be possible. Without the large battery, the ML1’s enhanced depth sensing, improved graphics processing and larger field of view would not be possible.
In addition, this design makes the Magic Leap a much more comfortable fit on the head. The weight distribution is better than on the HoloLens, it is lighter, and it doesn’t require extra straps.
The tether solution is actually so effective that I would be surprised if the HoloLens v2 does not follow a similar design. The original one-piece “tetherless” solution Microsoft came up with for the HoloLens was visionary, but severely limiting.
6. Developing
If you have ever developed in Unity for Android (or really any other device), then you know how to develop for Magic Leap in Unity. You press a button and your app compiles to an .mpk image (Android uses the “.apk” file extension). If your device is attached, you can deploy directly by clicking “build and run”.
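If you script your builds, the whole cycle reduces to one menu item. A minimal editor sketch, assuming a Unity version with Lumin build support installed (the scene path and output name are placeholders):

```csharp
using UnityEditor;   // editor-only: place this file in an Editor/ folder
using UnityEngine;

public static class MagicLeapBuild
{
    [MenuItem("Build/Magic Leap .mpk")]
    public static void Build()
    {
        var options = new BuildPlayerOptions
        {
            scenes = new[] { "Assets/Scenes/Main.unity" },  // placeholder scene
            locationPathName = "Builds/MyApp.mpk",          // placeholder output
            target = BuildTarget.Lumin,  // assumes Lumin support is installed
            options = BuildOptions.None
        };
        var report = BuildPipeline.BuildPlayer(options);
        Debug.Log($"Build finished: {report.summary.result}");
    }
}
```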
Magic Leap apps can also be built with the Unreal Engine.
HoloLens apps run on a Unity player sandboxed in a UWP app. The development cycle consequently involves exporting your HoloLens app as a Visual Studio project targeting UWP and then building and deploying in UWP. In general (and it may just be me) this has been tedious.
It became even worse when the immersive WinMR devices (or occluded WinMR – basically Microsoft VR) came out last year and the basic toolset used for HoloLens development, known as the HoloLens Toolkit and then the Mixed Reality Toolkit, was expanded to support both kinds of device. Because of some issues with Unity, building for WinMR required certain versions of Unity and above, while developing for HoloLens required certain versions of Unity and below. And this state went on for several months, to the point that finding the correct Windows SDK paired with the right MRTK version paired with the correct Unity version became a closely kept alchemical formula passed from developer to developer.
This experience may not be the same for everyone but it left me a bit traumatized. By contrast, Magic Leap development is simply a pleasure. I can build and see the results very quickly in my device. I can wear the device for hours at a time. I typically only stop development when the ML battery runs down and I have to let it recharge. I don’t have a Magic Leap Hub, which would allow me to charge while I dev, but I intend to get one.
The Magic Leap toolkit is still not quite as capable as the open source Mixed Reality Toolkit managed by Stephen Hodgson and others.
The Magic Leap also has a simulator rather than an emulator for developing without a device. This actually makes sense, since the HoloLens emulator runs the HoloLens OS in a virtual machine, which might be tricky given the much larger specs of the Magic Leap.
7. Interactions
The Magic Leap supports robust hand and gesture tracking as well as a 6DOF controller. The DOF in 6DOF stands for degrees of freedom. We know not only the direction the controller is pointing in (3DOF) but also its position.
I love the controller. I love it so much it made me finally admit to myself that I hate the HoloLens tap gesture. No one ever gets it right. It’s awkward. It’s uncomfortable and makes me feel like I’m performing a kung fu move.
By contrast, a controller just makes sense. The UX for MR, I believe, should always support three layers of interaction: hand gestures for ease of use, falling back to the controller for precision movements, and falling back finally to the delta pad on the controller for accessibility.
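In code, those three layers reduce to an ordered fallback. A minimal sketch, where all three Try* providers are hypothetical placeholders for the platform’s gesture, controller, and delta-pad inputs:

```csharp
using UnityEngine;

// Three-layer input model: hand gestures for ease of use, the 6DOF
// controller for precision, the delta pad for accessibility. Each Try*
// method is a hypothetical placeholder for the platform input API.
public class LayeredSelect : MonoBehaviour
{
    void Update()
    {
        if (TryHandGestureSelect(out Vector3 point) ||  // layer 1: hands
            TryControllerSelect(out point) ||           // layer 2: 6DOF pointer
            TryDeltaPadSelect(out point))               // layer 3: delta pad
        {
            OnSelect(point);
        }
    }

    void OnSelect(Vector3 point) { Debug.Log($"Selected at {point}"); }

    // --- hypothetical placeholders; swap in the real input providers ---
    bool TryHandGestureSelect(out Vector3 p) { p = Vector3.zero; return false; }
    bool TryControllerSelect(out Vector3 p)  { p = Vector3.zero; return false; }
    bool TryDeltaPadSelect(out Vector3 p)    { p = Vector3.zero; return false; }
}
```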
For all of my antipathy toward the HoloLens tap, however, I have to say I miss the HoloLens bloom gesture (escape), which I keep trying to use in Magic Leap to no avail. Instead, on the Magic Leap, holding the controller’s Home button for three seconds is the escape gesture, which I don’t really like. It also bothers me that hand gestures aren’t supported in the core desktop (the icon grid) – but this is still the Creator’s Edition (translation: dev edition), after all.
[Late edit thanks to SH: it should also be pointed out that the Lumin OS (the desktop layer) currently doesn’t support hand gestures, which I find baffling. For now, you can’t get past the login and other initial screens without a paired phone or a controller.]
Summing Up
So is the Magic Leap One better than the HoloLens v1? Oh yes. By leaps and bounds.
1. The development workflow is much more straightforward and pleasant.
2. The increased battery size and beefier hardware make it possible to do things, performance-wise, that the HoloLens tended to stop us from doing. Phone- and tablet-level experiences are doable now.
3. The Magic Leap One has a much better interaction model than the HoloLens does. How did anyone ever do MR without a controller? (Actually, everyone used an Xbox controller in the end in order to get any sort of real work done, but we don’t talk about that much.)
Is it time to jump back into Mixed Reality development?
If you spent $3.2K to $5K for a HoloLens, then you owe it to yourself to spend $2,300 for a Magic Leap. It’s the device you originally wanted. The HoloLens was a brilliant device back in 2016 and really the first of its kind, but it had limitations. Many of the projects you were never able to realize in HoloLens (in the small dev community that developed around HoloLens, we all know what these are) are now doable with the improved Magic Leap specs. Additionally, your enterprise stories are much easier to sell with the controller. Instead of spending 5 minutes of your precious pitch time explaining how tap works, you can now just let your potential investors and clients go straight into the demo with a controller they basically already know how to use.
Is there a future in spatial computing?
Now there is. There was a brief pause between 2016 and the middle of 2018, but we currently have two great devices available with another shoe dropping soon. Microsoft will be coming out with a HoloLens v2 sometime in the first half of 2019 which I would predict will implement the tethered design Magic Leap is using. This will be an improvement over the current Magic Leap which in turn will be driven to improve its own tech.
Microsoft has an advantage because it started this journey back in the Kinect days and has the resources of Microsoft Research to draw on. Magic Leap has an advantage because, well, they aren’t Microsoft and don’t face the internal political problems a large tech giant does (though no doubt they have their own). More importantly, they have their own U.S.-based production lines (as well as production lines in Mexico) and are less reliant on China, which hopefully means they are capable of much quicker turn-arounds and initial SKU production.
When do we get smaller devices that wear like glasses?
I have no idea, but try to think in terms of 3, 5, 10 years. We always overestimate what can be done in 3 years but always underestimate how much things will change in 10. Somewhere in the middle, we will intersect with our MR futures.
Your comments, corrections and criticisms are welcome in the comments below. I’ll try to keep up with them and incorporate what you say into the main article as appropriate.
I meant to finish this earlier in the week. I spent the past weekend in Los Angeles at the VRLA conference in order to hear Jasper Brekelmans speak about the state of the art in depth sensors and visual effects. One of the great things about VRLA is all the vendor booths you can visit that are directly related to VR and AR technology. Nary a data analytics platform pitch or dev ops consulting services shill in sight.
Walking around with Jasper, we started compiling a list of how we would spend our Bitcoin and Ethereum fortunes once they recover some of their value. What follows is my must-have shopping list if I had so much money I didn’t need anything:
1. Red Frog HoloLens mod
First off is this modified HoloLens by Red Frog Digital. The fabrication allows the HoloLens to balance much better on a user’s head. It also applies no pressure to the bridge of the nose, but instead distributes it across the user’s head. The nicest thing about it is that it always provides a perfect fit, and can be properly aligned with the user’s eyes in about 5 seconds. They designed this for their Zombie Maze location-based experience and are targeting it for large, permanent exhibits / rides.
2. Cleanbox … the future of wholesome fun
If you’ve ever spent a lot of time doing AR and VR demos at an event, you know there are three practical problems you have to work around:
seating devices properly on users’ heads
cleaning devices between use
recharging devices
Cleanbox Technology provides a solution for venue-based AR/VR device cleaning. Place your head-mounted display in the box, close the lid, and it instantly gets blasted with UV rays and air. I’d personally be happy just to have nice wooden boxes for all of my gear – I have a tendency to leave them lying on the floor or scattered across my computer desk – even without the UV lights.
3. VR Hamster Ball
The guy demoing this never seemed to let anyone try it, so I’m not sure if he was actually playing a hamster sim or not. I just know I want one as a 360 running-in-place controller … and as a private nap space, obviously.
4. Haptic Vest and Gauntlets
Bhaptics was demoing their TactSuit, which provides haptic feedback along the chest, back, arms and face. I’m going to need it to go with my giant hamster ball. They are currently selling dev units.
5. Birdly
A tilt table with an attached fan and a user control in the form of flapping wings is what you need for a really immersive VR experience. Fortunately, this is exactly what Birdly provides.
6. 5K Head-mounted Display
I got to try out the Vive Pro, which has an astounding 2K resolution. But I would rather put my unearned money down for a VRHero 5K VR headset with 170 degree FOV. They seem to be targeting industrial use cases rather than games, though, since their demo was of a truck simulation (you stood in the road as trucks zoomed by).
7. A globe display
Do I need a giant spherical display? No, I do not need it. But it would look really cool in my office as a conversation piece. It could also make a really great companion app for a VR or AR experience.
8. 360 Camera Rig with Red Epic Cameras
Five 6K Red Dragon Epic Cameras in a 360 video rig may seem like overkill, but with a starting price of around $250K, before tripod, lenses and a computer powerful enough to process your videos – this could make the killer raffle item at any hi-tech conference.
9. XSens Mocap Suit
According to Jasper, the XSens full-body Lycra motion capture suit with real-time kinematics is one of the best available. I think I was quoted a price of something like $7K(?) to $13K(?). Combined with my hamster ball, it would make me unstoppable in PvP competitive Minecraft.
10. AntVR AR Head-mounted display
AntVR will be launching a kickstarter campaign for their $500 augmented reality HMD in the next few weeks. I’d been reading about it for a while and was very excited to get a chance to try it out. It uses a Pepper’s ghost strategy for displaying AR, has decent tracking, works with Steam, and at $500 is very good for its price point.
11. Qualcomm Snapdragon 845
The new Qualcomm Snapdragon 845 chip has built-in SLAM – meaning 6DOF inside-out tracking is now a trivial chip-based solution – unlike just two years ago, when nobody outside of robotics had even heard of SLAM algorithms. This is a really big deal.
Lenovo is using this chip in its new (untethered) Mirage Solo VR device – which looks surprisingly like the Windows Occluded MR headset they built with Microsoft tracking tech. At the keynote, the Lenovo rep stumbled and said that they will support “at least” 6 degrees of freedom, which has now become an inside joke among VR and AR developers. It’s also spoiled me, because I am no longer satisfied with only 6DOF. I need 7DOF at least but what I really want is to take my DOF up to 11.
12. Kinect 4
This wasn’t actually at VRLA, and I’m not ultimately sure what it is (maybe a competitor for the Google computer vision kit?) but Kinect for Azure was announced at the /build conference in Seattle and should be coming out sometime in 2019. As a former Kinect MVP and a Kinect book author, this announcement mellows me out like a glass of Hennessy in a suddenly quiet dance club.
While I’m waiting for bitcoin to rebound, I’ll just leave this list up on Amazon for, like, in case anyone wants to fulfill it for me or something. In the off chance that that actually comes through, I can guarantee you a really awesome unboxing video.