50 Shaders of Grey

This isn’t actually a blog post yet. Just a title and a promise. Someday I will backfill this post with amazingly deviant technical information about pixel shaders, vertex shaders, algorithmic drawing, matrix math, ray tracing, kernel convolutions and nipple clamps. Stay tuned.

Unity 5 and Kinect 2 Integration

pointcloud

Until just this month one of the best Kinect 2 integration tools was hidden, like Rappaccini’s daughter, inside a walled garden. Microsoft released a Unity3D plugin for the Kinect 2 in 2014. Unfortunately, Unity 4 only supported plugins (bridges to non-Unity technology) if you owned a Unity Pro license, which typically cost over a thousand dollars a year.

On March 3rd, Unity released Unity 5, which includes plugin support in the free Personal edition – suddenly making it easy to build complex experiences, like point cloud simulations, that would otherwise require a decent knowledge of C++. In this post, I’ll show you how to get started with the plugin and have a Kinect 2 application running in about 15 minutes.

(As an aside, I always have trouble keeping this straight: Unity has plugins, openFrameworks has addons, and Cinder has blocks. Visual Studio has extensions and add-ins as well as NuGet packages after a confusing few years of rebranding efforts. There may be a difference between them but I can’t tell.)

1. First you are going to need a Kinect 2 and the Unity 5 software. If you already have a Kinect 2 attached to your Xbox One, then this part is easy. You’ll just need to buy a Kinect Adapter Kit from the Microsoft Store, which lets you plug your Xbox One Kinect into your PC. The Kinect for Windows 2 SDK is available from the K4W2 website, though everything you need should install automatically the first time you plug your Kinect into your computer. You don’t even need Visual Studio for this. Finally, you can download Unity 5 from the Unity website.

linktounityplugin

2. The Kinect 2 plugin for Unity is a bit hard to find. You can go to this Kinect documentation page and scroll halfway down to find the link called Unity Pro Packages. Alternatively, here is a direct link to the most current version of the plugin as of this writing.

unitypluginfolder

3. After you finish downloading the zip file (currently called KinectForWindows_UnityPro_2.0.1410.zip), extract it to a known location. I like to use $\Documents\Unity. Inside you will find three plugins as well as two sample scenes. The three Kinect plugins are the basic one, a face recognition plugin, and a gesture builder plugin, each wrapping functionality from the Kinect 2 SDK.

newunityproject

4. Fire up Unity 5 and create a new project in your known folder. In my case, I’m creating a project called “KinectUnityProject” in the $\Documents\Unity folder where I extracted the Kinect plugins and related assets.

import

5. Now we will add the Kinect plugin into our new project. When the Unity IDE opens, select Assets from the top menu and then select Import Package | Custom Package …

selectplugin

6. Navigate to the folder where you extracted the KinectforWindows_Unity components and select the Kinect2.0.xxxxx.unitypackage file. That’s our plugin along with all the scripts needed to build a Kinect-enabled Unity 5 application. After clicking on “Open”, an additional dialog window will open up in the Unity IDE called “Importing Package” with lots of files checked off. Just click on the “Import” button at the lower right corner of the dialog to finish the import process. Two new folders will now be added to your Project window under the Assets folder called Plugins and Standard Assets. This is the baseline configuration for any Kinect project in Unity.

unitywarning

7. Now we’ll get a Kinect with Unity project going quickly by simply copying one of the sample projects provided by the Microsoft Kinect team. Go into File Explorer, copy the folder called “KinectView” out of the KinectforWindows_Unity folder where you extracted the plugins, and paste it into the Assets directory in your project folder. Then return to the Unity 5 IDE. A warning message will pop up letting you know that there are compatibility issues between the plugin and the newest version of Unity and that files will automatically be updated. Go ahead and lie to the Unity IDE. Click on “I Made a Backup.”

added_assets

8. A new folder has been added to your Project window under Assets called KinectView. Select KinectView and then double-click the MainScene scene inside it. This should open up your Kinect-enabled scene inside the game window. Click the play button (the single arrow) near the top center of the IDE to see your application in action. The Kinect will automatically turn on and you should see a color image, an infrared image, a rendering of any bodies in the scene and finally a point cloud simulation.
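If you peek inside the KinectView scripts, you’ll see the basic pattern the plugin exposes: get the default sensor, open a frame reader, and poll for frames on each Update tick. Here is a minimal sketch of that pattern – the class and field names are my own, not the sample’s, and the Windows.Kinect types come from the imported unitypackage:

```csharp
// Minimal sketch of the Kinect 2 Unity plugin's polling pattern.
// Class and field names are illustrative, not from the KinectView sample.
using UnityEngine;
using Windows.Kinect;

public class BodySourceSketch : MonoBehaviour
{
    private KinectSensor _sensor;
    private BodyFrameReader _reader;
    private Body[] _bodies;

    void Start()
    {
        _sensor = KinectSensor.GetDefault();
        if (_sensor == null) return;

        _reader = _sensor.BodyFrameSource.OpenReader();
        if (!_sensor.IsOpen)
        {
            _sensor.Open();
        }
    }

    void Update()
    {
        if (_reader == null) return;

        // Poll for the latest body frame once per Unity update tick
        using (BodyFrame frame = _reader.AcquireLatestFrame())
        {
            if (frame == null) return;

            if (_bodies == null)
            {
                _bodies = new Body[_sensor.BodyFrameSource.BodyCount];
            }
            frame.GetAndRefreshBodyData(_bodies);
            // _bodies now holds the latest tracked skeleton data
        }
    }

    void OnApplicationQuit()
    {
        if (_reader != null) { _reader.Dispose(); _reader = null; }
        if (_sensor != null && _sensor.IsOpen) { _sensor.Close(); }
    }
}
```

The color, infrared and point cloud views in the sample follow the same open-a-reader, poll-a-frame shape with their respective frame sources.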

allthemarbles

9. To build the app, select File | Build & Run from the top menu. Select Windows as your target platform in the next dialog and click the Build & Run button at the lower right corner. Another dialog appears asking you to select a location for your executable and a name. After selecting an executable name, click on Save in order to reach the final dialog window. Just accept the default configuration options for now and click on “Play!”. Congratulations. You’ve just built your first Kinect-enabled Unity 5 application!

les fruits dangereux

potato_clock

Whatever happened to the potato clock? In a revived period of do-it-yourselfers, Arduino artists and 3D printing presses – a bright new age of artisanal software and hardware development – what has happened to the epitome of nerdish home engineering? At one point in time you couldn’t escape your teenage years without an awareness of the potato clock – stashed somewhere between the x-ray glasses and the sea monkeys – and then poof, suddenly it vanished from the collective consciousness.

Over lunch with my good friends Joel and Nate, I raised this question and Nate was quick to identify the exact date on which the potato clock disappeared. It happened in 2001. On September 11th, to be precise. On that day, many seemingly innocent things were recognized for the danger they pose. The test of this perceptual shift occurred on January 31, 2007, when a marketing campaign for the cartoon series Aqua Teen Hunger Force turned into a bomb scare. Whereas levity typically brings with it perspective and reestablishes old norms, in this case – five years after 9/11 – the guerrilla campaign involving LED lights was seen as offensive and in poor taste, and led to the resignation of Jim Samples, the man who built up Cartoon Network from nothing. Which led us to ask, over our fruit salads, what other common objects – objects like the lowly potato – might be revealed to have a sinister aspect.

Children have always known the threatening qualities of fruits and vegetables. Adults, on the other hand, have always been somewhat oblivious to this implicit threat that children are aware of in their bones, and have been known to eat kale and Brussels sprouts with reckless abandon. We considered what it would take to capture the danger implicit in fruits and vegetables and realized all it required was a bit of tape and a phone.

WP_20150321_00_10_24_Pro

Consider the banana. At first appearance, it is a pleasant and unassuming fruit with a waxy peel and a soft, sweet interior. Attach a phone to it with some electrical tape and leave it in the lobby at the airport and its true nature reveals itself. Drop it in the mailbox at the post office and see just how innocent it really is.

WP_20150321_00_10_42_Pro

Leave this at the entrance to a police station and see who laughs.

WP_20150321_00_12_58_Pro

Context obviously matters. A banana in a fruit basket suggests a certain functional role. A banana with electronics taped to it sets up a different sort of context. Through experimentation, we found that even the type of tape used can shift the context in subtle but meaningful ways. Silvery duct tape, for instance, is much more threatening than glossy black electrical tape when attached to fruit.

WP_20150321_00_14_34_Pro

Which raises the question: what is the most threatening fruit? Here is an apple attached to an AT&T Pantech cell phone – one of the last free-with-contract AT&T phones that did not require a data plan and the automatic $25 data fee charged to your account whether you use data or not. Why can’t I just use it for phone calls and wifi, AT&T? In that scenario, the AT&T cellular contract is obviously the most frightening thing.

WP_20150321_00_17_18_Pro

But what happens when we exchange the Pantech for an HTC Windows Phone?

WP_20150321_00_19_33_Pro

Or an iPhone for that matter? Which is more intimidating: an iPhone 5 or an iPhone 6?

WP_20150321_00_44_07_Pro

Here is a Microsoft Band wrapped around an apple. There’s a metaphor in there somewhere.

WP_20150321_00_29_49_Pro

iPhone 5 and a kiwi.

WP_20150321_00_31_53_Pro

Samsung Galaxy 4 with onion and duct tape.

WP_20150321_00_32_01_Pro

I normally associate electrical tape with bombs and duct tape with kidnapping – which makes duct tape more viscerally terrifying for me. Some people claim they associate duct tape with ducks but I think that’s a canard. Onions make me want to cry.

WP_20150321_00_33_37_Pro

One of life’s riddles: is a coconut a fruit or a vegetable?

WP_20150321_00_35_01_Pro

No matter what I do, I can’t make strawberries look threatening.

WP_20150321_00_39_30_Pro

Carrots, on the other hand, are Nature’s terrorists.

WP_20150321_00_39_13_Pro

These carrots are organic, by the way. The extra cost is worth it for the additional fear factor.

WP_20150321_00_41_45_Pro

Rubber bands can be intimidating in the right context. Especially when that context is celery. The safety pin is overkill, maybe?

WP_20150321_00_37_44_Pro

This is a Nokia Lumia Windows Phone 7 developer unit with both front-facing and rear-facing cameras. It is attached to a red plum.

 WP_20150321_00_49_17_Pro

Kindle Paperwhite meets cantaloupe.

WP_20150321_00_52_50_Pro

And of course, potatoes: fear incarnate.

The Next Book

min_lib

The development community deserves a great book on the Kinect 2 sensor. Sadly, I no longer feel I am the person to write that book. Instead, I am abandoning the Kinect book project I’ve been working on, off and on, over the past year in order to devote myself to a book on the Microsoft holographic computing platform and the HoloLens SDK. I will be reworking the material I’ve collected so far for the Kinect book as blog posts over the next couple of months.

As anyone who follows this blog will know, my imagination has of late been captivated and ensorcelled by augmented reality scenarios. The book I intend to write is not just a how-to guide, however. While I recognize the folly of this, my intention is to write something that is part technical manual and part design guide, part math tutorial, part travel guide and part cookbook. While working on the Kinect book I came to realize that it is impossible to talk about gestural computing without entering into a dialog with Maurice Merleau-Ponty’s Phenomenology of Perception and Umberto Eco’s A Theory of Semiotics. At the same time, a good book on future technologies should also cover the renaissance in theories of consciousness that occurred in the mid-90s and which culminated with David Chalmers’ masterwork The Conscious Mind. Descartes, Bergson, Deleuze, Guattari and Baudrillard obviously cannot be overlooked either in a book dealing with the topic of the virtual, though I can perhaps elide a bit.

A contemporary book on technology can no longer stay within the narrow limits of a single technology, as was common ten or so years ago. Things move at too fast a pace, and there are so many different ways to accomplish a given task that choosing between them depends not only on that old saw “the right tool for the job” but also on taste, extended community and prior knowledge. To write a book on augmented reality technology, even one sticking to a single device like the HoloLens, will require introducing the uninitiated to such wonderful platforms as openFrameworks, Cinder, Arduino, Unity, the Unreal Engine and WPF. It will have to cover C#, since that is by and large the preferred language in the Microsoft world, but also help C# developers overcome their fear of modern C++ and provide a roadmap from one to the other. It will also need to expose the underlying mathematics that developers need to grasp in order to work in a 3D world – and astonishingly, software developers know very little math.

Finally, as holographic computing is a wide new world and the developers who take to it will be taking up a completely new role in the workforce, the book will have to find its way to the right sort of people who will have the aptitude and desire to take up this mantle. This requires a discussion of non-obvious skills such as a taste for cooking and travel, an eye for the visual, a grounding in architecture and an understanding of how empty spaces are constructed, a general knowledge of literary and social theory. The people who create the next world, the augmented world, cannot be mere engineers. They will also need to be poets and madmen.

I want to write a book for them.

Screens, Sensors and Engines

Valve’s recent announcement of their new Vive headset for virtual reality, along with Epic’s announcement that the Unreal Engine is now free, made me realize that it is time to once again catalog the current set of future technologies vying for our attention. Just as the pre-NUI computer user needed the keyboard and mouse, the post-NUI user needs sensors; and just as the pre-NUI user required a monitor to see what she was doing, the post-NUI user needs a headset. Here is the list for 2015 from which, you will notice, Google Glass is now absent:

 

Virtual Reality          Augmented Reality     Sensors               Development Platforms
---------------------    ------------------    ------------------    ---------------------
Oculus Rift              Microsoft HoloLens    Microsoft Kinect 2    Unity 3D
Samsung Gear VR          Magic Leap            Leap Motion           Unreal Engine
Google Cardboard         castAR                Myo                   WPF
Valve HTC Vive           Epson Moverio         Intel RealSense       Cinder
Sony Project Morpheus                          Orbbec                openFrameworks
Razer OSVR                                     Eye Tribe Tracker
Zeiss VR One

The HoloLens Toolchain and XAML Grids

occlusion2

What tech stack will be used to develop applications for HoloLens, Microsoft’s innovative new augmented reality platform?

Keeping in mind that Alex Kipman is the visionary behind both HoloLens and the Kinect sensor, some clues can be gleaned from the current Kinect SDK. The Kinect SDK initially supported development in WPF and in C++ with DirectX. Over time, however, and in line with Microsoft’s internal shift to be more open and embrace the tech stacks of non-Microsoft communities, the Kinect SDK has grown to include support for Unity3D, Cinder, openFrameworks, OpenCV and even MATLAB.

Legos vs Play-Doh

lego_vs_playdoh

This is a two-pronged approach to tooling that Microsoft luminary Rick Barraza has famously characterized as the distinction between building with Legos and sculpting with Play-Doh. Legos represent what Microsoft has traditionally been extremely good at: creating reusable components. Reusable components abstract the underlying technology layers so even beginning developers can accomplish difficult tasks without needing to understand the intricacies of networking, graphics cards, or memory management. As long as all you want to do is the 95% of tasks that Microsoft components support, Legos may be all that you ever need. Microsoft has almost single-handedly created the modern enterprise software developer community based on component building, drag-and-drop IDEs, and copy/paste.

But what if you are not interested in building applications that look like everyone else’s? What if you want to do the 5% of things that reusable components do not allow you to do? This has typically been difficult. Not only does it require a fine-grained understanding of the underlying operating system but also a desire to hack around Microsoft’s safeguards. Because Microsoft had for so long embraced the component model of software development, it internally saw its role as one of safeguarding applications from attempts to follow any other model of software development. API classes are typically sealed to prevent extension, and if one were to ask to have them unsealed, the inevitable reply was always “What would you use that for?”

Now let’s consider a more playful software world in which it makes sense for classes to be unsealed by default so we can just start squishing them around in our hands to see what comes of it. This is software as exploration, and in fact there is a whole community, frequently styled “creative coders”, who work in this way. The Processing programming platform created by Casey Reas and Ben Fry is the epitome of this movement. It was originally created as a better way to teach computer programming, since it is built around drawing shapes rather than displaying text. From this simple and divergent starting point, all “hello world” applications are dramatically different. Even more profound, however, is that through simple loops and seed values, vastly different effects can be generated using only a few lines of code. Playing with Processing feels like sculpting with Play-Doh. Other homologous toolchains were eventually created that share Processing’s emphasis on the visual rather than the textual: openFrameworks, Arduino, Cinder and Unity3D.
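To make that concrete, here is a hedged sketch of the kind of Processing program being described – a couple of loops and a seed value, where every specific number is an arbitrary choice of mine. Change the seed or the loop step and the output becomes a dramatically different image:

```processing
// Processing sketch: nested loops plus a seed value generate a pattern.
// All specific numbers here are arbitrary; tweaking any of them
// (especially the seed or the step) radically changes the result.
long seed = 42;

void setup() {
  size(400, 400);
  noStroke();
  randomSeed(seed);
  for (int x = 0; x < width; x += 20) {
    for (int y = 0; y < height; y += 20) {
      // Random translucent fills seeded deterministically
      fill(random(255), random(255), random(255), 180);
      ellipse(x + 10, y + 10, random(5, 20), random(5, 20));
    }
  }
}
```

That the whole “hello world” is a drawing rather than a line of console text is exactly the divergent starting point described above.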

Which is a roundabout way of saying that, for premium experiences, these creative coding toolchains will likely be the tools of choice. If you want to get a jump start on HoloLens development, go learn those platforms.

 

HologramFramework APIs

 hologramAPIs

I mentioned above that Microsoft has a two-pronged approach to tooling software developers. So far I have only mentioned the Play-Doh side. As we come closer to the Windows 10 release, however, there has been increased activity in the long-dormant WPF platform team – enough to suggest that some sort of holographic support might be released with a new version of WPF. WPF, after all, is Microsoft’s premier platform for component-based development. If the marketing ideal for HoloLens is to have as many HoloLens-supported applications as possible straight out of the gate (as it should be), then an easy-to-use platform for building new HoloLens applications, as well as porting old ones to the new paradigm, is an obvious prerequisite.

The image above is from enterprising colleagues who inspected the Windows 10 symbol packages to find out what kind of holographic support would be natively built into the new OS.  The initial impression is that low-hanging integration will be possible by using some sort of texture mapping model. For example, Silverlight provided a component called the VideoBrush that allowed any control that supported brushes to use a video rather than a solid or gradient texture as a background image. This included even complex 3D shapes or skewed geometries.
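For reference, this is roughly what the Silverlight VideoBrush pattern looked like – the element names and media file below are illustrative, not from any Microsoft sample. The speculation here is that holographic surfaces might be targeted with the same brush-style indirection:

```xml
<!-- Sketch of Silverlight's VideoBrush: any shape that accepts a brush
     can be painted with running video instead of a solid or gradient.
     Element names and the media file are illustrative. -->
<Grid xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <!-- The hidden video source that feeds the brush -->
  <MediaElement x:Name="sourceVideo" Source="clip.wmv"
                Opacity="0" IsMuted="True" />
  <!-- A skewed ellipse whose fill is the video itself -->
  <Ellipse Width="300" Height="200">
    <Ellipse.Fill>
      <VideoBrush SourceName="sourceVideo" />
    </Ellipse.Fill>
    <Ellipse.RenderTransform>
      <SkewTransform AngleX="20" />
    </Ellipse.RenderTransform>
  </Ellipse>
</Grid>
```

Swap the video for an application surface and the skewed ellipse for a wall identified by a depth sensor, and you have the texture-mapping model being guessed at here.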

HoloLens Grids

occlusion4

To me, this suggests a grid-based programming model for quickly painting applications as textures onto valid surfaces identified by the HoloLens’s built-in depth sensors. The depth sensors will use computer vision algorithms to identify and tag surfaces in a room that can be used to project digital content. The user’s movements and any desirable digital-realistic skewing will be taken care of by the underlying holographic framework. For now, let’s assume that interactions will also be taken care of automatically and will follow a mouse-like hover/press idiom for convenience.

 simplegrid

XAML-based frameworks like WPF have a unique layout component called a Grid. Unlike tables in HTML, XAML grids define the layout and the content separately. The layout is specified in ColumnDefinitions and RowDefinitions – for instance, one may specify a grid that is 3 x 3, or 2 x 1 (as above), and so on. Content is written out (or dragged) below the column and row definitions. Its placement in the layout is then defined by attaching positional directives to the content, as shown below.

 grid2
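The snippet in the screenshot can be reconstructed roughly as follows – the panel types and colors are assumptions based on the description:

```xml
<!-- A 2 x 1 grid: two columns, one implicit row.
     Panel types and colors are assumed from the surrounding text. -->
<Grid>
  <Grid.ColumnDefinitions>
    <ColumnDefinition />
    <ColumnDefinition />
  </Grid.ColumnDefinitions>

  <!-- Columns are indexed from zero -->
  <StackPanel Grid.Column="0" Background="Green" />
  <StackPanel Grid.Column="1" Background="Red" />
</Grid>
```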

In this code snippet, I haven’t defined a row, so there is just one row by default. I have defined two columns, which are indexed like a zero-based array. Finally, I’ve specified on the green panel that I want it positioned in the first column by writing Grid.Column="0". The red panel, in turn, is placed in column 1, the second column in the series. The resulting WPF application is shown below:

grid3

In this case, the application is not particularly impressive. We can imagine the same code being written against the Holographic Framework with the following results, however:

occlusion3

And here is what the code for your first “Hello, World” application might look like:

holoworld

occlusion5

It is, I hope, not difficult to see how we can then go from a standard XAML Grid like the one above to a HoloLens-enabled Grid like the one below:

occlusion

At this point, given the dearth of information currently available (I’m dumpster diving through Windows 10 symbol packages, after all) this is obviously just a wild guess. I believe it is a plausible programming model, however, and would provide a royal road to quickly generate applications for Microsoft’s newest and brightest technology innovation.

The Coming Holo Wars and How to Survive Them

 

this is the way the RL world ends

cloud atlas: new seoul

We are the holo men,

We are the stuffed men.

Leaning together

Headpiece filled with straw. Alas!

Our dried voices, when

We whisper together,

Are quiet and meaningless

As wind in dry grass

Or rat’s feet over broken glass

In our dry cellar.

— T. S. Eliot

 

“Disruptive technology” is one of the most overused phrases in contemporary marketing hyper-speech. Borrowing liberally from previous generations’ research into the nature of political and scientific revolutions (Leon Trotsky, Georges Sorel, Thomas Kuhn), self-promoting second-raters have pillaged the libraries of these scholars of disruption and co-opted their intellects in the service of filling the world with useless gadgets and vaporware. When everything is a disruptive technology, nothing is.

Just as Sorel drew on historical examples of general strikes to form his narrative of idealized proletarian revolution, and Kuhn distilled his theory of the “paradigm shift” from three examples of scientific revolution – the transition from the Ptolemaic to the Copernican model of the solar system, the abandonment of phlogiston theory, and the shift from Newtonian to relativistic physics – we can similarly take one step back in order to find the treasure hidden in the morass of marketing opportunism.

There have been three major shakeups in the tech sector over the past several decades; each one was marked by the invocation of the “war” metaphor, the leveraging of large sums of money and massive shifts in the fortunes of well-known companies.

The PC Wars – the commoditization of the personal computer in the 80s led to the diminishing of IBM and a surprising victor, Microsoft, which realized that the key to winning the PC Wars lay not with the hardware but with the operating system that made the hardware accessible. Following that model, the mid- to late-90s saw the rise of the Internet, various attempts to create portal solutions, and a pitched battle between Netscape and Microsoft to produce the dominant browser. 

The Browser Wars – the Browser Wars saw the rise and fall of companies like Yahoo! and AOL, and the eventual victor turned out to be not the best browser but the best search engine: Google. More recently we’ve been going through the Mobile Wars, in which Apple has been the clear winner – but so have Amazon, Twitter and Facebook.

The Mobile Wars – covering both the rise of smartphones and tablet devices, the Mobile Wars have borne fruit in the way we view consumer experiences, have shifted software development from desktop to web development, have made JavaScript a first-class language, have made responsive design the de facto standard, have made the freelance creative designer the Renaissance person of the 21st century, and, perhaps most important, have accelerated geolocation technology. Geolocation, as will be shown below, is a key player in the next technology war.

 

between the idea and the reality

 

jupiter ascending

Shape without form, shade without color,

Paralyzed force, gesture without motion;

 

As a devotee of Adam Sandler movies, I was pleased to see him teamed with Judd Apatow and Seth Rogen in 2009’s Funny People. Adam Sandler movies are up there with “Pretty Woman” and “Dumb and Dumber” in the cable industry as movies that can be shown at any time of day and still be guaranteed to draw viewers. There is a false moment in the middle of the movie, however, in which Adam Sandler and Seth Rogen are flown out to perform at a private party for MySpace. What’s MySpace, you ask? It was a social network that was crushed in the dust by Facebook, of which you have probably heard, along with other even more obscure networks like Friendster and Bebo. MySpace is portrayed in the movie as an up-and-coming social network through a last-gasp cross-marketing placement with Universal Studios.

A major characteristic of today’s tech wars is that we do not remember the losers. It does not even matter how big these corporations were during their period of being winners. Once they are gone, it is as if they are completely erased from the timeline, their reputations liquidated in the same fashion as their Aeron chairs and stock options.

To be a winner in the tech wars is to be a survivor of the tech wars. This applies not just to corporations but also to the marketing, business and technical people who are carried in the wake of rising and falling technology trends. IT groups across the US now face the problem of trends they have ignored finally reaching the C-levels, as they are asked about their mobile strategies, why their applications are not designed to be responsive – and perhaps even why they continue to be written in VB6 or Delphi.

These casualties of the Mobile Wars must be wondering what choices they could have made differently over the past several years and what choices they should be making over the next. How does one survive the conflict that comes after the Mobile Wars?

 

between the motion and the act

 

2001 a space odyssey

Those who have crossed

With direct eyes, to death’s other kingdom

Remember us — if at all — not as lost

Violent souls, but only

As the holo men,

The stuffed men.

 

Surviving and even thriving in the coming Holo Wars is possible if you keep an eye out for the contours of future history – if you know what is coming. The first key is knowing who the major players are: Microsoft, Facebook, Google – though there is no guarantee any of them will still be standing when the Holo Wars are over.

Microsoft has catapulted to the front of the Holo Wars with its announcement of the HoloLens on January 21st. HoloLens is the brainchild of Alex Kipman, who also spearheaded the product development of the Kinect. It is expected to be built on some of the technology developed for the Kinect v2 sensor combined with new holographic display technology – possibly involving eye movement tracking – that has yet to be revealed.

Facebook became a participant in the Holo Wars when it bought Palmer Luckey’s company Oculus VR in mid-2014. The Oculus Rift, a virtual reality headset, is basically two mobile display screens placed in front of a user’s eyeballs in order to show stereoscopic digital visualizations. The key to this technology is John Carmack’s ingenious use of sensors to track and anticipate head movements, rotating and skewing images in a realistic way in the virtual world revealed by the Rift.

Google participates in several ways. Even though the Explorer program is now closed, Google Glass arrived with great fanfare and created excitement around the fashion and consumer uses of heads-up display technology. Following Google’s major investment in October 2014 in Rony Abovitz’s Magic Leap, a maker of mysterious augmented reality technology, it now appears that this is the more likely future direction of Google Glass, or whatever it is eventually called. Magic Leap, in turn, has added some amazing names to its payroll, including Gary Bradski of OpenCV fame and Neal Stephenson, the author of Snow Crash. The third leg of Google’s investment in a holographic future is the expertise in geolocation it has acquired over the past decade.

The next key to surviving the Holo Wars is to understand what skills will be needed when the fighting starts. The first is a deeper knowledge of computer graphics. Since the rise of the graphical user interface, software development platforms have increasingly abstracted away the details of generating pixels and managing human-computer interactions. Future demands for spatially aware pixels will force developers to relearn basic mathematical concepts: linear algebra, trigonometry and matrix math.

In addition to mathematics, machine learning will be important as a way of making overwhelming amounts of data manageable. Modern computer interactions are relatively simple. Users sit in one place, in a fixed position respective to the machine, and rarely deviate from this position. Input is passed through transducers that reduce desire and intent into simple signals. Digital reality experiences, on the other hand, not only receive gestural information which must be interpreted but also physical orientation, world coordinates, facial expressions and speech commands. A basic knowledge of Bayesian probability and stochastic calculus will be part of the tool chest of anyone who wants to successfully navigate the Holo joblists of the future.

To reforge ourselves with skills for surviving the next seven years, designers must become better programmers and programmers must become more creative. The freelance creative, a job role that expanded dramatically during the Mobile Wars, will have an even brighter future in a world pervaded by augmented reality experiences. In order to make the shift, however, creatives will need to move beyond their comfort zone of creating PSDs in Photoshop and learn motion graphics as well as basic computer programming. Programmers, likewise, will need to move beyond the conceit that coding is an inherently creative activity; moving data from point A to point B is no more creative than moving books around a sprawling Amazon warehouse and packing them up for shipping is poetic.

Real creative coding involves learning how to construct digital-to-physical experiences with Arduino, how to program self-generating visual algorithms with Processing, how to create 3D worlds in Unity and how to create complex visual interactions with openFrameworks and Cinder. These activities will become the common vocabulary of the future programmers of augmented experiences. Hiring managers and recruiters will expect to find them on resumes, and without them, otherwise experienced tech workers will be unhireable or, worse, relegated to maintaining legacy web applications.

 

not with a bang but a whimper

 

enders game

The eyes are not here

There are no eyes here

In this valley of dying stars

In this holo valley

This broken jaw of our lost kingdoms

In this last of meeting places

We grope together

 

How can one tell if these prescriptions for the future Holo Wars are real and actionable or simply more marketing hype attempting to take advantage of people’s natural gullibility regarding technical gadgets? Aren’t we always being burned by overly optimistic portrayals of the future that never come to pass? Where are our flying cars? Where are our remote work locations?

In order for the Holo Wars to play out, certain milestones need to be achieved. Consequently, if you start seeing these milestones realized, you will know that you are in fact living through a fight over the next disruptive technology: one that will destroy some major tech corporations while affirming others at the apex of the tech world, reward those who have positioned themselves with useful skills for this future economy, and punish those who have not. These milestones are: technology, monetization, persistent holographic objects, belief circles, and overlapping dissociative realities.

Technology: the first phase is occurring now with the three major players discussed above and several additional players such as Metaio, Qualcomm and Samsung engaged in building up consumer augmented reality hardware and supporting technologies such as geolocation and gestural interfaces.

Monetization: innovation costs money. The initial hardware and infrastructure effort will likely be subsidized by the major players. Over time, the monetization model will likely follow what we see on the internet with “free” consumer experiences being subsidized by ads. There will be a struggle between premium subscription based experiences offering to remove the ads while providing better, higher resolution experiences with better content. These portal solutions will also contend against free and low-cost plug-in content provided by hackers and freelance creatives. How this plays out will depend largely on whether the premium content providers will be able to block out independents through standards and compatibility issues as well as whether hackers will find ways to overcome these roadblocks. There is also the possibility that some of the players might be looking at a much longer game and will foster an open AR content generation community rather than attempt to crush it. If the AR economy opens up in this way, a new service sector will grow made up of one set of people generating digital worlds for another set to live in.

Persistent Holographic Objects: virtual worlds are typically subjective experiences. They can be made inter-subjective, as they are in MMOs, by creating virtual topology in which people co-exist and co-operate. In augmented worlds, on the other hand, shared topology is an inherent feature. AR shared topology is called reality. In order to make AR worlds truly inter-subjective, rather than simply objective or subjective, shared holo objects must be part of the experience. Persistent holo objects such as a digital fountain, a digital garden, or a digital work of art will have a set location and orientation in the world. AR players will need to travel to these locations physically in order to experience them. Unlike private AR or VR experiences in which each player views copies of the same digital object, with a shared experience each player can be said to be looking at the same persistent holo object from different points of view. In order to achieve persistent holographic objects, we will require finer-grained geolocation than we currently have. AR gear must also be improved to become more usable in direct sunlight.
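A persistent holo object is, at minimum, an asset pinned to world coordinates plus a rule for when to render it. The record below is a speculative sketch of such an anchor (the field names and the digital fountain are invented for illustration, not any shipping API), with a haversine distance check deciding whether a player is close enough:

```python
# Sketch of a persistent holographic object: a world-anchored digital
# fountain that players must physically approach. Schema is hypothetical.
import math
from dataclasses import dataclass

@dataclass
class HoloAnchor:
    lat: float      # degrees
    lon: float      # degrees
    heading: float  # orientation in degrees
    asset_id: str

def meters_between(a, b):
    """Haversine great-circle distance between two anchors, in meters."""
    R = 6_371_000
    p1, p2 = math.radians(a.lat), math.radians(b.lat)
    dp = p2 - p1
    dl = math.radians(b.lon - a.lon)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(h))

fountain = HoloAnchor(40.7580, -73.9855, heading=90.0, asset_id="fountain-01")
player = HoloAnchor(40.7585, -73.9855, heading=0.0, asset_id="player")
print(round(meters_between(player, fountain)))  # → 56 (meters): close enough to render
```

Note the catch the paragraph above points at: consumer GPS is only accurate to a few meters at best, which is fine for deciding whether to render the fountain at all but nowhere near fine enough to pin it to a specific spot on the pavement; that is the finer-grained geolocation still missing.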

Belief Circles: a healthy indie creative fringe-economy and persistent holographic objects will make it possible to customize intersubjective experiences. People have a natural tendency to form cliques, parties and communities. Belief circles, a term coined by Vernor Vinge, will provide coherent community experiences for different guilds based on shared interests and shared aspirations. Users will opt in and out of various belief circles as they see fit. The same persistent holographic objects may appear differently to members of different circles and yet be recognized as sharing a common space and perhaps a common purpose. For instance, the holosign in front of the local Starbucks will have a permanent location and consistent semantic purpose, in AR space, but a polymorphic appearance. To paraphrase a truism, beauty will be in the eye of one’s belief circle.

Overlapping Dissociative Realities: divergent intersubjectivities will produce both a greater awareness of synchronicity – and a sense of deja vu as AR content is copied freely into multiple locations — as well as an increased sense of cognitive dissonance. Consider the example of going into Starbucks for coffee. The people waiting in line will likely each be members of varying belief circles and consequently will be having different experiences of the wait. This is not a large departure since we typically do not care about what other people in line are doing and even avoid paying attention unless they take too long making a selection. In this case, divergent belief circles make it easier to follow our natural instinct to avoid each other. Everyone in the holo valley is anonymous if they want to be. When one arrives at the head of the line, however, something more interesting happens. Even though the customer and the barista likely belong to different belief circles, they must interact, communicate, and perform an economic exchange; these two creatures from different worlds. What will that be like? Will one then lift a corner of the holo lenses in order to rub a sore eye only to discover that this isn’t a Starbucks at all but really a Dunkin’ Donuts which had silently bought out the other chain in a hostile takeover the previous week? Will your coffee taste any different if it looks exactly the same?

* 1996 witnessed a small skirmish between OpenGL and Direct3D that has subsequently come to be known as the API Wars. While the API Wars have had long-lasting ripples, I don’t see them as having the tectonic effect of the other historical phenomena I am describing – plus, anyway, Thomas Kuhn only provides three major examples of his thesis and I wanted to stick to that particular design pattern.

[Much gratitude to Joel and Nate for collaborating on these scenarios over a highly entertaining lunch.]

Top 21 HoloLens Ideas

holo

The image above is a best guess at the underlying technology being used in Microsoft’s new HoloLens headset. It’s not even that great a guess since the technology appears to still be in the prototype stage. On the other hand, the product is tied to the Windows 10 release date, so we may be seeing a consumer version – or at the very least a dev version – sometime in the fall.

Here are some things we can surmise about HoloLens:

a) the name may change – HoloLens is a good product name but isn’t quite where we might like it to be, in a league with Kinect, Silverlight or Surface for branding genius. In fact, Surface was such a good name, it was taken from one product group and simply given to another in a strange twist on the build vs buy vs borrow quandary. On the other hand, HoloLens sounds more official than the internal code name, Baraboo — isn’t that a party hippies throw themselves in the desert?

johnny mnemonic

b) this is augmented reality rather than virtual reality. Facebook’s Oculus Rift, which is an immersive fully digital experience, is an example of virtual reality. Other fictional examples include The Oasis from Ernest Cline’s Ready Player One, The Metaverse from Neal Stephenson’s Snow Crash, William Gibson’s Cyberspace and the VR simulation from The Lawnmower Man. Augmented reality involves overlaying digital experience on top of the real world. This can be accomplished using holography, transparent displays, or projectors. A great example of projector-based AR is the RoomAlive project by Hrvoje Benko, Eyal Ofek and Andy Wilson at Microsoft Research. HoloLens uses glasses or a head-rig – depending on how generous you feel – to implement AR. Magic Leap – with heavy investment from Google – appears to be doing the same thing. The now dormant Google Glass was neither AR nor VR, but was instead a heads-up display.

kgirl

c) HoloLens uses Kinect technology under the plastic covers. While the depth sensor in the Kinect v2 has a field of view of 70 degrees by about 60 degrees, the depth capability in HoloLens is reported to include a field of view of 120 degrees by 120 degrees. This indicates that HoloLens will be using the time-of-flight technology found in the Kinect v2 rather than the structured light of the Kinect v1. This setup requires both an IR emitter and a depth camera, combined with sophisticated timing and phase technology, to calculate depth efficiently and relatively inexpensively.
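The arithmetic behind phase-based time-of-flight is short enough to sketch: depth falls out of the phase shift of the modulated IR light, divided by twice the round trip. The 80 MHz modulation frequency below is a plausible placeholder, not a published HoloLens or Kinect spec:

```python
# Back-of-envelope for phase-based time-of-flight depth sensing.
# Modulation frequency is an assumed placeholder value.
import math

C = 299_792_458  # speed of light, m/s

def tof_depth(phase_shift_rad, mod_freq_hz):
    # Light travels out to the surface and back, hence the factor
    # of 2 folded into the 4*pi denominator.
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# A quarter-cycle phase shift at 80 MHz modulation:
d = tof_depth(math.pi / 2, 80e6)
print(round(d, 3))  # → 0.468 (meters)
```

The same formula also explains a classic ToF limitation: a full cycle of phase at 80 MHz wraps around at roughly 1.9 m, so real sensors combine several modulation frequencies to disambiguate longer distances.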

hands

d) the depth camera is being used for multiple purposes. The first is for gesture detection. One of the issues that faced both Oculus and Google Glass was that they were primarily display technologies. But a computer monitor is useless without a keyboard or mouse. Similarly, Oculus and Glass needed decent interaction metaphors. Glass relied primarily on speech commands and tapping and clicking. Oculus had nothing until its recent acquisition of Nimble VR. Nimble VR provides a depth camera optimized for hand and finger detection over a small range. This can be mounted in front of the Oculus headset. Conceptually, this allows people to use hand gestures and finger manipulations in front of the device. A virtual hand can be created as an affordance in the virtual world of the Oculus display, allowing users to interact with virtual objects and virtual interactive menus in vitro.

The depth sensor in HoloLens would work in a similar way except that instead of a virtual hand as affordance, it’s just your hand. You will use your hand to manipulate digital objects displayed on the AR lenses or to interact with AR menus using gestures.

An interesting question is how many IR sensors are going to be on the HoloLens device. From the pictures that have been released, it looks like we will have a color camera and a depth sensor for each eye, for a total of two depth cameras and two RGB cameras located near the joint between the lenses and the headband.

holo_minecraft

e) HoloLens is also using depth data for 3D reconstruction of real-world surfaces. These surfaces are then used as virtual projection surfaces for digital textures. Finally, the blitted image is displayed on the transparent lenses.

ra1

ra_2

This sort of reconstruction is a common problem in projection mapping scenarios. A great example of applying this sort of reconstruction can be found in the Halloween edition of Microsoft Research’s RoomAlive project. In the first image above, you are seeing the experience from the correct perspective. In the second image, the image is captured from a different perspective than the one that is being projected. From the incorrect perspective, it can be seen that the image is actually being projected on multiple surfaces – the various planes of the chair as well as the wall behind it – but foreshortened digitally and even color corrected to make the image appear cohesive to a viewer sitting at the correct position. One or more Kinects must be used to calibrate the projections appropriately against these multiple surfaces. If you watch the full video, you’ll see that Kinect sensors are used to track the viewer as she moves through the room and the foreshortening / skewing occurs dynamically to adjust to her changing position.

The Minecraft AR experience being used to show the capabilities of HoloLens requires similar techniques. The depth sensor is required not only to calibrate and synchronize the digital display to line up correctly with the table and other planes in the room, but also to constantly adjust the display as the player moves around the room.

eye-tracking

f) are the display lenses stereoscopic or holographic? At this point no one is completely sure, though indications are that this is something more than the stereoscopic display technique used in the Oculus Rift. While a stereoscopic display will create the illusion of depth and parallax by creating a different image for each lens, something holographic would actually be creating multiple images per lens and smoothly shifting through them based on the location of each pupil staring through its respective lens and the orientation and position of the player’s head.
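The stereoscopic half of that distinction is simple geometry: each eye sees the same scene shifted by a disparity that shrinks with distance. A minimal sketch, with an assumed interpupillary distance and an assumed focal length in pixels:

```python
# How a stereoscopic display fakes depth: per-eye pixel disparity for a
# point at a given distance. IPD and focal length are assumed values.

def disparity_px(depth_m, ipd_m=0.063, focal_px=1000):
    """Pixel offset between left- and right-eye images for a point at depth_m."""
    return ipd_m * focal_px / depth_m

near = disparity_px(0.5)   # an object half a meter away
far = disparity_px(10.0)   # an object ten meters away
print(round(near, 1), round(far, 1))  # → 126.0 6.3
```

A holographic display, by contrast, would have to recompute not just this one offset per eye but the whole light field as the pupil itself moves, which is what makes it so much harder to build.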

One way of achieving this sort of holographic display is to have multiple layers of lenses pressed against each other, using interference to shift the light projected into each pupil as the pupil moves. It turns out that the average person’s pupils typically move around rapidly in saccades, mapping and reconstructing images for the brain, even though we do not realize this motion is occurring. Accurately capturing these motions and shifting digital projections appropriately to compensate would create a highly realistic experience typically missing from stereoscopic reconstructions. It is rumored in the industry that Magic Leap is pursuing this type of digital holography.

On the other hand, it has also been reported that HoloLens is equipped with eye-tracking cameras on the inside of the frames, apparently to aid with gestural interactions. It would be extremely interesting if Microsoft’s route to achieving true holographic displays involved eye-tracking combined with a high display refresh rate rather than coherent light interference display technology as many people assume. Or, then again, it could just be stereoscopic displays after all.

occlusion

g) occlusion is generally considered a problem for interactive experiences. For augmented reality experiences, however, it is a feature. Consider a physical-to-digital interaction in which you use your finger/hand to manipulate a holographic menu. The illusion we want to see is of the hand coming between the player’s eyes and the digital menu. The player’s hand should block and obscure portions of the menu as he interacts with it.

The difficulty with creating this illusion is that the player’s hand isn’t really between the menu and the eyes. Really, the player’s hand is on the far side of the menu, and the menu is being displayed on the HoloLens between the player’s eyes and his hand. Visually, the hologram of the menu will bleed through and appear on top of the hand.

In order to re-establish the illusion of the menu being located on the far side of the hand, we need depth-sensors to accurately map an outline of the hand and arm and then cut a hand and arm shape out of the menu where the hand should be occluding it. This process has to be repeated as the hand moves in real-time and it’s kind of a hard problem.
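At per-pixel level, the occlusion fix described above is a depth comparison: render the menu only where the hand’s sensed depth is behind the menu plane. A toy sketch with tiny hand-written depth grids (in meters, with infinity meaning no hand in that pixel):

```python
# Depth-based occlusion masking: draw the holographic menu only where
# the sensed hand depth is *behind* the menu plane. Toy 3x3 depth maps.
INF = float("inf")

hand_depth = [
    [INF, 0.4, 0.4],   # the user's hand covers the upper-right corner,
    [INF, 0.4, INF],   # 0.4 m from the eyes
    [INF, INF, INF],
]
MENU_DEPTH = 0.6  # the hologram is rendered 0.6 m from the eyes

def menu_mask(hand, menu_depth):
    # True where the menu should be drawn (nothing physical in front of it).
    return [[d > menu_depth for d in row] for row in hand]

for row in menu_mask(hand_depth, MENU_DEPTH):
    print(row)
```

The hard part in practice is not this comparison but getting a clean, low-latency hand silhouette out of a noisy depth sensor at every frame.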

borg

h) misc sensors : best guess is that in addition to depth sensors, color cameras and eye-tracking cameras, we’ll also get a directional microphone, gyroscope, accelerometer and magnetometer. Some sort of 3D sound has been announced, so it makes sense that there is a directional microphone or microphone array to complement it. This is something that is available on both the Kinect v1 and Kinect v2. The gyroscope, accelerometer and magnetometer are also guesses – but the Oculus hardware has them to track quick head movements, head position and head orientation. It makes sense that HoloLens will need them also.
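A standard way to fuse the gyroscope and accelerometer guessed at above is a complementary filter: trust the gyro for fast head motion and the accelerometer’s gravity vector for long-term pitch. A one-axis sketch with made-up sample values:

```python
# Complementary filter for head pitch: blend integrated gyro rate with
# the accelerometer-derived angle. Sample values are invented.

def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """New pitch estimate: mostly the gyro-integrated angle, gently
    corrected toward the accelerometer's slower but drift-free reading."""
    return alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch

pitch = 0.0  # the filter starts out wrong: the head is actually tilted 10 degrees
for _ in range(100):  # one second of samples at 100 Hz, head held still
    pitch = complementary_filter(pitch, gyro_rate=0.0, accel_pitch=10.0, dt=0.01)
print(round(pitch, 2))  # pulled most of the way toward the true 10 degrees
```

Adding the magnetometer extends the same idea to yaw, where the accelerometer gives no information; that is presumably why all three sensors travel together on headsets.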

 bono

i) the current form factor looks a little big – bigger than the Magic Leap is supposed to be but smaller than the current Oculus dev units. The goal – really everyone’s goal, from Microsoft to Facebook to Google – is to continue to bring down the size of sensors so we can eventually have light-weight glasses rather than heavy head gear.

j) vampires, infrared sensors and transparent displays are all sensitive to direct sunlight. This consideration can affect the viability of some AR scenarios.

k) like all innovative technologies, the success of HoloLens will depend primarily on what people use it for. The myth of the killer app is probably not very useful anymore, but the notion that you need an app store to sell a device is a generally accepted universal constant. The success of the HoloLens will depend on what developers build for it and what consumers can imagine doing with it.

 

 

Top 21 Ideas

Many of these ideas are borrowed from other VR and AR technology. In most cases, HoloLens will simply provide a better way to implement these notions. These ideas come from movies, from art installations, and from many years working at an innovative marketing agency where we prototyped these ideas day in and day out.

 

1. Shopping

shopping

Amazon made one-click shopping make sense. Shopping and the psychology of shopping change when we make it more convenient, effectively turning instant gratification into a marketing strategy. Using HoloLens AR, we can remodel a room with virtual furniture and then purchase all the pieces on an interactive menu floating in the air in front of us when we find the configuration we want. We can try on and buy virtual clothes. With a wave of the hand we can stock our pantry, stock our refrigerator … wait, come to think of it, with decent AR, do we even need furniture or clothes anymore?

2. Gaming

 illumiroom

IllumiRoom was a Microsoft project that never quite made it to product but was a huge hit on the web. The notion was to extend the XBox One console with projections that reacted to what was occurring in the game but could also extend the visuals of the game into the entire living room. IllumiRoom (which I was fortunate enough to see live the last time I was in Redmond) also uses a Kinect sensor to scan the room in order to calibrate projection mapping onto surfaces like bookshelves, tables and potted plants. As you can guess, this is the same team that came up with RoomAlive. A setup that includes a $1,500 projector and a Kinect is a bit complicated, especially when a similar effect can now be created using a single unit HoloLens.

hud

The HoloLens device could also be used for in-game Heads-Up notifications or even as a second screen. It would make a lot of sense if XBox integration is on the roadmap and would set XBox apart as the clear leader in the console wars.

3. Communication

sw

‘nuff said.

4. Home Automation

clapper

Home automation has come a long way and you can now easily turn lights on and off with your smart phone from miles away. Turning your lights on and off from inside your own house may still involve actually touching a light switch. Devices like the Kinect have the limitation that they can only sense a portion of a room at a time. Many ideas have been thrown out to create better gesture recognition sensors for the home, including using wifi signals that go through walls to detect gestures in other rooms. If you were actually wearing a gestural device around with you, this would no longer be a problem. Point at a bulb, make a fist, “put out the light, and then put out the light” to quote the Bard.
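The point-and-fist light switch described above is, at bottom, a tiny state machine over a stream of recognized gestures. A sketch, assuming some upstream recognizer (hypothetical here) has already labeled each frame:

```python
# Point-and-fist home automation: a minimal state machine over a stream
# of (gesture, target) events from a hypothetical gesture recognizer.

def run_gestures(events, lights):
    target = None
    for gesture, bulb in events:
        if gesture == "point":
            target = bulb                        # aim at a specific bulb
        elif gesture == "fist" and target:
            lights[target] = not lights[target]  # "put out the light"
            target = None                        # require a fresh point each time
    return lights

lights = {"hall": True, "study": False}
events = [("point", "hall"), ("fist", None), ("point", "study"), ("fist", None)]
print(run_gestures(events, lights))  # → {'hall': False, 'study': True}
```

Requiring a fresh point before every fist is a deliberate design choice: it keeps an idle clenched hand from strobing the lights.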

5. Education

Microsoft-future-vision

While cool visuals will make education more interesting, the biggest benefit of HoloLens for education is simple access. Children in rural areas in the US have to travel long distances to achieve a decent education. Around the world, the problem of rural education is even worse. What if educators could be brought to the children instead? This is one of the stated goals of Facebook’s purchase of Oculus Rift and HoloLens can do the same job just as well and probably better.

6. Medical Care

bodyscan

Technology can be used for interesting diagnostic and rehabilitation functions. The depth sensors that come with HoloLens will no doubt be used in these ways eventually. But like education, one of the great problems in medical care right now is access. If we can’t bring the patient to the doctor, let’s bring the GP to the patient and do regular check ups.

7. Holodeck

matrix-i-know-kung-fu

The RoomAlive project points the way toward building a Holodeck. All we have to do is replace Kinect sensors with HoloLens sensors, projectors with holographic displays, and then try not to break the HoloLens strapped to our heads as we learn Kung Fu.

8. Windows

window

Have you ever wished you could look out your window and be somewhere else? HoloLens can make that happen. You’ll have to block out natural light by replacing your windows with sheetrock, but after that HoloLens can give you any view you want.

fifteen-million-merits

But why stop at windows? You can digitize all your walls if you want, and HoloLens’ depth technology will take care of the rest.

9. Movies and Television

vr-cinema-3d

Oculus Rift and Samsung Gear VR have apps that let you watch movies in your own virtual theater. But wouldn’t it be more fun to watch a movie with your entire family? With HoloLens we can all be together on the couch but watch different things. They can watch Barney on the flatscreen while I watch an overlay of Meet the Press superimposed on the screen. Then again, with HoloLens maybe I could replace my expensive 60” plasma TV with a piece of cardboard and just watch that instead.

10. Therapy

whitenoise

It’s commonly accepted that white noise and muted colors relax us. Controlling our environment helps us to regulate our inner states. Behavioral psychology is based on such ideas and the father of behavioral psychology, B. F. Skinner, even created the Skinner box to research these ideas – though I personally prefer Wilhelm Reich’s Orgone box. With 3D audio and lenses that extend over most of your field of view, HoloLens can recreate just the right experience to block out the world after a busy day and just relax. shhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh.

11. Concerts

burning-man-festival-nevada

Once a year in the Nevada desert a magical music festival is held called Baraboo. Or, I don’t know, maybe it’s held in Tennessee. In any case, getting to festivals is really hard and usually involves being around people who aren’t wearing enough deodorant, large crowds, and buying plastic bottles of water for $20. Wouldn’t it be great to have an immersive festival experience without all the things that get in the way? Of course, there are those who believe that all that other stuff is essential to the experience. They can still go and be part of the background for me.

12. Avatars

avatar-body

Gamification is a huge buzzword at digital marketing agencies. Undergirding the hype is the realization that our digital and RL experiences overlap and that it sometimes can be hard to find the seams. Vernor Vinge’s 2001 novella Fast Times at Fairmont High draws out the implications of this merging of physical and digital realities and the potential for the constant self reinvention we are used to on the internet bleeding over into the real world. Why continue with the name your parents gave you when you can live your AR life as ByteMst3r9999? Why be constrained by your biological appearance when you can project your inner self through a fun and bespoke avatar representation? AR can ensure that other people only see you the way that you want them to.

13. Blocking Other People’s Avatars

BlackBlock

The flip side of an AR society invested in an avatar culture is the ability to block people who are griefing us. Parents can call a time out and block their children for ten-minute periods. Husbands can block their wives. We could all start blocking our co-workers on occasion. For serious offenses, people face permanent blocking as a legal sanction for bad behavior by the game masters of our augmented reality world. The concept was brilliantly played out in the recent Black Mirror Christmas special starring Jon Hamm. If you haven’t been keeping up with Black Mirror, go check it out. I’ll wait for you to finish.

14. Augmented Media

fiducial

Augmented reality today typically involves a smart phone or tablet and a fiducial marker. The fiducial is a tag or bar code that indicates to the app on your phone where an AR experience should be placed. Typically you’ll find the fiducial in a magazine ad that encourages you to download an app to see the hidden augmented content. It’s novel and fun. The problem involves having to hold up your tablet or phone for a period of time just to see what is sometimes a disappointing experience. It would be much more interesting to have these augmented media experiences always available. HoloLens can be always on and searching for these types of augmented experiences as you read the latest New Yorker or Wired. They needn’t be confined to ads, either. Why can’t the whole magazine be filled with AR content? And why stop at magazines? Comic books with additional AR content would change the genre in fascinating ways (Marvel’s online version already offers something like this, though rudimentary). And then imagine opening a popup book where all the popups are augmented, a children’s book where all the illustrations are animated, or a textbook that changes on the fly and updates itself every year with the latest relevant information – a kindle on steroids. You can read about that possibility in Neal Stephenson’s Diamond Age – only available in non-augmented formats for now.
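Once a fiducial has been located in the camera image, the app’s remaining job is to decode its bit pattern into an ID that keys into the overlay content. A toy decode of a 4x4 binary grid (not a real ArUco or QR dictionary, just the flatten-to-integer idea):

```python
# What an AR app does after spotting a fiducial, boiled down: read the
# marker's bit grid as an integer ID. Toy 4x4 code, not a real dictionary.

def decode_marker(grid):
    """Flatten a binary grid row by row into an integer marker ID."""
    marker_id = 0
    for row in grid:
        for bit in row:
            marker_id = (marker_id << 1) | bit
    return marker_id

grid = [
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
]
print(decode_marker(grid))  # → 42435: this ID selects the content to overlay
```

Real marker systems add error-correcting bits and check all four rotations of the grid, since the camera has no idea which way up the magazine is being held.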

15. Terminator Vision

robocop

This is what we thought Google Glass was supposed to provide – but then it didn’t. That’s okay. With vision recognition software and the two RGB cameras on HoloLens, you’ll never forget a name again. Instant information will appear telling you about your surroundings. Maps and directions will appear when you gesture for them. Shopping associates will no longer have to wing it when engaging with customers. Instead, HoloLens will provide them with conversation cues and decision trees that will help the associate close the sale efficiently and effectively. Dates will be more interesting as you pull up the publicly available medical, education and legal histories of anyone who is with you at dinner. And of course, with the heartbeat monitor and ability to detect small fluctuations in skin tone, no one will ever be able to lie to you again, making salary negotiations and buying a car a snap.

16. Wealth Management

hololens11140

With instant tracking of the DOW, S&P and NASDAQ along with a gestural interface that goes wherever you go, you can become a day trader extraordinaire. Lose and gain thousands of dollars with a flick of your finger.

17. Clippit

clippit

Call him Jarvis if it helps. Some sort of AI personal assistant has always been in the cards. Immersive AR will make it a reality.

18. Impossible UIs

minority_report

phone

3dtouch

cloud atlas floating computer

I don’t watch movies the way other people do. Whenever I go to see a futuristic movie, I try to figure out how to recreate the fantasy user experiences portrayed in it. Minority Report is an easy one – it’s a large area display, possibly projection, with Kinect-like gestural sensors. The communication device from the Total Recall reboot is a transparent screen and either capacitive touch or more likely a color camera doing blob recognition. The 3D touchscreen from Pacific Rim has always had me stumped. Possibly some sort of Leap Motion device attached to a Pepper’s Ghost display? The one fantasy UX I could never figure out until I saw HoloLens is the “Orison” computer made up of floating disks in Cloud Atlas. The Orison screens are clearly digital devices in a physical space – beautiful, elegant, and the sort of intuitive UX for which we should strive. Until now, they would have been impossible to recreate. Now, I’m just waiting to get my hands on a dev device to try to make working Orison displays.

19. Wiki World

wikipedia

Wiki World is a simple extension of terminator vision. Facts floating before your eyes, always available, always on. No one will ever have to look up the correct spelling for a word again or strain his memory for a sports statistic. What movie was that actor in? Is grouper ethical to eat? Is Javascript an object-oriented language? Wiki world will make memorization obsolete and obviate all arguments – well, except for edit wars between Wikipedia editors, of course.

20. Belief Circles

wwc

Belief circles are a concept from Vernor Vinge’s Hugo award winning novel Rainbows End. Augmented reality lends itself to self-organizing communal affiliations that will create inter-subjective realities that are shared. Some people will share sci-fi themes. Others might go the MMO route and share a fantasy setting with a fictional history, origin story, guilds and factions. Others will prefer football. Some will share a common religion or political vision. All of these belief circles will overlap and interpenetrate. Taking advantage of these self-generating belief circles for content creation and marketing will open up new opportunities for freelance creatives and entrepreneurs over the next ten years.

21. Theater of Memory

camillo1.gif

Giulio Camillo’s memory theater belongs to a long tradition of mnemonic technology going back to Roman times and used by orators and lawyers to memorize long speeches. The scholar Frances Yates argued that it also belonged to another Renaissance tradition of neoplatonic magic that has since been superseded by science in the same way that memory technology has been superseded by books, magazines and computers. What Frances Yates – and after her Ioan Couliano – tried to show, however, was that in dismissing these obsolete modes of understanding the world, we also lose access to a deeper, metaphoric and humanistic way of seeing the world and are the poorer for it. The theater of memory is like Feng Shui – also discredited – in that it assumes that the way we construct our surroundings also affects our inner lives and that there is a sympathetic relationship between the macrocosm of our environment and the microcosm of our emotional lives. I’m sounding too new agey so I’ll just stop now. I will be creating my own digital theater of memory as soon as I can, though, as a personal project just for me.

upgraded blog engine

I just finished upgrading my blogengine.net from version 1.6 to version 3.1.1. Long overdue and about half a day’s work. BlogEngine.net is definitely a tool for developers rather than consumers. Now that it’s up, I’m pretty happy, though. Thanks also to the OrcsWeb support folks (OrcsWeb hosts my blog) who helped me when I kept locking myself out of the system because I can’t remember my ftp password.

I started this out with dasBlog back in 2006. I’m glad that I’ve only had to do a few upgrades in the years between.

$5 eBooks from Packt

The technical publisher Packt is offering eBooks for $5 through January 6th, 2015 as a holiday promotion. I encourage you to look very carefully through their selection and see what appeals. If you have time to read on, however, I’d like to explain in greater detail my mixed feelings about Packt (this was probably not the marketing department’s intention when they sent me an email asking me to publicize the promotion but I think it will ultimately be helpful to them).

Packt Publishing has always been hit or miss for me. They are typically much more adventurous regarding computer book topics than other publishers like Apress or O’Reilly (Apress is my publisher, by the way, and are pretty fantastic to work with and very professional). At the same time, I have the impression that Packt’s bar for accepting authors tends to be lower than other publishers’, which allows them to be prolific in their offerings but at the same time entails that they produce, quite honestly, some clunkers.

A specific example of one of their clunkers would be the Packt book Unity iOS Game Development Beginner’s Guide by Greg Pierce. The topic sounds great (at least it did to me) but it turns out the book mostly just copies from publicly available documentation.

To quote from one of the Amazon reviews from 2012 by C Toussieng:

“This book is unbelievably bad. What specifically? All of it. It takes information which can be easily garnered from the Unity and/or Apple websites, distills it down to a minimally useful amount, then charges you for it.”

And this one from 2012 by JasonR:

“The book basically covers a few pages of the Unity docs, then goes into 3rd party plugins they recommend, each plugin gets a couple of pages. Frankly, a simple search on Google will give you more insight.”

This is a shame because, even as more learning material – much of it free – appears on the Internet and displaces technical books from their traditional place in the software ecosystem, there is still an important role for print books (and their digital equivalent, the eBook). Online material can be produced quickly, often covering about a fifth to a tenth of a chapter of a professionally published book, but it tends to lack the cohesiveness that is only possible in a work that has taken months to write and rewrite. A 300-page software book is a distillation of experience which has undergone multiple revisions and fact checking. A really good software book tries to tell a story.

The flip side, of course, is that modern technical books quickly become outdated while technical blog posts simply disappear. All in all, though, I find that sitting down with a book that tries to explain the broader impact of a given technology serves a different and more important purpose than a web tutorial that only shows how to perform streamlined – and often ideal – tasks.

Apropos of the thesis that good software books are distillations of years of experience – we could even say distillations of 10,000 hours of experience – I’d like to point you to some of the gems I’ve discovered through Packt Publishing over the years.

All of the Packt OpenCV books are interesting. I’m particularly fond of Mastering OpenCV with Practical Computer Vision Projects by Daniel Lélis Baggio, but I think all of them – at least the ones I’ve read – are pretty good. Daniel’s bio says that he “…started his works in computer vision through medical image processing at InCor (Instituto do Coração – Heart Institute) in São Paulo, Brazil.”

Another great one is Mastering openFrameworks: Creative Coding Demystified by Denis Perevalov. According to his bio, Denis is a computer vision research scientist at the Ural Branch of the Russian Academy of Sciences and co-author of two Russian patents of robotics.

One I really like simply because the topic is so specific is Kenny Lammers’ Unity Shaders and Effects Cookbook. His bio states that Kenny has been in the game industry for 13 years working for companies like “… Microsoft, Activision, and the late Surreal Software.”

I hope a theme is emerging here. The people who write these books actually have a lot of experience and are trying to pass their knowledge on to you in something more than easily digestible exercises. Best of all – ignoring the example from above – the material is typically highly original. It isn’t copied and pasted from 20 other websites covering the same material. Instead, the reader gets an opinionated and distinct take on the technology covered in each of these books.

What I especially appreciate about the $5 promotions Packt occasionally runs is that, for five dollars, you aren’t really obligated to read the entire book to get your money’s worth. I’ve taken advantage of similar deals in the past to simply read very specific chapters that are of interest to me, such as Basic heads-up-display with custom GUI from Dr. Sebastian Koenig’s Unity for Architectural Visualization or Lighting and Rendering from Jen Rizzo’s Cinema 4D Beginner’s Guide. It’s also a great price when all I want to do is skim a book on a topic I know pretty well in order to find out if there are any holes in my knowledge. Mastering Leap Motion by Brandon Sanders was extremely helpful for this and, indeed, there were holes in my knowledge.

According to his biography, by the way, Brandon is “… an 18-year-old roboticist who spends much of his time designing, building, and programming new and innovative systems, including simulators, autonomous coffee makers, and robots for competition. At present, he attends Gilbert Finn Polytechnic (which is a homeschool) as he prepares for college. He is the founder and owner of Mechakana Systems, a website and company devoted to robotic systems and solutions.”