The Imaginative Universal

Studies in Virtual Phenomenology -- by @jamesashley, Kinect MVP and author

One Kinect to rule them all: Kinect 2 for Xbox One

[Image: two_kinects]

Yes, that’s a bit of a confusing title, but it seems best to lay out the complexity upfront. So far there have been two generations of the Kinect sensor, each of which combines a color camera, a depth-sensing camera, an infrared emitter (used primarily by the depth-sensing camera) and a microphone array that works as a virtual directional shotgun microphone. Additional software, the Kinect SDK, then allows you to write programs that read these data feeds as well as interpolate them into animated 3D bodies that represent people’s movements.
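To make that concrete, here is a rough sketch of what reading one of those feeds looks like with the Kinect for Windows SDK 2.0 and its Microsoft.Kinect assembly. This is an illustrative console snippet, not production code – it assumes a sensor is plugged in and the SDK is installed:

```csharp
using System;
using Microsoft.Kinect;

class BodyFeedSketch
{
    static Body[] _bodies;

    static void Main()
    {
        // Get the default sensor and start its data streams.
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        // Open a reader on the body (skeleton) feed.
        BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();
        reader.FrameArrived += (s, e) =>
        {
            using (BodyFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;

                if (_bodies == null)
                    _bodies = new Body[frame.BodyCount];
                frame.GetAndRefreshBodyData(_bodies);

                foreach (Body body in _bodies)
                {
                    if (!body.IsTracked) continue;
                    // Each tracked body exposes joints in camera space (meters).
                    CameraSpacePoint hand = body.Joints[JointType.HandRight].Position;
                    Console.WriteLine("Right hand at ({0:F2}, {1:F2}, {2:F2})",
                        hand.X, hand.Y, hand.Z);
                }
            }
        };

        Console.ReadLine(); // keep the app alive while frames arrive
        reader.Dispose();
        sensor.Close();
    }
}
```

The color, infrared and audio feeds follow the same pattern: open the sensor, open a reader on the corresponding source, and handle frames as they arrive.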

Microsoft has just announced that it will stop producing separate versions of the Kinect v2 – one for Windows and one for the Xbox One – and will instead encourage developers to purchase the Kinect for Windows Adapter, which plugs a Kinect for Xbox One into a PC. In fact, the adapter has been available since last year; the announcement just makes it official. All in all this is a good thing. With the promise that Universal Windows Apps will be portable to Xbox, it makes much more sense if the sensors – and, more importantly, the firmware installed on them – are exactly the same whether you are on a PC running Windows 8/10 or an Xbox running Xbox OS.

This announcement also vastly simplifies the overall Kinect hardware story. Up to this point, there weren’t just two generations of Kinect hardware but also two versions of the current Kinect v2 hardware, one for the Xbox and one for Windows (for a total of four different devices). The Kinect hardware, both in 2010 and in 2013, has always been built first as a gaming device. In each case, it was then adapted to be used on Windows machines, in 2012 and 2014 respectively.

The now discontinued Kinect for Windows v2 differed from the Kinect for the Xbox One in both hardware and software. To work with Windows machines, the Kinect for Windows v2 device used a specialized power adapter to pump additional power to the hardware (a splitter in the adapter attaches the hardware to both a USB port and a wall plug). The Xbox One, being proprietary hardware, is able to pump enough juice to its Kinect sensor without needing a special adapter. Additionally, the firmware for the original Kinect for Windows v1 sensor diverged over time from the Kinect for Xbox’s firmware – which led to differences in how the two versions of the hardware performed. It is now clear that this will not happen with Kinect v2.

Besides the four hardware devices and their respective firmware, the loose term “Kinect” can also refer to the software APIs used to incorporate Kinect functionality into a program. For the first-generation hardware, the Kinect for Windows SDK, versions 1.0 through 1.8, was used to program against the original Kinect for Windows sensor. For the Kinect for Xbox One with the Kinect for Windows Adapter, you will want the Kinect for Windows SDK 2.0 (“for Windows” remains in the title for now, even though you will be using it with a Kinect for Xbox One – and of course you can still use it with a Kinect for Windows v2 sensor if you happened to buy one prior to its discontinuation). There are also other SDKs floating around, such as OpenNI and libfreenect.

[Much gratitude to Kinect MVP Bronwen Zande for helping me get the details correct.]


Unity 5 and Kinect 2 Integration

[Image: pointcloud]

Until just this month, one of the best Kinect 2 integration tools was hidden, like Rappaccini’s daughter, inside a walled garden. Microsoft released a Unity3D plugin for the Kinect 2 in 2014. Unfortunately, Unity 4 only supported plugins (bridges to non-Unity technology) if you owned a Unity Pro license, which typically cost over a thousand dollars per year.

On March 3rd, Unity released Unity 5, which includes plugin support in the free Personal edition – making it suddenly very easy to start building complex experiences, like point cloud simulations, that would otherwise require a decent knowledge of C++. In this post, I’ll show you how to get started with the plugin and get a Kinect 2 application running in about 15 minutes.

(As an aside, I always have trouble keeping this straight: Unity has plugins, openFrameworks has addons, and Cinder has blocks. Visual Studio has extensions and add-ins as well as NuGet packages after a confusing few years of rebranding efforts. There may be a difference between them but I can’t tell.)

1. First you are going to need a Kinect 2 and the Unity 5 software. If you already have a Kinect 2 attached to your Xbox One, then this part is easy. You’ll just need to buy a Kinect Adapter Kit from the Microsoft Store, which will allow you to plug your Xbox One Kinect into your PC. The Kinect for Windows SDK 2.0 is available from the K4W2 website, though everything you need should automatically install when you first plug your Kinect into your computer. You don’t even need Visual Studio for this. Finally, you can download Unity 5 from the Unity website.

[Image: linktounityplugin]

2. The Kinect 2 plugin for Unity is a bit hard to find. You can go to this Kinect documentation page and scroll halfway down to find the link called Unity Pro Packages. Alternatively, here is a direct link to the most current version of the plugin as of this writing.

[Image: unitypluginfolder]

3. After you finish downloading the zip file (currently called KinectForWindows_UnityPro_2.0.1410.zip), extract it to a known location. I like to use $\Documents\Unity. Inside you will find three plugins as well as two sample scenes. The three Kinect plugins are the basic one, a face-tracking plugin, and a visual gesture builder plugin, each wrapping functionality from the Kinect 2 SDK.

[Image: newunityproject]

4. Fire up Unity 5 and create a new project in your known folder. In my case, I’m creating a project called “KinectUnityProject” in the $\Documents\Unity folder where I extracted the Kinect plugins and related assets.

[Image: import]

5. Now we will add the Kinect plugin into our new project. When the Unity IDE opens, select Assets from the top menu and then select Import Package | Custom Package …

[Image: selectplugin]

6. Navigate to the folder where you extracted the KinectforWindows_Unity components and select the Kinect2.0.xxxxx.unitypackage file. That’s our plugin along with all the scripts needed to build a Kinect-enabled Unity 5 application. After clicking on “Open”, an additional dialog window will open up in the Unity IDE called “Importing Package” with lots of files checked off. Just click on the “Import” button at the lower right corner of the dialog to finish the import process. Two new folders will now be added to your Project window under the Assets folder called Plugins and Standard Assets. This is the baseline configuration for any Kinect project in Unity.

[Image: unitywarning]

7. Now we’ll quickly get a Kinect with Unity project going by simply copying one of the sample scenes provided by the Microsoft Kinect team. Go into File Explorer, copy the folder called “KinectView” out of the KinectforWindows_Unity folder where you extracted the plugins, and paste it into the Assets directory in your project folder. Then return to the Unity 5 IDE. A warning message will pop up letting you know that there are compatibility issues between the plugin and the newest version of Unity and that files will automatically be updated. Go ahead and lie to the Unity IDE: click on “I Made a Backup.”

[Image: added_assets]

8. A new folder has been added to your Project window under Assets called KinectView. Select KinectView and then double-click the MainScene scene contained inside it. This will open the Kinect-enabled scene. Click the play arrow near the top center of the IDE to see your application in action. The Kinect will automatically turn on, and you should see a color image, an infrared image, a rendering of any bodies in the scene, and finally a point cloud simulation.
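If you open the KinectView scripts, you’ll see that everything runs through the plugin’s Windows.Kinect namespace, which mirrors the desktop SDK. A minimal sketch of such a script – modeled loosely on the sample’s body source manager, with illustrative names – looks something like this:

```csharp
using UnityEngine;
using Windows.Kinect; // provided by the Kinect 2 Unity plugin

// Illustrative sketch: polls the sensor for body data once per Unity frame.
public class KinectBodySource : MonoBehaviour
{
    private KinectSensor _sensor;
    private BodyFrameReader _reader;
    private Body[] _bodies;

    void Start()
    {
        _sensor = KinectSensor.GetDefault();
        if (_sensor != null)
        {
            _reader = _sensor.BodyFrameSource.OpenReader();
            if (!_sensor.IsOpen)
                _sensor.Open();
        }
    }

    void Update()
    {
        if (_reader == null) return;

        // Unlike the event-driven desktop pattern, the Unity sample polls.
        var frame = _reader.AcquireLatestFrame();
        if (frame != null)
        {
            if (_bodies == null)
                _bodies = new Body[_sensor.BodyFrameSource.BodyCount];
            frame.GetAndRefreshBodyData(_bodies);
            frame.Dispose();
        }
    }

    void OnApplicationQuit()
    {
        if (_reader != null) _reader.Dispose();
        if (_sensor != null && _sensor.IsOpen) _sensor.Close();
    }
}
```

Other GameObjects in the scene can then read the `_bodies` array each frame to drive skeleton rendering or the point cloud.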

[Image: allthemarbles]

9. To build the app, select File | Build & Run from the top menu. Select Windows as your target platform in the next dialog and click the Build & Run button at the lower right corner. Another dialog appears asking you to select a location for your executable and a name. After selecting an executable name, click on Save in order to reach the final dialog window. Just accept the default configuration options for now and click on “Play!”. Congratulations. You’ve just built your first Kinect-enabled Unity 5 application!

The Next Book

[Image: min_lib]

The development community deserves a great book on the Kinect 2 sensor. Sadly, I no longer feel I am the person to write that book. Instead, I am abandoning the Kinect book project I’ve been working on, off and on, over the past year in order to devote myself to a book on the Microsoft holographic computing platform and the HoloLens SDK. I will be reworking the material I’ve collected so far for the Kinect book as blog posts over the next couple of months.

As anyone who follows this blog will know, my imagination has of late been captivated and ensorcelled by augmented reality scenarios. The book I intend to write is not just a how-to guide, however. While I recognize the folly of this, my intention is to write something that is part technical manual and part design guide, part math tutorial, part travel guide and part cookbook. While working on the Kinect book, I came to realize that it is impossible to talk about gestural computing without entering into a dialog with Maurice Merleau-Ponty’s Phenomenology of Perception and Umberto Eco’s A Theory of Semiotics. At the same time, a good book on future technologies should also cover the renaissance in theories of consciousness that occurred in the mid-’90s and culminated with David Chalmers’ masterwork The Conscious Mind. Descartes, Bergson, Deleuze, Guattari and Baudrillard obviously cannot be overlooked either in a book dealing with the topic of the virtual, though I can perhaps elide a bit.

A contemporary book on technology can no longer stay within the narrow limits of a single technology, as was common 10 or so years ago. Things move at too fast a pace, and there are so many different ways to accomplish a given task that choosing between them depends not only on that old saw “the right tool for the job” but also on taste, extended community and prior knowledge. To write a book on augmented reality technology, even when sticking to one device like the HoloLens, will require introducing the uninitiated to such wonderful platforms as openFrameworks, Cinder, Arduino, Unity, the Unreal Engine and WPF. It will have to cover C#, since that is by and large the preferred language in the Microsoft world, but also help C# developers overcome their fear of modern C++ and provide a roadmap from one to the other. It will also need to expose the underlying mathematics that developers must grasp in order to work in a 3D world – and astonishingly, software developers know very little math.

Finally, as holographic computing is a wide new world and the developers who take to it will be taking up a completely new role in the workforce, the book will have to find its way to the right sort of people who will have the aptitude and desire to take up this mantle. This requires a discussion of non-obvious skills such as a taste for cooking and travel, an eye for the visual, a grounding in architecture and an understanding of how empty spaces are constructed, a general knowledge of literary and social theory. The people who create the next world, the augmented world, cannot be mere engineers. They will also need to be poets and madmen.

I want to write a book for them.