Ash Thorp Shout Out

Ash Thorp is another alumnus of Atlanta’s ReMIX conference for code and design who has made it big (or, to be accurate, bigger). As with Chris Twigg and 3Gear, who were profiled in the last post, we were fortunate to have Ash present at ReMIX a few years ago, back when it was still possible to get him.

Ash was recently named to the Verge 50, squeezed somewhere between Tim Cook and Matthew McConaughey. The Verge’s Fifty of 2014 is the site’s list of the “most important people at the intersection of technology, art, science and culture.”

Ash Thorp is a visual designer for movies, designing both title sequences and the overall look and feel of a film. His specialty is sci-fi and superhero movies, and you’ve seen his work everywhere from Prometheus to Ender’s Game and beyond. He first came to our attention through his website, where he lifted the curtain a bit and showed how film design is actually done. From this, we could start piecing together the similarities between his work and the more standard graphic design typically done for digital and print.

Ash Thorp is also the host of the Collective Podcast, a series of open-ended and meandering conversations about design, life and the universe. It is an earnest attempt by creative professionals to connect the world with their work and to use their work as designers as a prism for understanding the world. There is nothing quite like it in my field, the software development world, and we are all the poorer for it.

He’s also done lots of other cool projects, like his homage to Ghost in the Shell, that are all about being creative and sharing inspiration without an underlying profit motive. He is constantly trying to share and change and mold and give, which is as much a testament to his boundless energy as it is to his essentially giving spirit.

Here is the brilliant presentation Ash gave at the ReMIX Conference a few years ago, revealing his approach to … well … work, life and the universe.

The Future of Interface Technology – Ash Thorp from ReMIX South on Vimeo.

Congrats to NimbleVR

I had the opportunity to meet Rob Wang, Chris Twigg and Kenrick Kin of 3Gear several years ago when I was in San Francisco demoing retail experiences using the Microsoft Kinect and Surface Table at the 2011 Oracle OpenWorld conference. I had been following their work on stereoscopic finger and hand tracking with dual Kinects, sent them what was basically a fan letter, and they were kind enough to send me an invitation to their headquarters.

At the time, 3Gear was sharing office space with several other companies in a large warehouse. Their finger tracking technology blew me away and I came away with the impression that these were some of the smartest people I had ever met working with computer vision and the Kinect. After all, they’re basically all PhDs with backgrounds at companies like Industrial Light & Magic and Pixar.

I’ve written about them several times on this blog and nominated them for the Kinect v2 preview program. I was extremely excited when Chris agreed to present at the ReMIX conference some friends and I organized in Atlanta a few years ago for designers and developers. Here is a video of Chris’s amazing talk.

Bringing ‘Minority Report’ to your Desk: Gestural Control Using the Microsoft Kinect – Chris Twigg from ReMIX South on Vimeo.

Since then, 3Gear has worked on the problem of finger and hand tracking on various commercial devices in multiple configurations. In October of 2014 the guys at 3Gear launched a Kickstarter project for a sensor they had developed called Nimble Sense: a depth sensor built from commodity components that is intended to be mounted on the front of an Oculus Rift headset. It tackles the difficult problem of providing a good input device for a VR system that, by its nature, prevents you from seeing your own hands.

The solution, of course, is to represent the interaction controller – in this case the user’s hands – in the virtual world itself. Leap Motion, which produces another cool finger tracking device, is also working on a solution for this. The advantage the 3Gear people have, of course, is that they have been working on this exact problem, with deep expertise in gesture tracking – rather than merely finger tracking – as well as visualization.

After exceeding their original goal in pledges, 3Gear abruptly cancelled their Kickstarter on December 11th, and the official 3Gear.com website I had been going to for news updates about the company was replaced.

This is actually all good news. Nimble VR, a rebranding of 3Gear for the Nimble Sense project, has been purchased by Oculus (which in turn, you’ll recall, was purchased by Facebook several months ago for around $2 billion).

For me this is a Cinderella story. 3Gear / Nimble VR is an extremely small team of extremely smart people who have passed on much more lucrative job opportunities in order to pursue their dreams. And now they’ve achieved their much-deserved big payday.

Congratulations Rob, Chris and Kenrick!

Projecting Augmented Reality Worlds

WP_20141105_11_05_56_Raw

In my last post, I discussed the incredible work being done with augmented reality by Magic Leap. This week I want to talk about implementing augmented reality with projection rather than with glasses.

To be more accurate, many varieties of AR experience are projection-based. The technical differences depend on which surface is being projected onto. Google Glass projects onto a surface centimeters from the eye. Magic Leap is reported to project directly onto the retina (virtual retinal display technology).

AR experiences being developed at Microsoft Research, which I had the pleasure of visiting this past week during the MVP Summit, are projected onto pre-existing rooms without the need to rearrange the room itself. Using fairly common projection mapping techniques combined with very cool technology such as the Kinect and Kinect v2, the room is scanned and appropriate distortions are created to make projected objects look “correct” to the observer.

An important thing to bear in mind as you look through the AR examples below is that they are not built using esoteric research technology. These experiences are all built using consumer-grade projectors, Kinect sensors and Unity 3D. If you are focused and have a sufficiently strong desire to create magic, these experiences are within your reach.
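To make that concrete, here is a minimal sketch – in Unity-flavored C#, not the actual RoomAlive code – of the view-dependent half of the trick: use the Kinect v2 body stream to track the observer’s head and drive a virtual camera from it. The class and field names are my own, and the projector-side warp is assumed to come from a separate projector/Kinect calibration step (the kinectToRoom transform below).

```csharp
// A minimal sketch, not the RoomAlive code: track the observer's head with the
// Kinect v2 body stream and drive a virtual camera from it, so content can be
// rendered correctly for that viewpoint. Requires the Kinect v2 Unity plugin
// (Windows.Kinect). The kinectToRoom transform is assumed to come from a
// separate projector/Kinect calibration step.
using UnityEngine;
using Windows.Kinect;

public class ViewerHeadTracker : MonoBehaviour
{
    public Camera viewerCamera;                           // renders the scene from the observer's eyes
    public Matrix4x4 kinectToRoom = Matrix4x4.identity;   // maps Kinect camera space into the room model

    private KinectSensor sensor;
    private BodyFrameReader bodyReader;
    private Body[] bodies;

    void Start()
    {
        sensor = KinectSensor.GetDefault();
        bodyReader = sensor.BodyFrameSource.OpenReader();
        bodies = new Body[sensor.BodyFrameSource.BodyCount];
        sensor.Open();
    }

    void Update()
    {
        using (var frame = bodyReader.AcquireLatestFrame())
        {
            if (frame == null) return;
            frame.GetAndRefreshBodyData(bodies);
        }

        foreach (var body in bodies)
        {
            if (body == null || !body.IsTracked) continue;

            // Head position in meters, Kinect camera space.
            var head = body.Joints[JointType.Head].Position;
            var headInRoom = kinectToRoom.MultiplyPoint3x4(new Vector3(head.X, head.Y, head.Z));

            // Rendering the scene from here, then warping it through the projector's
            // calibrated pose, is what makes the projection look "correct" to this observer.
            viewerCamera.transform.position = headInRoom;
            break;   // single-observer case
        }
    }

    void OnApplicationQuit()
    {
        if (bodyReader != null) bodyReader.Dispose();
        if (sensor != null && sensor.IsOpen) sensor.Close();
    }
}
```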

The most recent work from this group (led by Andy Wilson and Hrvoje Benko) is a special version of RoomAlive built for Halloween called The Other Resident. Just to prove I was actually there, here are some pictures of the lab, along with the Kinect MVPs amazed that we were being allowed to film everything, given that most of the MVP Summit involves NDA content we are not allowed to repeat or comment on.

WP_20141105_004

WP_20141105_016

WP_20141105_013

 

IllumiRoom is a precursor to the more recent RoomAlive project. The basic concept is to extend the visual experience on the gaming display or television with surrounding content that responds dynamically to what is seen onscreen. If you think it looks cool in the video, please know that it is even cooler in person. And if you like it and want it in your living room, then comment on this thread or on the YouTube video itself to let them know it is definitely a minimum viable product for the Xbox One, as the big catz say.

The RoomAlive experience is the crown jewel at the moment, however. RoomAlive uses multiple projectors and Kinect sensors to scan a room and then uses it as a projection surface for interactive, procedural games: in other words, augmented reality.

A fascinating aspect of the RoomAlive experience is how it handles appearance-preserving, point-of-view-dependent visualizations: the way objects need to be distorted in order to appear correct to the observer. In the Halloween experience at the top, you’ll notice that the animation of the old crone looks like she is positioned in front of the chair she is sitting on, even though the projection surface actually extends partially in front of the chair back and, at the same time, several feet behind the chair back for the shoulders and head.  In the RoomAlive video just above, you’ll see the view-dependent visualization distortion occurring with the running soldier changing planes at about 2:32”.

 

You would think that these appearance-preserving PDV techniques would fall apart anytime you have more than one person in the room. To address this problem, Hrvoje and Andy worked on another project that plays with perception and physical interactions to integrate two overlapping experiences in a Wizard Battle scenario called Mano-a-Mano or, more technically, Dyadic Projected Spatial Augmented Reality. The globe visualization at 2:46” is particularly impressive.

My head is actually still spinning following these demos and I’m still in a bit of a fugue state. I’ve had the opportunity to see lots of cool 3D modeling, scanning, virtual experiences, and augmented reality experiences over the past several years and felt like I was on top of it, but what MSR is doing took me by surprise, especially when it was laid out sequentially as it was for us. A tenth of the work they have been doing over the past two years could easily be the seed of an idea for any number of tech startups.

In the middle of the demos, I leaned over to one of the other MVPs and whispered in his ear that I felt like Steve Jobs at Xerox PARC seeing the graphical user interface and mouse for the first time. He just stroked his beard and nodded. It was a magic moment.

Why Magic Leap is Important

 magic-leap-shark-640x426

This past weekend a neighbor invited our entire subdivision to celebrate an Indian holiday called Diwali with them – the Festival of Lights. Like many traditions that immigrant families carry to the New World in their luggage, it had become an amalgamation of old and new. The hosts and other Indians from the neighborhood wore traditional South Asian formalwear. I was painfully underdressed in an old oxford, chinos and flip-flops. Others came in the formalwear of their native countries. Some just put on jackets and ties. We organized this Diwali as a potluck and had an interesting mix of biryanis, spaghetti, enchiladas, pancakes with syrup, borscht, tomato korma, Vietnamese spring rolls and puri.

The most important part of the celebration was the lighting of fireworks. For about two solid hours, children ran through a smoky cul-de-sac waving sparklers while firecrackers went off around them. Towards the end of the celebration, one of our hosts pulled out her iPhone in order to FaceTime with her father in India and show him the children playing in the background just as they would have back home, forming a line of continuity between continents using a 1,500-year-old ritual and an international cellular system. Diwali is called the Festival of Lights, according to Wikipedia, because it celebrates the spiritual victory of light over darkness and ignorance.

When I got home I did some quick calculations. Getting to that Apple moment our host had with her father – we no longer have Hallmark moments but only Apple moments today – took approximately seven years. That is the amount of time it takes for a technology to go from seeming fantastic and impractical – because we don’t believe it can be done and can’t imagine how we would use it in everyday life if it could – to being unexceptional.

2001-telepresence (1)

Video conferencing has been a staple of science fiction for ages, from 2001: A Space Odyssey to Star Trek. It was only in 2010, however, that Apple announced the FaceTime app, making video calling generally available to anyone who could afford an iPhone. I’m basing the seven years from fantasy to facticity, though, on the length of time since the initial release of the iPhone in 2007.

Magic Leap, the digital reality technology that has just received half a billion dollars of funding from companies like Google, is important because it points the way to what can happen in the next seven years. I will paint a picture for you of what a world with this kind of digital reality technology will look like and it’s perfectly okay if you feel it is too out there. In fact, if you end up thinking what I’m describing is plausible, then I haven’t done a good enough job of portraying that future.

Magic Leap is creating a wearable product which may or may not be called Dragonstone glasses and which may or may not be a combination of light field technology – like that used in the Lytro camera – and depth detection – like the Kinect sensor. They are very secretive about what exactly they are doing. When Magic Leap CEO Rony Abovitz talks about his product, however, he uses code to indicate what it is and what it isn’t.

In an interview with David Lidsky, Abovitz let slip that Dragonstone is “not holography, it’s not stereoscopic 3-D. You don’t need a giant robot to hold it over your head, you don’t need to be at home to use it. It’s not made from off-the-shelf parts. It’s not a cellphone in a View-Master.” At first reading, this seems like a quick swipe at Oculus Rift, the non-mobile, stereoscopic virtual reality solution built from consumer parts by Oculus VR, and, secondarily, at Samsung Gear VR, the mobile add-on that turns Samsung’s Galaxy Note 4 into a stereoscopic virtual reality device. Dig a little deeper, however, and it’s apparent that his grand sweep of dismissal takes in a long list of digital reality plays over the years.

Let’s start with holography. Actually, let’s start with a very specific hologram.

let the wookie win

The 1977 holographic chess game from Star Wars is the precursor to both virtual and augmented reality as we think of them – for convenience, I am including them all under the “digital reality” rubric. No child saw this and didn’t want it. From George Lucas’s imaginative leap, we already see an essential aspect of the digital experience we crave that differentiates it from the actual technology we have. Actual holography involves a frame that we view the virtual image through. In Lucas’s vision, however, the holograms take up space and have a location.

harryhausen

What’s intriguing about the Star Wars scene is that, as a piece of film magic, the technology behind the chess game wasn’t particularly innovative. It’s pretty much the same stop-motion technique Ray Harryhausen and others had been using since the ’50s and involves superimposing an animated scene over a live scene. The difference comes in how George Lucas incorporates it into the story. Whereas all the earlier films that mixed live and animated sequences sought to create the illusion that the monsters were real, in the battle chess scene it is clear that they are not – for instance because they are semi-transparent. Because the elements of the chess game are explicitly not real within the movie narrative – unlike Wookiees, Hutts, and tauntauns – they are suddenly much more interesting. They are something we can potentially recreate.

AR

The difference between virtual reality and augmented reality is similarly one of context. Which is which depends on how we, as observers, are related to the digital experience. In the case of augmented reality, the context is the real world into which digital objects are inserted. An example of this occurs in The Empire Strikes Back [1980], where the binoculars on Hoth provide additional information presented as an overlay on the real world.

The popular conception of virtual reality, as opposed to the technical accomplishment, probably dates to the publication of William Gibson’s Neuromancer in 1984. Gibson’s “cyberspace” is a fully digital immersive world. Unlike augmented reality where the context is our reality, in cyberspace the context is a digital space into which we, as observers and participants, are superimposed.

 titan

To schematize the difference: in augmented reality, reality is the background and digital content is in the foreground; in virtual reality, the background that we perceive is digital while the foreground is a combination of digital and actual objects. I find this to be a clean way of distinguishing the two and preferable to the tendency to distinguish them based on different degrees of immersion. To the extent that contemporary VR is built around improving the video game experience, we see that POV games have, as a goal, the creation of increasingly realistic worlds – but what is more realistic than the real world? On the other side, augmented reality, when done right, has the potential to be incredibly immersive.

magic quadrant

We can subdivide augmented reality even further. We’ll actually need to in order to elucidate why AR in Magic Leap is different from AR in Google Glass. Overlaying digital content on top of reality can take several forms and tends to fall along two axes. An AR experience is either POV or non-POV. It can also be either informational or interactive.

terminator_view

Augmented reality in the POV-informational quadrant is often called Terminator Vision, after the 1984 sci-fi film starring an augmented Austrian bodybuilder. I’m not sure why a computer, the Terminator, would need a display to present data to itself, but in terms of the narrative it does wonders for the audience. It gives a completely false sense of what it must be like to think like a computer.

google glass

Experiences in the non-POV-informational quadrant are typically called heads-up displays, or HUDs. They have their source in military applications but are probably best known from first-person shooters, where the viewpoint is tied to objects like windshields or gun-sights rather than to the point of view of the player. They also don’t take up the entire view, and consequently we can look away from them – unlike Terminator Vision. Google Glass is actually an example of a HUD – though it is sometimes mistaken for Terminator Vision – since the display only fills up the right corner of the visual field.

fiducial

The non-POV interactive quadrant covers either magic mirror experiences or hand-held games and advertisements involving fiducials. This is a common way of creating augmented reality experiences for the iPad and smartphones. The device camera is pointed toward a fiducial, such as a picture in a catalog, and a 3-D model is layered over the video returned by the camera. Interestingly, Qualcomm, one of the backers in Magic Leap’s recent round of funding, is also a leader in developing tools for this type of AR experience.
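For a sense of how simple the fiducial trick is at its core, here is a hedged sketch in Unity-style C#. The marker tracker itself is left abstract – ApplyMarkerPose is meant to be called by whatever library estimates the fiducial’s pose from the camera image – so nothing here is tied to a specific AR SDK, and the names are my own.

```csharp
// A hedged sketch of marker-based AR, independent of any particular tracking SDK.
// ApplyMarkerPose is meant to be called by whatever library estimates the fiducial's
// pose from the camera image; the class and member names here are my own.
using UnityEngine;

public class FiducialOverlay : MonoBehaviour
{
    public Transform model;          // the 3-D model layered over the camera video
    public Camera deviceCamera;      // camera rendering on top of the live video feed

    // markerPoseInCameraSpace: the fiducial's position and orientation relative to
    // the device camera, as reported by the tracker.
    public void ApplyMarkerPose(Matrix4x4 markerPoseInCameraSpace)
    {
        // Convert the camera-space pose to world space and glue the model to it,
        // so it appears anchored to the printed picture in the catalog.
        Matrix4x4 world = deviceCamera.transform.localToWorldMatrix * markerPoseInCameraSpace;
        model.gameObject.SetActive(true);
        model.position = world.GetColumn(3);
        model.rotation = Quaternion.LookRotation(world.GetColumn(2), world.GetColumn(1));
    }

    public void OnMarkerLost()
    {
        model.gameObject.SetActive(false);   // hide the overlay when the fiducial leaves the frame
    }
}
```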

hope

POV interactive, the final quadrant, is where Magic Leap falls. I don’t need to describe it because its exemplar is the sort of experience that Rony Abovitz says Dragonstone is not – the hologram from Star Wars. The difference is that where Abovitz is referring to the sort of holography we can do in actual reality, Magic Leap’s technology is the kind of holography that, so far, we have only been able to do in the movies.

If you examine the two images I’ve included from Star Wars IV, you’ll notice that the holograms are seen not from a single point of view but from multiple points of view. This is a feature of persistent augmented reality. The digital AR objects virtually exist in a real-world location and exist that way for multiple people. Even though Luke and Ben have different photons shooting at their eyes displaying the image of Leia from different perspectives, they are nevertheless looking at the same virtual Princess.

This kind of persistence, and the sort of additional technology required to make it work, helps to explain part of the reason Google is interested in it. Google, as we know, already has its own augmented reality play. Where Google brings something new to a POV interactive AR experience is in its expertise in geolocation, without which persistent AR entities would be much harder to create.

This sort of AR experience does not necessarily imply the use of glasses. We don’t know what sort of pseudo-technology is used in the Star Wars universe, but there are indications that it is some sort of projection. In Vernor Vinge’s sci-fi novel Rainbows End [2006], persistent augmented reality is projected on microscopic filaments that people experience without wearables.

Because Magic Leap is creating the experience inside a wearable close-range display, i.e. glasses, additional tricks are required. In addition to geolocation – which is only a guess at this point – it will also require some sort of depth sensor to determine whether real-world objects are located between the viewer and the virtual object’s location. If any are, then the occlusion of the virtual entity has to be simulated in the visualization – basically, a chunk has to be cut out of the image.
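As a rough illustration of that last step – and this is my guess at the general technique, not anything Magic Leap has disclosed – the occlusion test can be as simple as comparing per-pixel depths and making the virtual layer transparent wherever real geometry wins:

```csharp
// A hypothetical sketch of the occlusion step, not anything Magic Leap has disclosed:
// compare per-pixel depths and make the virtual layer transparent wherever a real
// surface sits between the viewer and the virtual object. Depths are in meters;
// a reading of 0 means "no data" and is ignored.
public static class OcclusionMask
{
    // realDepth:     per-pixel depth of the real scene, from a depth sensor
    // virtualDepth:  per-pixel depth of the rendered virtual object
    // virtualPixels: the rendered virtual object's pixels (ARGB)
    // Returns the virtual layer with occluded pixels made fully transparent.
    public static uint[] Apply(float[] realDepth, float[] virtualDepth, uint[] virtualPixels)
    {
        var result = new uint[virtualPixels.Length];
        for (int i = 0; i < virtualPixels.Length; i++)
        {
            bool realIsCloser = realDepth[i] > 0f && realDepth[i] < virtualDepth[i];
            // Keep the pixel only when nothing real is in front of it;
            // otherwise cut the "chunk" out by writing a transparent pixel.
            result[i] = realIsCloser ? 0x00000000u : virtualPixels[i];
        }
        return result;
    }
}
```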

magic-leap-whale

If I have described the Magic Leap technology correctly – and there’s a good chance I have not, given the secretiveness around it – then what we are looking at seven years out is a world in which everything we see is constantly being photoshopped in real time. At a basic level, this fulfills the Magic Leap promise to re-enchant the world with digital entities and also makes sense of their promotional materials.

There are also some interesting side-effects. For one, an augmented world would effectively turn everything and everyone into a potential billboard. Given Google’s participation, this seems all the more likely. As with the web, advertisements will pay for the content that populates an augmented reality world. And as with the web and mobile devices, the same geolocation that makes targeted content possible may also be used to track our behavior.

magic

There are additional social consequences. Many strange aspects of online behavior may make their way into our world. Pseudo-anonymity, which can encourage bad behavior in good people, can become a larger part of everyday life. Instead of appearing as themselves, people may prefer enhanced versions of themselves or even avatars.

jedi_council

In seven years, it may become normal to sit across a conference table from a giant rabbit and Master Chief discussing business strategies. Constant self-reinvention, which is a hallmark of the online experience, may become even more prevalent. In turn, reputation systems may also become more common as a way to curb the problems associated with anonymity. Liking someone I pass in the street may become much more literal.

Jedi

There is also, however, the cool stuff. Technology, despite all the frequent articles to the contrary, has the power to bring people together. Imagine one day being able to share an indigenous festival with loved ones who live thousands of miles away. My eleven-year-old daughter has grown up with friends from around the world whom she has met online. Technology allows her not only to chat with them over text, but also to speak with them while she is performing chores or walking around the house. Yet she has never met any of them. In seven years, we may live in a world where physical distance no longer implies emotional distance and where sitting around chatting face-to-face with someone you have never actually met in person does not seem at all strange.

For me, Magic Leap points to a future where physical limitations are no longer limitations in reality.

Kinect SDK 2.0 Live!

WP_20141022_009

Today the Kinect SDK 2.0 – the development kit for the new, improved Kinect version 2 – went live.  You can download it immediately.

Kinect for Windows v2 is now out of its beta and pre-release phase.

Additionally, the Windows Store will now accept apps developed for Kinect. If you have a Kinect for Windows v2 sensor and are running Windows 8, you will be able to use it to run apps you’ve downloaded from the Windows Store.

And if you don’t have a Kinect for Windows v2? In that case, you can use the Kinect sensor from your Xbox One and – with a $50 adapter that Microsoft just released – turn it into a sensor you can use with your Windows 8 computer.

You basically now have a choice of purchasing a Kinect for Windows v2 kit for $200, or a separate Kinect for Xbox One for $150 and an adapter for $50.

Alternatively, if you already have the sensor that came with your Xbox One, Microsoft has effectively lowered the entry bar to $50 so you can start trying the new Kinect:

1. Buy the Kinect v2 adapter.

2. Download the SDK to your 64-bit Windows 8 machine.

3. Detach the Kinect from your Xbox One and plug it into your computer.
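Once the sensor is plugged in and the SDK installed, a minimal sanity check looks something like the sketch below – a bare-bones console version of the idea behind the SDK’s color sample. It simply opens the default sensor and counts color frames as they arrive; the project name and structure are my own.

```csharp
// A bare-bones sanity check against the Kinect SDK 2.0: open the default sensor
// and count color frames as they arrive. Assumes a console project referencing
// Microsoft.Kinect.dll on a 64-bit Windows 8 machine.
using System;
using Microsoft.Kinect;

class KinectSmokeTest
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        ColorFrameReader reader = sensor.ColorFrameSource.OpenReader();

        int frameCount = 0;
        reader.FrameArrived += (s, e) =>
        {
            using (ColorFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;   // frames can be dropped
                frameCount++;
                Console.WriteLine("Frame {0}: {1}x{2}",
                    frameCount,
                    frame.FrameDescription.Width,
                    frame.FrameDescription.Height);
            }
        };

        sensor.Open();   // the sensor takes a few seconds to become available
        Console.WriteLine("Press Enter to quit.");
        Console.ReadLine();

        reader.Dispose();
        sensor.Close();
    }
}
```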

Code Camp, MVP, etc.

Final-Design-For-Kinect-For-Windows-v2-Revealed_title

It has been a busy two weeks. On the first of the month I was renewed for the Microsoft MVP program. I started out as a Client App Dev MVP many years ago and am currently an MVP in the Kinect for Windows program. I’m very grateful to the Kinect for Windows team for re-upping me again this year. It’s a magnificent program and the team is incredibly supportive and helpful. It’s also an honor to be associated with the other K4W MVPs who are all amazing in their own right and, to be honest, somewhat intimidating. But they politely laugh at my jokes in group calls and rarely call me out when I say something stupid. For all this, I am very grateful.

I’m often asked how one gets into the MVP program. There are, of course, midnight rituals and secret nominations, as with any similar association of people. In general, however, the MVP award is given out for participating in community activities like message boards (yes, you should be answering questions on the MSDN forums and passing your knowledge on to others!) as well as Code Camps like the one I attended this past Saturday.

My talk at the 2014 Code Camp Atlanta was on the Kinect for Windows v2. It was appropriately called “Handwaving with the Kinect for Windows v2” since the projector in the room didn’t work for the first twenty minutes or so of the presentation. I was delighted to find out that I actually knew enough to talk through the features of the new Kinect without notes, slides, or a way to show my Kinect demos and still remain relatively entertaining and informative.

Once the nameless but wonderful tech guy finished installing a second projector in the room as I was going through my patter, I was able to start navigating through my slides using hand gestures and this gesture mapper tool I built last year: http://channel9.msdn.com/coding4fun/kinect/Kinect-PowerPoint-Mapper-A-fresh-look-at-Kinecting-to-PowerPoint
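For the curious, the general idea is simple even if the mapper itself does considerably more. The sketch below is my own simplification, not the code behind the link above: it reads Kinect v2 body frames, watches for the right hand swinging well past the right shoulder, and sends a Right-arrow keystroke to whatever application has focus – which is enough to advance a PowerPoint deck in slideshow mode.

```csharp
// My own simplification of gesture-driven slide navigation, not the code behind the
// mapper linked above: when the right hand swings well past the right shoulder, send
// a Right-arrow keystroke to the foreground application. Assumes a console project
// referencing Microsoft.Kinect.dll and System.Windows.Forms (for SendKeys).
using System;
using System.Linq;
using System.Windows.Forms;
using Microsoft.Kinect;

class GestureSlideAdvancer
{
    static bool handWasExtended;   // simple debounce so one swipe sends one keystroke

    static void Main()
    {
        var sensor = KinectSensor.GetDefault();
        var reader = sensor.BodyFrameSource.OpenReader();
        var bodies = new Body[sensor.BodyFrameSource.BodyCount];

        reader.FrameArrived += (s, e) =>
        {
            using (var frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.GetAndRefreshBodyData(bodies);
            }

            var body = bodies.FirstOrDefault(b => b != null && b.IsTracked);
            if (body == null) return;

            var hand = body.Joints[JointType.HandRight].Position;
            var shoulder = body.Joints[JointType.ShoulderRight].Position;

            bool extended = hand.X - shoulder.X > 0.4f;   // roughly 40 cm to the right
            if (extended && !handWasExtended)
                SendKeys.SendWait("{RIGHT}");             // advance the slide
            handWasExtended = extended;
        };

        sensor.Open();
        Console.ReadLine();   // keep listening until Enter is pressed
    }
}
```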

Anyway, I wanted to express my appreciation for the early-morning attendees who sat through my hand-waving exercise, and I hope it got you interested enough to download the SDK and start trying your hand at Kinect development.

MSR Mountain View and Kinect

Just before the start of the weekend, Mary Jo Foley broke the story that the Mountain View lab of Microsoft Research was being closed.  Ideally, most of the researchers will be redistributed to other locations and not be casualties of the most recent round of layoffs.

The Kinect sensor is one of the great examples of Microsoft Research working successfully with a product team to bring something to market.  Researchers from around the world worked on Project Natal (the code-name for Kinect).  An extremely important contribution to the machine learning required to make skeleton tracking work on the Kinect was made in Mountain View.

Machine learning works best when you are dealing with lots of data.  In the case of skeleton tracking, millions of images had been gathered.  But how do you find the hardware to process that many images?

Fortunately, the Mountain View group specialized in distributed computing.  One researcher in particular, Mihai Budiu, worked on a project that he believed would help the Project Natal team clear one of its biggest hurdles.  The project was called DryadLINQ and could be used to coordinate parallel processing over a large server cluster.  The problem it solved was recognizing body parts for people of various sizes and shapes – a preliminary step to generating the skeleton view.

The research lab at Mountain View was an essential part of the Kinect story.  It will be missed.

Playing with Toasters

WP_20140811_001

Every parent at some point faces the dilemma of what to tell her children.  There’s a general and possibly mistaken notion that if you provide education about S-E-X in schools, you will encourage young ‘uns to turn words into deeds.  Along the same lines, we can’t resist telling our children not to put forks in the toaster, even though we know that a child told not to do something will likely do it within five minutes.  No more dangerous words were ever spoken than “don’t touch that!”

On a recent conference call, someone asked if it would be dangerous to take an Xbox One Kinect and plug it into your computer.  Although I waited more than five minutes, I eventually had to give in to my impulse to find out.

I have several versions of the Kinect.  I have both of the older models: the Kinect for Xbox 360 and the Kinect for Windows v1.  I also have a Kinect for Xbox One, a Kinect for Windows v2 developer preview and a Kinect for Windows v2 consumer unit (shown above).

The common opinion is that most of the differences between versions of the Kinect v2 are purely cosmetic.  The Kinect for Windows has a “Kinect” logo where the Kinect for Xbox One has a metallic “XBOX” logo.  The preview K4Wv2 hardware is generally assumed to be a Kinect for Xbox One with razzmatazz stickers all over it.  There is a chance, however, that the Kinect for Windows hardware lacks the IR blaster included with the Xbox One’s Kinect.  The blaster is used to change channels on your TV: the Kinect “blasts” an IR signal across your room, and the TV’s IR receiver picks up the reflection.

                   Kinect for Xbox One   K4Wv2 Preview   Kinect for Windows v2
SDK Color Sample   yes                   yes             yes
SDK Audio Sample   yes                   yes             yes
SDK Coord Map      yes                   yes             yes
Xbox Fitness       yes                   yes             no
Xbox Commands      yes                   yes             no
Xbox IR Blaster    yes                   yes             no

This was slightly scary, of course.  I didn’t want to brick a $150 device.  Then again, I reasoned it was being done for science – or at least for a blog post – so needs must.

I began by running the preview hardware against the latest SDK 2.0 preview.  I plugged the preview hardware into the new power/USB adapter that comes with the final hardware.  I then ran the color camera sample WPF project that comes with the SDK 2.0 preview.  It took about 30 to 60 seconds for the Kinect to be recognized as the firmware was automatically updated.  The sample then ran correctly.  I did the same with the Audio sample and the Coordinate Mapper, both of which ran correctly.
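Incidentally, a quick way to watch the sensor come online while the firmware updates is to subscribe to the IsAvailableChanged event – KinectSensor.Open() returns immediately, and the event fires once the device is actually usable. A minimal console sketch:

```csharp
// KinectSensor.Open() returns immediately; IsAvailableChanged fires once the device
// is actually usable, which is handy while the automatic firmware update is running.
using System;
using Microsoft.Kinect;

class AvailabilityWatcher
{
    static void Main()
    {
        var sensor = KinectSensor.GetDefault();
        sensor.IsAvailableChanged += (s, e) =>
            Console.WriteLine(e.IsAvailable ? "Sensor is ready." : "Sensor not available.");
        sensor.Open();
        Console.ReadLine();
    }
}
```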

Next, I tried the same thing with the Kinect for Xbox One.  I plugged it into the Kinect for Windows v2 adapter and waited for it to be recognized.  I was, of course, concerned that even if I succeeded in getting the device to run, I might hose the Kinect for use back on my Xbox.  As things turned out, though, after a brief wait, the Kinect for Xbox One ran fine on a PC and with applications built on the SDK 2.0.

I then plugged my Kinect for Xbox One back into my Xbox One.  The only application I have that responds to the player’s body is the fitness app.  I fired that up and it recognized depth just fine.  I also tried speech commands such as “Xbox Go Home” and “Xbox Watch TV”.  I tested the IR blaster by shouting out “Xbox Watch PBS”.  Apparently my Kinect for Xbox was not damaged.

I then performed the same actions using the Kinect for Windows preview hardware and, I think, confirmed the notion that it is simply a Kinect for Xbox.  Everything I could do with the Xbox device could also be done using the Kinect for Windows preview hardware.

Finally, I plugged the Kinect for Windows final hardware into the Xbox One and nothing happened.  The IR emitters never lit up.  Either the hardware is just different enough or there is no Xbox-compatible firmware installed on it.

There was no smoke and no one was harmed in the making of this blog post.

Kinect v2 Final Hardware

WP_20140725_001

The final Kinect hardware arrived at my front door this morning.  I’ve been playing with preview hardware for the past half year – and working on a book on programming it as various versions of the SDK were dropped on a private list – but this did not dampen my excitement over seeing the final product.

WP_20140725_002

The sensor itself looks pretty much the same as the preview hardware – and as far as I know the internal components are identical.  The cosmetic differences include an embossed metal “Kinect” on the top of the sensor and the absence of the razzmatazz stickers – which I believe were simply meant to cover up Xbox One branding.

WP_20140725_003

Besides allowing you to admire my beige shag carpet, the photo above illustrates the major difference between the preview hardware and the final hardware.  It’s all about the adapters.  At the top of the picture are the older USB and power adapters, while below them are the new, sleek, lightweight versions of the same.  I’ve been carrying around that heavy Xbox power adapter for months, from hotel room to hotel room, in order to spend my evenings away from home working on Kinect code.  Naturally, I was often stopped by TSA and am happy that will not be happening any more.

The Javascript Cafeteria

cafeteria, 1950

The Nobel laureate and author Isaac Bashevis Singer tells an anecdote about his early days in America and his first encounter with an American-style cafeteria.  He saw lots of people walking around with trays of food but none of them paid him any attention.  He thought that this must be the world’s most devilish restaurant, full of waiters but none willing to seat him.

The current world of JavaScript libraries seems like that sometimes.  New libraries pop up all the time, and the ones you might have used a few months ago have become obsolete while you had your back turned.  Additionally, you have to find a way to pick through the dim sum cart of libraries to find the complete set you want to consume.

But maybe the dim sum cart is also a poor metaphor since you can get in trouble that way, trying to combine things that do the same thing like knockout and backbone, or angular and asp.net mvc (<—that was a joke! but not really).  It’s actually more like a prix fixe menu where you pick one item from the list of appetizers, one from the main courses and finally one from the desserts.

This may seem a lot like the problem of the firehose of technology but there is a difference and a silver lining.  It used to be that if you didn’t jump on a technology when it first came out (and there was a bit of a gamble to this, as witnessed by the devs who jumped on Silverlight – mea culpa) you would just fall behind and have a very hard time ever becoming an expert.  In the contemporary web dev climate, you can actually wait a little longer and that library you never got around to learning will just disappear. 

Even better, if a library has already been out for a few months, you can simply strategically ignore it and pick the one that came out last week.  The impostor syndrome epidemic (seriously, it’s like a nightmare version of Spartacus with everyone coming forward and insisting they feel like a phony – man up, dawg) goes away, since anyone, even the retiring Visual COBOL developer, can become an expert living on the bleeding edge with just a little bit of Adderall-assisted concentration.  True, it also means each of us is now competing with precocious 16-year-olds for salaries, but such is the way of things.

Obviously we can take for granted that we are using JSON rather than XML for transport, and REST rather than SOAP for calls.  XML and SOAP are like going to a restaurant and finding that the chef is still adding fried eggs or kale to his dishes – or even foam of asparagus. 

moto, chicago

Just choose one item from column A, then another from column B, and so on.  I can’t give you any advice – who has time to actually evaluate these libraries before they become obsolete?  You’ll have to just do a Google search like everyone else and see what Jim-Bob or cyberdev2000 thinks about it – kind of like relying on Yelp to pick a restaurant.  Arrows below indicate provenance.

Appetizers (javascript libraries):
jquery
prototype

Corso Secundo (visual effects):
jquery ui -> jquery
bootstrap -> jquery
script.aculo.us -> prototype

Soups and Salads (utility libraries):
underscore
lazy.js
Lo-Dash -> underscore

Breeze

Amuse Bouche (templating):
{{mustache}}
handlebars.js -> {{mustache}}

Main Courses (model binding frameworks):
angularjs
backbone.js -> underscore
knockout.js
ember.js -> handlebars.js
marionette.js -> backbone.js
CanJs

Wine Pairings (network libraries):
node.js
edge.js -> node.js
Go

Sides:
CoffeeScript
bower -> node.js

Desserts (polyfills):
modernizr
Mozilla Brick
polymer

Actually, I can help a little.  If you ask today’s waiter to surprise you (and we’re talking July of 2014 here), he’d probably bring you jquery, Lo-Dash, angularjs, Go, bower and modernizr.  YMMV.