What Game of Thrones Can Teach Us About Terrorism

"I felt a great disturbance in the Force, as if millions of voices suddenly cried out in terror …”

Last night’s airing of Game of Thrones season 3, episode 9, “The Rains of Castamere,” was in many ways the culmination of the “A Song of Ice and Fire” experience.  In the books by George R.R. Martin, the Red Wedding occurs halfway through the third book (there are currently five).  The RW is the primary reason people get their friends to read the book.  According to the producers of the HBO series, it is the episode they felt they had to get to.

In going through the social media related to the Red Wedding, there seemed to be two main reactions.  One was the sense of shock, grief and eventually numbness from people who didn’t know it was coming. I well recognize this mental state from the time I read the RW scene almost ten years ago.  The second was the strange elation of people who had already read the books in response to the reaction of the people who hadn’t.

I wish I could find a word for this second, reflective emotion.  It isn’t exactly schadenfreude, that amazing German word for the pleasure we take in other people’s misfortune.  Schadenfreude always has an element of ressentiment in it and seems generally directed at people who are better off than we are.  The object of our schadenfreude thinks he is an innocent while, in our minds, the misfortune is in some way deserved – though perhaps excessive.  Schadenfreude is the emotion Walder Frey feels as he watches the Starks and their bannermen being cut down.

In my bedroom wall, there is a hole made by a very heavy paperback tome. It marks the place where my wife threw her copy of A Storm of Swords against the wall after the Red Wedding scene – and for those more in the know, specifically the scene involving Arya and the Hound’s axe. I hadn’t read it yet and it was at that point my wife made me start with the first book, A Game of Thrones, so I could catch up and find out why there was a hole in our bedroom wall.

There was a serious angst (’nother awesome German word but still not the one we want) to her mood and it wouldn’t go away until I’d gotten to the emotional place she wanted me to be. I wanted to throw the book at the wall, too, but it seemed pointless by then.  The important thing, though, was that she would finally talk to me again and we were on the same page, so to speak.  Oddly enough, we talked about what a great movie these books would make.

The reflective emotion online was partly a weird glee but also a solicitousness towards those who were experiencing the RW psychic shock for the first time.  It’s as if, for those who had already gone through this trauma, the trauma itself presented a barrier between themselves and everyone who was going about their lives in ignorance of the fact that a horrible thing happens in the middle of the third book of a series they are probably never going to read, because adults don’t read Proust-length fantasy novels.  And now, thanks to the HBO series, that trauma has been shared with the rest of the world.

I think the emotional word I’m looking for might be terrorism.  Isn’t this what terrorists do to people who don’t understand or sympathize with their plight?  They find a way to share their trauma with others in order to externalize their angst?

With terrorism, though, we never get to the point where people say, hey, thanks for the bombing, now I see where you’re coming from and everything’s going to be okay.

Following the airing of The Rains of Castamere, on the other hand, all of us are now on the same page emotionally, are ready for healing, and can move on to the next thing, whether that next thing is the new season of True Blood or possibly a new Gene Wolfe novel.  Alternatively, if you are just interested in connecting with more people who have gone through what you just went through, you can try the online Song of Ice and Fire community at http://asoiaf.westeros.org/

It can be thought of as the largest and longest-lasting group therapy session ever created. While I haven’t been back for a while, my wife and I joined it shortly after we created that hole in our bedroom wall and it was the source of much comfort and consolation to us.  It was the place, strangely enough, where some of the casting for the HBO series occurred, as well as the best place to learn how to decipher one of the great hidden secrets of the series: R+L=J.

I Can Haz the Unconscious?

The scientific method is one of the great wonders of deliberative thought.  It isn’t just our miraculous modern world that is built upon it, but also our confidence in rationality in general.  It is for this reason that we are offended on a visceral level by all sorts of climate change deniers, creationists, birthers, conspiracy theorists and the constant string of yahoos who pop up using the trappings of rationality to deny the results of the scientific method and basic common sense.

It is so much worse, however, when the challenge to the scientific method comes from within.  Dr. Yoshitaka Fujii has been unmasked as perhaps one of the greatest purveyors of made-up data in scientific experimentation, and while the peer review process seems to have finally caught him out, he still had a nearly 20-year run and some 200 journal articles credited to him.  Diederik Stapel is another prominent scientific fraudster, whose activities put run-of-the-mill journalistic fraudsters like Jayson Blair to shame.

Need we even bring up the demotion of Pluto, the proposed removal of narcissistic personality disorder as a diagnosis in the DSM V (narcissists were sure this was an intentional slight against them in particular), or the little-known difficulty of predicting Italian earthquakes (seven members of the National Commission for the Forecast and Prevention of Major Risks in Italy were convicted of manslaughter for not forecasting and preventing a major seismological event)?

It’s the sort of thing that gives critics ammo when they want to discredit scientific findings like Michael Mann’s “hockey stick” graph in climatology.  And the great tragedy isn’t that we reach a stage where we no longer believe in the scientific method, but that we come to believe in any scientific method whatsoever.  Everyone chooses their own scientific facts to believe in, and the opinion prevails that incompatible scientific positions need not be resolved through experimentation but rather through politics.

Unconscious Thought Theory is now the object of similar reconsiderations.  A Malcolm Gladwell pet theory based on the experiments of Ap Dijksterhuis, Unconscious Thought Theory posits that we simply perform certain cognitive activities better when we are not actively cognizing.  As a software programmer, I am familiar with this phenomenon in terms of “sleep coding.”  If I am working all day on a difficult problem, I will sometimes have dreams about coding in my sleep and wake up the next morning with a solution.  When I get back to work, it will take me only a few minutes to finish typing into my IDE a routine that I’ve spent a day or several days trying to crack.

I am a firm believer in this phenomenon and, as they say in late night infomercials, “it really works!”  I even build a certain amount of sleep coding into my programming estimates these days.  A project may take three days of conscious effort, one night of sleep, and then an additional five minutes to code up.  Sometimes the best thing to do when a problem seems insurmountable is simply to fire up the Internets, watch some cat videos and lolcatz the unconscious.

Imagine also how salvific the notion of a powerful unconscious is following the recent series of financial crises.  At the first level, the interpretation of financial debacles blames excessive greed for our current problems (second great depression and all that jazz).  But that’s so 1980s Gordon Gekko.  A deeper interpretation holds that the problem comes down to falsely assuming that in economic matters we are rational actors – an observation that has given birth (or at least a second wind) to the field of behavioral economics.

Lots of cool counter-factual papers and books about how remarkably irrational the consumer is have come out of this movement.  The coolest claim has got to be not only that we are much more irrational than we think, but that our irrational unconscious selves are much more capable than our conscious selves are.  It’s a bit like the end of Isaac Asimov’s I, Robot (spoilers ahead) where, after all the issues with robots have been worked out, someone discovers that things are just going too smoothly in the world and comes to the realization that humans are not smart enough to end wars and cure diseases like this.  After some investigation, the intrepid hero discovers that our benign computer systems have taken over the running of the world and haven’t told us because they don’t want to freak us out about it.  They want us to go on thinking that we are still in charge and to feel good about ourselves.  It’s a dis-dystopian ending of sorts.

As I mentioned, however, Unconscious Thought Theory is undergoing some discreditation.  One of the rules of the scientific method is that with experiments, they gots to be reproducible, and Dijksterhuis’s do not appear to be.  Multiple attempts have failed to replicate Dijksterhuis’s “priming effect” experiments, which used social priming techniques (for instance, having someone think about a professor or a football hooligan before an exam) and then evaluated exam scores for correlations with the type of priming.  There’s a related social priming experiment by someone else, also not reproducible, that seemed to show that exposing people to notions about aging and old people would make them walk slower.  The failure to replicate and verify the findings of Dijksterhuis’s social priming experiments leads one inevitably to conclude that Dijksterhuis’s other experiments promoting Unconscious Thought Theory are likewise questionable.

On the other hand, that’s exactly what a benevolent, intelligent, all-powerful, collective supra-unconscious would want us to think.  Consider that if Dijksterhuis is correct about the unconscious being, in many circumstances, basically smarter at complex thinking activities than our conscious minds are, then the last thing this unconscious would want is for us to suddenly start being conscious of it.  It works behind the scenes, after all.

When we find the world too difficult to understand, we are expected to give up and, miraculously, after a good night’s sleep, the unconscious provides us with solutions.  How many scientific eureka moments throughout history have come about this way?  How many of our greatest technological discoveries are driven by humanity’s collective unconscious working carefully and tirelessly behind the scenes while we sleep?  Who, after all, made all those cat videos to distract us from psychological experiments on the power of the unconscious while the busy work of running the world was being handled by others?  Who created YouTube to host all of those videos?  Who invented the Internet – and why?

Helpful vs Creepy Face Recognition

One of the interesting potential commercial uses for the Kinect for Windows sensor is as a realtime tool for collecting information about people passing by.  The face detection capabilities of the Kinect for Windows SDK lend themselves to these scenarios.  Just as Google and Facebook currently collect information about your browsing habits, Kinects can be set up in stores and malls to observe you and determine your shopping habits.

There’s just one problem with this.  On the face of it, it’s creepy.

To help parse what is happening in these scenarios, there is a sophisticated marketing vocabulary intended to distinguish “creepy” face detection from the useful and helpful kind.

First of all, face detection on its own does little more than detect that there is a face in front of the camera.  The face detection algorithm may go even further and break down parts of the face into a coordinate system.  Even this, however, does not turn a particular face into a token that can be indexed and compared against other faces.

Turning an impression of a face into some sort of hash takes us to the next level and becomes face recognition rather than merely detection.  But even here there is parsing to be done.  Anonymous face recognition seeks to determine generic information about a face rather than specific, identifying information.  Anonymous face recognition provides data about a person’s age and gender – information that is terribly useful to retail chains. 

Consider that today, the main way retailers collect this information is by placing a URL at the bottom of a customer’s receipt and asking them to visit the site and provide this sort of information when the customer returns home.  The fulfillment rate on this strategy is obviously horrible.

Being able to collect this information unobtrusively would allow retailers to better understand how inventory should be shifted around seasonally and regionally to provide customers with the sorts of retail items they are interested in.  Power drills or perfume?  The Kinect can help with these stocking questions.

But have we gotten beyond the creepy factor with anonymous face recognition?  It actually depends on where you are.  In Asia, there is a high tolerance for this sort of surveillance.  In Europe, it would clearly be seen as creepy.  North America, on the other hand, is somewhere between Europe and Asia on privacy issues.  Anonymous face recognition is non-creepy if customers are provided with a clear benefit from it – just as they don’t mind having ads delivered to their browsers as long as they know that getting ads makes other services free.

Finally, identity face recognition in retail would allow custom experiences like the virtual ad delivery system portrayed in the mall scene from Minority Report.  Currently, this is still considered very creepy.

At work, I’ve had the opportunity to work with NEC, IBM and other vendors on the second kind of face recognition.  The surprising thing is that getting anonymous face recognition working correctly is much harder than getting full face recognition working.  It requires a lot of probabilistic logic as well as a huge database of faces to get any sort of accuracy when it comes to demographics.  Even gender is surprisingly difficult.

Identity face recognition, on the other hand, while challenging, is something you can have in your living room if you have an Xbox and a Kinect hooked up to it.  This sort of face recognition is used to log players automatically into their consoles and can even distinguish different members of the same family (for engineers developing facial recognition software, it is an irritating quirk of fate that people who look alike also tend to live in the same house).

If you would like to try identity face recognition out, you can try the Luxand FaceSDK.  Luxand provides a 30-day trial license, which I tried out a few months ago.  The code samples are fairly good.  While Luxand does not natively support Kinect development, it is fairly straightforward to turn data in the Kinect’s RGB stream into images which can then be compared against other images using Luxand.

I used Luxand’s SDK to compare anyone standing in front of the Kinect sensor with a series of photos I had saved.  It worked fairly well, but unfortunately only if one stood directly in front of the sensor and about a foot or two away from it (which wasn’t quite what we needed at the time).  The heart of the code is provided below.  It simply takes color images from the Kinect and compares them against a directory of photos to see if a match can be found.  It could be used as part of a system for unlocking a computer when the proper user stands in front of it (though you can probably think of better uses – just try to avoid being creepy).

void _sensor_ColorFrameReady(object sender
    , ColorImageFrameReadyEventArgs e)
{
    using (var frame = e.OpenColorImageFrame())
    {
        // The frame will be null if we arrive too late to collect it.
        if (frame == null)
            return;

        // ToBitmap and ToBitmapSource are helper extension methods
        // (e.g. from the Coding4Fun Kinect Toolkit).
        var image = frame.ToBitmap();
        this.image2.Source = image.ToBitmapSource();
        LookForMatch(image);
    }
}
 
private bool LookForMatch(System.Drawing.Bitmap currentImage)
{
    if (currentImage == null)
        return false;

    IntPtr hBitmap = currentImage.GetHbitmap();
    try
    {
        FSDK.CImage image = new FSDK.CImage(hBitmap);
        FSDK.SetFaceDetectionParameters(false, false, 100);
        FSDK.SetFaceDetectionThreshold(3);

        FSDK.TFacePosition facePosition = image.DetectFace();
        if (facePosition.w == 0)
            return false; // no face in this frame

        var template = new FaceTemplate();
        template.templateData =
            ExtractFaceTemplateDataFromImage(image);
        if (template.templateData == null)
            return false; // face detected, but no template could be extracted

        // Compare the current face against every saved template,
        // keeping track of the best similarity score.
        bool match = false;
        float bestMatch = 0.0f;
        float similarity = 0.0f;
        foreach (FaceTemplate t in faceTemplates)
        {
            FaceTemplate candidate = t;
            FSDK.MatchFaces(ref template.templateData
                , ref candidate.templateData, ref similarity);

            if (similarity > bestMatch)
            {
                this.textBlock1.Text = similarity.ToString();
                bestMatch = similarity;
                if (similarity > _targetSimilarity)
                    match = true;
            }
        }
        return match && !_isPlaying;
    }
    finally
    {
        // GetHbitmap allocates a GDI handle that must be freed by hand.
        DeleteObject(hBitmap);
        currentImage.Dispose();
    }
}
 
private byte[] ExtractFaceTemplateDataFromImage(FSDK.CImage cimg)
{
    // Note: the caller owns cimg and is responsible for disposing it.
    var facePosition = cimg.DetectFace();
    if (facePosition.w == 0)
        return null; // no face found

    // Templates built from detected eye positions are more accurate,
    // so try the eye detector first and fall back to the face region
    // alone if eye detection fails.
    try
    {
        Luxand.FSDK.TPoint[] facialFeatures =
            cimg.DetectEyesInRegion(ref facePosition);
        return cimg.GetFaceTemplateUsingEyes(ref facialFeatures);
    }
    catch (Exception)
    {
        return cimg.GetFaceTemplateInRegion(ref facePosition);
    }
}
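
For context, here is a minimal sketch of how the faceTemplates collection used above might be populated at startup.  It assumes FaceSDK has already been activated and initialized, that FSDK.CImage can be constructed from a file path as in the Luxand samples, and that FaceTemplate is the simple wrapper class from those samples (a public templateData byte array).  The photoDirectory parameter is hypothetical.

private readonly System.Collections.Generic.List<FaceTemplate> faceTemplates =
    new System.Collections.Generic.List<FaceTemplate>();

private void LoadFaceTemplates(string photoDirectory)
{
    // Extract a face template from every reference photo in the directory.
    foreach (string file in
        System.IO.Directory.GetFiles(photoDirectory, "*.jpg"))
    {
        var cimg = new FSDK.CImage(file);
        byte[] data = ExtractFaceTemplateDataFromImage(cimg);
        if (data != null)
            faceTemplates.Add(new FaceTemplate { templateData = data });
        cimg.Dispose();
    }
}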
 

One Thumb Drive To Rule Them All

thumbdrive

I currently have an HTC 8X Windows Phone on my desk, which I think is one of the best smartphones on the market.  I also have a Surface tablet.   I have a fascinating little device called a Leap Motion sitting on my desk that detects finger gestures.  I also have three Kinect for Windows sensors arrayed around my desk in order to capture images from multiple directions, bullet-time style.

The thing that is most precious to me, however, is the 16 Gig Lexar jump drive someone bought for my dev/design group.  It is the fastest USB flash drive currently available.  When I described it to my wife, she said she didn’t realize that thumb drives came in different speeds.  After thinking it over, I realized that before using the Lexar, I hadn’t realized it either.

Or to be more accurate, I realized vaguely in my lizard brain that some thumb drives are slower than others, but I had no idea that some were faster than others.

And above all the fast thumb drives, there’s the Lexar, which feels like it is instantaneous.  For example, a colleague recently needed a copy of Visual Studio 2012 while we were in Manhattan for a retail show.  I put the 1.5 Gig ISO on my Lexar jump drive and he brought his laptop to my hotel room to copy the file over.  He thought he could get the copying started, we’d go to dinner, and hopefully it would be done by the time dinner was over.  But practically before he’d even touched the Lexar to his USB port … ziiiiiiiiiiiiiiiiiip … it was over.  The ISO file was on his hard drive.

I have to admit that I now have a problem even letting someone else use the 16 Gig Lexar – even though it is communal property – because I’m not sure I’ll get it back.  People in our group are constantly asking for the plastic container where we keep our various jump drives … but of course we all know what they are really looking for is one of the two 16 Gig Lexars we own.  Honestly, it’s starting to be a problem, and I’m tempted to just throw these thumb drives into a volcano somewhere.  It causes nothing but friction and jealousy on the team.

But at the same time, it is so beautiful and precious to me.  My colleague from New York was instantly won over and talked about the thumb drive for a half hour through dinner.  If you have a tech person you want to buy a nice present for – or if you are someone who needs a little self-care – treat yourself to something special.  They’re a little pricey, but even better than you can possibly imagine.

Today is the last day to innovate before tomorrow …

[This will be the last post before the Mayan apocalypse tomorrow.]

There have already been some very interesting blog posts on other sites predicting the trajectory of technology in 2013.  Worthy of special mention is this excellent overview from Frog Design as well as this one from PSFK.

An interesting feature of all these predictions is that they are an amalgamation of current business trends and futuristic American movies.  Sci-fi movies provide a direction while business (especially retail) provides the funding.  Think of it as a sort of merchandise-celluloidal complex creating our collective future.

The central flaw of practically all the predictions linked above is that they are heavily influenced by American science fiction.  American science fiction, however, is a mere shadow of and several decades behind Japanese science fiction.  I want to correct that today by basing my 2013 Technology Trends predictions on the advanced research occurring in the Japanese futuristic anime industry.

1. Giant Robots – 2013 will finally see the arrival of giant robots.  These should more properly be thought of as Gundam or giant suits of armor rather than robots (in the US our preoccupation with robotics has seriously undermined our edge in this technological frontier) but for the sake of brevity I’ll continue to refer to them as robots for now.

Suidobashi Heavy Industries put their first Mech up for sale earlier this year (youtube link).  Over the next year, we can expect to see giant robots only getting bigger and dropping in price as they go into mass production. 

You should definitely trade in your Prius for one of these rugged commuter vehicles.  Not only will you be able to walk right over most commuter traffic, but you’ll also find your daily commute is much more enjoyable and comfortable as the anti-grav features kick in.  Giant Robots are also good for settling disputes with your neighbors and with your home owner’s association.  Even in rest mode, they become interesting conversation pieces when placed on your front lawn.

You can see a future vision video (much like Google’s vision video for Project Glass) on how giant robots will be used in the near future here.

2. Wormholes – Created by a race of aliens known as The Ancients, the wormhole travel system was discovered by the US Air Force about fifteen years ago and will be declassified and integrated by the TSA into commercial aviation routes in 2013.  Layovers on Beta Pictoris b and Kepler-42c are imminent.

3. Zombies – The US Cloning program will face a setback in 2013.  For the past five years, all major political figures as well as Hollywood A-List celebrities have been cloned in order to assure the smooth transition of power in government and entertainment.  Have you ever wondered how George Clooney stays so young?  Cloning.

In 2013, however, impurities introduced into the manufacture of clones (currently managed by the Umbrella Corporation) will turn clones of US House members into voracious and infectious brain eaters.  The US Congress will quickly turn the American populace into a rabid, ugly and mindless horde incapable of rational thought and obeying only raw emotions and appetites.

Only those who never leave their homes or watch cable news will be safe.

4. Tablets – I think tablets are going to be really big in 2013.  Over the past several years I’ve noticed a subtle trend in which cameras have been flattened out and had phone-calling capabilities added to them.  Why phone companies rather than camera companies are driving this is a mystery to me, but more power to them.  Between 2010 and today these cameras have been getting bigger and bigger and are now even touch-enabled!  In 2013, I predict the arrival of 22”, 32” and even 55” touch-enabled cameras called “tablets” that people can comfortably carry around with them in their cars (or in their giant robots).  These tablets can even double as mirrors or flashlights!

3Gear Systems Kinect Handtracking API Unwrapping

I’ve been spending this last week setting up the rig for the beta hand detection API recently published by 3Gear Systems.  There’s a bit of hardware required to position the two Kinects correctly so they face down at a 45 degree angle.  The Kinect mounts from Amazon arrived within a day and were $6 each with free shipping since I never remember to cancel my Prime membership.  The aluminum parts from 80/20 were a bit more expensive but came to just a little above $100 with shipping.  We already have lots of Kinects around the Razorfish Emerging Experiences Lab, so that wasn’t a problem.

80/20 surprisingly doesn’t offer a lot of instruction on how to put the parts of the aluminum frame together, so it took me about half an hour of trial and error to figure it out.  Then I found this PDF explaining what the frame should end up looking like deep-linked on the 3Gear website and had to adjust the frame to get the dimensions correct.

I wanted to use the Kinect for Windows SDK and, after some initial mistakes, realized that I needed to hook up our K4W Kinects rather than the Kinect for Xbox Kinects to do that.  When using OpenNI rather than K4W (the SDK supports either) you can use either the Xbox Kinect or the Xtion sensor.

My next problem was that although the machine we were building on has two USB Controllers, one of them wasn’t working, so I took a trip to Fry’s and got a new PCI-E USB Controller which ended up not working.  So on the way home I tracked down a USB Controller from a brand I recognized, US Robotics, and tried again the next day.  Success at last!

Next I started going through the setup and calibration steps here.  It’s quite a bit of command line voodoo magic and requires very careful attention to the installation instructions – for instance, install the C++ redistributable and Java SE.

After getting all the right software installed I began the calibration process.  A paper printout of the checkerboard pattern worked fine.  It turns out that the software for adjusting the angle of the Kinect sensor doesn’t work if the sensor is on its side facing down, so I had to click-click-click adjust it manually.  That’s always a bit of a scary sound.

Pretty soon I was up and running with a point cloud visualization of my hands.  The performance is extremely good and the rush from watching everything working is incredible.

Of the basic samples, the rotation_trainer program is probably the coolest.  It allows one to rotate a 3D model around the Y-axis as well as around the X-axis.  Just this little sample opens up a lot of cool possibilities for HCI design.

From there my colleagues and I moved on to the C++ samples.  According to Chris Twigg from 3Gear, this 3D chess game (with 3D physics) was written by one of their summer interns.  If an intern can do this in a month … you get the picture.

I’m fortunate to get to do a lot of R&D in my job at Razorfish – as do my colleagues.  We’ve got home automation parts, arduino bits, electronic textiles, endless Kinects, 3D walls, transparent screens, video walls, and all manner of high tech toys around our lab.  Despite all that, playing with the 3Gear software has been the first time in a long time that we have had that great sense of “gee-whiz, we didn’t know that this was really possible.”

Thanks, 3Gear, for making our week!

Two Years of Kinect

As we approach the second anniversary of the release of the Kinect sensor, it seems appropriate to take inventory of how far we have come. Over the past two months, I have had the privilege of being introduced to several Kinect-based tools and demos that exemplify the potential of the Kinect and provide an indication of where the technology is headed.

One of my favorites is a startup in San Francisco called 3Gear Systems. 3Gear have conquered the problem of precise finger detection by using dual Kinects. Whereas the original Kinect was very much a full-body sensor intended for bodies up to twelve feet away from the camera, 3Gear have made the Kinect into a more intimate device. The user can pick up digital objects in 3D space, move them, rotate them, and even draw free hand with her finger. The accuracy is amazing. The founders, Robert Wang, Chris Twigg and Kenrick Kin, have just recently released a beta of their finger-precise gesture detection SDK for developers to try out and instructions on purchasing and assembling a rig to take advantage of their software. Here’s a video demonstrating their setup and the amazing things you will be able to do with it.

Mastering the technology is only half the story, however. Oblong Industries has for several years been designing the correct gestures to use in a post-touch world. This TED Talk by John Underkoffler, Oblong’s Chief Scientist, demonstrates their g-speak technology using gloves to enable precision gesturing. Lately they’ve taken off the gloves in order to accomplish similar interactions using Kinect and Xtion sensors. The difficulty, of course, is that gestural languages can have accents just as spoken languages do. Different people perform the same gesture in different ways. On top of this, interaction gestures should feel intuitive or, at least, be easy for users to discover and master. Oblong’s extensive experience with gestural interfaces has aided them greatly in overcoming these types of hurdles and identifying the sorts of gestures that work broadly.

The advent of the Kinect is also having a large impact on independent film makers.  While increasingly powerful software has allowed indies to do things in post-production that, five years ago, were solely the province of companies like ILM, the Kinect is finally opening up the possibility of doing motion capture on the cheap.  Few have done more than Jasper Brekelmans to help make this possible.  His Kinect Pro Face software, currently sold for $99 USD, allows live streaming of Kinect face tracking data straight into 3D modeling software.  This data can then be mapped to 3D models to allow for realtime digital puppetry.

Kinect Pro Face is just one approach to translating and storing the data streams coming out of the Kinect device.  Another approach is being spearheaded by my friend Joshua Blake at Infostrat.  His company’s PointStreamer software treats the video, depth and audio feeds like any other camera, compressing the data for subsequent playback.  PointStreamer’s preferred playback mode is through point clouds which project color data onto 3D space generated using the depth data.  These point cloud playbacks can then be rotated in space, scrubbed in time, and generally distorted in any way we like.  This alpha-stage technology demonstrates the possibility of one day recording everything in pseudo-3D.
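
PointStreamer itself isn’t public, but the core math behind projecting a depth frame into a point cloud is simple enough to sketch.  The following is my own illustration, not Infostrat’s code; the focal length is a rough approximation of the Kinect depth camera’s intrinsics, and production code would use the SDK’s coordinate mapping instead.

// Re-project a Kinect depth frame into 3D points with a pinhole camera model.
struct Point3 { public float X, Y, Z; }

static Point3[] DepthFrameToPointCloud(short[] depthMillimeters, int width, int height)
{
    const float focalLength = 571.0f;       // approximate, in pixels
    float cx = width / 2f, cy = height / 2f; // assume the principal point is centered
    var points = new Point3[depthMillimeters.Length];
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int i = y * width + x;
            float z = depthMillimeters[i] / 1000f; // depth in meters
            points[i] = new Point3
            {
                X = (x - cx) * z / focalLength,
                Y = (cy - y) * z / focalLength,    // flip so +Y points up
                Z = z
            };
        }
    }
    return points;
}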

Got an Image Enhancer that can Bitmap?

Every UI platform needs a killer concept.  For the keyboard and mouse it was the Excel sheet.  If you ever watch the rebooted Hawaii Five-0, you’ll realize that for Touch it’s the flick.  Flicking is more satisfying than tapping on sooo many levels.  Birds do it, bees do it, even monkeys in the trees do it.

Gestural interfaces haven’t found that killer concept yet, but it may just be the ability to zoom in on an image.  Like flicking and entering tabular data, killer concepts don’t necessarily have to be clever.  They just have to feel right.
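
As a rough illustration (my own sketch, not any particular product’s API), a skeletal two-handed zoom gesture reduces to a single ratio: how far apart the hands are now versus when the gesture began.

// Zoom factor for a two-handed "enhance" gesture, using Kinect skeleton joints.
// initialHandDistance is the hand separation recorded when the gesture started.
static float ZoomFactor(Skeleton skeleton, float initialHandDistance)
{
    SkeletonPoint left = skeleton.Joints[JointType.HandLeft].Position;
    SkeletonPoint right = skeleton.Joints[JointType.HandRight].Position;
    float dx = right.X - left.X, dy = right.Y - left.Y, dz = right.Z - left.Z;
    float distance = (float)Math.Sqrt(dx * dx + dy * dy + dz * dz);
    return distance / initialHandDistance;
}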

Consider what John Anderton spent his time doing in 2002’s Minority Report.  For the most part, he used innovative fantasy technology (later made real at Oblong Industries) to enhance images on his rather large screen.

Go back even further and you’ll recall Rick Deckard used speech recognition to enhance an image in 1982’s Blade Runner.  This may be the first inkling any of us had of the true purpose of NUI.

It obviously left an impression on the zeitgeist because every movie or TV show attempting to demonstrate technological sophistication on the cheap (CSI being the biggest culprit) managed to insert an “enhance” scene into their franchise somewhere.

And if you happened to have a movie with no budget, there was no reason you should let this stop you.

And while we’re getting nostalgic for NUI, let’s not forget to give credit where credit is due. Before Leap Motion, before Microsoft’s Kinect, before Oblong’s g-speak, even before Minority Report, there was the NES Power Glove:

And in the decades after, all we’ve managed to do is to enhance that killer concept.

What’s In Kinect for Windows SDK 1.5?

Microsoft has just published the next release of the Kinect SDK: http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx  Be sure to install both the SDK and the Toolkit.

This release is backwards compatible with the 1.0 release of the SDK.  This is important, because it means that you will not have to recompile applications you have already written with the Kinect SDK 1.0.  They will continue to work as is.  Even better, you can install 1.5 right over 1.0 – the installer will take care of everything and you don’t have to go through the messy process of tracking down and removing all the components of the previous install.

I do recommend upgrading your applications to 1.5 if you are able, however.  There are improvements to tracking as well as the depth and color data.

Additionally, several things developers asked for following the initial release have been added.  Near-mode, which allows the sensor to work as close as 40cm, now also supports skeleton tracking (previously it did not). 

Partial skeleton tracking is now also supported.  While full-body tracking made sense for Xbox games, it made less sense when people were sitting in front of their computers or simply in a crowded room.  With the 1.5 SDK, applications can be configured to ignore everything below the waist and just track the top ten skeleton joints.  This is also known as seated skeleton tracking.
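
In code, enabling both of these features takes only a few lines.  A minimal sketch against the 1.5 SDK, assuming _sensor is an initialized KinectSensor:

// Near mode lets the depth camera see as close as 40cm,
// and skeleton tracking can now follow along.
_sensor.DepthStream.Range = DepthRange.Near;
_sensor.SkeletonStream.EnableTrackingInNearRange = true;

// Seated mode tracks only the ten upper-body joints.
_sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;
_sensor.SkeletonStream.Enable();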

Kinect Studio has been added to the toolkit.  If you have been working with the Kinect on a regular basis, you have probably developed several workplace traumas never dreamed of by OSHA as you tested your applications by gesticulating wildly in the middle of your co-workers.  Kinect Studio allows you to record color, depth and skeleton data from an application and save it off.  Later, after making necessary tweaks to your app, you can simply play it back.  Best of all, the channel between your app and Kinect Studio is transparent.  You do not have to implement any special code in your application to get recording and playback to work.  They just do!  Currently Kinect Studio does not record voice – but we’ll see what happens in the future.

Besides partial tracking, skeleton tracking now also provides rotation information.  A big complaint with the initial SDK release was that there was no way to find out if a player/user was turning his head.  Now you can – along with lots of other tossing and turning: think Kinect Twister.
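
The new rotation data hangs off the skeleton’s BoneOrientations collection, indexed by joint.  Roughly, for a tracked Skeleton:

// Orientation of the head joint, both in camera space and relative
// to its parent bone in the skeleton hierarchy.
BoneOrientation head = skeleton.BoneOrientations[JointType.Head];
Vector4 absolute = head.AbsoluteRotation.Quaternion;
Matrix4 relative = head.HierarchicalRotation.Matrix;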

Those are things developers asked for.  In the SDK 1.5 release, however, we also get several things no one was expecting.  The Face Tracking Library (part of the toolkit) allows devs to track 87 distinct points on the face.  Additional data is provided indicating the location of the eyes, the vertices of a square around a player’s face (I used to jump through hoops with OpenCV to do this), as well as face gesture scalars that tell you things like whether the lower lip is curved upwards or downwards (and consequently whether a player is smiling or frowning).  Unlike libraries such as OpenCV (in case you were wondering), the face tracking library uses RGB as well as depth and skeleton data to perform its analysis.
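
A minimal sketch of how the face data gets consumed, using the toolkit’s Microsoft.Kinect.Toolkit.FaceTracking assembly (the colorPixels and depthPixels buffers and the skeleton are assumed to have been captured from the sensor’s frame events):

var faceTracker = new FaceTracker(_sensor);
FaceTrackFrame faceFrame = faceTracker.Track(
    _sensor.ColorStream.Format, colorPixels,
    _sensor.DepthStream.Format, depthPixels,
    skeleton);

if (faceFrame.TrackSuccessful)
{
    // The square around the player's face -- no OpenCV hoops required.
    Rect faceRect = faceFrame.FaceRect;

    // Animation units are the face gesture scalars; the lip corner
    // depressor, for instance, roughly tracks frowning vs. smiling.
    var au = faceFrame.GetAnimationUnitCoefficients();
    float lipCornerDepressor = au[AnimationUnit.LipCornerDepressor];
}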

The other cool thing we get this go-around is a sample application called Avateering that demonstrates how to use the Kinect SDK 1.5 to animate a 3D Model generated by tools like Maya or Blender.  The obvious way to use this, though, would be in common motion capture scenarios.  Jasper Brekelmans has taken this pretty far already with OpenNI and there have been several cool samples published on the web using the K4W SDK (you’ll notice that everyone reuses the same model and basic XNA code).  The 1.5 Toolkit sample takes this even further by, first, having smoother tracking and, second, by adding joint rotation to the mocap animation.  The code is complex and depends a lot on the way the model is generated.  It’s a great starting point, though, and is just crying out for someone to modify it in order to re-implement the Shape Game from v1.0 of the SDK.

The Kinect4Windows team has shown that it can be fast and furious as it continues to build on the momentum of the initial release.

There are some things I am still waiting for the community (rather than K4W) to build, however.  One is a common way to work with point clouds.  KinectFusion has already demonstrated the amazing things that can be done with point clouds and the Kinect.  It’s the sort of technical biz-wang that all our tomorrows will be constructed from.  Currently PCL has done some integration with certain versions of OpenNI (the versioning issues just kill me).  Here’s hoping PCL will do something with the SDK soon.

The second major stumbling block is a good gesture library – ideally one built on machine learning.  GesturePak is a good start, though I have my doubts about using a pose approach to gesture recognition as a general-purpose solution.  It’s still worth checking out while we wait for a better solution, however.
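
To make the pose approach concrete, here is a toy sketch of my own (not GesturePak’s API): a pose is a set of joint offsets from a reference joint, and a gesture is a timed sequence of such poses.

// Returns true when every joint in the pose definition is within
// tolerance (in meters) of its recorded offset from the hip center.
// Assumes using System.Collections.Generic and Microsoft.Kinect.
static bool MatchesPose(Skeleton skeleton,
    Dictionary<JointType, SkeletonPoint> pose, float tolerance)
{
    SkeletonPoint center = skeleton.Joints[JointType.HipCenter].Position;
    foreach (var expected in pose)
    {
        SkeletonPoint actual = skeleton.Joints[expected.Key].Position;
        float dx = (actual.X - center.X) - expected.Value.X;
        float dy = (actual.Y - center.Y) - expected.Value.Y;
        float dz = (actual.Z - center.Z) - expected.Value.Z;
        if (dx * dx + dy * dy + dz * dz > tolerance * tolerance)
            return false;
    }
    return true;
}

The weakness is visible right in the sketch: a single fixed tolerance can’t capture the “accents” with which different people perform the same gesture, which is why a machine-learned recognizer seems like the better long-term road.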

In my ideal world, a common gesture idiom for the Kinect and other devices would be the responsibility of some of our best UX designers in the agency world.  Maybe we could even call them a consortium!  Once the gestures were hammered out, they would be passed on to engineers who would use machine learning to create decision trees for recognizing these gestures, much as the original skeleton tracking for Kinect was done.  Then we would put devices out in the world and they would stream data to people’s Google glasses and … but I’m getting ahead of myself.  Maybe all that will be ready when the Kinect 2.5 SDK is released.  In the meantime, I still have lots to chew on with this release.

Famous Youtubers: from our far-flung correspondent

Still not recovered from book writing, I have asked my eleven-year-old son to provide an overview of what’s going on in YouTube land.  My son spends a lot of time working on his own videos – mostly guides to Minecraft and short Lego stop-motion films – and looks up to the sort of people who have managed to eke out a living doing this.  Here are some of the movers-and-shakers in his world:

Hello audience, I am Paul Ashley; son of James Ashley… I am writing this article because of my epic writing skills I gained at school! Oh also because my dad said to… My Youtube account is PaulVAshley so remember to subscribe to me! Or don’t… Let’s begin our Top 5 Most subscribed Youtubers!

#5: Freddiew (Freddie Wong) 3,022,460(as of now) Subscribers.

Freddie and Brandon are two good friends who enjoy making videos with sweet VFX. I’ve always liked their videos, and I still do. I was first introduced to the channel by my friend ANONYMOUS. Umm… okay… anyway, he wanted to show me a tutorial Freddie and Brandon made on First Person Shooter Videos. I began watching all of his short movies starting with “Mr. Toots.” I have become one of his biggest fans. I also wonder what he has in store for us in “Video Game High School.” He is a great director and he is my role model!

#4: Machinima 4,356,027(as of now) Subscribers.

Machinima is an actual company that employs people to play games all day and occasionally make a “machinima” (A video with voices filmed from a game) from time to time. I think this channel is slightly unfair because they have hundreds of people making their videos. I enjoy certain songs that they make, but most videos I think to be just plain stupid. This is only my opinion though… Overall, I really like them only they sometimes have a video that is “bad.”

#3: Smosh (Ian Hecox and Anthony Padilla) 4,464,823(as of now) Subscribers.

Smosh is definitely my personal favorite Youtube channel. They upload new videos every week. Ian has a separate channel for making shows called “Ian is bored,” and “Lunchtime with Smosh.” I was first introduced by a few friends, one of them being Sam. Anyways, we would watch the “Theme song” series (Mortal Kombat, Pokémon, Teenage Mutant Ninja Turtles…). That was back in ’07 or ’08. Nowadays, they upload sketch videos. Overall, I love all of their videos except a few crappy ones.

#2: Nigahiga (Ryan Higa) 5,256,220(as of now) Subscribers.

Nigahiga… The most popular, classic Youtuber of all time! He is definitely the most famous among Youtubers. I was first introduced by my friends Shirish, Sam, and ANONYMOUS. We enjoyed videos like “How to be Ninja, Gangster, and Nerd.” My favorite video is “THE BEST CREW: The Audition.” I almost died in laughter. Overall, I like ALL of his videos.

#1: RayWilliamJohnson 5,408,244(as of now) Subscribers.

FINALLY! I’ve been enslaved to write this article for HOURS! So… where were we… Ah yes, lucky number 1. Ray is a Youtuber that makes a web show called =3. I was first introduced by my friend Shirish. Mr. Johnson (hehe) used to entertain me when I was 10, but I’ve grown ever-so bored of his predictable jokes. He also has a channel called “BreakingNYC.” Ahem, now this boy-man is funny to the creepy weirdoes of Youtube. Overall, I hate to sound sketchy but I dislike all of his videos.

YAY! ENDING PARAGRAPH! I like all of the channels I reviewed except RWJ. Okay, bye guys that’s all you get.

-From the insane mind of Paul Vladimir Ashley.