A Tale from the Snowpocalypse

one-inch-of-snow

“Atlanta, we are ready for the snow.” — @KasimReed via Twitter

The Snowpocalypse of 2014 is strangely not a weather story so much as a traffic story.  One or two inches of snow, after all, is hardly a tsunami, a flood, or even a moderate earthquake.  It may, however, have the singular distinction of being the first time a governor of the great state of Georgia has declared a state of emergency due to a bad case of gridlock.

Which is not to say that road conditions were particularly good. They just weren’t the initial problem.  As first flurries and then larger flakes fell shortly before noon on Tuesday, January 28th, people started to realize it was time to abandon Atlanta.  The digital marketing agency where I work is located in midtown.  The email announcement went out at 12:36 that the office was closing at 1 pm. 

People had an inkling that traffic would be bad.  Even Atlanta’s mayor Kasim Reed has said that as it was happening, he thought it was a bad idea that everyone was leaving at once.  Of course, everyone thought this, which is why they all rushed out of work at the same time to beat everyone else headed toward the freeways and highways to go home to their families – most people who work in Atlanta, after all, live in the suburbs around the city.

As I peered down at the cars filling up the roads around my building like syrup overfilling a plate of pancakes, I decided to hunker down and wait a bit.  This choice was driven more by necessity than forethought as I had a meeting with a potential technology partner and then a performance management meeting.  True forethought was exercised by my friend Wells who simply called in and said he was working from home that day.  After all, we all knew there would be snow.  Our smartphone weather apps told us so.

It wasn’t until 5:30 that I finally left.  The immediate roads around the office had dried up a little.  On the other hand, the online traffic cameras colored all the routes out of town black, so it was unlikely anything would get better in the near future.  I had no idea what a “black” road actually meant, however, so I wasn’t as rattled as I should have been.  I expected a simple three hour commute – which was about the worst I’d ever experienced on the 25 mile drive back home to Lilburn – next to Snellville, northeast of Atlanta – where I live.  Little did I know this was only the start of a fifteen hour odyssey through the Snowpocalypse that would give me nightmares for days and show me things about the true nature of my fellow man I’d just as soon never have known.  The performance review went relatively well, by the way – thanks for asking.

Muscle memory is an amazing metaphor for how the mind works.  Whatever the actual biological process, the story of muscle memory says that actions we perform repetitively are stored in a Lamarckian way in our bodies themselves – as if our minds, home to our memories, permeate through and suffuse our arms and legs.  In my experience, though, the notion of muscle memory ought to be extended into geographical space, for surely we leave impressions of ourselves in the places we abide and the routes we frequent much as a person leaves a depression in the easy chair when he gets up from it.

As I headed home, my car followed its own muscle memory around and around the parking deck, then right on Cypress Street, left on 4th, crossed West Peachtree and finally made another left turn on Spring Street.  And then it was two hours to crawl south on Spring, past J.R. Crickets on my left (established 1980), then past The Varsity on my right (surprisingly good onion rings).  Two hours as the sun went down.  Two hours to struggle down three blocks, at which point I reached a decision.  I could either make a right turn onto the I-85 headed north or continue on to the I-75 South, which would take me to the 20 West and eventually the Stone Mountain Highway toward Athens and Snellville.

The odd thing is that after spending two hours despising the herd of cars around me, when it came time to make my choice North or South I followed the herd.  No one was getting onto the 85 (I discovered later it was totally blocked) so I didn’t either.  Instead I spent another hour crawling even more slowly along to get onto the 75 South.  As the Honda Accord in front of me let in one person after another in front of him (and, of course, in front of me) I slowly seethed.  Since it was taking ten minutes or so to move each car length, the Accord was adding time to my journey, taking time from my life, taking money from my pocket. 

And as I seethed, the lizard part of my brain took over.  I imagined the zombies from The Walking Dead and felt that I was coming to understand them.  I slowly shuffled along, to the extent a car can shuffle along, and tried to stay close to the cars in front of me – even if this was probably an unsafe distance.  I no longer even saw the cars in front of me so much as patterns of tail lights.  When someone appeared to move faster than the standard shuffling pace, the entire herd became hungry and would look toward the sudden flash of movement – only to realize nothing was really happening, there was no fresh meat.

I shuffled forward for a half hour.  Then I shuffled another half hour.  I was now approaching a 270 degree turn on my right onto the 75.  The whole turn was perhaps 600 feet long.  And here we reached a sort of standstill. No motion for another half hour.  I had been texting my wife (at this speed, I couldn’t see the harm) but my phone finally gave out with a defeated beep.  The main excitement during this extended wait occurred when a Lexus pulled up along the right shoulder and sped past everyone making the 270 degree turn.  At first I was angry.  And then I was envious.  Why didn’t I do that too?  This dude was speeding along at almost fifteen miles an hour.  As he reached the 180 degree point, however, he was brought up short, too – and when it came down to it, just thinking of leaving the order of the herd made me anxious.  

Over the turn was a large digital sign.  It lit up the whole area and cycled through something about a new sitcom, then something about a new reality show, then a Coca-Cola spot, then the sitcom again.  All of midtown Atlanta is hooked up for communication, every pocket has a smartphone with a data plan streaming information, every car has a radio allowing our government to speak directly to us.  Despite all this, the massive sign positioned to communicate with hundreds of people in terrible trouble could only tell us to tune in to TBS for a few giggles.  The smartphone, that miraculous device which allows me to call anywhere anytime, dies in less than a day because that’s just the state of battery technology – especially when the GPS is turned on.  And finally the radio, which once a month or so starts bleating, then tells me that it’s only running a test and that if this were a real emergency it would tell me what to do next – the government, the governor, the mayor apparently had nothing to communicate to the stranded motorists, so there was no emergency bleating to be had.  Again, I thought of those zombie movies where lone survivors sit by their radios waiting for news from the army about safe zones and instead hear only static.

More time went by and I was finally on the 75, but now stuck behind a big rig truck.  It was spinning its wheels faster and faster and faster but couldn’t seem to make any progress forward.  At the same time it was freaking out everyone around the 18 wheeler as we imagined what would happen if the tires actually caught and the truck went flying forward.  As I waited behind this truck, I noticed another rig pull up to its left and get stuck, then another beside that.  Eventually there were four tractor trailers side by side and stuck, blocking all movement on the 75.  For a time I thought they must be getting secret communications from the government and that this was a complex maneuver intended to shut down the Interstate because there were worse things ahead – government and truckers working together for the common weal.

This was not true, of course, and I found out later that it was mainly the big rigs that were shutting down all the freeways and highways running around and through Atlanta.  They would either just freeze in place or, worse, slide until they were sideways and blocking all lanes.  Things would probably have turned out differently if the people in charge had simply called up all the truckers on their radios and told them to pull over.  Then we all might have gotten home, freeing up the big roads and, as a by-product, all the capillaries blocked by people trying to get onto the big roads.

Something snapped in me.  Fresh vitality came back to my mind and warmth flowed into my fingers.  I pulled around the trucker, I weaved slowly around other cars that appeared stopped, and took the first exit onto Courtland Street.  I drove up to Peachtree Street and then took it all the way to Ponce de Leon Avenue where I turned right.  Ponce is basically two blocks from my office where I started out.  I was about four or five hours into the journey at this point.

Ponce was beautifully clear.  I had left the zombies behind and now felt as though I was on a different, more exciting adventure.  I glided down the beautifully tree-lined Ponce – driving / skating along its winding path.  As I approached intersections, the lights kept turning green for me.  One time I stopped for a red light but discovered it was hard to get moving again once I’d stopped.  I panicked and started pressing harder and harder on the gas.  Then I remembered the 18 wheeler on the 75 and got a grip on myself.  I reversed slowly, then moved slowly forward and was free again to glide.  I don’t know when it happened but I eventually fell behind a White Passat.  Whereas I’d previously secretly despised everyone driving around me, the White Passat became my special friend, and I like to think he felt the same way.  We were comrades traveling through a post-apocalyptic world and nothing could harm us.  Other travelers joined us and were welcomed gladly.  We few, we happy few.

There were whole stretches that felt like we were driving through the Christmas day scene from A Christmas Carol – not the ghost bits but the morning with Scrooge running around wishing people happy Christmas and carolers in scarves and big smiles.  Just like that except I’m driving through Dickensian London in a Toyota Scion.  I even have false memories of snow covered cobble streets lined with gas lamps decorated with ribbon.

And then we finally came to the 78 – Stone Mountain Highway – and my dear friend headed toward Decatur while I continued toward Snellville.  So much had been left unsaid between us.  Perhaps it was better this way.

I was able to go several miles on the 78 without seeing anyone.  And then I started to see cars slowly headed toward me the wrong way on the highway.  It was like a movie in which the protagonist is headed into a forest and suddenly all the birds burst out from the trees and head towards him and then overhead – a clear indication that the protagonist, rather than the birds, is headed in the wrong direction.

Oddly, I was still hoping to get home before midnight.  The last message I’d sent my wife before the phone died was “Is there food?”  I knew she was worried and hated that I didn’t have a way to let her know I was safe.  And what if things got worse?  Who wants their last words to be “Is there food?” 

Perhaps the headlights coming toward me were a bit too uncanny.  It was at this point – the only point in the whole adventure – that my car started to spin.  I remembered that I was supposed to turn into the spin and looked down at my hands, which had all on their own turned completely in the opposite direction.  Stupid muscle memory.  Of all the stupid things I’ve picked up over the years – baseball stats, D&D rules, obsolete computer languages – knowledge of how to drive in the snow suddenly floated to the top like the submerged pyramid in a magic 8 ball.  Pump pump pump on the brakes, slow down, turn the wheel slightly into the spin – and suddenly I was back in control again.  I’m sure there was a metaphor buried somewhere in that experience that I could have pulled out in order to live my life better and be a better human being, but I was really too tired and hungry to care.

Up ahead I started to find cars turned around in the direction I was headed, sparse at first, but more and more dense as I headed further north until the traffic came to a standstill.  And for the most part that was how things were for the next nine to ten hours.  It was like being in a parking lot lit only by the headlights of the cars in it.  We would be stopped for an hour at a time and then get ten minutes or so of forward motion, then stop again.  No one had any idea what was happening ahead to allow for the forward motion, which by now had become the exception rather than the rule.  I never wondered why we were stopped – only how we ever progressed.

Occasionally during these forward movements I’d realize I was parked behind a completely stopped vehicle.  At first I was dumbfounded by the thought of someone not taking the opportunity to move forward when given the chance, but I got used to it.  People were just stopping in the middle of the highway and going to sleep in the snow like wanderers in a Jack London story.  Sometimes, I’d pass cars that were simply abandoned.  The lights and engines would just be off – surely if someone were sleeping they would leave the engine running to heat their car.  Mostly these cars were well situated.  Early on they’d be abandoned on the right and left shoulders of the road as if someone had taken care to park them carefully before walking away.

Later – at the eleven and twelve hour mark – I’d pass cars that were simply left in the middle of traffic, correctly positioned in an appropriate lane.  Drivers had simply said screw this, turned off their engines and walked into the woods – at least I imagine they walked into the woods because there weren’t really any hotels or houses or stores around us on that patch of highway.  The drivers vanished into the cold.  Later still, as I began to pass the various cars that had created the original pile ups, I found abandoned cars facing a variety of different directions.  Sometimes I’d see one car oriented perpendicularly to another car and barely kissing it, the result of a slow motion crash in which no one was injured, not even the body work on the cars, but which was no doubt frightening enough – and in slow motion at that – that both drivers just said fuckit and walked off into the snow.

As I passed these wrecks frozen in time – perhaps even still occurring so infinitesimally slowly that no one noticed – I came to realize that getting past a pileup like this was never the end, for just a mile ahead there would be another one, and a mile in front of that another one.  Like a series of dominoes, each slow collision between two cars caused the cars behind them to brake badly and slide, and so on and so on.  This was simply the pattern of things and even had a beautiful cadence that I learned to appreciate.

I remember strange moments breaking up the interminable boredom.  The snow occurred during a new moon, so the light during the odd patches when there were no cars around was provided solely by stars and was beautiful.

I remember the people who got out of their cars to walk north along the side of the road, either to see what was going on or to find a discreet place to urinate.  I would wait for each of them to come back and worried when it seemed to take too long.

The stretch of highway going in the opposite direction was empty.  A hitchhiker walked south along that stretch with his hand out and I wondered who he was hoping to get a ride from.  I was also amazed at how fast he was moving compared to me, as if he had wings on his feet.

I was entertained for an hour by a small truck headed in the opposite direction that had gotten stuck.  The engine would rev and over-rev and the wheels would whine at higher and higher pitches, which I learned to recognize as the sound of futility.  Then the truck would stop for five or ten minutes, gather up courage, and proceed to do the exact same thing again with the exact same results.

For the most part, the massive trucks that are common to Atlanta had absolutely no advantage in the snow and were even more likely to get stuck, for some reason.  Chances are they got stuck due to overconfidence while the cautious tortoise-like commuter cars fared much better.  Those fantastic commercials of trucks driving over glaciers, it turns out, have been lying to us and planting false knowledge in our collective unconscious.  Even sadder, we probably all already knew this.

Occasionally black all-terrain vehicles with S.W.A.T. bumper stickers would pass by and national guardsmen would jump out.  They’d look around for a while and then get back into their trucks and drive on, pursuing their mysterious missions.  They never talked to anyone but each other.

One time a large truck with flashing police lights came along the left shoulder and told everyone over a megaphone to get out of the left two lanes.  We magically transformed a slowly crawling traffic jam over five lanes into a fully stopped traffic jam over three lanes.  According to the megaphone, we were clearing the way for a salt truck to come through and treat the roads.  He insisted that this was the only way to clear the traffic and that we had to stop traffic to unclog traffic.  Over the next two hours that open left lane was a great temptation but no one took advantage of it.  We believed in following rules and working together for the greater good.  We believed in forgoing immediate gratification in order to achieve a higher outcome.  Like a nasty scab, that open lane begged to be scratched, but we did not.

Until slowly we began to realize that there was no salt truck coming and we just picked and picked at that scab for about five minutes until all lanes were backed up again.  And it felt good.

I mostly entertained myself by listening to AM radio, hoping for some news about what was going on or signs that someone in authority was taking charge.  Apparently, though, no one in authority really had anything to pass on to the stranded motorists.  Home Depot, bless their hearts, were opening up fourteen stores for people to take shelter.  Sadly, though, this didn’t really help anyone out except the people stuck in traffic in front of a Home Depot.

The radio announcer was apparently staying on well past his appointed shift.  He acknowledged that since we were suffering, he wanted to be there right along with us.  He then talked about how nice it was to be at home drinking bourbon in front of a raging fire and wished we could be there with him.  It was actually pleasant listening to him take calls and hearing other people vent about their troubles.  They called in and complained about the poor preparation exhibited by the state of Georgia and the city of Atlanta.  This being AM radio, these calls were followed up by others extolling the virtue of personal responsibility and reminding people they had no one to blame but themselves.  I actually couldn’t follow the logic of these callers since I didn’t know what I was responsible for other than coming in to work that morning and going to a performance management meeting.

Speaking of performance management, the host of the show also passed on comments by the mayor of Atlanta explaining that contrary to popular opinion, the city was actually doing a fantastic job of managing road conditions.  I think I heard this at around the thirteen hour mark.  It occurred to me that people often lie to themselves and others when it comes to performance.  Which led me to dwell a bit on my own relatively good performance review and I realized that when I asked about the possibility of a promotion my boss said let’s wait and see how things go – and it suddenly dawned on me that this is what I say to my children when they ask me for something and I don’t want to say no but I also have no intention of ever giving it to them.

That’s the problem with long drives.  Too much time to think.

As I mentioned, I was curious why the AM radio host was staying on for so long and then I finally understood, as he understood, that he had a captive audience.  He started laying out a theory about James the brother of the Lord being the actual brother rather than half-brother of Jesus, and then something complicated about Jesus having had to be a historical person, otherwise we’d be saying that Polycarp and Tertullian never existed or something.  And slowly it dawned on me that he was simply laying out a Da Vinci Code-lite theory of his own in which Jesus has nephews and nieces with their own nephews and nieces spread throughout the world.  He never quite said it but this seemed to be where he was headed, and he had a captive audience to spin it out to like a drunk uncle at a family get-together.

He had almost gotten to his point when the thread of the argument was lost due to some real news.  The government (not sure which one) had decided not to do anything further until sunrise.  It was about four-thirty when I heard this and I realized that now I had two hours to go before anything more would happen.  I could actually plan … to do nothing … but having the ability to exercise forethought was exciting.  I looked at the cars around me.  The fellow in the lane to my right leaned back and closed his eyes.  I found this deeply offensive – he had an obligation to maintain the night watch with the rest of us.  He felt my critical gaze, opened his eyes, looked over at me and just shrugged.  Then he went back to sleep.  I looked at the car behind him and saw two people watching a movie on their phone.  Again, what amazing devices smartphones are, so potentially useful in an emergency, and it turns out best used to catch up on Two and a Half Men.  In the back seat of the Lexus in front of me I noticed a small child’s hand weaving back and forth hypnotically.

I snapped awake a little later, slid forward a few car lengths, and slipped back into my coma.  This went on several times over the next few hours.  The logic of waiting till dawn was that the sun would help melt the snow despite the freezing temperatures.  I think it was really an opportunity for a respite and permission for people in authority to finally admit that they were out of ideas.  The fault was with planning, after all, and no amount of frantic response after the fact would really make up for it.  There was also probably something mythological at work.  The dawn chases away evil.  It chases away vampires, werewolves, and even makes zombies less frightening.  Sometimes it even helps us forget bad choices.

Dawn was beautiful when it came.  With the dawn came hope.  A black truck with a S.W.A.T. bumper sticker sped by and moved out beyond the tiny circumferences of the world immediately around my car.  And then ten minutes later cars started moving, just as promised.  I looked at my speedometer and realized I was actually moving at five miles an hour.  I was worried that at those speeds, I would spin out of control – it just seemed so much faster than I was used to.

After passing the gates to Stone Mountain, I was happy to discover that none of these people I had been stuck with for hours were actually headed my way.  So why on earth had they blocked me for so long?  It was another skating drive like I’d had on Ponce de Leon, with familiar streets made unfamiliar by white powder.  I would occasionally pass gangs of curious children out playing, getting supplies, breaking into abandoned cars, whatever.  The rules had changed.  I didn’t even stop for red lights anymore, having shed all muscle memory of traffic regulations in the night.  I simply slowed down at intersections and enjoyed my newfound freedom of movement.

A left turn onto Hewitt Road to get to my own street, carefully maneuvered.  I didn’t want to slide into the gutter a mile from home – that would just be embarrassing after all that.  On a tiny two lane street, I finally passed the last signs of the Snowmageddon.  Seven cars, all turned in different directions, some half on the road, others on people’s lawns, all abandoned.  I’d seen scenes like this all night but in the light of day the abandoned cars were particularly striking, more like the panoramic scenes of a disaster movie.  People who leave their cars in the middle of the road must really think life sucks.

I navigated slowly around these cars, watching for zombies to jump out, pumping my brakes the whole way, and finally made a left turn onto Oak Road.  There had been a single car behind me, matching my speed and following my lead as we passed cars and avoided icy slicks.  I was happy to pass on the survival skills I’d learned in the night to this fellow traveler of the post-apocalyptic highways and byways.  My little buddy, however, went right when I’d gone left, and I was alone again.

I slid into my driveway, walked up to the door, found that my house key wasn’t working and banged on the door, desperately, until someone let me in.  Have you ever played the XBox game Left 4 Dead?  At the end of each level you find a safe house and after everyone has freed themselves of their zombie pursuers, you can shut and bolt the door behind you.  That’s how it felt to finally be in my house again after that fifteen hour ordeal.  I was home again, I was warm, and I was loved.

And you know what?  There was even food.

Razzle Dazzle

kinect for XBox One

People continue to ask what the difference is between the Kinect for XBox One and the Kinect for Windows v2.  I had to wait to unveil the Thanksgiving miracle to my children, but now I have some pictures to illustrate the differences.

side by side

On the sensors distributed through the developer preview program (thank you Microsoft!) there is a sticker along the top covering up the XBox embossing on the left.  There is an additional sticker covering up the XBox logo on the front of the device.  The power/data cables that come off of the two sensors look a bit like tails.  Like the bodies of the sensors, the tails are identical.  These sensors plug directly into the XBox One.  To plug them into a PC, you need an additional adapter that draws power from a power cord, sends data out over a USB 3.0 cable, and connects to the sensor through the special plug shown in the picture below.

usb

So what’s with those stickers?  It’s a pattern called razzle dazzle (and sometimes razzmatazz).  In World War I, the British navy used it as a form of camouflage for warships.  Its purpose is to confuse rather than conceal — to obfuscate rather than occlude.

war razzle dazzle

Microsoft has been using it not only for the Kinect for Windows devices but also in developer units of the XBox One and controllers that went out six months ago. 

This is a technique of obfuscation popular with auto manufacturers who need to test their vehicles but do not want competitors or media to know exactly what they are working on.  At the same time, automakers do use this peculiar pattern to let their competitors and the media know that they are, in fact, working on something.

car razzle dazzle

What we are here calling razzle dazzle was, in a more simple age, called the occult.  Umberto Eco demonstrates in his fascinating exploration of the occult, Foucault’s Pendulum, that the nature of hidden knowledge is to make sure other people know you have hidden knowledge.  In other words, having a secret is no good if people don’t know you have it.  Dr. Strangelove expressed it best in Stanley Kubrick’s classic film:

Of course, the whole point of a Doomsday Machine is lost if you keep it a secret!

A secret, however, loses its power if it is ever revealed.  This has always been the difficulty of maintaining mystery series like The X-Files and Lost.  An audience is put off if all you ever do is constantly tease them without telling them what’s really going on. 

magic

By the same token, the reveal is always a bit of a letdown.  Capturing bigfoot and finding out that it is some sort of hairy hominid would be terribly disappointing.  Catching the Loch Ness Monster – even discovering that it is in fact a plesiosaur that survived the extinction of the dinosaurs – would be deflating compared to the sweetness of having it exist as a pure potential we don’t even believe in.

This letdown even applies to the future and new technologies.  New technologies are like bigfoot in the way they disappoint when we finally get our hands on them.  The initial excitement is always short-lived and is followed by a peculiar depression.  Such was the case in an infamous blog post by Scott Hanselman called Leap Motion Amazing, Revolutionary, Useless – but known informally as his Dis-kinect post – which is an odd and ambivalent blend of snarky and sympathetic.  Or perhaps snarky and sympathetic is simply our constant stance regarding the always impending future.

bigfoot

The classic bad reveal – the one that traumatized millions of idealistic would-be Jedi – is the quasi-scientific explanation of midichlorians in The Phantom Menace.  The offenses are many – not least because the mystery of the force is simply shifted to magic bacteria that pervade the universe and live inside sentient beings – an explanation that explains nothing but does allow the force to be quantified in a midichlorian count.

The midichlorian plot device highlights an important point.  Explanations, revelations and unmaskings do not always make things easier to understand, especially when it’s something like the force that, in some sense, is already understood intuitively.  Every child already knows that by being good, one ultimately gets what one wants and gets along with others.  This is essentially the lesson of that ancient Jedi religion – by following the tenets of the Jedi, one is able to move distant objects with one’s will, influence people, and be one with the universe.  An over-analysis of this premise of childhood virtue destroys rather than enlightens.

the force razzle dazzle

The force, like virtue itself, is a kind of razzle dazzle – by obfuscating it also brings something into existence – it creates a secret.  In attempts to explain the potential of the Kinect sensor, people often resort to images of Tom Cruise at the Desk of the Future or Picard on the holodeck.  The true emotional connection, however, is with that earlier (and adolescent) fantasy awakened by A New Hope of moving things by simply wanting them to move, or changing someone’s mind with a wave of the hand and a few words – these are not the droids you are looking for.  Ben Kenobi’s trick in turn has its primordial source in the infant’s crying and waving of the arms as a way to magically make food appear. 

It’s not coincidental, after all, that Kinect sensors have always had both a depth sensor to track hand movements as well as a virtual microphone array to detect speech.

Kinect for Windows v2 First Look


I’ve had a little less than a week to play with the new Kinect for Windows v2 so far, thanks to the developer preview program and the Kinect MVP program.  The original unboxing video is on Vimeo.  So far it is everything Kinect developers and designers have been hoping for – full HD through the color camera and a much improved depth camera as well as USB 3.0 data throughput. 

Additionally, much of the processing is now occurring on the GPU rather than the onboard chip or your computer’s CPU.  While amazing things were possible with the first Kinect for Windows sensor, most developers found themselves pushing the performance envelope at times and wishing they could get just a little more resolution or just a little more data speed.  Now they will have both.


At this point the programming model has changed a bit between Kinect for Windows v1 and Kinect for Windows v2.  While knowing the original SDK will definitely give you a leg up, a bit of work will still need to be done to port Kinect v1 apps to the new Kinect v2 SDK when it is eventually released.

What I find actually confusing is the naming.  With the first round of devices that came out in 2010-11, we had the Kinect for XBox and Kinect for Windows.  It makes sense that the follow up to Kinect for XBox is the “Kinect for XBox One”.  But the follow up to Kinect for Windows is “Kinect for Windows v2” so we end up with the Kinect for XBox One as the correlate to K4W2. Furthermore,  by “Windows” we mean Windows 8 (now 8.1) so to be truly accurate, we really should be calling the newest Windows sensor K4W8.1v2.  For convenience, I’ll just be calling it the “new Kinect” for a while.


What’s different between the new Kinect for XBox One and the Kinect for Windows v2?  It turns out not a lot.  The Kinect for XBox One has a special connector that draws both power and data from the XBox One.  Because it is a non-standard connector, it can’t be plugged straight into a PC (unlike the original Kinect, which had a standard USB 2.0 plug).

To make the new Kinect work with a PC, then, requires a special breakout board.  This board serves as an adapter with three ports – one for the Kinect, one for a power source, and finally one for a standard USB 3.0 cable.

We can also probably expect the firmware on the two versions of the new Kinect sensor to diverge over time, as occurred with the original Kinect.


Skeleton detection is greatly improved with the new Kinect.  Not only are more joints now detected, but many of the jitters developers became used to working around are now gone.  The new SDK recognizes up to 6 skeletons rather than just two.  Finally, because of the improved Time-of-Flight depth camera, which replaces the Primesense technology used in the previous hardware, the accuracy of the skeleton detection is much better and includes excellent hand detection.  Grip recognition as well as Lasso recognition (two fingers used to draw) are now available out of the box – even in this early alpha version of the SDK.


I won’t hesitate to say – even this early in the game – that the new hardware is amazing and is leaps and bounds better than the original sensor.  The big question, though, is whether it will take off the way the original hardware did.

If you recall, when Microsoft released the first Kinect sensor they didn’t have immediate plans to use it for anything other than a game controller – no SDK, no motor controller, not a single luxury.  Instead, creative developers, artists, researchers and hackers figured out ways to read the raw USB data and started manipulating it to create amazingly original applications that took advantage of the depth sensor – and they posted them to the Internet.

Will this happen the second time around?  Microsoft is endeavoring to do better this time by getting an SDK out much earlier.  As I mentioned above, the alpha SDK for Kinect v2 is already available to people in the developer preview program.  The trick will be in attracting the types of creative people that were drawn to the Kinect three years ago – the kind of creative technologists Microsoft has always had trouble attracting toward other products like Windows Phone and Windows tablets.

My colleagues and I at Razorfish Emerging Experiences are currently working on combining the new Kinect with other technologies such as Oculus Rift, Google Glass, Unity 3D, Cinder, Leap Motion and 4K video.  Like a modern day scrying device (or simply a mad scientist’s experiment) we hope that by simply mixing all these gadgets together we’ll get a glimpse at what the future looks like and, perhaps, even help to create that future.

I Just Inceptioned Visual Studio 2013


Building Windows Store apps in Visual Studio 2013 has gotten a lot more fun with the Simulator.  At first, this seemed to be the same thing as the Emulator for Windows Phone development, but there are some interesting differences.

First, the Simulator actually seems to be closer to the simulator used for PixelSense (née Microsoft Surface 2) development since it allows us to use a mouse to simulate finger touches as well as two finger gestures.  In general, we should all be using touch screens for development – but in the field or in unusual environments like Parallels running on a Mac, this isn’t always doable.  Being able to use the simulator gives us an out.  Additionally, because it allows us to simulate alternative aspect ratios and resolutions, it can be handy even when a touch display is readily available.

The really cool thing about the Simulator, though, is that when it fires up, it seems to create a VM of my current system.  I start a new project, set the debug target to “Simulator” and punch F5. 

My desktop background image shows up inside the Simulator and all my apps show up in the Tiles screen. 

I can even search for Visual Studio 2013 with the Search charm and find VS13. 

Then I can fire it up. 

Then I can look at the bottom of the file menu and, under recent project, find the project I am currently running inside the Simulator! 

The next step is obvious, right?  I take the instance of Visual Studio running inside my Simulator, set its debug target to “Simulator”, and hit F5 to get a neat message:

“Unable to start the Simulator.  Another user on this computer is running Simulator, can not start Simulator.”

This is not standard English, so it’s especially fascinating. As everyone knows, the worthwhile Microsoft error messages are the ones that have never been spellchecked.

Does anyone know if I can log into the simulator as a different user at this point?  This is a rabbit hole I really want to go down.

Ghost Hunting with Kinect

Paranormal Activity 4

I don’t usually try to undersell the capabilities of the Kinect.  Being a Microsoft Kinect for Windows MVP, I actually tend to promote all the things that Kinect currently does and one day will do.  In fact, I have a pretty big vision of how Kinect, Kinect 2, Leap Motion, Intel’s Perceptual Computing camera and related gestural technologies will change the way we interact with our environment.

Having said that, let me just add that Kinect cannot find ghosts.  It might reveal bugs in the underlying Kinect software – but it cannot find ghosts.

Nevertheless, “experts” are apparently using Kinect sensors to reveal the presence of ghosts.  Here’s a clip from Travel Channel’s Ghost Adventures.  It’s an episode called Cripple Creek and you’ll want to skip ahead to about 3:50 (ht to friend Josh Blake for finding this).

The logic of this is based on some very sophisticated algorithms the Kinect uses to identify “skeletons” – or outlines of the human form.  The current Kinect can spot two skeletons at a time including up to 20 joints on each skeleton.  Additionally, it has a “seated mode” that allows it to identify partial skeletons from about the waist up – this tends to be a little more dodgy though.  All of this skeleton information is provided primarily to allow developers to create games that track the human body and, typically, animate an onscreen avatar that emulates the player’s movements.

The underlying theory behind using it for ghost hunting is that, since when someone passes in front of the Kinect sensor the Kinect will typically register a skeleton, it follows that if the Kinect registers a skeleton someone must have passed in front of it.

skeleton

Unfortunately, this is not really the case.  There are lots of forum posts from developers asking how to work around peculiarities with the Kinect skeletons, and anyone who has played a Kinect game on XBox has probably noticed that the sensor will occasionally provide false positives (which, for gaming, is ultimately better than false negatives).  In fact, even my dog would sometimes register as a skeleton when he ran in front of me while I was playing.

Perhaps you’ve also noticed that in an oddly shaped room, Kinect is prone to register false speech commands.  This happens to me especially when I’m trying to watch my favorite ghost hunting show on Netflix – probably because of the feedback from the television itself (which the Kinect tends to be very good at cancelling out if you take the trouble to configure it according to instructions – but I don’t).  I know this isn’t a ghost pausing my TV show, though, because the Kinect isn’t set up to hear anything I don’t hear.  Just because the Kinect emulates some human features – like following simple voice commands like “Play” and “Pause” – doesn’t mean it’s something from The Terminator, The Matrix or Minority Report.  It is no more psychic than I am and it doesn’t have super hearing.

Kinect 2 IR

Similarly, skeleton tracking on Kinect isn’t specially fitted to see invisible things.  It uses a combination of an infrared camera and a color camera to collect data which it interprets as a human structure.  But these cameras don’t see anything the human eye can’t see with the lights on.  Those light photons that are being collected by the sensors still have to bounce off of something visible, even if you can’t see the light beams themselves.  Perhaps part of the illusion is that, because we can’t see the infrared light being emitted and collected by the Kinect, people assume that what it detects also can’t be seen?

Here’s another episode of Ghost Adventures on location at the haunted Tuolumne Hospital.  It’s especially remarkable because the Kinect here is doing exactly what it is expected to do.  As the subject lifts himself off the bed, he separates his outline from the background and Kinect for Windows’ “seated mode” identifies his partial skeleton from approximately the waist up.  The intrepid ghost hunters then scream out “It was in your gut!”  Television gold.

Apparently the use of unfamiliar (and misunderstood) technology provides a veneer of seriousness to what these people do on their shows.  Another piece of weird technology all these shows use is something called EVP – electronic voice phenomena.  Here the idea is that you put out a tape recorder or digital recorder and let it run for a while – often with a white noise machine in the background.  Then you play it back later and you start hearing things you didn’t hear at the time.  The trick is that if you run these recordings through software intended to clean up audio and discover voices, you remarkably discover voices that you never heard at the time but which must, of course, be the voices of ghosts.

I can’t help feeling, however, that it isn’t the world of extrasensory phenomena that is mysterious and baffling to us.  It’s all the crazy new technologies appearing every day that are truly supernatural and overwhelming.  Perhaps tying all of these frightening technologies to our traditional myths and collective superstitions is just a way of making sense of it all and normalizing it.

Book Review: Augmented Reality with Kinect


Rui Wang’s Augmented Reality with Kinect from Packt Publishing is my new favorite book about the Kinect sensor.  It’s a solid 5 out of 5 for me and if you want to learn how to use the Kinect for Windows SDK 1.5 and above with C++, then this is the book for you.  That said, however, it is also an incredibly frustrating software programming book.

The first issue I have with it is that it isn’t really about Augmented Reality, as such.  The way AR fits in is simply that the central project created in the course of the book is a Fruit Ninja-style game using Kinect with a player overlay.  AR seems very much incidental to the book.

What it actually is is an intro book to C++ and the Kinect for Windows SDK.  That is a much-needed resource in the Kinect community and one I had long been on the lookout for.  I’m not sure why the publisher decided to add this “AR” twist to the concept for the book.  It really wasn’t necessary.

Second, the book’s tool chain is Visual Studio 2012, C++, Kinect for Windows SDK 1.5 and OpenGL.  One of these is not like the others!  In the second chapter, we are then told that the book covers OpenGL rather than DirectX because “…it is only used under Windows currently, and can hardly support languages except C/C++ and C#.”  Hmmm.

With those reservations out of the way, this is a really fine book about programming for the Kinect sensor.  C++ is the right way to do vision processing and this is a great introduction to the topic.  Along the way, it even includes a nice overview of face tracking.

Kinect PowerPoint Mapper

I just published a Kinect mapping tool for PowerPoint allowing users to navigate through a PowerPoint slide deck using gestures.  It’s here on CodePlex: https://k4wppt.codeplex.com/ .  There are already a lot of these out there, by the way – one of my favorites is the one Josh Blake published.

So why did I think the world needed one more? 


The main thing is that, prior to the release of the Kinect SDK 1.7, controlling a slide deck with a Kinect was prone to error and absurdity.  Because they are almost universally written for the swipe gesture, prior PowerPoint controllers using Kinect had a tendency to recognize any sort of hand waving as an event.  Consequently, as a speaker innocently gesticulated through his points, the slides would begin to wander on their own.

The Kinect for Windows team added the grip gesture as well as the push gesture in the SDK 1.7.  It took several months of machine learning work to get these recognizers working effectively in a wide variety of circumstances.  They are extremely solid at this point.

The Kinect PowerPoint Mapper I just uploaded to CodePlex takes advantage of the grip gesture to implement a grab-and-throw for PowerPoint navigation.  This effectively disambiguates navigation gestures from other symbolic gestures a presenter might use during the course of a talk.

I see the Kinect PowerPoint Mapper serving several audiences:

1. It’s for people who just want a more usable Kinect-navigation tool for PowerPoint.

2. It’s a reference application for developers who want to learn how they can pull the grip and the push recognizers out of the Microsoft Kinect controls and use them in combination with other gestures.  (A word of warning, though – while double grip is working really well in this project, double push seems a little flaky.)  One of the peculiarities of the underlying interfaces is that the push notification is a state, when for most purposes it needs to be an event.  The grip, on the other hand, is basically a pair of events (grip and ungrip) which need to be transposed into states.  The source code for the Mapper demonstrates how these translations can be implemented – a simplified sketch follows this list.

3. The Mapper is configuration based, so users can actually use it with PC apps other than PowerPoint simply by remapping gestures to keystrokes.  The current mappings in KinectKeyMapper.exe.config look like this:

    <add key="DoubleGraspAction" value="{F5}" />
    <add key="DoublePushAction" value="{Esc}" />
    <add key="RightSwipeWithGraspAction" value="{Right}" />
    <add key="LeftSwipeWithGraspAction" value="{Left}" />
    <add key="RightSwipeNoGraspAction" value="" />
    <add key="LeftSwipeNoGraspAction" value="" />
    <add key="RightPush" value="" />
    <add key="LeftPush" value="" />
    <add key="TargetApplicationProcessName" value="POWERPNT"/>

Behind the scenes, this is basically translating gesture recognition algorithms (some complex, some not so much) into keystrokes.  To have a gesture mapped to a different keystroke, just change the value associated with the gesture – making sure to include the squiggly brackets.  If the value is left blank, the gesture will not be mapped.  Finally, the TargetApplicationProcessName tells the application which process to send the keystroke to if there are multiple applications open at the same time.  To find a process name in Windows, just go into the Task Manager and look under the Processes tab.  The process name for all currently running applications can be found there – just remove the dot-E-X-E at the end of the name.

4. The project ought to be extended as more gesture recognizers become available from Microsoft or as people just find good algorithms for gesture recognizers over time.  Ideally, there will ultimately be enough gestures to map onto your favorite MMO.  A key mapper created by the media lab at USC was actually one of the first Kinect apps I started following back in 2010.  It seemed like a cool idea then and it still seems cool to me today.

Free XBox One with purchase of Kinect

the difference between the Kinect and Kinect for Windows

It’s true.  In November, Microsoft will release the Kinect 2 for approximately $500.  The new Kinect2 comes with HD video at 30 fps (we currently get 640×480 with the Kinect1), much improved skeleton tracking and improved audio tracking.  One of the most significant changes is in depth tracking.  Instead of the Primesense structured light technology used in Kinect1, Kinect2 uses the more traditional and more accurate time-of-flight technology, which measures how long emitted infrared light takes to bounce back to each pixel rather than inferring depth from the distortion of a projected pattern.  Since most time-of-flight depth cameras start at around $1K, getting this in a Kinect2 along with all the other features for half that price is pretty amazing.

But the deal doesn’t stop there.  If you buy the Kinect for XBox, you automatically get an XBox for free!  You actually can’t even buy the XBox on its own.  You only can get it if you buy the Kinect2.

How do they give the new XBox One away for free, you may ask?  Apparently the price of the XBox One will be subsidized through game sales.  Since the games for XBox will tend to have some sort of Kinect capability – enabled by the requirement that you can’t get the XBox on its own – the expectation seems to be that these unique games will sell in enough volume that the cost of producing the XBox One will eventually be recouped.

But what if you aren’t interested in gaming?  What if – like at my company, Razorfish – you are mainly interested in building commercial interfaces and artistic experiences with the Kinect technologies?

In this case, Microsoft will be providing another version of the Kinect (one assumes that it will be called something like Kinect2 for Windows or perhaps K4W2 – its Star Wars droid name) that has a USB 3 adapter that will plug into a PC.  And because it is for people who are not interested in gaming, it will probably cost a bit less than $500 to make up for the fact that it doesn’t come with a free XBox One and won’t ever recoup that hardware cost from non-gamers.  By the way, this version of the Kinect sensor will be released some time – perhaps months? – following the K4X1 November release.

Finally, to make the distinction between the two kinds of Kinect2s clear, the Kinect2 for XBox will not plug into a PC and Kinect2 for Windows will not plug into an XBox.  It’s just cleaner that way.

With the original Kinect, there was quite a bit of confusion introduced by the fact that when it was released it used a typical USB connector that could be plugged into either the XBox 360 or a PC.  This turned out to be a great thing for Microsoft because it set off an amazing flood of creativity among hackers who started building their own frameworks and drivers to read the USB data and then build applications on top of it. 

Overnight, this grassroots Kinect Hacks movement made Microsoft cool again.  There is currently talk going around that the USB connector on the Kinect was simply fortuitous.  I’m pretty sure, however, that it was prescient – at least on someone’s part – and the intent was – again on someone’s part if not everyone’s – to provide the sort of platform that could be taken advantage of to build more than games.

As Microsoft moved forward with the development of the Kinect SDK as a platform for developers to build Kinect applications on, they decided that this should be coupled with a special version of the “Kinect” called Kinect for Windows that would carry special firmware supporting near mode.  Additionally, the commercial version of the hardware (which was pretty much the same as the gaming version of the hardware) required a special dongle (see photo above) that would help regulate the power on PCs.  The biggest difference between the two Kinects, however, was the licensing terms and the price.  Basically, if you wanted to use Kinect technology commercially with the Kinect SDK, you needed to use the Kinect for Windows sensor, which carried a higher, un-subsidized price.

This pricing split, naturally, caused a lot of confusion.  People wondered why Microsoft was overcharging for the commercial version of the sensor when, with a Copernican frame of mind, they might just as easily have asked why Microsoft was undercharging for the gaming version of the sensor.

With the Kinect2 sensors, all of this confusion is removed by fiat since the gaming version and commercial version now have different connectors.  From a hardware standpoint, rather than merely a legal one, you cannot use your gaming sensor with a PC.

Of course, you could also perform a Copernican revolution on my framing above and suggest that it isn’t the XBox One that is being subsidized through the purchase of the Kinect2 but rather the Kinect2 that is being subsidized through the purchase of the XBox One.

It’s all a bit of an accounting trick, isn’t it?  Basically the money has to come from somewhere.  Given that Microsoft received a lot of free, positive PR from the Kinect hacking movement, it would be cool if they gave a little back and made the non-gaming Kinect2 sensor more accessible. 

On the other hand, it is already the case that a time-of-flight camera for under $500, along with all the other features loaded onto the Kinect2, is a pretty amazing deal for weekend coders, installation artists, and retailers.

In any case, it gives me peace of mind to think of the Kinect2 sensor as a $500 device that comes with a free XBox One.  A lot of the angst I might otherwise feel about pricing simply melts away.  Though if Microsoft felt like subsidizing the price of the K4W2 sensor with some of the excess money they make off of SharePoint licenses, I’d be cool with that, too.

Kinect Application Project Template

Over the past year, every time I start a new Kinect for Windows project, I’ve basically just copied the infrastructure code from a previous project.  The starting point was the code my friend Jarrett Webb and I wrote for our book Beginning Kinect Programming with the Microsoft Kinect SDK, but I’ve made incremental improvements to this code as needed and based on pointers I’ve found in various places.  I finally realized that I’d made enough changes and it was time to just turn this base code into a project template for myself and my colleagues at work.  Realizing that there wasn’t a Kinect Application project template available yet on the Visual Studio Gallery, I uploaded it there, also.

The cool thing about templates uploaded to the gallery is that anyone with Visual Studio can now install it from the IDE.  If you select Tools | Extension Manager … and then search for “Kinect” under the Online Gallery, you should see something like this.  From here you can install the Kinect Application project template to your computer.

Kinect Application Project Template

If you then create a new project and look under C# | Windows, you will be able to build a Kinect WPF application with a bit of a head start.  Here are some key features:

1. Initialization Code

All the initialization code and Kinect stream event handlers are stubbed out in the InitSensor method.  All you need to do is uncomment the streams you want to use.  Additionally, the event handler code is stubbed out with the proper pattern for opening and disposing of frame objects.  Whatever you need to do with the image, depth and skeleton frames can be done inside those using statements.  This code also uses the latest agreed-upon best practices for efficiently managing streamed data as of the 1.7 SDK.

void sensor_ColorFrameReady(object sender
    , ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame frame = e.OpenColorImageFrame())
    {
        // the frame can be null if we arrive too late to grab it
        if (frame == null)
            return;

        // lazily allocate the pixel buffer once and reuse it
        if (_colorBits == null)
            _colorBits = new byte[frame.PixelDataLength];
        frame.CopyPixelDataTo(_colorBits);

        // replace this with your own frame-processing code
        throw new NotImplementedException();
    }
}
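
For context, the stubbed-out InitSensor follows a pattern roughly like this – a sketch of the idea rather than the template’s literal contents – with each stream commented out until you need it:

private void InitSensor(KinectSensor sensor)
{
    if (sensor == null)
        return;

    // uncomment just the streams you plan to use
    //sensor.ColorStream.Enable();
    //sensor.ColorFrameReady += sensor_ColorFrameReady;

    //sensor.DepthStream.Enable();
    //sensor.DepthFrameReady += sensor_DepthFrameReady;

    //sensor.SkeletonStream.Enable();
    //sensor.SkeletonFrameReady += sensor_SkeletonFrameReady;

    sensor.Start();
}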

2. Disposal Code

Whatever you enable in the InitSensor method, you will need to disable and dispose of in the DeInitSensor method.  Again, this just requires uncommenting the appropriate lines.  DeInitSensor also implements a disposal pattern that is somewhat popular now: the sensor is actually shut down on a background thread rather than on the main thread.  I’m not sure if this is a best practice as such, but it resolves a problem many C# developers were running into when shutting down their Kinect-enabled applications.
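
A minimal sketch of that disposal pattern – my approximation rather than the template’s exact code, assuming the color stream and handler from the example above:

private void DeInitSensor(KinectSensor sensor)
{
    if (sensor == null)
        return;

    // disable whatever was enabled in InitSensor
    sensor.ColorStream.Disable();
    sensor.ColorFrameReady -= sensor_ColorFrameReady;

    // stop the sensor on a background thread so the UI thread
    // doesn't block waiting for Stop() to complete
    System.Threading.Tasks.Task.Factory.StartNew(() => sensor.Stop());
}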

3. Status Changed Code

The Kinect can actually be disconnected mid-process, or simply not be on when you first run an application.  It is also surprisingly common to forget to plug the Kinect’s power supply in.  Generally, your application will just crash in such situations.  If you properly handle the KinectSensors.StatusChanged event, however, your application will simply start up again when you get the sensor plugged back in.  A pattern for doing this was first introduced in the KinectChooser component in the Developer Toolkit.  A lightweight version of this pattern is included in the Kinect Application project template.

void KinectSensors_StatusChanged(object sender
    , StatusChangedEventArgs e)
{
    // tear the sensor down if it was unplugged
    if (e.Status == KinectStatus.Disconnected)
    {
        if (_sensor != null)
        {
            DeInitSensor(_sensor);
        }
    }

    // spin it back up as soon as it is reconnected
    if (e.Status == KinectStatus.Connected)
    {
        _sensor = e.Sensor;
        InitSensor(_sensor);
    }
}
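
The handler has to be wired up once at startup – in the window’s constructor, for instance.  The subscription below is my sketch of that step, not the template’s exact code:

// subscribe once; the handler then re-initializes the sensor
// whenever it is unplugged and plugged back in
KinectSensor.KinectSensors.StatusChanged += KinectSensors_StatusChanged;

// grab a sensor that is already connected, if there is one
// (FirstOrDefault requires a using System.Linq directive)
_sensor = KinectSensor.KinectSensors
    .FirstOrDefault(s => s.Status == KinectStatus.Connected);
InitSensor(_sensor);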

4. Extension Methods

While most people were working on controls for the Kinect for Windows SDK, Clint Rutkas and the Coding4Fun guys brilliantly came up with the idea of developing extension methods for handling the various Kinect streams.

The extension methods included with this template provide lots of conversions from bitmap byte arrays to BitmapSource types (useful for WPF image controls) and vice versa.  This makes something like displaying the color stream – which can otherwise be rather hairy – easy.  The snippet below assumes there is an image control in the MainWindow named canvas.

using (ColorImageFrame frame = e.OpenColorImageFrame())
{
    if (frame == null)
        return;

    if (_colorBits == null)
        _colorBits = new byte[frame.PixelDataLength];
    frame.CopyPixelDataTo(_colorBits);

    // the new line: convert the raw bytes into a BitmapSource
    // and hand it to the image control
    this.canvas.Source = 
        _colorBits.ToBitmapSource(PixelFormats.Bgr32, 640, 480);
}
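
If you’re curious what a conversion like that looks like on the inside, here’s a rough sketch of such an extension method – the actual Coding4Fun-style implementation differs in its details:

using System.Windows.Media;
using System.Windows.Media.Imaging;

public static class BitmapExtensions
{
    public static BitmapSource ToBitmapSource(this byte[] pixels
        , PixelFormat format, int width, int height)
    {
        // stride = bytes per row = pixels per row * bytes per pixel
        int stride = width * ((format.BitsPerPixel + 7) / 8);
        return BitmapSource.Create(width, height, 96.0, 96.0
            , format, null, pixels, stride);
    }
}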

More in line with the original Coding4Fun Toolkit, the extension methods also make some very difficult scenarios trivial – for instance background subtraction (also known as green screening), skeleton drawing, and player masking.  These methods should make it easier to quickly mock up a demo, or even to show off the power of the Kinect in the middle of a presentation, using just a few lines of code.

private void InitSensor(KinectSensor sensor)
{
    if (sensor == null)
        return;

    // background subtraction needs all three streams
    sensor.ColorStream.Enable();
    sensor.DepthStream.Enable();
    sensor.SkeletonStream.Enable();
    sensor.Start();
    this.canvas.Source = sensor.RenderActivePlayer();
}

Again, this code assumes there is an image control in MainWindow named canvas.  You’ll want to put this code in the InitSensor method to ensure that it gets called again if your Kinect sensor accidentally gets dislodged.  To create a simple background subtraction image, enable the color, depth and skeleton streams and then call the RenderActivePlayer extension method.  By stacking another image beneath the canvas image, I can create an effect like this:

me on tatooine
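
In case the stacking isn’t clear, this is roughly what the layout amounts to, expressed in code-behind rather than the XAML you would normally write (the control names here are my assumptions, not the template’s):

// children of the same Grid cell render in order,
// so the backdrop sits beneath the player image
var grid = new Grid();

var backdrop = new Image();   // e.g. a Tatooine still
var canvas = new Image();     // receives the RenderActivePlayer output

grid.Children.Add(backdrop);  // added first: drawn underneath
grid.Children.Add(canvas);    // added second: drawn on top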

Here are some overloads of the RenderActivePlayer method and the effects they create.  I’ve removed Tatooine from the background in the following samples.


canvas.Source = sensor.RenderActivePlayer(System.Drawing.Color.Blue);

blue_man


canvas.Source = sensor.RenderActivePlayer(System.Drawing.Color.Blue
                , System.Drawing.Color.Fuchsia);

blue_fuschia


canvas.Source = sensor.RenderActivePlayer(System.Drawing.Color.Transparent
                , System.Drawing.Color.Fuchsia);

trasnparent_fuschia


And so on.  There’s also this one:

canvas.Source = sensor.RenderPredatorView();

predator


… as well as this oldie but goodie:

canvas.Source = sensor.RenderPlayerSkeleton();

skeleton_view

The base method uses the colors (and, quite honestly, most of the code) from the Kinect Toolkit that goes with the SDK.  As with the RenderActivePlayer extension method, however, there are lots of overloads, so you can change all the colors if you wish.

canvas.Source = sensor.RenderPlayerSkeleton(System.Drawing.Color.Turquoise
    , System.Drawing.Color.Indigo
    , System.Drawing.Color.IndianRed
    , trackedBoneThickness: 1
    , jointThickness: 10);

balls


Finally, you can also layer all these different effects (this snippet assumes a second image control named canvas2 stacked above canvas):

canvas.Source = sensor.RenderActivePlayer();
canvas2.Source = sensor.RenderPlayerSkeleton(System.Drawing.Color.Transparent);

everything

PRISM, Xbox One Kinect, Privacy and Semantics

loose lips

It’s interesting that at one time getting people to keep quiet was a priority for the government.  During World War II it promoted a major advertising campaign to remind people that “loose lips sink ships.”  During wartime (back when wars were temporary affairs), it was standard practice to suppress the flow of information and censor personal letters to ensure that useful information would not fall into enemy hands.  In a sense, privacy and national security were one.

Recent leaks about the NSA’s PRISM program suggest that things have dramatically changed.  We’ve realized for several years now that our cell phone service providers, our social networks, and our search engines are constantly tracking our physical and digital movements and mining that data for marketing.  We basically have traded our privacy for convenience in the same way that we accept ads on TV and on the Internet in exchange for free content. 

The dark side of all this is that this information gets passed along to third parties we didn’t even know about – at least until we started getting junk mail in our inboxes for products we have no interest in.

What we only suspected, until now, was that the infrastructure built to support these exchanges of personal information for services was also of interest to our government – that we are sharing our identifying information not only with content providers, service providers, spammers and junk mailers but also with the United States security apparatus.  Now that all that information has been collected, the government wants to mine it, too.

We don’t live in a police state today.  I don’t belong to either the far right wing or the far left wing – I’m neither an occupier nor a tea partay kind of guy – so I also don’t believe we are even close to slipping into a police state in the near future.  I’m not concerned that the government will now, or ever, use this information to track me down, and I am pretty confident that all this data mining will mainly be used to track down terrorists and to send me unwanted emails.  And yet, it bugs me on a visceral level that people are going through my stuff, whatever that ethereal stuff actually is.

Gears of War

The main argument against this cooties feeling about my privacy is that only metadata is being inspected and not actual content.  Unfortunately, this seems like a porous boundary to me.  To paraphrase Hegel’s overarching criticism of Kant, whenever we draw a line we also necessarily have to cross over it at the same time.  From everything I know about software, the only way to gather metadata is to inspect the content in order to generate metadata about it.  For instance, when a government computer system listens to phone traffic in order to pick out key words and constellations of words, it still has to listen to all the other words first in order to pick out what it is interested in. 

Moreover, according to Slate, the data mining being done by PRISM is incredibly broad:

It appears the National Security Agency’s sweeping surveillance is not something only Verizon customers should be concerned about. The agency has also reportedly obtained access to the central servers of major U.S. Internet companies as part of a secret program that involves the monitoring of emails, file transfers, photos, videos, chats, and even live surveillance of search terms.

The semantics of privacy today, as defined under the regime of the NSA, doesn’t mean that no one is listening to what you are saying – it just means that no one cares.  The best way to protect your privacy today is simply to be boring.

At the same time that all these revelations about PRISM were coming out (in fact on the very same day), Microsoft released a brief about privacy concerns around the new Xbox One’s Kinect peripheral.  Here’s an attempted explanation of the brief on Windows Phone Central I found particularly fascinating:

A lot of people feared that the Kinect would be able to listen to you when the Xbox One was off. Apparently, when off, the Xbox One is only listening for one command in its low-power state: “Xbox On”. It’s nice to know that you’re in control when the Kinect is on, off or paused. Some games though will require Kinect functionality (again, at the discretion of the game developers/publisher). That’s up to you to play or not play those games.


The author’s reassurance is based on a semantic sleight-of-hand.  The Kinect is not listening to you, according to the author, because it “is only listening for one command.”  This is an honest mistake, but a dangerous one.  In fact, in order to listen for one command, the Kinect has to have its microphone turned on, listening to everything anyone is saying.  What it is actually doing is only acting on one command – and, hopefully, throwing everything else away.  Additionally, I do have a bit of experience with Microsoft’s speech recognition technology, both on the Kinect and on the PC, and the “low-power state” modifier doesn’t particularly make sense.  It takes a similar amount of effort to identify insignificant data as it does to identify significant data, AFAIK.  (There’s always the possibility that the Xbox Kinect has an on-board language processor, separate from the rest of its speech recognition chain, just to listen for this one command – but I haven’t heard about anything like that so far.)

Halo IV

The original Microsoft brief, called Privacy by Design, upon which I assume the Windows Phone Central post is based, doesn’t play this particular semantic game – though it plays another.  At the same time, it also seems particularly, and intentionally, vague about certain points.

The semantic game in Microsoft’s privacy post is around the term ‘design’.  Does design here refer to the hardware design, the software architecture, the usability design, or the marketing campaign?  These are all things encompassed by the term design and, in the linked article, ‘design’ could be referring to any of them.  If it refers to the marketing campaign and UX, as it probably does, this doesn’t actually provide me any guarantees of privacy.  All it tells me is that Microsoft doesn’t initially intend to use the new Kinect sitting in my living room to collect random conversations.  ‘Design’ may refer to the initial software architecture, but this doesn’t provide any particular guarantees either, since any post-release software update can change the way the software works.

To put this another way, the article describes Microsoft’s intent but doesn’t provide any guarantees.  Is there anything in the hardware that will prevent speech data from being mined in the future?  Probably not.  In that case, is there anything in the licensing that prevents Microsoft from mining this data?  Microsoft’s privacy brief doesn’t even touch on this.

So should you be concerned?  Totally – and here’s why.  In its pursuit of security, the NSA has built an infrastructure that performs better and better the more information it is fed.  Do terrorists play Xbox?  I have no idea.  Would the NSA want all that data anyway? 

Call of Duty

Hypothetically, the new Xbox One and the Kinect can collect this information on us.  Here’s how.  According to recent Microsoft announcements, the Xbox One must be connected to the Internet once every 24 hours in order to play games on it.  The new Kinect is designed to always be on, and I am obligated to have it (I can’t buy an Xbox One without it).  Even when my Xbox One is off, my Kinect is still on, listening for a command to turn it back on.  The infrastructure is there, and the NSA’s PRISM project is a monster hungry for it.

To be clear, I don’t think Microsoft is particularly interested in collecting this data.  Microsoft has no use for the typically rather boring conversations I have in my living room, and it won’t glean any useful marketing information from them either. 

Nevertheless, I think it would be extremely forward-looking of Microsoft to explain what they have put in place to prevent the government from ever issuing a request for this data – and getting it, the way it has already gotten other data from Verizon, AT&T, Microsoft, Yahoo, Google, Facebook, AOL, Skype, YouTube, and Apple.

Has Microsoft designed a mechanism, either through hardware or through a customer agreement they won’t/can’t rescind in the future, that will future-proof my privacy?