Minecraft in Virtual Reality and Augmented Reality

minecraft_ar_godview

Microsoft recently gave what may be the best stage demo they have ever done, at E3. Microsoft employees played Minecraft in a way no one had ever seen before: on a table top, as if it were a set of Legos. Many people speculated on social media that this may be the killer app HoloLens has been looking for.

What is particularly exciting about the way the demo captured people’s imaginations is that people can now start envisioning what AR might actually be used for. They are even getting a firm grip on the difference between virtual reality, which creates an immersive experience, and augmented reality, which creates a mixed experience overlaying digital objects on real-world objects.

Nevertheless, there is still a tendency to see virtual reality, exemplified by the Oculus Rift, and augmented reality, exemplified by HoloLens and Magic Leap, as competing solutions. In fact, they are complementary solutions. They don’t compete with one another any more than your mouse and your keyboard do.

Bill Buxton has famously said that everything is best for something and worst for something else. By contrasting the Minecraft experience for Oculus and HoloLens, we can better see what each technology is best at.

 

minecraft_vr

The Virtual Reality experience for Oculus is made possible by a free hacking effort called Minecrift. It highlights the core UX flavor of almost all VR experiences – they are first person, with the player fully present in a 3D virtual world. VR is great for playing Minecraft in adventure or survival mode.

minecraft_ar_wall

Adventure mode with HoloLens is roughly equivalent to the adventure mode we get today on a PC or Xbox console, with the added benefit that the display can be projected on any wall. As far as we can tell from the demo, though, it isn’t actually 3D, despite HoloLens being capable of displaying stereoscopic scenes.

What does work well, however, is Minecraft in Creative mode. This is basically the god view we have become familiar with from various strategy and resource-management games over the years.

minecraft_ar_closeup

God View vs First Person View

In a fairly straightforward way, it makes sense to say that AR is best for a god-centric view while VR is best for a first-person view. For instance, if we wanted to create a simulation that allows users to fly a drone or manipulate an undersea robot, virtual reality seems like the best tool for the job. When we need to create a synoptic view of a building or even a city, on the other hand, then augmented reality may be the best UX. Would it be fair to say that all new UX experiences fall into one of these two categories?

 

Most of our metaphors for building software, building businesses and building every other kind of buildable thing, after all, are based on the Lego building block and its precursors, Lincoln Logs and Erector Sets. We play games as children, in part, to prepare ourselves for thinking as adults. Minecraft was similarly built on the idea of simulating a Lego-block world that we could not only build but also virtually play in on the computer.

lego_astronaut

The playful world of Lego is built on two things: the blocks themselves, formed into buildings and scenes, and the characters we identify with who live inside the world of blocks. In other words, the god view and the first-person view.

coffee prince

It should come as no surprise, then, that these two core modes of our imaginative lives should stay with us through our childhoods and into our adult approaches to the world. We have both an interpersonal side and an abstract, calculating side. The best leaders have a bit of both.

minecraft_ar_limburgh 

You apparently didn’t put one of the new coversheets on your TPS report

The god view in business tends to be the synoptic view demanded by corporate executives and provided in the form of dashboards and Crystal Reports. It would be a shame if AR ended up confined to that use case when it can provide so much more, in more interesting ways. As both VR and AR mature over the next five years, we all have a responsibility to keep them anchored in the games of our childhood and avoid letting them inherit the faults and misdemeanors of the corporate adult world.

Update 6/20

A recent Ars Technica article indicates that the wall-projected HoloLens version of Minecraft in adventure mode can in fact be played in true 3D:

One other impressive feature of the HoloLens-powered virtual screen was the ability to activate a three-dimensional image, so that the scene seemed to recede into the wall like a window box. Unlike a standard 3D monitor, this 3D image actually changed perspective based on the viewing angle. If I went up near the wall and looked at the screen from the left, I could see parts of the world that would usually be behind the right side of the wall, as if the screen was simply a window into another world.

Jon Snow Lives (and how he would do it)

 

unjon

Those watching Game of Thrones on TV just got the last of George R. R. Martin’s big whammies. Book readers have known about this since around 2011 and have had almost four years to come up with rescue plans for one of their favorite characters. Here are a few that have been worked out over the years. For a series famous for killing characters off, there are a surprising number of ways to bring people back in Westeros. Remember: in the game of thrones, you win or you die or you somehow get resurrected. People always forget that last part.

1. Julius Caesar Jon — dead is dead. Got to throw this out there even though no one really believes it.
2. Jesus Christ Jon — As the prophesied Azor Ahai, Jon somehow resurrects himself. The best scenario I’ve seen is that they attempt to burn his body but he rises from the ashes.

dondarrion
3. UnJon — Melisandre does some blood magic to bring Jon back like Thoros brings back Beric Dondarrion. Mel and Thoros worship the same god and use similar magic.
4. Sherlock Holmes Jon — Jon fakes his own death in order to leave the wall.
5. J.R. Ewing Jon — the antecedents are Arya at the Twins and Theon’s fake-out with Bran’s and Rickon’s bodies at Winterfell — Jon isn’t dead at all and survives his cuts. The narrative and screen cuts just make us think he’s dead.
6. General Hospital Jon — in a coma.
7. Jon Cleese — just a flesh wound.

8. Do Over Jon — Mel or Wildling medicine restores Jon with no lasting effects. No better or worse than before.
9. Cold Hands Jon — there are good wights, too, after all. In the books, there is a character referred to as Coldhands who has all the characteristics of a wight but is helpful.

wight 
10. Other Jon (aka Darth Jon) — and then again, there are bad wights (most of them, in fact). This would be the graying of Jon Snow’s character if he goes over to the dark side, per prophecy and fan theory.
11. Alter-Jon — like the Mance/Rattleshirt switcheroo in the books, someone else has been glamored to look like Jon. The Faceless Men have this magic, so we know it exists.

Bran 
12. Targ Warg Jon — warging runs in the Stark blood. This opens up additional variations:

12a. Ghost Jon — Jon lives out the next season in his wolf Ghost.
12b. Ice Jon — He goes to Ghost but comes back into his own body which is preserved in a frozen state under the Wall.
12c. Wun Wun Jon — From Ghost to a nice new strong body with a simple mind (a book specific theory).
12d. Stannis Jon — from Ghost to Stannis — if Stannis is dead, he won’t be needing his body. Plus this would allow Jon to prosecute his war against the Lannisters, taking up from his brother Robb.
12e. Dragon Jon — Jon goes to Ghost and then into one of Dany’s dragons (or maybe another dragon under the Wall or under Winterfell). Makes him literally the third head of the dragon (you followers of ancient Westeros prophecies know what I’m talkin’ about – yeah you do).

13. Kentucky Fried Jon — like Victarion’s arm (old book history), a healing magic to sustain life through burning.

clegane

14. Frankenstein’s Monster Jon – Qyburn, we discover in the season finale, has basically brought Gregor Clegane back to life (Gregorstein) through some kind of laboratory science. If Jon is put on ice, Qyburn may eventually do the same for him. Or mix and match the two, who knows.

Emgu, Kinect and Computer Vision

monalisacv

Last week saw the announcement of the long-awaited OpenCV 3.0 release. OpenCV is the open source computer vision library, originally developed by Intel, that allows hackers and artists to analyze images in fun, fascinating and sometimes useful ways. It is an amazing library when combined with a sophisticated camera like the Kinect 2.0 sensor. The one downside is that you typically need to know C++ to make it work for you.

This is where Emgu CV comes in. Emgu is a .NET wrapper library for OpenCV that lets you use much of the power of OpenCV on .NET platforms like WPF and WinForms. Furthermore, all it takes to make it work with the Kinect is a few conversion functions that I will show you in this post.

Emgu gotchas

The first trick is doing all the correct things to get Emgu working for you. Because it is a wrapper around C++ classes, there are some not-so-straightforward things you need to remember to do.

1. First of all, Emgu downloads as an executable that extracts all its files to your C: drive. This is actually convenient since it makes sharing code and writing instructions immensely easier.

2. “Any CPU” isn’t going to cut it when setting up your project. You will need to specify your target CPU architecture, since C++ isn’t as flexible about this as .NET is. Also, remember where your project’s executable is being compiled to. For instance, an x64 debug build gets compiled to the folder bin/x64/Debug.

3. You need to get the native OpenCV C++ library files into the appropriate target folder for your project. Basically, when you run a program using Emgu, your executable expects to find the OpenCV libraries in its root directory. There are lots of ways to do this, such as setting up post-build events to copy the necessary files. The easiest way, though, is to just go to the right folder, e.g. C:\Emgu\emgucv-windows-universal-cuda 2.4.10.1940\bin\x64, copy everything in there and paste it into the correct project folder, e.g. bin/x64/Debug. If you do a straightforward copy/paste, just remember not to Clean or Rebuild your project, since either action will delete all the content from the target folder.

4. The last step is the easiest: reference the necessary Emgu assemblies. The two base ones are Emgu.CV.dll and Emgu.Util.dll. I like to copy these files into a project subdirectory called libs and use relative paths for referencing the DLLs, but you probably have your own preferred way.
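If you would rather not copy the native binaries by hand (and risk losing them to a Clean or Rebuild), one alternative is a post-build event that re-copies them on every build. This is only a sketch; the Emgu path must match your installed version and target architecture:

```
REM Post-build event (Project Properties > Build Events in Visual Studio).
REM Copies the native OpenCV binaries next to the compiled executable on
REM every build, so Clean/Rebuild can no longer leave the output folder bare.
REM Adjust the Emgu path to match your installed version.
xcopy /y /d "C:\Emgu\emgucv-windows-universal-cuda 2.4.10.1940\bin\x64\*" "$(TargetDir)"
```

The /d switch only copies files that are newer than what is already in the target directory, so the step adds almost nothing to incremental build times.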

WPF and Kinect SDK 2.0

I’m going to show you how to work with Emgu and Kinect in a WPF project. The main difficulty is simply converting between the image types that Kinect knows and the image types native to Emgu. I like to do these conversions with extension methods. I provided these extensions in my first book, Beginning Kinect Programming, about the Kinect 1, and will basically just be stealing from myself here.

I assume you already know the basics of setting up a simple Kinect program in WPF. In MainWindow.xaml, just add an image to the root grid and call it rgb:

<Grid> 
    <Image x:Name="rgb"></Image> 
</Grid> 

 

Make sure you have a reference to the Microsoft.Kinect 2.0 dll and put your Kinect initialization code in your code behind:

KinectSensor _sensor;
ColorFrameReader _rgbReader;


private void InitKinect()
{
    _sensor = KinectSensor.GetDefault();
    _rgbReader = _sensor.ColorFrameSource.OpenReader();
    _rgbReader.FrameArrived += rgbReader_FrameArrived;
    _sensor.Open();
}

public MainWindow()
{
    InitializeComponent();
    InitKinect();
}

 

protected override void OnClosing(System.ComponentModel.CancelEventArgs e)
{
    if (_rgbReader != null)
    {
        _rgbReader.Dispose();
        _rgbReader = null;
    }
    if (_sensor != null)
    {
        _sensor.Close();
        _sensor = null;
    }
    base.OnClosing(e);
}

 

Kinect SDK 2.0 and Emgu

 

You will now just need the extension methods for converting between Bitmaps, BitmapSources and IImages. To make this work, your project will additionally need to reference System.Drawing.dll:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Emgu.CV;
using Microsoft.Kinect;

static class Extensions
{
    // P/Invoke so we can release the GDI bitmap handle created below
    [DllImport("gdi32")]
    private static extern int DeleteObject(IntPtr o);


    public static Bitmap ToBitmap(this byte[] data, int width, int height
        , System.Drawing.Imaging.PixelFormat format = System.Drawing.Imaging.PixelFormat.Format32bppRgb)
    {
        var bitmap = new Bitmap(width, height, format);

        var bitmapData = bitmap.LockBits(
            new System.Drawing.Rectangle(0, 0, bitmap.Width, bitmap.Height),
            ImageLockMode.WriteOnly,
            bitmap.PixelFormat);
        Marshal.Copy(data, 0, bitmapData.Scan0, data.Length);
        bitmap.UnlockBits(bitmapData);
        return bitmap;
    }

    public static Bitmap ToBitmap(this ColorFrame frame)
    {
        if (frame == null || frame.FrameDescription.LengthInPixels == 0)
            return null;

        var width = frame.FrameDescription.Width;
        var height = frame.FrameDescription.Height;

        var data = new byte[width * height * PixelFormats.Bgra32.BitsPerPixel / 8];
        frame.CopyConvertedFrameDataToArray(data, ColorImageFormat.Bgra);

        return data.ToBitmap(width, height);
    }

    public static BitmapSource ToBitmapSource(this Bitmap bitmap)
    {
        if (bitmap == null) return null;
        IntPtr ptr = bitmap.GetHbitmap();
        var source = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap(
        ptr,
        IntPtr.Zero,
        Int32Rect.Empty,
        System.Windows.Media.Imaging.BitmapSizeOptions.FromEmptyOptions());
        DeleteObject(ptr);
        return source;
    }

    public static Image<TColor, TDepth> ToOpenCVImage<TColor, TDepth>(this ColorFrame image)
        where TColor : struct, IColor
        where TDepth : new()
    {
        var bitmap = image.ToBitmap();
        return new Image<TColor, TDepth>(bitmap);
    }

    public static Image<TColor, TDepth> ToOpenCVImage<TColor, TDepth>(this Bitmap bitmap)
        where TColor : struct, IColor
        where TDepth : new()
    {
        return new Image<TColor, TDepth>(bitmap);
    }

    public static BitmapSource ToBitmapSource(this IImage image)
    {
        var source = image.Bitmap.ToBitmapSource();
        return source;
    }
   
}

Kinect SDK 2.0 and Computer Vision

 

Here is some basic code to use these extension methods to extract an Emgu IImage type from the ColorFrame object each time Kinect sends you one and then convert the IImage back into a BitmapSource object:

void rgbReader_FrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            var format = PixelFormats.Bgra32;
            var width = frame.FrameDescription.Width;
            var height = frame.FrameDescription.Height;
            var bitmap = frame.ToBitmap();
            var image = bitmap.ToOpenCVImage<Bgr,byte>();

            //do something here with the IImage 
            //end doing something


            var source = image.ToBitmapSource();
            this.rgb.Source = source;

        }
    }
}

 

face_capture

You should now be able to plug in any of the sample code provided with Emgu to get some cool CV going. As an example, in the code below I use Haar cascade classifiers to identify faces and eyes in the Kinect video stream. I sample the data every 10 frames because the Kinect sends 30 frames a second, while the Haar cascade code can take as long as 80 ms to process a single frame. Here’s what the code would look like:

int frameCount = 0;
List<System.Drawing.Rectangle> faces;
List<System.Drawing.Rectangle> eyes;

void rgbReader_FrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            var format = PixelFormats.Bgra32;
            var width = frame.FrameDescription.Width;
            var height = frame.FrameDescription.Height;
            var bitmap = frame.ToBitmap();
            var image = bitmap.ToOpenCVImage<Bgr,byte>();

            //do something here with the IImage 
            int frameSkip = 10;
            //every 10 frames

            if (++frameCount == frameSkip)
            {
                long detectionTime;
                faces = new List<System.Drawing.Rectangle>();
                eyes = new List<System.Drawing.Rectangle>();
                DetectFace.Detect(image, "haarcascade_frontalface_default.xml", "haarcascade_eye.xml", faces, eyes, out detectionTime);
                frameCount = 0;
            }

            if (faces != null)
            {
                foreach (System.Drawing.Rectangle face in faces)
                    image.Draw(face, new Bgr(System.Drawing.Color.Red), 2);
                foreach (System.Drawing.Rectangle eye in eyes)
                    image.Draw(eye, new Bgr(System.Drawing.Color.Blue), 2);
            }
            //end doing something


            var source = image.ToBitmapSource();
            this.rgb.Source = source;

        }
    }
}

Why are the best Augmented Reality Experiences inside of Virtual Reality Experiences?

elite_cockpit

I’ve been playing the Kickstarted space simulation game Elite: Dangerous for the past several weeks with the Oculus Rift DK2. Totally work related, of course.

Basically I’ve had the DK2 since Christmas and had been looking for a really good game to go with my device (rather than the other way around). After shelling out $350 for the goggles, $60 more for a game didn’t seem like such a big deal.

In fact, playing Elite: Dangerous with the Oculus and an XBox One gamepad has been one of the best gaming experiences I have ever had in my life – and I’m someone who played E.T. on the Atari 2600 when it first came out so I know what I’m talking about, yo. It is a fully realized Virtual Reality environment which allows me to fly through a full simulation of the galaxy based on current astronomical data. When I am in the simulation, I objectively know that I am playing a game. However, all of my peripheral awareness and background reactions seem to treat the simulation as if it is real. My sense of space changes and my awareness expands into the virtual space of the simulation. If I don’t mistake the VR experience for reality, I nevertheless do experience a strong suspension of disbelief when I am inside of it.

elite_cockpit2

One of the things I’ve found fascinating about this virtual reality simulation is that it is full of augmented reality objects. For instance, the two menu bars at the top of the screencap above, to the top left and the top right, are full holograms. When I move my head around, parallax effects demonstrate that their positions are anchored to the cockpit rather than to my personal perspective. If the VR goggles allowed it, I would even be able to lean forward and look at the backside of those menus. Interestingly, when the game is played in normal 3D first-person mode rather than in VR with the Oculus, those menus are rendered as head-up displays and are anchored to my point of view as I use the mouse to look around the cockpit — in much the same way that Google Glass anchored menus to the viewer instead of the viewed.

The navigation objects on the dashboard in front of me are also AR holograms. Their locations are anchored to the cockpit rather than to me, and when I move around I can see them at different angles. At the same time, they exhibit a combination of glow and transparency that isn’t common to real-world objects and that we have come to recognize, from sci fi movies, as the inherent characteristics of holograms.

I realized at about the 60-hour mark of my gameplay/research that one of the current opportunities, as well as problems, with AR devices like Magic Leap and HoloLens is that not many people know how to design UX for them. This was actually one of the points of a panel discussion on HoloLens at the recent BUILD conference. The field is wide open. At the same time, UX research is clearly already being done inside VR experiences like Elite: Dangerous. The hologram-based control panel at the front of the cockpit is a working example of how to design navigation tools using augmented reality.

elite_cockpit3

Another remarkable feature of the HoloLens is the use of gaze as an input vector for human-computer interaction. Elite: Dangerous, however, has already implemented it. When the player looks at certain areas of the cockpit, complex menus like the one shown in the screencap above pop into existence. When one removes one’s direct gaze, the menu vanishes. If this were a usability test for gaze-based UI, Elite: Dangerous would already have collected hours of excellent data from thousands of players to verify whether this is an effective new interaction (in my experience, it totally is, btw). This is also exactly the sort of testing that will need to be done over the next few years to firm up and conventionalize AR interactions. By happenstance, VR designers are already doing this for AR before AR is even really on the market.

sao1

The other place augmented reality interaction design research is being carried out is in Japanese anime. The image above is from a series called Sword Art Online. When I think of VR movies, I think of The Matrix. When I put my children into my Oculus, however, they immediately connected it to SAO. SAO is about a group of beta testers of a new virtual-reality MMORPG who become trapped inside the game due to the evil machinations of one of its developers. While the setting of the VR world is medieval, players still interact with in-game AR control panels.

sao2

Consider why this makes sense when we ask the hologram versus head-up display question. If the menu is anchored to our POV, it becomes difficult to actually touch menu items. They will move around and jitter as the player looks around. In this case, a hologram anchored to the world rather than to the player makes a lot more sense. The player can process the consistent position of the menu and anticipate where she needs to place her fingers in order to interact with it. Sword Art Online effectively provides what Bill Buxton describes as a UX sketch for interactions of this sort.

On an intellectual level, consider how many overlapping interaction metaphors are involved in the above sketch. We have a 1) GUI-based menu system transposed to 2) touch (no right clicking) interactions, then expressed as 3) an augmented reality experience placed inside of 4) a virtual reality experience (and communicated inside a cartoon).

Why is all of this possible? Why are the best augmented reality experiences inside of virtual reality experiences and cartoons? I think it has to do with cost of execution. Illustrating an augmented reality experience in an anime is not really any more difficult than illustrating a field of grass or a cute yellow gerbil-like character. The labor costs are the same. The difficulty is only in the conceptualization.

Similarly, throwing a hologram into a virtual reality experience is not going to be any more difficult than throwing a tree or a statue into the VR world. You just add some shaders to create the right transparency-glowy-pulsing effect and you have a hologram. No additional work has to be done to marry the stereoscopic convergence of hologram objects and the focal position of real world locations as is required for really good AR. In the VR world, these two things – the hologram world and the VR world – are collapsed into one thing.
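As a rough sketch of what that shader work might look like, here is a minimal hologram-style pixel shader in HLSL. The Time and BaseColor parameters are hypothetical inputs that a host engine would supply, and the constants are just tuned by eye:

```
// Minimal hologram-look pixel shader sketch (HLSL).
// Time and BaseColor are hypothetical parameters supplied by the host engine.
float Time : register(c0);
float3 BaseColor : register(c1);   // e.g. a pale cyan tint

float4 main(float2 uv : TEXCOORD) : COLOR
{
    // thin horizontal scanlines drifting slowly over time
    float scan = 0.85 + 0.15 * sin((uv.y + Time * 0.2) * 400.0);

    // slow pulse of overall brightness
    float pulse = 0.75 + 0.25 * sin(Time * 2.0);

    // semi-transparent output gives the glowy, see-through hologram look
    return float4(BaseColor * scan * pulse, 0.55);
}
```

A few sine waves and an alpha value under 1.0 really are most of the trick; the point is how little work this is compared with registering a hologram against the real world.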

There has been a tendency to see virtual reality and mixed reality as opposed technologies. What I have learned from playing with both, however, is that they are actually complementary technologies. While we wait for AR devices to be released by Microsoft, Magic Leap, etc. it makes sense to jump into VR as a way to start understanding how humans will interact with digital objects and how we must design for these interactions. Additionally, because of the simplification involved in creating AR for VR rather than AR for reality, it is likely that VR will continue to hold a place in the design workflow for prototyping our AR experiences even years from now when AR becomes not only a reality but an integral thread in the fabric of reality.

On The Gaze as an Input Device for HoloLens

microsoft-hololens-build-anatomy

The Kinect sensor and other NUI devices have introduced an array of newish interaction patterns between humans and computers: tap, touch, speech, finger tracking, body gestures. HoloLens provides a new method of interaction that hasn’t been covered extensively from a UX perspective before: the gaze. HoloLens tracks where the user is looking in order to help wearers of the AR device target and activate menus.

Questions immediately arise as to the role this will play in surveillance culture, and even more in the surveillance of surveillance culture. While sensors track our gaze, will they similarly inform us about the gaze of others? Will we one day receive notifications that someone is checking us out? Quis custodiet ipsos custodes? To the eternal question who watches the watchers, we finally have an answer. HoloLens does.

lacan gaze

Even though The Gaze has not been analyzed deeply from a UX perspective, it has been the object of profound study from a phenomenological and a structuralist point of view. In this post I want to introduce you to five philosophical treatments of The Gaze covering the psychological, the social, the cinematic, the ethical and the romantic. To start, the diagram above is not from an HCI book as one might reasonably assume but rather from a monograph by the psychoanalyst Jacques Lacan.

the_ambassadors

A distinction is often drawn between Lacan’s early studies of The Gaze and his later conclusions about it. The early work relates it to a “mirror stage” of self-awareness and concerns the gaze when directed to ourselves rather than to others:

“This event can take place … from the age of six months on; its repetition has often given me pause to reflect upon the striking spectacle of a nursling in front of a mirror who has not yet mastered walking or even standing, but who … overcomes, in a flutter of jubilant activity, the constraints of his prop in order to adopt a slightly leaning-forward position and take in an instantaneous view of the image in order to fix it in his mind.”

This notion flowered in the later work The Split Between the Eye and the Gaze into a theory of narcissism in which the subject sees himself or herself as an objet petit a (a technical term for an object of desire) through the distancing effect of the gaze. Through this distancing, the subject also becomes alienated from itself. What is probably essential for us in this work – as students of emerging technologies – is the notion that the human gaze is emotionally distancing. This observation was later taken up in post-colonial theory as the “imperial gaze” and in feminist theory as “objectification.”

eye-tracking

Michel Foucault achieved fame as a champion of the constructivist interpretation of truth, but it is often forgotten that he was also an historian of science. A major theme in his work is the way in which the gaze of the other affects and shapes us – in particular the “scientific gaze.” Being watched causes us discomfort, and we change our behavior – sometimes even our perception of who we are – in response to it. The grand image Foucault raises to encapsulate this notion is Jeremy Bentham’s Panopticon, a circular prison in which every inmate can be observed at any moment by an unseen watchman.

las_meninas

Where Lacan concentrates on the self-gaze and Foucault on the way the gaze makes us feel, Slavoj Žižek is concerned with the appearance of The Gaze when we gaze back at it. He writes in an essay called “Why Does the Phallus Appear?” from the collection Enjoy Your Symptom:

walking_dead

“Let us take the ‘phantom of the opera,’ undoubtedly mass culture’s most renowned specter, which has kept the popular imagination occupied from Gaston Leroux’s novel at the turn of the century through a series of movie and television versions up to the recent triumphant musical: in what consists, on a closer look, the repulsive horror of his face? The features which define it are four:

“1) the eyes: ‘his eyes are so deep that you can hardly see the fixed pupils. All you can see is two big black holes, as in a dead man’s skull.’ To a connoisseur of Alfred Hitchcock, this image instantly recalls The Birds, namely the corpse with the pecked-out eyes upon which Mitch’s mother (Jessica Tandy) stumbles in a lonely farmhouse, its sight causing her to emit a silent scream. When, occasionally, we do catch the sparkle of these eyes, they seem like two candles lit deep within the head, perceivable only in the dark: these two lights somehow at odds with the head’s surface, like lanterns burning at night in a lonely, abandoned house, are responsible for the uncanny effect of the ‘living dead.’”

Obviously, whatever Žižek says about the phantom of the opera applies equally well to The Walking Dead. What ultimately distinguishes vampires, zombies, demons and ghosts lies in the way they gaze at us.

While Žižek finds in the eyes a locus for inhumanity, the ethicist Emmanuel Levinas believes this is where our humanity resides. These two notions actually complement each other, since what Žižek identifies as disturbing is the inability to project a human mind behind a vacant stare. As Levinas says, in a difficult and metaphysical way, in his masterpiece Totality and Infinity:

“The presentation of the face, expression, does not disclose an inward world previously closed, adding thus a new region to comprehend or to take over. On the contrary, it calls to me above and beyond the given that speech already puts in common among us…. The third party looks at me in the eyes of the Other – language is justice. It is not that there first would be the face, and then the being it manifests or expresses would concern himself with justice; the epiphany of the face qua face opens humanity…. Like a shunt every social relation leads back to the presentation of the other to the same without the intermediary of any image or sign, solely by the expression of the face.”

The face and the gaze of the other imply a demand upon us. For Levinas, unlike Foucault, this demand isn’t simply a demand to behave according to norms but, more broadly, an existential command. The face of the other asks us implicitly to do the right thing: it demands justice.

new_optics_06

The final aspect of the gaze to be discussed – though probably the first to occur to the reader – is the gaze of love, i.e. love at first sight. This was a particular interest of the scholar Ioan P. Couliano. In his book Eros and Magic in the Renaissance, Couliano examines old medical theories about falling in love and cures for infatuation and obsession. He relates these to Renaissance theories about pneuma [spiritus, phantasma], which was believed to be a pervasive fluid that allowed objects to be sensed through apparently empty air and transmitted to the brain and the heart. In this regard, Couliano raises a question that would only make sense to a true Renaissance man: “How does a woman, who is so big, penetrate the eyes, which are so small?” He quotes the 13th-century physician Bernard of Gordon:

Leopold_von_Sacher-Masoch_with_Fannie

“The illness called ‘hereos’ is melancholy anguish caused by love for a woman. The ‘cause’ of this affliction lies in the corruption of the faculty to evaluate, due to a figure and a face that have made a very strong impression. When a man is in love with a woman, he thinks exaggeratedly of her figure, her face, her behavior, believing her to be the most beautiful, the most worthy of respect, the most extraordinary with the best build in body and soul, that there can be. This is why he desires her passionately, forgetting all sense of proportion and common sense, and thinks that, if he could satisfy his desire, he would be happy. To so great an extent is his judgment distorted that he constantly thinks of the woman’s figure and abandons all his activities so that, if someone speaks to him, he hardly hears him.”

And here is Couliano’s gloss of Bernard’s text:

RokebyVenus

“If we closely examine Bernard of Gordon’s long description of ‘amor hereos,’ we observe that it deals with a phantasmic infection finding expression in the subject’s melancholic wasting away, except for the eyes. Why are the eyes excepted? Because the very image of the woman has entered the spirit through the eyes and, through the optic nerve, has been transmitted to the sensory spirit that forms common sense…. If the eyes do not partake of the organism’s general decay, it is because the spirit uses those corporeal apertures to try to reestablish contact with the object that was converted into the obsessing phantasm: the woman.”

 

[As an apology and a warning, I want to draw your attention to the use of ocular vocabulary such as “perspective,” “point of view,” “in this regard,” etc. Ocular phrases are so pervasive in the English language that it is nearly impossible to avoid them and it would be more awkward to try to do so than it is to use them without comment. If you intend to speak about visual imagery, take my advice and pun proudly and without apology – for you will see that you have no real choice in the matter.]

HoloCoding resources from Build 2015

The HoloLens team has stayed extremely quiet over the past 100 days in order to have a greater impact at Build. Alex Kipman was the cleanup hitter of the first-day keynote at Build, delivering an amazing overview of realistic HoloLens scenarios. This was followed by HoloLens demos as well as private tutorials on using HoloLens with a version of Unity 3D. Finally, there were sessions on HoloLens and a pre-recorded session on using the Kinect with the brand new RoomAlive Toolkit.

Here are some useful links:

 

There were two things I found particularly interesting in Alex Kipman’s first day keynote presentation.

microsoft-hololens-build-anatomy

The first was the ability of the onstage video to capture what was being shown through the HoloLens, but from a different perspective. This third-person view of what the person wearing the HoloLens was seeing even worked when the camera moved around the room. Was this just brilliant After Effects work perfectly synced to the action onstage? Or were we seeing a HoloLens-enabled camera at work? If the latter, this might be even more impressive than the HoloLens itself.

mission_impossible_five

Second, when demonstrating the ability to pin movies to the wall using HoloLens gestures, why was the new Mission Impossible trailer used for the example? Wouldn’t something from, say, The Matrix have been much more appropriate?

Perhaps it was just a licensing issue, but I like to think there was a subtle nod to the inexplicable and indirect role Tom Cruise has played in the advancement of Microsoft’s holo-technologies. Minority Report, and the image of Cruise wearing biking gloves with his arms raised in the air, conductor-like, was the single most powerful image invoked when Microsoft first introduced the Kinect sensor. As most people know by now, Alex Kipman was the man responsible not only for carrying the Kinect, née Project Natal, to success, but also for guiding the development of the HoloLens. Showing Tom Cruise onstage at Build may have been a quiet acknowledgment of this implicit relationship.

The HoloCoder’s Bookshelf

WP_20150430_06_43_49_Pro

Professions are held together by touchstones such as a common jargon that both excludes outsiders and reinforces the sense of inclusion among insiders based on mastery of the jargon. On this level, software development has managed to surpass more traditional practices such as medicine, law or business in its ability to generate new vocabulary and maintain a sense that those who lack competence in using the jargon simply lack competence. Perhaps it is part and parcel of new fields such as software development that even practitioners of the common jargon do not always understand each other or agree on what the terms of their profession mean. Stack Overflow, in many cases, serves merely as a giant professional dictionary in progress as developers argue over what they mean by de-coupling, separation of concerns, pragmatism, architecture, elegance, and code smell.

Cultures, unlike professions, are held together not only by jargon but also by shared ideas and philosophies that delineate what is important to the tribe and what is not. Somewhere between a profession and a culture, the members of a professional culture share a common imaginative world that allows them to discuss shared concepts in the same way that other people might discuss their favorite TV shows.

This post is an experiment to see what the shared library of augmented reality and virtual reality developers might one day look like. Digital reality development is a profession that currently does not really exist but which is already being predicted to be a multi-billion dollar industry by 2020.

HoloCoding, in other words, is a profession that exists only virtually for now. As a profession, it will envelop concerns much greater than those considered by today’s software developers. Whereas contemporary software development is mostly about collecting data, reporting on data and moving data from point A to points B and C, spatial software development will be more concerned with environments and will have to draw on complex mathematics as well as design and experiential psychology. The bookshelf of a holocoder will look remarkably different from that of a modern data coder. Here are a few ideas regarding what I would expect to find on a future developer’s bookshelf in five to ten years.

 

1. Understanding Media by Marshall McLuhan – written in the 60’s and responsible for concepts such as ‘the global village’ and hot versus cool media, McLuhan pioneered the field of media theory.  Because AR and VR are essentially new media, this book is required reading for understanding how these technologies stand side-by-side with or perhaps will supplant older media.

2. Illuminations by Walter Benjamin – while the whole work is great, the essay ‘The Work of Art in the Age of Mechanical Reproduction’ is a must read for discussing how traditional notions about creativity fit into the modern world of print and now digital reproduction (which Benjamin did not even know about). It also deals at an advanced level with how human interactions work on stage versus film and the strange effect this creates.

3. Sketching User Experiences by Bill Buxton – this classic was quickly adopted by web designers when it came out. What is sometimes forgotten is that the book largely covers the design of products and not websites or print media – products like those that can be built with HoloLens, Magic Leap and Oculus Rift. Full of insights, Buxton helps his readers to see the importance of lived experience when we design and build technology.

4. Bergsonism by Gilles Deleuze – though Deleuze is probably most famous for his collaborations with Felix Guattari, this work on the philosophical meaning of the term ‘virtual reality’, not as a technology but rather as a way of approaching the world, is a gem.

5. Passwords by Jean Baudrillard – what Deleuze does for virtual reality, Baudrillard does for other artifacts of technological language in order to show their place in our mental cosmology. He also discusses virtual reality along the way, though not as thoroughly.

6. Mathematics for 3D Game Programming and Computer Graphics by Eric Lengyel – this is hardcore math. You will need this. You can buy it used online for about $6. Go do that now.

7. Linear Algebra and Matrix Theory by Robert Stoll – this is a really hard book. Read the Lengyel before trying this one. This book will hurt you, by the way. After struggling with a page of it, some people end up buying the Manga Guide to Matrix Theory, thinking that there is a fun way to learn matrix math. Unfortunately, there isn’t, and they always come back to this one.

8. Phenomenology of Perception by Maurice Merleau-Ponty – when it first came out, this work was often seen as an imitation of Heidegger’s Being and Time. It may be that it can only be truly appreciated today, when it has become much clearer, thanks to years of psychological research, that the mind reconstructs not only the visual world for us but even the physical world and our perception of 3D spaces. Merleau-Ponty pointed this out decades ago, and moreover provides a phenomenology of our physical relationship to the world around us that will become vitally important to anyone trying to understand what happens when more and more of our external world becomes digitized through virtual and augmented reality technologies.

9. Philosophers Explore the Matrix – just as The Matrix is essential viewing for anyone in this field, this collection of essays is essential reading. This is the best treatment available of a pop theme being explored by real philosophers – actually most of the top American philosophers working on theories of consciousness in the 90s. Did you ever think to yourself that The Matrix raised important questions about reality, identity and consciousness? These professional philosophers agree with you.

10. Snow Crash by Neal Stephenson – sometimes to understand a technology, we must extrapolate and imagine how that technology would affect society if it were culturally pervasive and physically ubiquitous. Fortunately Neal Stephenson did that for virtual reality in this amazing book that combines cultural history, computer theory and a fast paced adventure.

What is a HoloCoder?

holodeck

Over the past few years we’ve seen the rapid release of innovative consumer technologies that are all loosely related by their ability to scan 3D spaces, interact with 3D spaces or synthesize 3D spaces. These include the Kinect sensor, Leap Motion, Intel Perceptual Computing, Oculus Rift, Google Glass, Magic Leap and HoloLens. Additional related general technologies include projection mapping and 3D printing. Additional related tools include Unity 3D and the Unreal Engine.

Despite a clear family resemblance between all of these technologies, it has been difficult to clearly define what that relationship is. There has been a tendency to categorize all of them as simply being “bleeding edge”, “emerging” or “future”. The problem with these descriptors is that they are ultimately relative to the time at which a technology is released and are not particularly helpful in defining what holds these technologies together in a common gravitational pull.

definitions

I want to address this problem by engaging in a bit of word magic. Word magic is a sub-category of magical thinking and is based on a form of psychological manipulation. If you have ever visited Martin Fowler’s Bliki then you’ve seen the practice at work. One of the great difficulties of software development is anticipating the unknown: the unknown involved in requirements, the unknown related to timelines, and the unknown concerned with the correct tactics to accomplish tasks. In a field with a limited history and a tendency not to learn from other related fields, the fear of the unknown can utterly cripple projects.

Martin Fowler’s endless enumeration of “patterns” on his bliki takes this on directly by giving names to the unknown. If one reads his blog carefully, however, it quickly becomes clear that most, though not all, of these patterns are illusory: they are written at such an abstract level that they fail to provide any prescriptive advice on how to solve the problems they are intended to address. What they do provide, however, is a sense of relief that there is a “name” that can be used to plug up the hole opened up in time by the fear of the unknown. Solutions architects can return to their teams (or their managers) and pronounce proudly that they have found a pattern to solve the outstanding problem that is hanging over everyone – all that remains is to determine what each “name” actually means.

In this sense, the whole world of software architecture – which Glassdoor ranked as the 11th best job of 2015 – is a modern priesthood devoted to prophetic interpretations of “design patterns”.

I similarly want to use word magic to define the sort of person who works with the sorts of technology I listed at the top of this article. I think I can even do it quite simply, with familiar imagery.

A holocoder is someone who works with technologies that are inspired by and/or anticipate the Star Trek Holodeck.

 

 

interpretations

 

holodeck

The part of the definition that states “inspired by and/or anticipate” may seem strange but it is actually quite essential. It is based on a specific temporal-cybernetic theory concerning the dissemination of ideas which I will attempt to describe but which is purely optional with respect to the definition.

But first: how can a theory be both essential and optional? This is an issue that Niels Bohr, one of the fathers of quantum mechanics, tackled frequently. In the early 30’s, Bohr was travelling through eastern Europe on a lecture tour. During part of the tour, a former student met him at his inn and noticed him nailing a horseshoe over the door of his room. “Professor Bohr,” he asked, “what are you doing?” Niels Bohr replied, “The innkeeper informed me that a horseshoe over the door will bring me luck.” The student was scandalized. “But Herr Professor,” he objected, “surely a physicist and intellectual such as yourself does not believe in these silly superstitions.” “Of course not,” Bohr answered. “But the innkeeper reassured me that the horseshoe will bring me luck whether I believe in it or not.”

Here is the optional theory of the Holodeck. Certain technologies, it seems to me, can have such an influence that they shape the way we think about the world. We have seen many examples of this in our past such as the printing press, the automobile, the personal computer and the cell phone. Furthermore we anticipate the advent of similar major technologies in our future. These technologies have what is called a “psychic resonance” and change the very metaphors we use to describe our world. To give a simple example, whereas we originally used mental metaphors to explain computers in terms of “memory”, “processing” and even “computing”, today we use computer metaphors to help explain how the brain works. The arrival of the personal computer caused a shift and a reversal in what semioticians call the relationship between the explanans and the explanandum.

wesley in the holodeck

Psychic impact is transmitted over carriers called “memes”. Memes are theoretical constructs that are phenomenally identical to what we call “ideas” but behave like viruses. Memes travel through air as speech and along light waves as images in order to spread themselves from host to host. Traditionally, the psychic impact of a meme is measured by the meme’s density over a given space. Besides density, psychic impact can also be measured by the total volume of space a meme is able to infect. Finally, the effectiveness of a meme can be measured by its ability to spread into the future. For instance, works of literature and cultural artifacts such as religions and even famous sayings are examples of memes that have effectively infected the future despite a distance of thousands of years between the point of origin of the infection and the temporal location of the target.

While the natural habitat of bacteria like E. coli is the gastrointestinal tract, the natural habitat of memes is the brain, and this leads to a fascinating third form of memetic transmission. At the level of the microtubules in the brain where memes happen to live, we enter the Planck scale, at which classical physics does not apply in the way that it does at the macro level. At this scale, effects like quantum entanglement create spooky behaviors such as quantum communication. While theoretically people still cannot communicate with each other across time, since that level of semiotics is still governed by classical physics, there is an opening for memetic viruses to actually be transmitted backwards in time, as if entering a transporter in one brain and rematerializing in another brain in the past. This allows for a third manner of memetic spread: in space, forward in time, and finally backwards in time.

Riker in the Holodeck

As an aside, and as I said above, this is an _optional_ theory of psychic impact through time. A common and totally valid criticism is that it appeals to quantum mystery which tends to be misused to justify anything from ghosts to religious cults. The problem with appeals to “quantum mystery” is that this simply provides a name for a problem rather than prescribing actual ways to make predictions or anticipate behavior. In other words, like Martin Fowler’s bliki, it is word magic that provides interpretations of things but not actual solutions. Against such criticisms, however, it should be pointed out that I am explicitly engaged in an exercise in word magic, in which case using certain techniques of word magic – such as quantum mystery – is perfectly legitimate and even natural.

Through quantum entanglement acting on memes at the microtubule level, a technology from our possible future which resembles the Star Trek holodeck has such a large psychic impact that it resonates backwards in time until it reaches and inhabits the brains of the writers of a futuristic science fiction show in the late 80’s and is introduced into the show as the Holodeck. Through television transmissions, the holodeck meme is then broadcast to millions of teenagers who eventually enter the tech industry, become leaders in the tech industry, and eventually decide to implement various aspects of the holodeck by creating better and better 3D sensors, 3D simulation tools and 3D visualization technologies – both augmented and virtual. In other words, the Holodeck reaches backwards in time to inspire others in order to effectively give birth to itself, ex nihilo. Those that have been touched by the transmission are what I am calling holocoders.

 

and/or

Alternatively, this theory of where holocoders come from can be taken as a metaphor only. In this case, holocoders are not people being pulled toward a common future but instead people being pushed forward from a common past. Holocoders are people inspired directly or indirectly by a television show from the late 80’s that involved a large room filled with holograms that could be used for entertainment as well as research. Holocoders work on any or all of the wide variety of technologies that could potentially be combined to recreate that imagined experience.

the dreamatorium

Anyways, that’s my theory and I’m sticking to it. More importantly, these technologies are deeply entangled and deserve a good name, whether you want to go with holocoding or something else (though the holodeck people from the future highly encourage you to use the terms “holocoder”, “holocoding” and “holodeck”).

 

appendix

There are two other important instances of environment simulators which for whatever reason do not have the same impact as the Star Trek holodeck but are nevertheless worth mentioning.

danger_room

The first is the X-Men Danger Room, an elaborate obstacle course involving holograms as well as robots, used to train the X-Men. While the Danger Room goes back to the 60’s, the inclusion of holograms didn’t happen until the early 90’s, and so actually comes after the Star Trek environment simulator.

WayStation(Simak)

Clifford D. Simak published Way Station in 1963 (and won a Hugo Award for it). It actually anticipates two Star Trek technologies – transporters as well as an environment simulator. Enoch Wallace, the hero of the story, operates the Earth relay station for intergalactic aliens, who transport travelers over vast distances by sending them over shorter hops between the way stations of the title. Because he is so isolated in his job, the aliens support him by allowing him to pursue a hobby. Because Wallace enjoys hunting, the aliens build him an environment simulator that lets him go big game hunting for dinosaurs.

How to Read Star Wars Comics

wrecked_ship

Are you trying to find a guide to reading Star Wars comic books online? You’ve come to the right place. And obviously — Do. Or do not. There is no try.

Lucasfilm just released the second teaser for Star Wars VII. My wife and I found ourselves tearing up as we watched it on her iPad, demonstrating that nostalgia is the only thing stronger than the Force. We are of the generation that first saw Star Wars in a theater in the 70’s. We remember a time before Star Wars existed, and yet it has always been the background myth of our lives as we grew up. What Gilgamesh was to the Mesopotamians, or Siegfried and Brunhilde to the Germans, Luke, Han and Leia are to us.

Timed with the release of the teaser, Marvel Comics has added a ton of Star Wars comic books to its digital comics service, Marvel Unlimited. These comic books span a period from the 90’s to the present, during which Dark Horse Comics spun up stories from the Star Wars expanded mythology (many based on the books) that fill in gaps left by the movies as well as extend the storyline beyond Star Wars VI. In 2015, Disney, which owns both the Star Wars franchise and Marvel Comics, moved the Star Wars publication rights from Dark Horse Comics to Marvel Comics, which is apparently how these classic Dark Horse comics are now appearing online.

 

accessing the digital comics

 

If you want to know what happens in Star Wars after the battle of Endor (but before J. J. Abrams retcons over it) then this is your opportunity. You can even do it for free if you want. Go to the Marvel Unlimited website and enter the promotion code starwars to get one free month – though I’d recommend skipping this and getting an annual subscription for $69. Marvel Unlimited has a decent web interface, but the best way to use the service is with the iPad app.

(Scott Hanselman has a good but critical review of the service written in 2011 that deserves to be read. It’s worth mentioning, though, that the service as well as the UX have greatly improved over the past four years.)

Once you have your subscription, your main problem is going to be seeing the trees for the forest. There are thousands of comics in the Marvel catalog, and they tend to be listed in alphabetical order. This is sensible, but not particularly helpful if you want to read the continuing Star Wars saga in mythologically chronological order. Additionally, while Marvel is offering large chunks of the Star Wars graphic novel canon, there are pieces missing, which makes it that much harder to get the full story straight.

 

reading in the correct order

 

While there are lots of comics available through the subscription that are contemporaneous with the events in the movies, in this post I’m just going to try to help you to read the Star Wars comics being offered through Marvel Unlimited in the correct order starting just after the battle of Endor.

TwinEnginesOfDestruction

The first comic of the New Republic Era available is Star Wars: Boba Fett – Twin Engines of Destruction (1997). I’m using the titles as they are listed in the “browse” tab of the Marvel Unlimited app and including the year to highlight the difference between the chronology and the publication order. I’m also heavily indebted to Wookieepedia (you read that right) for all the correct timeline information.
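For the list-minded, the difference between publication order and reading order is just a choice of sort key. Here is a small sketch; the titles are real, but the timeline positions (years after the Battle of Yavin, “ABY”) are rough, illustrative values rather than authoritative Wookieepedia data:

```python
# Each entry carries both orderings: when it was published and
# (roughly) where it sits in the in-universe chronology.
comics = [
    {"title": "Star Wars: Dark Empire", "published": 1991, "timeline_aby": 10},
    {"title": "Star Wars: Heir to the Empire", "published": 1995, "timeline_aby": 9},
    {"title": "Star Wars: Boba Fett - Twin Engines of Destruction", "published": 1997, "timeline_aby": 5},
    {"title": "Star Wars: Crimson Empire", "published": 1997, "timeline_aby": 11},
]

# What Marvel Unlimited roughly gives you (publication order) ...
publication_order = sorted(comics, key=lambda c: c["published"])

# ... versus what this post is reconstructing (chronological reading order).
reading_order = sorted(comics, key=lambda c: c["timeline_aby"])

for c in reading_order:
    print(c["title"])
```

Same data, different key: Dark Empire leads the publication order, while the Boba Fett story leads the reading order.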

The story picks up with what is known as the Thrawn Trilogy. In Marvel Unlimited, these are cataloged under three different series of about 5 comics each:

Heir6

Star Wars: Heir to the Empire (1995 – 1996)

 

Dfrtpb

Star Wars: Dark Force Rising (1997)

 

Tlctpb

Star Wars: The Last Command (1997 – 1998)

 

This leads us into the Dark Empire Trilogy, in which the Emperor turns out not to be as dead as he could be. Dark Horse’s first Dark Empire series is also, in many ways, what first made Star Wars comics attractive as a vector for transmitting expanded universe stories. A few comics slip in between Dark Empire II and Empire’s End, of which only the Boba Fett story is currently available on MU.

 

Darkempire1

Star Wars: Dark Empire (1991 – 1992)

 

De2_1

Star Wars: Dark Empire II (1994 – 1995)

 

Bobafett_agentofdoom

Star Wars: Boba Fett – Agent of Doom (2000)

 

1121447

Star Wars: Empire’s End (1995)

 

Boba Fett grew as a character mainly because he had an awesome costume and fans just wanted to see more of it. The same could be said of the main character in the next two series. Crimson Empire follows the exploits of one of Emperor Palpatine’s elite bodyguards.

 

001eb0bc_medium

Star Wars: Crimson Empire (1997)

 

978907-crimson_empire_ii

Star Wars: Crimson Empire II – Council of Blood (1998-1999)

 

Chewbacca_TPB

The Chewbacca series (2000) is a commemorative four-issue run with stories told by Chewbacca’s friends, because he is dead at this point in the Star Wars chronology (::sniff::), a death which will be overwritten by J. J. Abrams faster than you can unsay “Kaaahhhhhn” as he retcons the expanded Star Wars universe.

At this point we leap a century forward and get into the Star Wars: Legacy comics where we follow the adventures of Cade Skywalker, Ania Solo and lots of other people with familiar-but-not-quite-right sounding names. MU lists three collections in the browse tab.

 

3starwarslegacy

Star Wars Legacy (2006-2010) – 50 issues!

 

Legacy_war_I

Star Wars: Legacy – War (2010 – 2011) – six issues

 

Legacy2

Star Wars: Legacy (2013 – 2014) [aka Star Wars: Legacy II] – 18 issues

 

then what … ?

 

And that’s as far as it goes for now. If you need more to read, you can go back in time and start pounding through the 55 issues of the Knights of the Old Republic digital comics, which will provide the Jedi back story from thousands of years before the movies. On the other hand, you might also want to branch out and see what else MU has to offer. Here are some other books on Marvel Unlimited that I would highly recommend.

 

nick fury

Nick Fury, Agent of SHIELD #1 (1968)

This single issue, written and illustrated by the legend Jim Steranko, changed the game in comic books. Even as pop art was bringing high art low, Steranko lifted the comic book genre and opened the possibility of considering comics an art form – or, as we prefer to say today, considering “graphic novels” an art form.

 

Marvel_1602

Marvel 1602 (2002-2003)

While there have been many takes on alternate Marvel timelines, Neil Gaiman’s turn with these eight issues is one of the most interesting. He imagines the classic Marvel heroes finding their place in 17th century Europe.

 

eternals

Eternals (2006)

In the 70’s, Marvel experimented with making Erich von Däniken’s Chariots of the Gods the basis for a comic book, with mixed success. Decades later, Neil Gaiman came along and wrote a seven-issue series based on the earlier work, creating an amazing story of aliens turning ancient humans into super heroes for their own mysterious purpose. The aliens in question, by the way, happen to be the Celestials, who are part of the back story for James Gunn’s Guardians of the Galaxy movie. See – everything ties together in the Marvel universe.

 

gg

Guardians of the Galaxy (2008)

The comics are as good as the movie. There are a few more characters and the ones you know are slightly different. Rocket and Groot are the same, though.  This run of the comics basically revives a bunch of Silver Age characters, modernizes them and throws them together to amazing effect.

 

conquest

Annihilation: Conquest – Starlord (2007)

But if you want to do it right and find out how Peter Quill, aka Starlord, first meets Rocket, Groot and Bug (who’s Bug, you ask?), then you might also want to read the four issues of Annihilation: Conquest – Starlord, which is just one part of the much bigger Marvel space event called Annihilation: Conquest. Really, all of Annihilation: Conquest is worth reading, because then you’ll get to know more about Quasar, Ronan the Accuser, the Heralds of Galactus, and the Nova Corps.

 

annihilation_4

Annihilation (2006 – 2007)

But if you really, really want to do it right, then you’ll read the Annihilation comic event before you read either Annihilation: Conquest or Guardians of the Galaxy. This is where Peter Quill first gets retconned into the contemporary world. If you are fully committing to the effort, the correct order for reading most of the back story for the Guardians movie is:

1. Annihilation Prologue (2006)

2. Annihilation (2006 – 2007)

3. Annihilation: Quasar / Annihilation: Nova / Annihilation: Ronan / Annihilation: Silver Surfer / Annihilation: Super Skrull [these are all overlapping series]

4. Annihilation: Conquest Prologue (2007)

5. Annihilation: Conquest (2007)

6. Annihilation: Conquest – Quasar / Annihilation: Conquest – Starlord / Annihilation: Conquest – Wraith / Annihilation: Conquest – Heralds of Galactus

7. Guardians of the Galaxy (2008)

8. The Thanos Imperative: Ignition (2010)

9. The Thanos Imperative (2010)

10. The Thanos Imperative: Devastation (2010)

It’s totally worth it.

 

Agents_of_Atlas_Vol_1_1

Agents of Atlas (2006 — 2007)

Agents of Atlas, like Guardians of the Galaxy, is an instance of Marvel retconning characters that had been abandoned in the 50’s and bringing them together for a series in the 00’s. Interestingly, this is the second time they have been retconned. The first time was in a What If? one-off from the 70’s. FBI agent Jimmy Woo leads a rag-tag team of super-powered beings against the nefarious criminal organization known as the Atlas Foundation. His team includes Namora of Atlantis, the goddess Venus, Marvel Boy the Uranian, Gorilla Man and M-11 the robot. Chronologically in the Marvel universe, Jimmy Woo’s team is actually considered the original Avengers, formed to rescue President Dwight Eisenhower from the clutches of Atlas and then later mysteriously disbanded. This series of six issues from 2006 uncovers what really happened to the team. Another 11-issue series of Agents of Atlas was released in 2009, followed in 2010 by a five-issue series simply titled Atlas.

 

nextwave

Nextwave: Agents of H.A.T.E.

A twelve-issue S.H.I.E.L.D. parody written by comic legend Warren Ellis and beautifully drawn by Stuart Immonen. Super-powered heroes discover that they aren’t working for the good guys after all, and that their organization is actually the baddies. They decide to do something about it. Ellis said of the series, “It’s an absolute distillation of the superhero genre. No plot lines, characters, emotions, nothing whatsoever. It’s people posing in the street for no good reason.”

 

runaways

Runaways (2003 — 2004)

Misfit, middle-class teenagers discover that their parents really are evil after all when they accidentally witness them performing a human sacrifice to Elder Gods. They also come to discover that, like their super-villain parents, they possess super powers.

 

journey

Journey Into Mystery (2011)

The god of mischief Loki is dead, but a young boy appears claiming to be Loki reborn. He struggles, however, because, being Loki, everybody hates him and nobody trusts him. Written by Kieron Gillen, this is the story of how Loki attempts to redeem himself. It is by turns hilarious and heartbreaking. Start with issue #622 if you can and try to at least get to issue #645, which wraps up the Loki story.

 

Secret_Avengers_Vol_1_20

Secret Avengers #20 (2010)

The entire Secret Avengers series is great. I would especially recommend that you read issue #20 which follows a single storyline in the life of Natalia Romanov, aka Black Widow.

 

marvel_zombies

Marvel Zombies (2005 — 2006)

Marvel Unlimited gives you every variation on Marvel zombies you could possibly want, from the original series to the five follow-ups to Marvel Zombies Christmas Carol to Marvel Zombies vs. Marvel Apes. I recommend at least having a taste of the first five-issue run.

 

secret warriors

Secret Warriors (2008 — 2011)

In the first comic of this 28-issue run, Nick Fury of S.H.I.E.L.D. discovers that for his entire career, his arch-enemy Hydra (“Hail Hydra!”) has been secretly controlling S.H.I.E.L.D. itself. The last 60 years of secret wars have been a farce scripted by the Nazi Baron Strucker. (This is actually the basis for the storyline in the film Captain America: The Winter Soldier as well as the Agents of S.H.I.E.L.D. TV series.) But Nick doesn’t give up. Instead, he pulls together a team to take Hydra down once and for all.

One Kinect to rule them all: Kinect 2 for Xbox One

two_kinects

Yes. That’s a bit of a confusing title, but it seems best to lay out the complexity upfront. So far there have been two generations of the Kinect sensor, each combining a color camera, a depth-sensing camera, an infrared emitter (used primarily by the depth-sensing camera) and a microphone array that works as a virtual directional shotgun microphone. Additional software called the Kinect SDK then allows you to write programs that read these data feeds as well as interpret them into 3D animated bodies that represent people’s movements.

Microsoft has just announced that it will stop producing separate versions of the Kinect v2, one for Windows and one for the Xbox One, and will instead encourage developers to purchase the Kinect for Windows Adapter to plug their Kinect for Xbox One sensors into a PC. In fact, the adapter has been available since last year, but this just makes it official. All in all this is a good thing. With the promise that Universal Windows Apps will be portable to Xbox, it makes much more sense if the sensors – and more importantly the firmware installed on them – are exactly the same whether you are on a PC running Windows 8/10 or an Xbox running Xbox OS.

This announcement also vastly simplifies the overall Kinect hardware story. Up to this point, there weren’t just two generations of Kinect hardware but also two versions of the current Kinect v2 hardware, one for the Xbox and one for Windows (for a total of four different devices). The Kinect hardware, both in 2010 and in 2013, has always been built first as a gaming device. In each case, it was then adapted to be used on Windows machines, in 2012 and 2014 respectively.

The now discontinued Kinect for Windows v2 differed from the Kinect for the Xbox One in both hardware and software. To work with Windows machines, the Kinect for Windows v2 device uses a specialized power adapter to pump additional power to the hardware (a splitter in the adapter attaches the hardware to both a USB port and a wall plug). The Xbox One, being proprietary hardware, is able to pump enough juice to its Kinect sensor without needing a special adapter. Additionally, the firmware for the original Kinect for Windows v1 sensor diverged over time from the Kinect for Xbox’s firmware – which led to differences in how the two versions of the hardware performed. It is now clear that this will not happen with Kinect v2.

Besides the four hardware devices and their respective firmware, the loose term “Kinect” can also refer to the software APIs used to incorporate Kinect functionality into a software program. Prior to this announcement, the Kinect for Windows SDK 1.0 through 1.8 was used to program against the original Kinect for Windows sensor. For the Kinect for Xbox One with the Kinect for Windows Adapter, you will want to use the Kinect for Windows SDK 2.0 (“for Windows” is still part of the title for now, even though you will be using it with a Kinect for Xbox One – and of course you can still use it with the Kinect for Windows v2 sensor if you happened to buy one prior to its discontinuation). There are also other SDKs floating around, such as OpenNI and libfreenect.
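For developers wondering what targeting the unified sensor actually looks like, here is a minimal sketch of opening the sensor and polling the body stream with the Kinect for Windows SDK 2.0’s C++ API. Treat it as an illustration rather than a complete program: it assumes the SDK is installed and a Kinect v2 (or a Kinect for Xbox One on the adapter – the code is identical either way) is plugged in, and it omits the error handling and COM cleanup a real application would need.

```cpp
#include <Windows.h>
#include <Kinect.h>   // Kinect for Windows SDK 2.0 header

int main()
{
    // The SDK exposes a single default sensor, regardless of whether it is a
    // Kinect for Windows v2 or a Kinect for Xbox One on the adapter.
    IKinectSensor* sensor = nullptr;
    if (FAILED(GetDefaultKinectSensor(&sensor)) || !sensor) return 1;
    sensor->Open();

    // Open a reader on the body (skeleton) stream.
    IBodyFrameSource* source = nullptr;
    IBodyFrameReader* reader = nullptr;
    sensor->get_BodyFrameSource(&source);
    source->OpenReader(&reader);

    // Poll for one frame; production code would subscribe to the
    // frame-arrived event instead of polling in a loop.
    IBodyFrame* frame = nullptr;
    if (SUCCEEDED(reader->AcquireLatestFrame(&frame)))
    {
        IBody* bodies[BODY_COUNT] = { nullptr };  // the SDK tracks up to six bodies
        frame->GetAndRefreshBodyData(_countof(bodies), bodies);
        // ... read joints from any body whose get_IsTracked() returns TRUE ...
        frame->Release();
    }

    sensor->Close();
    return 0;
}
```

The same binary runs against either flavor of the v2 hardware, which is exactly the simplification the unified-sensor announcement buys developers.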

[Much gratitude to Kinect MVP Bronwen Zande for helping me get the details correct.]