So it’s been almost a year since I last posted something on this blog, but it was a very busy year full of awesome events for me.
Last time I blogged (March 2011) was during my “empty semester” where I actually dropped everything and decided to go do my Bachelor Project in Augsburg, Germany :)
So here I’m going to talk about the awesome stuff I did there.

Advice: This is a very long blog post (even after I tried to make it shorter), if you don’t feel that you can read it all, just jump to the part that you find interesting :D

The Journey to Germany

When I first got the acceptance letter from Universität Augsburg in March 2011, offering me a chance to do my Bachelor Project there in the fields of Computer Graphics, Multi-Touch Interaction or Machine Learning (the exact topic was to be decided later), I got very excited and worried at the same time. I was excited because I wasn’t expecting to get accepted, since my GPA was relatively bad and there were A LOT of GUC students applying to various universities in Germany, and because I found Computer Graphics and Multi-Touch Interaction very interesting. However, I was also worried because it meant I’d have to live on my own in Germany for 3 months, which I thought was a lot, and that I’d have to drop the Bachelor Project I was about to start working on at the GUC (a nice topic with a great professor). Thankfully, I decided to do my Bachelors in Germany :)
There were a lot of problems I faced getting the visa and traveling, problems with the GUC, the Deutsche Bank (such annoying people!) and the German Embassy, so I only got the visa at the end of May, but thankfully Prof. André allowed me to delay the start of my project till June.

After all the trouble of getting to Germany, all the trouble of getting from Munich Airport to Augsburg by train (I almost got lost!), and then having to sleep on the floor of a friend’s dorm room for a few days (I wasn’t able to find a dorm room to rent!), I finally met Prof. Elisabeth André (my supervisor) as well as Mr. Chi-Tai Dang (my other supervisor). To my surprise, they didn’t force me to join a certain team or work on a small piece of a bigger project, as most professors do with Bachelor Projects in Germany; they told me to work on anything related to Multi-Touch technology, and preferably something new that hadn’t been done before!
I couldn’t believe it, I think I was the only one of my friends who got to come up with the project to work on! (well, except for 2 awesome people who did their projects in Cairo and arranged their own topics with some cool professors :))
After that,  Mr. Chi-Tai showed me the Lab (the HCM Lab) where all the magic happens, and he introduced me to the Microsoft Surface :)

Yup! My project was to be implemented on the (first generation) Microsoft Surface, a wonderful piece of Technology :)

A Video

Since this post might be a bit long and boring, I’ll start by showing you the video of the finished project to grab your interest :D
I’ve uploaded a series of very short videos recorded throughout my work on this project, I put them all in this playlist.

I wish I was able to take a video of 2 people actually playing the game so you could see how much fun it actually is :)

Anyways, after you’ve seen it, let’s talk about how I came up with the idea and how I implemented it!

The Idea

During my brainstorming session with my supervisor Mr. Chi-Tai, we came up with 2 ideas:
- A Social Media app that takes full advantage of the capabilities of the Microsoft Surface and offers very natural user interaction
- A Multi-Touch Game that demonstrates a high level of interactivity that can’t be achieved on basic touch screens
Then on my own I came up with the idea of making a board game (like chess or backgammon) that uses real board game pieces placed on the Surface, with the Surface augmenting the gaming experience, for example by giving hints on possible moves and showing nice effects. I thought it would be better than a fully virtual board game as it feels more natural.

After some researching and thinking, I found out that the augmented board game idea had been done many times and a lot of research already existed in this area, and that the social media app idea was too simple, with not much room to make the interaction feel natural. Also, I always liked making games, so I ended up going for a multi-touch game, and kept thinking about what would feel great to play on a large multi-touch screen and how to introduce a new level of interactivity.

It’s not easy to design a game that fits the multi-touch interaction paradigm without making it feel weird or confusing to interact with. I read a wonderful paper about multi-touch game design titled “Game Design for Ad-Hoc Multi-Touch Gameplay on Large Tabletop Displays” by Jonas Schild and Maic Masuch (link), and it explained some game design concepts and how they can be applied to multi-touch games.

After a lot of thinking about game genres and game design, I finally came up with the design for the game. I decided to make an arcade-style game where the players sit on opposite sides of the Microsoft Surface, each with 3 bases and 2 cannons/turrets, and their goal is to use the cannons to fire projectiles/shots at the opponent’s bases and destroy them before their own get destroyed. There are also random power-ups that show up on the battlefield (pun intended :D) to make the game more random and interesting.
Pretty simple right?
But there is a twist. I thought about how a player could block incoming projectiles to defend his base in a way that feels natural, so I sat in front of the Surface and imagined myself playing the game, and the first thing that came to my mind was to just put my hand on the surface so that the projectile actually “bounces” off my hand and goes away from my base, and that is what I decided to do!

I searched various research paper databases to see if the idea had been done before, and I found out that it was only done once, by two people at Microsoft Research who implemented it on an early prototype of the first-gen Microsoft Surface. Their paper was titled “Bringing Physics to the Surface” (link); what they did was explore different ways of representing objects placed on the surface as virtual objects handed to a 3D physics engine, so they could interact with the virtual objects on the screen. That’s what I wanted to do, but with a 2D physics engine instead. Unfortunately, the paper wasn’t helpful at all, as it gave no indication of how they implemented the different methods, nor of the performance cost of each method and how to reduce it.

So I went to my supervisor and told him about my decision, and he liked the idea (he also told me that he knows one of the authors of that paper), so I went ahead and started working on the most fun game I’ve ever made :)

Playing around with the SDK

There are two official development platforms for the Microsoft Surface, WPF and … (wait for it) … XNA!
I was so happy to know that I wouldn’t have to learn a lot of things to get started with the development; I’d been doing cool things with XNA for a long time (since I started blogging :D), so all I had to learn was how to use the Surface SDK’s core library (the high-level stuff was WPF-only).

So I went ahead and made a Surface XNA project and started playing around with the SDK. The first thing I did was render a 3D model (the one I used in my simple Asteroids game) and make it rotate by touching the surface and dragging (link to video).
Then I took a look at the manipulation API and used it to make multi-finger manipulations, pinch-to-zoom and other stuff (link to video).

Then I wanted to test the SDK’s ability to detect finger orientation, so I made a small circle that fires projectiles when touched; these projectiles move in the direction my finger pointed when it touched the surface (link to video). For some reason it felt fun to just keep touching the circle and watching the projectiles fly around :D
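The math behind this is tiny: the contact’s orientation angle becomes the projectile’s velocity direction. Here’s a Python sketch of the idea (the original was C#/XNA, and the function name and units here are illustrative, not the Surface SDK’s actual API):

```python
import math

def launch_velocity(orientation_radians, speed):
    # Turn a touch contact's orientation angle into a projectile velocity
    # vector. The Surface SDK reports an orientation per contact; this just
    # projects the chosen speed along that angle.
    return (speed * math.cos(orientation_radians),
            speed * math.sin(orientation_radians))
```

So a finger pointing "right" (angle 0) launches a projectile along the positive x-axis at full speed.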

Developing the Game

Now that I had the basic shooting mechanism ready, I had to start thinking about the players’ bases and collision. At that point I remembered the name of an open source 2D physics engine that works with XNA: the Farseer Physics Engine!
The Farseer Physics Engine is based on a 2D physics engine called Box2D.XNA, which is a C# port of Box2D, and Farseer adds a few extra features that I thought I might use.

After drawing some class diagrams and planning how to organize my code, I finally implemented the very basic version of the game: there are 2 sides, each with 2 cannons and 3 bases (left, right & bottom). The bases and projectiles are part of Farseer’s physics world, so the projectiles bounce off the bases when they collide. A base loses health points when hit, its health is indicated by its color (green to red), and once a base’s health points reach zero it disappears (link to video).
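The health-to-color mapping can be sketched as a simple linear blend; this Python version is my guess at the kind of thing involved, not the game’s actual color curve:

```python
def base_color(health, max_health):
    # Blend a base's color from green (full health) to red (empty), as an
    # (r, g, b) tuple with 0-255 channels. A plain linear blend; the real
    # game's exact curve isn't documented in the post.
    t = max(0.0, min(1.0, health / max_health))  # fraction of health left
    return (int(255 * (1 - t)), int(255 * t), 0)
```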

After that I started to look into blocking projectiles with your hand. A great thing about the Microsoft Surface that sets it apart from other multi-touch screens is its ability to actually “see” the objects that touch it, and the SDK gave me the ability to retrieve the raw image captured by the infrared cameras inside the Surface: an array of bytes representing an 8-bit gray-scale image at (as far as I remember) 30 frames per second. This is exactly what I needed to implement accurate collision between my hand and the projectiles.
So I went ahead and implemented basic collision where every projectile checks the few pixels around it to see if any of them is above a certain threshold; if yes, I multiply the projectile’s velocity by –1, which simply inverts its direction of motion. I also set a flag to true on collision, and as long as it’s true I divide the projectile’s velocity by some number so it slows down and doesn’t keep bouncing infinitely (link to video).
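That naive collision can be sketched like this in Python (the original was C#/XNA; the projectile’s dict structure and the damping factor are illustrative, not the original code):

```python
def bounce_if_touching(projectile, raw_image, threshold, damping=0.9):
    # Sample the pixels around the projectile and, if any is brighter than
    # `threshold` (i.e. something is touching the surface there), invert the
    # velocity on first contact. While still in contact, keep damping the
    # velocity so the projectile doesn't bounce back and forth forever.
    x, y = int(projectile['x']), int(projectile['y'])
    touching = any(
        raw_image[y + dy][x + dx] > threshold
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    )
    if touching and not projectile['colliding']:
        projectile['vx'] *= -1   # flip direction on first contact
        projectile['vy'] *= -1
    if touching:
        projectile['vx'] *= damping  # slow down while in contact
        projectile['vy'] *= damping
    projectile['colliding'] = touching
```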

The Hardest Part

And now comes the hardest part (no, not the Coldplay song :D). I wanted to make realistic collision, and just inverting the projectile’s direction isn’t enough; it sometimes caused the projectile to get stuck (as seen in the video). This meant I’d have to detect the shape of my hand, or whatever is touching the surface, and add it to Farseer’s physics world. This proved to be the hardest part of the project because it has a huge overhead on the CPU (the first-gen Surface has a poor little dual core processor), and I had to find a way to implement it efficiently so that it doesn’t lag the game.
At that moment I realized that this would be the main focus of my thesis: collision between virtual objects and real objects placed on the surface, and how to implement it efficiently without losing accuracy :)

I started with a basic algorithm: iterate over all the pixels of the raw image; if a pixel is below the threshold and any of its 8 adjacent pixels is bright (above the threshold), then the current pixel is an edge pixel, otherwise it’s not. Then create a new polygon shape, set its vertices to be all the edge pixels detected, and add this shape to the physics world. This is done by having a physics body which I called “shield”, clearing its set of fixtures, then adding the new polygon shape as a fixture.
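The brute-force edge detection step can be sketched like this in Python (the real code was C#; the nested-list image representation is just for illustration):

```python
def edge_pixels(raw_image, threshold):
    # Brute-force edge detection over the whole 8-bit gray-scale frame:
    # a pixel is an edge pixel if it is itself below the threshold but has
    # at least one of its 8 neighbours above it, i.e. it sits just outside
    # an object touching the surface.
    height, width = len(raw_image), len(raw_image[0])
    edges = []
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            if raw_image[y][x] >= threshold:
                continue  # inside an object, not an edge
            neighbours = (raw_image[y + dy][x + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                          if (dy, dx) != (0, 0))
            if any(v >= threshold for v in neighbours):
                edges.append((x, y))
    return edges
```

Every returned pixel then became a polygon vertex, which is exactly the redundancy problem discussed further down.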
The result was extremely slow performance, so I had to run that algorithm in a separate thread with very low priority so that it wouldn’t affect the game’s frame rate. At that point I discovered that Farseer doesn’t really like threads, because its collision system was based on a quad-tree and you can’t just remove a fixture from the world at any time, so I had to synchronize that thread with the main thread running the game.
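The synchronization pattern can be sketched like this: the detection thread never touches the physics world directly, it only publishes a pending vertex list under a lock, and the game thread swaps it in between physics steps. This Python sketch shows the structure of the approach, not the original C# code (which, as mentioned later in this post, ended up using C#’s `lock` statement):

```python
import threading

class ShieldSync:
    # Hands detected vertices from the low-priority detection thread to the
    # game loop without modifying the physics world mid-step.
    def __init__(self):
        self._lock = threading.Lock()
        self._pending = None

    def publish(self, vertices):
        # Called from the detection thread whenever a new outline is ready.
        with self._lock:
            self._pending = vertices

    def take_pending(self):
        # Called from the game loop between physics steps; returns the latest
        # outline once, or None if nothing new has been published.
        with self._lock:
            pending, self._pending = self._pending, None
            return pending
```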
The result was nice physical collisions with accurate edge detection, but of course the shape of the shield body was only updated every few seconds, so even after I removed the real object placed on the surface, the projectiles would keep bouncing off a ghost body for a few seconds.
You can see the result of that algorithm in this video, where I first run the game with the simple non-physical collision and then rerun it with the new collision algorithm. Note that the lag can’t be seen because I placed the piece of paper on the surface before I started the game.

Improving the Shape Detection

Of course the basic algorithm had a lot of problems, the main 2 being:
- I iterated over ALL the pixels of the raw input image. This made the algorithm slow, AND all the objects placed on the surface were treated as a single body with a single fixture (shape), so if I put two hands down with a big space between them, a projectile couldn’t pass between them!
- I was using the edge pixels directly as vertices. That’s a lot of redundancy, which also hurts performance!

To solve the first problem, I made use of the SDK’s Contact classes, which give me access to a list of fingers/blobs touching the surface and their positions (I had already used them to detect touches on the cannons and my finger orientation), so that I only iterate over the parts of the raw input image occupied by a detected finger or blob (anything that’s not recognized as a finger is a blob :D).
This greatly improved the performance and made it possible to create a fixture (shape) for every real object touching the surface, so a projectile can pass through the space between players’ hands.

One further improvement I made for the first problem was to change the way I iterate over the pixels. Instead of checking whether each pixel in a row is an edge pixel, as soon as I find an edge pixel I skip the rest and move to the next row; then I do the same thing again, but with the order of iteration reversed (decrementing the counter instead of incrementing), to get the other edge of the object. That way I don’t waste any time iterating over the inner pixels and checking whether they are edge pixels. The only disadvantage is that if the placed object has a hole in the middle, the hole’s edges won’t get detected, but that’s usually not the case in this game. (What? You wanna trap the poor little projectile inside your hand? :D)
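The two-ended row scan can be sketched like this in Python. Note that for simplicity this sketch uses the first bright pixel from each side as the edge, rather than the “dark pixel next to a bright one” criterion from the basic algorithm; the bounding-box parameters stand in for the rectangle the SDK reports for a finger/blob:

```python
def outline_per_row(raw_image, threshold, x0, x1, y0, y1):
    # Scan each row of a blob's bounding box from both ends, keeping only the
    # first edge hit from each side and skipping all interior pixels. Holes
    # inside an object are missed, which is acceptable for this game.
    points = []
    for y in range(y0, y1):
        row = raw_image[y]
        for x in range(x0, x1):              # left to right: first edge
            if row[x] >= threshold:
                points.append((x, y))
                break
        for x in range(x1 - 1, x0 - 1, -1):  # right to left: other edge
            if row[x] >= threshold:
                if (x, y) not in points[-2:]:  # avoid duplicating a 1px row
                    points.append((x, y))
                break
    return points
```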

Then for the second problem I had to find a solution that actually makes the algorithm faster, not slower. Doing proper corner detection would greatly decrease the performance, so I came up with a different approach: filter out as many edge pixels as possible (without missing any of the objects’ important features) to get the vertex pixels. I did this by modifying the conditions that decide whether a pixel is an edge pixel. The current pixel still needs to be below the threshold, but instead of saying “if any of the 8 adjacent pixels is above the threshold then it’s an edge pixel”, I can say one of the following:
- Check the 8 adjacent pixels and count how many are above the threshold; if that number is small enough (e.g. smaller than 4) then the current pixel is a vertex pixel. That way you remove some of the edge pixels that belong to the same corner.
- The current pixel is a vertex pixel if the top-right, top-left, bottom-right or bottom-left pixel is above the threshold AND the center-right, center-left, center-top and center-bottom pixels are all below the threshold. That way pixels lying along an edge are ignored and only the ones at the tip of a corner are detected.
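The second rule translates quite directly into code; here’s a Python sketch of it (again, the original was C#):

```python
def is_vertex_pixel(img, x, y, threshold):
    # Second filtering rule: keep a below-threshold pixel as a vertex only if
    # one of its diagonal neighbours is above the threshold while all four
    # orthogonal neighbours are below it. Pixels lying along a straight edge
    # fail the orthogonal test, so only corner tips survive.
    if img[y][x] >= threshold:
        return False
    diagonals = (img[y-1][x-1], img[y-1][x+1], img[y+1][x-1], img[y+1][x+1])
    orthogonals = (img[y-1][x], img[y+1][x], img[y][x-1], img[y][x+1])
    return (any(v >= threshold for v in diagonals)
            and all(v < threshold for v in orthogonals))
```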

It’s not easy to figure out which way is better without testing, because you can heavily filter these vertices and increase the performance of the collision detection at the cost of decreasing the performance of the vertex detection algorithm, and vice versa.
So after testing them both, I found that the 2nd way has better overall performance: the whole shape detection took about 50 milliseconds when testing with one hand placed sideways, perpendicular to the surface (the natural way to block something). It took more time when more hands were involved, and significantly longer (up to about 200 milliseconds) if I spread my hand on the surface with my palm down and fingers spread apart. This might sound like a problem, but during play-testing the delay wasn’t that noticeable in most cases.

You can see these improvements starting from the part 4 video.

By the time I was about to finish working on the game, I figured out a way that really increased its performance, a way that can only work in this specific game due to its gameplay design.
Instead of detecting fingers/blobs and iterating over the parts of the raw input image that they occupy, I’ll iterate over the part occupied by (and surrounding) the projectiles!
So now, instead of the performance depending on the size of the objects touching the surface, it depends on the number of projectiles, and only the vertices detected around the projectiles are added to the physics world, making collision detection perform faster.
I tested this new algorithm with a lot of projectiles and it was better than the previous method most of the time, and when there are a bunch of projectiles but none of them is close to my hand, the whole shape detection took about 2 milliseconds, so I ended up using it :)
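The projectile-centric scan can be sketched like this: clamp a small window around each projectile and run whatever per-window vertex detector is in use only there. The window radius and the pluggable `detect` callback are my own framing to keep the sketch self-contained, not the original code:

```python
def vertices_near_projectiles(raw_image, threshold, projectiles, radius, detect):
    # Scan only a small window around each projectile instead of every
    # finger/blob, so the cost scales with the number of projectiles, not
    # with how much of the screen a hand covers. `detect` is a per-window
    # vertex detector taking (image, threshold, x0, x1, y0, y1).
    height, width = len(raw_image), len(raw_image[0])
    vertices = []
    for (px, py) in projectiles:
        x0, x1 = max(0, int(px) - radius), min(width, int(px) + radius + 1)
        y0, y1 = max(0, int(py) - radius), min(height, int(py) + radius + 1)
        vertices.extend(detect(raw_image, threshold, x0, x1, y0, y1))
    return vertices
```

With no projectiles near a hand, the windows contain nothing bright and the whole pass does almost no work, which matches the ~2 ms best case mentioned above.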

Fun Features & Gameplay Improvements

As some of you might have figured out, improving the shape detection wasn’t always fun to do, so as I was improving and testing it I was also adding some fun gameplay features every now and then so I wouldn’t get bored :)

When testing the game, I felt it would be better if the longer I hold my finger down on the cannon, the bigger and more powerful the projectile gets, so I implemented this and, as a result, also increased the health points of the bases. I also made the projectiles collide with the cannons, but made the cannons invincible. These changes can be seen starting from the part 4 video.
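The charge-up mechanic boils down to mapping hold time to projectile power with a cap. The rate and cap values in this Python sketch are made up for illustration; the post doesn’t give the real constants:

```python
def charge_power(hold_seconds, max_power=3.0, charge_rate=2.0):
    # The longer the finger stays on the cannon, the more powerful (and
    # bigger) the projectile, up to a cap so charging can't go on forever.
    return min(max_power, 1.0 + charge_rate * hold_seconds)
```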

After that I implemented blurry trails that follow the projectiles (the faster they are, the longer the trail). All I did was make an array of 10 Vector3’s to store the positions of the trail parts and initialize all elements with the current position of the projectile (once it’s launched). Then on every frame I update the positions so that the first trail part has the same position as the projectile, and each of the other trail parts is slightly behind the part in front of it. When drawing, I draw a pre-blurred circle at every trail part position and scale down the sprite as I move along the trail.
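The per-frame trail update can be sketched like this. The post doesn’t spell out the exact rule for “slightly behind”, so the `follow` factor here is one plausible version (each part moves halfway toward the part in front of it every frame):

```python
def update_trail(trail, projectile_pos, follow=0.5):
    # The head of the trail snaps to the projectile; every other part moves
    # partway toward the part in front of it. A fast projectile leaves the
    # tail further behind, so the trail stretches out with speed.
    trail[0] = projectile_pos
    for i in range(1, len(trail)):
        px, py = trail[i - 1]
        x, y = trail[i]
        trail[i] = (x + (px - x) * follow, y + (py - y) * follow)
```

The renderer then just draws the pre-blurred circle at each position with a shrinking scale along the trail.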

When testing the gameplay, I found that it was too easy to defend the bottom base, so I made it twice as wide and gave it twice the health. It was also slightly hard to tell whether a base was about to get destroyed, so I made it flash between white and red when it has 2 or 1 health points left. I also implemented a bar that shows the power of the projectile while you’re holding down your finger on the cannon, so you can easily tell when it’s completely charged up (maximum power).

These features can be seen starting from the part 5 video.

Then I started working on power-ups that spawn randomly in the game. I made it possible to have many types of power-ups, but I only implemented one: the Scatter Bomb, which when hit fires a random number of projectiles with random directions and random power. It added some fun randomness to the gameplay :)

There was a design decision I had to make on how to activate power-ups: either by touching them or by shooting at them. Touching them seemed trivial and natural, but I thought it would be a bit painful/annoying when you have 4 players playing this game (2 on each team) and they all attempt to touch the power-up at the same time (it’s like “oh! a power-up! must get it!!!” … “ouch! you squashed my finger!” :D), so I decided to make power-ups activated by shooting at them, which also requires some aiming talent to get them.

Bored? Let me show you something a bit funny. I was once feeling lazy and bored at the lab, so I started messing with the constants in the power-up system: I made power-ups spawn instantly one after the other and made every Scatter Bomb fire about a hundred projectiles. It looked like an awesome chain reaction :D
You can see the chain reaction hell in this video, of course I rolled back these changes but it was fun to mess with the gameplay for a while :D

As I was almost done implementing the game, I wanted to make it look like a real game that can actually attract people to play it, so I added some sounds. I opened a nice little program called sfxr that generates sound effects and made sounds for firing projectiles and base destruction, and made the firing sound have a lower pitch for bigger projectiles :)
I also added a Game Over screen that shows which player has won and which player has lost and made it reset the game when touched.

And to make the game more visually appealing I added some particle effects, but they had to run on the GPU and not the CPU so that they wouldn’t ruin the performance I worked hard to gain. I was a bit too lazy to write HLSL code and deal with the GPU myself, so I just copied the code from the Particles 3D sample available at the MSDN App Hub and modified it for my use.
Now whenever a projectile hits a base, some random fire particles appear; when a base is destroyed, a large amount of fire particles cover the whole base; and on the game over screen I made particle effects that look like fireworks, along with fireworks sound effects :)

Then I added nice diagonal stripes for the background (I stole the idea from ElleEstCrimi’s design for my website :D); the stripes are red on one side and blue on the other to split the field between the 2 players.
I also colored the projectiles red and blue depending on the player who fired them, and made the ones from the Scatter Bomb yellow :)

You can see all these changes at the final video (part 6), which I embedded at the beginning of this post :)

Final Thoughts

Even though this game will probably never be used in any real scenario (unless some extremely rich guy bought a Microsoft Surface for home use and wanted to play a game on it :D), I really learned a lot while developing it :)
It has some of the cleanest code I’ve ever written and can easily be extended with more features. It was the first time I felt the need to synchronize threads (I actually used semaphores! … but then I used the “lock” statement in C# instead), and the first time I felt the overhead caused by the Garbage Collector (I was constantly creating lots of vertices and disposing of others) and felt the need to switch to a native language :D

Although I really enjoyed working on the project, writing the thesis was very boring for me and took me a long time to finish, and it ended up being just 18 pages, probably the shortest Bachelor Thesis ever written :S
Oh and I didn’t like writing in LaTeX :
Anyways, here is a link to it if you’re interested in reading papers and stuff.

Living in Augsburg for 3 months was an amazing experience. Even though I had no dishwasher or car or a lot of the things that make life easier, I was still living happier there than here in Cairo or Alexandria, and to this day I still get those moments when I remember some of the things I used to do there as part of my daily routine and feel like I want to go live there again for some time :)

Source Code!

I’ll probably never be able to run this game again, but maybe someone out there might have a first-gen Microsoft Surface and might want to try the game, so here is the full source code:

The code has 2 solutions:
- A Visual Studio 2008 solution containing an XNA 3.1 Game Project; this is just to compile the content (images, fonts, sounds and effects) into .XNB files (I’ve included its bin folder)
- A Visual Studio 2010 solution containing 2 projects: Farseer Physics Engine v3.3.1 for XNA 3.1 (I made very small changes to its code), and the Surface Bachelor Project, which is a Surface XNA project