Category: Computer Sciences

So during a conversation with some colleagues, the topic of RobotWars came up. Not the real-life kind, though, but virtual: you program the behaviour of your own robot and then compete against other programmers (or against your own robots). I had never heard of programming games like this (I knew of Logo and its turtle), so I had to start reading up on them, and there are dozens at least. I think the idea alone is awesome. Digging through them, a lot seem to be built around their own scripting language, which may be faster to get going with, but I liked the idea of a RobotWars clone that uses a full commercial language, with all of that language's bells and whistles. So far the only one I have found that does is called RealTimeBattle. Check it out at :

Unfortunately, the latest source code available for download did not build. As it turns out, the project hasn't been updated since 2005, and the C/C++ compilers and their standards have changed somewhat since then. Most of the breakage has to do with string handling; I also found a patch, published around 2006, that was necessary to get the source to compile. I am a bit surprised that there is no backwards compatibility with regard to string handling, but until I look into why the APIs were changed the way they were, I shall assume that the gurus managing the C/C++ standards had good reason for refactoring the way they did.

If I can get in touch with the original RTB programmers, I will try to get my patched version added on SourceForge. In the meantime, here is a download link, just in case I can't get the SourceForge version updated :

RealTimeBattle 1.0.8 Revised

Just as a caution, I did not take the time to optimize my changes, such as the use of namespaces, which may not be necessary. Feel free to optimize them yourself.

Ciao 🙂

Just a quick post today regarding something I’ve been tinkering with. My experience with Zita Asteria included doing a few of its particle systems and they can be a lot of fun to play with. I might do a more general tutorial on creating particle systems some day when they come more naturally to me. For now I suggest a more informal approach, by simply visualizing the system before coding.

As usual, a list of ingredients :

- Visual Studio C# Express 2010

- XNA 4.0

- DPSF (Dynamic Particle System Framework)

The DPSF API is pretty well black-boxed, but I still want to give a rundown of the basic lines of code you'll need to get particle systems running.

The following should be enough to get started :

1) A member field for the particle system manager, of type ParticleSystemManager

2) Adding each particle system you need: particleSystemManager.AddParticleSystem(particleSystem);

3) An update call: particleSystemManager.UpdateAllParticleSystems(elapsedTime);

4) Configuration calls:

a. particleSystemManager.SetCameraPositionForAllParticleSystems(cameraLocation);

b. particleSystemManager.SetWorldViewProjectionMatricesForAllParticleSystems(msWorldMatrix, msViewMatrix, msProjectionMatrix);

5) A draw call: particleSystemManager.DrawAllParticleSystems();


Any other code needed will be specific to the particle system you might be using.

The particle system I put together to demo this is extremely simple, but it does show the basics of one of DPSF's classes. It also incorporates basic rotation using some of the DPSF utility methods, although I didn't use the traditional DPSF delegates, for simplicity's sake.

Each particle essentially has two aspects to its behaviour:

1) Rotational velocity relative to the emitter's location (not the center of the particle)

2) Vertical linear velocity (an increasing y-component)

That’s this system in a nutshell. Here is a screenshot of it running :


A thanks goes to the DPSF project team for the textures supplied in the source!

I hope this helps someone. Well-implemented particle systems add quality to any visuals.

If you want the project source : Whirlwind Particle System Demo

Ciao All

Numerical Methods

Hi Everybody

This post will cover my first encounter with numerical methods (NM) after doing an introduction at university. Needless to say, I thoroughly enjoyed such a fresh approach to mathematics, and I plan to look at numerical methods more in the future.

What the hell are numerical methods? In a nutshell, I would define them as a computer's way of DOING mathematics. If someone asked you how on earth you would write a program to do integral calculus and all its applications, numerical methods would be your answer. One of the fundamental paradigm shifts I went through when I started looking at NM was learning to do mathematics numerically as opposed to analytically, which is essentially how mathematics is taught at school. Analytical mathematics relies heavily on an intuitive feel for the problem, whereas numerical methods essentially rely on predefined problem sets. I would say that analytical mathematics is a superset of NM, and that it also precedes NM in the problem-solving process.

This is my first project looking at numerical methods. I started the original project as part of my numerical methods course at university, but I find these little projects become great test beds for miscellaneous experiments. Before I give a basic rundown of the project, here is what I used to put it together:

- Visual Studio 2010 Express (C++)

- Book: Applied Numerical Methods (Gerald & Wheatley), 7th Edition (referred to as ANA)

- The Internet

This first version of the project only has algorithms for solving equations of different forms for different purposes. They are generally taught as a means of calculating the roots of given equations, but they can be adapted to find more generic points as well (such as maxima or minima). I'll give the wiki links for more comprehensive reading; I also refer to them for my own explanations, while trying to keep my version remotely human. Bear in mind that the algorithms implemented come almost completely and directly from the book Applied Numerical Methods; I would recommend this book to anyone interested in the subject, as it is both down to earth and thorough.

From an implementation point of view, I have not optimized the project's structure as of yet and have aimed to keep it relatively simple.

I will not be going into any theoretical detail of these algorithms, nor am I qualified to do so. But I will give a quick description and one or two links for further study.

The algorithms are as follows:

“The Derivative” :

This is my home-baked method of calculating the derivative of a function at a specific point, based on the rise-and-run method. I originally implemented it simply because Newton's Method requires the use of derivatives. It served my purpose, but I still want to keep a lookout for alternatives in the future.
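The rise-and-run idea can be sketched in a few lines of C++. This is a minimal illustration, not the project's actual code: the function name, signature, and default step size are my own assumptions.

```cpp
#include <cmath>
#include <functional>

// Forward-difference ("rise over run") estimate of f'(x).
// h is the step size: too large hurts accuracy, too small hurts
// floating-point precision, so a middling default is used here.
double derivative(const std::function<double(double)>& f, double x,
                  double h = 1e-6) {
    double rise = f(x + h) - f(x);
    double run  = h;
    return rise / run;
}
```

A central difference, (f(x + h) - f(x - h)) / (2h), is usually more accurate for the same step size and would be one drop-in alternative worth keeping a lookout for.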

Newton’s Method

( Check out : )

I’ll give you only one attempt to guess who this algorithm is named after. Newton’s Method (sometimes called the Newton-Raphson method; see the wiki) is probably one of the most widely known algorithms in NM. It is a root-finding method, and probably the easiest to understand.

Rate of Convergence: Quadratic


Pros:

- Within a reasonable approximation of the root, it is rapidly convergent

- Relatively simple to understand

Cons:

- Requires two function evaluations per iteration

- A note given by ANA: “The method may converge to a root different from the expected one, or diverge if the starting value is not close enough to the root”

For those too lazy to google the algorithm I have included my own diagram showing an example of Newton’s Method at work:


This style of diagram is a cliché in numerical methods, but diagrams like these are very useful for visualizing the behaviour of the variables used. In this case, after several iterations the x0 and x1 variables should eventually collide (to within the margins specified). For any algorithm you are trying to work out, I would suggest drawing up at least 3 of these diagrams in sequence, with an update of the values for each iteration.
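To make the iteration concrete, here is a minimal real-valued sketch of Newton's Method in C++. The function name, tolerance, and iteration cap are my own assumptions, not taken from the project.

```cpp
#include <cmath>
#include <functional>
#include <limits>

// Newton's Method: x_{n+1} = x_n - f(x_n) / f'(x_n).
// Each iteration walks the tangent line at x_n down to the x-axis.
double newtonsMethod(const std::function<double(double)>& f,
                     const std::function<double(double)>& df,
                     double x0, double tol = 1e-10, int maxIter = 100) {
    double x = x0;
    for (int i = 0; i < maxIter; ++i) {
        double fx = f(x);
        if (std::fabs(fx) < tol)
            return x;               // close enough to a root
        double dfx = df(x);
        if (dfx == 0.0)
            break;                  // flat tangent: cannot continue
        x -= fx / dfx;              // the two evaluations per iteration: f and f'
    }
    return std::numeric_limits<double>::quiet_NaN();  // did not converge
}
```

For example, solving x^2 - 2 = 0 from a starting value of 1.5 converges to the square root of 2 in only a handful of iterations, which is the quadratic convergence in action.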

Fixed Point Iteration (FPI) (AKA : x = g(x) )

( Check out : )

A less common algorithm, used for finding a fixed point of an iterative function. This algorithm has a unique twist: the equation processed by the FPI algorithm first needs to be rearranged into the form x = g(x). I personally found this algorithm to be fairly temperamental and particularly susceptible to divergence; this may, however, be an implementation issue. It is nonetheless a very simple algorithm.

Rate of Convergence: Linear


Pros:

- The rate of convergence can potentially be increased by using Aitken’s Acceleration

Cons:

- Requires the input equation to be “pre-processed” into the form x = g(x)

- Convergence can be either oscillatory or monotonic, and it can be difficult to determine whether the process is diverging
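The whole algorithm fits in a short loop. Again, this is a sketch under my own naming and defaults, not the project's code.

```cpp
#include <cmath>
#include <functional>
#include <limits>

// Fixed-point iteration: repeatedly apply x = g(x) until successive
// estimates agree to within tol. The equation f(x) = 0 must first be
// rearranged into x = g(x) (the "pre-processing" step).
double fixedPointIteration(const std::function<double(double)>& g,
                           double x0, double tol = 1e-10,
                           int maxIter = 1000) {
    double x = x0;
    for (int i = 0; i < maxIter; ++i) {
        double next = g(x);
        if (std::fabs(next - x) < tol)
            return next;            // fixed point reached
        x = next;
    }
    return std::numeric_limits<double>::quiet_NaN();  // failed to settle
}
```

The choice of rearrangement matters a great deal: x^2 = x + 1 rewritten as x = sqrt(x + 1) converges to the golden ratio, whereas the rearrangement x = x^2 - 1 fails to converge from the same start, which is exactly the temperamental behaviour described above.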

Muller’s Method

( Check out : )

A verbose but powerful root-finding algorithm, based on the secant method. Rather than using a linear equation through one or two points on the function to traverse it, Muller's Method fits a quadratic polynomial through three points to find the root.

Rate of Convergence: Approximately 1.84 (superlinear)


Pros:

- Despite the verbosity of the algorithm, it attains reasonable precision in fewer iterations

Cons:

- The denominator in the quadratic formula must not be zero, or the algorithm will fail

- In a real-valued implementation, the discriminant under the square root (b^2 - 4ac) must not be negative

- Verbose, and requires multiple function evaluations per iteration
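Under the same real-valued restriction noted in the cons, the three-point update can be sketched as follows. The names, tolerance, and sign-selection detail are my own; the project's implementation follows ANA and may differ.

```cpp
#include <cmath>
#include <functional>
#include <limits>

// Muller's Method (real-valued): fit a parabola through the three
// latest points and take its root nearest x2 as the next estimate.
double mullersMethod(const std::function<double(double)>& f,
                     double x0, double x1, double x2,
                     double tol = 1e-10, int maxIter = 100) {
    for (int i = 0; i < maxIter; ++i) {
        double f0 = f(x0), f1 = f(x1), f2 = f(x2);
        double h1 = x1 - x0, h2 = x2 - x1;
        double d1 = (f1 - f0) / h1, d2 = (f2 - f1) / h2;
        double a = (d2 - d1) / (h2 + h1);      // quadratic coefficients
        double b = a * h2 + d2;
        double c = f2;
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0)                         // real-valued restriction
            return std::numeric_limits<double>::quiet_NaN();
        double root = std::sqrt(disc);
        // Pick the larger-magnitude denominator to avoid cancellation
        // (and the zero-denominator failure noted above).
        double denom = (std::fabs(b + root) > std::fabs(b - root))
                           ? (b + root) : (b - root);
        if (denom == 0.0)
            return std::numeric_limits<double>::quiet_NaN();
        double x3 = x2 - 2.0 * c / denom;
        if (std::fabs(x3 - x2) < tol)
            return x3;
        x0 = x1; x1 = x2; x2 = x3;              // shift the three points
    }
    return std::numeric_limits<double>::quiet_NaN();
}
```

A full implementation would switch to complex arithmetic when the discriminant goes negative, which is actually one of Muller's Method's strengths: it can locate complex roots.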


That pretty much covers what I have implemented in this project thus far. There are some shortcomings that I still want to look at in the future.

What I have not dealt with yet:

- Checking for traversal off to infinity: the project can run into an infinite loop, depending on an algorithm’s idiosyncrasies and the initial boundary values given

- Aitken’s Method, used to accelerate the convergence of iterative methods

- Methods of linear interpolation (Regula Falsi and the Secant Method)

- Function definition: evaluated functions are hard-coded! This will be a post on its own (at least)

Here is the download for the project:

Edit: I’ve added a refactored version of the project as well, to correct some layout mistakes.


NumericalMethodsCPU – Revised

As with any and all projects that I work on, please feel free to give feedback or let me know of any bugs, limitations or problems. I especially welcome any advice regarding misunderstandings that I may have in the topic’s theory (in this case, numerical methods).

I hope this helps. Despite it being a tough subject to wrap your head around initially, it is also a lot of fun!

Ciao 🙂