Numerical Methods

Hi Everybody

This post will cover my first encounter with numerical methods (NM) after doing an introduction at university. Needless to say, I thoroughly enjoyed such a fresh approach to mathematics, and I plan to explore numerical methods further in the future.

What the hell are numerical methods? In a nutshell, I would define them as a computer’s way of DOING mathematics. If someone asked you how on earth you would write a program to do integral calculus and all its applications, numerical methods would be your answer. One of the fundamental paradigm shifts I went through when I started looking at NM was learning to do mathematics numerically as opposed to analytically, which is essentially how mathematics is taught at school. Analytical mathematics relies heavily on an intuitive feel for the problem, while numerical methods rely on well-defined, repeatable procedures. I would say that analytical mathematics is a superset of NM, and that it also precedes NM in the problem-solving process.

This is my first project looking at numerical methods. I started the original project as part of my numerical methods course at university, but I find these little projects become great test beds for miscellaneous experiments. Before I give a basic rundown of the project, here is what I used to put it together:

– Visual Studio 2010 Express (C++)

– Book: Applied Numerical Analysis (Gerald & Wheatley), 7th Edition (referred to as ANA)

– The Internet

This first version of the project only has algorithms for solving equations of different forms for different purposes. They are generally taught as a means of calculating roots of given equations, but they can be adapted to find more generic points as well (such as maxima or minima). I’ll give the wiki links for more comprehensive reading; I refer to them for my own explanations while trying to keep my version remotely human. Also bear in mind that the algorithms implemented come almost completely and directly from the book Applied Numerical Analysis. I would recommend this book to anyone interested in the subject, as it is both down to earth and thorough.

From an implementation point of view, I have not optimized the project’s structure as of yet and have aimed to keep it relatively simple.

I will not be going into any theoretical detail of these algorithms, nor am I qualified to do so. But I will give a quick description and one or two links for further study.

The algorithms are as follows:

“The Derivative”:

This is my home-baked method of calculating the derivative of a function at a specific point, based on the rise-and-run method. I originally implemented it simply because Newton’s method requires derivatives. It served my purpose, but I still want to keep a lookout for alternatives in the future.
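As a sketch, the rise-and-run idea can be written as a central difference quotient. This is my own illustrative version in C++, not the project’s exact code; the function name and default step size h are assumptions:

```cpp
#include <cmath>
#include <functional>

// Rise-over-run derivative via a central difference quotient:
//   f'(x) ~ (f(x + h) - f(x - h)) / (2h)
// The step size h is a trade-off: too large increases truncation error,
// too small amplifies floating-point rounding error.
double derivative(const std::function<double(double)>& f, double x,
                  double h = 1e-6)
{
    return (f(x + h) - f(x - h)) / (2.0 * h);
}
```

For example, the derivative of x^2 at x = 3 comes out very close to 6, since the central difference is exact for quadratics up to rounding.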

Newton’s Method


I’ll give you only one attempt to guess who this algorithm is named after. Newton’s Method (sometimes called the Newton–Raphson method; see wiki) is probably one of the most widely known algorithms in NM. It is a root-finding method and is probably the easiest to understand.

Rate of Convergence: Quadratic


Pros:

– Within a reasonable approximation of the root, it is rapidly convergent

– Relatively simple to understand

Cons:

– Requires two function evaluations per iteration (the function and its derivative)

– A note given by ANA: “The method may converge to a root different from the expected one or diverge if the starting value is not close enough to the root”

For those too lazy to google the algorithm, I have included my own diagram showing an example of Newton’s Method at work:


This style of diagram is a cliché in numerical methods, but diagrams like these are very useful for visualizing the behaviour of the variables involved. In this case, after several iterations the x0 and x1 variables should eventually converge (to within the specified margin). For any algorithm you are trying to work out, I would suggest drawing at least 3 of these diagrams in sequence, updating the values for each iteration.
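To make the diagram concrete, here is a minimal sketch of the iteration in C++. The names, tolerance, and failure handling are my own assumptions, not necessarily how the project does it:

```cpp
#include <cmath>
#include <functional>

// Newton's method: follow the tangent line at x0 down to the x-axis
// and use that intersection as the next estimate x1.
double newtons_method(const std::function<double(double)>& f,
                      const std::function<double(double)>& df,
                      double x0, double tol = 1e-10, int max_iter = 100)
{
    for (int i = 0; i < max_iter; ++i)
    {
        double dfx = df(x0);
        if (dfx == 0.0)
            return NAN;                 // horizontal tangent: next step undefined

        double x1 = x0 - f(x0) / dfx;   // x-intercept of the tangent at x0
        if (std::fabs(x1 - x0) < tol)   // x0 and x1 have "collided"
            return x1;
        x0 = x1;
    }
    return NAN;                         // did not converge within max_iter
}
```

For f(x) = x^2 − 2 with f'(x) = 2x and a starting value of 1.5, this homes in on the square root of 2 within a handful of iterations.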

Fixed Point Iteration (FPI) (AKA: x = g(x))


A less common algorithm, used for finding a fixed point of an iterative function. This algorithm has a unique twist: the equation processed by the FPI algorithm first needs to be rearranged into the form x = g(x). I personally found this algorithm to be fairly temperamental and particularly susceptible to divergence, though that may be an implementation issue. It is, however, a very simple algorithm.

Rate of Convergence: Linear


Pros:

– Rate of convergence can potentially be increased by using Aitken’s acceleration

Cons:

– Requires the input equation to be “pre-processed” into the form x = g(x)

– Convergence can be either oscillatory or monotonic, and it can be difficult to determine whether the process is diverging
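A sketch of the iteration itself, assuming the equation has already been rearranged into the form x = g(x) (the name, tolerance, and iteration cap are my own choices):

```cpp
#include <cmath>
#include <functional>

// Fixed point iteration: repeatedly apply x = g(x) until the value stops
// changing (or we give up). Whether this converges depends on g: roughly,
// |g'(x)| < 1 near the fixed point is needed, which is why the method is
// so temperamental.
double fixed_point(const std::function<double(double)>& g,
                   double x0, double tol = 1e-10, int max_iter = 1000)
{
    for (int i = 0; i < max_iter; ++i)
    {
        double x1 = g(x0);
        if (std::fabs(x1 - x0) < tol)
            return x1;
        x0 = x1;
    }
    return NAN;   // no sign of convergence: possibly divergent or oscillating
}
```

The classic example is x = cos(x), which slowly (linearly) settles to about 0.739 from almost any starting value.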

Muller’s Method


A verbose but powerful root-finding algorithm, based on the secant method. However, rather than using a linear equation through one or two points to traverse the function, it fits a quadratic polynomial through three points and takes a root of that parabola as the next estimate.

Rate of Convergence: Superlinear (order approximately 1.84)


Pros:

– Despite the verbosity of the algorithm, it attains reasonable precision in fewer iterations

Cons:

– The denominator in the quadratic formula must not be zero, or the algorithm will fail

– The discriminant under the square root (b^2 − 4ac) must not be negative, otherwise the square root is complex

– Verbose, and requires multiple function evaluations per iteration
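To show where that denominator and discriminant come from, here is a sketch of one way to implement the method in real arithmetic (which is why a negative discriminant is treated as failure). The a, b, c names follow the usual parabola coefficients; everything else is my own assumption:

```cpp
#include <cmath>
#include <functional>

// Muller's method: fit a parabola through (x0, f0), (x1, f1), (x2, f2)
// and take its root nearest x2 as the next estimate x3.
double mullers_method(const std::function<double(double)>& f,
                      double x0, double x1, double x2,
                      double tol = 1e-10, int max_iter = 100)
{
    for (int i = 0; i < max_iter; ++i)
    {
        double f0 = f(x0), f1 = f(x1), f2 = f(x2);
        double h1 = x1 - x0, h2 = x2 - x1;
        double d1 = (f1 - f0) / h1;        // slope through x0, x1
        double d2 = (f2 - f1) / h2;        // slope through x1, x2
        double a = (d2 - d1) / (h2 + h1);  // curvature of the parabola
        double b = a * h2 + d2;
        double c = f2;

        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0)
            return NAN;                    // complex root: give up (real-only sketch)

        double sq = std::sqrt(disc);
        // Pick the sign that maximizes |denominator| for numerical stability.
        double denom = (std::fabs(b + sq) > std::fabs(b - sq)) ? b + sq : b - sq;
        if (denom == 0.0)
            return NAN;                    // the failure case noted above

        double x3 = x2 - 2.0 * c / denom;
        if (std::fabs(x3 - x2) < tol)
            return x3;
        x0 = x1; x1 = x2; x2 = x3;         // slide the three points forward
    }
    return NAN;                            // did not converge within max_iter
}
```

For example, for f(x) = x^3 − x − 2 with starting points 0, 1, 2, this converges to the real root near 1.5214.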


That pretty much covers what I have implemented in this project thus far. There are some shortcomings that I still want to look at in the future…

What I have not dealt with yet:

– Checking for divergence towards infinity. The project could essentially run into an infinite loop, depending on the algorithm’s idiosyncrasies and the given initial boundary values

– Aitken’s Method, used to accelerate the convergence of iterative methods

– Methods of Linear Interpolation (Regula Falsi and the Secant Method)

– Function definition: evaluated functions are hard-coded! This will be a post on its own (at least).

Here is the download for the project:

Edit: I’ve added a refactored version of the project as well, to correct some layout mistakes


NumericalMethodsCPU – Revised

As with any and all projects that I work on, please feel free to give feedback or let me know of any bugs, limitations or problems. I especially welcome any advice regarding misunderstandings that I may have in the topic’s theory (in this case, numerical methods).

I hope this helps. Despite it being a tough subject to wrap your head around initially, it is also a lot of fun!

Ciao 🙂