Computer Solutions to Ordinary Differential Equations

Assignment :

Complete Homework 14, start Homework 15, and read the Web notes on the Fortran Pointer data attribute.

New Fortran:

none

I've already given you a method to solve a limited class of ordinary differential equations. If you have a differential equation in the form:
dy/dx = f(x)

where f(x) is any function of x alone, and you are given some initial condition, say y(0)=0, then all you need to do is apply the trapezoidal rule with some appropriate steps in x to obtain an approximation to y at any value of x. Usually you want more than one value of y: you want to see the behavior of y as a function of x over some range of x values. To obtain this information, y is stored as an array, retaining the value of y resulting from trapezoidal integration over each increment in x. This can be expressed in Fortran as:

y(i+1) = y(i) + 0.5*( f( x(i+1) ) + f( x(i) ) )*( x(i+1) - x(i) )

The resulting set of x(i), y(i) pairs of values, constitutes the approximation to the solution of the original ODE.
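The course examples are in Fortran; here is a sketch of the same marching loop in Python for illustration (the function names and test problem are my own, not from the course programs). It integrates dy/dx = cos(x) from y(0) = 0, so the result should track sin(x):

```python
import math

def trapezoid_ode(f, x):
    """Integrate dy/dx = f(x) with y(x[0]) = 0 by the trapezoidal rule,
    returning a list of y values, one per entry of x."""
    y = [0.0]
    for i in range(len(x) - 1):
        y.append(y[i] + 0.5 * (f(x[i + 1]) + f(x[i])) * (x[i + 1] - x[i]))
    return y

# Example: dy/dx = cos(x), y(0) = 0, so the exact answer is y(x) = sin(x)
n = 100
x = [i * (math.pi / 2) / n for i in range(n + 1)]
y = trapezoid_ode(math.cos, x)
print(y[-1])   # close to sin(pi/2) = 1
```

The set of (x[i], y[i]) pairs is the approximate solution, just as in the Fortran statement above.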

Things get a little more complicated when you want to solve an equation that looks like

dy/dx = f(x,y)

or when higher derivatives are involved. These situations lead to methods like Forward Euler, Backward Euler, and Crank-Nicolson. You may also want an even higher order of accuracy in your approximate solution, and need to consider methods like Runge-Kutta, Adams-Bashforth, or Adams-Moulton.

To illustrate some approaches to the solution of more complicated problems, let's start with a simple problem that we already know how to solve. This is always a good idea when developing general computer solution methods. It gives you confidence that your method is working before you tackle a problem with no simple way to check answers. Take a steel ball and drop it from a height of 100 meters. What is its height as a function of time? The basic equation of motion is:

d2z/dt2 = -g

Based on what we learned about approximation of the second derivative, this equation can be approximated as:

(zi+1 - 2 zi + zi-1 )/dt2 = -g

where "dt" is the step taken in time between two successive elevations (e.g. between z1 and z2, dt = t2 - t1). Once zi and zi-1 are known, the equation can be rearranged to get the next value of z,

zi+1 = -g dt2 + 2zi - zi-1

I can in fact get this solution started based on my knowledge that at t=0, z=100 and the initial velocity is zero. However, there is no point in going into the startup details, because the solution that follows does not work well. Basically the factor of 2 multiplying zi acts as an amplifier on computer round-off errors, eventually destroying the quality of the solution. Take a look at the sample program fall.f to see an example of this growth in error. You'll find that as smaller time steps are chosen, the results at any given time get worse rather than better. The reason is that the increased number of time steps gives more opportunity for growth of errors.
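A sketch of this march in Python (my own variable names; fall.f itself is Fortran) looks like the following. Note that in double precision the round-off amplification described above is far milder than in single precision, so this sketch mainly shows the mechanics of the scheme and its start-up step:

```python
# March z_{i+1} = -g*dt**2 + 2*z_i - z_{i-1}, starting from z=100, v=0.
g, dt, z0 = 9.807, 0.01, 100.0
nstep = int(round(1.0 / dt))       # march to t = 1 second
z_prev = z0
z_curr = z0 - 0.5 * g * dt**2      # start-up step using v(0) = 0
for i in range(nstep - 1):
    z_prev, z_curr = z_curr, -g * dt**2 + 2.0 * z_curr - z_prev
exact = z0 - 0.5 * g * 1.0**2      # z(t) = z0 - g t^2 / 2
print(z_curr, exact)
```

Because the true solution is a quadratic in t, the second difference reproduces it exactly here; the trouble in fall.f comes from round-off, not truncation.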

There are situations where it's appropriate to solve a second order differential equation with a substitution of the "finite difference" form of the second derivative. This is the case for boundary value problems, such as the equations for vibrating systems, or for the conduction of heat. The analogy for the above problem would be that we are told z=100 when t=0, and z=20 when t=4. No information is given about initial velocity. It must be determined based on the starting and final (boundary) conditions for z. The resulting set of equations is called a tridiagonal linear system, and can be solved without propagation of error.
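The boundary value version can be sketched as a tridiagonal solve. The following hypothetical Python example (not one of the course programs; the step count is my own choice) builds the interior equations (z[i-1] - 2 z[i] + z[i+1])/dt**2 = -g with z=100 at t=0 and z=20 at t=4, and solves them with the standard tridiagonal (Thomas) algorithm:

```python
g = 9.807
n = 40                      # number of interior points
dt = 4.0 / (n + 1)
# Interior equations: z[i-1] - 2 z[i] + z[i+1] = -g*dt**2
d = [-g * dt**2] * n        # right hand sides
d[0] -= 100.0               # move the known z(0)=100 to the RHS
d[-1] -= 20.0               # move the known z(4)=20 to the RHS
a, b, c = 1.0, -2.0, 1.0    # constant sub-, main, and super-diagonals
# Forward elimination
cp = [0.0] * n; dp = [0.0] * n
cp[0] = c / b; dp[0] = d[0] / b
for i in range(1, n):
    m = b - a * cp[i - 1]
    cp[i] = c / m
    dp[i] = (d[i] - a * dp[i - 1]) / m
# Back substitution
z = [0.0] * n
z[-1] = dp[-1]
for i in range(n - 2, -1, -1):
    z[i] = dp[i] - cp[i] * z[i + 1]
```

Because the true solution is quadratic in time, the computed z values match the analytic solution to round-off, with no error propagation.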

To reduce error growth on this initial value problem, reformulate it as two first order differential equations:

dz/dt = v

dv/dt = -g

For those of you who have done the analytic solutions of similar differential equations, you should recognize this two step process.

This form using two equations can be cast in a number of finite difference forms with various levels of accuracy and ability to suppress propagation of round-off errors. The program in fall1.f demonstrates one where we approximate the equations as:

(vi+1 - vi )/dt = -g

(zi+1 - zi)/dt = .5 ( vi+1 + vi )

In effect we have decided to evaluate the differential equations at a point halfway between ti and ti+1. We have used the result obtained earlier for the derivative in a quadratic approximation, and have used a simple linear interpolation to obtain the value of the right hand sides at time ti + dt/2. Machine round-off error also propagates a little with these equations, but the problem is not as severe. The specific implementation in fall1.f further suppresses round-off by working in double precision. When you execute it, you will see that computed results match the theoretical results to the level of precision printed. This should be expected, because the approximation to the derivative assumes that a second order polynomial can describe z and velocity, and the evaluation of the right hand side assumes velocity is linear in time. Both assumptions are consistent with the actual solution.
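A Python sketch of the fall1.f scheme (variable names are mine) marches v with the first equation and z with the averaged velocity:

```python
# v_{i+1} = v_i - g*dt ;  z_{i+1} = z_i + 0.5*dt*(v_{i+1} + v_i)
g, dt, nstep = 9.807, 0.1, 10      # march to t = 1 second
z, v = 100.0, 0.0
for i in range(nstep):
    v_new = v - g * dt
    z = z + 0.5 * dt * (v_new + v)
    v = v_new
t = nstep * dt
exact = 100.0 - 0.5 * g * t**2
print(z, exact)
```

As the notes explain, both assumptions in the scheme match the actual solution, so z agrees with the exact result to round-off.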

That wasn't too complicated. Now let's try something with a solution that doesn't match the assumed curve fit in the finite difference approximations. I'm going to do a modified form of a Bungy jump. The bungy cord is suspended above you, so that it just starts to stretch as you step off the bridge. The equations describing your fall (without air resistance) are:

dv/dt = -g + k/m (z1 - z)

dz/dt = v

where z1 is your initial height above the ground below, m is your mass (in kilograms), and k is the spring constant associated with the bungy cord.

The program fall2.f solves this problem with the following numerical approximation to the differential equations:

(vi+1 - vi )/dt = - g + k/m (z1 - zi )

(zi+1 - zi)/dt = .5 ( vi+1 + vi )

The equations can be rearranged to obtain the unknown values at time ti+1 in terms of known values as follows:

vi+1 = vi + dt (- g + k/m (z1 - zi ) )

zi+1 = zi + .5 dt ( vi+1 + vi )

Note that in the first equation, the point at which the right hand side is evaluated (ti) is not the time for which the numerical time derivative best approximates the real derivative (ti+1/2). This is not a fatal problem, but will require smaller time step sizes than the next approximation. This method is called Forward Euler or Explicit Euler, and is said to be "first order" accurate in time step size (dt), because errors in the solution are directly proportional to first power of dt.
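The scheme can be sketched in Python as follows. The mass and spring constant here are my own assumed values, not the ones used in fall2.f, so the numbers will not match the tables below; the point is the structure of the march and the first order error, which we can measure against the analytic solution z = z1 - (m g/k)(1 - cos(w t)) with w = sqrt(k/m):

```python
import math

g, k, m = 9.807, 5.0, 70.0          # assumed mass and spring constant
z1, dt, nstep = 100.0, 0.01, 500    # march to t = 5 seconds
z, v = z1, 0.0
for i in range(nstep):
    v_new = v + dt * (-g + (k / m) * (z1 - z))   # RHS at the old level
    z = z + 0.5 * dt * (v_new + v)
    v = v_new
t = nstep * dt
exact = z1 - (m * g / k) * (1.0 - math.cos(math.sqrt(k / m) * t))
print(z - exact)   # first order error, proportional to dt
```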

The program fall3.f provides a more accurate approximation to the solution of this problem with the following numerical approximation to the differential equations:

(vi+1 - vi )/dt = - g + k/m (z1 - .5 ( zi + zi+1 ))

(zi+1 - zi)/dt = .5 ( vi+1 + vi )

Since zi+1 appears in both equations, they must be solved as a coupled pair with unknowns vi+1 and zi+1. This method of evaluating all terms at a point centered between ti and ti+1 is called "Crank-Nicolson" and is said to be "second order" accurate in dt, because the errors in the solution are proportional to the second power of dt.
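Because the pair is linear, it can be solved in closed form at each step. Here is a Python sketch (same assumed mass and spring constant as the sketch above, not the fall3.f values): substituting the v equation into the z equation and collecting the zi+1 terms gives an explicit formula for zi+1, after which vi+1 follows from the z equation.

```python
import math

g, k, m = 9.807, 5.0, 70.0
a = k / m
z1, dt, nstep = 100.0, 0.1, 50      # coarse step, march to t = 5
z, v = z1, 0.0
for i in range(nstep):
    # Solve the coupled pair for z_new, then recover v_new
    z_new = (z + dt * v + 0.5 * dt**2 * (-g + a * (z1 - 0.5 * z))) \
            / (1.0 + 0.25 * a * dt**2)
    v_new = 2.0 * (z_new - z) / dt - v
    z, v = z_new, v_new
t = nstep * dt
exact = z1 - (m * g / k) * (1.0 - math.cos(math.sqrt(a) * t))
print(z - exact)   # second order error: small even at dt = 0.1
```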

To appreciate the advantages of the higher order method consider the following results from fall2.f and fall3.f at selected times for z1 = 100 meters.

dt = .1 2nd order (fall3.f)

Computed        Actual
Time            Height         Height         Error
4.7000000E+00  1.0221680E+01  1.0209579E+01  1.2101302E-02
4.8000000E+00  7.1281666E+00  7.1157809E+00  1.2385729E-02
4.9000000E+00  4.0294560E+00  4.0167973E+00  1.2658637E-02
5.0000000E+00  9.2864640E-01  9.1572717E-01  1.2919237E-02
5.1000000E+00 -2.1711621E+00 -2.1843288E+00  1.3166752E-02

dt = .1 1st order (fall2.f)

Computed        Actual
Time            Height         Height         Error
4.7000000E+00  9.5425613E+00  1.0209579E+01 -6.6701775E-01
4.8000000E+00  6.4120142E+00  7.1157809E+00 -7.0376667E-01
4.9000000E+00  3.2754198E+00  4.0167973E+00 -7.4137753E-01
5.0000000E+00  1.3591168E-01  9.1572717E-01 -7.7981548E-01
5.1000000E+00 -3.0033721E+00 -2.1843288E+00 -8.1904329E-01

dt = .01 2nd order (fall3.f)

Computed        Actual
Time            Height         Height         Error
4.7000000E+00  1.0209700E+01  1.0209579E+01  1.2103161E-04
4.8000000E+00  7.1159047E+00  7.1157809E+00  1.2387609E-04
4.9000000E+00  4.0169239E+00  4.0167973E+00  1.2660533E-04
5.0000000E+00  9.1585638E-01  9.1572717E-01  1.2921147E-04
5.0300000E+00 -1.4407235E-02 -1.4537203E-02  1.2996813E-04

dt = .01 1st order (fall2.f)

Computed        Actual
Time            Height         Height         Error
4.7000000E+00  1.0142135E+01  1.0209579E+01 -6.7444369E-02
4.8000000E+00  7.0446624E+00  7.1157809E+00 -7.1118462E-02
4.9000000E+00  3.9419209E+00  4.0167973E+00 -7.4876439E-02
5.0000000E+00  8.3701243E-01  9.1572717E-01 -7.8714733E-02
5.0300000E+00 -9.4418524E-02 -1.4537203E-02 -7.9881321E-02

Notice that errors behave as expected for the order of accuracy of the method. For the first order method, dropping the time step by a factor of 10 drops the error by a factor of 10. For the second order method, dropping the time step by a factor of 10 drops the error by a factor of 10^2 = 100.

It is important to note that the 2nd order results are more accurate at dt=.1 seconds than the 1st order results are at dt=.01 seconds. A little extra effort for the higher accuracy can save significant computer time: the higher order method takes far fewer total time steps to achieve accuracy comparable to a given first order result. However, be aware that higher order methods are touchier on complex problems (more kinks in the polynomials), and can die partway through a calculation where a low order method will march on happily to completion.

The Basic Numerical Solution Methods

For this discussion, we are looking for other solution methods to the general ODE

dy/dt = f(t,y)

subject to some initial condition (the value of y is known at the starting time). In what follows I'm going to use some fairly standard notation in the solution of ODEs. A subscript denotes the time step at which the result is obtained. For example, y10 is the value of y obtained at the tenth time step, when time is t10. The step taken in time is abbreviated as the letter "h":

h = tn+1-tn = dt.

I've already mentioned the Forward (or Explicit) Euler method. For the above equation and notation, the form of the method to advance from one level to another is:

yn+1 = yn + h f(tn , yn)

There is another important form of the method called Backward (or Implicit) Euler:

yn+1 = yn + h f(tn+1 , yn+1)

The key change is that the right hand side of the equation is formally evaluated at the n+1 time level, which will normally result in a nonlinear equation that must be solved for yn+1. No problem, just use yn as the initial guess for a Newton iteration.
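A Python sketch of Backward Euler with that Newton iteration, using the sample right hand side f(t,y) = 1 - y**2 from the stability discussion below (step size and iteration limits are my own choices):

```python
def f(t, y):
    return 1.0 - y * y

def dfdy(t, y):
    return -2.0 * y          # df/dy, needed for the Newton iteration

h, y, t = 0.5, 0.0, 0.0
for n in range(20):          # march to t = 10
    ynew = y                 # initial guess from the old time level
    for it in range(20):     # Newton: solve ynew - y - h*f(t+h, ynew) = 0
        r = ynew - y - h * f(t + h, ynew)
        drdy = 1.0 - h * dfdy(t + h, ynew)
        ynew -= r / drdy
        if abs(r) < 1e-12:
            break
    y, t = ynew, t + h
print(y)   # approaches the steady value y = 1
```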

Both Forward and Backward Euler have errors proportional to h. You've seen that the next step up in accuracy (error proportional to h2) is the Crank-Nicolson method. In the current notation this method is written as:

yn+1 = yn + .5 h ( f(tn , yn) + f(tn+1 , yn+1) )

Order of Accuracy

The order of accuracy of an approximation to a differential equation is determined by using a Taylor series expansion. Let's look at the Explicit Euler method. Most terms are evaluated at time level n, so we'll do Taylor expansions about that point to look at our approximation of the differential equation. Expanding yn+1 about tn gives:

yn+1 = yn + h dy/dt|n + (h2/2) d2y/dt2|n + ...

Substituting this into the Explicit Euler equation and dividing through by h gives:

dy/dt|n + (h/2) d2y/dt2|n + ... = f(tn , yn)

This is just the original differential equation evaluated at time tn, modified with error terms containing the first and higher powers of h. For this reason the finite difference method is said to be first order accurate in time (error proportional to the time step, h). You can play the same games with the Implicit Euler (expand about tn+1), and the Crank-Nicolson (expand about tn+1/2).

Even Higher Accuracy

Various combinations of equations can be constructed with the aid of Taylor expansion analysis to produce approximations to the ODE with error terms starting at higher powers of h. One of the most popular of these is the 4th order Runge-Kutta method, defined by the following series of equations:

k1 = f(tn , yn)
k2 = f(tn + h/2 , yn + h k1/2)
k3 = f(tn + h/2 , yn + h k2/2)
k4 = f(tn + h , yn + h k3)
yn+1 = yn + h/6 (k1 + 2 k2 + 2 k3 + k4)

The weighting of the k's looks a little like a Simpson integration.
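The classical 4th order Runge-Kutta step translates directly into a short Python sketch (the test problem dy/dt = 1 - y**2 with y(0) = 0 has the exact solution y = tanh(t), which gives us a check):

```python
import math

def rk4_step(f, t, y, h):
    """One classical 4th order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

f = lambda t, y: 1.0 - y * y
h, y, t = 0.1, 0.0, 0.0
for n in range(10):                 # march to t = 1
    y = rk4_step(f, t, y, h)
    t += h
print(y, math.tanh(1.0))            # very close agreement
```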

Many other methods can be generated in a similar way, or by fitting high order polynomials to existing points. I'll just list three closely related ones that take advantage of existing information from the solution to obtain fourth order accuracy. The Adams-Bashforth method uses evaluation of the right hand side at four consecutive time steps (here fn is shorthand for f(tn , yn)) to predict the next value of y.

yn+1 = yn + h/24 (55 fn - 59 fn-1 + 37 fn-2 - 9 fn-3 )

You need to take three steps of Runge-Kutta beyond the initial conditions, to get this and the other Adams methods started.

As you will see by running my example odeint.f, Adams-Bashforth can start giving strange results (numerical instability) if the time step is pushed high enough. The Adams-Moulton method is one attempt to mitigate this problem: it starts with a standard Adams-Bashforth predictor step, and then applies a correction.

y*n+1 = yn + h/24 (55 fn - 59 fn-1 + 37 fn-2 - 9 fn-3 )
f*n+1 = f(tn+1 , y*n+1 )
yn+1 = yn + h/24 (9 f*n+1 + 19 fn - 5 fn-1 + fn-2 )

This can also be converted to an Implicit Adams-Moulton:

yn+1= yn + h/24 (9 fn+1 + 19 fn - 5 fn-1 + fn-2)

Like the Implicit Euler this can require the solution of a nonlinear equation in yn+1 , since

fn+1 = f(tn+1 , yn+1 ) .
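The predictor-corrector pair, including the three Runge-Kutta start-up steps, can be sketched in Python as follows (odeint.f is Fortran; the test problem and step size here are my own choices, again dy/dt = 1 - y**2 with y(0) = 0):

```python
f = lambda t, y: 1.0 - y * y        # sample right hand side
h = 0.1

def rk4(t, y):                       # used only for the start-up steps
    k1 = f(t, y); k2 = f(t + h/2, y + h*k1/2)
    k3 = f(t + h/2, y + h*k2/2); k4 = f(t + h, y + h*k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# Three Runge-Kutta steps beyond the initial condition give the four
# right-hand-side values the Adams methods need.
t, ys = 0.0, [0.0]
for n in range(3):
    ys.append(rk4(t, ys[-1])); t += h
fs = [f(n * h, ys[n]) for n in range(4)]

y = ys[-1]
for n in range(3, 100):              # march on to t = 10
    # Adams-Bashforth predictor
    ystar = y + h/24 * (55*fs[-1] - 59*fs[-2] + 37*fs[-3] - 9*fs[-4])
    fstar = f((n + 1) * h, ystar)
    # Adams-Moulton corrector
    y = y + h/24 * (9*fstar + 19*fs[-1] - 5*fs[-2] + fs[-3])
    fs = fs[-3:] + [f((n + 1) * h, y)]
print(y)   # settles toward the steady value y = 1
```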

Stability of Solution Methods

Consider the sample problem with f(t,y) = 1 - y2, solved by the example odeint.f. This is a variation on the homework problem involving a falling body with air drag. What is an appropriate value for h? In the steady limit (when the time derivative goes to zero) y2=1, or y=1. Since the forcing term is also 1, the characteristic time scale is 1/ymax = 1. Start with a value for h small compared to 1, say 0.01. An interesting thing happens when you rerun the problem with higher and higher time steps: at some point each of these high order methods goes totally nuts, with the calculated answer going low, then high, then lower, then higher, bouncing up and down with increasing amplitude. This is referred to as a numerical instability. Only the Implicit (Backward) Euler method is immune to these problems. Experiment with odeint.f to see these results.
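A minimal Python version of this experiment (my own step size; odeint.f itself compares several methods) pits the two Euler methods against each other at a step much larger than the characteristic time of 1. For this quadratic right hand side, the Backward Euler update can be solved exactly at each step by taking the positive root of h*y**2 + y - (y_old + h) = 0:

```python
h, nstep = 1.5, 40
y_fe, y_be = 0.0, 0.0
dev = 0.0                            # worst Forward Euler excursion from y = 1
for n in range(nstep):
    # Forward (explicit) Euler: unstable at this step size
    y_fe = y_fe + h * (1.0 - y_fe * y_fe)
    dev = max(dev, abs(y_fe - 1.0))
    # Backward (implicit) Euler: positive root of the quadratic update
    y_be = (-1.0 + (1.0 + 4.0 * h * (y_be + h)) ** 0.5) / (2.0 * h)
print(dev, y_be)
```

Forward Euler bounces wildly around the steady value while Backward Euler settles smoothly onto y = 1, illustrating why the implicit method is the one immune to this instability.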