Complete Homework 14, and start Homework 15. Read the Web notes on the Fortran Pointer data attribute.


I've already given you a method to solve a limited class of ordinary differential equations. Suppose you have a differential equation of the form:

dy/dx = f(x)

where f(x) is any function of x alone, and you are given some initial condition, say y(0)=0. Then all you need to do is apply the trapezoidal rule with appropriate steps in x to obtain an approximation to y at any value of x. Usually you want more than one value of y: you want to see the behavior of y as a function of x over some range of x values. To obtain this information, y is stored as an array, retaining the value of y resulting from trapezoidal integration over each increment in x. This can be expressed in Fortran as:

y(i+1) = y(i) + 0.5*( f(x(i+1)) + f(x(i)) )*( x(i+1) - x(i) )

The resulting set of (x(i), y(i)) pairs constitutes the approximation to the solution of the original ODE.
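
A sketch of this cumulative trapezoidal march, in Python rather than the course's Fortran, using a hypothetical f(x) = cos(x) so the exact answer sin(x) is available for comparison:

```python
import math

# Trapezoidal-rule solution of dy/dx = f(x), y(0) = 0.
# f(x) = cos(x) is a hypothetical test function; the exact answer is y = sin(x).
def f(x):
    return math.cos(x)

n = 100                       # number of steps from x = 0 to x = 1
dx = 1.0 / n
x = [i * dx for i in range(n + 1)]
y = [0.0] * (n + 1)           # y[0] holds the initial condition y(0) = 0

for i in range(n):
    # same update as the Fortran statement in the notes
    y[i + 1] = y[i] + 0.5 * (f(x[i + 1]) + f(x[i])) * (x[i + 1] - x[i])

print(y[-1], math.sin(1.0))   # computed vs exact value at x = 1
```

The error at x = 1 is on the order of dx squared, as expected for the trapezoidal rule.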

Things get a little more complicated when you want to solve an equation that looks like

dy/dx = f(x,y)

or when higher derivatives are involved. These situations lead to methods like Forward Euler, Backward Euler, and Crank-Nicolson. You may also want an even higher order of accuracy in your approximate solution, and need to consider methods like Runge-Kutta, Adams-Bashforth, or Adams-Moulton.

To illustrate some approaches to solution of more complicated problems, let's start with a simple problem that we already know how to solve. This is always a good idea when developing general computer solution methods. It gives you confidence that your method is working, before you tackle a problem with no simple way to check answers. Take a steel ball and drop it from a height of 100 meters. What is its height as a function of time? The basic equation of motion is:

d^{2}z/dt^{2} = -g

where z is the height and g is the acceleration of gravity. Based on what we learned about approximation of the second derivative, this equation can be approximated as:

(z_{i+1} - 2 z_{i} + z_{i-1} )/dt^{2} = -g

where "dt" is the step taken in time between two successive elevations (e.g. between z_{1} and z_{2}, dt = t_{2} - t_{1}). Once z_{i} and z_{i-1} are known, the equation can be rearranged to get the next value of z:

z_{i+1} = -g dt^{2} + 2 z_{i} - z_{i-1}

I can in fact get this solution started based on my knowledge that at t=0, z=100 and the initial
velocity is zero. However, there is no point in going into the startup details, because the solution
that follows does not work well. Basically the factor of 2 multiplying z_{i} acts as an amplifier on
computer round-off errors, eventually destroying the quality of the solution. Take a look at the
sample program fall.f to see an example of this growth in error. You'll find that as
smaller time steps are chosen, the results at any given time get worse rather than better. The
reason is that the increased number of time steps gives more opportunity for growth of errors.
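
The recurrence itself is easy to sketch. Here it is in Python (not the Fortran of fall.f; note that Python floats are double precision, so the round-off amplification is far milder than in a single precision Fortran run, and the value of g below is an assumption):

```python
g = 9.80665                  # assumed value of g, m/s^2
dt = 0.01
z0 = 100.0

# Start-up from the initial conditions: z at t = 0, and z at t = dt from
# zero initial velocity (quadratic Taylor start, exact for this problem).
z_prev = z0
z_curr = z0 - 0.5 * g * dt**2

n = 200                      # march to t = n*dt = 2 seconds
for i in range(1, n):
    # z_{i+1} = -g*dt**2 + 2*z_i - z_{i-1}
    z_prev, z_curr = z_curr, -g * dt**2 + 2.0 * z_curr - z_prev

t = n * dt
print(z_curr, z0 - 0.5 * g * t**2)   # computed vs analytic height
```

In double precision the agreement stays good over this short run; the point of fall.f is what happens to those last digits over many more steps at lower precision.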

There are situations where it's appropriate to solve a second order differential equation with a
substitution of the "finite difference" form of the second derivative. This is the case for boundary
value problems, such as the equations for vibrating systems, or for the conduction of heat. The
analogy for the above problem would be that we are told z=100 when t=0, and z=20 when t=4.
No information is given about initial velocity. It must be determined based on the starting and
final (boundary) conditions for z. The resulting set of equations is called a **tridiagonal** linear
system, and can be solved without propagation of error.
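
The boundary value version of the falling-ball problem can be sketched as follows (in Python rather than Fortran, with an assumed value of g, and the Thomas algorithm for the tridiagonal solve):

```python
g = 9.80665                    # assumed value of g, m/s^2
t_end, n = 4.0, 40             # boundary time and number of intervals
dt = t_end / n
za, zb = 100.0, 20.0           # boundary conditions z(0) = 100, z(4) = 20

# Interior equations (z[i-1] - 2 z[i] + z[i+1])/dt**2 = -g form a
# tridiagonal system for the unknowns z[1] .. z[n-1].
m = n - 1
a = [1.0] * m                  # sub-diagonal
b = [-2.0] * m                 # main diagonal
c = [1.0] * m                  # super-diagonal
d = [-g * dt**2] * m           # right hand side
d[0] -= za                     # fold the known boundary values into the RHS
d[-1] -= zb

# Thomas algorithm: forward elimination, then back substitution
for i in range(1, m):
    w = a[i] / b[i - 1]
    b[i] -= w * c[i - 1]
    d[i] -= w * d[i - 1]
z = [0.0] * m
z[-1] = d[-1] / b[-1]
for i in range(m - 2, -1, -1):
    z[i] = (d[i] - c[i] * z[i + 1]) / b[i]

# Initial velocity implied by the boundary conditions
v0 = (z[0] - za) / dt + 0.5 * g * dt
print(v0)
```

Because the exact solution is a quadratic in t, the central difference is exact here, and the recovered initial velocity matches the analytic value (zb - za)/t_end + g*t_end/2 to round-off.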

To reduce error growth on this initial value problem, reformulate it as two first order differential equations:

dz/dt = v

dv/dt = -g

where v is the velocity.

For those of you who have done the analytic solutions of similar differential equations, you should
recognize this two step process.

This pair of equations can be cast in a number of finite difference forms with various levels of accuracy and ability to suppress propagation of round-off errors. The program fall1.f demonstrates one, where we approximate the equations as:

(v_{i+1} - v_{i} )/dt = -g

(z_{i+1} - z_{i})/dt = .5 ( v_{i+1} + v_{i} )

In effect we have decided to evaluate the differential equations at a point halfway between t_{i} and
t_{i+1}. We have used the result obtained earlier for the derivative in a quadratic approximation, and
have used a simple linear interpolation to obtain the value of the right hand sides at time t_{i} + dt/2.
Machine round-off error also propagates a little with these equations, but the problem is not as
severe. The specific implementation of fall1.f further suppresses round-off by working in double
precision. When you execute it, you will see that computed results match the theoretical results to
the level of precision printed. This should be expected because the approximation to the derivative
assumes that a second order polynomial can describe z and velocity, and the evaluation of the right
hand side assumes velocity is linear in time. Both assumptions are consistent with the actual
solution.
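
A sketch of this scheme in Python (not the double precision Fortran of fall1.f; the value of g is an assumption):

```python
g = 9.80665          # assumed value of g, m/s^2
dt = 0.1
z, v = 100.0, 0.0    # initial height and velocity

for i in range(40):                      # march to t = 4 seconds
    v_new = v - g * dt                   # (v_{i+1} - v_i)/dt = -g
    z = z + 0.5 * dt * (v_new + v)       # (z_{i+1} - z_i)/dt = .5 (v_{i+1} + v_i)
    v = v_new

t = 40 * dt
print(z, 100.0 - 0.5 * g * t**2)         # computed vs analytic height
```

As the notes argue, both approximations are exact for this problem, so the computed height agrees with the analytic result to round-off even at this coarse time step.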

That wasn't too complicated. Now let's try something with a solution that doesn't match the assumed curve fit in the finite difference approximations. I'm going to do a modified form of a bungy jump. The bungy cord is suspended above you, so that it just starts to stretch as you step off the bridge. The equations describing your fall (without air resistance) are:

dv/dt = -g + k/m (z_{1} - z)

dz/dt = v

where z_{1} is your initial height above the ground below, m is your mass (in kilograms), and k is the
spring constant associated with the bungy cord.

The program fall2.f solves this problem with the following numerical approximation to the differential equations:

(v_{i+1} - v_{i})/dt = -g + k/m (z_{1} - z_{i})

(z_{i+1} - z_{i})/dt = .5 ( v_{i+1} + v_{i} )

The equations can be rearranged to obtain the unknown values at time t_{i+1} in terms of known
values as follows:

v_{i+1} = v_{i} + dt (-g + k/m (z_{1} - z_{i}))

z_{i+1} = z_{i} + .5 dt (v_{i+1} + v_{i})

Note that in the first equation, the point at which the right hand side is evaluated (t_{i}) is not the time
for which the numerical time derivative best approximates the real derivative (t_{i+1/2}). This is not a
fatal problem, but it will require smaller time step sizes than the next approximation. This method
is called Forward Euler or Explicit Euler, and is said to be "first order" accurate in time step size
(dt), because errors in the solution are directly proportional to the first power of dt.
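
A sketch of this explicit scheme in Python (the values of g, k/m, and z_{1} below are assumptions for illustration, not necessarily the ones used in fall2.f):

```python
import math

g = 9.80665            # assumed constants for illustration only
k_over_m = 1.0         # k/m in 1/s^2 (hypothetical; fall2.f's value may differ)
z1 = 100.0
dt = 0.01
z, v = z1, 0.0

n = int(round(5.0 / dt))
for i in range(n):
    v_new = v + dt * (-g + k_over_m * (z1 - z))   # right side lags at t_i
    z = z + 0.5 * dt * (v_new + v)
    v = v_new

# Analytic solution of dv/dt = -g + (k/m)(z1 - z), z(0) = z1, v(0) = 0:
#   z(t) = z1 - (g*m/k)*(1 - cos(sqrt(k/m)*t))
t = n * dt
exact = z1 - (g / k_over_m) * (1.0 - math.cos(math.sqrt(k_over_m) * t))
print(z, exact, z - exact)
```

The lagged right hand side slowly inflates the oscillation amplitude, which is the first order error the notes describe.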

The program fall3.f provides a more accurate approximation to the solution of this problem with the following numerical approximation to the differential equations:

(v_{i+1} - v_{i})/dt = -g + k/m (z_{1} - .5 (z_{i} + z_{i+1}))

(z_{i+1} - z_{i})/dt = .5 ( v_{i+1} + v_{i} )

Since z_{i+1} appears in both equations, they must be solved as a coupled pair with unknowns v_{i+1} and
z_{i+1}. This method of evaluating all terms at a point centered between t_{i} and t_{i+1} is called "Crank-Nicolson" and is said to be "second order" accurate in dt, because the errors in the solution are
proportional to the second power of dt.
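
Because the pair is linear in the unknowns, the 2x2 system can be solved by hand before coding. A Python sketch (with the same assumed constants as before, not necessarily those of fall3.f):

```python
import math

g = 9.80665            # assumed constants for illustration only
w2 = 1.0               # k/m in 1/s^2 (hypothetical)
z1 = 100.0
dt = 0.01
z, v = z1, 0.0

# Eliminating z_{i+1} from the coupled pair gives a closed form for v_{i+1}:
#   v_{i+1} = ((1 - b)*v_i + dt*(-g + w2*(z1 - z_i))) / (1 + b),  b = w2*dt^2/4
b = w2 * dt**2 / 4.0
n = int(round(5.0 / dt))
for i in range(n):
    v_new = ((1.0 - b) * v + dt * (-g + w2 * (z1 - z))) / (1.0 + b)
    z = z + 0.5 * dt * (v_new + v)
    v = v_new

t = n * dt
exact = z1 - (g / w2) * (1.0 - math.cos(math.sqrt(w2) * t))
print(z, exact, z - exact)
```

For a larger system the elimination would be done with a linear solver rather than by hand, but the idea is the same: all unknowns at the new time level are found together.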

To appreciate the advantages of the higher order method, consider the following results from fall2.f
and fall3.f at selected times for z_{1} = 100 meters.

dt = .1  2nd order (fall3.f)

      Time        Computed Height    Actual Height       Error
 4.7000000E+00     1.0221680E+01     1.0209579E+01     1.2101302E-02
 4.8000000E+00     7.1281666E+00     7.1157809E+00     1.2385729E-02
 4.9000000E+00     4.0294560E+00     4.0167973E+00     1.2658637E-02
 5.0000000E+00     9.2864640E-01     9.1572717E-01     1.2919237E-02
 5.1000000E+00    -2.1711621E+00    -2.1843288E+00     1.3166752E-02

dt = .1  1st order (fall2.f)

      Time        Computed Height    Actual Height       Error
 4.7000000E+00     9.5425613E+00     1.0209579E+01    -6.6701775E-01
 4.8000000E+00     6.4120142E+00     7.1157809E+00    -7.0376667E-01
 4.9000000E+00     3.2754198E+00     4.0167973E+00    -7.4137753E-01
 5.0000000E+00     1.3591168E-01     9.1572717E-01    -7.7981548E-01
 5.1000000E+00    -3.0033721E+00    -2.1843288E+00    -8.1904329E-01

dt = .01  2nd order (fall3.f)

      Time        Computed Height    Actual Height       Error
 4.7000000E+00     1.0209700E+01     1.0209579E+01     1.2103161E-04
 4.8000000E+00     7.1159047E+00     7.1157809E+00     1.2387609E-04
 4.9000000E+00     4.0169239E+00     4.0167973E+00     1.2660533E-04
 5.0000000E+00     9.1585638E-01     9.1572717E-01     1.2921147E-04
 5.0300000E+00    -1.4407235E-02    -1.4537203E-02     1.2996813E-04

dt = .01  1st order (fall2.f)

      Time        Computed Height    Actual Height       Error
 4.7000000E+00     1.0142135E+01     1.0209579E+01    -6.7444369E-02
 4.8000000E+00     7.0446624E+00     7.1157809E+00    -7.1118462E-02
 4.9000000E+00     3.9419209E+00     4.0167973E+00    -7.4876439E-02
 5.0000000E+00     8.3701243E-01     9.1572717E-01    -7.8714733E-02
 5.0300000E+00    -9.4418524E-02    -1.4537203E-02    -7.9881321E-02

Notice that errors behave as expected for the order of accuracy of the method. For the first order
method, dropping the time step by a factor of 10 drops the error by a factor of 10. For the second
order method, dropping the time step by a factor of 10 drops the error by a factor of 10^{2}.

It is important to note that the 2nd order results at dt=.1 seconds are more accurate than the 1st order results at dt=.01 seconds. A little extra effort for the higher accuracy can save significant computer time: the higher order method takes far fewer total time steps to achieve a result with accuracy comparable to a given first order result. Be aware, however, that higher order methods are touchier on complex problems (more kinks in the polynomials), and can die partway through a calculation where a low order method would march on happily to completion.

Now let's look at methods for the general first order equation

dy/dt = f(t,y)

subject to some initial condition (the value of y is known at the starting time). In what follows
I'm going to use some fairly standard notation in the solution of ODEs. A subscript denotes the
time step at which the result is obtained. For example, y_{10} is the value of y obtained at the tenth
time step, when time is t_{10}. The step taken in time is abbreviated with the letter "h":

h = t_{n+1}-t_{n} = dt.

I've already mentioned the Forward (or Explicit) Euler method. For the above equation and notation, the form of the method to advance from one level to another is:

y_{n+1}= y_{n}+h f(t_{n},y_{n})

There is another important form of the method called Backward (or Implicit) Euler:

y_{n+1}= y_{n}+h f(t_{n+1} ,y_{n+1})

The key change is that the right hand side of the equation is formally evaluated at the n+1 time
level, which will normally result in a nonlinear equation that must be solved for y_{n+1}. No problem,
just use y_{n} as the initial guess for a Newton iteration.
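
A minimal sketch of Backward Euler with a Newton inner iteration, in Python, on the hypothetical test equation dy/dt = -y^2 with y(0) = 1 (exact solution y = 1/(1+t)):

```python
h = 0.01
y = 1.0                     # initial condition y(0) = 1

n = int(round(2.0 / h))     # march to t = 2
for step in range(n):
    # Backward Euler: solve F(Y) = Y + h*Y**2 - y_n = 0 for Y = y_{n+1}
    # by Newton iteration, using y_n as the initial guess.
    Y = y
    for it in range(20):
        F = Y + h * Y**2 - y            # residual
        dF = 1.0 + 2.0 * h * Y          # dF/dY
        delta = F / dF
        Y -= delta
        if abs(delta) < 1e-14:          # converged
            break
    y = Y

print(y, 1.0 / 3.0)         # computed vs exact y(2) = 1/3
```

Since the time step is small, y_{n} is already close to y_{n+1}, and the Newton iteration converges in only a few passes.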

Both Forward and Backward Euler have errors proportional to h. You've seen that the next step
up in accuracy (error proportional to h^{2}) is the Crank-Nicolson method. In the current notation
this method is written as:

y_{n+1}= y_{n}+.5 h (f(t_{n} ,y_{n}) + f(t_{n+1} ,y_{n+1} ))

Where do these statements about order of accuracy come from? Start with the Taylor series expansion of y about t_{n}:

y_{n+1} = y_{n} + h (dy/dt)_{n} + (h^{2}/2) (d^{2}y/dt^{2})_{n} + ...

Substituting this into the explicit Euler equation gives:

y_{n} + h (dy/dt)_{n} + (h^{2}/2) (d^{2}y/dt^{2})_{n} + ... = y_{n} + h f(t_{n}, y_{n})

or

(dy/dt)_{n} = f(t_{n}, y_{n}) - (h/2) (d^{2}y/dt^{2})_{n} - ...

This is just the original differential equation evaluated at time t_{n}, modified with error terms
containing the first and higher powers of h. For this reason the finite difference method is said to
be first order accurate in time (error proportional to the time step, h). You can play the same
games with the Implicit Euler (expand about t_{n+1}), and the Crank-Nicolson (expand about t_{n+1/2}).

The classical fourth order Runge-Kutta method builds the next value of y from four evaluations of the right hand side within the step:

k_{1} = f(t_{n}, y_{n})

k_{2} = f(t_{n} + h/2, y_{n} + h k_{1}/2)

k_{3} = f(t_{n} + h/2, y_{n} + h k_{2}/2)

k_{4} = f(t_{n} + h, y_{n} + h k_{3})

y_{n+1} = y_{n} + h/6 (k_{1} + 2 k_{2} + 2 k_{3} + k_{4})

The weighting of the four evaluations looks a little like a Simpson integration.
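
A sketch of the Runge-Kutta step in Python (the test problem dy/dt = -y with y(0) = 1 is a hypothetical example; the exact solution is e^{-t}):

```python
import math

# One classical fourth order Runge-Kutta step for dy/dt = f(t, y).
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    # Simpson-like weighting of the four slope estimates
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

f = lambda t, y: -y           # hypothetical right hand side
h, y, t = 0.1, 1.0, 0.0
for i in range(10):           # march to t = 1
    y = rk4_step(f, t, y, h)
    t += h

print(y, math.exp(-1.0))      # computed vs exact y(1)
```

Even at this coarse step the fourth order error is far below what Euler methods would give.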

Many other methods can be generated in a similar way, or by fitting high order polynomials to existing points. I'll just list three closely related ones that take advantage of existing information from the solution to obtain fourth order accuracy. The Adams-Bashforth method uses evaluations of the right hand side at four consecutive time steps to predict the next value of y:

y_{n+1} = y_{n} + h/24 (55 f_{n} - 59 f_{n-1} + 37 f_{n-2} - 9 f_{n-3})

You need to take three steps of Runge-Kutta beyond the initial conditions to get this and the other Adams methods started.

As you will see by running my example odeint.f, Adams-Bashforth can start giving strange results (numerical instability) if the time step is pushed too high. The Adams-Moulton method is one attempt to mitigate this problem, starting with a standard Adams-Bashforth step and then applying a correction:

y^{*}_{n+1} = y_{n} + h/24 (55 f_{n} - 59 f_{n-1} + 37 f_{n-2} - 9 f_{n-3})

f^{*}_{n+1}= f(t_{n+1}, y^{*}_{n+1})

y_{n+1} = y_{n} + h/24 (9 f^{*}_{n+1} + 19 f_{n} - 5 f_{n-1} + f_{n-2})
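
One way to sketch the predictor-corrector pair, with the Runge-Kutta start-up, in Python (the test equation dy/dt = -y is a hypothetical stand-in for the right hand side in odeint.f):

```python
import math

def f(t, y):
    return -y                 # hypothetical test equation, exact y = e^{-t}

def rk4_step(t, y, h):        # used only to generate the three start-up values
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

h = 0.05
ts, ys = [0.0], [1.0]
for i in range(3):            # Runge-Kutta start-up: y_1, y_2, y_3
    ys.append(rk4_step(ts[-1], ys[-1], h))
    ts.append(ts[-1] + h)

fs = [f(t, y) for t, y in zip(ts, ys)]
n_total = int(round(1.0 / h))
for n in range(3, n_total):
    # predictor: Adams-Bashforth
    y_star = ys[n] + h / 24.0 * (55*fs[n] - 59*fs[n-1] + 37*fs[n-2] - 9*fs[n-3])
    f_star = f(ts[n] + h, y_star)
    # corrector: Adams-Moulton
    y_new = ys[n] + h / 24.0 * (9*f_star + 19*fs[n] - 5*fs[n-1] + fs[n-2])
    ts.append(ts[n] + h)
    ys.append(y_new)
    fs.append(f(ts[-1], y_new))

print(ys[-1], math.exp(-1.0))   # computed vs exact y(1)
```

Only one new evaluation of f per step is needed beyond the predictor, which is the efficiency argument for Adams methods over Runge-Kutta.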

This can also be converted to an Implicit Adams-Moulton:

y_{n+1} = y_{n} + h/24 (9 f_{n+1} + 19 f_{n} - 5 f_{n-1} + f_{n-2})

Like the Implicit Euler, this can require the solution of a nonlinear equation in y_{n+1}, since

f_{n+1}= f(t_{n+1}, y_{n+1}) .