Study for Wednesday's Exam

MATMUL Fortran 90 Intrinsic Function

We are discussing "Linear Algebra" because solving the complex equations that represent the behavior of the real world almost always boils down to the solution of a big set of linear equations. The standard approach to modeling a physical system is to first write down all of the differential equations (usually partial differential equations) that describe the behavior of the system. Next you apply some type of Finite Difference or Finite Element method to convert the differential equations to a coupled set of non-linear algebraic equations. Then you apply some variation of Newton's method to solve these equations. At each iteration a set of linear equations is generated that must be solved by the computer. This is where most of the computational effort resides, but fortunately it is not where most of your effort will be spent. There are lots of freely available linear equation solution subroutines on the Web.
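To make that pipeline concrete, here is a small sketch of my own (not from the course materials, and written in Python rather than Fortran for brevity) of Newton's method applied to a hypothetical two-equation nonlinear system. The thing to notice is that every iteration manufactures a linear system J*delta = -F that must be solved:

```python
# Hypothetical example system: f1 = x^2 + y^2 - 4 = 0, f2 = x*y - 1 = 0.
def newton_2x2(x, y, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        # Residual vector F(x, y)
        f1 = x * x + y * y - 4.0
        f2 = x * y - 1.0
        if abs(f1) < tol and abs(f2) < tol:
            break
        # Jacobian matrix J of partial derivatives
        j11, j12 = 2.0 * x, 2.0 * y
        j21, j22 = y, x
        # Each iteration requires solving the LINEAR system J * delta = -F.
        # For 2x2 Cramer's rule is enough; for large n this is exactly
        # where a library solver such as LINPACK's SGEFA/SGESL comes in.
        det = j11 * j22 - j12 * j21
        dx = ((-f1) * j22 - j12 * (-f2)) / det
        dy = (j11 * (-f2) - j21 * (-f1)) / det
        x, y = x + dx, y + dy
    return x, y

x, y = newton_2x2(2.0, 0.5)   # converges to a root of both equations
```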

Before launching into this material I want to give two simple definitions and route you through a discussion of the notation used in linear algebra. When I use the word "matrix" here I'm talking about a doubly dimensioned array (A(10,10), etc.). When I use the term "vector" I'm talking about our standard 1-D array (X(10), etc.). To pick up the basics of Linear Algebra's matrix notation, including the definition of matrix multiplication, look at my Linear Algebra Web Page or print the Postscript file matrices.ps
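As a quick illustration of that definition, here is the matrix product C(i,j) = sum over k of A(i,k)*B(k,j) written out as the familiar triple loop (a sketch of mine in Python; in Fortran 90 the MATMUL intrinsic named in the title does this for you):

```python
def matmul(a, b):
    """Multiply matrix a (n x m) by matrix b (m x p), giving c (n x p)."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                c[i][j] += a[i][k] * b[k][j]   # C(i,j) = sum_k A(i,k)*B(k,j)
    return c

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
c = matmul(a, b)   # [[19.0, 22.0], [43.0, 50.0]]
```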

The approach to solving linear equations that I will cover here is called Lower-Upper (LU) decomposition. I won't go into details of this approach until later, but will give you a summary of the key points. Remember that this is only one of many techniques for solving a set of linear equations, and not always the best choice (avoid it when there are a very large number of equations). In the process of solving the linear equation set Ax=b, you can write A as the product of two matrices L and U, where L has zeros above the diagonal (L(I,J)=0 when J>I) and U has zeros below the diagonal (U(I,J)=0 when J<I). This gives an equation that is easy to solve in three steps. For Ax=b, go to LUx=b, then let y=Ux and solve Ly=b for y. Now that you know y, solve Ux=y for x.

Here's the slick part of all of this. For n equations with n unknowns, Gauss elimination, or determining L and U, takes something proportional to n cubed computer operations (multiplies and adds). However, once L and U are known, solving the LUx=b system takes only on the order of n squared operations. You won't see this for a while, but in many, perhaps most, interesting problems you don't just solve one system Ax=b for a given matrix A, but many related systems Ax=b, Ax=c, Ax=d, .... With LU decomposition you do most of the work up front, and then each additional set of equations is relatively cheap.
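The three-step solve described above can be sketched in a few lines. This is my own bare-bones illustration (in Python, and without the pivoting that a production code like SGEFA performs), showing the expensive factorization done once, followed by cheap triangular solves for each right hand side:

```python
def lu_decompose(a):
    """Return a copy of A overwritten with L (below the diagonal, unit
    diagonal implied) and U (diagonal and above).  O(n^3) work."""
    n = len(a)
    lu = [row[:] for row in a]
    for k in range(n):
        for i in range(k + 1, n):
            lu[i][k] /= lu[k][k]          # multiplier, stored in L's slot
            for j in range(k + 1, n):
                lu[i][j] -= lu[i][k] * lu[k][j]
    return lu

def lu_solve(lu, b):
    """Solve LUx = b in O(n^2): forward solve Ly = b, then back solve Ux = y."""
    n = len(lu)
    y = b[:]
    for i in range(n):                    # Ly = b (L has a unit diagonal)
        for j in range(i):
            y[i] -= lu[i][j] * y[j]
    x = y
    for i in reversed(range(n)):          # Ux = y
        for j in range(i + 1, n):
            x[i] -= lu[i][j] * x[j]
        x[i] /= lu[i][i]
    return x

a = [[4.0, 3.0], [6.0, 3.0]]
lu = lu_decompose(a)               # the expensive O(n^3) step, done once
x1 = lu_solve(lu, [10.0, 12.0])    # each additional right hand side
x2 = lu_solve(lu, [7.0, 9.0])      # reuses the factors and is cheap
```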

I'm not asking you to write LU decomposition subroutines. Where do you get them? You look in the subsections of NETLIB called LINPACK and LAPACK. We'll go to LINPACK and get SGEFA and SGESL to do the job. Here's how to get there:

- Tell NETSCAPE to go to http://www.netlib.org
- Click on the phrase Browse Netlib
- Locate the word "linpack" and click on it
- Use the "Edit" menu item "Find" to locate "sgefa" (search until it is a blue header)
- Click on the end of the line reading: sgefa + dependencies
- Use "File" menu item "Save as ...". You may need a floppy to store it.
- Repeat for sgesl then exit.

Look carefully at the comments in sgefa.f and sgesl.f to determine what the arguments do. There are several important things to notice. First, although you don't use it or need to understand it, you must provide an integer array (called IPVT by the subroutines) for them to store information. Don't touch this array between your call to SGEFA and your call to SGESL. It is how the subroutines keep track of something called "pivoting", which I will discuss later. You should also be aware that SGEFA overwrites your original matrix A with the combined storage of the Lower and Upper Triangular matrices. That altered copy of A must be passed to SGESL for the correct answer. If you need the original matrix for checking answers or for other operations, copy it into another array (say AT) before calling SGEFA. Also notice that SGEFA and SGESL do not assume that the size of the array A matches the number of equations to be solved, so you must pass those two pieces of information separately. Finally, observe that SGESL overwrites the array (B) used to provide the right hand sides of the linear equations. When the subroutine is done, that array contains the solution to the equations.

Try modifying my driver program to solve a set of linear equations that you can handle on a piece of paper. This will give you a better feeling for where the equation coefficients go to start the problem, and where the answers turn up after the LINPACK subroutines are called.
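Those calling conventions can be mimicked in a short sketch. The routines below are hypothetical Python stand-ins of mine, not the real LINPACK code, but they imitate the same interface habits: the factor routine destroys A and returns a pivot record, and the solve routine overwrites B with the answer:

```python
def sgefa_like(a):
    """Factor a in place into L and U with partial pivoting.
    Returns ipvt, the pivot record the solve step needs.  a is DESTROYED,
    just as SGEFA overwrites its A argument."""
    n = len(a)
    ipvt = list(range(n))
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))  # pivot row
        if p != k:
            a[k], a[p] = a[p], a[k]
            ipvt[k] = p
        for i in range(k + 1, n):
            a[i][k] /= a[k][k]
            for j in range(k + 1, n):
                a[i][j] -= a[i][k] * a[k][j]
    return ipvt

def sgesl_like(a, ipvt, b):
    """Solve using the factored a and untouched ipvt.  b is OVERWRITTEN
    with the solution, just as SGESL overwrites its B argument."""
    n = len(a)
    for k in range(n):                       # apply pivots, forward solve
        if ipvt[k] != k:
            b[k], b[ipvt[k]] = b[ipvt[k]], b[k]
        for i in range(k + 1, n):
            b[i] -= a[i][k] * b[k]
    for i in reversed(range(n)):             # back solve
        for j in range(i + 1, n):
            b[i] -= a[i][j] * b[j]
        b[i] /= a[i][i]

a = [[2.0, 1.0], [1.0, 3.0]]
at = [row[:] for row in a]   # keep a copy: the factor step destroys a
rhs = [3.0, 5.0]
b = rhs[:]                   # b will be overwritten with the solution
ipvt = sgefa_like(a)
sgesl_like(a, ipvt, b)       # b now holds x
residual = [sum(at[i][j] * b[j] for j in range(2)) - rhs[i] for i in range(2)]
```

Note how the copy AT preserves the original coefficients, so the residual AT*x - b can be checked after the solve, exactly the situation described above where the original matrix is needed for checking answers.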

Written and Maintained by John Mahaffy : jhm@psu.edu