ME 540

Numerical Solutions Applied To Heat Transfer And Fluid Mechanics Problems


Gaussian Elimination and LU Factorization

Whether you know it or not, you've used Gauss elimination to solve systems of linear equations. What you probably never considered is that the method can be approached in a very systematic way, permitting implementation in a computer program. Also of importance is the fact that with very little additional effort, a program for Gauss elimination can be enhanced to perform Lower-Upper (LU) matrix factorization: writing a matrix as the product of a lower triangular matrix and an upper triangular matrix (for some matrices, only after row interchanges).

We wrote a set of linear equations with the following notation for coefficients


a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

In the matrix notation of linear algebra, these equations can be written as:


| a11  a12  a13 | | x1 |   | b1 |
| a21  a22  a23 | | x2 | = | b2 |
| a31  a32  a33 | | x3 |   | b3 |

Comparing the left side of the above matrix equation with the preceding set of linear equations gives a definition of the product of a matrix with a vector. When performing Gauss elimination, a further shorthand is used. We introduce the augmented matrix:


| a11  a12  a13 | b1 |
| a21  a22  a23 | b2 |
| a31  a32  a33 | b3 |

Let's lay out a specific set of equations in this format.


 x1 +  2x2 -  x3 =  2
4x1 + 13x2 - 9x3 = 13
2x1 +  6x2 -  x3 =  9

The augmented matrix form for this is:


| 1   2  -1 |  2 |
| 4  13  -9 | 13 |
| 2   6  -1 |  9 |

For a standard Gauss elimination, the first question that we ask is: by what factor must I multiply the first equation so that I can subtract it from the second equation and eliminate the coefficient in column 1? The answer is 4, and after I do this operation, the augmented matrix becomes:


| 1   2  -1 |  2 |
| 0   5  -5 |  5 |
| 2   6  -1 |  9 |

or


 x1 + 2x2 -  x3 = 2
      5x2 - 5x3 = 5
2x1 + 6x2 -  x3 = 9

Rather than leave the 0 in row 2 column 1 of the augmented matrix, I'm going to use that position to do some bookkeeping. I'll store there the value of the multiplier that I just applied in simplifying the equations, shown in parentheses below:


|  1    2  -1 |  2 |
| (4)   5  -5 |  5 |
|  2    6  -1 |  9 |

Next I find that I need to multiply the first equation by 2 so that subtraction eliminates the coefficient in row 3 column 1. The result of this elimination, including the bookkeeping, is:


|  1    2  -1 |  2 |
| (4)   5  -5 |  5 |
| (2)   2   1 |  5 |

Now I need to eliminate the coefficient in row 3 column 2. This can be accomplished by multiplying the equation in row 2 by 2/5 and subtracting it from the equation in row 3.


|  1     2    -1 |  2 |
| (4)    5    -5 |  5 |
| (2)  (2/5)   3 |  3 |

At this point we have completed the Gauss elimination, and by back substitution we find that

x3 = 3/3 = 1

x2 = (5+5x3)/5 = 2

x1 = 2 - 2x2 + x3 = -1
 
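In code, back substitution is just a short pair of loops working from the last row up. Here is a minimal sketch in the same FORTRAN 77 style as the subroutine further below; the name backsub and its argument list are my own choices for illustration, not from any library:

      subroutine backsub(u,y,x,n)
c  solve the upper triangular system u*x = y by back substitution,
c  working from the last equation up to the first
      dimension u(n,n),y(n),x(n)
      do 200 i=n,1,-1
         x(i)=y(i)
         do 100 j=i+1,n
  100       x(i)=x(i)-u(i,j)*x(j)
  200    x(i)=x(i)/u(i,i)
      return
      end

Applied to the upper triangular system above with right-hand side (2, 5, 3), it reproduces x3 = 1, x2 = 2, x1 = -1.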

Now, let's look at the reason for those parenthesized numbers sitting where the zeros would normally be in the final form of the matrix. I can define a lower triangular matrix L by combining these numbers with 1's along the matrix diagonal to get:


    | 1   0    0 |
L = | 4   1    0 |
    | 2  2/5   1 |

and I can also write an upper triangular matrix from the basic results of the Gauss elimination:

    | 1   2  -1 |
U = | 0   5  -5 |
    | 0   0   3 |

I claim that the matrix product LU is equal to the original coefficient matrix for my equations.
 
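The elimination-with-bookkeeping procedure translates almost directly into code. What follows is a minimal sketch of LU factorization without pivoting, overwriting the matrix so that each multiplier is stored in the position it zeroes out, exactly as in the worked example. The name ludecomp is my own illustrative choice; LINPACK's SGEFA does this job, with partial pivoting added:

      subroutine ludecomp(a,n)
c  factor a by gauss elimination without pivoting.  on return the
c  upper triangle of a holds u, and the positions below the diagonal
c  hold the multipliers (l without its diagonal of 1's)
      dimension a(n,n)
      do 300 k=1,n-1
         do 200 i=k+1,n
c  store the multiplier that eliminates a(i,k) as bookkeeping
            a(i,k)=a(i,k)/a(k,k)
            do 100 j=k+1,n
  100          a(i,j)=a(i,j)-a(i,k)*a(k,j)
  200    continue
  300 continue
      return
      end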

Now I want to remind you of why we bother with LU decomposition. For n equations in n unknowns, Gauss elimination (or determining L and U) takes a number of computer operations (multiplies and adds) proportional to n^3. However, solving the system LUx = b takes only on the order of n^2 operations. (If you don't believe my counting, solve 3-, 4-, and 5-equation systems and make a careful count of the number of adds and multiplies that you do.) You won't see this for a while, but in many, perhaps most, interesting problems you don't just solve one system Ax = b for a given matrix A, but many related systems Ax = b, Ax = c, Ax = d, .... You use LU decomposition to do most of the work up front; then each additional right-hand side is relatively cheap.
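To make the n^2 claim concrete, here is a minimal sketch of the solve step for one right-hand side, assuming a holds the factors as stored by the hypothetical ludecomp above: a forward substitution through L (which has a unit diagonal, so no divisions are needed) followed by the back substitution already shown.

      subroutine lusolve(a,b,n)
c  solve l*u*x = b using the factors stored in a.  b is overwritten,
c  first by the solution y of l*y = b, then by the solution x of u*x = y
      dimension a(n,n),b(n)
c  forward substitution down the unit lower triangle
      do 200 i=2,n
         do 100 j=1,i-1
  100       b(i)=b(i)-a(i,j)*b(j)
  200 continue
c  back substitution up the upper triangle
      do 400 i=n,1,-1
         do 300 j=i+1,n
  300       b(i)=b(i)-a(i,j)*b(j)
  400    b(i)=b(i)/a(i,i)
      return
      end

Each loop nest touches roughly half of the matrix once, so a new right-hand side costs on the order of n^2 operations, not n^3.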
 

Let's check my claim that the product of L and U is equal to the original coefficient matrix for the linear equations, and at the same time clearly define matrix multiplication. I'm going to name the product A and its individual coefficients aij. The coefficients of L are lij and those of U are uij. The general definition for the matrix product is:


aij = sum (k = 1 to n) of lik ukj = li1 u1j + li2 u2j + ... + lin unj

where n is the number of rows in U (the number of unknowns); here n = 3. In plainer terms, the product can be written as a combination of matrix-vector products. The first column of A is the result of multiplying L by the first column of U:


| 1   0    0 | | 1 |   | 1 |
| 4   1    0 | | 0 | = | 4 |
| 2  2/5   1 | | 0 |   | 2 |

The second column of A is the product of L and the second column of U:


| 1   0    0 | | 2 |   |  2 |
| 4   1    0 | | 5 | = | 13 |
| 2  2/5   1 | | 0 |   |  6 |

The third column of A is the product of L and the third column of U:


| 1   0    0 | | -1 |   | -1 |
| 4   1    0 | | -5 | = | -9 |
| 2  2/5   1 | |  3 |   | -1 |

Combining the columns gives:
 


    | 1   2  -1 |
A = | 4  13  -9 |
    | 2   6  -1 |

which is the original coefficient matrix. The bookkeeping exercise really did give us an LU decomposition. For those of you not totally up to speed on linear algebra, you should go through the above exercise to calculate the product UL. You will discover that UL is not equal to LU: matrix multiplication is not commutative.

In Fortran, the above matrix product can be accomplished with the following subroutine.

     
      subroutine matmult(a,b,n,c)
c  form the n by n matrix product c = a*b, one coefficient at a time
      dimension a(n,n),b(n,n),c(n,n)
      do 100 i=1,n
         do 100 j=1,n
            c(i,j)=0.
            do 100 k=1,n
  100          c(i,j)=c(i,j)+a(i,k)*b(k,j)
      return
      end
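As a quick check of the worked example, a short driver can load L and U and print their product. This is just an illustrative test program, compiled together with matmult above; the DATA statements fill the arrays column by column with the values from the matrices derived earlier:

      program testmm
c  multiply the example l and u and print the rows of the product
      real xl(3,3),u(3,3),a(3,3)
c  names starting with l default to integer type, so xl stands in for l
      data xl/1.,4.,2., 0.,1.,0.4, 0.,0.,1./
      data u/1.,0.,0., 2.,5.,0., -1.,-5.,3./
      call matmult(xl,u,3,a)
      do 100 i=1,3
  100    write(*,*) (a(i,j),j=1,3)
      end

The three printed rows should be (1, 2, -1), (4, 13, -9), and (2, 6, -1), the original coefficient matrix.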
Using this type of subroutine or loop structure is not a good idea when Fortran 90 is available, because the language provides an intrinsic matrix multiply that will almost always be implemented more efficiently than any compilation of your own Fortran. For arrays declared by "REAL A(3,3), L(3,3), U(3,3)" you do the multiply with the following line:

A = MATMUL(L,U)
 

Time savings associated with things like matrix multiplies can be a very big deal in many applications. If you look at the Fortran matrix multiply above, you will see that the operation takes n^3 multiplies and the same number of adds. You don't have to get to very large matrices or very frequent use of the matrix multiply before you are looking at significant savings in computer time from an optimally coded function like MATMUL.
 

Pivoting

If you read the comments or documentation for LINPACK's LU factorization subroutine SGEFA, you will see a discussion of "partial pivoting". This is one step used to minimize the growth of machine roundoff error during a solution. You should be aware that there are linear systems where growth of roundoff error cannot be avoided with Gauss elimination, and other methods need to be tried. To do partial pivoting, I start my Gauss elimination by comparing the absolute values of the coefficients in column 1. In the example above these are 1, 4, and 2. The second row has the largest, so I declare it to be my "pivot" equation. In effect I switch equations 1 and 2, using the 2nd equation to eliminate the first variable from the remaining equations. I repeat this testing and switching at each step of the elimination, keeping notes of each switch in equations (the reason for the extra integer array IPVT in SGEFA and SGESL).
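The pivot search itself takes only a few lines. Here is a minimal sketch, packaged as a subroutine called at each elimination step k of an augmented matrix; the name pivot and the storage layout are my own illustrative choices, and SGEFA's actual code differs in detail while recording the same information in IPVT:

      subroutine pivot(a,n,k,ipvt)
c  partial pivoting for elimination step k of an augmented matrix
c  a with n rows and n+1 columns
      dimension a(n,n+1),ipvt(n)
c  find the row at or below k whose column-k coefficient has the
c  largest absolute value
      ip=k
      do 100 i=k+1,n
         if (abs(a(i,k)).gt.abs(a(ip,k))) ip=i
  100 continue
c  record the choice, then interchange rows k and ip if needed
      ipvt(k)=ip
      if (ip.ne.k) then
         do 200 j=1,n+1
            t=a(k,j)
            a(k,j)=a(ip,j)
            a(ip,j)=t
  200    continue
      end if
      return
      end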

For more information on this, I recommend that you take a look at Chapter 19 of "Advanced Engineering Mathematics," by Erwin Kreyszig, or browse the Numerical Methods and Linear Algebra books in the library until something looks clear to you.




Created and Maintained by John Mahaffy : jhm@psu.edu