**ME 523**
**Numerical Solutions Applied To Heat Transfer And Fluid Mechanics Problems**

**Krylov Subspace Methods**
Review my discussion of Krylov methods in my introductory remarks. As you consider
various Krylov methods, pay attention to the fine print. To solve
the equation Ax=b, many of these methods require that the matrix A be
symmetric. This is not a major restriction: if necessary, you can
multiply both sides of the equation by the transpose of A to create the
new equation Sx=c, where S = A^{T}A is symmetric (in fact, symmetric
positive definite when A is nonsingular) and c = A^{T}b.
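This symmetrization can be sketched in a few lines of plain Python (a minimal illustration; the small matrix and the helper names are made up for this example):

```python
# Sketch: symmetrizing Ax = b via the normal equations.
# S = A^T A is symmetric (positive definite when A is nonsingular),
# c = A^T b, and the solution of Sx = c is the solution of Ax = b.

def transpose(A):
    return [list(col) for col in zip(*A)]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

A = [[2.0, 1.0],      # a non-symmetric example matrix
     [0.0, 3.0]]
b = [3.0, 3.0]        # exact solution of Ax = b is x = [1, 1]

S = matmul(transpose(A), A)   # S = A^T A, symmetric
c = matvec(transpose(A), b)   # c = A^T b
```

Note that forming S explicitly squares the condition number of the system, which is one reason the preconditioned variants discussed below are preferred in practice.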

## General Properties of Krylov Solvers

Read the documentation for SMLib and for Templates
to get a feeling for these methods. This class of methods is
characterized by a few simple properties. First, the residuals
associated with the iteration are mutually orthogonal. Also, if
the iteration is written as x^{(j+1)} = x^{(j)} + a^{(j)}p^{(j)},
where p^{(j)} is a search direction vector and a^{(j)} is a scalar, then the
p^{(j)} are mutually orthogonal with respect to some inner product (for
conjugate gradients, the inner product defined by A), and
the a^{(j)}'s are chosen so that some norm of the
residual is minimized. These properties have two important
consequences. The first holds only in the ideal case of
infinite-precision arithmetic: the orthogonality guarantees that
after a finite number of iterations (at most the dimension of the
system) the search directions span the entire space of
basis vectors and the exact solution is produced. The second
follows from the minimization of a norm, which
selects the basis vectors with the most significant contribution to
the solution first, resulting in a good approximation to the
solution well before an exact solution would be achieved.

### Conjugate Gradient Method

The conjugate gradient method was created in the early 1950s as an
interesting form of direct matrix solution, but was largely forgotten
until the common practice of matrix preconditioning made it a useful
iterative option (circa 1980). Hestenes and Stiefel
discovered that constructing a method that minimizes the norm
r^{T}A^{-1}r results in a very simple algorithm.
Start with an initial guess x^{(0)}, and let:

r^{(0)} = b-Ax^{(0)}

Set p^{(0)} = r^{(0)}; then each iteration contains
the following steps:

a^{(j)} = (r^{(j)T}r^{(j)})/(p^{(j)T}Ap^{(j)})

x^{(j+1)} = x^{(j)} + a^{(j)}p^{(j)}

r^{(j+1)} = r^{(j)} - a^{(j)}Ap^{(j)}

b^{(j)} = (r^{(j+1)T}r^{(j+1)})/(r^{(j)T}r^{(j)})

p^{(j+1)} = r^{(j+1)} + b^{(j)}p^{(j)}

This results in vectors with the following properties:

r^{(j)T}p^{(i)} = 0 for j > i

p^{(j)T}Ap^{(i)} = 0 for i not equal to j

r^{(j)T}r^{(i)} = 0 for i not equal to j
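The iteration above can be sketched in plain Python (a minimal illustration assuming A is symmetric positive definite; the function name, tolerance, and test matrix are my own, not taken from SMLib or Templates):

```python
# Conjugate gradient sketch following the steps above:
# a(j), x(j+1), r(j+1), b(j), p(j+1), with matrices as lists of rows.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def cg(A, b, x0=None, tol=1e-10, max_iter=100):
    x = list(x0) if x0 is not None else [0.0] * len(b)
    r = [bi - Axi for bi, Axi in zip(b, matvec(A, x))]   # r(0) = b - A x(0)
    p = list(r)                                          # p(0) = r(0)
    rr = dot(r, r)
    for _ in range(max_iter):
        if rr ** 0.5 < tol:
            break
        Ap = matvec(A, p)
        a = rr / dot(p, Ap)                              # a(j)
        x = [xi + a * pi for xi, pi in zip(x, p)]        # x(j+1)
        r = [ri - a * Api for ri, Api in zip(r, Ap)]     # r(j+1)
        rr_new = dot(r, r)
        beta = rr_new / rr                               # b(j)
        p = [ri + beta * pi for ri, pi in zip(r, p)]     # p(j+1)
        rr = rr_new
    return x

# Small symmetric positive definite example
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)
```

Notice that only one matrix-vector product per iteration is needed: the same Ap appears in both the step length a^{(j)} and the residual update.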

## Preconditioners

You almost always need to come up with some preconditioner for a Krylov
method to be worthwhile. The basic idea is that you
don't solve Ax=b; you solve M^{-1}Ax = M^{-1}b, or
some variation on this theme such as AM^{-1}y=b, where
y=Mx. Read the documentation for SMLib
and for Templates for
more information on preconditioning. When you need a
preconditioner for a particular physical problem, start by reading the
literature very carefully. In some cases, people have created
very effective model-specific preconditioners. If no good
suggestions are available, start with an incomplete LU preconditioner. SMLib is one
source of Fortran 90 software to do
this for you. With this preconditioning you come up with a lower
and upper triangular matrix pair (L and U) whose product is
approximately A. To understand the construction of L and U,
consider the basic process of LU
decomposition. For incomplete decomposition, a pattern
is selected in advance for keeping non-zero elements of L and U.
Usually it's a simple decision, like preserving non-zero terms within a
certain distance of the matrix diagonal. Note that such a
decision will generally do more than just discard elements
from the fully factored matrices: the values of the non-zero elements
in the partially factored matrices may differ from the
corresponding elements in the correct LU matrices.
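The pattern-restricted factorization can be sketched as follows. This illustrates ILU(0), the common special case where the kept pattern is exactly the non-zero pattern of A; it is an illustration of the idea in dense Python storage, not the SMLib routine:

```python
# ILU(0) sketch: ordinary LU elimination, but updates are applied only
# at positions where A itself is non-zero; all other fill is dropped.

def ilu0(A):
    n = len(A)
    M = [list(row) for row in A]                 # work on a copy of A
    keep = {(i, j) for i in range(n) for j in range(n) if A[i][j] != 0.0}
    for k in range(n):
        for i in range(k + 1, n):
            if (i, k) not in keep:
                continue
            M[i][k] /= M[k][k]                   # multiplier, stored in L part
            for j in range(k + 1, n):
                if (i, j) in keep:               # update only allowed entries
                    M[i][j] -= M[i][k] * M[k][j]
    return M    # L (unit diagonal, below) and U (upper) packed together

A = [[4.0, 1.0, 1.0],
     [1.0, 3.0, 0.0],
     [1.0, 0.0, 2.0]]
M = ilu0(A)
```

For this matrix the exact LU factors would fill in the (1,2) and (2,1) positions; ILU(0) drops that fill, so the product LU matches A only on the kept pattern, which is exactly the "approximately A" behavior described above.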
For incomplete LU preconditioning, there are a number of options for
creating a symmetric system when needed. The basic preconditioned
system is:

(LU)^{-1}Ax = (LU)^{-1}b

Symmetry is obtained by multiplying the equation by ((LU)^{-1}A)^{T}:

A^{T}(LU)^{-T}(LU)^{-1}Ax = A^{T}(LU)^{-T}(LU)^{-1}b

So the system solved is Sx=c, where S = A^{T}(LU)^{-T}(LU)^{-1}A
and c = A^{T}(LU)^{-T}(LU)^{-1}b. In
practice you don't form the product A^{T}(LU)^{-T}(LU)^{-1}A
before starting your Krylov solver; you construct the solver to take
advantage of some simplifications possible when working with such a
matrix product.
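The key simplification is that S can be applied to a vector without ever being formed: each application is one matrix-vector product with A, a forward and back solve with L and U, a pair of solves with their transposes, and a product with A^{T}. A dense-storage sketch (all names here are illustrative; a real solver would use sparse triangular solves):

```python
# Sketch: apply S = A^T (LU)^{-T} (LU)^{-1} A to a vector v without
# forming S, using dense triangular solves.

def transpose(M):
    return [list(col) for col in zip(*M)]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def forward_sub(L, b):            # solve L y = b, L lower triangular
    y = []
    for i, row in enumerate(L):
        y.append((b[i] - sum(row[j] * y[j] for j in range(i))) / row[i])
    return y

def back_sub(U, b):               # solve U x = b, U upper triangular
    n = len(b)
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

def apply_S(A, L, U, v):
    t = matvec(A, v)                           # A v
    w = back_sub(U, forward_sub(L, t))         # (LU)^{-1} A v
    z = back_sub(transpose(L),                 # (LU)^{-T} w: solve U^T z1 = w,
                 forward_sub(transpose(U), w)) # then L^T z = z1
    return matvec(transpose(A), z)             # A^T (LU)^{-T} (LU)^{-1} A v
```

A useful sanity check on such a routine: if L and U happen to be an exact factorization of A, then S reduces to the identity, and with L = U = I it reduces to A^{T}A.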

**Maintained by John Mahaffy : jhm@psu.edu**