Parallel Preconditioning with ILU-type Methods

Prof. Dr. Thomas Huckle
Scientific Computing in Computer Science, Technical University of Munich
Germany


Abstract:

For solving a sparse linear system of equations $Ax=b$ iteratively, e.g. by preconditioned conjugate gradients (PCG) or GMRES, effective preconditioners are necessary. For computing an incomplete LU factorization (ILU) in a parallel environment, recent work by Edmond Chow introduced a fixed-point iteration that is easy to parallelize. The fixed-point iteration is convergent, and a few iterations already deliver a good approximation of $L$ and $U$. Unfortunately, convergence is slow in the most interesting case, namely when $A$, $L$, and $U$ are ill-conditioned. Modified ILU (MILU) is a slight modification of the ILU method that ensures that the row sums of $A$ and $LU$ coincide, i.e. $A\mathbf{1} = LU\mathbf{1}$ with $\mathbf{1}$ the vector of all ones. Here, too, $L$ and $U$ will be even more ill-conditioned.
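The fixed-point idea can be illustrated with a minimal dense Python sketch in the spirit of Chow's method (the function name and the use of the full sparsity pattern are our own choices for illustration, not the talk's algorithm). Every entry update within a sweep uses only values from the previous sweep, which is what makes the method embarrassingly parallel; with the full pattern, the sweeps reproduce the exact LU factorization after finitely many steps.

```python
import numpy as np

def fixed_point_ilu(A, sweeps):
    """Dense sketch of a Chow-style fixed-point ILU iteration.

    Here the pattern is taken to be all entries, so the fixed point
    is the exact LU factorization (with unit lower-triangular L).
    All updates in one sweep depend only on the previous sweep's
    values, so they can be carried out fully in parallel.
    """
    n = A.shape[0]
    # Common starting guess: L from the strict lower part of A,
    # U from the upper part of A.
    L = np.eye(n) + np.tril(A, -1)
    U = np.triu(A)
    for _ in range(sweeps):
        Lnew, Unew = L.copy(), U.copy()
        for i in range(n):
            for j in range(n):
                k = min(i, j)
                # Partial inner product from the previous sweep.
                s = A[i, j] - L[i, :k] @ U[:k, j]
                if i > j:
                    Lnew[i, j] = s / U[j, j]  # lower part: divide by pivot
                else:
                    Unew[i, j] = s            # upper part incl. diagonal
        L, U = Lnew, Unew
    return L, U
```

On a sparse pattern one would loop only over the stored nonzeros; the per-entry updates within a sweep are independent and can be distributed freely across threads.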

In this talk we generalize the iterative ILU method to MILU and prove convergence of the related fixed-point iteration. In a second step - to improve the convergence - we replace the plain fixed-point iteration for $LU$-type methods by Newton's method and show that the resulting linear systems to be solved in every Newton step are sparse and triangular and allow faster convergence.
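The MILU row-sum property $A\mathbf{1} = LU\mathbf{1}$ can be checked on a small example; below is a hypothetical minimal dense sketch of MILU(0), in which every fill-in entry outside the pattern of $A$ is lumped onto the diagonal instead of being dropped (assuming no zero pivots occur).

```python
import numpy as np

def milu0(A):
    """Minimal dense sketch of MILU(0) for illustration.

    Standard IKJ elimination restricted to the nonzero pattern of A;
    every update that would create fill-in outside the pattern is
    subtracted from the diagonal instead, so the row sums of L*U
    match those of A. Assumes all pivots are nonzero.
    """
    n = A.shape[0]
    pattern = A != 0
    W = A.astype(float).copy()
    for i in range(1, n):
        for k in range(i):
            if not pattern[i, k]:
                continue
            W[i, k] /= W[k, k]  # store multiplier in place
            for j in range(k + 1, n):
                update = W[i, k] * W[k, j]
                if pattern[i, j]:
                    W[i, j] -= update
                else:
                    W[i, i] -= update  # MILU: lump dropped fill on diagonal
    L = np.tril(W, -1) + np.eye(n)
    U = np.triu(W)
    return L, U
```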

Therefore, solving sparse triangular systems becomes the bottleneck in the efficient parallel solution of linear systems. Here we want to consider stationary Jacobi iterations. In its original form, the Jacobi iteration converges very slowly for ill-conditioned matrices. Therefore, we introduce different acceleration tools such as preconditioning (block Jacobi and Incomplete Sparse Approximate Inverse (ISAI) approximations), asynchronous update steps, and - most importantly - a recursive acceleration of the Jacobi method.
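As a baseline, the plain (unaccelerated) Jacobi iteration for a triangular solve can be sketched as follows (an illustrative minimal version, not the accelerated schemes of the talk). Since the iteration matrix $I - D^{-1}L$ is strictly lower triangular and hence nilpotent, the iteration is exact after at most $n$ sweeps; for ill-conditioned factors the error decays slowly before that, which motivates the accelerations.

```python
import numpy as np

def jacobi_triangular_solve(L, b, sweeps):
    """Jacobi iteration for the lower-triangular system L x = b.

    Splitting L = D + (L - D) with D = diag(L) gives the update
    x <- D^{-1} (b - (L - D) x). Each sweep is one sparse mat-vec
    and is trivially parallel; the iteration matrix is strictly
    lower triangular (nilpotent), so n sweeps give the exact answer.
    """
    d = np.diag(L)
    x = b / d  # starting guess x0 = D^{-1} b
    for _ in range(sweeps):
        # (b - L x + d x) removes the diagonal part from L x,
        # leaving the strictly lower-triangular contribution.
        x = (b - L @ x + d * x) / d
    return x
```

In a parallel setting, the mat-vec in each sweep is distributed; asynchronous variants simply let each component use whatever neighbor values are currently available instead of synchronizing between sweeps.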