Lab 3: Iterative methods
Note: The labs are not mandatory, but highly recommended.
Part 1: Sparsity pattern of a discrete Laplacian
Large linear systems of equations arise when solving PDEs numerically. Here, we will study Laplace’s equation on \(\Omega=[0,1]\times[0,1]\) (the unit square). The PDE is given by

\[
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 \quad \text{in } \Omega, \qquad u = g \quad \text{on } \partial\Omega.
\]
This problem models a film of soap, where \(u(x,y)\) is the vertical position of the soap film surface. We will solve the PDE using 2nd order finite differences with \(N_x N_y\) interior grid points. Let \(h_x=\frac{1}{N_x+1}\) and \(h_y=\frac{1}{N_y+1}\), and let \(u_{i,j}\) denote the approximation of \(u(ih_x, jh_y)\). Then the discrete problem is as follows:
For all interior points, i.e. for all \((i,j)\) such that \(1\leq i \leq N_x\) and \(1\leq j \leq N_y\):

\[
\frac{u_{i-1,j} - 2u_{i,j} + u_{i+1,j}}{h_x^2} + \frac{u_{i,j-1} - 2u_{i,j} + u_{i,j+1}}{h_y^2} = 0.
\]
For all boundary points, i.e., for all \((i,j)\) such that \(i \in \{0, N_x+1\}\) or \(j \in \{0, N_y+1\}\):

\[
u_{i,j} = g(ih_x, jh_y),
\]

where \(g\) prescribes the height of the soap film along the frame.
After choosing a way to order the unknown values \(u_{i,j}\) (for example row by row), we can write down a linear system of equations \(Au=b\).
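For illustration, with a row-by-row ordering the matrix \(A\) can be assembled from 1D second-difference matrices via Kronecker products. This is only a sketch of what a routine like `soapfilm` might do internally; the lab's actual implementation may differ.

```python
# Sketch: assemble the (Nx*Ny) x (Nx*Ny) discrete Laplacian for a
# row-by-row ordering of the unknowns u_{i,j} (x varies fastest).
# Illustrative only -- not the lab's soapfilm() implementation.
import scipy.sparse as sp

def discrete_laplacian(Nx, Ny):
    hx = 1.0 / (Nx + 1)
    hy = 1.0 / (Ny + 1)
    # 1D second-difference operators in x and y
    Tx = sp.diags([1, -2, 1], [-1, 0, 1], shape=(Nx, Nx)) / hx**2
    Ty = sp.diags([1, -2, 1], [-1, 0, 1], shape=(Ny, Ny)) / hy**2
    # 2D operator as a Kronecker sum
    return sp.kron(sp.identity(Ny), Tx) + sp.kron(Ty, sp.identity(Nx))

A = discrete_laplacian(6, 6)
print(A.shape)  # (36, 36)
```

The sparsity pattern of this matrix (five nonzero diagonals) is exactly what the lab asks you to study; `scipy.sparse` stores only the nonzero entries.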
Exercises
Download the following modules: Lab31.py, Lab32.py, Lab33.py, soapfilm.py, laplace_equation.py, JacobiMatrix.py, GaussSeidelMatrix.py.

NOTE: If you are using the university computers, use Anaconda > Spyder to run the codes. That installation is compatible with the plotting routines used.

Open and run `Lab31.py`. The discretization matrix \(A\) is computed by the `soapfilm` function: `A = soapfilm(Nx, Ny)`. After plotting the solution, the program displays the sparsity pattern of \(A\). To compute the soap film surface accurately, we would need large values of \(N_x\) and \(N_y\). However, the purpose of this experiment is not to solve the PDE but to study the sparsity pattern of the matrices involved. For a good visualization of the sparsity pattern you can try \(N_x = N_y = 6\). Then try \(N_x=6,\, N_y=12\), and \(N_x = 12, \, N_y=6\). How does the pattern change when you change \(N_x\)? What happens when you change \(N_y\)? Try to understand the difference.
In `Lab31.py`, call `timed_runs()` instead of `main()`. The program prints the time required to solve the linear system for different values of \(N_x\) and \(N_y\). Notice how quickly the runtime increases with the problem size!
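A minimal sketch of such a timing experiment, in case you want to reproduce it yourself (the grid sizes and the `laplacian` helper are illustrative, not taken from `timed_runs()`):

```python
# Sketch: time a sparse direct solve for growing grid sizes.
# The helper below is illustrative, not the lab's timed_runs().
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian(N):
    # 2D discrete Laplacian on an N x N interior grid
    h = 1.0 / (N + 1)
    T = sp.diags([1, -2, 1], [-1, 0, 1], shape=(N, N)) / h**2
    I = sp.identity(N)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsc()

for N in (20, 40, 80):
    A = laplacian(N)
    b = np.ones(N * N)
    t0 = time.perf_counter()
    x = spla.spsolve(A, b)
    print(f"N = {N:3d}: {time.perf_counter() - t0:.4f} s")
```

The number of unknowns grows like \(N^2\), so even modest increases in \(N\) make the direct solve noticeably slower.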
Part 2: Iterative methods
In many cases it is better to solve sparse linear systems using iterative methods. Typically, an iterative method uses less memory than direct methods such as Gaussian elimination or \(LU\) factorization. In many cases, the iterative methods are also faster.
To solve the linear system \(Ax = b\) with an iterative method, you begin by “guessing” a solution, \(x^{(0)}\). You then use some algorithm to iteratively modify and hopefully improve the solution. The solution after iteration \(k\) is denoted by \(x^{(k)}\).
There are many iterative methods to choose from. They differ in how they compute \(x^{(k+1)},\) given \(x^{(k)}\). In this part of the lab assignment, you will compare three iterative methods: Jacobi’s method, Gauss-Seidel’s method, and the Conjugate Gradient method. Each method will be applied to the problem of computing the soap film surface.
Both the Jacobi method and the Gauss-Seidel method can mathematically be expressed as

\[
x^{(k+1)} = G x^{(k)} + d,
\]
where \(G\) is a matrix and \(d\) a vector. The rate of convergence for an iterative method will depend on the eigenvalues of its iteration matrix \(G\). More precisely, the successive approximations \(x^{(k)}\) will converge if the spectral radius of \(G\) is less than one. The spectral radius is the maximum absolute value of the eigenvalues of \(G\) and is denoted by \(\rho(G)\).
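As a sketch (independent of the lab's `JacobiMatrix.py` and `GaussSeidelMatrix.py`, whose exact interfaces may differ), both iteration matrices can be formed from the standard splitting \(A = D - L - U\) and their spectral radii compared:

```python
# Sketch: build the Jacobi and Gauss-Seidel iteration matrices for the
# splitting A = D - L - U (D diagonal, -L strictly lower, -U strictly
# upper part of A) and compute their spectral radii.
import numpy as np

def iteration_matrices(A):
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)  # strictly lower part, sign flipped
    U = -np.triu(A, 1)   # strictly upper part, sign flipped
    G_jacobi = np.linalg.solve(D, L + U)        # G = D^{-1}(L + U)
    G_gauss_seidel = np.linalg.solve(D - L, U)  # G = (D - L)^{-1} U
    return G_jacobi, G_gauss_seidel

def spectral_radius(G):
    return np.max(np.abs(np.linalg.eigvals(G)))

# Small 1D Laplacian as a test matrix (tridiagonal, SPD)
n = 10
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Gj, Gg = iteration_matrices(A)
print(spectral_radius(Gj), spectral_radius(Gg))
```

For this test matrix both spectral radii are below one, and the Gauss-Seidel radius equals the square of the Jacobi radius, a classical result for such tridiagonal matrices that explains why Gauss-Seidel converges roughly twice as fast here.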
Exercises
Open and run `Lab32.py`, which solves the linear system using iterative methods. You can choose between `Jacobi`, `Gauss-Seidel`, and `Conjugate Gradient`. Notice that \(A\) is symmetric positive definite, so the Conjugate Gradient method is applicable. How fast does the residual decrease with every iteration for the different methods? Do the iterations take the same amount of time for all of them?

Run `Lab33.py` to study the eigenvalues of the iteration matrix \(G\) for both Jacobi and Gauss-Seidel. Do the eigenvalues explain the different convergence rates observed in the previous exercise?
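If you want to monitor the Conjugate Gradient residual yourself, SciPy's `cg` accepts a callback that is invoked once per iteration. A minimal sketch (the matrix here is a small 1D Laplacian for illustration, not the lab's soap film matrix):

```python
# Sketch: solve A x = b with SciPy's Conjugate Gradient and record the
# residual norm after each iteration via the callback mechanism.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # SPD
b = np.ones(n)

residuals = []
def callback(xk):
    # called with the current iterate; store the residual norm
    residuals.append(np.linalg.norm(b - A @ xk))

x, info = spla.cg(A, b, callback=callback)
print(info, len(residuals))  # info == 0 means converged
```

Plotting `residuals` on a logarithmic scale makes the convergence rate easy to compare against Jacobi and Gauss-Seidel.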