Parallel algorithms for Scientific Computing

This course is organized by the Division of Scientific Computing (DCS) at the IT Department.

Edition 2020: General description

The course focuses on the impact of HPC and parallel computation on algorithms for scientific computing problems. We cover parallel algorithms for a range of problem classes in scientific computing: mostly linear algebra (with a focus on sparse problems related to solving PDEs), iterative solvers, multigrid, and some other basic algorithms such as QR factorization and the SVD. We also include some details on sparse matrix storage formats and their impact on HPC computations.
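
As a small illustration of the storage formats treated in the course, the sketch below shows the compressed sparse row (CSR) format and a matrix-vector product over it in C. The 4x4 matrix is a made-up example for illustration only, not taken from the course material.

    #include <stdio.h>

    /* y = A*x for an n-by-n matrix in CSR format:
       val    - the nonzero values, stored row by row
       colind - the column index of each nonzero
       rowptr - the nonzeros of row i occupy positions rowptr[i]..rowptr[i+1]-1 */
    void csr_spmv(int n, const double *val, const int *colind,
                  const int *rowptr, const double *x, double *y)
    {
        for (int i = 0; i < n; i++) {
            double s = 0.0;
            for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
                s += val[k] * x[colind[k]];
            y[i] = s;
        }
    }

    int main(void)
    {
        /* tridiagonal 4x4 test matrix with 10 nonzeros */
        double val[] = {4, 1,  1, 4, 1,  1, 4, 1,  1, 4};
        int colind[] = {0, 1,  0, 1, 2,  1, 2, 3,  2, 3};
        int rowptr[] = {0, 2, 5, 8, 10};
        double x[] = {1, 1, 1, 1}, y[4];

        csr_spmv(4, val, colind, rowptr, x, y);
        for (int i = 0; i < 4; i++)
            printf("y[%d] = %g\n", i, y[i]);   /* expect 5 6 6 5 */
        return 0;
    }

The outer loop runs over rows independently, which is one reason CSR is a natural baseline for parallel sparse matrix-vector products.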

Examination and grading

The course gives 3 hp, which requires following the lectures and completing an assignment, to be delivered in written form. It is possible to obtain 5 or even 7.5 hp by doing an additional project that includes testing and implementation on a parallel computer facility (to be discussed individually).

Course prerequisites

Experience with scientific computing problems and basic knowledge of scientific computing algorithms. Detailed knowledge of algorithms such as the conjugate gradient method, the generalised minimal residual method, and multigrid is not required but is helpful. Experience with programming for scientific computing in languages such as C is expected, but you do not need to have taken a course in parallel programming to do the assignments. The optional project (for obtaining 5 or 7.5 hp) should include implementation and experiments on a parallel system, so more experience with parallel programming will be needed there.
The lectures will be online, via Zoom.

Lecturer:

Maya Neytcheva

Additional course materials

Assignment
Lecture 1, slides
Lecture 2, slides
Lecture 3, slides

Detailed time schedule

Nov 9, 10:15-12, Zoom link, MN
Lecture 1: Sparse matrices and storage schemes

Nov 16, 10:15-12, Zoom link, MN
Lecture 2: Iterative methods and parallelization. Preconditioners - parallelization aspects

Nov 24, 10:15-12, Zoom link, MN
Lecture 3: Pipelined versions of CG and GMRES - an attempt to achieve better scaling on very large/exascale systems. Briefly on multifrontal methods, communication-avoiding algorithms, and extended Krylov methods.

Recommended reading:

  1. Data storage schemes for parallel computations (old source, 1992; new source, 2017)
  2. P. Ghysels, W. Vanroose, Hiding global synchronization latency in the preconditioned conjugate gradient algorithm, Parallel Computing, 40 (2014), pp. 224-238 (also here)
  3. P. Sanan, S.M. Schnepp, D.A. May, Pipelined, flexible Krylov subspace methods
  4. Pipelined solvers, performance comparison
  5. PETSc: pipelined CG; PETSc: pipelined GMRES
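
The pipelined methods in references 2-5 above all rely on the same mechanism: the global reductions (dot products) that force a synchronization point in standard CG and GMRES are issued as non-blocking collectives and overlapped with local work, such as the matrix-vector product or the preconditioner application. Below is a minimal sketch of that mechanism in C with MPI-3; it is not the full pipelined CG algorithm, and the overlapped local work is left as a placeholder.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* local contribution to a dot product, e.g. (r, r) on this rank */
        double local_dot = (double)(rank + 1);
        double global_dot = 0.0;

        /* start the global reduction, but do not wait for it yet */
        MPI_Request req;
        MPI_Iallreduce(&local_dot, &global_dot, 1, MPI_DOUBLE,
                       MPI_SUM, MPI_COMM_WORLD, &req);

        /* ... useful local work (e.g. the local rows of a mat-vec or the
           local part of the preconditioner) would go here, hiding the
           latency of the reduction ... */

        /* the result is needed only now, to update the iterate */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        if (rank == 0)
            printf("global dot product: %g\n", global_dot);

        MPI_Finalize();
        return 0;
    }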

Last changed on November 8, 2020.
Mail to: Maya dot Neytcheva "at" it dot uu dot se