On solving symmetric systems of linear equations arising in optimization

Anders Forsgren
Department of Mathematics, KTH Royal Institute of Technology


Abstract:

An essential component in many methods for smooth optimization problems is the solution of a system of linear equations in which the matrix is symmetric. These may be methods based on linear equations as such, for example active-set methods for linear and convex quadratic programming, or methods that solve nonlinear equations via a sequence of linear equations, for example interior methods.
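As a minimal illustration (with hypothetical random data, not an example from the talk), the KKT system of an equality-constrained convex quadratic program is symmetric but indefinite, and can be solved with a symmetric-indefinite factorization, here via SciPy's solve with assume_a='sym':

import numpy as np
from scipy.linalg import solve

# Hypothetical problem data for illustration only.
rng = np.random.default_rng(2)
n, m = 5, 2
H = rng.standard_normal((n, n))
H = H @ H.T + n * np.eye(n)          # positive definite Hessian
Aeq = rng.standard_normal((m, n))    # constraint Jacobian
g = rng.standard_normal(n)
r = rng.standard_normal(m)

# K = [H  Aeq'; Aeq  0] is symmetric and indefinite.
K = np.block([[H, Aeq.T], [Aeq, np.zeros((m, m))]])
rhs = np.concatenate([-g, r])
sol = solve(K, rhs, assume_a='sym')  # symmetric-indefinite (LDL'-type) solve
x, lam = sol[:n], sol[n:]            # primal step and multipliers
print(np.linalg.norm(K @ sol - rhs))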

It is not always clear which equations to solve, nor how best to solve them. The equations from one iteration to the next are related, and it is of interest to economize by reusing information from the previous iteration. We discuss how this can be done in the context of convex quadratic programming. In particular, we contrast less expensive active-set iterations with more expensive interior iterations, and we contrast factorization methods with iterative methods for solving the linear equations.
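A minimal sketch of this contrast, under the simplifying assumption of a fixed positive definite matrix and slowly varying right-hand sides (hypothetical data, not from the talk): a factorization is paid for once and reused for repeated cheap solves, whereas an iterative method such as conjugate gradients can be warm-started from the previous solution:

import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import cg

rng = np.random.default_rng(1)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite

# Factorization approach: one Cholesky factorization, many cheap solves.
c, low = cho_factor(A)
b = rng.standard_normal(n)
x = cho_solve((c, low), b)

# Iterative approach: warm-start CG from the previous iterate when the
# right-hand side changes only slightly between iterations.
b_next = b + 1e-3 * rng.standard_normal(n)
x_next, info = cg(A, b_next, x0=x)   # x0 = previous solution
print(info, np.linalg.norm(A @ x_next - b_next))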

We also discuss quasi-Newton methods applied to solving a symmetric system of linear equations in which the matrix is positive definite, which can be seen as the basis of a Newton iteration. We discuss the relationship of quasi-Newton methods to the method of conjugate gradients and give a family of limited-memory methods that behave as the method of conjugate gradients in exact arithmetic. We illustrate the behavior on quadratic problems and demonstrate that there is potential to gain overall efficiency with a limited-memory quasi-Newton method.
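The classical connection can be sketched as follows (a toy script with random data, not the limited-memory family from the talk): on a convex quadratic, BFGS with exact line search started from the identity generates the same iterates as the method of conjugate gradients:

import numpy as np

rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)

def cg_iterates(A, b, iters):
    # Standard conjugate gradients on 0.5 x'Ax - b'x from x = 0.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    xs = []
    for _ in range(iters):
        alpha = (r @ r) / (p @ A @ p)
        x = x + alpha * p
        r_new = r - alpha * (A @ p)
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        xs.append(x.copy())
    return xs

def bfgs_iterates(A, b, iters):
    # BFGS with exact line search on the same quadratic, H0 = I.
    x = np.zeros_like(b)
    H = np.eye(len(b))                   # inverse Hessian approximation
    g = A @ x - b
    xs = []
    for _ in range(iters):
        p = -H @ g
        alpha = -(g @ p) / (p @ A @ p)   # exact line search on the quadratic
        s = alpha * p
        x = x + s
        g_new = A @ x - b
        y = g_new - g                    # equals A @ s for a quadratic
        rho = 1.0 / (y @ s)
        I = np.eye(len(b))
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
        g = g_new
        xs.append(x.copy())
    return xs

for xc, xq in zip(cg_iterates(A, b, n), bfgs_iterates(A, b, n)):
    print(np.linalg.norm(xc - xq))       # agreement up to rounding error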

Finally, we discuss an application of interior methods to optimization problems that arise in intensity-modulated radiation therapy. We show how specialized linear-algebraic methods combined with higher-order interior methods can be highly useful for solving the linear programs that arise.

The talk is based on joint work with David Ek, Lovisa Engberg, Kjell Eriksson, Philip E. Gill, Joshua Griffin, Björn Hårdemark and Tove Odland.