NLPLIB - A MATLAB Toolbox for Nonlinear Programming and Parameter Estimation

At the Department of Mathematics and Physics (IMa)


The MATLAB toolbox NLPLIB TB 1.0 (NonLinear Programming LIBrary Toolbox) is part of TOMLAB. NLPLIB is a set of MATLAB m-files that solve nonlinear optimization problems and nonlinear parameter estimation problems in operations research and mathematical programming. The focus is on dense problems. The toolbox runs in both MATLAB 5.1 and MATLAB 4.2c and works on both PC (NT 4.0, Win95, Windows 3.11) and Unix systems (SUN, HP).

Optimization problems may either be solved using any of the included interactive menu programs, or by a direct call to a parameter-driven driver routine. In either case the problem may be solved using any of the installed optimization solvers. NLPLIB has its own solvers, but may also call all solvers in the MathWorks Optimization Toolbox (if that toolbox is installed) as well as many FORTRAN/C codes through MEX-file interfaces.

NLPLIB has interactive menu programs for unconstrained optimization, quadratic programming, constrained optimization and nonlinear least squares optimization (including the problem of fitting positive sums of exponential functions to data).

A MEX-file interface for both PC and Unix has been developed for the commercial optimization code MINOS, used both for linear programming (in OPERA TB 1.0) and nonlinear programming. MEX-file interfaces working on both PC and Unix have also been developed for the commercial codes NPSOL, NPOPT and NLSSOL from the Systems Optimization Laboratory (SOL) (all of them also present in the NAG library, under other names). The aim is to expand this list in the near future.

NLPLIB has many predefined test problems. It is very easy to try to solve any of these problems using any of the solvers present. It is also very easy to expand the set of test problems.

NLPLIB was designed with the aim of simplifying the solution of practical optimization problems. After defining a new problem in the very simple NLPLIB format, it is possible to try to solve it using any available solver or method.

For two-dimensional nonlinear unconstrained problems, the menu programs support graphical display of the selected optimization problem as a mesh or contour plot. Furthermore, the iteration steps are displayed on the contour plot. For higher-dimensional problems, iteration steps are displayed in two-dimensional subspaces.

NLPLIB TB 1.0 is developed by Kenneth Holmström, project leader of the Applied Optimization and Modeling (TOM) profile research area at Mälardalen University. Many students have contributed first versions of many of the codes. The toolbox has been used for many years in optimization courses at both Mälardalen University and Uppsala University.

NLPLIB TB 1.0 is distributed free of charge for academic purposes. Always cite the following reference:

Kenneth Holmström: NLPLIB TB 1.0 - A MATLAB Toolbox for Nonlinear Optimization and Parameter Estimation, Technical Report IMa-TOM-97-2, Department of Mathematics and Physics, Mälardalen University, Sweden.


Overview of NLPLIB TB 1.0

Installation on PC systems: NLPLIB is normally installed as part of TOMLAB, typically in \matlab\tomlab\nlplib or \matlab\toolbox\tomlab\nlplib. This path must be added to the MATLABPATH. Before starting any NLPLIB routines, call nlpinit, which sets the number of columns and declares the global variables used. If you have a screen with more than 80 columns, change the variable MAXCOLS in nlpinit.m to the correct number.

Installation on Unix systems: NLPLIB is normally installed as part of TOMLAB, usually in a directory like /home/tomlab/nlplib or /home/matlab/toolbox/tomlab/nlplib. This path must be added to the MATLABPATH. Before starting any NLPLIB routines, call nlpinit, which sets the number of columns and declares the global variables used. If you have a screen with more than 80 columns, change the variable MAXCOLS in nlpinit.m to the correct number.
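A minimal startup sketch, using the Unix directory mentioned above (adjust the path to your own installation; on PC use the corresponding \matlab\tomlab\nlplib path), could look as follows:

    path(path, '/home/matlab/toolbox/tomlab/nlplib');  % add NLPLIB to the MATLABPATH
    nlpinit                                            % set MAXCOLS and declare the global variables used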

If NLPLIB is installed as a standalone toolbox, the following routines from the OPERA toolbox must be included: lpsimp1, mPrint, printmat, xprint, xprinte and xprinti.

In the following we describe NLPLIB TB 1.0 by giving tables of all MATLAB functions together with some short selected comments. The first table describes the four menu programs, one for each type of optimization problem:

Menu Program Description
ucopt.m Menu for Unconstrained Optimization
qpopt.m Menu for Quadratic Programming Optimization
conopt.m Menu for Constrained Optimization
lsopt.m Menu for Least Squares Problems (and fitting of positive sums of exponentials to data)

The menu programs described in the table above call the corresponding driver routines given in the next table. One exception is the Quadratic Programming menu program qpopt, which calls the different solvers directly.

For each external FORTRAN/C solver we have also developed a special driver routine, which simplifies testing and use of that particular external solver.

Driver Routine Description
ucRun.m Driver routine to solve unconstrained problems
conRun.m Driver routine to solve constrained problems
lsRun.m Driver routine to solve nonlinear least squares problems and exponential fitting problem
minosrun.m Driver/Test routine for MINOS.
npsolrun.m Driver/Test routine for NPSOL.
npoptrun.m Driver/Test routine for NPOPT.
nlsrun.m Driver/Test routine for NLSSOL.

Like the MathWorks Optimization Toolbox, NLPLIB uses a default vector of optimization parameters. In the Optimization Toolbox this routine is called foptions and uses a vector of length 18. NLPLIB needs more parameters, currently 29, and therefore the routine goptions is defined. To further simplify the use of NLPLIB, three routines have been written which call goptions and change some of the parameters to more reasonable values depending on the type of problem to be solved. The routines are described in the table below:

Routine Description
conDef.m Define default parameters for constrained optimization
ucDefine.m Define default parameters for unconstrained optimization
lsDefine.m Define default parameters for nonlinear least squares optimization
goptions.m Setup default values for optimization and display parameters. Generalized version of foptions in Optimization Toolbox
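A rough usage sketch is given below; the exact calling convention of goptions and the meaning of the individual vector entries are assumptions here, not the documented interface:

    OPTIONS = goptions;     % default vector of 29 optimization parameters
    OPTIONS(2) = 1e-6;      % e.g. tighten a termination tolerance (assumed index)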

Optimization Algorithms and Solvers:

The following table describes the optimization algorithms, solvers and routines which are part of NLPLIB.

Routine Description
ucsolve.m Algorithms for unconstrained optimization with simple bounds on the parameters. Currently implemented algorithms are Newton's Method, BFGS, Inverse BFGS, Fletcher-Reeves' Conjugate Gradient Method and Polak-Ribière's Conjugate Gradient Method. For all algorithms the code uses a subspace minimization technique to handle rank problems
gn.m Algorithms for nonlinear least squares with simple bounds. Currently implemented algorithms are the Gauss-Newton Method with Subspace Minimization, the Fletcher-Xu Hybrid Method, the Al-Baali-Fletcher Hybrid Method and Huschens' Method
consolve.m An SQP (Sequential Quadratic Programming) algorithm, by Schittkowski, for generally constrained optimization problems.
sTrustR.m A Structured Trust Region algorithm for unconstrained optimization, by Conn, Gould, Sartenaer and Toint (1995).
ittr.m Initial Trust Region Radius algorithm, from A. Sartenaer (1995)
linesrch.m Modified version of a line search algorithm from Fletcher: Practical Methods of Optimization, 2nd ed., 1987
intpol2.m Find minimum of quadratic approximation. Used by linesrch
intpol3.m Find minimum of cubic approximation. Used by linesrch
armijo.m Armijo-Goldstein line search algorithm (a generic sketch is given after this table)
qpe.m Solve a quadratic programming problem with equality constraints (QPE) using a null space method
qplm.m Solve a quadratic programming problem with equality constraints (QPE) using Lagrange's method
qpi.m Solve a quadratic programming problem (QPI). The algorithm has special treatment of both inequality and equality constraints as well as lower and upper bounds (simple bounds). It may also converge for some indefinite QPIs, but the code is not robust for indefinite problems. Calls qpe or qplm to solve subproblems.
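To illustrate the kind of step length rule implemented in armijo.m, a generic Armijo-Goldstein backtracking sketch is shown below. This is not the NLPLIB code; the function name, constants and argument list are purely illustrative:

    function alpha = armijoSketch(fName, x, f_x, g_x, p)
    % Generic Armijo backtracking line search (illustration only, not armijo.m)
    % fName - name of the objective function, evaluated with feval
    % x     - current point; f_x = f(x); g_x = gradient at x; p = search direction
    sigma = 0.1; beta = 0.5; alpha = 1;                 % illustrative constants
    while feval(fName, x + alpha*p) > f_x + sigma*alpha*(g_x'*p)
       alpha = beta*alpha;                              % shrink the step until the decrease is sufficient
       if alpha < 1e-14, break; end                     % safeguard against an infinite loop
    end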

It is possible from MATLAB to call FORTRAN and C/C++ code using dynamically linked subroutines, MEX-files. On PC Windows systems these interfaces are built either using 16-bit DLL files or using the modern 32-bit MEX-file interface. A severe restriction is that the standard I/O routines in C and FORTRAN are not available; DLL MEX-files must use the Windows file I/O functions instead. For a long time it has been very difficult to develop a working MEX-file interface on PC Windows systems, especially MEX-file interfaces to FORTRAN code. Most optimization solvers are written in FORTRAN and most of them also draw heavily on a functioning I/O system. On Unix systems it is much easier to develop working MEX-file interfaces, and FORTRAN MEX-file code is available for several optimization solvers.

In NLPLIB TB there are currently four MEX-file interfaces developed, to the commercial solvers MINOS, NPSOL, NPOPT and NLSSOL. By default MINOS has very advanced I/O handling, but it is also possible to switch it off and use MINOS as a silent subroutine. The other three SOL routines can also be made totally silent. The MEX-file interfaces are all written in C and compiled and linked using the WATCOM C/C++ version 10.6 compiler. It was impossible to make them work without converting all FORTRAN code to C (using f2c).

MEX Interface Routine Description
minos.m MEX interface routine in MATLAB which calls the MEX-file interface routine for MINOS, the compiled executable minosmex
npsol.m MEX interface routine in MATLAB which calls the MEX-file interface routine for NPSOL, the compiled executable npsolmex
npopt.m MEX interface routine in MATLAB which calls the MEX-file interface routine for NPOPT, the compiled executable npoptmex
nlssol.m MEX interface routine in MATLAB which calls the MEX-file interface routine for NLSSOL, the compiled executable nlssolmx

In NLPLIB all names of the low level routines for a given problem, i.e. the routines that compute the function value, the gradient, the Hessian, the residual (for nonlinear least squares problems), the Jacobian (for nonlinear least squares problems), the constraints and the constraint gradient, are defined in a set of global variables with predefined names. It is the task of the problem setup routine, in NLPLIB a routine with a name of the type *_init, to return a string matrix FName with the correct names for these routines. The current driver routine then sets the global variables to the correct values.

As an input argument to all menu programs and driver routines you can send your own setup routine. This makes the system totally flexible. Problem specific information needed by the low level routines is packed and supplied in one vector, user_P. Three functions are used to define, pack and unpack this vector: uPinit, uPset and uPget. These routines are included in the table of utility routines further down. After writing your setup routine and the low level routines needed, NLPLIB is ready to run your problem.

The following table shows the reserved global names for the low level routines and the information that should be computed when running each low level routine. Only the routines relevant for a certain problem type must be coded. There are dummy routines for the others.

Global Parameter Name Information computed when evaluating the function
p_f The function value f(x) of the function to be optimized at the point x
p_g The gradient vector g(x) of the function f(x) at the point x
p_H The Hessian matrix H(x) of the function f(x) at the point x
p_c The constraint vector c(x) at the point x
p_dc The constraint derivative matrix dc(x) at the point x
p_r The residual vector r(x) of a function f(x) = 0.5 r(x)T * r(x) at the point x
p_J The Jacobian matrix J(x), where Jij(x)=dri/dxj, i=1,...,m, j=1,...,n
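As an illustration, the low level routines for a user-defined two-dimensional problem, here the Rosenbrock function, could be written as below. The names my_f and my_g are hypothetical; the setup routine would return these names in FName and the driver routine would assign them to the globals p_f and p_g:

    function f = my_f(x)
    % Example low level routine: function value of the Rosenbrock function
    f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;

    function g = my_g(x)
    % Example low level routine: gradient of the Rosenbrock function
    g = [-400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1)); 200*(x(2) - x(1)^2)];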

Different solvers have very different demands on how the required information should be supplied, i.e. the function to optimize, the gradient and the Hessian matrix. To be able to program the problem only once and then use this formulation with all types of solvers, it was necessary to develop a number of filter routines which return the information in the format needed by the actual solver. The filter routines use the global names from the table above to call the correct low level routine. The following table gives the current set of filter routines.

Interface Name Description of General Interface Filters for Test Problems
nlp_fc.m Compute function value f(x) and constraint c(x) using global names p_f and p_c. Used when calling routines from Optimization Toolbox
nlp_dfdc.m Compute derivatives df(x) and dc(x) using global names p_g and p_dc. Used when calling routines from Optimization Toolbox
nlp_fg.m Compute function value f(x) and gradient g(x) using global names p_f and p_g. Used when calling MINOS (a simplified sketch is given after this table)
nlp_cdc.m Compute constraint c(x) and derivatives dc(x) using global names p_c and p_dc
nlp_cdcS.m Compute c(x) and derivatives dc(x) (stored sparse) using global names p_c and p_dc. Used when calling MINOS
nlp_c.m Generate empty constraint, for use with constrained codes
nlp_dc.m Generate derivative of empty constraint
nlp_cD.m Generate dummy constraint, for use with constrained codes
nlp_dcD.m Generate derivative of dummy constraint
nlp_jt.m Compute transposed Jacobian calling global p_J. Used when calling routines from Optimization Toolbox
funobj.m Dummy problem for solvers accessed using the MEX-file interface. funobj computes the function value f(x) and the gradient g(x)
funcon.m Dummy problem for solvers accessed using the MEX-file interface. funcon computes the constraints c(x) and the constraint derivatives dc(x)
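The idea behind the filter routines can be sketched as follows; this is a simplified illustration of nlp_fg, not the NLPLIB source. The globals hold the names of the user's low level routines and feval is used to call them:

    function [f, g] = nlp_fg(x)
    % Simplified sketch of a filter routine returning both f(x) and g(x)
    global p_f p_g              % names of the user's low level routines
    f = feval(p_f, x);          % function value via the routine named in p_f
    g = feval(p_g, x);          % gradient via the routine named in p_g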

The following table describes the low level test functions and the corresponding setup routines needed for the predefined unconstrained optimization problems.

Unconstrained Test Functions Description
uc_init.m Setup 12 unconstrained problems, UC 1-12.
uc_f.m Compute the objective function f(x) for UC Problem 1-12
uc_g.m Compute the gradient g(x) for UC Problem 1-12
uc_h.m Compute the Hessian H(x) for UC Problem 1-12
uc2_init.m Setup 12 unconstrained problems, UC 13-24
uc2_f.m Compute the objective function f(x) for UC problem 13-24
uc2_g.m Compute the gradient g(x) for UC problem 13-24
uc2_h.m Compute the Hessian H(x) for UC problem 13-24

The following table describes the low level test functions and the corresponding setup routines needed for the predefined constrained optimization problems.

Constrained Test Functions Description
con_init.m Setup routine 1 for constrained problems
con2init.m Setup routine 2 for constrained problems
con_f.m The objective function f(x) for constrained problem
con_df.m The gradient g(x) for constrained problem
con_d2f.m The Hessian H(x) of f(x) for constrained problem
con_c.m Constraint residuals c(x) for constrained problem
con_dc.m Derivative of constraint residuals for constrained problems
con_fm.m Compute merit function theta(x_k)
con_gm.m Compute gradient of merit function theta(x_k)

To be able to solve Quadratic Programming (QP) problems using general purpose solvers, routines to compute the function value, gradient and Hessian must be defined. The following table describes the routine which defines the QP problem, qpmatrix, and the three other functions needed:

QP Routine Description
qpmatrix.m Return the matrices and vectors defining a Quadratic Programming Problem (QP)
qp_f.m Compute the function value f(x) in a Quadratic Programming Problem, calling qpmatrix
qp_g.m Compute the gradient value g(x) in a Quadratic Programming Problem, calling qpmatrix
qp_h.m Compute the constant Hessian value H(x) for a Quadratic Programming Problem, calling qpmatrix to define the matrix
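A plausible relation between these routines, assuming the QP is written as f(x) = 0.5*x'*F*x + c'*x with the matrix F and vector c returned by qpmatrix (the variable names and the exact output arguments of qpmatrix are assumptions here):

    F = [2 0; 0 2];  c = [-2; -5];  x = [1; 1];   % small example data
    f = 0.5*x'*F*x + c'*x;                        % what qp_f computes
    g = F*x + c;                                  % what qp_g computes
    H = F;                                        % what qp_h returns (constant Hessian)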

The following table describes the low level test functions and the corresponding setup routines needed for the predefined nonlinear least squares problems.

LS Routine Description
ls_init.m Setup nonlinear least squares problem
ls_r.m Compute the residuals ri(x), i=1,...,m. x = (x_1,...,x_n)T
ls_J.m Compute the Jacobian matrix dri/dxj, i=1,...,m, j=1,...,n
ls_f.m Compute the function value f(x) = 0.5 r(x)T * r(x)
ls_g.m Compute the gradient g(x) = J(x)T * r
ls_H.m Compute the Hessian approximation H(x) = J(x)T * J
ls_rj.m Computes r(x) and J(x) calling the functions defined by the global names p_r and p_J. Used when calling NLSSOL

The problem of fitting positive sums of positively weighted exponential functions to empirical data may be formulated either as a nonlinear least squares problem or as a separable nonlinear least squares problem. Some empirical data series are predefined and artificial data may be generated. Algorithms to find starting values for different numbers of exponential terms are implemented. The following table describes the relevant routines:

ExpFIT routine Description
exp_artP.m Generate artificial exponential sum problems
exp_init.m Setup routine for both nonlinear least squares and separable nonlinear least squares formulation. Includes data from several different empirical test series
exp_r.m Compute the residuals ri(x), i=1,...,m. x = (x_1,...,x_n)T
exp_J.m Compute the Jacobian matrix dri/dxj, i=1,...,m, j=1,...,n
exp_c.m Compute the ordering constraints lambda1 < lambda2 < ... on the exponential parameters
exp_dc.m Derivative of constraints for constrained fitting problem
expLamb0.m Find starting values for exponential parameters lambda
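To illustrate the model behind exp_r and exp_J, a sketch of the residual for a sum of two exponential terms is given below. The data, parameter values and variable names are purely illustrative, and the actual parameter packing used by NLPLIB differs:

    t = (0:0.5:10)';                          % sample times (artificial data)
    y = 3*exp(-0.4*t) + 1.5*exp(-2*t);        % artificial data series
    lambda = [0.5; 1.8]; alpha = [2.5; 2];    % current parameter estimates
    r = alpha(1)*exp(-lambda(1)*t) + alpha(2)*exp(-lambda(2)*t) - y;  % residual vector
    f = 0.5*(r'*r);                           % least squares objective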

The menu programs have some options to display graphical information about the selected problem. For two-dimensional nonlinear unconstrained problems, the selected optimization problem can be displayed as a mesh or contour plot, and the iteration steps are shown on the contour plot. For higher-dimensional problems, iteration steps are displayed in two-dimensional subspaces. The next table describes the plot routines.

PLOT Routine Description
plotiter.m Setup call to plot routines step2d and step2dx
step2d.m Make a contour plot of the function f(x) and all search steps for two dimensional optimization problems
step2dx.m Make a contour plot of the function f(x) and plot the search steps in a two-dimensional subspace when the optimization problem has more than two unknown parameters
meshf.m Make a mesh plot of the function f(x)
lineplot.m Plot along a line (i.e. view the line search problem)

The following table describes the utility routines called from other NLPLIB functions:

Utility Routine Description
OparMenu.m Subprogram displaying menus which give the user the possibility to change optimization parameters
secUpdat.m Compute least change formulation of secant updates, called from gn
uPinit.m Setup the initial user parameter vector, user_P, for a given problem
uPset.m Set (pack) the user parameter vector, user_P
uPget.m Get (unpack) the values in user parameter vector, user_P
inputSet.m Prompt user for integer from set of values
inputR.m Prompt user for real input value in a given interval
inputV.m Prompt user for n-vector in an interval. The routine can duplicate 1 value
circinit.m Generate random points for the predefined Least Squares Circle Fitting Problem
circplot.m Plot actual and approximated circle and simulated data points for the predefined Least Squares Circle Fitting Problem
backsub.m Solve an upper triangular linear system using backward substitution (a sketch of the technique follows after this table)
inirobot.m Extra routine needed for the robot problem, constrained test prob # 12.
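For reference, the technique implemented in backsub, backward substitution for an upper triangular system U*x = b, can be sketched as follows. This is an illustration of the method, not the NLPLIB source:

    function x = backsubSketch(U, b)
    % Solve U*x = b, with U upper triangular, by backward substitution (illustration)
    n = length(b);
    x = zeros(n, 1);
    for i = n:-1:1
       x(i) = (b(i) - U(i, i+1:n)*x(i+1:n)) / U(i, i);
    end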

The optimization solvers in the MathWorks Optimization Toolbox that can be used from NLPLIB are listed in the table below. The user must of course have a valid license. The driver routine checks whether the routine is on the path and possible to call, and then uses feval to run it. A sketch of this check is given after the table.

Optimization Toolbox Routine Type of Problem Solved
constr Constrained minimization
leastsq Nonlinear least squares
fminu Unconstrained minimization using gradient search
lp Linear programming
qp Quadratic programming
nnls (no license needed) Non-negative linear least squares
conls Constrained linear least squares
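A sketch of the kind of check the driver routines make before calling an Optimization Toolbox solver is given below; this is illustrative only, and the actual NLPLIB logic may differ:

    % Is constr.m available on the MATLAB path?
    if exist('constr') == 2
       solver = 'constr';       % use the Optimization Toolbox solver
    else
       solver = 'consolve';     % fall back to the NLPLIB SQP solver
    end
    % The selected solver is then run with feval, e.g. feval(solver, ...)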


Last change 1997-08-18