
How to solve a constraint optimization problem in R

Nonlinear Constraint Optimization in R using nloptr

Hands-On Tutorials

Image by the author, using the function f = (Z⁴−1)³, where Z is a complex number

Introduction

Often in physical science research, we end up with the hard problem of optimizing a function (called the objective) that needs to satisfy a range of constraints: linear or nonlinear equalities and inequalities. The optimization variables usually also have to adhere to upper and lower bounds. I recently worked on such a problem in Quantum Information Science (QIS), where I attempted to optimize a nonlinear function subject to a few constraints dictated by the rules of physics and mathematics. Interested readers may refer to my work, Constellation Optimization for Phase-Shift Keying Coherent States With Displacement Receiver to Maximize Mutual Information, where I optimized the mutual information for Quadriphase-Shift Keying (QPSK) subject to a set of constraints.


While working on this problem of nonlinear optimization with a set of constraints, I found that not all optimization routines are created equal. Several libraries are available in different languages, such as Python (scipy.optimize), MATLAB (fmincon), C++ (RobOptim, NLopt), and R (nloptr). While my list of optimization routines is not exhaustive, some are more reliable than others, some provide faster execution, and some have better documentation. Based on these factors, I find nloptr, implemented in R, to be the most suitable for nonlinear optimization. nloptr uses NLopt, implemented in C++, as a backend. As a result, it provides the elegance of the R language and the speed of C++. The optimization completes in a fraction of a second even with a tolerance on the order of 1e-15.

Nonlinear Optimization Problem

A general nonlinear optimization problem usually has the form

min f(x)
subject to
    g(x) ≤ 0
    h(x) = 0
    xL ≤ x ≤ xU

where f is the objective function, g defines a set of inequality constraints, h defines a set of equality constraints, and xL and xU are the lower and upper bounds, respectively. Several optimization algorithms have been presented in the literature. For example, MMA (Method of Moving Asymptotes)¹ and COBYLA (Constrained Optimization BY Linear Approximations)² support arbitrary nonlinear inequality constraints, as does the ORIG_DIRECT variant of the DIRECT algorithm³. Optimization algorithms that also support nonlinear equality constraints include ISRES (Improved Stochastic Ranking Evolution Strategy)⁴ and AUGLAG (Augmented Lagrangian Algorithm)⁵. A full list of such methods can be found on the NLopt reference page at https://nlopt.readthedocs.io/en/latest/NLopt_Reference/. nloptr uses one of these methods to solve the given optimization problem.
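
In nloptr, each component of this formulation maps to a named argument of the nloptr() function. The sketch below is a toy problem of my own construction (not one of the article's examples), included only to show the correspondence; the equality-constraint argument eval_g_eq works the same way and appears in Example 3.

library(nloptr)

# Toy illustration: minimize f(x) = (x1 - 1)^2 + (x2 - 2)^2
# subject to g(x) = x1 + x2 - 2 <= 0 and bounds 0 <= x1, x2 <= 5
f <- function(x) (x[1] - 1)^2 + (x[2] - 2)^2   # objective f(x)
g <- function(x) x[1] + x[2] - 2               # inequality g(x) <= 0

res <- nloptr( x0          = c(0, 0),          # initial guess
               eval_f      = f,                # objective function
               lb          = c(0, 0),          # lower bounds xL
               ub          = c(5, 5),          # upper bounds xU
               eval_g_ineq = g,                # inequality constraint(s)
               opts        = list( "algorithm" = "NLOPT_LN_COBYLA",
                                   "xtol_rel"  = 1.0e-8 ) )
print( res$solution )   # expected: close to (0.5, 1.5)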

"MMA (Method of moving asymptotes) supports arbitrary nonlinear inequality constraints, (COBYLA) Constrained Optimization BY Linear Approximation, (ORIG_DRIECT) DIRECT algorithm. Optimization algorithms that also support nonlinear equality constraints include ISRES (Improved Stochastic Ranking Evolution Strategy), (AUGLAG) Augmented Lagrangian Algorithm."

In the rest of the article, I provide several examples of solving a constrained optimization problem using R. I personally use RStudio, which combines the R interpreter with an editor. RStudio also provides the knitr tool, which is great for writing documentation or articles with inline code and can also generate LaTeX source and a PDF file. Most of the examples presented here have been adapted from the test suite used to validate the functions in the nloptr R package.

Installation and loading of the library

Installation of nloptr in R is fairly straightforward.

install.packages("nloptr")
library("nloptr")

Example 1: Optimization with explicit gradient

In the first example, we will minimize the Rosenbrock Banana function

f(x) = 100 (x2 − x1²)² + (1 − x1)²

whose gradient is given by

∇f(x) = ( −400 · x1 · (x2 − x1²) − 2 (1 − x1),  200 (x2 − x1²) )

However, not all algorithms in NLopt require an explicit gradient, as we will see in later examples. Let's define the objective function and its gradient first:

# Rosenbrock Banana objective function
eval_f <- function(x) {
    return( 100 * (x[2] - x[1] * x[1])^2 + (1 - x[1])^2 )
}

# Analytic gradient of the objective
eval_grad_f <- function(x) {
    return( c( -400 * x[1] * (x[2] - x[1] * x[1]) - 2 * (1 - x[1]),
                200 * (x[2] - x[1] * x[1]) ) )
}

We will also need initial values for the optimization variables:

x0 <- c( -1.2, 1 )

Before we run the minimization procedure, we need to specify which algorithm we will use. That can be done as follows:

opts <- list("algorithm"="NLOPT_LD_LBFGS",
"xtol_rel"=1.0e-8)

Here, we will use the L-BFGS algorithm. Now we are ready to run the optimization procedure.

# solve Rosenbrock Banana function
res <- nloptr( x0          = x0,
               eval_f      = eval_f,
               eval_grad_f = eval_grad_f,
               opts        = opts )

We can see the result by typing

print(res)

The function is minimized at (1, 1), which matches the ground truth. Check it yourself by running the code in RStudio.
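
Beyond print(), the returned object exposes the pieces of the result programmatically:

# Inspect individual fields of the result object
res$solution    # the optimizer, should be close to c(1, 1)
res$objective   # objective value at the solution, close to 0
res$status      # integer termination status code
res$message     # human-readable termination message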

Example 2: Minimization with inequality constraints without gradients

The problem to minimize is

min √x2

subject to x2 ≥ (a1·x1 + b1)³ and x2 ≥ (a2·x1 + b2)³,

with a1 = 2, b1 = 0, a2 = −1, and b2 = 1. We re-arrange the constraints to have the form g(x) ≤ 0:

(a1·x1 + b1)³ − x2 ≤ 0
(a2·x1 + b2)³ − x2 ≤ 0

First, define the objective function

# objective function
# (a and b must be accepted here because nloptr passes the extra
#  arguments to every evaluation function, even when they are unused)
eval_f0 <- function( x, a, b ) {
    return( sqrt(x[2]) )
}

and the constraint function is (since a and b are length-2 vectors, it returns both inequality constraints at once):

# constraint function: (a*x1 + b)^3 - x2 <= 0
eval_g0 <- function( x, a, b ) {
    return( (a * x[1] + b)^3 - x[2] )
}

Define parameters

# define parameters
a <- c(2,-1)
b <- c(0, 1)

Now solve using NLOPT_LN_COBYLA without gradient information

# Solve using NLOPT_LN_COBYLA without gradient information
res1 <- nloptr( x0          = c(1.234, 5.678),
                eval_f      = eval_f0,
                lb          = c(-Inf, 0),
                ub          = c(Inf, Inf),
                eval_g_ineq = eval_g0,
                opts        = list( "algorithm" = "NLOPT_LN_COBYLA",
                                    "xtol_rel"  = 1.0e-8 ),
                a           = a,
                b           = b )
print( res1 )
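
This is the inequality-constrained example from the NLopt tutorial; its known optimum is x = (1/3, 8/27) with objective value √(8/27) ≈ 0.544. A quick sanity check against the returned solution:

# Compare the numerical solution against the known optimum
x_opt <- c(1/3, 8/27)
print( res1$solution - x_opt )        # should be close to c(0, 0)
print( res1$objective - sqrt(8/27) )  # should be close to 0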

Example 3: Minimization with equality and inequality constraints without gradients

We want to solve the following constrained optimization problem

min x1·x4·(x1 + x2 + x3) + x3

subject to the constraints

x1·x2·x3·x4 ≥ 25
x1² + x2² + x3² + x4² = 40
1 ≤ x1, x2, x3, x4 ≤ 5

For this example, the optimal solution is achieved at (1.00000000, 4.74299963, 3.82114998, 1.37940829).

# Objective function
eval_f <- function(x) {
    return( x[1] * x[4] * (x[1] + x[2] + x[3]) + x[3] )
}

# Inequality constraint: 25 - x1*x2*x3*x4 <= 0
eval_g_ineq <- function(x) {
    return( 25 - x[1] * x[2] * x[3] * x[4] )
}

# Equality constraint: x1^2 + x2^2 + x3^2 + x4^2 - 40 = 0
eval_g_eq <- function(x) {
    return( x[1]^2 + x[2]^2 + x[3]^2 + x[4]^2 - 40 )
}

# Lower and upper bounds
lb <- c(1, 1, 1, 1)
ub <- c(5, 5, 5, 5)

# Initial values
x0 <- c(1, 5, 5, 1)

We will also need to define options for the optimization:

# Set optimization options.
local_opts <- list( "algorithm" = "NLOPT_LD_MMA",
                    "xtol_rel"  = 1.0e-15 )
opts <- list( "algorithm"   = "NLOPT_GN_ISRES",
              "xtol_rel"    = 1.0e-15,
              "maxeval"     = 160000,
              "local_opts"  = local_opts,
              "print_level" = 0 )

We use NLOPT_GN_ISRES for the overall (global) optimization and pass NLOPT_LD_MMA as the local optimizer; the local options are only consulted by algorithms that delegate to a local solver, such as AUGLAG. You can set the tolerance very low to get the best result. The number of iterations is set using maxeval. Setting the tolerance very low or the number of iterations very high may yield the best approximation at the cost of increased computation time. Finally, we optimize:

res <- nloptr ( x0 = x0,
                eval_f = eval_f,
                lb = lb,
                ub = ub,
                eval_g_ineq = eval_g_ineq,
                eval_g_eq = eval_g_eq,
                opts = opts
)
print(res)

In my case, the result came out to be (1, 4.768461, 3.78758, 1.384204), which is fairly close to the ground truth.
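
Because ISRES is a stochastic global algorithm, one optional follow-up (my suggestion, not part of the original example) is to polish its solution with a local run. Since we have an equality constraint and no gradients, a sketch using AUGLAG with a derivative-free COBYLA subsolver:

# Optional polishing step: local augmented-Lagrangian run
# started from the ISRES solution
local_opts <- list( "algorithm" = "NLOPT_LN_COBYLA",
                    "xtol_rel"  = 1.0e-12 )
res2 <- nloptr( x0          = res$solution,
                eval_f      = eval_f,
                lb          = lb,
                ub          = ub,
                eval_g_ineq = eval_g_ineq,
                eval_g_eq   = eval_g_eq,
                opts        = list( "algorithm"  = "NLOPT_LN_AUGLAG",
                                    "xtol_rel"   = 1.0e-12,
                                    "maxeval"    = 20000,
                                    "local_opts" = local_opts ) )
print( res2$solution )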

Example 4: Minimization with multiple inequality constraints without gradients

Our objective function in this case is

min x1² + x2²

subject to

x1 + x2 ≥ 1
x1² + x2² ≥ 1
9x1² + x2² ≥ 9
x2 ≤ x1²
x1 ≤ x2²

with bounds on the variables

−50 ≤ x1, x2 ≤ 50

For this problem, the optimal solution is reached at (1, 1). Let's write the code now.

# Objective function
eval_f <- function(x) {
    return( x[1]^2 + x[2]^2 )
}

# Inequality constraints (each component must be <= 0)
eval_g_ineq <- function(x) {
    constr <- c( 1 - x[1] - x[2],
                 1 - x[1]^2 - x[2]^2,
                 9 - 9 * x[1]^2 - x[2]^2,
                 x[2] - x[1]^2,
                 x[1] - x[2]^2 )
    return( constr )
}

# Lower and upper bounds
lb <- c(-50, -50)
ub <- c(50, 50)

# Initial values
x0 <- c(3, 1)

Finally, define options for nloptr as follows:

opts <- list( "algorithm"
= "NLOPT_GN_ISRES",
"xtol_rel"
= 1.0e-15,
"maxeval"= 160000,
"tol_constraints_ineq" = rep( 1.0e-10, 5 ))

and then perform the optimization

res <- nloptr(
        x0          = x0,
        eval_f      = eval_f,
        lb          = lb,
        ub          = ub,
        eval_g_ineq = eval_g_ineq,
        opts        = opts )
print(res)

With the specified tolerance and number of iterations, I was able to achieve the optimal solution of (1, 1).

While I didn't present an example with multiple equality constraints, it is very similar to Example 4: simply return a vector of constraint values from eval_g_eq. However, be sure to select an algorithm that supports nonlinear equality constraints, such as NLOPT_GN_ISRES; a minimal sketch follows.
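
Here is that sketch, using a small hypothetical problem I constructed for illustration (two equality constraints; the analytic optimum is (1.5, 0.5, 1.0)):

# Hypothetical example: minimize x1^2 + x2^2 + x3^2
# subject to x1 + x2 + x3 = 3 and x1 - x2 = 1
eval_f <- function(x) {
    return( x[1]^2 + x[2]^2 + x[3]^2 )
}
eval_g_eq <- function(x) {
    return( c( x[1] + x[2] + x[3] - 3,    # x1 + x2 + x3 = 3
               x[1] - x[2] - 1 ) )        # x1 - x2 = 1
}
res <- nloptr( x0        = c(0, 0, 0),
               eval_f    = eval_f,
               lb        = c(-5, -5, -5),  # ISRES requires finite bounds
               ub        = c( 5,  5,  5),
               eval_g_eq = eval_g_eq,
               opts      = list( "algorithm" = "NLOPT_GN_ISRES",
                                 "xtol_rel"  = 1.0e-12,
                                 "maxeval"   = 160000,
                                 "tol_constraints_eq" = rep(1.0e-8, 2) ) )
print( res$solution )   # analytic optimum: (1.5, 0.5, 1.0)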

References

  1. K. Svanberg, The method of moving asymptotes – a new method for structural optimization, International Journal for Numerical Methods in Engineering, 1987, 24, 359–373.
  2. M. J. D. Powell, A direct search optimization method that models the objective and constraint functions by linear interpolation, in Advances in Optimization and Numerical Analysis, Kluwer Academic, Dordrecht, 1994, pp. 51–67.
  3. https://people.cs.vt.edu/~ltw/lecture_notes/HPCS08tut.pdf
  4. Thomas P. Runarsson and Xin Yao, "Stochastic ranking for constrained evolutionary optimization," IEEE Trans. Evolutionary Computation, vol. 4 (no. 3), pp. 284–294 (2000).
  5. https://www.him.uni-bonn.de/fileadmin/him/Section6_HIM_v1.pdf
  6. https://people.kth.se/~krille/mmagcmma.pdf
  7. https://cran.r-project.org/web/packages/nloptr/vignettes/nloptr.pdf
  8. http://faculty.cas.usf.edu/jkwilde/mathcamp/Constrained_Optimization.pdf
  9. https://nlopt.readthedocs.io/en/latest/

If this article benefits you, please use the following citation to reference my work:

Rahul Bhadani. Nonlinear Optimization in R using nlopt. arXiv preprint arXiv:2101.02912, 2021.

or

@article{bhadani2021nonlinear,
    title={Nonlinear Optimization in R using nlopt},
    author={Rahul Bhadani},
    year={2021},
    eprint={2101.02912},
    archivePrefix={arXiv},
    primaryClass={math.OC},
    journal={arXiv preprint arXiv:2101.02912}
}
