OptimizationIpopt.jl
OptimizationIpopt.jl is a wrapper package that integrates Ipopt.jl with the Optimization.jl ecosystem. This allows you to use the powerful Ipopt (Interior Point OPTimizer) solver through Optimization.jl’s unified interface.
Ipopt is a software package for large-scale nonlinear optimization, designed to find (local) solutions of mathematical optimization problems of the form:

```math
\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad g_L \le g(x) \le g_U, \quad x_L \le x \le x_U
```

where ``f: \mathbb{R}^n \to \mathbb{R}`` is the objective function, ``g: \mathbb{R}^n \to \mathbb{R}^m`` are the constraint functions, the vectors ``g_L`` and ``g_U`` denote the lower and upper bounds on the constraints, and the vectors ``x_L`` and ``x_U`` are the bounds on the variables ``x``.
Installation: OptimizationIpopt.jl
To use this package, install the OptimizationIpopt package:
import Pkg;
Pkg.add("OptimizationIpopt");
Methods
OptimizationIpopt.jl provides the IpoptOptimizer algorithm, which wraps the Ipopt.jl solver for use with Optimization.jl. This is an interior-point algorithm that uses a filter line-search method and is particularly effective for:
- Large-scale nonlinear problems
- Problems with nonlinear constraints
- Problems requiring high-accuracy solutions
Algorithm Requirements
IpoptOptimizer requires:
- Gradient information (via automatic differentiation or user-provided)
- Hessian information (can be approximated or provided)
- Constraint Jacobian (for constrained problems)
- Constraint Hessian (for constrained problems)
The algorithm supports:
- Box constraints via `lb` and `ub` in the `OptimizationProblem`
- General nonlinear equality and inequality constraints via `lcons` and `ucons`
Options and Parameters
Common Interface Options
The following options can be passed as keyword arguments to `solve` and follow the common Optimization.jl interface:
- `maxiters`: Maximum number of iterations (overrides Ipopt's `max_iter`)
- `maxtime`: Maximum wall time in seconds (overrides Ipopt's `max_wall_time`)
- `abstol`: Absolute tolerance (not directly used by Ipopt)
- `reltol`: Convergence tolerance (overrides Ipopt's `tol`)
- `verbose`: Control output verbosity (overrides Ipopt's `print_level`)
  - `false` or `0`: No output
  - `true` or `5`: Standard output
  - Integer values 0-12: Different verbosity levels
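For instance, the common interface options are passed directly to `solve`. The problem below is a minimal illustrative setup (the objective and values are arbitrary):

```julia
using Optimization, OptimizationIpopt
using Zygote

# Illustrative smooth objective: minimum at x = p
f(x, p) = sum(abs2, x .- p)
optfunc = OptimizationFunction(f, AutoZygote())
prob = OptimizationProblem(optfunc, zeros(3), [1.0, 2.0, 3.0])

# Common interface options override the corresponding Ipopt settings
sol = solve(prob, IpoptOptimizer();
    maxiters = 200,   # maps to Ipopt's max_iter
    maxtime = 60.0,   # maps to Ipopt's max_wall_time
    reltol = 1e-8,    # maps to Ipopt's tol
    verbose = false)  # maps to Ipopt's print_level = 0
```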
IpoptOptimizer Constructor Options
Ipopt-specific options are passed to the IpoptOptimizer constructor. The most commonly used options are available as struct fields:
Termination Options
- `acceptable_tol::Float64 = 1e-6`: Acceptable convergence tolerance (relative)
- `acceptable_iter::Int = 15`: Number of acceptable iterations before termination
- `dual_inf_tol::Float64 = 1.0`: Desired threshold for dual infeasibility
- `constr_viol_tol::Float64 = 1e-4`: Desired threshold for constraint violation
- `compl_inf_tol::Float64 = 1e-4`: Desired threshold for complementarity conditions
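As a sketch, the termination options are set in the constructor; the values below are illustrative, not recommendations:

```julia
using OptimizationIpopt

# Stop early once 5 consecutive iterates satisfy the looser
# "acceptable" tolerances, and tighten the constraint violation threshold
opt = IpoptOptimizer(
    acceptable_tol = 1e-8,
    acceptable_iter = 5,
    constr_viol_tol = 1e-6)
# `opt` is then passed as the algorithm: solve(prob, opt)
```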
Linear Solver Options
- `linear_solver::String = "mumps"`: Linear solver to use
  - Default: `"mumps"` (included with Ipopt)
  - HSL solvers: `"ma27"`, `"ma57"`, `"ma86"`, `"ma97"` (require separate installation)
  - Others: `"pardiso"`, `"spral"` (require separate installation)
- `linear_system_scaling::String = "none"`: Method for scaling the linear system. Use `"mc19"` with the HSL solvers.
NLP Scaling Options
- `nlp_scaling_method::String = "gradient-based"`: Scaling method for the NLP
  - Options: `"none"`, `"user-scaling"`, `"gradient-based"`, `"equilibration-based"`
- `nlp_scaling_max_gradient::Float64 = 100.0`: Maximum gradient after scaling
Barrier Parameter Options
- `mu_strategy::String = "monotone"`: Update strategy for the barrier parameter (`"monotone"`, `"adaptive"`)
- `mu_init::Float64 = 0.1`: Initial value for the barrier parameter
- `mu_oracle::String = "quality-function"`: Oracle for the adaptive mu strategy
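For example, switching to the adaptive barrier update (which can help when the monotone strategy stalls; the `mu_init` value here is illustrative):

```julia
using OptimizationIpopt

# Adaptive barrier parameter updates with a smaller initial value
opt = IpoptOptimizer(
    mu_strategy = "adaptive",
    mu_init = 0.01)
```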
Hessian Options
- `hessian_approximation::String = "exact"`: How to approximate the Hessian
  - `"exact"`: Use the exact Hessian
  - `"limited-memory"`: Use an L-BFGS approximation
- `limited_memory_max_history::Int = 6`: History size for L-BFGS
- `limited_memory_update_type::String = "bfgs"`: Quasi-Newton update formula (`"bfgs"`, `"sr1"`)
Line Search Options
- `line_search_method::String = "filter"`: Line search method (`"filter"`, `"penalty"`)
- `accept_every_trial_step::String = "no"`: Accept every trial step (disables line search)
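A minimal sketch of switching to the penalty-based line search, which can be worth trying when the default filter rejects many trial steps:

```julia
using OptimizationIpopt

opt = IpoptOptimizer(line_search_method = "penalty")
```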
Additional Options Dictionary
For Ipopt options not available as struct fields, use the `additional_options` dictionary:
opt = IpoptOptimizer(
    linear_solver = "ma57",
    additional_options = Dict(
        "derivative_test" => "first-order",
        "derivative_test_tol" => 1e-4,
        "fixed_variable_treatment" => "make_parameter",
        "alpha_for_y" => "primal"
    )
)
The full list of available options is documented in the Ipopt Options Reference.
Option Priority
Options follow this priority order (highest to lowest):
1. Common interface arguments passed to `solve` (e.g., `reltol`, `maxiters`)
2. Options in the `additional_options` dictionary
3. Struct field values in `IpoptOptimizer`
Example with multiple option sources:
opt = IpoptOptimizer(
    acceptable_tol = 1e-6,           # Struct field
    mu_strategy = "adaptive",        # Struct field
    linear_solver = "ma57",          # Struct field (needs HSL)
    print_timing_statistics = "yes", # Struct field
    additional_options = Dict(
        "alpha_for_y" => "primal",   # Not a struct field
        "max_iter" => 500            # Will be overridden by maxiters below
    )
)

sol = solve(prob, opt;
    maxiters = 1000,  # Overrides max_iter in additional_options
    reltol = 1e-8     # Sets Ipopt's tol
)
Examples
Basic Unconstrained Optimization
The Rosenbrock function can be minimized using IpoptOptimizer:
using Optimization, OptimizationIpopt
using Zygote
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]
# Ipopt requires gradient information
optfunc = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optfunc, x0, p)
sol = solve(prob, IpoptOptimizer())
retcode: Success
u: 2-element Vector{Float64}:
0.9999999999999899
0.9999999999999792
Box-Constrained Optimization
Adding box constraints to limit the search space:
using Optimization, OptimizationIpopt
using Zygote
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]
optfunc = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optfunc, x0, p;
    lb = [-1.0, -1.0],
    ub = [1.5, 1.5])
sol = solve(prob, IpoptOptimizer())
retcode: Success
u: 2-element Vector{Float64}:
0.9999999942690103
0.9999999885017182
Nonlinear Constrained Optimization
Solving problems with nonlinear equality and inequality constraints:
using Optimization, OptimizationIpopt
using Zygote
# Objective: minimize x[1]^2 + x[2]^2
objective(x, p) = x[1]^2 + x[2]^2
# Constraint: x[1]^2 + x[2]^2 - 2*x[1] = 0 (equality)
# and x[1] + x[2] >= 1 (inequality)
function constraints(res, x, p)
    res[1] = x[1]^2 + x[2]^2 - 2*x[1]  # equality constraint
    res[2] = x[1] + x[2]               # inequality constraint
end
x0 = [0.5, 0.5]
optfunc = OptimizationFunction(objective, AutoZygote(); cons = constraints)
# First constraint is equality (lcons = ucons = 0)
# Second constraint is inequality (lcons = 1, ucons = Inf)
prob = OptimizationProblem(optfunc, x0;
    lcons = [0.0, 1.0],
    ucons = [0.0, Inf])
sol = solve(prob, IpoptOptimizer())
retcode: Success
u: 2-element Vector{Float64}:
0.29289321506718563
0.7071067774417749
Using Limited-Memory BFGS Approximation
For large-scale problems where computing the exact Hessian is expensive:
using Optimization, OptimizationIpopt
using Zygote
# Large-scale problem
n = 100
rosenbrock_nd(x, p) = sum(p[2] * (x[i+1] - x[i]^2)^2 + (p[1] - x[i])^2 for i in 1:n-1)
x0 = zeros(n)
p = [1.0, 100.0]
# Using automatic differentiation for gradients only
optfunc = OptimizationFunction(rosenbrock_nd, AutoZygote())
prob = OptimizationProblem(optfunc, x0, p)
# Use L-BFGS approximation for Hessian
sol = solve(prob, IpoptOptimizer(
        hessian_approximation = "limited-memory",
        limited_memory_max_history = 10);
    maxiters = 1000)
retcode: Success
u: 100-element Vector{Float64}:
1.000000000045498
0.9999999999674329
0.9999999999760841
1.0000000000439668
0.9999999999656436
0.9999999999982687
1.0000000000174465
0.9999999999961191
0.9999999999984535
1.0000000000270632
⋮
1.0000000000791858
0.9999999998741496
0.9999999998274194
0.9999999997875975
0.9999999993748642
0.9999999990813158
0.9999999981939615
0.9999999966259885
0.9999999933639284
Portfolio Optimization Example
A practical example of portfolio optimization with constraints:
using Optimization, OptimizationIpopt
using Zygote
using LinearAlgebra
# Portfolio optimization: minimize risk subject to return constraint
n_assets = 5
μ = [0.05, 0.10, 0.15, 0.08, 0.12] # Expected returns
Σ = [0.05 0.01 0.02 0.01 0.00;  # Covariance matrix
     0.01 0.10 0.03 0.02 0.01;
     0.02 0.03 0.15 0.02 0.03;
     0.01 0.02 0.02 0.08 0.02;
     0.00 0.01 0.03 0.02 0.06]
target_return = 0.10
# Objective: minimize portfolio variance
portfolio_risk(w, p) = dot(w, Σ * w)
# Constraints: sum of weights = 1, expected return >= target
function portfolio_constraints(res, w, p)
    res[1] = sum(w) - 1.0               # Sum to 1 (equality)
    res[2] = dot(μ, w) - target_return  # Minimum return (inequality)
end
optfunc = OptimizationFunction(portfolio_risk, AutoZygote();
    cons = portfolio_constraints)
w0 = fill(1.0/n_assets, n_assets)
prob = OptimizationProblem(optfunc, w0;
    lb = zeros(n_assets),  # No short selling
    ub = ones(n_assets),   # No single asset > 100%
    lcons = [0.0, 0.0],    # Equality and inequality constraints
    ucons = [0.0, Inf])
sol = solve(prob, IpoptOptimizer();
    reltol = 1e-8,
    verbose = 5)
println("Optimal weights: ", sol.u)
println("Expected return: ", dot(μ, sol.u))
println("Portfolio variance: ", sol.objective)
This is Ipopt version 3.14.19, running with linear solver MUMPS 5.8.2.
Number of nonzeros in equality constraint Jacobian...: 5
Number of nonzeros in inequality constraint Jacobian.: 5
Number of nonzeros in Lagrangian Hessian.............: 15
Total number of variables............................: 5
variables with only lower bounds: 0
variables with lower and upper bounds: 5
variables with only upper bounds: 0
Total number of equality constraints.................: 1
Total number of inequality constraints...............: 1
inequality constraints with only lower bounds: 1
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 3.1200000e-02 0.00e+00 3.43e-02 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 3.2177561e-02 0.00e+00 1.30e-02 -1.7 1.78e-02 - 9.93e-01 1.00e+00h 1
2 3.0759152e-02 0.00e+00 4.26e-02 -2.5 5.87e-02 - 9.79e-01 1.00e+00f 1
3 2.9846484e-02 0.00e+00 2.83e-08 -2.5 1.07e-01 - 1.00e+00 1.00e+00f 1
4 2.8068833e-02 0.00e+00 2.47e-02 -3.8 3.60e-02 - 8.65e-01 1.00e+00f 1
5 2.7401391e-02 0.00e+00 1.50e-09 -3.8 2.41e-02 - 1.00e+00 1.00e+00f 1
6 2.7234938e-02 0.00e+00 1.46e-04 -5.7 1.03e-02 - 9.79e-01 1.00e+00f 1
7 2.7232343e-02 0.00e+00 1.84e-11 -5.7 1.85e-03 - 1.00e+00 1.00e+00f 1
8 2.7230500e-02 2.22e-16 2.51e-14 -8.6 1.40e-04 - 1.00e+00 1.00e+00f 1
Number of Iterations....: 8
(scaled) (unscaled)
Objective...............: 2.7230500043271134e-02 2.7230500043271134e-02
Dual infeasibility......: 2.5091040356528538e-14 2.5091040356528538e-14
Constraint violation....: 2.2204460492503131e-16 2.2204460492503131e-16
Variable bound violation: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 6.4320485361915960e-09 6.4320485361915960e-09
Overall NLP error.......: 6.4320485361915960e-09 6.4320485361915960e-09
Number of objective function evaluations = 9
Number of objective gradient evaluations = 9
Number of equality constraint evaluations = 9
Number of inequality constraint evaluations = 9
Number of equality constraint Jacobian evaluations = 9
Number of inequality constraint Jacobian evaluations = 9
Number of Lagrangian Hessian evaluations = 8
Total seconds in IPOPT = 1.938
EXIT: Optimal Solution Found.
Optimal weights: [0.23290714113956607, 0.16055161296302972, 0.0967086319310212, 0.08466825825418363, 0.4251643557121995]
Expected return: 0.09999999648873309
Portfolio variance: 0.027230500043271134
Tips and Best Practices
- Scaling: Ipopt performs better when variables and constraints are well scaled. Consider normalizing your problem if variables have very different magnitudes.
- Initial Points: Provide a good initial guess when possible. Ipopt is a local optimizer, so the solution quality depends on the starting point.
- Hessian Approximation: For large problems, or when computing the exact Hessian is expensive, use `hessian_approximation = "limited-memory"` in the `IpoptOptimizer` constructor.
- Linear Solver Selection: The choice of linear solver can significantly impact performance. For large problems, consider the HSL solvers (`"ma27"`, `"ma57"`, `"ma86"`, `"ma97"`); note that they require separate installation (see the Ipopt.jl documentation for setup instructions). The default MUMPS solver works well for small to medium problems.
- Constraint Formulation: Ipopt handles equality constraints well. When possible, formulate constraints as equalities rather than pairs of inequalities.
- Warm Starting: When solving a sequence of similar problems, use the solution of the previous problem as the initial point for the next. You can enable warm starting with `IpoptOptimizer(warm_start_init_point = "yes")`.
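A sketch of the warm-starting pattern, reusing the Rosenbrock setup from the examples above (`remake` is the standard SciMLBase utility, re-exported by Optimization.jl, for rebuilding a problem with changed fields):

```julia
using Optimization, OptimizationIpopt
using Zygote

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
optfunc = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optfunc, zeros(2), [1.0, 100.0])
sol1 = solve(prob, IpoptOptimizer())

# Solve a slightly perturbed problem, starting from the previous solution
prob2 = remake(prob; u0 = sol1.u, p = [1.1, 100.0])
sol2 = solve(prob2, IpoptOptimizer(warm_start_init_point = "yes"))
```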