
OptimizationIpopt.jl


`OptimizationIpopt.jl` is a wrapper package that integrates https://github.com/jump-dev/Ipopt.jl[`Ipopt.jl`] with the https://github.com/SciML/Optimization.jl[`Optimization.jl`] ecosystem. This allows you to use the powerful Ipopt (Interior Point OPTimizer) solver through Optimization.jl's unified interface.

Ipopt is a software package for large-scale nonlinear optimization, designed to find (local) solutions of mathematical optimization problems of the form:

\[
\min_{x \in \mathbb{R}^n} f(x)
\quad \text{subject to} \quad
g_L \le g(x) \le g_U, \qquad
x_L \le x \le x_U
\]

where f(x) is the objective function, g(x) are the constraint functions, the vectors g_L and g_U denote the lower and upper bounds on the constraints, and the vectors x_L and x_U are the bounds on the variables x.

Installation: OptimizationIpopt.jl

To use this package, install the OptimizationIpopt package:

import Pkg;
Pkg.add("OptimizationIpopt");

Methods

OptimizationIpopt.jl provides the `IpoptOptimizer` algorithm, which wraps the Ipopt.jl solver for use with Optimization.jl. This is an interior-point algorithm using a line-search filter method, and it is particularly effective for:

* Large-scale nonlinear problems
* Problems with nonlinear constraints
* Problems requiring high-accuracy solutions

Algorithm Requirements

`IpoptOptimizer` requires:

* Gradient information (via automatic differentiation or user-provided)
* Hessian information (can be approximated or provided)
* Constraint Jacobians (for constrained problems)
* Constraint Hessians (for constrained problems)

The algorithm supports:

* Box constraints via `lb`/`ub` in the `OptimizationProblem`
* General nonlinear equality and inequality constraints via `lcons`/`ucons`

Basic Usage

using Optimization, OptimizationIpopt

# Create optimizer with default settings
opt = IpoptOptimizer()

# Or configure Ipopt-specific options
opt = IpoptOptimizer(
    acceptable_tol = 1e-8,
    mu_strategy = "adaptive"
)

# Solve the problem
sol = solve(prob, opt)

Options and Parameters

Common Interface Options

The following options can be passed as keyword arguments to `solve` and follow the common Optimization.jl interface:

* `maxiters`: Maximum number of iterations (overrides Ipopt's `max_iter`)
* `maxtime`: Maximum wall time in seconds (overrides Ipopt's `max_wall_time`)
* `abstol`: Absolute tolerance (not used directly by Ipopt)
* `reltol`: Convergence tolerance (overrides Ipopt's `tol`)
* `verbose`: Controls output verbosity (overrides Ipopt's `print_level`)
** `false`: no output (print level 0)
** `true`: standard output (print level 5)
** Integer values 0-12: different verbosity levels
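As a sketch of how these keywords are used, note that they are passed to `solve` rather than to the `IpoptOptimizer` constructor (this snippet assumes `prob` is an `OptimizationProblem` set up as in the examples below; the values are illustrative):

```julia
using Optimization, OptimizationIpopt

# Common-interface keywords go to `solve` and override the
# corresponding Ipopt options.
sol = solve(prob, IpoptOptimizer();
    maxiters = 500,   # sets Ipopt's max_iter
    maxtime = 60.0,   # sets Ipopt's max_wall_time (seconds)
    reltol = 1e-8,    # sets Ipopt's tol
    verbose = false)  # sets Ipopt's print_level to 0
```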

IpoptOptimizer Constructor Options

Ipopt-specific options are passed to the `IpoptOptimizer` constructor. The most commonly used options are available as struct fields:

Termination Options

* `acceptable_tol::Float64 = 1e-6`: Acceptable convergence tolerance (relative)
* `acceptable_iter::Int = 15`: Number of acceptable iterates before termination
* `dual_inf_tol::Float64 = 1.0`: Desired threshold for dual infeasibility
* `constr_viol_tol::Float64 = 1e-4`: Desired threshold for constraint violation
* `compl_inf_tol::Float64 = 1e-4`: Desired threshold for the complementarity conditions
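For example, to let the solver stop early once it has stalled at a near-optimal iterate, you can loosen the acceptable-point criteria (a configuration sketch; the values are illustrative, not recommendations):

```julia
using Optimization, OptimizationIpopt

# Terminate if 10 consecutive iterates satisfy the looser "acceptable"
# tolerance, even if the full tolerance `tol` has not been reached.
opt = IpoptOptimizer(
    acceptable_tol = 1e-5,
    acceptable_iter = 10)
```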

Linear Solver Options

* `linear_solver::String = "mumps"`: Linear solver to use
** Default: `"mumps"` (included with Ipopt)
** HSL solvers: `"ma27"`, `"ma57"`, `"ma86"`, `"ma97"` (require https://github.com/jump-dev/Ipopt.jl?tab=readme-ov-file#linear-solvers[separate installation])
** Others: `"pardiso"`, `"spral"` (require https://github.com/jump-dev/Ipopt.jl?tab=readme-ov-file#linear-solvers[separate installation])
* `linear_system_scaling::String = "none"`: Method for scaling the linear system. Use `"mc19"` with the HSL solvers.

NLP Scaling Options

* `nlp_scaling_method::String = "gradient-based"`: Scaling method for the NLP
** Options: `"none"`, `"user-scaling"`, `"gradient-based"`, `"equilibration-based"`
* `nlp_scaling_max_gradient::Float64 = 100.0`: Maximum gradient after scaling

Barrier Parameter Options

* `mu_strategy::String = "monotone"`: Update strategy for the barrier parameter (`"monotone"`, `"adaptive"`)
* `mu_init::Float64 = 0.1`: Initial value of the barrier parameter
* `mu_oracle::String = "quality-function"`: Oracle for the adaptive mu strategy

Hessian Options

* `hessian_approximation::String = "exact"`: How to approximate the Hessian
** `"exact"`: Use the exact Hessian
** `"limited-memory"`: Use an L-BFGS approximation
* `limited_memory_max_history::Int = 6`: History size for L-BFGS
* `limited_memory_update_type::String = "bfgs"`: Quasi-Newton update formula (`"bfgs"`, `"sr1"`)

Line Search Options

* `line_search_method::String = "filter"`: Line search method (`"filter"`, `"penalty"`)
* `accept_every_trial_step::String = "no"`: Accept every trial step (disables the line search)

Output Options

* `print_timing_statistics::String = "no"`: Print detailed timing information
* `print_info_string::String = "no"`: Print an algorithm information string

Warm Start Options

* `warm_start_init_point::String = "no"`: Warm start from a previous solution
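A minimal warm-start sketch for a sequence of related problems (this assumes `remake` from SciMLBase, which the SciML ecosystem re-exports, and a previously solved `prob`/`sol` as in the examples below):

```julia
using Optimization, OptimizationIpopt

# Reuse the previous solution as the initial point of a new, similar
# problem, and tell Ipopt to warm start from it.
prob2 = remake(prob; u0 = sol.u)
sol2 = solve(prob2, IpoptOptimizer(warm_start_init_point = "yes"))
```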

Restoration Phase Options

* `expect_infeasible_problem::String = "no"`: Enable if the problem is expected to be infeasible

Additional Options Dictionary

For Ipopt options that are not available as struct fields, use the `additional_options` dictionary:

opt = IpoptOptimizer(
    linear_solver = "ma57",
    additional_options = Dict(
        "derivative_test" => "first-order",
        "derivative_test_tol" => 1e-4,
        "fixed_variable_treatment" => "make_parameter",
        "alpha_for_y" => "primal"
    )
)

The complete list of available options is documented in the https://coin-or.github.io/Ipopt/OPTIONS.html[Ipopt options reference].

Option Priority

Options follow this priority order (from highest to lowest):

  1. Common interface arguments passed to `solve` (e.g., `reltol`, `maxiters`)

  2. Options in the `additional_options` dictionary

  3. Struct field values of `IpoptOptimizer`

An example with multiple option sources:

opt = IpoptOptimizer(
    acceptable_tol = 1e-6,           # Struct field
    mu_strategy = "adaptive",        # Struct field
    linear_solver = "ma57",          # Struct field (needs HSL)
    print_timing_statistics = "yes", # Struct field
    additional_options = Dict(
        "alpha_for_y" => "primal",   # Not a struct field
        "max_iter" => 500            # Will be overridden by maxiters below
    )
)

sol = solve(prob, opt;
    maxiters = 1000,  # Overrides max_iter in additional_options
    reltol = 1e-8     # Sets Ipopt's tol
)

Examples

Basic Unconstrained Optimization

The Rosenbrock function can be minimized using `IpoptOptimizer`:

using Optimization, OptimizationIpopt
using Zygote

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]

# Ipopt requires gradient information
optfunc = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optfunc, x0, p)
sol = solve(prob, IpoptOptimizer())
retcode: Success
u: 2-element Vector{Float64}:
 0.9999999999999899
 0.9999999999999792

Box-Constrained Optimization

Add box constraints to restrict the search space:

using Optimization, OptimizationIpopt
using Zygote

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]

optfunc = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optfunc, x0, p;
                          lb = [-1.0, -1.0],
                          ub = [1.5, 1.5])
sol = solve(prob, IpoptOptimizer())
retcode: Success
u: 2-element Vector{Float64}:
 0.9999999942690103
 0.9999999885017182

Nonlinear Constrained Optimization

Solve a problem with nonlinear equality and inequality constraints:

using Optimization, OptimizationIpopt
using Zygote

# Objective: minimize x[1]^2 + x[2]^2
objective(x, p) = x[1]^2 + x[2]^2

# Constraint: x[1]^2 + x[2]^2 - 2*x[1] = 0 (equality)
# and x[1] + x[2] >= 1 (inequality)
function constraints(res, x, p)
    res[1] = x[1]^2 + x[2]^2 - 2*x[1]  # equality constraint
    res[2] = x[1] + x[2]                # inequality constraint
end

x0 = [0.5, 0.5]
optfunc = OptimizationFunction(objective, AutoZygote(); cons = constraints)

# First constraint is equality (lcons = ucons = 0)
# Second constraint is inequality (lcons = 1, ucons = Inf)
prob = OptimizationProblem(optfunc, x0;
                          lcons = [0.0, 1.0],
                          ucons = [0.0, Inf])

sol = solve(prob, IpoptOptimizer())
retcode: Success
u: 2-element Vector{Float64}:
 0.29289321506718563
 0.7071067774417749

Using a Limited-Memory BFGS Approximation

For large-scale problems where computing the exact Hessian is expensive:

using Optimization, OptimizationIpopt
using Zygote

# Large-scale problem
n = 100
rosenbrock_nd(x, p) = sum(p[2] * (x[i+1] - x[i]^2)^2 + (p[1] - x[i])^2 for i in 1:n-1)

x0 = zeros(n)
p = [1.0, 100.0]

# Using automatic differentiation for gradients only
optfunc = OptimizationFunction(rosenbrock_nd, AutoZygote())
prob = OptimizationProblem(optfunc, x0, p)

# Use L-BFGS approximation for Hessian
sol = solve(prob, IpoptOptimizer(
           hessian_approximation = "limited-memory",
           limited_memory_max_history = 10);
           maxiters = 1000)
retcode: Success
u: 100-element Vector{Float64}:
 1.000000000045498
 0.9999999999674329
 0.9999999999760841
 1.0000000000439668
 0.9999999999656436
 0.9999999999982687
 1.0000000000174465
 0.9999999999961191
 0.9999999999984535
 1.0000000000270632
 ⋮
 1.0000000000791858
 0.9999999998741496
 0.9999999998274194
 0.9999999997875975
 0.9999999993748642
 0.9999999990813158
 0.9999999981939615
 0.9999999966259885
 0.9999999933639284

Portfolio Optimization Example

A practical example of portfolio optimization with constraints:

using Optimization, OptimizationIpopt
using Zygote
using LinearAlgebra

# Portfolio optimization: minimize risk subject to return constraint
n_assets = 5
μ = [0.05, 0.10, 0.15, 0.08, 0.12]  # Expected returns
Σ = [0.05 0.01 0.02 0.01 0.00;      # Covariance matrix
     0.01 0.10 0.03 0.02 0.01;
     0.02 0.03 0.15 0.02 0.03;
     0.01 0.02 0.02 0.08 0.02;
     0.00 0.01 0.03 0.02 0.06]

target_return = 0.10

# Objective: minimize portfolio variance
portfolio_risk(w, p) = dot(w, Σ * w)

# Constraints: sum of weights = 1, expected return >= target
function portfolio_constraints(res, w, p)
    res[1] = sum(w) - 1.0                    # Sum to 1 (equality)
    res[2] = dot(μ, w) - target_return       # Minimum return (inequality)
end

optfunc = OptimizationFunction(portfolio_risk, AutoZygote();
                              cons = portfolio_constraints)
w0 = fill(1.0/n_assets, n_assets)

prob = OptimizationProblem(optfunc, w0;
                          lb = zeros(n_assets),     # No short selling
                          ub = ones(n_assets),      # No single asset > 100%
                          lcons = [0.0, 0.0],       # Equality and inequality constraints
                          ucons = [0.0, Inf])

sol = solve(prob, IpoptOptimizer();
           reltol = 1e-8,
           verbose = 5)

println("Optimal weights: ", sol.u)
println("Expected return: ", dot(μ, sol.u))
println("Portfolio variance: ", sol.objective)
This is Ipopt version 3.14.19, running with linear solver MUMPS 5.8.2.

Number of nonzeros in equality constraint Jacobian...:        5
Number of nonzeros in inequality constraint Jacobian.:        5
Number of nonzeros in Lagrangian Hessian.............:       15

Total number of variables............................:        5
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        5
                     variables with only upper bounds:        0
Total number of equality constraints.................:        1
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        1
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  3.1200000e-02 0.00e+00 3.43e-02  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  3.2177561e-02 0.00e+00 1.30e-02  -1.7 1.78e-02    -  9.93e-01 1.00e+00h  1
   2  3.0759152e-02 0.00e+00 4.26e-02  -2.5 5.87e-02    -  9.79e-01 1.00e+00f  1
   3  2.9846484e-02 0.00e+00 2.83e-08  -2.5 1.07e-01    -  1.00e+00 1.00e+00f  1
   4  2.8068833e-02 0.00e+00 2.47e-02  -3.8 3.60e-02    -  8.65e-01 1.00e+00f  1
   5  2.7401391e-02 0.00e+00 1.50e-09  -3.8 2.41e-02    -  1.00e+00 1.00e+00f  1
   6  2.7234938e-02 0.00e+00 1.46e-04  -5.7 1.03e-02    -  9.79e-01 1.00e+00f  1
   7  2.7232343e-02 0.00e+00 1.84e-11  -5.7 1.85e-03    -  1.00e+00 1.00e+00f  1
   8  2.7230500e-02 2.22e-16 2.51e-14  -8.6 1.40e-04    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 8

                                   (scaled)                 (unscaled)
Objective...............:   2.7230500043271134e-02    2.7230500043271134e-02
Dual infeasibility......:   2.5091040356528538e-14    2.5091040356528538e-14
Constraint violation....:   2.2204460492503131e-16    2.2204460492503131e-16
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   6.4320485361915960e-09    6.4320485361915960e-09
Overall NLP error.......:   6.4320485361915960e-09    6.4320485361915960e-09


Number of objective function evaluations             = 9
Number of objective gradient evaluations             = 9
Number of equality constraint evaluations            = 9
Number of inequality constraint evaluations          = 9
Number of equality constraint Jacobian evaluations   = 9
Number of inequality constraint Jacobian evaluations = 9
Number of Lagrangian Hessian evaluations             = 8
Total seconds in IPOPT                               = 1.938

EXIT: Optimal Solution Found.
Optimal weights: [0.23290714113956607, 0.16055161296302972, 0.0967086319310212, 0.08466825825418363, 0.4251643557121995]
Expected return: 0.09999999648873309
Portfolio variance: 0.027230500043271134

Tips and Best Practices

  1. *Scaling*: Ipopt performs better when variables and constraints are well scaled. Consider normalizing your problem if the variables have very different magnitudes.

  2. *Initial point*: Provide a good initial guess whenever possible. Ipopt is a local optimizer, and solution quality depends on the starting point.

  3. *Hessian approximation*: For large problems, or when computing the Hessian is expensive, use `hessian_approximation = "limited-memory"` in the `IpoptOptimizer` constructor.

  4. *Linear solver selection*: The choice of linear solver can significantly affect performance. For large problems, consider the HSL solvers (ma27, ma57, ma86, ma97). Note that the HSL solvers require https://github.com/jump-dev/Ipopt.jl?tab=readme-ov-file#linear-solvers[separate installation]; see the Ipopt.jl documentation for installation instructions. The default MUMPS solver works well for small to medium problems.

  5. *Constraint formulation*: Ipopt handles equality constraints well. Where possible, formulate constraints as equalities rather than as pairs of inequalities.

  6. *Warm starting*: When solving a sequence of similar problems, use the solution of the previous problem as the initial point for the next. Enable warm starts with `IpoptOptimizer(warm_start_init_point = "yes")`.

References

For more details on the Ipopt algorithm and options, see: