AnyMath Documentation

Solving the Rosenbrock Problem in >10 Ways


This tutorial demonstrates many different optimizers to showcase the flexibility of Optimization.jl. It is a hodgepodge of solvers meant to give a feel for the common workflows of the package and to provide copy-pastable starting points.

Note: this example uses many different solvers of Optimization.jl. Each solver subpackage needs to be installed separately. For example, for details on the installation and usage of the OptimizationOptimJL.jl package, see the Optim.jl page.

The objective of this exercise is to determine the values of x that minimize the result of the Rosenbrock function for some given parameter values p. The Rosenbrock function is useful for testing because it is known to have a global minimum of 0 (attained at x = [1, 1] for the parameter values p = [1.0, 100.0] used below).
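As a quick sanity check of that claim, the same function can be evaluated at the initial guess and at the known minimum. (This sketch is in Python purely for illustration; the tutorial itself uses Julia, where the equivalent definition appears below.)

```python
# Rosenbrock function as used throughout this page:
# f(x, p) = (p1 - x1)^2 + p2 * (x2 - x1^2)^2
def rosenbrock(x, p):
    return (p[0] - x[0]) ** 2 + p[1] * (x[1] - x[0] ** 2) ** 2

p = (1.0, 100.0)                   # the parameter values used on this page

print(rosenbrock((0.0, 0.0), p))   # value at the initial guess: 1.0
print(rosenbrock((1.0, 1.0), p))   # value at the global minimum: 0.0
```

Both squared terms vanish at x = [1, 1], so the minimum value is exactly 0.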

The Optimization.jl interface expects the objective to be defined as a function of a vector of optimization variables x and a vector of parameters p, i.e. rosenbrock(x, p):

The parameters are captured in the vector _p and assigned some arbitrary values to produce a specific Rosenbrock function to minimize.

The optimization variables are captured in the vector x.

An initial guess x0 for the location of the minimum is needed to initialize the optimizer.

Now the optimization problem can be defined and solved to estimate the values of x that minimize the output of this function.

# Define the problem to solve
using Optimization, ForwardDiff, Zygote

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
_p = [1.0, 100.0]

f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
l1 = rosenbrock(x0, _p)
prob = OptimizationProblem(f, x0, _p)
OptimizationProblem. In-place: true
u0: 2-element Vector{Float64}:
 0.0
 0.0

Optim.jl Solvers

Start with some derivative-free optimizers:

using OptimizationOptimJL
sol = solve(prob, SimulatedAnnealing())
prob = OptimizationProblem(f, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8])
sol = solve(prob, SAMIN())

l1 = rosenbrock(x0, _p)
prob = OptimizationProblem(rosenbrock, x0, _p)
sol = solve(prob, NelderMead())
retcode: Success
u: 2-element Vector{Float64}:
 0.9999634355313174
 0.9999315506115275

Now a gradient-based optimizer with forward-mode automatic differentiation:

optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, x0, _p)
sol = solve(prob, BFGS())
retcode: Success
u: 2-element Vector{Float64}:
 0.9999999999373603
 0.99999999986862
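BFGS consumes the gradient that AutoForwardDiff() supplies automatically. For intuition about what is being computed, here is that gradient derived by hand and cross-checked against finite differences (again in Python, for illustration only):

```python
def rosenbrock(x, p):
    return (p[0] - x[0]) ** 2 + p[1] * (x[1] - x[0] ** 2) ** 2

def grad(x, p):
    # Hand-derived gradient of the Rosenbrock function above.
    g1 = 2.0 * (x[0] - p[0]) - 4.0 * p[1] * x[0] * (x[1] - x[0] ** 2)
    g2 = 2.0 * p[1] * (x[1] - x[0] ** 2)
    return (g1, g2)

def fd_grad(x, p, h=1e-6):
    # Central finite differences, used only to cross-check the formula.
    out = []
    for i in range(2):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        out.append((rosenbrock(xp, p) - rosenbrock(xm, p)) / (2 * h))
    return tuple(out)

p = (1.0, 100.0)
print(grad((0.0, 0.0), p))   # (-2.0, 0.0): gradient at the initial guess
print(grad((1.0, 1.0), p))   # (0.0, 0.0): stationary at the minimum
```

The gradient vanishes at [1, 1], consistent with the BFGS result above converging there.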

Now a second-order optimizer using Hessians generated by forward-mode automatic differentiation:

sol = solve(prob, Newton())
retcode: Success
u: 2-element Vector{Float64}:
 0.9999999999999994
 0.9999999999999989

Now a second-order Hessian-free optimizer:

sol = solve(prob, Optim.KrylovTrustRegion())
retcode: Success
u: 2-element Vector{Float64}:
 0.999999999999108
 0.9999999999981819

Now derivative-based optimizers with various constraints:

cons = (res, x, p) -> res .= [x[1]^2 + x[2]^2]
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = cons)

prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf])
sol = solve(prob, IPNewton()) # Note that -Inf < x[1]^2 + x[2]^2 < Inf is always true

prob = OptimizationProblem(optf, x0, _p, lcons = [-5.0], ucons = [10.0])
sol = solve(prob, IPNewton()) # Again, -5.0 < x[1]^2 + x[2]^2 < 10.0

prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf],
    lb = [-500.0, -500.0], ub = [50.0, 50.0])
sol = solve(prob, IPNewton())

prob = OptimizationProblem(optf, x0, _p, lcons = [0.5], ucons = [0.5],
    lb = [-500.0, -500.0], ub = [50.0, 50.0])
sol = solve(prob, IPNewton())

# Notice now that x[1]^2 + x[2]^2 ≈ 0.5:
res = zeros(1)
cons(res, sol.u, _p)
println(res)
┌ Warning: The selected optimization algorithm requires second order derivatives, but `SecondOrder` ADtype was not provided.
│         So a `SecondOrder` with AutoForwardDiff() for both inner and outer will be created, this can be suboptimal and not work in some cases so
│         an explicit `SecondOrder` ADtype is recommended.
└ @ OptimizationBase ~/work/package/Optimization.jl/lib/OptimizationBase/src/cache.jl:51
[0.49999999999999994]
function con_c(res, x, p)
    res .= [x[1]^2 + x[2]^2]
end

optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = con_c)
prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [0.25^2])
sol = solve(prob, IPNewton()) # -Inf < con_c(sol.u, _p) = 0.25^2
retcode: Success
u: 2-element Vector{Float64}:
 0.24327905408863862
 0.05757865786675858

Evolutionary.jl Solvers

using OptimizationEvolutionary
sol = solve(prob, CMAES(μ = 40, λ = 100), abstol = 1e-15) # -Inf < con_c(sol.u, _p) = 0.25^2
retcode: Success
u: 2-element Vector{Float64}:
 0.243331863020338
 0.05735507335058926

Ipopt via OptimizationMOI

using OptimizationMOI, Ipopt

function con2_c(res, x, p)
    res .= [x[1]^2 + x[2]^2, x[2] * sin(x[1]) - x[1]]
end

optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote(); cons = con2_c)
prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf, -Inf], ucons = [100.0, 100.0])
sol = solve(prob, Ipopt.Optimizer())
retcode: Success
u: 2-element Vector{Float64}:
 0.9999999999080653
 0.9999999998157655

Now let's switch over to OptimizationOptimisers with reverse-mode AD:

import OptimizationOptimisers
optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
prob = OptimizationProblem(optf, x0, _p)
sol = solve(prob, OptimizationOptimisers.Adam(0.05), maxiters = 1000, progress = false)
retcode: Default
u: 2-element Vector{Float64}:
 0.999957450694706
 0.9999163934196296

Try out CMAEvolutionStrategy.jl's evolutionary methods:

using OptimizationCMAEvolutionStrategy
sol = solve(prob, CMAEvolutionStrategyOpt())
retcode: Failure
u: 2-element Vector{Float64}:
 1.0000002182731558
 1.00000041914492

Now try some NLopt.jl solvers, with symbolic differentiation via ModelingToolkit.jl:

using OptimizationNLopt, ModelingToolkit
optf = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics())
prob = OptimizationProblem(optf, x0, _p)

sol = solve(prob, Opt(:LN_BOBYQA, 2))
sol = solve(prob, Opt(:LD_LBFGS, 2))
retcode: Success
u: 2-element Vector{Float64}:
 0.9999999999894374
 0.9999999999844783

Add some box constraints and solve with a few NLopt.jl methods:

prob = OptimizationProblem(optf, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8])
sol = solve(prob, Opt(:LD_LBFGS, 2))
sol = solve(prob, Opt(:G_MLSL_LDS, 2), local_method = Opt(:LD_LBFGS, 2), maxiters = 10000) # a global optimizer with random starts of local optimization
retcode: MaxIters
u: 2-element Vector{Float64}:
 0.8
 0.6400000000000001

BlackBoxOptim.jl Solvers

using OptimizationBBO
prob = Optimization.OptimizationProblem(rosenbrock, [0.0, 0.3], _p, lb = [-1.0, 0.2],
    ub = [0.8, 0.43])
sol = solve(prob, BBO_adaptive_de_rand_1_bin()) # -1.0 ≤ x[1] ≤ 0.8, 0.2 ≤ x[2] ≤ 0.43
retcode: MaxIters
u: 2-element Vector{Float64}:
 0.6577248383485392
 0.43

And this is only a small sampling of what Optimization.jl has to offer!