
Methods for Designing a Neural Controller

The webinar on designing advanced types of controllers for controlled plants in Engee includes several examples:

1. Autofocus control for a smartphone camera

2. Application of a fuzzy controller to pressure control

3. Implementation of adaptive and neural network controllers

This project contains the companion materials for the third part of the webinar; the other parts can be studied in the community via the links above.

An example with a conventional PID controller

This is the baseline model in which we will be replacing the controller.

image.png

Model liquid_pressure_regulator.engee

Adaptive PID controller

The first thing we will do is build an adaptive PID controller whose coefficients are updated at every time step by the following procedure:

```julia
if abs(e) > c.thr
    c.Kp = max(0, c.Kp + clamp(c.α * e * c.pe,  -c.ΔK, c.ΔK))
    c.Ki = max(0, c.Ki + clamp(c.α * e * c.pie, -c.ΔK, c.ΔK))
    c.Kd = max(0, c.Kd + clamp(c.α * e * c.pde, -c.ΔK, c.ΔK))
end
```


A couple of details here. First, we do not react to error-signal deviations smaller than c.thr (the threshold parameter). Second, the parameter α limits the rate of change of all the controller's gains, while the size of any single change is bounded by the parameter ΔK.
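For illustration, here is a minimal self-contained sketch of such a controller in Julia. The field names mirror the fragment above, but the way the sensitivity terms pe, pie and pde are computed, the PID law itself and all default values are assumptions, not the model's exact code:

```julia
# Minimal sketch of an adaptive PID step (assumed structure, not the model's exact code)
Base.@kwdef mutable struct AdaptivePID
    Kp::Float64 = 1.0
    Ki::Float64 = 0.1
    Kd::Float64 = 0.01
    thr::Float64 = 0.05    # adaptation threshold: ignore small errors
    α::Float64 = 0.01      # adaptation rate
    ΔK::Float64 = 0.001    # maximum gain change per step
    pe::Float64 = 0.0      # sensitivity terms remembered from the previous step
    pie::Float64 = 0.0
    pde::Float64 = 0.0
    ie::Float64 = 0.0      # accumulated integral of the error
    e_prev::Float64 = 0.0  # error at the previous step
end

function step!(c::AdaptivePID, e, dt)
    # Adapt the gains only when the error is large enough
    if abs(e) > c.thr
        c.Kp = max(0, c.Kp + clamp(c.α * e * c.pe,  -c.ΔK, c.ΔK))
        c.Ki = max(0, c.Ki + clamp(c.α * e * c.pie, -c.ΔK, c.ΔK))
        c.Kd = max(0, c.Kd + clamp(c.α * e * c.pde, -c.ΔK, c.ΔK))
    end
    # Ordinary PID law on top of the adapted gains
    c.ie += e * dt
    de = (e - c.e_prev) / dt
    c.pe, c.pie, c.pde = e, c.ie, de  # remember the P, I and D terms for the next update
    c.e_prev = e
    return c.Kp * e + c.Ki * c.ie + c.Kd * de
end
```

In this sketch the sensitivities are simply the proportional, integral and derivative error terms of the previous step, so a gain grows while its term keeps the same sign as the current error and shrinks otherwise.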

image.png

Liquid pressure regulator model

Model liquid_pressure_regulator_adaptive_pid.engee

Neural controller in Julia (RNN)

This controller contains the code of a recurrent neural network implemented in Julia without any high-level libraries. It is trained "online" while the system is running, so the quality of its operation depends strongly on the initial values of the weights. A pre-training procedure may be needed, at least so that at the start of the computation the neural network produces stable values in the quiescent state.
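The network itself lives inside the model, so the notebook does not list its code. As a rough illustration of the idea only, a hand-written recurrent cell with an online gradient step might look like the sketch below; the sizes, the learning rate and the output-layer-only update are assumptions, not the model's actual implementation:

```julia
# Toy online-trained recurrent cell (illustration only; the model's network differs)
mutable struct TinyRNN
    Wx::Matrix{Float64}  # input weights
    Wh::Matrix{Float64}  # recurrent weights
    Wo::Matrix{Float64}  # output weights
    h::Vector{Float64}   # hidden state carried between time steps
end

# Small random initial weights: the quality of online training depends heavily on them
TinyRNN(nin, nh) = TinyRNN(0.1 * randn(nh, nin), 0.1 * randn(nh, nh), 0.1 * randn(1, nh), zeros(nh))

function step!(net::TinyRNN, x::Vector{Float64}, target::Float64; η = 1e-3)
    net.h = tanh.(net.Wx * x + net.Wh * net.h)  # recurrent state update
    u = (net.Wo * net.h)[1]                     # control output
    # One online gradient step on (u - target)^2, output layer only
    # (a crude simplification; full BPTT would also update Wx and Wh)
    δ = 2 * (u - target)
    net.Wo .-= η * δ * net.h'
    return u
end
```

Pre-training would amount to running step! on recorded data before connecting the cell to the plant, so that the weights start from a point where the output is already stable.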

image.png

Model liquid_pressure_regulator_neural_test.engee

Creating a neural controller in C (FNN)

Let's run the model and build a dataset.

In [ ]:
Pkg.add("CSV")
In [ ]:
using DataFrames, CSV
In [ ]:
data = engee.run( "liquid_pressure_regulator" )
Out[0]:
SimulationResult(
    "pressure" => WorkspaceArray{Float64}("liquid_pressure_regulator/pressure")
,
    "error" => WorkspaceArray{Float64}("liquid_pressure_regulator/error")
,
    "set_point" => WorkspaceArray{Float64}("liquid_pressure_regulator/set_point")
,
    "control" => WorkspaceArray{Float64}("liquid_pressure_regulator/control")

)
In [ ]:
function simout_to_df( data )
    # Keep only the signals that actually logged samples
    vec_names = [i for i in keys(data) if length(data[i].value) > 0];
    # Collect each signal's values into a DataFrame column
    df = DataFrame( hcat([collect(data[v]).value for v in vec_names]...), vec_names );
    df.time = collect(data[vec_names[1]]).time;
    return df
end
Out[0]:
simout_to_df (generic function with 1 method)
In [ ]:
df = simout_to_df( data );
CSV.write("Режим 1.csv", df);
plot(
    plot( df.time, df.set_point ), plot( df.time, df.control ), plot( df.time, df.pressure ), layout=(3,1)
)
Out[0]:

Let's change the model parameters and run it in another scenario.

In [ ]:
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Amplitude"=>"0.001" )
data = engee.run( "liquid_pressure_regulator" )
df = simout_to_df( data );
CSV.write("Режим 2.csv", df);
plot(
    plot( df.time, df.set_point ), plot( df.time, df.control ), plot( df.time, df.pressure ), layout=(3,1)
)
Out[0]:
In [ ]:
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Amplitude"=>"0.0005", "Frequency"=>"0.2" )
data = engee.run( "liquid_pressure_regulator" )
df = simout_to_df( data );
CSV.write("Режим 3.csv", df);
plot(
    plot( df.time, df.set_point ), plot( df.time, df.control ), plot( df.time, df.pressure ), layout=(3,1)
)
Out[0]:

Let's put all the model parameters back to their original values.

In [ ]:
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Amplitude"=>"0.0005" )
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Frequency"=>"0.1" )

Training a neural network to approximate several controllers

This code prepares the data, defines the structure of a three-layer fully connected neural network, and trains it to find the relationship between the last 20 error values and the next value of the control signal.

In [ ]:
Pkg.add(["BSON", "Glob", "MLUtils"])
In [ ]:
using Flux, MLUtils
using CSV, DataFrames
using Statistics, Random
using BSON, Glob

# Seed the random number generator for reproducibility of the experiment
Random.seed!(42)

# 1. Prepare the data
function load_and_preprocess_data_fnn()
    # Load all CSV files from the current folder
    files = glob("*.csv")
    dfs = [CSV.read(file, DataFrame, types=Float32) for file in files]
    
    # Combine the data into one table
    combined_df = vcat(dfs...)
    
    # Extract the columns we need (time, error, control)
    vtime = combined_df.time
    error = combined_df.error
    control = combined_df.control
    
    # Normalize the data (this greatly speeds up training of the network)
    error_mean, error_std = mean(error), std(error)
    control_mean, control_std = mean(control), std(control)
    
    error_norm = (error .- error_mean) ./ error_std
    control_norm = (control .- control_mean) ./ control_std
    
    # Split the data into short windows for training
    sequence_length = 20  # how many past steps we use to predict the control signal
    X = []
    Y = []
    
    for i in 1:(length(vtime)-sequence_length)
        push!(X, error_norm[i:i+sequence_length-1])
        push!(Y, control_norm[i+sequence_length])
    end
    
    # Arrange as arrays
    #X = reshape(hcat(X...), sequence_length, 1, :) # С батчами
    X = hcat(X...)'
    Y = hcat(Y...)'
    
    return (X, Y), (error_mean, error_std, control_mean, control_std)
end


# 2. Define the model structure
function create_fnn_controller(input_size=20, hidden_size=5, output_size=1)
    return Chain(
        Dense(input_size, hidden_size, relu),
        Dense(hidden_size, hidden_size, relu),
        Dense(hidden_size, output_size)
    )
end


# 3. Training with the new Flux API
function train_fnn_model(X, Y; epochs=100, batch_size=32)
    # Split the data into training and validation sets
    split_idx = floor(Int, 0.8 * size(X, 1))
    X_train, Y_train = X[1:split_idx, :], Y[1:split_idx, :]
    X_val, Y_val = X[split_idx+1:end, :], Y[split_idx+1:end, :]
    
    # Create the model and the optimizer
    model = create_fnn_controller()
    optimizer = Flux.setup(Adam(0.001), model)
    
    # Loss function
    loss(x, y) = Flux.mse(model(x), y)
    
    # Set up the DataLoader
    train_loader = Flux.DataLoader((X_train', Y_train'), batchsize=batch_size, shuffle=true)
    
    # Training loop
    train_losses = []
    val_losses = []
    
    for epoch in 1:epochs
        # Train for one epoch
        Flux.train!(model, train_loader, optimizer) do m, x, y
            y_pred = m(x)
            Flux.mse(y_pred, y)
        end
        
        # Compute the training and validation losses
        train_loss = loss(X_train', Y_train')
        val_loss = loss(X_val', Y_val')
        push!(train_losses, train_loss)
        push!(val_losses, val_loss)
        
        # Logging
        if epoch % 10 == 0
            @info "Epoch $epoch" train_loss val_loss
        end
    end
    
    # Visualize the training progress
    plot(1:epochs, train_losses, label="Training Loss")
    plot!(1:epochs, val_losses, label="Validation Loss")
    xlabel!("Epoch")
    ylabel!("Loss")
    title!("Training Progress")
    
    return model
end

# 4. Evaluate the model
function evaluate_fnn_model(model, X, Y, norm_params)
    predictions = model(X')
    
    # Denormalize
    _, _, control_mean, control_std = norm_params
    Y_true = Y .* control_std .+ control_mean
    Y_pred = predictions' .* control_std .+ control_mean
    
    # Compute the metrics
    rmse = sqrt(mean((Y_true - Y_pred).^2))
    println("RMSE: ", rmse)
    
    # Visualization
    plot(Y_true[1:100], label="True Control Signal")
    plot!(Y_pred[1:100], label="Predicted Control Signal")
    xlabel!("Time Step")
    ylabel!("Control Signal")
    title!("FNN Controller Performance")
end
Out[0]:
evaluate_fnn_model (generic function with 1 method)
In [ ]:
# Load the data
(X, Y), norm_params = load_and_preprocess_data_fnn()

# Train the model
model = train_fnn_model(X, Y, epochs=100, batch_size=32)

# Save the model
using BSON
BSON.@save "fnn_controller_v2.bson" model norm_params
In [ ]:
# Evaluate
evaluate_fnn_model(model, X, Y, norm_params)
RMSE: 0.00137486
Out[0]:

Let's generate C code for the fully connected neural network

We will generate the code using the Symbolics library and make a few modifications so that it can be called from a C Function block.

In [ ]:
Pkg.add("Symbolics")
In [ ]:
using Symbolics
@variables X[1:20]
c_model_code = build_function( model( collect(X) ), collect(X); target=Symbolics.CTarget(), fname="neural_net", lhsname=:y, rhsnames=[:x] )
Out[0]:
"#include <math.h>\nvoid neural_net(double* y, const double* x) {\n  y[0] = -0.17745794f0 + 0.50926423f0 * ifelse(-0.049992073f0 + -0.23742697f0 * ifelse(-1.8439064f0 + -0.54531264f0 * x[0] + 0.69540715f0 * x[9] + 0.13691874f0 * x[10] + 0.4638261f0 * x[11] + 0.68296975f0 *" ⋯ 48502 bytes ⋯ "8941267f0 * x[16] + -0.20798773f0 * x[17] + 0.046037998f0 * x[18] + 0.2163552f0 * x[1] + 0.17670809f0 * x[19] + 0.48885685f0 * x[2] + 0.33982033f0 * x[3] + 0.23923202f0 * x[4] + 0.44608107f0 * x[5] + -0.22155789f0 * x[6] + 0.18008044f0 * x[7] + 0.3575259f0 * x[8]));\n}\n"
In [ ]:
# Replace a few constructs in the code (float instead of double, C's "f" suffix
# instead of Julia's Float32 suffix "f0", and assignment to the scalar output y)
c_fixed_model_code = replace( c_model_code,
                              "double" => "float",
                              "f0" => "f",
                              "  y[0]"=>"y" );

println( c_fixed_model_code[1:200] )
println("...")
#include <math.h>
void neural_net(float* y, const float* x) {
y = -0.17745794f + 0.50926423f * ifelse(-0.049992073f + -0.23742697f * ifelse(-1.8439064f + -0.54531264f * x[0] + 0.69540715f * x[9] + 0.1
...
In [ ]:
# Keep only the third line of this code (the assignment to y)
c_fixed_model_code = split(c_fixed_model_code, "\n")[3]

c_code_standalone = """
float ifelse(bool cond, float a, float b) { return cond ? a : b; }

$c_fixed_model_code""";

println( c_code_standalone[1:200] )
println("...")
float ifelse(bool cond, float a, float b) { return cond ? a : b; }

y = -0.17745794f + 0.50926423f * ifelse(-0.049992073f + -0.23742697f * ifelse(-1.8439064f + -0.54531264f * x[0] + 0.69540715f * x[9]
...

Save the code to a file.

In [ ]:
open("$(@__DIR__)/neural_net_fc.c", "w") do f
    println( f, "$c_code_standalone" )
end
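Inside the model, the C Function block has to keep a window of the last 20 error samples, normalize them with the same statistics that were used during training, call neural_net, and denormalize the result. The Julia sketch below shows the equivalent logic; the real block is written in C, and the buffer handling here is an assumption about the wrapper, with model and norm_params taken from the training cells above:

```julia
# Sketch of the wrapper logic around the generated neural_net (assumed, not generated code)
error_mean, error_std, control_mean, control_std = norm_params

buffer = zeros(Float32, 20)  # the last 20 error samples, oldest first

function controller_step(e)
    # Shift the window and append the newest normalized error
    buffer[1:end-1] .= buffer[2:end]
    buffer[end] = (e - error_mean) / error_std
    # The C Function block calls neural_net(y, x) at this point; here we call the Flux model
    u_norm = model(buffer)[1]
    return u_norm * control_std + control_mean  # denormalize the control signal
end
```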

image.png

Liquid pressure regulator model

Model liquid_pressure_regulator_neural_fc.engee

Conclusion

Clearly, we trained the neural network for too short a time and on too few examples. There are many ways to improve the quality of such a controller: for example, feeding more signals into the neural network, or simply building a larger dataset for better offline training. We have shown how to develop a neural controller and how to test it in the Engee modeling environment.