Engee 文档
Notebook

神经调节剂设计的方法

网络研讨会在Engee中开发有前途的控制对象调节器类型由几个例子组成:

  1. [智能手机相机自动对焦控制](https://engee.com/community/ru/catalogs/projects/upravlenie-avtofokusom-kamery-smartfona

  2. 使用模糊控制器控制давлением

  3. 自适应和神经网络调节器的实现

该项目包含网络研讨会第三部分的随附材料,您可以使用上面的链接在社区中研究其余部分。

一个常规PID控制器的例子

这是我们将更换调节器的主要模型。

image.png

liquid_pressure_regulator。工程师模型

自适应PID控制器

首先,我们将建立一个自适应PID控制器,每个时间步长的系数都经过以下更新过程。:

``'茱莉亚
如果abs(e)>c.thr
c.Kp =最大(0,c.Kp +钳位(c.αec.pe,-c.ΔK,c.ΔK))
c.Ki =最大(0,c.Ki +clamp(c.αec.pie,-c.ΔK,c.ΔK))
c.Kd=max(0,c.Kd+钳位(c.αec.pde,-c.ΔK,c.ΔK))
结束


首先,我们不会对大于一定大小的控制信号中的跳跃做出反应。 c.thr (参数 threshold). 其次,参数 α 限制控制器所有参数的变化率,而变化步长受参数模 ΔK.

image.png

liquid_pressure_regulator_adaptive_pid。工程师模型

朱莉娅的神经调节剂(RNN)

该控制器包含在Julia上运行的循环神经网络的代码,没有高级库。 他的训练发生在"在线",在系统的运行过程中,所以他的工作质量在很大程度上取决于他的权重的初始值。 可能需要应用预训练过程,至少使神经网络在计算开始时管道状态静止时开始产生稳定值。

image.png

liquid_pressure_regulator_neural_test。工程师模型

在C(FNN)中创建神经控制器

让我们启动模型并组装数据集

In [ ]:
Pkg.add("CSV")
In [ ]:
using DataFrames, CSV
In [ ]:
engee.open("$(@__DIR__)/liquid_pressure_regulator.engee")
data = engee.run( "liquid_pressure_regulator" )
Out[0]:
SimulationResult(
    "pressure" => WorkspaceArray{Float64}("liquid_pressure_regulator/pressure")
,
    "error" => WorkspaceArray{Float64}("liquid_pressure_regulator/error")
,
    "set_point" => WorkspaceArray{Float64}("liquid_pressure_regulator/set_point")
,
    "control" => WorkspaceArray{Float64}("liquid_pressure_regulator/control")

)
In [ ]:
function simout_to_df( data )
    vec_names = [i for i in keys(data) if length(data[i].value) > 0];
    df = DataFrame( hcat([collect(data[v]).value for v in vec_names]...), vec_names );
    df.time = collect(data[vec_names[1]]).time;
    return df
end
Out[0]:
simout_to_df (generic function with 1 method)
In [ ]:
df = simout_to_df( data );
CSV.write("Режим 1.csv", df);
plot(
    plot( df.time, df.set_point ), plot( df.time, df.control ), plot( df.time, df.pressure ), layout=(3,1)
)
Out[0]:

让我们更改模型的参数并在不同的场景中运行它。

In [ ]:
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Amplitude"=>"0.001" )
data = engee.run( "liquid_pressure_regulator" )
df = simout_to_df( data );
CSV.write("Режим 2.csv", df);
plot(
    plot( df.time, df.set_point ), plot( df.time, df.control ), plot( df.time, df.pressure ), layout=(3,1)
)
Out[0]:
In [ ]:
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Amplitude"=>"0.0005", "Frequency"=>"0.2" )
data = engee.run( "liquid_pressure_regulator" )
df = simout_to_df( data );
CSV.write("Режим 3.csv", df);
plot(
    plot( df.time, df.set_point ), plot( df.time, df.control ), plot( df.time, df.pressure ), layout=(3,1)
)
Out[0]:

将所有模型参数放回原位

In [ ]:
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Amplitude"=>"0.0005" )
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Frequency"=>"0.1" )

让我们训练一个神经网络来近似几个调节器

该代码允许您准备数据,设置三层全连接神经网络的结构,并对其进行训练,以查找过去20个失配值与下一个控制信号值之间的关系。

In [ ]:
Pkg.add(["Flux", "BSON", "Glob", "MLUtils"])
In [ ]:
using Flux, MLUtils
using CSV, DataFrames
using Statistics, Random
using BSON, Glob

# Инициализируем генератор случайных чисел ради воспроизводимости эксперимента
Random.seed!(42)

# 1. Подготовим данные
function load_and_preprocess_data_fnn()
    # Загрузим все CSV файлы из текущей папки
    files = glob("*.csv")
    dfs = [CSV.read(file, DataFrame, types=Float32) for file in files]
    
    # Совместим данные в одну таблицу
    combined_df = vcat(dfs...)
    
    # Извлечем нужные нам столбцы (time, error, control)
    vtime = combined_df.time
    error = combined_df.error
    control = combined_df.control
    
    # Нормализация данных (очень поможет с ускорением обучения нейросети)
    error_mean, error_std = mean(error), std(error)
    control_mean, control_std = mean(control), std(control)
    
    error_norm = (error .- error_mean) ./ error_std
    control_norm = (control .- control_mean) ./ control_std
    
    # Разделим на небольшие последовательности чтобы обучить RNN
    sequence_length = 20  # сколько прошлых шагов мы учитываем для прогноза сигнала управления
    X = []
    Y = []
    
    for i in 1:(length(vtime)-sequence_length)
        push!(X, error_norm[i:i+sequence_length-1])
        push!(Y, control_norm[i+sequence_length])
    end
    
    # Оформим как массивы
    #X = reshape(hcat(X...), sequence_length, 1, :) # С батчами
    X = hcat(X...)'
    Y = hcat(Y...)'
    
    return (X, Y), (error_mean, error_std, control_mean, control_std)
end


# 2. Определяем структуру модели
function create_fnn_controller(input_size=20, hidden_size=5, output_size=1)
    return Chain(
        Dense(input_size, hidden_size, relu),
        Dense(hidden_size, hidden_size, relu),
        Dense(hidden_size, output_size)
    )
end


# 3. Обучение с новым API Flux
function train_fnn_model(X, Y; epochs=100, batch_size=32)
    # Разделение данных
    split_idx = floor(Int, 0.8 * size(X, 1))
    X_train, Y_train = X[1:split_idx, :], Y[1:split_idx, :]
    X_val, Y_val = X[split_idx+1:end, :], Y[split_idx+1:end, :]
    
    # Создание модели и оптимизатора
    model = create_fnn_controller()
    optimizer = Flux.setup(Adam(0.001), model)
    
    # Функция потерь
    loss(x, y) = Flux.mse(model(x), y)
    
    # Подготовка DataLoader
    train_loader = Flux.DataLoader((X_train', Y_train'), batchsize=batch_size, shuffle=true)
    
    # Цикл обучения
    train_losses = []
    val_losses = []
    
    for epoch in 1:epochs
        # Обучение
        Flux.train!(model, train_loader, optimizer) do m, x, y
            y_pred = m(x)
            Flux.mse(y_pred, y)
        end
        
        # Расчет ошибки
        train_loss = loss(X_train', Y_train')
        val_loss = loss(X_val', Y_val')
        push!(train_losses, train_loss)
        push!(val_losses, val_loss)
        
        # Логирование
        if epochs % 10 == 0
            @info "Epoch $epoch" train_loss val_loss
        end
    end
    
    # Визуализация обучения
    plot(1:epochs, train_losses, label="Training Loss")
    plot!(1:epochs, val_losses, label="Validation Loss")
    xlabel!("Epoch")
    ylabel!("Loss")
    title!("Training Progress")
    
    return model
end

# 4. Оценка модели (без изменений)
function evaluate_fnn_model(model, X, Y, norm_params)
    predictions = model(X')
    
    # Денормализация
    _, _, control_mean, control_std = norm_params
    Y_true = Y .* control_std .+ control_mean
    Y_pred = predictions' .* control_std .+ control_mean
    
    # Расчет метрик
    rmse = sqrt(mean((Y_true - Y_pred).^2))
    println("RMSE: ", rmse)
    
    # Визуализация
    plot(Y_true[1:100], label="True Control Signal")
    plot!(Y_pred[1:100], label="Predicted Control Signal")
    xlabel!("Time Step")
    ylabel!("Control Signal")
    title!("FNN Controller Performance")
end
In [ ]:
# Загрузка данных
(X, Y), norm_params = load_and_preprocess_data_fnn()

# Обучение
model = train_fnn_model(X, Y, epochs=100, batch_size=32)

# Сохранение модели
using BSON
BSON.@save "fnn_controller_v2.bson" model norm_params
Info: Epoch 1
  train_loss = 0.8938878f0
  val_loss = 0.065423325f0
Info: Epoch 2
  train_loss = 0.6930624f0
  val_loss = 0.08745527f0
Info: Epoch 3
  train_loss = 0.6121955f0
  val_loss = 0.0670045f0
Info: Epoch 4
  train_loss = 0.5751492f0
  val_loss = 0.0673555f0
Info: Epoch 5
  train_loss = 0.5150536f0
  val_loss = 0.084629685f0
Info: Epoch 6
  train_loss = 0.43911234f0
  val_loss = 0.060537163f0
Info: Epoch 7
  train_loss = 0.34446087f0
  val_loss = 0.06451357f0
Info: Epoch 8
  train_loss = 0.26789767f0
  val_loss = 0.08551992f0
Info: Epoch 9
  train_loss = 0.21148369f0
  val_loss = 0.10232961f0
Info: Epoch 10
  train_loss = 0.18503676f0
  val_loss = 0.14733629f0
Info: Epoch 11
  train_loss = 0.15013915f0
  val_loss = 0.14483832f0
Info: Epoch 12
  train_loss = 0.12138271f0
  val_loss = 0.11467917f0
Info: Epoch 13
  train_loss = 0.10577717f0
  val_loss = 0.045897737f0
Info: Epoch 14
  train_loss = 0.101480916f0
  val_loss = 0.057284135f0
Info: Epoch 15
  train_loss = 0.10315049f0
  val_loss = 0.065064624f0
Info: Epoch 16
  train_loss = 0.10541139f0
  val_loss = 0.056540746f0
Info: Epoch 17
  train_loss = 0.09681009f0
  val_loss = 0.045904756f0
Info: Epoch 18
  train_loss = 0.093385085f0
  val_loss = 0.0685159f0
Info: Epoch 19
  train_loss = 0.09045938f0
  val_loss = 0.041075245f0
Info: Epoch 20
  train_loss = 0.08972832f0
  val_loss = 0.055446453f0
Info: Epoch 21
  train_loss = 0.09329716f0
  val_loss = 0.03967251f0
Info: Epoch 22
  train_loss = 0.1414877f0
  val_loss = 0.07507982f0
Info: Epoch 23
  train_loss = 0.08920607f0
  val_loss = 0.051499482f0
Info: Epoch 24
  train_loss = 0.0801875f0
  val_loss = 0.03534109f0
Info: Epoch 25
  train_loss = 0.08018934f0
  val_loss = 0.030156119f0
Info: Epoch 26
  train_loss = 0.076051794f0
  val_loss = 0.03466235f0
Info: Epoch 27
  train_loss = 0.07765352f0
  val_loss = 0.056599125f0
Info: Epoch 28
  train_loss = 0.07806299f0
  val_loss = 0.055766143f0
Info: Epoch 29
  train_loss = 0.07484163f0
  val_loss = 0.0389787f0
Info: Epoch 30
  train_loss = 0.07139521f0
  val_loss = 0.033857666f0
Info: Epoch 31
  train_loss = 0.08287221f0
  val_loss = 0.042753786f0
Info: Epoch 32
  train_loss = 0.072352625f0
  val_loss = 0.054598782f0
Info: Epoch 33
  train_loss = 0.0717976f0
  val_loss = 0.031754486f0
Info: Epoch 34
  train_loss = 0.069250494f0
  val_loss = 0.043800574f0
Info: Epoch 35
  train_loss = 0.068584874f0
  val_loss = 0.035793144f0
Info: Epoch 36
  train_loss = 0.098963626f0
  val_loss = 0.085103f0
Info: Epoch 37
  train_loss = 0.067444935f0
  val_loss = 0.06019559f0
Info: Epoch 38
  train_loss = 0.080243886f0
  val_loss = 0.027890693f0
Info: Epoch 39
  train_loss = 0.0734144f0
  val_loss = 0.038119968f0
Info: Epoch 40
  train_loss = 0.06261252f0
  val_loss = 0.036735766f0
Info: Epoch 41
  train_loss = 0.06626095f0
  val_loss = 0.04766762f0
Info: Epoch 42
  train_loss = 0.061304063f0
  val_loss = 0.038307376f0
Info: Epoch 43
  train_loss = 0.06375473f0
  val_loss = 0.049757667f0
Info: Epoch 44
  train_loss = 0.06929615f0
  val_loss = 0.031478032f0
Info: Epoch 45
  train_loss = 0.06482848f0
  val_loss = 0.041360646f0
Info: Epoch 46
  train_loss = 0.08152386f0
  val_loss = 0.031685207f0
Info: Epoch 47
  train_loss = 0.06899811f0
  val_loss = 0.03166399f0
Info: Epoch 48
  train_loss = 0.06471846f0
  val_loss = 0.057980362f0
Info: Epoch 49
  train_loss = 0.073723935f0
  val_loss = 0.0419289f0
Info: Epoch 50
  train_loss = 0.062251564f0
  val_loss = 0.05569823f0
Info: Epoch 51
  train_loss = 0.08304988f0
  val_loss = 0.047913346f0
Info: Epoch 52
  train_loss = 0.13036466f0
  val_loss = 0.06255697f0
Info: Epoch 53
  train_loss = 0.0668965f0
  val_loss = 0.023209779f0
Info: Epoch 54
  train_loss = 0.0641162f0
  val_loss = 0.026421405f0
Info: Epoch 55
  train_loss = 0.0606432f0
  val_loss = 0.03451042f0
Info: Epoch 56
  train_loss = 0.05806036f0
  val_loss = 0.047697715f0
Info: Epoch 57
  train_loss = 0.08080194f0
  val_loss = 0.030021664f0
Info: Epoch 58
  train_loss = 0.075242944f0
  val_loss = 0.027072791f0
Info: Epoch 59
  train_loss = 0.07018888f0
  val_loss = 0.03701796f0
Info: Epoch 60
  train_loss = 0.20293671f0
  val_loss = 0.039905872f0
Info: Epoch 61
  train_loss = 0.08879273f0
  val_loss = 0.068481795f0
Info: Epoch 62
  train_loss = 0.06276142f0
  val_loss = 0.044078235f0
Info: Epoch 63
  train_loss = 0.060167953f0
  val_loss = 0.030940093f0
Info: Epoch 64
  train_loss = 0.05863377f0
  val_loss = 0.033402573f0
Info: Epoch 65
  train_loss = 0.09663699f0
  val_loss = 0.033131924f0
Info: Epoch 66
  train_loss = 0.053091682f0
  val_loss = 0.026231134f0
Info: Epoch 67
  train_loss = 0.053288568f0
  val_loss = 0.03478807f0
Info: Epoch 68
  train_loss = 0.062837504f0
  val_loss = 0.032456074f0
Info: Epoch 69
  train_loss = 0.14382939f0
  val_loss = 0.039194208f0
Info: Epoch 70
  train_loss = 0.057213098f0
  val_loss = 0.03217173f0
Info: Epoch 71
  train_loss = 0.059472032f0
  val_loss = 0.050639316f0
Info: Epoch 72
  train_loss = 0.061687153f0
  val_loss = 0.041273717f0
Info: Epoch 73
  train_loss = 0.057540856f0
  val_loss = 0.043549165f0
Info: Epoch 74
  train_loss = 0.06940068f0
  val_loss = 0.030464195f0
Info: Epoch 75
  train_loss = 0.055994995f0
  val_loss = 0.06194949f0
Info: Epoch 76
  train_loss = 0.06127405f0
  val_loss = 0.035824254f0
Info: Epoch 77
  train_loss = 0.08695382f0
  val_loss = 0.03958839f0
Info: Epoch 78
  train_loss = 0.068831f0
  val_loss = 0.044648975f0
Info: Epoch 79
  train_loss = 0.06480649f0
  val_loss = 0.049541153f0
Info: Epoch 80
  train_loss = 0.054308545f0
  val_loss = 0.03448179f0
Info: Epoch 81
  train_loss = 0.062011868f0
  val_loss = 0.033419278f0
Info: Epoch 82
  train_loss = 0.058451455f0
  val_loss = 0.041000187f0
Info: Epoch 83
  train_loss = 0.056441877f0
  val_loss = 0.040258046f0
Info: Epoch 84
  train_loss = 0.065997876f0
  val_loss = 0.02149929f0
Info: Epoch 85
  train_loss = 0.14329454f0
  val_loss = 0.024663093f0
Info: Epoch 86
  train_loss = 0.07271247f0
  val_loss = 0.057003014f0
Info: Epoch 87
  train_loss = 0.05535151f0
  val_loss = 0.050590448f0
Info: Epoch 88
  train_loss = 0.10478283f0
  val_loss = 0.041117538f0
Info: Epoch 89
  train_loss = 0.05750017f0
  val_loss = 0.027143845f0
Info: Epoch 90
  train_loss = 0.05710252f0
  val_loss = 0.029968357f0
Info: Epoch 91
  train_loss = 0.058521483f0
  val_loss = 0.04156654f0
Info: Epoch 92
  train_loss = 0.056501415f0
  val_loss = 0.031160006f0
Info: Epoch 93
  train_loss = 0.06948501f0
  val_loss = 0.036111105f0
Info: Epoch 94
  train_loss = 0.056522947f0
  val_loss = 0.042170122f0
Info: Epoch 95
  train_loss = 0.07270187f0
  val_loss = 0.045176893f0
Info: Epoch 96
  train_loss = 0.09828934f0
  val_loss = 0.060680926f0
Info: Epoch 97
  train_loss = 0.059696835f0
  val_loss = 0.03424492f0
Info: Epoch 98
  train_loss = 0.07078414f0
  val_loss = 0.032400988f0
Info: Epoch 99
  train_loss = 0.05980099f0
  val_loss = 0.041180395f0
Info: Epoch 100
  train_loss = 0.05841915f0
  val_loss = 0.038568825f0
In [ ]:
# Оценка
evaluate_fnn_model(model, X, Y, norm_params)
RMSE: 0.00137486
Out[0]:

让我们为一个完全连接的神经网络生成C代码.

让我们应用库 Symbolics 为了生成代码,我们将进行一些改进,以便可以从块中调用它。 C Function.

In [ ]:
Pkg.add("Symbolics")
In [ ]:
using Symbolics
@variables X[1:20]
c_model_code = build_function( model( collect(X) ), collect(X); target=Symbolics.CTarget(), fname="neural_net", lhsname=:y, rhsnames=[:x] )
Out[0]:
"#include <math.h>\nvoid neural_net(double* y, const double* x) {\n  y[0] = -0.17745794f0 + 0.50926423f0 * ifelse(-0.049992073f0 + -0.23742697f0 * ifelse(-1.8439064f0 + -0.54531264f0 * x[0] + 0.69540715f0 * x[9] + 0.13691874f0 * x[10] + 0.4638261f0 * x[11] + 0.68296975f0 *" ⋯ 48502 bytes ⋯ "8941267f0 * x[16] + -0.20798773f0 * x[17] + 0.046037998f0 * x[18] + 0.2163552f0 * x[1] + 0.17670809f0 * x[19] + 0.48885685f0 * x[2] + 0.33982033f0 * x[3] + 0.23923202f0 * x[4] + 0.44608107f0 * x[5] + -0.22155789f0 * x[6] + 0.18008044f0 * x[7] + 0.3575259f0 * x[8]));\n}\n"
In [ ]:
# Заменим несколько инструкций в коде
c_fixed_model_code = replace( c_model_code,
                              "double" => "float",
                              "f0" => "f",
                              "  y[0]"=>"y" );

println( c_fixed_model_code[1:200] )
println("...")
#include <math.h>
void neural_net(float* y, const float* x) {
y = -0.17745794f + 0.50926423f * ifelse(-0.049992073f + -0.23742697f * ifelse(-1.8439064f + -0.54531264f * x[0] + 0.69540715f * x[9] + 0.1
...
In [ ]:
# Оставим только третью строчку от этого кода
c_fixed_model_code = split(c_fixed_model_code, "\n")[3]

c_code_standalone = """
float ifelse(bool cond, float a, float b) { return cond ? a : b; }

$c_fixed_model_code""";

println( c_code_standalone[1:200] )
println("...")
float ifelse(bool cond, float a, float b) { return cond ? a : b; }

y = -0.17745794f + 0.50926423f * ifelse(-0.049992073f + -0.23742697f * ifelse(-1.8439064f + -0.54531264f * x[0] + 0.69540715f * x[9]
...

将代码保存到文件中

In [ ]:
open("$(@__DIR__)/neural_net_fc.c", "w") do f
    println( f, "$c_code_standalone" )
end
image.png

liquid_pressure_regulator_neural_fc。工程师模型

结论

显然,我们使用过小的例子训练神经网络的时间太短了。 有许多步骤应该可以提高这种控制器的质量-例如,将更多信号应用于神经网络的输入,或者简单地组装更大的数据集以获得更好的离线学习。 我们已经展示了如何接近神经调节器的开发以及如何在Engee模型环境中进行测试。