Approaches to designing a neural controller
The webinar "Designing advanced types of controllers for plants in Engee" includes several examples.
This project contains the companion materials for the third part of the webinar; the remaining parts are available in the community via the links above.
Example with a conventional PID controller
This is the baseline model in which we will replace the controller.

Model liquid_pressure_regulator.engee
Adaptive PID controller
First, we do not react to jumps in the control signal larger than a certain magnitude c_thr (the threshold parameter). Second, the parameter α limits the rate of change of all controller parameters, and each change step is additionally bounded by the parameter ΔK.
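As a rough illustration of these two guards, here is a minimal Julia sketch of one adaptation step; the function name, the gradient-like update and the default values are hypothetical, not taken from the model:
# Minimal sketch of one guarded adaptation step (hypothetical update rule)
function adapt_gains(K, grad, u, u_prev; c_thr=0.5, α=0.05, ΔK=0.01)
    # First guard: ignore control-signal jumps larger than c_thr
    abs(u - u_prev) > c_thr && return K
    # Second guard: α limits the rate of change, and each step is clamped to ±ΔK
    return K .+ clamp.(α .* grad, -ΔK, ΔK)
end
K = adapt_gains([2.0, 0.5, 0.1], [0.3, -0.1, 0.02], 1.02, 1.0)  # e.g. K = [Kp, Ki, Kd]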

Liquid pressure regulator model
Model liquid_pressure_regulator_adaptive_pid.engee
Neural controller in Julia (RNN)
This controller contains the code of a recurrent neural network implemented in Julia without high-level libraries. It is trained "online" while the system runs, so the quality of its work depends strongly on the initial values of the weights. A pre-training procedure may be needed, at least so that at the start of the computation the network already produces stable values in the steady state.
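For orientation, here is a minimal sketch of what such a hand-written, online-trained recurrent cell can look like; the layer sizes, learning rate and output-layer-only update are illustrative simplifications, not the code in the model:
# Hand-rolled recurrent cell with a one-step online SGD update (illustrative)
mutable struct TinyRNN
    Wx::Matrix{Float64}; Wh::Matrix{Float64}; Wy::Matrix{Float64}
    h::Vector{Float64}                       # hidden state kept between steps
end
TinyRNN(nin, nh) = TinyRNN(0.1randn(nh, nin), 0.1randn(nh, nh), 0.1randn(1, nh), zeros(nh))
function step!(net::TinyRNN, x, target; η=1e-3)
    net.h = tanh.(net.Wx * x .+ net.Wh * net.h)   # recurrent state update
    u = (net.Wy * net.h)[1]                       # control output
    net.Wy .-= η * (u - target) .* net.h'         # online update of the output layer
    return u
end
net = TinyRNN(1, 8)
u = step!(net, [0.2], 0.1)   # one step: error input, reference control as target
With random initial weights the first outputs are essentially noise, which is exactly why the pre-training mentioned above helps.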

Model liquid_pressure_regulator_neural_test.engee
Creating a neural controller in C (FNN)
Let's run the model and build a dataset
import Pkg; Pkg.add("CSV")
using DataFrames, CSV
using Plots  # plot() is used below
data = engee.run( "liquid_pressure_regulator" )
# Convert Engee simulation results into a DataFrame
function simout_to_df( data )
    # Keep only the logged signals that contain at least one sample
    vec_names = [i for i in keys(data) if length(data[i].value) > 0];
    # Stack the signal values column-wise and label the columns
    df = DataFrame( hcat([collect(data[v]).value for v in vec_names]...), vec_names );
    df.time = collect(data[vec_names[1]]).time;
    return df
end
df = simout_to_df( data );
CSV.write("Режим 1.csv", df);
plot(
    plot( df.time, df.set_point ), plot( df.time, df.control ), plot( df.time, df.pressure ), layout=(3,1)
)
Let's change the model parameters and run it in a different scenario.
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Amplitude"=>"0.001" )
data = engee.run( "liquid_pressure_regulator" )
df = simout_to_df( data );
CSV.write("Режим 2.csv", df);
plot(
    plot( df.time, df.set_point ), plot( df.time, df.control ), plot( df.time, df.pressure ), layout=(3,1)
)
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Amplitude"=>"0.0005", "Frequency"=>"0.2" )
data = engee.run( "liquid_pressure_regulator" )
df = simout_to_df( data );
CSV.write("Режим 3.csv", df);
plot(
    plot( df.time, df.set_point ), plot( df.time, df.control ), plot( df.time, df.pressure ), layout=(3,1)
)
Let's restore all the model parameters to their original values
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Amplitude"=>"0.0005" )
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Frequency"=>"0.1" )
Training a neural network to approximate several controllers
This code prepares the data, defines the structure of a three-layer fully connected neural network, and trains it to find the relationship between the past 20 error values and the next value of the control signal.
Pkg.add(["BSON", "Glob", "MLUtils"])
using Flux, MLUtils
using CSV, DataFrames
using Statistics, Random
using BSON, Glob
# Seed the random number generator for reproducibility
Random.seed!(42)
# 1. Prepare the data
function load_and_preprocess_data_fnn()
    # Load all CSV files from the current folder
    files = glob("*.csv")
    dfs = [CSV.read(file, DataFrame, types=Float32) for file in files]
    # Combine the data into a single table
    combined_df = vcat(dfs...)
    # Extract the columns we need (time, error, control)
    vtime = combined_df.time
    error = combined_df.error
    control = combined_df.control
    # Normalize the data (greatly speeds up network training)
    error_mean, error_std = mean(error), std(error)
    control_mean, control_std = mean(control), std(control)
    error_norm = (error .- error_mean) ./ error_std
    control_norm = (control .- control_mean) ./ control_std
    # Split into short sequences for training
    sequence_length = 20 # how many past steps we use to predict the control signal
    X = []
    Y = []
    for i in 1:(length(vtime)-sequence_length)
        push!(X, error_norm[i:i+sequence_length-1])
        push!(Y, control_norm[i+sequence_length])
    end
    # Arrange as arrays
    #X = reshape(hcat(X...), sequence_length, 1, :) # with batches
    X = hcat(X...)'
    Y = hcat(Y...)'
    return (X, Y), (error_mean, error_std, control_mean, control_std)
end
# 2. Define the model structure
function create_fnn_controller(input_size=20, hidden_size=5, output_size=1)
    return Chain(
        Dense(input_size, hidden_size, relu),
        Dense(hidden_size, hidden_size, relu),
        Dense(hidden_size, output_size)
    )
end
# 3. Training with the new Flux API
function train_fnn_model(X, Y; epochs=100, batch_size=32)
    # Train/validation split
    split_idx = floor(Int, 0.8 * size(X, 1))
    X_train, Y_train = X[1:split_idx, :], Y[1:split_idx, :]
    X_val, Y_val = X[split_idx+1:end, :], Y[split_idx+1:end, :]
    # Create the model and the optimizer
    model = create_fnn_controller()
    optimizer = Flux.setup(Adam(0.001), model)
    # Loss function
    loss(x, y) = Flux.mse(model(x), y)
    # Set up the DataLoader
    train_loader = Flux.DataLoader((X_train', Y_train'), batchsize=batch_size, shuffle=true)
    # Training loop
    train_losses = []
    val_losses = []
    for epoch in 1:epochs
        # Training
        Flux.train!(model, train_loader, optimizer) do m, x, y
            y_pred = m(x)
            Flux.mse(y_pred, y)
        end
        # Compute the losses
        train_loss = loss(X_train', Y_train')
        val_loss = loss(X_val', Y_val')
        push!(train_losses, train_loss)
        push!(val_losses, val_loss)
        # Logging every 10th epoch
        if epoch % 10 == 0
            @info "Epoch $epoch" train_loss val_loss
        end
    end
    # Visualize training progress
    plot(1:epochs, train_losses, label="Training Loss")
    plot!(1:epochs, val_losses, label="Validation Loss")
    xlabel!("Epoch")
    ylabel!("Loss")
    title!("Training Progress")
    return model
end
# 4. Evaluate the model
function evaluate_fnn_model(model, X, Y, norm_params)
    predictions = model(X')
    # De-normalize
    _, _, control_mean, control_std = norm_params
    Y_true = Y .* control_std .+ control_mean
    Y_pred = predictions' .* control_std .+ control_mean
    # Compute the metric
    rmse = sqrt(mean((Y_true - Y_pred).^2))
    println("RMSE: ", rmse)
    # Visualization
    plot(Y_true[1:100], label="True Control Signal")
    plot!(Y_pred[1:100], label="Predicted Control Signal")
    xlabel!("Time Step")
    ylabel!("Control Signal")
    title!("FNN Controller Performance")
end
# Load the data
(X, Y), norm_params = load_and_preprocess_data_fnn()
# Train
model = train_fnn_model(X, Y, epochs=100, batch_size=32)
# Save the model
using BSON
BSON.@save "fnn_controller_v2.bson" model norm_params
# Evaluate
evaluate_fnn_model(model, X, Y, norm_params)
Let's generate C code for the fully connected neural network
We will use the Symbolics library to generate the code, making a few changes so that it can be called from a C Function block.
Pkg.add("Symbolics")
using Symbolics
@variables X[1:20]
c_model_code = build_function( model( collect(X) ), collect(X); target=Symbolics.CTarget(), fname="neural_net", lhsname=:y, rhsnames=[:x] )
# Replace a few constructs in the generated code
c_fixed_model_code = replace( c_model_code,
    "double" => "float",
    "f0" => "f",
    " y[0]" => "y" );
println( c_fixed_model_code[1:200] )
println("...")
# Keep only the third line of this code (the assignment to y)
c_fixed_model_code = split(c_fixed_model_code, "\n")[3]
c_code_standalone = """
float ifelse(bool cond, float a, float b) { return cond ? a : b; }
$c_fixed_model_code""";
println( c_code_standalone[1:200] )
println("...")
Save the code to a file
open("$(@__DIR__)/neural_net_fc.c", "w") do f
    println( f, "$c_code_standalone" )
end
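Optionally (a hypothetical extension, not part of the webinar materials), the same write step can wrap the generated y = ...; statement in a self-contained C stepping function that keeps the 20-sample error history and applies the normalization from norm_params:
# Hypothetical wrapper: embed the generated statement in a C stepping function
# with a shift register for the last 20 normalized error samples
err_mean, err_std, ctl_mean, ctl_std = norm_params
c_step_code = """
float ifelse(bool cond, float a, float b) { return cond ? a : b; }

float neural_step(float error)
{
    static float x[20];                /* last 20 normalized error samples */
    float y = 0.0f;
    for (int i = 0; i < 19; ++i)
        x[i] = x[i + 1];               /* shift the history buffer */
    x[19] = (error - $(err_mean)f) / $(err_std)f;
    $c_fixed_model_code
    return y * $(ctl_std)f + $(ctl_mean)f;   /* de-normalize the output */
}
"""
open("$(@__DIR__)/neural_net_step.c", "w") do f
    println(f, c_step_code)
end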

Liquid pressure regulator model
Model liquid_pressure_regulator_neural_fc.engee
Conclusion
Clearly, we trained the neural network for too short a time and on too few examples. Many steps could improve the quality of this controller: for example, feeding additional signals into the neural network, or simply collecting a larger dataset for better offline training. We have shown how a neural controller can be developed and how it can be tested in the Engee modeling environment.