
Approaches to the design of neural regulators

The webinar "Designing advanced types of regulators for control objects in Engee" consisted of several examples:

  1. Smartphone camera autofocus control

  2. Application of fuzzy controller for pressure control

  3. Implementation of adaptive and neural network regulators

This project contains the accompanying materials for the third part of the webinar; the other parts can be studied in the community via the links above.

Example with a regular PID controller

This is the basic model in which we will be replacing the regulator.


Model liquid_pressure_regulator.engee
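
For reference, the classical discrete-time PID law that such a block computes can be sketched in a few lines of Julia (a textbook sketch, not the exact internals of the Engee block):

    # One step of a textbook discrete PID law (illustrative sketch)
    function pid_step(Kp, Ki, Kd, e, ie, pe, dt)
        ie = ie + e*dt          # accumulate the error integral
        de = (e - pe)/dt        # error derivative
        u  = Kp*e + Ki*ie + Kd*de
        return u, ie, e         # control signal, updated integral, stored error
    end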

Adaptive PID regulator

The first thing we will do is set up an adaptive PID controller whose coefficients are updated at each time step by the following procedure:

    if abs(e) > c.thr                                           # adapt only when the error is large enough
        c.Kp = max(0, c.Kp + clamp(c.α*e*c.pe,  -c.ΔK, c.ΔK))  # each step clamped to ±ΔK,
        c.Ki = max(0, c.Ki + clamp(c.α*e*c.pie, -c.ΔK, c.ΔK))  # gains kept non-negative
        c.Kd = max(0, c.Kd + clamp(c.α*e*c.pde, -c.ΔK, c.ΔK))
    end

First, the gains are adapted only when the absolute error exceeds the threshold c.thr (parameter threshold); smaller errors leave the gains unchanged. Second, the parameter α limits the rate of change of all controller parameters, while each individual update step is limited in absolute value by ΔK.
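
Wrapped into a self-contained Julia sketch, the adaptation step might look as follows; the mutable struct and the stored P/I/D error terms (pe, pie, pde) are assumptions made for illustration, not the exact code of the block:

    # Illustrative state for the adaptive PID (field layout is an assumption)
    mutable struct AdaptivePID
        Kp::Float64; Ki::Float64; Kd::Float64    # current gains
        α::Float64                               # adaptation rate
        ΔK::Float64                              # maximum gain change per step
        thr::Float64                             # error threshold (dead zone)
        pe::Float64; pie::Float64; pde::Float64  # stored P/I/D error terms
    end

    function adapt!(c::AdaptivePID, e)
        if abs(e) > c.thr
            c.Kp = max(0, c.Kp + clamp(c.α*e*c.pe,  -c.ΔK, c.ΔK))
            c.Ki = max(0, c.Ki + clamp(c.α*e*c.pie, -c.ΔK, c.ΔK))
            c.Kd = max(0, c.Kd + clamp(c.α*e*c.pde, -c.ΔK, c.ΔK))
        end
        return c
    end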


Model liquid_pressure_regulator_adaptive_pid.engee

Neural regulator in Julia (RNN)

This regulator contains the code of a recurrent neural network implemented in Julia without high-level libraries. It is trained online, while the system is running, so the quality of its work depends heavily on the initial values of its weights. A pre-training procedure may be necessary, at least so that the network produces a stable output for the stationary state of the pipeline at the start of the simulation.
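
To make the idea concrete, a minimal online-trained recurrent cell could be sketched as below; the sizes, the tanh cell, and the crude weight update are illustrative assumptions, not the code inside the block:

    # Illustrative online RNN regulator (not the model's actual code)
    mutable struct RNNReg
        Wxh::Vector{Float64}   # input-to-hidden weights (input is the scalar error)
        Whh::Matrix{Float64}   # recurrent hidden-to-hidden weights
        Why::Vector{Float64}   # hidden-to-output weights
        h::Vector{Float64}     # hidden state
        η::Float64             # online learning rate
    end

    function step!(r::RNNReg, e)
        r.h = tanh.(r.Wxh .* e .+ r.Whh * r.h)   # recurrent state update
        u = r.Why' * r.h                         # control output
        r.Why .+= r.η * e .* r.h                 # crude online correction of the output weights
        return u
    end

    # n = 8; r = RNNReg(0.1randn(n), 0.1randn(n,n), 0.1randn(n), zeros(n), 1e-3)
    # u = step!(r, 0.05)   # one control step for mismatch e = 0.05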


Model liquid_pressure_regulator_neural_test.engee

Creating a neural regulator in C (FNN)

Let's run the model and build the dataset

In [ ]:
import Pkg
Pkg.add("CSV")
In [ ]:
using DataFrames, CSV
In [ ]:
data = engee.run( "liquid_pressure_regulator" )
Out[0]:
SimulationResult(
    "pressure" => WorkspaceArray{Float64}("liquid_pressure_regulator/pressure"),
    "error" => WorkspaceArray{Float64}("liquid_pressure_regulator/error"),
    "set_point" => WorkspaceArray{Float64}("liquid_pressure_regulator/set_point"),
    "control" => WorkspaceArray{Float64}("liquid_pressure_regulator/control")
)
In [ ]:
function simout_to_df( data )
    # keep only the signals that actually recorded samples
    vec_names = [i for i in keys(data) if length(data[i].value) > 0];
    # one column per logged signal
    df = DataFrame( hcat([collect(data[v]).value for v in vec_names]...), vec_names );
    # all signals share a common time base; take it from the first one
    df.time = collect(data[vec_names[1]]).time;
    return df
end
Out[0]:
simout_to_df (generic function with 1 method)
In [ ]:
df = simout_to_df( data );
CSV.write("Режим 1.csv", df);
plot(
    plot( df.time, df.set_point ), plot( df.time, df.control ), plot( df.time, df.pressure ), layout=(3,1)
)
Out[0]:

Change the parameters of the model and run it in another scenario.

In [ ]:
# "Сигнал утечки" ("Leak signal") is the leak source block in the model
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Amplitude"=>"0.001" )
data = engee.run( "liquid_pressure_regulator" )
df = simout_to_df( data );
CSV.write("Режим 2.csv", df);
plot(
    plot( df.time, df.set_point ), plot( df.time, df.control ), plot( df.time, df.pressure ), layout=(3,1)
)
Out[0]:
In [ ]:
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Amplitude"=>"0.0005", "Frequency"=>"0.2" )
data = engee.run( "liquid_pressure_regulator" )
df = simout_to_df( data );
CSV.write("Режим 3.csv", df);
plot(
    plot( df.time, df.set_point ), plot( df.time, df.control ), plot( df.time, df.pressure ), layout=(3,1)
)
Out[0]:

Let's put all the model parameters back in place

In [ ]:
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Amplitude"=>"0.0005" )
engee.set_param!( "liquid_pressure_regulator/Сигнал утечки", "Frequency"=>"0.1" )

Train the neural network to approximate the regulator in several operating modes

This code prepares the data, defines the structure of a three-layer fully-connected neural network and trains it to find the relationship between the past 20 mismatch values and the next control signal value.

In [ ]:
Pkg.add(["BSON", "Glob", "MLUtils"])
In [ ]:
using Flux, MLUtils
using CSV, DataFrames
using Statistics, Random
using BSON, Glob

# Seed the random number generator for reproducibility
Random.seed!(42)

# 1. Prepare the data
function load_and_preprocess_data_fnn()
    # Load all CSV files from the current folder
    files = glob("*.csv")
    dfs = [CSV.read(file, DataFrame, types=Float32) for file in files]
    
    # Combine the data into a single table
    combined_df = vcat(dfs...)
    
    # Extract the columns we need (time, error, control)
    vtime = combined_df.time
    error = combined_df.error
    control = combined_df.control
    
    # Normalize the data (this greatly speeds up network training)
    error_mean, error_std = mean(error), std(error)
    control_mean, control_std = mean(control), std(control)
    
    error_norm = (error .- error_mean) ./ error_std
    control_norm = (control .- control_mean) ./ control_std
    
    # Split into short sliding windows for training the network
    sequence_length = 20  # how many past steps are used to predict the control signal
    X = []
    Y = []
    
    for i in 1:(length(vtime)-sequence_length)
        push!(X, error_norm[i:i+sequence_length-1])
        push!(Y, control_norm[i+sequence_length])
    end
    
    # Arrange as arrays
    #X = reshape(hcat(X...), sequence_length, 1, :) # batched variant
    X = hcat(X...)'
    Y = hcat(Y...)'
    
    return (X, Y), (error_mean, error_std, control_mean, control_std)
end


# 2. Define the model structure
function create_fnn_controller(input_size=20, hidden_size=5, output_size=1)
    return Chain(
        Dense(input_size, hidden_size, relu),
        Dense(hidden_size, hidden_size, relu),
        Dense(hidden_size, output_size)
    )
end


# 3. Training with the new Flux API
function train_fnn_model(X, Y; epochs=100, batch_size=32)
    # Train/validation split
    split_idx = floor(Int, 0.8 * size(X, 1))
    X_train, Y_train = X[1:split_idx, :], Y[1:split_idx, :]
    X_val, Y_val = X[split_idx+1:end, :], Y[split_idx+1:end, :]
    
    # Create the model and the optimizer
    model = create_fnn_controller()
    optimizer = Flux.setup(Adam(0.001), model)
    
    # Loss function
    loss(x, y) = Flux.mse(model(x), y)
    
    # Set up the DataLoader (features in columns, as Flux expects)
    train_loader = Flux.DataLoader((X_train', Y_train'), batchsize=batch_size, shuffle=true)
    
    # Training loop
    train_losses = []
    val_losses = []
    
    for epoch in 1:epochs
        # Training
        Flux.train!(model, train_loader, optimizer) do m, x, y
            y_pred = m(x)
            Flux.mse(y_pred, y)
        end
        
        # Compute the losses
        train_loss = loss(X_train', Y_train')
        val_loss = loss(X_val', Y_val')
        push!(train_losses, train_loss)
        push!(val_losses, val_loss)
        
        # Logging
        if epoch % 10 == 0
            @info "Epoch $epoch" train_loss val_loss
        end
    end
    
    # Plot the training progress
    plot(1:epochs, train_losses, label="Training Loss")
    plot!(1:epochs, val_losses, label="Validation Loss")
    xlabel!("Epoch")
    ylabel!("Loss")
    title!("Training Progress")
    
    return model
end

# 4. Evaluate the model
function evaluate_fnn_model(model, X, Y, norm_params)
    predictions = model(X')
    
    # Denormalize the outputs
    _, _, control_mean, control_std = norm_params
    Y_true = Y .* control_std .+ control_mean
    Y_pred = predictions' .* control_std .+ control_mean
    
    # Compute the metric (RMSE)
    rmse = sqrt(mean((Y_true - Y_pred).^2))
    println("RMSE: ", rmse)
    
    # Visualization
    plot(Y_true[1:100], label="True Control Signal")
    plot!(Y_pred[1:100], label="Predicted Control Signal")
    xlabel!("Time Step")
    ylabel!("Control Signal")
    title!("FNN Controller Performance")
end
Out[0]:
evaluate_fnn_model (generic function with 1 method)
In [ ]:
# Load the data
(X, Y), norm_params = load_and_preprocess_data_fnn()

# Train
model = train_fnn_model(X, Y, epochs=100, batch_size=32)

# Save the model
using BSON
BSON.@save "fnn_controller_v2.bson" model norm_params
In [ ]:
# Evaluate the model
evaluate_fnn_model(model, X, Y, norm_params)
RMSE: 0.00137486
Out[0]:

Let's generate C code for the fully-connected neural network

Let's use the Symbolics library to generate the code and make some modifications so that it can be called from the C Function block.

In [ ]:
Pkg.add("Symbolics")
In [ ]:
using Symbolics
@variables X[1:20]
c_model_code = build_function( model( collect(X) ), collect(X); target=Symbolics.CTarget(), fname="neural_net", lhsname=:y, rhsnames=[:x] )
Out[0]:
"#include <math.h>\nvoid neural_net(double* y, const double* x) {\n  y[0] = -0.17745794f0 + 0.50926423f0 * ifelse(-0.049992073f0 + -0.23742697f0 * ifelse(-1.8439064f0 + -0.54531264f0 * x[0] + 0.69540715f0 * x[9] + 0.13691874f0 * x[10] + 0.4638261f0 * x[11] + 0.68296975f0 *" ⋯ 48502 bytes ⋯ "8941267f0 * x[16] + -0.20798773f0 * x[17] + 0.046037998f0 * x[18] + 0.2163552f0 * x[1] + 0.17670809f0 * x[19] + 0.48885685f0 * x[2] + 0.33982033f0 * x[3] + 0.23923202f0 * x[4] + 0.44608107f0 * x[5] + -0.22155789f0 * x[6] + 0.18008044f0 * x[7] + 0.3575259f0 * x[8]));\n}\n"
In [ ]:
# Replace a few tokens in the generated code:
# double -> float, Julia's Float32 suffix "f0" -> C's "f", "y[0]" -> "y"
c_fixed_model_code = replace( c_model_code,
                              "double" => "float",
                              "f0" => "f",
                              "  y[0]"=>"y" );

println( c_fixed_model_code[1:200] )
println("...")
#include <math.h>
void neural_net(float* y, const float* x) {
y = -0.17745794f + 0.50926423f * ifelse(-0.049992073f + -0.23742697f * ifelse(-1.8439064f + -0.54531264f * x[0] + 0.69540715f * x[9] + 0.1
...
In [ ]:
# Keep only the third line of this code (the bare assignment expression)
c_fixed_model_code = split(c_fixed_model_code, "\n")[3]

c_code_standalone = """
float ifelse(bool cond, float a, float b) { return cond ? a : b; }

$c_fixed_model_code""";

println( c_code_standalone[1:200] )
println("...")
float ifelse(bool cond, float a, float b) { return cond ? a : b; }

y = -0.17745794f + 0.50926423f * ifelse(-0.049992073f + -0.23742697f * ifelse(-1.8439064f + -0.54531264f * x[0] + 0.69540715f * x[9]
...

Save the code to a file

In [ ]:
open("$(@__DIR__)/neural_net_fc.c", "w") do f
    println( f, "$c_code_standalone" )
end
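
Inside the C Function block, the generated expression assigns the output y from the input array x, so the block code still has to maintain a sliding window of the last 20 error samples and undo the normalization applied during training. A rough sketch under these assumptions (the helper name control_step and the mean/std constants are placeholders; the real values come from norm_params):

    #include <stdbool.h>

    /* helper used by the generated expression */
    float ifelse(bool cond, float a, float b) { return cond ? a : b; }

    #define ERROR_MEAN   0.0f   /* placeholder: substitute values from norm_params */
    #define ERROR_STD    1.0f
    #define CONTROL_MEAN 0.0f
    #define CONTROL_STD  1.0f

    static float x[20];          /* sliding window of normalized errors */

    float control_step(float e)
    {
        float y;
        for (int i = 0; i < 19; ++i) x[i] = x[i + 1];   /* shift the window */
        x[19] = (e - ERROR_MEAN) / ERROR_STD;           /* newest normalized error */

        /* paste the generated "y = ...;" line from neural_net_fc.c here */
        y = 0.0f;                                       /* placeholder for that expression */

        return y * CONTROL_STD + CONTROL_MEAN;          /* denormalized control signal */
    }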


Model liquid_pressure_regulator_neural_fc.engee

Conclusion

Obviously, we trained the neural network for too short a time on too small an example. Many steps could improve the quality of such a regulator, for example feeding more signals to the neural network input, or simply building a larger dataset for better offline training. We have shown how to approach the development of a neural regulator and how to test it in the Engee modeling environment.
