Engee documentation
Notebook

Radar target classification using machine learning and deep learning

This example demonstrates approaches to classifying radar data using machine learning and deep learning techniques. The problem is solved in two ways: 1. Machine learning: a support vector machine (SVM). 2. Deep learning: SqueezeNet and an LSTM.

Object classification is an important task for radar systems. This example addresses the problem of determining whether the radar return reflected off an object comes from a cylinder or a cone. Although the example uses synthetic data, the approach is applicable to real radar measurements.

SVM training

The first step is to import the packages that will be used

The file Install_packages.jl contains the packages required by the script; they are added to the working environment. The import_packages file imports all of the installed packages. In the cells below, we execute both of them

In [ ]:
include("$(@__DIR__)/Install_packages.jl")
In [ ]:
include("$(@__DIR__)/import_packages.jl")
init()

Data loading

The data used to train the models comes from a demonstration example on modeling the radar cross section (RCS) of objects.

Let's specify the path to the data

In [ ]:
data_path = "$(@__DIR__)/gen_data.csv"

The CSV file contains data for a cone and a cylinder. The first 100 columns are cylinders, the last 100 are cones. Let's split the data into training and test sets

In [ ]:
# Read the data from the CSV file into a DataFrame
df = CSV.read(data_path, DataFrame)
data = Matrix(df)
cyl_train = data[:, 1:75]    # first 75 columns
cyl_test  = data[:, 76:100]  # remaining 25 columns

# For the cones:
cone_train = data[:, 101:175]  # 75 columns
cone_test  = data[:, 176:200]  # 25 columns
# Combine into training and test sets:
train_data = hcat(cyl_train, cone_train)
test_data  = hcat(cyl_test, cone_test);
In [ ]:
TrainFeatures_T = permutedims(train_data, (2,1))
TestFeatures_T = permutedims(test_data, (2, 1))

TrainLabels = reshape(vcat(fill(1, 75), fill(2, 75)), :, 1)
TestLabels = reshape(vcat(fill(1, 25), fill(2, 25)), :, 1);

The code below extracts features from the time series using the continuous Morlet wavelet transform. First the wavelet transform is applied to all of the data; then the absolute values of the coefficients are taken, keeping only the amplitudes. These amplitudes are averaged along the time axis to reduce the feature dimensionality, and the result is squeezed to drop the singleton dimension.

All of this is done for both the training and the test dataset

In [ ]:
wavelet_cfg = wavelet(Morlet(π), averagingType=Dirac(), β=1.5)
train_features_cwt = dropdims(mean(abs.(cwt(TrainFeatures_T, wavelet_cfg)), dims=2), dims=2);
In [ ]:
wavelet_cfg = wavelet(Morlet(π), averagingType=Dirac(), β=1.5)
test_features_cwt = dropdims(mean(abs.(cwt(TestFeatures_T, wavelet_cfg)), dims=2), dims=2);
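What these two cells compute can be illustrated with a small stdlib-only toy: random complex coefficients stand in for the real CWT output (a sketch with made-up sizes, not the actual wavelet data).

```julia
using Statistics

# Toy stand-in for CWT output: 4 signals × 64 time steps × 8 scales
coeffs = randn(4, 64, 8) .+ im .* randn(4, 64, 8)

amplitudes = abs.(coeffs)             # keep only the magnitudes
averaged = mean(amplitudes, dims=2)   # average along the time axis -> (4, 1, 8)
features = dropdims(averaged, dims=2) # squeeze out the singleton -> (4, 8)

println(size(features))               # (4, 8): one 8-dim feature vector per signal
```

Each signal is thus reduced from a full time-frequency map to a compact per-scale amplitude profile.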

Initialize a list containing the class names

In [ ]:
class_names = ["Cylinder","Cone"]

Before training the classifier, the data must be converted into a format compatible with the model: the training features into tabular form, and the labels into a categorical type.

In [ ]:
X_train = MLJ.table(train_features_cwt)  # Convert X_train to a table
X_test = MLJ.table(test_features_cwt)  # Convert X_test to a table
y_train  = coerce(vec(TrainLabels), Multiclass)  # Flatten y_train to a vector with the Multiclass scitype
y_test = coerce(vec(TestLabels), Multiclass);  # Flatten y_test to a vector with the Multiclass scitype

Initializing and training the support vector machine

Next, we set up the SVM model by initializing its parameters and configuring cross-validation

In [ ]:
svm = (@MLJ.load SVC pkg=LIBSVM verbosity=true)()  # Load and create an SVM model backed by LIBSVM through MLJ

# Set the SVM model parameters
svm.kernel = LIBSVM.Kernel.Polynomial         # Kernel type: polynomial
svm.degree = 2                                # Degree of the polynomial kernel
svm.gamma = 0.1                               # Parameter γ, controlling the influence of each training point
svm.cost = 1.0                                # Regularization parameter

# Create a "machine" binding the model to the data
mach = machine(svm, X_train, y_train)

# Configure 5-fold cross-validation
cv = CV(nfolds=5);
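Under the hood, k-fold cross-validation simply partitions the sample indices into folds, each fold serving once as the validation set. A plain-Julia sketch of the idea (illustrative only, not MLJ's implementation):

```julia
# Return (train_idx, val_idx) pairs for k folds over n samples
function kfold_indices(n, k)
    idx = collect(1:n)                 # deterministic here; shuffle in practice
    folds = [idx[i:k:n] for i in 1:k]  # interleaved folds of roughly equal size
    return [(setdiff(idx, f), f) for f in folds]
end

splits = kfold_indices(150, 5)
println(length(splits))          # 5 folds
println(length(splits[1][2]))    # 30 validation samples per fold
println(length(splits[1][1]))    # 120 training samples per fold
```

With 150 training signals and 5 folds, each model fit sees 120 samples and is scored on the remaining 30.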

We train the model with cross-validation and compute the training accuracy

In [ ]:
@info "Load model config, cross-valid"

cv_results = evaluate!(mach; resampling=cv, measures=[accuracy], verbosity=0) # Run cross-validation of the model
println("Model accuracy: $(cv_results.measurement[1] * 100)" , "%")

Evaluating the trained model

Let's evaluate the trained model on the test data

In [ ]:
@info "Predict test"

y_pred_SVM = MLJ.predict(mach, X_test)   # Make predictions on the test data

accuracy_score_SVM = accuracy(y_pred_SVM, y_test)  # Compute the model accuracy on the test set

println("Test accuracy: ", Int(accuracy_score_SVM * 100), "%")

Next, let's build a confusion matrix to assess the quality of the model's classification; the function plot_confusion_matrix handles this task

In [ ]:
function plot_confusion_matrix(C)
    # Create a heatmap with large fonts and contrasting colors
    heatmap(
        C,
        title = "Confusion Matrix",
        xlabel = "Predicted",
        ylabel = "True",
        xticks = (1:length(class_names), class_names),
        yticks = (1:length(class_names), class_names),
        c = :viridis,  # A higher-contrast color scheme
        colorbar_title = "Count",
        size = (600, 400)
    )

    # Add the values to the cells for readability
    for i in 1:size(C, 1)
        for j in 1:size(C, 2)
            annotate!(j, i, text(C[i, j], :white, 12, :bold)) 
        end
    end

    # Explicitly display the plot
    display(current())
end;
In [ ]:
# Example usage of the function
conf_matrix = CM.confmat(y_pred_SVM, y_test, levels=[2, 1])
conf_matrix = CM.matrix(conf_matrix)
plot_confusion_matrix(conf_matrix)

The confusion matrix shows that the model classifies cones well, but cylinders are often confused with cones.
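For reference, the counts in a confusion matrix can be computed directly; below is a stdlib sketch with hypothetical predictions, independent of the confmat utility used above:

```julia
# Rows are true classes, columns are predicted classes
function confusion_counts(y_true, y_pred, n_classes)
    C = zeros(Int, n_classes, n_classes)
    for (t, p) in zip(y_true, y_pred)
        C[t, p] += 1
    end
    return C
end

y_true = [1, 1, 1, 2, 2, 2]
y_pred = [1, 2, 1, 2, 2, 2]   # one cylinder (class 1) mistaken for a cone (class 2)
println(confusion_counts(y_true, y_pred, 2))   # [2 1; 0 3]
```

Off-diagonal entries count misclassifications, which is exactly what the heatmap above visualizes.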

SqueezeNet training

Next, we will train a deep learning network, SqueezeNet. SqueezeNet is a compact convolutional neural network proposed in 2016 that matches AlexNet-level accuracy at a much smaller size. Its Fire modules consist of a squeeze layer (1x1 convolutions that reduce the number of channels) and an expand layer (1x1 and 3x3 convolutions that restore the dimensionality), which cuts the parameter count without sacrificing quality. Thanks to its compact structure, it is well suited to embedded devices.
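The parameter savings of a Fire module can be checked with simple arithmetic. The channel sizes below are illustrative, not SqueezeNet's exact configuration:

```julia
# Weight count of a k×k conv layer (biases ignored for simplicity)
conv_params(k, c_in, c_out) = k * k * c_in * c_out

c_in, c_out = 128, 128
plain = conv_params(3, c_in, c_out)       # one plain 3x3 convolution

# Fire module: squeeze to 16 channels with 1x1 convs,
# then expand back with 64 1x1 and 64 3x3 filters
squeeze_p = conv_params(1, c_in, 16)
expand_p  = conv_params(1, 16, 64) + conv_params(3, 16, 64)
fire = squeeze_p + expand_p

println((plain, fire))   # (147456, 12288): a 12x reduction
```

The squeeze step makes the expensive 3x3 convolutions operate on far fewer input channels, which is where the savings come from.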

Required parameters

Initialize the parameters needed for model training and data preparation

In [ ]:
batch_size = 2              # Batch size for training
num_classes = 2             # Number of classes in the classification task
lr = 1e-4                   # Learning rate
Epochs = 15                 # Number of training epochs
Classes = 1:num_classes;     # Range of class indices, i.e. 1 and 2

Creating the dataset

First, the data for training the network must be prepared. We apply the continuous wavelet transform to each signal to obtain its time-frequency features. A compressed wavelet localizes short bursts with high temporal precision, while a stretched one captures smooth changes in the signal structure.

The helper function save_wavelet_images computes the continuous wavelet transform (CWT) of each radar signal, converts the result into a format compatible with computer vision models, and saves the spectrograms as images.

Initialize several helper functions

In [ ]:
# Function to rescale values to the [0, 1] range
function rescale(img)
    min_val = minimum(img)
    max_val = maximum(img)
    return (img .- min_val) ./ (max_val - min_val)
end

# Apply a colormap and convert to RGB
function apply_colormap(data, cmap)
    h, w = size(data)
    rgb_image = [RGB(get(cmap, val)) for val in Iterators.flatten(eachrow(data))]
    return reshape(rgb_image, w, h)
end
# Convert a continuous wavelet transform into an image
function apply_image(wt)
    rescaled_data = rescale(abs.(wt))
    colored_image = apply_colormap(rescaled_data, ColorSchemes.inferno)
    resized_image = imresize(colored_image, (224, 224))
    flipped_image = reverse(resized_image, dims=1)
    return flipped_image
end;
In [ ]:
function save_wavelet_images(features_matrix, wavelet_filter, save_path, is_train=true)
    # Determine the number of samples for each class
    
    class1_count = is_train ? 75 : 25  # Number of cylinder samples (the first rows of the matrix)
    class2_count = size(features_matrix, 1) - class1_count  # Number of cone samples
    println(class1_count)
    # Create a folder for each class
    cone_path = joinpath(save_path, "cone")
    cylinder_path = joinpath(save_path, "cylinder")
    mkpath(cone_path)
    mkpath(cylinder_path)

    # Process the matrix rows belonging to the cylinder class
    for i in 1:class1_count
        res = cwt(features_matrix[i, :], wavelet_filter)
        image_wt = apply_image(res)
        img_filename = joinpath(cylinder_path, "sample_$(i).png")
        save(img_filename, image_wt)

    end

    # Process the matrix rows belonging to the cone class
    for i in class1_count+1:class1_count+class2_count
        res = cwt(features_matrix[i, :], wavelet_filter)
        image_wt = apply_image(res)
        img_filename = joinpath(cone_path, "sample_$(i - class1_count).png")
        save(img_filename, image_wt)
    end
end;

c is a configurable wavelet transformer based on the Morlet wavelet.

In [ ]:
c = wavelet(Morlet(π), averagingType=NoAve(), β=1);

Let's generate the image database used to train the neural network

In [ ]:
save_wavelet_images(TrainFeatures_T, c, "$(@__DIR__)/New_imgs/train")
save_wavelet_images(TestFeatures_T, c, "$(@__DIR__)/New_imgs/test", false)

Let's look at one of the resulting images

In [ ]:
i = Images.load("$(@__DIR__)/New_imgs/train/cylinder/sample_2.png")
Out[0]:
(image: CWT spectrogram of a training sample from the cylinder class)

Initialize the augmentation function: it resizes images to 224x224 and converts the data into tensors

In [ ]:
function Augment_func(img)
    resized_img = imresize(img, 224, 224)               # Resize the image to 224x224
    tensor_image = channelview(resized_img);            # Represent the data as a tensor
    permutted_tensor = permutedims(tensor_image, (2, 3, 1));        # Reorder the dimensions to (H, W, C)
    permutted_tensor = Float32.(permutted_tensor)                   # Convert to Float32
    return permutted_tensor
end;

The function Create_dataset builds the datasets by processing the directories containing the images.

In [ ]:
function Create_dataset(path)
    img_train = []
    img_test = []
    label_train = []
    label_test = []

    train_path = joinpath(path, "train");
    test_path = joinpath(path, "test");

    # Helper function to process the images in a given directory
    function process_directory(directory, img_array, label_array, label_idx)
        for file in readdir(directory)
            if endswith(file, ".jpg") || endswith(file, ".png")
                file_path = joinpath(directory, file);
                img = Images.load(file_path);
                img = Augment_func(img);
                push!(img_array, img)
                push!(label_array, label_idx)
            end
        end
    end


    # Process the train folder
    for (idx, label) in enumerate(readdir(train_path))
        println("Processing label in train: ", label)
        label_dir = joinpath(train_path, label)
        process_directory(label_dir, img_train, label_train, idx);
    end

    # Process the test folder
    for (idx, label) in enumerate(readdir(test_path))
        println("Processing label in test: ", label)
        label_dir = joinpath(test_path, label)
        process_directory(label_dir, img_test, label_test, idx);
    end

    return img_train, img_test, label_train, label_test;
end;

In the next cell, we run this function to create the training and test sets

In [ ]:
path_to_data = "$(@__DIR__)/New_imgs"

img_train, img_test, label_train, label_test = Create_dataset(path_to_data);

Create a DataLoader to feed the images to the model in batches, and move them to the GPU

Important note: the model is trained on a GPU, since this speeds up training many times over. If you need GPU access, contact the administrators and they will grant it to you. The working directory contains pretrained network weights that have already been moved to the CPU. After training the main network, you can inspect the CPU version of the network by loading the corresponding weights into the model

In [ ]:
train_loader_Snet = DataLoader((data=img_train, label=label_train), batchsize=batch_size, shuffle=true, collate=true)
test_loader_Snet = DataLoader((data=img_test, label=label_test), batchsize=batch_size, shuffle=true, collate=true)
train_loader_Snet = gpu.(train_loader_Snet)
test_loader_Snet = gpu.(test_loader_Snet)
@info "loading success"
[ Info: loading success

Preparing for training

Initialize the model and move it to the GPU

In [ ]:
Net = SqueezeNet(;  pretrain=false,
           nclasses = num_classes) |>gpu;

Initialize the optimizer and the loss function

In [ ]:
optimizer_Snet = Flux.Adam(lr, (0.9, 0.99));   
lossSnet(x, y) = Flux.Losses.logitcrossentropy(Net(x), y);

SqueezeNet training

Let's introduce the function responsible for training the model for one epoch. The function train_one_epoch performs a single epoch of training by iterating over all data batches in Loader. It will also be used later to train the LSTM model. Its type_model parameter determines whether we are training a convolutional or a recurrent model

In [ ]:
function train_one_epoch(model, Loader, Tloss, correct_sample, TSamples, loss_function, Optimizer, type_model)
    for (i, (x, y)) in enumerate(Loader) 
         
        if type_model == "Conv"
            TSamples += length(y) 
            gs = gradient(() -> loss_function(x, onehotbatch(y, Classes)), Flux.params(model))          # Compute the gradients
        elseif type_model == "Recurrent"
            TSamples += size(y, 2) 
            gs = gradient(() -> loss_function(x, y), Flux.params(model))                                # Compute the gradients
        end
        Flux.update!(Optimizer, Flux.params(model), gs)                                                 # Update the model parameters
        y_pred = model(x)                                                                               # Make a model prediction
        # Next, compute the model's accuracy and loss for this epoch
        if type_model == "Conv" 
            preds = onecold(y_pred, Classes)
            correct_sample += sum(preds .== y)
            Tloss += loss_function(x, onehotbatch(y, Classes))   
        elseif type_model == "Recurrent"
            Tloss += loss_function(x, y)
            predicted_classes = onecold(y_pred)  
            true_classes = onecold(y)         
            correct_sample += sum(predicted_classes .== true_classes)  
        end
    end
    return Tloss, TSamples, correct_sample
end;

Start the SqueezeNet training process

In [ ]:
@info "Starting training loop"

for epoch in 1:Epochs 
    total_loss = 0.0 
    train_running_correct = 0  
    total_samples = 0   
    @info "Epoch $epoch"

    total_loss, total_samples, train_running_correct = train_one_epoch(Net, train_loader_Snet, total_loss, 
                                                train_running_correct, total_samples, lossSnet, optimizer_Snet, "Conv")



    epoch_loss = total_loss / total_samples
    epoch_acc = 100.0 * (train_running_correct / total_samples)
    println("loss: $epoch_loss, accuracy: $epoch_acc")  
end

Saving the trained model

In [ ]:
mkpath("$(@__DIR__)/models")  # mkpath does not error if the folder already exists
Out[0]:
"/user/nn/radar_classification_using_ML_DL/models"
In [ ]:
Net = cpu(Net)   # move the weights to the CPU before saving (cpu() does not mutate in place)
@save "$(@__DIR__)/models/SNET.bson" Net
Net = gpu(Net)   # move back to the GPU for the evaluation below

Evaluating the trained SqueezeNet model

Let's evaluate the trained model. The function evaluate_model_accuracy computes the model's accuracy

In [ ]:
function evaluate_model_accuracy(loader, model, classes, loss_function, type_model)
    total_loss, correct_predictions, total_samples = 0.0, 0, 0
    all_preds = []  
    True_labels = []
    for (x, y) in loader
        # Accumulate the loss
        total_loss += type_model == "Conv" ? loss_function(x, onehotbatch(y, classes)) : loss_function(x, y)

        # Predictions and accuracy computation
        y_pred = model(x)
        
        preds = type_model == "Conv" ? onecold(y_pred, classes) : onecold(y_pred)
        true_classes = type_model == "Conv" ? y : onecold(y)
        append!(all_preds, preds)
        append!(True_labels, true_classes)
        correct_predictions += sum(preds .== true_classes)
        total_samples += type_model == "Conv" ? length(y) : size(y, 2)
    end
    # Compute the accuracy
    accuracy = 100.0 * correct_predictions / total_samples
    return accuracy, all_preds, True_labels
end;
In [ ]:
accuracy_score_Snet, all_predsSnet, true_predS = evaluate_model_accuracy(test_loader_Snet, Net, Classes, lossSnet, "Conv");
println("Accuracy of the trained model: ", accuracy_score_Snet, "%")
Accuracy of the trained model: 100.0%

As shown above, the model reaches 100% accuracy, which indicates that it separates the two classes perfectly. Let's look at a concrete example of a model prediction.

Let's build the confusion matrix using the function plot_confusion_matrix

In [ ]:
preds_for_CM = map(x -> x[1], all_predsSnet);
conf_matrix = CM.confmat(preds_for_CM, true_predS, levels=[1, 2])
conf_matrix = CM.matrix(conf_matrix)
plot_confusion_matrix(conf_matrix)

Model prediction

Load an image from the test dataset

In [ ]:
path = "$(@__DIR__)/New_imgs/test/cone/sample_14.png";
img = Images.load(path)  # Load the image
img_aug = Augment_func(img);
img_res = reshape(img_aug, size(img_aug, 1), size(img_aug, 2), size(img_aug, 3), 1);

Loading the model weights

In [ ]:
model_data = BSON.load("$(@__DIR__)/models/SNET.bson")
snet_cpu = model_data[:Net] |> cpu;

Making a prediction

In [ ]:
y_pred = (snet_cpu(img_res))
pred = onecold(y_pred, Classes)
predicted_class_name = class_names[pred]  # Get the name of the predicted class
println("Predicted class: $predicted_class_name")
Predicted class: ["Cone"]

With that, the model accomplishes its task.

LSTM

The final part of this example presents the LSTM workflow. First, the training parameters are defined:

Parameter initialization

Initialize the parameters involved in model training and data preparation

In [ ]:
MaxEpochs = 50;
BatchSize = 100;
learningrate = 0.01;
n_features = 1;
num_classes = 2;

Data preparation

The input features were already defined at the beginning of the script. The labels are redefined slightly

In [ ]:
Trainlabels = vcat(fill(1, 75), fill(2, 75));
Testlabels = vcat(fill(1, 25), fill(2, 25));

Trainlabels = CategoricalArray(Trainlabels; levels=[1, 2]);
Testlabels = CategoricalArray(Testlabels; levels=[1, 2]);

The data is then converted into the input form required by the LSTM network

In [ ]:
train_features = reshape(train_data, 1, size(train_data, 1), size(train_data, 2))
test_features = reshape(test_data, 1, size(test_data, 1), size(test_data, 2))

# TrainFeatures = permutedims(TrainFeatures, (2, 1))
TrainLabels = onehotbatch(Trainlabels, 1:num_classes)
TestLabels = onehotbatch(Testlabels, 1:num_classes)
Out[0]:
2×50 OneHotMatrix(::Vector{UInt32}) with eltype Bool:
 1  1  1  1  1  1  1  1  1  1  1  1  1  …  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅
 ⋅  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅  ⋅     1  1  1  1  1  1  1  1  1  1  1  1
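The layout produced by this reshape can be checked on toy data. Julia arrays are column-major, so each original column becomes one sequence in the batch (hypothetical sizes; the real signal length depends on the CSV):

```julia
# Hypothetical: 150 signals, 101 time steps each, stored as columns
seq_len, n_seq = 101, 150
train_mat = randn(seq_len, n_seq)

# Reorganize into (n_features, seq_len, batch), the layout used in this notebook
lstm_input = reshape(train_mat, 1, seq_len, n_seq)

println(size(lstm_input))                        # (1, 101, 150)
println(lstm_input[1, :, 7] == train_mat[:, 7])  # true: columns map to sequences
```

Because reshape only reinterprets the memory layout, no data is copied and each signal stays contiguous.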

Let's feed the training and test data into DataLoader objects and move them to the GPU

Important note: the model is trained on a GPU, since this speeds up training many times over. If you need GPU access, contact the administrators and you will be granted it. The working directory contains pretrained network weights that have already been moved to the CPU. After training the main network, you can inspect the CPU version of the network by loading the corresponding weights into the model

In [ ]:
train_loader_lstm = DataLoader((data=train_features, label=TrainLabels), batchsize=BatchSize, shuffle=true);
train_loader_lstm = gpu.(train_loader_lstm);
test_loader_lstm = DataLoader((data=test_features, label=TestLabels), batchsize=BatchSize, shuffle=true);
test_loader_lstm = gpu.(test_loader_lstm);

Model initialization

Initialize the model we are going to train. In this case the model is a chain of sequentially connected layers

In [ ]:
model_lstm = Chain(
  LSTM(n_features, 100),
  x -> x[:, end, :], 
  Dense(100, num_classes),
  Flux.softmax) |> gpu;

Initialize the optimizer and the loss function

In [ ]:
optLSTM = Flux.Adam(learningrate, (0.9, 0.99));   
lossLSTM(x, y) = Flux.Losses.crossentropy(model_lstm(x), y);

Training

Next comes the model's training loop

In [ ]:
for epoch in 1:MaxEpochs
    total_loss = 0.0
    correct_predictions = 0
    total_samples = 0

    total_loss, total_samples, correct_predictions = train_one_epoch(model_lstm, train_loader_lstm, total_loss, 
                                                correct_predictions, total_samples, lossLSTM, optLSTM, "Recurrent")

    # Compute the accuracy
    accuracy = 100.0 * correct_predictions / total_samples

    println("Epoch $epoch, Loss: $(total_loss), Accuracy: $(accuracy)%")
end

Saving the model

In [ ]:
model_lstm = cpu(model_lstm)   # move the weights to the CPU before saving (cpu() does not mutate in place)
@save "$(@__DIR__)/models/lstm.bson" model_lstm
model_lstm = gpu(model_lstm)   # move back to the GPU for the evaluation below

Evaluating the trained model

Let's evaluate our model by computing its accuracy on the test dataset

In [ ]:
accuracy_score_LSTM, all_predsLSTMm, true_predS_LSTM = evaluate_model_accuracy(test_loader_lstm, model_lstm, Classes, lossLSTM, "Recurrent");
println("Accuracy of the trained model: ", accuracy_score_LSTM, "%")
Accuracy of the trained model: 84.0%

Now let's build a confusion matrix to evaluate the model visually

In [ ]:
preds_for_CM_LSTM = map(x -> x[1], all_predsLSTMm);
conf_matrix = CM.confmat(preds_for_CM_LSTM, true_predS_LSTM, levels=[1, 2])
conf_matrix = CM.matrix(conf_matrix)
plot_confusion_matrix(conf_matrix)

Let's test the model on a specific observation

In [ ]:
random_index = rand(1:size(test_features, 3))  
random_sample = test_features[:, :, random_index] 
random_label = onecold(TestLabels[:, random_index]) 
random_sample = cpu(random_sample);
In [ ]:
model_data = BSON.load("$(@__DIR__)/models/lstm.bson")
cpu_lstm = model_data[:model_lstm] |>cpu

predicted_probs = cpu_lstm(random_sample)
predicted_class = onecold(predicted_probs) 
# Print the result
println("Random Sample Index: $random_index")
println("True Label: $random_label")
println("Predicted Probabilities: $predicted_probs")
println("Predicted Class: $predicted_class")
Random Sample Index: 12
True Label: 1
Predicted Probabilities: Float32[0.99999905; 9.580198f-7;;]
Predicted Class: [1]

The output above shows that the model classified the given sample correctly

Conclusion

This example has presented a workflow for radar target classification using machine learning and deep learning techniques. Although synthetic data was used for training and testing, the approach extends readily to real radar measurements.