Engee 文档
Notebook

利用深度学习对雷达信号进行分类

导言

本示例展示了如何使用深度学习对雷达信号进行分类。

调制分类是智能接收机的重要功能。 调制分类具有许多应用,例如,在认知雷达和软件定义无线电系统中。 通常,要识别这些信号并按调制类型对其进行分类,必须识别显着特征并将其输入分类器。 本示例讨论了使用深度学习网络从信号中自动提取时频特征及其分类。

本示例的第一部分模拟了一个雷达信号分类系统,该系统将三个脉冲信号合成并分类。 雷达信号的波形:

*矩形

*线性频率调制(LFM)

*巴克密码

准备工作

安装必要的软件包

In [ ]:
include("PackagesHelper.jl")
In [ ]:
using Pkg
Pkg.instantiate()
using CategoricalArrays
using OneHotArrays 
using BSON: @save, @load 
using Random, Statistics, FFTW, DSP, Images, ImageIO, ImageTransformations, FileIO
using Flux, Metalhead, MLUtils, CUDA, StatsBase, StatisticalMeasures
include("utils.jl");

我们将设置与模型将被训练的设备,图像的分辨率,数据生成的参数等相关的某些常数。

In [ ]:
Random.seed!(42)
device == gpu ? model = gpu(model) : nothing;
IMG_SZ=(224,224)
OUT="tfd_db"
radar_classes=["Barker","LFM","Rect"]
CLASSES = 1 : length(radar_classes)
N_PER_CLASS=600
train_ratio,val_ratio,test_ratio = 0.8,0.1,0.1
DATA_MEAN=(0.485f0,0.456f0,0.406f0)
DATA_STD=(0.229f0,0.224f0,0.225f0)
Out[0]:
(0.229f0, 0.224f0, 0.225f0)

我们将生成一个表示三种类型信号的数据集。 功能 helperGenerateRadarWaveforms 创建三种类型调制的无线电信号的合成数据集:矩形脉冲,LFM线性调频和巴克码。 参数(载波、频带、长度、调制方向等)。)是为每个信号随机选择的,之后将噪声、频移和失真添加到其中。 在输出端,该函数返回一个复杂序列列表和一个标签列表,其中包含每个信号的调制类型。

In [ ]:
data, truth = helperGenerateRadarWaveforms(Fs=1e8, nSignalsPerMod=3000, seed=0)

接下来,我们将一组无线电信号转换为图像,用于神经网络的后续训练。 一、功能 tfd_image_gray 构建信号的频谱图,将其转换为对数标度,对值进行归一化并形成灰度图像。 然后函数 save_dataset_as_tfd_images_splits 它需要一个信号及其类标签列表,将它们随机分为训练,验证和测试部分,创建所需的文件夹结构,并为每个信号保存频谱图的相应PNG图像。

In [ ]:
save_dataset_as_tfd_images_splits(data, truth; Fs=1e8, outdir="tfd_db", img_sz=(224,224), ratios=(0.8,0.1,0.1), seed=0)

我们可视化每种类型的一个信号。

In [ ]:
img1 = Images.load(raw"tfd_db/val/Rect/4283146180.png")
img2 = Images.load(raw"tfd_db/val/Barker/3375303598.png")
img3 = Images.load(raw"tfd_db/val/LFM/3736510008.png")
[img1 img2 img3]
Out[0]:
No description has been provided for this image

接下来,我们描述定义用于训练模型的数据集的结构。 功能 create_dataset 接受图像路径,将它们加载到内存中,然后对它们应用转换,例如按照FLux要求的顺序调整大小和重新排列轴。

In [ ]:
function Augment_func(img)
    resized = imresize(img, 224, 224)
    rgb = RGB.(resized)
    ch = channelview(rgb)                
    x = permutedims(ch, (3,2,1))          
    Float32.(x)
end
function Create_dataset(path)
    img_train = []
    img_test = []
    img_valid = []
    label_train = []
    label_test = []
    label_valid = []

    train_path = joinpath(path, "train");
    test_path = joinpath(path, "test");
    valid_path = joinpath(path, "val");

    function process_directory(directory, img_array, label_array, label_idx)
        for file in readdir(directory)
            if endswith(file, ".jpg") || endswith(file, ".png")
                file_path = joinpath(directory, file);
                img = Images.load(file_path);
                img = Augment_func(img);
                push!(img_array, img)
                push!(label_array, label_idx)
            end
        end
    end


    for (idx, label) in enumerate(readdir(train_path))
        println("Processing label in train: ", label)
        label_dir = joinpath(train_path, label)
        process_directory(label_dir, img_train, label_train, idx);
    end


    for (idx, label) in enumerate(readdir(test_path))
        println("Processing label in test: ", label)
        label_dir = joinpath(test_path, label)
        process_directory(label_dir, img_test, label_test, idx);
    end

    for (idx, label) in enumerate(readdir(valid_path))
        println("Processing label in valid: ", label)
        label_dir = joinpath(valid_path, label)
        process_directory(label_dir, img_valid, label_valid, idx);
    end

    return img_train, img_test, img_valid, label_train, label_test, label_valid;
end;

创建数据集

In [ ]:
img_train, img_test, img_valid, label_train, label_test, label_valid = Create_dataset("tfd_db");
Processing label in train: Barker
Processing label in train: LFM
Processing label in train: Rect
Processing label in test: Barker
Processing label in test: LFM
Processing label in test: Rect
Processing label in valid: Barker
Processing label in valid: LFM
Processing label in valid: Rect

创建数据加载器

In [ ]:
train_loader = DataLoader((data=img_train, label=label_train), batchsize=64, shuffle=true, collate=true)
test_loader = DataLoader((data=img_test, label=label_test), batchsize=64, shuffle=false, collate=true)
valid_loader = DataLoader((data=img_valid, label=label_valid), batchsize=64, shuffle=false, collate=true)
Out[0]:
15-element DataLoader(::@NamedTuple{data::Vector{Any}, label::Vector{Any}}, batchsize=64, collate=Val{true}())
  with first element:
  (; data = 224×224×3×64 Array{Float32, 4}, label = 64-element Vector{Int64})

接下来,我们描述用于训练和验证模型的函数。

In [ ]:
function train!(model, train_loader, opt, loss_fn, device, epoch::Int, num_epochs::Int)
    Flux.trainmode!(model)
    running_loss = 0.0
    n_batches    = 0

    for (data, label) in train_loader
        x  = device(data)                          
        yoh = Flux.onehotbatch(label, CLASSES) |> device 

        loss_val, gs = Flux.withgradient(Flux.params(model)) do
            ŷ = model(x)
            loss_fn(ŷ, yoh)
        end
        Flux.update!(opt, Flux.params(model), gs)

        running_loss += Float64(loss_val)
        n_batches    += 1
    end

    return opt, running_loss / max(n_batches, 1)
end


function validate(model, val_loader, loss_fn, device)
    Flux.testmode!(model)
    running_loss = 0.0
    n_batches    = 0

    for (data, label) in train_loader
        x  = device(data)
        yoh = Flux.onehotbatch(label, CLASSES) |> device

        ŷ = model(x)
        loss_val = loss_fn(ŷ, yoh)

        running_loss += Float64(loss_val)
        n_batches += 1
    end

    Flux.trainmode!(model)
    return running_loss / max(n_batches, 1)
end
Out[0]:
validate (generic function with 1 method)

初始化SqueezeNet模型

In [ ]:
model = SqueezeNet(pretrain=false, nclasses=length(radar_classes))
model = gpu(model);

让我们设置损失函数,优化器和学习率。

In [ ]:
lr = 0.0001
lossFunction(x, y) = Flux.Losses.logitcrossentropy(x, y);
opt = Flux.Adam(lr, (0.9, 0.99));
Classes = 1:length(radar_classes); 

让我们开始模型的训练周期

In [ ]:
no_improve_epochs = 0
best_model    = nothing
train_losses = [];
valid_losses = [];
best_val_loss = Inf;
num_epochs = 50
for epoch in 1:num_epochs
    println("-"^50 * "\n")
    println("EPOCH $(epoch):")

    opt, train_loss = train!(
        model, train_loader, opt,
        lossFunction, gpu, 1, num_epochs
    )

    val_loss = validate(model, valid_loader, lossFunction, gpu)

    if val_loss < best_val_loss
        best_val_loss = val_loss
        best_model = deepcopy(model)
    end

    println("Epoch $epoch/$num_epochs | train $(round(train_loss, digits=4)) | val $(round(val_loss, digits=4))")

    push!(train_losses, train_loss)
    push!(valid_losses, val_loss)

end
--------------------------------------------------

EPOCH 1:
Epoch 1/50 | train 1.0939 | val 1.0429
--------------------------------------------------

EPOCH 2:
Epoch 2/50 | train 0.9361 | val 0.9319
--------------------------------------------------

EPOCH 3:
Epoch 3/50 | train 0.8447 | val 0.8433
--------------------------------------------------

EPOCH 4:
Epoch 4/50 | train 0.8239 | val 0.7934
--------------------------------------------------

EPOCH 5:
Epoch 5/50 | train 0.8041 | val 0.9281
--------------------------------------------------

EPOCH 6:
Epoch 6/50 | train 0.7789 | val 0.7262
--------------------------------------------------

EPOCH 7:
Epoch 7/50 | train 0.6664 | val 0.5665
--------------------------------------------------

EPOCH 8:
Epoch 8/50 | train 0.5785 | val 0.4695
--------------------------------------------------

EPOCH 9:
Warning: Rejecting shutdown request
@ EngeeLanguageServer /usr/local/ijulia-core/packages/EngeeLanguageServer/8aXAQ/src/EngeeLanguageServer.jl:125
Epoch 9/50 | train 0.4522 | val 0.4481
--------------------------------------------------

EPOCH 10:
Epoch 10/50 | train 0.4094 | val 0.347
--------------------------------------------------

EPOCH 11:
Epoch 11/50 | train 0.3531 | val 0.6724
--------------------------------------------------

EPOCH 12:
Epoch 12/50 | train 0.3164 | val 0.3233
--------------------------------------------------

EPOCH 13:
Epoch 13/50 | train 0.3153 | val 0.2315
--------------------------------------------------

EPOCH 14:
Epoch 14/50 | train 0.2597 | val 0.2473
--------------------------------------------------

EPOCH 15:
Epoch 15/50 | train 0.2397 | val 0.2404
--------------------------------------------------

EPOCH 16:
Epoch 16/50 | train 0.2442 | val 0.2065
--------------------------------------------------

EPOCH 17:
Epoch 17/50 | train 0.1898 | val 0.1939
--------------------------------------------------

EPOCH 18:
Epoch 18/50 | train 0.2025 | val 0.2916
--------------------------------------------------

EPOCH 19:
Epoch 19/50 | train 0.2124 | val 0.1959
--------------------------------------------------

EPOCH 20:
Epoch 20/50 | train 0.1891 | val 0.1844
--------------------------------------------------

EPOCH 21:
Epoch 21/50 | train 0.1815 | val 0.3876
--------------------------------------------------

EPOCH 22:
Epoch 22/50 | train 0.2082 | val 0.2134
--------------------------------------------------

EPOCH 23:
Epoch 23/50 | train 0.1937 | val 0.1336
--------------------------------------------------

EPOCH 24:
Epoch 24/50 | train 0.1956 | val 0.1875
--------------------------------------------------

EPOCH 25:
Epoch 25/50 | train 0.1648 | val 0.1302
--------------------------------------------------

EPOCH 26:
Epoch 26/50 | train 0.1471 | val 0.192
--------------------------------------------------

EPOCH 27:
Epoch 27/50 | train 0.1647 | val 0.1791
--------------------------------------------------

EPOCH 28:
Epoch 28/50 | train 0.1561 | val 0.1322
--------------------------------------------------

EPOCH 29:
Epoch 29/50 | train 0.1404 | val 0.125
--------------------------------------------------

EPOCH 30:
Epoch 30/50 | train 0.1618 | val 0.138
--------------------------------------------------

EPOCH 31:
Epoch 31/50 | train 0.1353 | val 0.227
--------------------------------------------------

EPOCH 32:
Epoch 32/50 | train 0.1375 | val 0.1112
--------------------------------------------------

EPOCH 33:
Epoch 33/50 | train 0.1332 | val 0.1108
--------------------------------------------------

EPOCH 34:
Epoch 34/50 | train 0.1115 | val 0.094
--------------------------------------------------

EPOCH 35:
Epoch 35/50 | train 0.1191 | val 0.1368
--------------------------------------------------

EPOCH 36:
Epoch 36/50 | train 0.1099 | val 0.0945
--------------------------------------------------

EPOCH 37:
Epoch 37/50 | train 0.1088 | val 0.1031
--------------------------------------------------

EPOCH 38:
Epoch 38/50 | train 0.1255 | val 0.1275
--------------------------------------------------

EPOCH 39:
Epoch 39/50 | train 0.1107 | val 0.1423
--------------------------------------------------

EPOCH 40:
Epoch 40/50 | train 0.1088 | val 0.0998
--------------------------------------------------

EPOCH 41:
Warning: Rejecting shutdown request
@ EngeeLanguageServer /usr/local/ijulia-core/packages/EngeeLanguageServer/8aXAQ/src/EngeeLanguageServer.jl:125
Epoch 41/50 | train 0.0943 | val 0.0915
--------------------------------------------------

EPOCH 42:
Epoch 42/50 | train 0.117 | val 0.0944
--------------------------------------------------

EPOCH 43:
Epoch 43/50 | train 0.1073 | val 0.1023
--------------------------------------------------

EPOCH 44:
Epoch 44/50 | train 0.095 | val 0.1146
--------------------------------------------------

EPOCH 45:
Epoch 45/50 | train 0.0937 | val 0.0719
--------------------------------------------------

EPOCH 46:
Epoch 46/50 | train 0.0894 | val 0.0927
--------------------------------------------------

EPOCH 47:
Warning: Rejecting shutdown request
@ EngeeLanguageServer /usr/local/ijulia-core/packages/EngeeLanguageServer/8aXAQ/src/EngeeLanguageServer.jl:125
Epoch 47/50 | train 0.0977 | val 0.0731
--------------------------------------------------

EPOCH 48:
Epoch 48/50 | train 0.0793 | val 0.0813
--------------------------------------------------

EPOCH 49:
Epoch 49/50 | train 0.0774 | val 0.0711
--------------------------------------------------

EPOCH 50:
Epoch 50/50 | train 0.0922 | val 0.0974

保存训练好的模型

In [ ]:
model = cpu(model)
@save "$(@__DIR__)/models/modelCLSRadarSignal.bson" model

测试模型

让我们执行一个推理,并构建一个误差矩阵。

In [ ]:
model_data = load("$(@__DIR__)/models/modelCLSRadarSignal.bson")
model = model_data[:model] |> gpu;

让我们编写一个函数来评估训练后的模型,该模型计算精度度量。

In [ ]:
total_loss, correct_predictions, total_samples = 0.0, 0, 0
all_preds = []  
True_labels = []
for (data, label) in enumerate(test_loader) 

    x  = gpu(data)                          
    yoh = Flux.onehotbatch(label, CLASSES) |> gpu 


    ŷ = model(x)
    total_loss= loss_fn(ŷ, yoh)
    # y_pred = model(imgs, feat)
    # total_loss += Flux.Losses.logitcrossentropy(y_pred, onehotbatch(y, classes))
    preds = onecold(ŷ, classes)
    true_classes = y
    append!(all_preds, preds)
    append!(True_labels, true_classes)
    correct_predictions += sum(preds .== true_classes)
    total_samples += length(y)
end

accuracy = 100.0 * correct_predictions / total_samples
In [ ]:
accuracy_score, all_preds, true_predS = evaluate_model_accuracy(test_loader, model, CLASSES);
println("Accuracy trained model:", accuracy_score, "%")

我们还将在特定对象上进行测试。

In [ ]:
classes = readdir("tfd_db/train")
cls = rand(classes)
files = readdir(joinpath("tfd_db/train", cls))
f = rand(files)
path = joinpath("tfd_db/train", cls, f)

img = Images.load(path)
img = Augment_func(img)
img = reshape(img, size(img)..., 1)
ŷ = model(gpu(img))
probs = Flux.softmax(ŷ)
pred_idx = argmax(Array(probs))
pred_class = radar_classes[pred_idx]

println("真正的阶级: ", cls)
println("预测类: ", pred_class)
Истинный класс: Barker
Предсказанный класс: Barker

结论

在这个演示示例中,训练了一个卷积神经网络来对雷达信号进行分类。

学习成果有很好的指标