Engee documentation
Notebook

Background noise reduction system

In this example, we consider two models that suppress background noise in an audio signal by applying a gain factor.
The figure below shows the noise-smoothing subsystem. In the first model, the subsystem operates in pipeline mode and processes every sample. In the second, it is enabled only when the input signal crosses a threshold, that is, when the input energy is low.
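The idea behind both models can be sketched in a few lines of Julia. This is an illustrative sketch, not the Engee model itself: the names `frame_gain` and `smooth`, the threshold, and the attenuation value are assumptions chosen for the example.

```julia
# Sketch of energy-based noise muting (hypothetical helper names):
# attenuate frames whose mean energy falls below a threshold.
function frame_gain(frame::AbstractVector; threshold = 1e-3, atten = 0.1)
    energy = sum(abs2, frame) / length(frame)   # mean frame energy
    return energy < threshold ? atten : 1.0     # mute quiet frames only
end

# One-pole smoother that could be applied to the energy before thresholding
smooth(env_prev, x; α = 0.95) = α * env_prev + (1 - α) * abs2(x)

# Usage: a silent frame followed by a loud sine frame
signal = [zeros(256); 0.5 .* sin.(2π .* (1:256) ./ 32)]
frames = [signal[i:i+255] for i in 1:256:length(signal)-255]
processed = vcat([frame_gain(f) .* f for f in frames]...)
```

The pipeline model applies this gain computation on every clock cycle, while the enabled-subsystem model runs it only while the control signal is active.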

image.png

Next, we declare helper functions for running the models and playing back audio files.

In [ ]:
import Pkg
Pkg.add(["WAV"])
In [ ]:
# Helper function for running a model by name.
function run_model( name_model )
    
    path = (@__DIR__) * "/" * name_model * ".engee"
    
    if name_model in [m.name for m in engee.get_all_models()] # Check whether the model is already loaded into the kernel
        model = engee.open( name_model ) # Open the model
        model_output = engee.run( model, verbose=true ); # Run the model
    else
        model = engee.load( path, force=true ) # Load the model
        model_output = engee.run( model, verbose=true ); # Run the model
        engee.close( name_model, force=true ); # Close the model
    end
    sleep(5)
    return model_output
end
Out[0]:
run_model (generic function with 1 method)
In [ ]:
using WAV;
using Base64; # base64encode is used below
using .EngeeDSP;

# Load an audio file, render it as an HTML <audio> player, and return the samples.
function audioplayer(path, fs, samples_per_audio_channel)
    s = vcat((EngeeDSP.step(load_audio(), path, samples_per_audio_channel))...);
    buf = IOBuffer();
    wavwrite(s, buf; Fs=fs);
    data = base64encode(unsafe_string(pointer(buf.data), buf.size));
    display("text/html", """<audio controls="controls">
    <source src="data:audio/wav;base64,$data" type="audio/wav" />
    Your browser does not support the audio element.
    </audio>""");
    return s
end
Out[0]:
audioplayer (generic function with 1 method)

After declaring the functions, we will start executing the models.

Let's run the pipeline implementation model first.

image.png
In [ ]:
run_model("agc_sub") # Запуск модели.
Building...
Progress 100%

Now let's run the model that uses the enabled subsystem.

image.png
In [ ]:
run_model("agc_enabled") # Запуск модели.
Building...
Progress 100%

Now let's analyze the recorded WAV files and compare them with the original audio track. First of all, we'll listen to the results.

In [ ]:
inp = audioplayer("$(@__DIR__)/speech_fade_48kHz.wav", 48000, 256);
In [ ]:
out_s = audioplayer("$(@__DIR__)/out_48kHz_s.wav", 48000, 256);
In [ ]:
out_e = audioplayer("$(@__DIR__)/out_48kHz_e.wav", 48000, 256);
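Beyond listening, a quick numeric comparison can back up the impressions. The sketch below defines an RMS helper (a hypothetical name introduced here, not part of the models) that can be applied to the signals returned by `audioplayer` above.

```julia
# Root-mean-square level of a signal; useful for comparing how strongly
# each variant attenuates the quiet, noise-only sections of the track.
rms(x) = sqrt(sum(abs2, x) / length(x))

# Possible usage with the signals from the cells above:
# println("input: ", rms(inp), "  pipelined: ", rms(out_s), "  enabled: ", rms(out_e))
```

A lower RMS on the processed tracks relative to the original indicates that the quiet sections were in fact attenuated.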

Conclusion

After listening to these audio tracks, we can hear that the pipelined version sounds more uniform and has less distortion than the version with the enabled subsystem. This is because the feedback loop in the first case runs on every clock cycle, while in the second it runs only while the control signal is present. The Delay block inside the Compute Envelope subsystem has an initial state of zero. You can experiment with larger initial-state values and observe how they affect the distortion of the audio track.
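The envelope feedback loop mentioned above can be modeled as a one-pole smoother, where the keyword argument `env0` plays the role of the Delay block's initial state. This is an illustrative sketch; the function name, the coefficient `α`, and the default values are assumptions.

```julia
# One-pole envelope follower: env[i] = α·env[i-1] + (1-α)·|x[i]|.
# env0 models the Delay block's initial state: a larger env0 means the
# envelope starts high, so the gate opens immediately instead of ramping up.
function envelope(x; α = 0.99, env0 = 0.0)
    env = similar(float.(x))
    prev = float(env0)
    for i in eachindex(x)
        prev = α * prev + (1 - α) * abs(x[i])
        env[i] = prev
    end
    return env
end
```

With `env0 = 0.0` the envelope ramps up from zero at the start of the track, which is what produces the onset behavior discussed above; a larger `env0` shortens that ramp.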