
Background noise reduction system

In this example, we consider two models that suppress background noise in an audio signal by applying a gain. The figure below shows the noise-suppression subsystem. In the first model the subsystem runs in pipeline mode, i.e. on every simulation step. In the second model it is an enabled subsystem that is triggered only when the input signal crosses a threshold, i.e. when the energy of the input signal is low.

image.png
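Before turning to the Engee models, the idea behind gain-based noise suppression can be sketched directly in Julia. The snippet below is only an illustration, not the block diagram itself; the threshold, gain, and smoothing coefficient values are assumptions chosen for demonstration.

In [ ]:
# Illustrative sketch of gain-based noise suppression (not the Engee block diagram):
# estimate the signal envelope with a one-pole smoother and attenuate samples
# whose envelope falls below a threshold. All parameter values here are assumptions.
function noise_gate(x::AbstractVector{<:Real}; threshold=0.05, gain=0.1, alpha=0.01)
    y = zeros(Float64, length(x))
    env = 0.0                                        # running envelope estimate
    for i in eachindex(x)
        env = (1 - alpha) * env + alpha * abs(x[i])  # one-pole envelope follower
        y[i] = env < threshold ? gain * x[i] : x[i]  # damp low-energy (noise) segments
    end
    return y
end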

Next, let's declare the auxiliary functions for running the models and playing back the audio files.

In [ ]:
import Pkg
Pkg.add(["WAV"])
In [ ]:
# Helper function for running a model.
function run_model( name_model )

    path = (@__DIR__) * "/" * name_model * ".engee"

    if name_model in [m.name for m in engee.get_all_models()] # Check whether the model is already loaded into the kernel
        model = engee.open( name_model ) # Open the model
        model_output = engee.run( model, verbose=true ); # Run the model
    else
        model = engee.load( path, force=true ) # Load the model
        model_output = engee.run( model, verbose=true ); # Run the model
        engee.close( name_model, force=true ); # Close the model
    end
    sleep(5)
    return model_output
end
Out[0]:
run_model (generic function with 1 method)
In [ ]:
using WAV;
using Base64;
using .EngeeDSP;

# Read an audio file, embed it in the notebook as an HTML <audio> player and return the samples.
function audioplayer(path, fs, Samples_per_audio_channel)
    s = vcat((EngeeDSP.step(load_audio(), path, Samples_per_audio_channel))...); # read the file frame by frame and concatenate the frames
    buf = IOBuffer();
    wavwrite(s, buf; Fs=fs);                                                     # write the samples as WAV data into an in-memory buffer
    data = base64encode(unsafe_string(pointer(buf.data), buf.size));             # encode the WAV data for embedding in HTML
    display("text/html", """<audio controls="controls">
    <source src="data:audio/wav;base64,$data" type="audio/wav" />
    Your browser does not support the audio element.
    </audio>""");
    return s
end
Out[0]:
audioplayer (generic function with 1 method)

After declaring the functions, let's run the models.

The first to run is the model with the pipeline implementation.

image.png

In [ ]:
run_model("agc_sub") # Запуск модели.
Building...
Progress 0%
...
Progress 100%

Now let's run the model with the enabled subsystem.

image.png

In [ ]:
run_model("agc_enabled") # Запуск модели.
Building...
Progress 0%
...
Progress 100%

Now let's analyse the recorded WAV files and compare them with the original audio track. First of all, let's listen to the results.

In [ ]:
inp = audioplayer("$(@__DIR__)/speech_fade_48kHz.wav", 48000, 256);   # original audio track
In [ ]:
out_s = audioplayer("$(@__DIR__)/out_48kHz_s.wav", 48000, 256);   # output of the pipeline model
In [ ]:
out_e = audioplayer("$(@__DIR__)/out_48kHz_e.wav", 48000, 256);   # output of the enabled-subsystem model
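Besides listening, a rough numeric comparison can be made from the sample vectors that audioplayer returns. The sketch below is not part of the original models; it assumes mono tracks of comparable length and only looks at RMS levels and the deviation from the input.

In [ ]:
# Rough numeric comparison of the three tracks (assumes mono sample vectors).
rms(x) = sqrt(sum(abs2, x) / length(x))
n = minimum(length.((inp, out_s, out_e)))            # compare over the common length
println("RMS of the input:                    ", rms(inp[1:n]))
println("RMS of the pipeline output:          ", rms(out_s[1:n]))
println("RMS of the enabled-subsystem output: ", rms(out_e[1:n]))
println("RMS deviation (pipeline):            ", rms(out_s[1:n] .- inp[1:n]))
println("RMS deviation (enabled subsystem):   ", rms(out_e[1:n] .- inp[1:n]))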

Conclusion

Listening to these audio tracks, we can notice that the sound with pipeline processing is more uniform and has less distortion than with the switched subsystem. This is because the feedback loop in the first model updates on every simulation step, while in the second it updates only when the control signal is active. The Delay block inside the Compute Envelope subsystem has an initial state equal to zero. You can experiment with larger values of the initial state and see how this affects the degree of distortion of the audio track.
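The effect described above can be reproduced with a small standalone sketch, assuming a simple one-pole envelope follower in place of the actual Compute Envelope subsystem; the smoothing coefficient, the enable condition and the test signal below are illustrative choices only.

In [ ]:
# Illustrative envelope follower: the state can be updated on every sample
# (pipeline case) or only while an enable signal is active (enabled-subsystem case).
# `init` plays the role of the Delay block's initial state.
function envelope(x; alpha=0.01, init=0.0, enable=trues(length(x)))
    env = zeros(Float64, length(x))
    state = init
    for i in eachindex(x)
        if enable[i]                                   # update only when enabled
            state = (1 - alpha) * state + alpha * abs(x[i])
        end
        env[i] = state                                 # otherwise hold the previous value
    end
    return env
end

x = sin.(2π .* 440 .* (0:999) ./ 48000)               # short 440 Hz test tone
env_every_step = envelope(x)                           # updated on every sample
env_gated      = envelope(x; enable = abs.(x) .> 0.5)  # updated only above a threshold
env_offset     = envelope(x; init = 0.5)               # nonzero initial state of the delay

Comparing env_every_step with env_gated and env_offset shows the slower, stepwise reaction of the gated update and the offset introduced by a nonzero initial state, which mirrors the behaviour heard in the two recorded tracks.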