Investigation of LMS filter parameters: the effect of the adaptation step and filter length on the convergence of the algorithm
Adaptive filters are a key tool in modern digital signal processing, with applications in noise reduction, echo cancellation, system identification, and many other fields. The least mean squares (LMS) algorithm remains one of the most popular adaptive filtering methods thanks to its computational efficiency and ease of implementation.
In this article, we will conduct a systematic study of two critical parameters of the LMS algorithm:
- Adaptation step (μ): determines the speed and stability of convergence of the algorithm
- Filter length (L): determines the filter's ability to model the system
Using the Engee language and the EngeeDSP package, we will visualize how these parameters affect the filter's learning process, giving a better understanding of their practical significance when tuning adaptive systems.
Part 1: Investigation of the impact of the adaptation step
The adaptation step is perhaps the most important parameter of the LMS algorithm. It determines:
- Convergence rate: how quickly the filter reaches the optimal coefficients
- Accuracy: how close the filter can get to the optimal solution
- Stability: whether the selected value guarantees stable operation of the algorithm
Theoretically, for guaranteed convergence, μ must satisfy the condition
0 < μ < 2/λ_max,
where λ_max is the maximum eigenvalue of the autocorrelation matrix of the input signal.
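This bound can be checked numerically. The sketch below (written in Python with NumPy purely for illustration, not in Engee) estimates λ_max from the sample autocorrelation of a white-noise input like the one used in the experiment below, and also computes the more conservative rule-of-thumb bound 2/(L·P_x), where P_x is the input power:

```python
import numpy as np

rng = np.random.default_rng(123)
x = 0.05 * rng.standard_normal(1024)   # white-noise input, like in the experiment below
L = 32                                 # filter length used in Part 1

# Sample autocorrelation r[0..L-1] and the Toeplitz autocorrelation matrix R
r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) - 1 + L] / len(x)
R = np.array([[r[abs(i - j)] for j in range(L)] for i in range(L)])

lam_max = np.linalg.eigvalsh(R).max()       # largest eigenvalue of R
mu_bound = 2.0 / lam_max                    # theoretical bound: 0 < mu < 2/lam_max
mu_practical = 2.0 / (L * np.mean(x**2))    # conservative rule of thumb 2/(L*P_x)
print(f"lambda_max ~ {lam_max:.3g}, mu < {mu_bound:.3g} (practical: {mu_practical:.3g})")
```

Since λ_max never exceeds the trace of R, the practical bound 2/(L·P_x) is always at least as strict as the theoretical one; in practice the mean-square behavior of LMS diverges well before 2/λ_max is reached, which is why the conservative bound is the one usually consulted.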
using EngeeDSP, DSP, Plots, Random
# Seed the RNG for a reproducible random signal
Random.seed!(123)
x = 0.05 * randn(1024) # Input signal (white noise)
d = filt(FIRFilter([0.5, -0.3, 0.2, 0.1, -0.05]), x) # Desired output
# Create an LMS filter with default parameters
LMS = EngeeDSP.LMSFilter()
# Range of adaptation steps under study
mu = [0.001, 0.01, 0.1, 1, 10, 12]
# Plot setup
plt = plot(title="LMS filter convergence (length=32) for different adaptation steps",
           xlabel="Sample index", 
           ylabel="Mean squared error (smoothed)",
           yscale=:log10, 
           ylims=(1e-5, 1e0), 
           grid=true, 
           legend=:best)
# Run the experiment for each μ
for μ in mu
    release!(LMS)
    LMS.StepSize = μ # Set the current adaptation step
    setup!(LMS, x, d)
    @time y, e, w = step!(LMS, x, d)
    # Smooth the squared error for readability
    smoothed_error = DSP.filt(ones(500)/500, e.^2)
    plot!(plt, smoothed_error, label="μ=$μ", linewidth=2)
end
The graph shows several characteristic regimes:
- Very small μ (0.001): extremely slow convergence
- Optimal range (0.01-0.1): a balance of speed and accuracy
- Large μ (1): fast convergence, but an increased noise level
- Very large μ (10-12): divergence of the algorithm
The practical conclusion: choosing μ is a trade-off between adaptation speed and accuracy. Values on the order of 0.01-0.1 often prove optimal in many applications.
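The same trade-off can be reproduced without EngeeDSP. The sketch below (Python/NumPy, used here only as an illustration; the input has unit power and the step sizes are chosen for that scale, so the numeric μ range differs from the experiment above) implements the plain LMS update w ← w + μ·e·x for the same 5-tap system and compares a too-small step with two workable ones:

```python
import numpy as np

rng = np.random.default_rng(123)
n, L = 4000, 16
x = rng.standard_normal(n)                  # unit-power white-noise input
h = np.array([0.5, -0.3, 0.2, 0.1, -0.05])  # "unknown" system (same taps as above)
d = np.convolve(x, h)[:n] + 0.01 * rng.standard_normal(n)  # desired output + noise

def lms(x, d, L, mu):
    """Plain LMS: returns the error sequence e[k] = d[k] - w^T x_k."""
    w, buf, e = np.zeros(L), np.zeros(L), np.zeros(len(x))
    for k in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[k]                       # buf holds the last L input samples
        e[k] = d[k] - w @ buf
        w += mu * e[k] * buf                # LMS coefficient update
    return e

# Tail MSE after adaptation, for a too-small, a reasonable, and a larger step
results = {mu: np.mean(lms(x, d, L, mu)[-500:] ** 2) for mu in (0.0002, 0.01, 0.03)}
for mu, mse in results.items():
    print(f"mu={mu}: tail MSE ~ {mse:.3e}")
```

With the very small step the filter has not converged by the end of the run, while the two larger steps reach roughly the noise floor; pushing μ past the stability bound (about 2/(3·L) for a unit-power Gaussian input) would make the error grow instead.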
Part 2: Investigation of the effect of filter length
The length of the filter L determines:
- Model complexity: how many coefficients are available to approximate the system
- Computational load: grows proportionally with L
- Learning ability: too short a filter cannot accurately model the system
The choice of L should be based on:
- The intended order of the simulated system
- Available computing resources
- Required modeling accuracy
using Plots, Statistics
Random.seed!(123)
n_samples = 15000
x = 0.1 * randn(n_samples)
# Create the desired response: a 10-tap FIR applied to a 15-sample-delayed input
d = filt(FIRFilter([0.4, -0.35, 0.3, -0.25, 0.2, -0.15, 0.1, -0.05, 0.03, -0.01]), 
         [zeros(15); x])[1:n_samples]
# Initialize the filter
LMS = EngeeDSP.LMSFilter()
μ_values = [0.005, 0.01, 0.02]
filter_lengths = [8, 16, 32, 64, 128]
# Plot setup with a restricted X range
plt = plot(title="Comparison of LMS filter convergence (samples 180+)",
           xlabel="Sample index (from 180)", 
           ylabel="Mean squared error (log scale)",
           yscale=:log10,
           grid=true, 
           legend=:topright,
           size=(1000, 600),
           minorgrid=true,
           xlims=(180, n_samples))  # Set the X-axis limits
# Run the experiment for each filter length
for (i, L) in enumerate(filter_lengths)
    release!(LMS)
    LMS.FilterLength = L
    LMS.StepSize = μ_values[2]
    
    setup!(LMS, x, d)
    y, e, w = step!(LMS, x, d)
    
    window_size = max(50, div(10000, L))
    smoothed_error = DSP.filt(ones(window_size)/window_size, e.^2)
    final_error = mean(e[end-div(n_samples,5):end].^2)
    
    # Sample range to display
    display_range = 180:n_samples
    plot!(plt, display_range, smoothed_error[display_range], 
          label="L=$L (final: $(round(final_error, sigdigits=3)))",
          linewidth=2.5,
          color=i)
end
display(plt)
The graph shows:
- Short filters (L=8): insufficient accuracy due to the limited number of parameters
- Medium lengths (16-32): a good balance between accuracy and complexity
- Long filters (64-128): a slight accuracy improvement at a significantly higher computational cost
An important observation: the effective impulse response of this system spans about 25 samples (a 15-sample delay followed by a 10-tap FIR), so increasing L beyond a certain limit (≈32) gives no significant gain, which matches theoretical expectations.
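This plateau can also be reproduced with a from-scratch sketch (Python/NumPy, not the EngeeDSP filter; as an assumption of this sketch, the step is normalized by L·P_x so that every length adapts at a comparable rate, which differs from the fixed step used in the experiment above):

```python
import numpy as np

rng = np.random.default_rng(123)
n = 15000
x = 0.1 * rng.standard_normal(n)
h = np.array([0.4, -0.35, 0.3, -0.25, 0.2, -0.15, 0.1, -0.05, 0.03, -0.01])
# Desired response: 15-sample delay followed by the 10-tap FIR,
# so the effective impulse response spans 25 samples
d = np.convolve(np.concatenate([np.zeros(15), x]), h)[:n]

def tail_mse(x, d, L, tail=3000):
    mu = 0.25 / (L * np.mean(x**2))   # step normalized by L and input power
    w, buf, e = np.zeros(L), np.zeros(L), np.zeros(len(x))
    for k in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[k]
        e[k] = d[k] - w @ buf
        w += mu * e[k] * buf          # plain LMS update
    return np.mean(e[-tail:] ** 2)    # steady-state MSE estimate

results = {L: tail_mse(x, d, L) for L in (8, 16, 32, 64)}
for L, mse in results.items():
    print(f"L={L:>3}: tail MSE ~ {mse:.3e}")
```

Filters shorter than the 25-sample effective response cannot span the delayed impulse response, so L=8 plateaus near the power of d itself; once L covers the full response (L=32 and beyond), further increases only add computation.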
Conclusion
To summarize, two relationships between the parameters are worth highlighting:
- Dependence of μ on L: the optimal adaptation step is usually inversely proportional to the filter length.
- Computational complexity: proportional to O(L) per sample for LMS (as opposed to O(L²) for RLS).
The study clearly demonstrates how important it is to choose the parameters of the LMS algorithm correctly. Engee and its specialized packages make such research efficient thanks to:
- Simple syntax for mathematical operations
- High performance
- Powerful visualization tools
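As a footnote to the inverse relationship between μ and L noted above: the normalized LMS (NLMS) variant makes this coupling explicit by dividing the step by the instantaneous input energy ||x_k||², so a single dimensionless step 0 < μ < 2 works across filter lengths and input powers. A minimal sketch (Python/NumPy, with illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(123)
n, L = 5000, 64
x = 2.0 * rng.standard_normal(n)            # note: non-unit input power
h = np.array([0.5, -0.3, 0.2, 0.1, -0.05])  # "unknown" 5-tap system
d = np.convolve(x, h)[:n] + 0.01 * rng.standard_normal(n)

def nlms(x, d, L, mu=0.5, eps=1e-8):
    """NLMS: effective step mu / (eps + ||x_k||^2), invariant to input power and L."""
    w, buf, e = np.zeros(L), np.zeros(L), np.zeros(len(x))
    for k in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[k]
        e[k] = d[k] - w @ buf
        w += mu * e[k] * buf / (eps + buf @ buf)  # normalized update
    return e, w

e, w = nlms(x, d, L)
print(f"tail MSE ~ {np.mean(e[-500:] ** 2):.3e}")
```

The same μ=0.5 would work unchanged here if the filter length or the input scale were altered, which is exactly the convenience that plain LMS lacks.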
The results obtained are of practical value for engineers working with adaptive filtering, helping them make informed decisions when tuning these algorithms in real-world applications.