Engee documentation

arburg

Parameters of an all-pole autoregressive model, estimated by the Burg method.

Library

EngeeDSP

Syntax

Function call

  • a,e,rc = arburg(x,p) — returns the normalized autoregressive (AR) parameters a of a model of order p for the input array x.

    It also returns the estimated variance e of the white-noise input and the reflection coefficients rc of the AR model.

Arguments

Input arguments

# x — input array

+ vector | matrix

Details

Input array, specified as a vector or matrix.

Data types

Float32 | Float64

Support for complex numbers

Yes

# p — the order of the model

+ a positive integer scalar

Details

The order of the model, specified as a positive integer scalar. The value of p must be less than the number of elements (for a vector) or the number of rows (for a matrix) of x.

Data types

Float32 | Float64

Output arguments

# a — normalized autoregressive parameters

+ row vector | matrix

Details

Normalized autoregressive parameters, returned as a row vector or matrix. If x is a matrix, each row of a corresponds to a column of x. The argument a has p+1 columns and contains the AR parameters in order of descending powers.

# e — white-noise variance of the input signal

+ scalar | row vector

Details

The white-noise variance of the input signal, returned as a scalar or row vector. If x is a matrix, each element of e corresponds to a column of x.

# rc — reflection coefficients

+ column vector | matrix

Details

Reflection coefficients, returned as a column vector or matrix. If x is a matrix, each column of rc corresponds to a column of x. The argument rc has p rows.

Examples

Estimation of parameters using the Burg method

Details

Use the coefficient vector of a generating polynomial to synthesize the process by filtering 1024 samples of white noise. Reset the random number generator to get reproducible results. Then use the Burg method to estimate the coefficients.

import EngeeDSP.Functions: randn,filter,arburg

A = [1 -2.7607 3.8106 -2.6535 0.9238]
y = filter(1,A,0.2*randn(1024,1))
arcoeffs = arburg(y,4)[1]
1×5 Matrix{Float64}:
 1.0  -2.7743  3.84077  -2.68434  0.936008

Generate 50 realizations of the process, changing the variance of the input noise each time, and compare the variances estimated by the Burg method with the actual values.

nrealiz = 50
order = 4
noisestdz = rand(1, nrealiz) .+ 0.5
randnoise = randn(1024, nrealiz)
noisevar = zeros(1, nrealiz)

for k in 1:nrealiz
    y = filter(1, A, noisestdz[k] * randnoise[:, k])
    arcoeffs, noisevar[k], rc = arburg(y, order)
end

using Plots  # assumed here: scatter/scatter! come from the Plots.jl package

p = scatter(vec(noisestdz.^2), vec(noisevar),
        marker=:x,
        markerstrokecolor=:blue,
        xlabel="Input",
        ylabel="Estimated",
        title="Noise Variance",
        label="Single channel loop",
        legend=false)

arburg 1

Repeat the procedure using the multichannel syntax of the function.

Y = filter(1, A, noisestdz .* randnoise)

coeffs, variances, rc = arburg(Y, 4)

scatter!(p,noisestdz.^2, variances,
        marker=:circle,
        markercolor=:transparent,
        markerstrokecolor=:green,
        markersize=10)

arburg 2

Additional Info

Autoregressive model of order p

Details

In an autoregressive model of order p, the current output is a linear combination of the p previous outputs plus a white-noise input.

The weights on the p previous outputs minimize the mean squared prediction error of the autoregression. If y(n) is the current output value and x(n) is a zero-mean white-noise input, the model has the form:

a(0)·y(n) + a(1)·y(n−1) + … + a(p)·y(n−p) = x(n),  with a(0) = 1.
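To make the definition concrete, here is a minimal sketch that simulates an AR(2) process directly from this model by solving for y(n); the coefficient values are illustrative, not taken from the documentation.

```julia
# AR(2) with a = [1, -0.5, 0.25]:  y(n) = 0.5*y(n-1) - 0.25*y(n-2) + x(n)
a = [1.0, -0.5, 0.25]      # a(0), a(1), a(2); illustrative, stable polynomial
N = 200
x = randn(N)               # zero-mean white-noise input
y = zeros(N)
for n in 1:N
    y[n] = x[n]
    for k in 1:min(n - 1, length(a) - 1)
        y[n] -= a[k + 1] * y[n - k]   # move past outputs to the right-hand side
    end
end
```

Feeding such a series back into arburg with p = 2 should recover coefficients close to a.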

Reflection coefficients

Details

_Reflection coefficients_ are the partial autocorrelation coefficients scaled by −1. They indicate the time dependence between y(n) and y(n−k) after subtracting the prediction based on the intervening k−1 time steps.
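This scaling can be illustrated with a Levinson-Durbin recursion on an autocorrelation sequence; `levinson_rc` is a hypothetical helper written for this sketch, not part of EngeeDSP.

```julia
# Hypothetical helper: Levinson-Durbin recursion on an autocorrelation
# sequence r (r[1] = lag 0). Returns the reflection coefficients; the
# partial autocorrelation at lag m is then -rc[m], i.e. scaled by -1.
function levinson_rc(r::AbstractVector{<:Real}, p::Integer)
    a = [1.0]                 # AR polynomial built up so far
    E = r[1]                  # prediction error power
    rc = zeros(p)
    for m in 1:p
        k = -sum(a[j] * r[m + 2 - j] for j in 1:m) / E
        rc[m] = k
        a = [a; 0.0] .+ k .* reverse([a; 0.0])
        E *= 1 - k^2
    end
    return rc
end

# For an AR(1) process with coefficient 0.6, r(k) = 0.6^k and the partial
# autocorrelation cuts off after lag 1, so rc ≈ [-0.6, 0.0].
rc = levinson_rc([1.0, 0.6, 0.36, 0.216], 2)
```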

Algorithms

The Burg method computes the reflection coefficients and uses them to estimate the autoregressive parameters recursively. The recursion relations and the lattice-filter equations describing the update of the forward and backward prediction errors can be found in [1].
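As a rough sketch of the lattice recursion (illustrative only, not the EngeeDSP implementation; `burg_sketch` is a name invented here), the method can be written in plain Julia:

```julia
# Sketch of Burg's lattice recursion following Kay [1].
# Returns (a, E, rc): AR polynomial, final error power, reflection coefficients.
function burg_sketch(x::AbstractVector{<:Real}, p::Integer)
    N = length(x)
    f = float.(x)                 # forward prediction errors
    b = float.(x)                 # backward prediction errors
    a = [1.0]                     # AR polynomial, a[1] == 1
    E = sum(abs2, x) / N          # zeroth-order prediction error power
    rc = zeros(p)
    for m in 1:p
        num = 0.0
        den = 0.0
        for n in (m + 1):N
            num += f[n] * b[n - 1]
            den += f[n]^2 + b[n - 1]^2
        end
        k = -2 * num / den        # reflection coefficient, |k| <= 1
        rc[m] = k
        for n in N:-1:(m + 1)     # lattice update of both error sequences
            fn = f[n]
            f[n] = fn + k * b[n - 1]
            b[n] = b[n - 1] + k * fn
        end
        a = [a; 0.0] .+ k .* reverse([a; 0.0])   # Levinson-style update
        E *= 1 - k^2
    end
    return a, E, rc
end
```

On data generated from the AR(4) polynomial used in the examples above, this sketch should recover coefficients close to the reported estimates.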

Literature

  1. Kay, Steven M. Modern Spectral Estimation: Theory and Application. Englewood Cliffs, NJ: Prentice Hall, 1988.