Exported functions and types
ControlSystemIdentification.FRD — Type
FRD(w, r)
Represents frequency-response data. `w` holds the frequency vector and `r` the response. A number of methods are defined on this type, including indexing with frequency ranges (see `Hz` and `rad`) and plotting.
If `r` represents a MIMO frequency response, the dimensions are `ny × nu × nω`.
An object `frd::FRD` can be plotted using `plot(frd, hz=false)` if `using Plots` has been called.
ControlSystemIdentification.FRD — Method
FRD(w, sys::LTISystem)
Generate a frequency-response data object by evaluating the frequency response of `sys` at frequencies `w`.
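A minimal usage sketch; the system `G` and frequency grid below are illustrative assumptions, not part of the docstring:
using ControlSystemsBase, ControlSystemIdentification
w = exp10.(LinRange(-2, 2, 200)) # frequency vector in rad/s
G = tf(1, [1, 1])                # some LTI system
frd = FRD(w, G)                  # frequency-response data of G evaluated at w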
ControlSystemIdentification.Hz — Type
Represents frequencies in Hertz for indexing of `FRD` objects: `frd[2Hz:10Hz]`
ControlSystemIdentification.N4SIDStateSpace — Type
N4SIDStateSpace <: AbstractPredictionStateSpace
The result of statespace model estimation using the n4sid method.
Fields:
- `sys`: estimated model in the form of a `StateSpace` object
- `Q`: estimated covariance matrix of the states
- `R`: estimated covariance matrix of the measurements
- `S`: estimated cross covariance matrix between states and measurements
- `K`: Kalman observer gain
- `P`: solution to the Riccati equation
- `x`: estimated state trajectory (`n4sid`) or initial condition (`subspaceid`)
- `s`: singular value decomposition
- `fve`: fraction of variance explained by singular values
ControlSystemIdentification.PredictionStateSpace — Type
PredictionStateSpace{T, ST <: AbstractStateSpace{T}, KT, QT, RT, ST2} <: AbstractPredictionStateSpace{T}
PredictionStateSpace(sys, K, Q=nothing, R=nothing, S=nothing)
A statespace type that contains an additional Kalman-filter model for prediction purposes.
Arguments:
- `sys`: the underlying statespace system
- `K`: infinite-horizon Kalman gain
- `Q = nothing`: dynamics covariance
- `R = nothing`: measurement covariance
- `S = nothing`: cross-covariance
ControlSystemIdentification.rad — Type
Represents frequencies in rad/s for indexing of `FRD` objects: `frd[2rad:10rad]`
ControlSystemIdentification.apply_fun — Function
apply_fun(fun, d::InputOutputData)
Apply `fun(y)` to all time series `y[, u, [x]] ∈ d` and return a new `iddata` with the transformed series.
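For example, one might remove the per-channel mean from every signal in a dataset (a hedged sketch; `d` is assumed to be an existing iddata object with time in the last dimension):
using Statistics
d0 = apply_fun(y -> y .- mean(y, dims=ndims(y)), d) # subtract the mean of each channel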
ControlSystemIdentification.ar — Method
ar(d::AbstractIdData, na; λ=0, estimator=\, scaleB=false, stochastic=false)
Estimate an AR transfer function `G = 1/A`; the AR process is defined as `A(z⁻¹)y(t) = e(t)`.
Arguments:
- `d`: IdData, see `iddata`
- `na`: order of the model
- `λ`: `λ > 0` can be provided for L₂ regularization
- `estimator`: e.g. `\`, `tls`, `irls`, `rtls`
- `scaleB`: Whether or not to scale the numerator using the variance of the prediction error.
- `stochastic`: if true, returns a transfer function with uncertain parameters represented by `MonteCarloMeasurements.Particles`.
Estimation of AR models using least-squares is known to struggle with heavy measurement noise; using `estimator = tls` can improve the result in this case.
Example
julia> N = 10000
10000
julia> e = [-0.2; zeros(N-1)] # noise e
10000-element Vector{Float64}:
[...]
julia> G = tf([1, 0], [1, -0.9], 1) # AR transfer function
TransferFunction{Discrete{Int64}, ControlSystemsBase.SisoRational{Float64}}
1.0z
----------
1.0z - 0.9
Sample Time: 1 (seconds)
Discrete-time transfer function model
julia> y = lsim(G, e, 1:N)[1][:] # Get output of AR transfer function from input noise e
10000-element Vector{Float64}:
[...]
julia> Gest = ar(iddata(y), 1) # Estimate AR transfer function from output y
TransferFunction{Discrete{Float64}, ControlSystemsBase.SisoRational{Float64}}
1.0z
-------------------------
1.0z - 0.8999999999999998
Sample Time: 1.0 (seconds)
Discrete-time transfer function model
julia> G ≈ Gest # Test if estimation was correct
true
julia> eest = lsim(1/Gest, y, 1:N)[1][:] # recover the input noise e from output y and estimated transfer function Gest
10000-element Vector{Float64}:
[...]
julia> isapprox(eest, e, atol = eps()) # input noise correctly recovered
true
ControlSystemIdentification.arma — Method
model = arma(d::AbstractIdData, na, nc; initial_order=20, method=:ls)
Estimate an autoregressive moving-average (ARMA) model with `na` coefficients in the denominator and `nc` coefficients in the numerator. Returns the model and the estimated noise sequence driving the system.
Arguments:
- `d`: iddata
- `initial_order`: An initial AR model of this order is used to estimate the residuals
- `estimator`: A function `(A,y)->minimizeₓ(Ax-y)`; default is `\` but another option is `wtls_estimator(1:length(y)-initial_order, na, nc, ones(nc))`
See also `estimate_residuals`
ControlSystemIdentification.arma_ssa — Method
arma_ssa(d::AbstractIdData, na, nc; L=nothing, estimator=\, robust=false)
Fit ARMA models using Singular Spectrum Analysis (SSA). A low-rank factorization (svd or robust svd) is performed on the data in order to decompose the signal and the noise. The noise is then used as input to fit an ARMA model.
Arguments:
- `d`: iddata
- `na`: number of denominator parameters
- `nc`: number of numerator parameters
- `L`: length of the lag-embedding used to separate signal and noise. `nothing` corresponds to automatic selection.
- `estimator`: The function to solve the least squares problem. Examples: `\`, `tls`, `irls`, `rtls`.
- `robust`: Use robust PCA to be resistant to outliers.
ControlSystemIdentification.arx — Method
Gtf = arx(d::AbstractIdData, na, nb; inputdelay = ones(Int, size(nb)), λ = 0, estimator=\, stochastic=false)
Fit a transfer function to data using an ARX model and equation-error minimization.
- `nb` and `na` are the number of coefficients of the numerator and denominator polynomials.
Input delay can be added via `inputdelay = d`, which corresponds to an additional delay of `z^-d`. An `inputdelay = 0` results in a direct term. The highest order of the B polynomial is given by `nb + inputdelay - 1`. `λ > 0` can be provided for L₂ regularization. `estimator` defaults to `\` (least squares); an alternative is `estimator = tls` for total least-squares estimation. `arx(Δt, yn, u, na, nb, estimator=wtls_estimator(y, na, nb))` is potentially more robust in the presence of heavy measurement noise. The number of free parameters is `na + nb`.
- `stochastic`: if true, returns a transfer function with uncertain parameters represented by `MonteCarloMeasurements.Particles`.
Supports MISO estimation by supplying an iddata with a matrix `u`, with `nb = [nb₁, nb₂…]` and optional `inputdelay = [d₁, d₂…]`.
This function supports multiple datasets, provided as a vector of iddata objects.
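A hedged end-to-end sketch, simulating a first-order ARX system and re-estimating it (all numbers are illustrative):
using ControlSystemsBase, ControlSystemIdentification
G = tf([0.5], [1, -0.9], 1) # true discrete-time system, Ts = 1
u = randn(1, 500)           # input signal
y = lsim(G, u, 1:500)[1]    # simulated output
d = iddata(y, u, 1)
Gest = arx(d, 1, 1)         # estimate with na = 1, nb = 1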
ControlSystemIdentification.arxar — Method
G, H, e = arxar(d::InputOutputData, na::Int, nb::Union{Int, Vector{Int}}, nd::Int)
Estimate the ARXAR model `Ay = Bu + v`, where `v = He` and `H = 1/D`, using the generalized least-squares method. For more information see Söderström - Convergence properties of the generalized least squares identification method, 1974.
Arguments:
- `d`: iddata
- `na`: order of A
- `nb`: number of coefficients in B; the order is determined by `nb + inputdelay - 1`. In MISO estimation it takes the form `nb = [nb₁, nb₂...]`.
- `nd`: order of D
Keyword Arguments:
- `H = nothing`: prior knowledge about the AR noise model
- `inputdelay = ones(Int, size(nb))`: optional delay of input; `inputdelay = 0` results in a direct term; takes the form `inputdelay = [d₁, d₂…]` in MISO estimation
- `λ = 0`: `λ > 0` can be provided for L₂ regularization
- `estimator = \`: e.g. `\`, `tls`, `irls`, `rtls`; the latter three require `using TotalLeastSquares`
- `δmin = 10e-4`: minimal change in the power of e that specifies convergence
- `iterations = 10`: maximum number of iterations
- `verbose = false`: if true, more information is printed
Example:
julia> N = 500
500
julia> sim(G, u) = lsim(G, u, 1:N)[1][:]
sim (generic function with 1 method)
julia> A = tf([1, -0.8], [1, 0], 1)
TransferFunction{Discrete{Int64}, ControlSystemsBase.SisoRational{Float64}}
1.0z - 0.8
----------
   1.0z
Sample Time: 1 (seconds)
Discrete-time transfer function model
julia> B = tf([0, 1], [1, 0], 1)
TransferFunction{Discrete{Int64}, ControlSystemsBase.SisoRational{Int64}}
1
-
z
Sample Time: 1 (seconds)
Discrete-time transfer function model
julia> G = minreal(B / A)
TransferFunction{Discrete{Int64}, ControlSystemsBase.SisoRational{Float64}}
   1.0
----------
1.0z - 0.8
Sample Time: 1 (seconds)
Discrete-time transfer function model
julia> D = tf([1, 0.7], [1, 0], 1)
TransferFunction{Discrete{Int64}, ControlSystemsBase.SisoRational{Float64}}
1.0z + 0.7
----------
   1.0z
Sample Time: 1 (seconds)
Discrete-time transfer function model
julia> H = 1 / D
TransferFunction{Discrete{Int64}, ControlSystemsBase.SisoRational{Float64}}
   1.0z
----------
1.0z + 0.7
Sample Time: 1 (seconds)
Discrete-time transfer function model
julia> u, e = randn(1, N), randn(1, N)
[...]
julia> y, v = sim(G, u), sim(H * (1/A), e) # simulate process
[...]
julia> d = iddata(y .+ v, u, 1)
InputOutput data of length 500 with 1 outputs and 1 inputs
julia> na, nb, nd = 1, 1, 1
(1, 1, 1)
julia> Gest, Hest, res = arxar(d, na, nb, nd)
(G = TransferFunction{Discrete{Int64}, ControlSystemsBase.SisoRational{Float64}}
    0.9987917259291642
-------------------------
1.0z - 0.7937837464682017
Sample Time: 1 (seconds)
Discrete-time transfer function model, H = TransferFunction{Discrete{Int64}, ControlSystemsBase.SisoRational{Float64}}
          1.0z
-------------------------
1.0z + 0.7019519225937721
Sample Time: 1 (seconds)
Discrete-time transfer function model, e = [...]
ControlSystemIdentification.autocorplot — Function
autocorplot(y, Ts, [lags])
Plot the autocorrelation of `y` for `lags` that default to `1:size(y, 2)÷2`.
ControlSystemIdentification.coherence — Method
κ² = coherence(d; n = length(d)÷10, noverlap = n÷2, window=hamming, method=:welch)
Calculates the magnitude-squared coherence function. κ² close to 1 indicates that the energy in the output signal is well explained by the energy in the input signal. `κ² << 1` indicates that either the system is nonlinear, or a strong noise contributes to the output energy.
- `κ²`: Squared coherence function in the form of an `FRD`.
- `method`: `:welch` or `:corr`. `:welch` uses the Welch method to estimate the power spectral density, while `:corr` uses the Correlogram approach. For `method = :corr`, the additional keyword argument `σ` determines the width of the Gaussian window applied to the estimated correlation functions before FFT. A larger `σ` implies less smoothing.
See also coherenceplot
Extended help:
For the signal model `y = Gu + v`, the squared coherence is defined as
κ²(ω) = |S_yu(ω)|² / (S_uu(ω) S_yy(ω))
from which it is obvious that `0 ≤ κ² ≤ 1`, and that κ² is close to 1 if the noise energy `S_vv` is small compared to the output energy due to the input `u`.
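A minimal usage sketch, assuming `d` is an existing input-output iddata object:
κ² = coherence(d) # returns an FRD containing the squared coherence
coherenceplot(d)  # or compute and plot in one call (requires Plots)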
ControlSystemIdentification.coherenceplot — Function
coherenceplot(d, [(;n=..., noverlap=...)]; hz=false)
Calculates and plots the (squared) coherence function κ. κ close to 1 indicates that the energy in the output signal is well explained by the energy in the input signal. `κ << 1` indicates that either the system is nonlinear, or a strong noise contributes to the output energy.
`hz` indicates Hertz instead of rad/s.
Keyword arguments to `coherence` are supplied as a named tuple as a second positional argument.
ControlSystemIdentification.crosscorplot — Function
crosscorplot(data, [lags])
crosscorplot(u, y, Ts, [lags])
Plot the cross correlation between input and output for `lags` that default to 10% of the length of the dataset on the negative side and 50% on the positive side, but no more than 100 on each side.
ControlSystemIdentification.detrend — Method
detrend(d::AbstractArray)
detrend(d::AbstractIdData)
Remove the mean from `d`.
ControlSystemIdentification.era — Function
era(YY::AbstractArray{<:Any, 3}, Ts, nx::Int, m::Int = 2nx, n::Int = 2nx)
Eigenvalue realization algorithm. The algorithm returns a statespace model.
Arguments:
- `YY`: Markov parameters (impulse response) of size `ny × nu × n_time`
- `Ts`: Sample time
- `nx`: Model order
- `m`: Number of rows in the Hankel matrix
- `n`: Number of columns in the Hankel matrix
ControlSystemIdentification.era — Function
era(d::AbstractIdData, nx; m = 2nx, n = 2nx, l = 5nx, p = l, λ=0, smooth=false)
era(ds::Vector{IdData}, nx; m = 2nx, n = 2nx, l = 5nx, p = l, λ=0, smooth=false)
Eigenvalue realization algorithm. Uses `okid` to find the Markov parameters as an initial step.
The parameter `l` is likely to require tuning; a reasonable starting point is to choose `l` large enough for the impulse response to have mostly dissipated.
If a vector of datasets is provided, the Markov parameters estimated from each experiment are averaged before calling `era`. This allows use of data from multiple experiments to improve the model estimate.
Arguments:
- `nx`: Model order
- `l`: Number of Markov parameters to estimate.
- `λ`: Regularization parameter (don't overuse this; prefer to make more experiments instead)
- `smooth`: If true, the regularization given by `λ` penalizes curvature in the estimated impulse response.
- `p`: Optionally, delete the first `p` columns in the internal Hankel matrices to account for initial conditions != 0. If `x0 != 0`, `p` defaults to `l` for `era`, while when calling `okid` directly, `p` defaults to 0.
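A hedged usage sketch, assuming `d` is an existing iddata object from a single experiment:
sys = era(d, 2)         # 2nd-order model; okid is used internally for the Markov parameters
sys2 = era(d, 2, l=100) # increase l if the impulse response has not dissipated after 5nx samples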
ControlSystemIdentification.estimate_residuals — Method
estimate_residuals(model, y)
Estimate the residuals driving the dynamics of an ARMA model.
ControlSystemIdentification.estimate_x0 — Function
estimate_x0(sys, d, n = min(length(d), 3 * slowest_time_constant(sys)); fixed = fill(NaN, sys.nx))
Estimate the initial state of the system.
Arguments:
- `d`: `iddata`
- `n`: Number of samples to use.
- `fixed`: If a vector of the same length as `x0` is provided, finite values indicate fixed values that are not to be estimated, while nonfinite values are free.
Example
sys = ssrand(2,3,4, Ts=1)
x0 = randn(sys.nx)
u = randn(sys.nu, 100)
y,t,x = lsim(sys, u; x0)
d = iddata(y, u, 1)
x0h = estimate_x0(sys, d, 8, fixed=[Inf, x0[2], Inf, Inf])
x0h[2] == x0[2] # Should be exact equality
norm(x0-x0h) # Should be small
ControlSystemIdentification.filter_bank — Method
filter_bank(basis::AbstractStateSpace{<:Discrete}, signal::AbstractMatrix)
Filter `signal` through all systems in `basis`.
ControlSystemIdentification.find_na — Function
find_na(y::AbstractVector, n::Int)
Plots the RMSE and AIC for model orders up to `n`. Useful for model selection.
ControlSystemIdentification.find_nanb — Function
find_nanb(d::InputOutputData, na, nb; logrms = false, method = :aic)
Plots the RMSE and AIC for model orders up to `na`, `nb`. Useful for model selection. `na` can be either an integer or a range; the same holds for `nb`.
- `logrms`: determines whether or not to plot the base-10 logarithm of the RMS error.
- `method`: determines whether to use the Akaike Information Criterion (`:aic`) or the Final Prediction Error (`:fpe`) to determine the model order.
If the color scale is hard to read due to a few tiles representing very large errors, avoid drawing those tiles by providing ranges for `na` and `nb` instead of integers, e.g., avoid showing model orders smaller than 3 using `find_nanb(d, 3:na, 3:nb)`.
ControlSystemIdentification.find_similarity_transform — Function
find_similarity_transform(sys1, sys2, method = :obsv)
Find `T` such that `ControlSystemsBase.similarity_transform(sys1, T) == sys2`.
Ref: Minimal state-space realization in linear system theory: an overview, B. De Schutter
If `method == :obsv`, the observability matrices of `sys1` and `sys2` are used to find `T`, whereas `method == :ctrb` uses the controllability matrices.
julia> T = randn(3,3);
julia> sys1 = ssrand(1,1,3);
julia> sys2 = ControlSystemsBase.similarity_transform(sys1, T);
julia> T2 = find_similarity_transform(sys1, sys2);
julia> T2 ≈ T
true
ControlSystemIdentification.freqvec — Method
freqvec(h, k)
Return a frequency vector of length `k` for systems with sample time `h`.
ControlSystemIdentification.getARXregressor — Method
getARXregressor(y::AbstractVector, u::AbstractVecOrMat, na, nb; inputdelay = ones(Int, size(nb)))
Returns a shortened output signal `y` and a regressor matrix `A` such that the least-squares ARX model estimate of order `na, nb` is `A\y`.
Returns a regressor matrix used to fit an ARX model of, e.g., the form `A(z)y = B(z)u` with output `y` and input `u`, where the order of autoregression is `na`, the order of the input moving average is `nb`, and an optional input delay is `inputdelay`. Caution: changing the input delay changes the order to `nb + inputdelay - 1`. An `inputdelay = 0` results in a direct term.
Example
A = [1,2*0.7*1,1] # A(z) coeffs
B = [10,5] # B(z) coeffs
u = randn(100) # Simulate 100 time steps with Gaussian input
y = filt(B,A,u)
yr,A = getARXregressor(y,u,3,2) # We assume that we know the system order 3,2
x = A\yr # Estimate model polynomials
plot([yr A*x], lab=["Signal" "Prediction"])
For nonlinear ARX-models, see BasisFunctionExpansions.jl. See also arx
ControlSystemIdentification.getARregressor — Method
yt, A = getARregressor(y::AbstractVector, na)
Returns values such that `x = A\yt`. See `getARXregressor` for more details.
ControlSystemIdentification.iddata — Function
iddata(y, Ts = nothing)
iddata(y, u, Ts = nothing)
iddata(y, u, x, Ts = nothing)
Create a time-domain identification data object.
Arguments
- `y::AbstractArray`: output data (required)
- `u::AbstractArray`: input data (if available)
- `x::AbstractArray`: state data (if available)
- `Ts::Union{Real,Nothing} = nothing`: optional sample time
If the time series are multivariate, time is in the last dimension, i.e., the sizes of the arrays are `(num_variables, num_timepoints)` (see examples below).
Operations on iddata
- append two along the time dimension: `[d1 d2]` (only do this if the state of the system at the end of `d1` is close to the state at the beginning of `d2`)
- index time series: `d[output_index, input_index]`
- index the time axis with indices: `d[time_indices]`
- index the time axis with seconds: `d[3Sec:12Sec]` (`using ControlSystemIdentification: Sec`)
- access number of inputs, outputs and sample time: `d.nu, d.ny, d.Ts`
- access the time vector: `d.t`
- premultiply to scale outputs: `C * d`. Scaling the outputs of a multiple-output system to have roughly the same size is usually recommended before estimating a model in case they have different magnitudes.
- postmultiply to scale inputs: `d * B`
- plot
Examples
julia> iddata(randn(10))
Output data of length 10 with 1 outputs, Ts = nothing
julia> iddata(randn(10), randn(10), 1)
InputOutput data of length 10, 1 outputs, 1 inputs, Ts = 1
julia> d = iddata(randn(2, 10), randn(3, 10), 0.1)
InputOutput data of length 10, 2 outputs, 3 inputs, Ts = 0.1
julia> [d d] # Concatenate along time
InputOutput data of length 20, 2 outputs, 3 inputs, Ts = 0.1
julia> d[1:3]
InputOutput data of length 3, 2 outputs, 3 inputs, Ts = 0.1
julia> d.nu
3
julia> d.t # access time vector
0.0:0.1:0.9
Use of multiple datasets
Some estimation methods support the use of multiple datasets to estimate a model. In this case, the datasets are provided as a vector of iddata objects; methods documented on this page that support this include `arx` and `era`. Several of the other estimation methods can be made to accept multiple datasets with minor modifications.
In some situations, multiple datasets can also be handled by concatenation. For this to be a good idea, the state of the system at the end of one dataset must be close to the state at the beginning of the next, e.g., all experiments start and end at the same operating point.
ControlSystemIdentification.iddata — Method
iddata(y::AbstractArray, u::AbstractArray, w::AbstractVector)
Create a frequency-domain input-output data object. `w` is expected to be in rad/s.
ControlSystemIdentification.iddata — Method
iddata(res::ControlSystemsBase.SimResult)
Create an identification-data object directly from a simulation result.
ControlSystemIdentification.impulseest — Method
ir, t, Σ = impulseest(d::AbstractIdData, n; λ=0, estimator=ls)
Estimates the system impulse response by fitting an `n`:th order FIR model. Returns the impulse-response estimate, time vector, and covariance matrix.
This function only supports single-output data; use `okid` for multi-output data.
See also `impulseestplot` and `okid`.
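A minimal usage sketch, assuming `d` is an existing single-output iddata object:
ir, t, Σ = impulseest(d, 30) # FIR model of order 30
impulseestplot(d, 30)        # same estimate, plotted with a confidence band (requires Plots)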
ControlSystemIdentification.impulseestplot — Function
impulseestplot(data, n; σ = 2)
Estimates the system impulse response by fitting an `n`:th order FIR model and plots the result with a 95% (2σ) confidence band. Note: the confidence bound is drawn around zero, i.e., it is drawn such that one can determine whether or not the impulse response is significantly different from zero.
This method only supports single-output data; use `okid` for multi-output data.
See also `impulseest` and `okid`.
ControlSystemIdentification.kautz — Method
kautz(a::Vector, h)
Construct a discrete-time Kautz basis with poles at `a` and sample time `h`.
ControlSystemIdentification.laguerre — Method
laguerre(a::Number, Nq)
Construct a Laguerre basis of length `Nq` with poles at `-a`.
ControlSystemIdentification.laguerre_id — Method
laguerre_id(a::Number, Nq, Ts)
Construct a discrete-time Laguerre basis of length `Nq` with poles at `-a` for system identification.
For large `Nq`, this basis may be numerically ill-conditioned. Consider applying `balance_statespace` to the resulting basis.
ControlSystemIdentification.laguerre_oo — Method
laguerre_oo(a::Number, Nq)
Construct an output-orthogonalized Laguerre basis of length `Nq` with poles at `-a`.
ControlSystemIdentification.minimum_phase — Method
minimum_phase(G)
Move zeros and poles of `G` from the unstable half plane to the stable half plane. If `G` is a statespace system, it is converted to a transfer function first. This can incur loss of precision.
ControlSystemIdentification.model_spectrum — Method
model_spectrum(f, h, args...; kwargs...)
Arguments:
- `f`: the model-estimation function, e.g., `ar`, `arma`
- `h`: The sample time
- `args`: arguments to `f`
- `kwargs`: keyword arguments to `f`
Example:
using ControlSystemIdentification, DSP
T = 1000
s = sin.((1:T) .* 2pi/10)
S1 = spectrogram(s, window=hanning)
estimator = model_spectrum(ar, 1, 2)
S2 = spectrogram(s, estimator, window=rect)
plot(plot(S1), plot(S2)) # Requires the package LPVSpectral.jl
ControlSystemIdentification.modelfit — Method
modelfit(y, yh)
Compute the model fit of `yh` to `y` as a percentage, sometimes referred to as the normalized root mean square error (NRMSE).
An output of 100 indicates a perfect fit; an output of 0 indicates that the fit is no better than the mean of the data. Negative values are possible if the prediction is worse than predicting the mean of the data.
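A small illustrative sketch (the signals below are synthetic assumptions):
y  = randn(100)
yh = y .+ 0.1 .* randn(100) # a prediction that is close to y
fit = modelfit(y, yh)       # close to 100 for a good fit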
ControlSystemIdentification.n4sid — Method
res = n4sid(data, r=:auto; verbose=false)
Estimate a statespace model using the n4sid method. Returns an object of type `N4SIDStateSpace` where the model is accessed as `res.sys`.
Implements the simplified algorithm (alg 2) from "N4SID: Subspace Algorithms for the Identification of Combined Deterministic Stochastic Systems" PETER VAN OVERSCHEE and BART DE MOOR
The frequency weighting is borrowing ideas from "Frequency Weighted Subspace Based System Identification in the Frequency Domain", Tomas McKelvey 1996. In particular, we apply the output frequency weight matrix (Fy) as it appears in eqs. (16)-(18).
Arguments:
- `data`: Identification data `data = iddata(y, u)`
- `r`: Rank of the model (model order)
- `verbose`: Print stuff?
- `Wf`: A frequency-domain model of measurement disturbances. To focus the attention of the model on a narrow frequency band, try something like `Wf = Bandstop(lower, upper, fs=1/Ts)` to indicate that there are disturbances outside this band.
- `i`: Algorithm parameter; generally no need to tune this
- `γ`: Set this to a value in (0, 1) to stabilize unstable models such that the largest eigenvalue has magnitude γ.
- `zeroD`: defaults to false
See also the newer implementation `subspaceid`, which allows you to choose between different weightings (n4sid being one of them). A more accurate prediction model can sometimes be obtained using `newpem`, which is also unbiased for closed-loop data.
ControlSystemIdentification.newpem — Method
sys, x0, res = newpem(
d,
nx;
zeroD = true,
focus = :prediction,
stable = true,
sys0 = subspaceid(d, nx; zeroD, focus, stable),
metric = abs2,
regularizer = (p, P) -> 0,
output_nonlinearity = nothing,
input_nonlinearity = nothing,
nlp = nothing,
optimizer = BFGS(
linesearch = LineSearches.BackTracking(),
),
store_trace = true,
show_trace = true,
show_every = 50,
iterations = 10000,
time_limit = 100,
x_tol = 0,
f_abstol = 0,
g_tol = 1e-12,
f_calls_limit = 0,
g_calls_limit = 0,
allow_f_increases = false,
)
A new implementation of the prediction-error method (PEM). Note that this is an experimental implementation and subject to breaking changes not respecting semver.
The prediction-error method is an iterative, gradient-based optimization procedure; as such, it can be extra sensitive to signal scaling, and it is recommended to scale `d` before estimation, e.g., by pre- and post-multiplying with diagonal matrices, `d̃ = Dy*d*Du`, and to apply the inverse scaling to the resulting system. In this case we have `ỹ = Dy*y` and `ũ = Du*u`, so the estimated plant relates the scaled signals as `G̃ = Dy*G*Du⁻¹`, hence `G = Dy \ G̃ * Du`, where `G̃` is the plant estimated for the scaled iddata. Example:
Dy = Diagonal(1 ./ vec(std(d.y, dims=2))) # Normalize variance
Du = Diagonal(1 ./ vec(std(d.u, dims=2))) # Normalize variance
d̃ = Dy * d * Du
If an initial guess `sys0` is provided manually, it must also be scaled appropriately.
Arguments:
- `d`: `iddata`
- `nx`: Model order
- `zeroD`: Force a zero `D` matrix
- `stable`: if true, stability of the estimated system will be enforced by eigenvalue reflection using `schur_stab` with `ϵ = 1/100` (default). If `stable` is a real value, that value is used instead of the default `ϵ`.
- `sys0`: Initial guess; if none is provided, `subspaceid` is used as the initial guess.
- `focus`: `:prediction` or `:simulation`. If `:simulation`, the `K` matrix will be zero.
- `optimizer`: One of Optim's optimizers
- `metric`: The metric used to measure residuals. Try, e.g., `abs` for better resistance to outliers.
The rest of the arguments are related to `Optim.Options`.
- `regularizer`: A function of the parameter vector and the corresponding `PredictionStateSpace`/`StateSpace` system that can be used to regularize the estimate.
- `output_nonlinearity`: A function of `(y::Vector, p)` that operates on the output signal at a single time point, `yₜ`, and modifies it in-place. See below for details. `p` is a vector of estimated parameters that can be optimized.
- `input_nonlinearity`: A function of `(u::Matrix, p)` that operates on the entire input signal `u` at once and modifies it in-place. See below for details. `p` is a vector of estimated parameters that is shared with `output_nonlinearity`.
- `nlp`: Initial guess vector for nonlinear parameters. If `output_nonlinearity` is provided, this can optionally be provided.
Nonlinear estimation
Nonlinear systems on Hammerstein-Wiener form, i.e., systems with a static input nonlinearity and a static output nonlinearity with a linear system inbetween, can be estimated as long as the nonlinearities are known. The procedure is
- If there is a known input nonlinearity, manually apply the input nonlinearity to the input signal `u` before estimation, i.e., use the nonlinearly transformed input in the `iddata` object `d`. If the input nonlinearity has unknown parameters, provide the input nonlinearity as a function using the keyword argument `input_nonlinearity` to `newpem`. This function is expected to operate on the entire (matrix) input signal `u` and modify it in-place.
- If the output nonlinearity is invertible, apply the inverse to the output signal `y` before estimation, similar to above.
- If the output nonlinearity is not invertible, provide the nonlinear output transformation as a function using the keyword argument `output_nonlinearity` to `newpem`. This function is expected to operate on the (vector) output signal `y` and modify it in-place. Example:
function output_nonlinearity(y, p)
y[1] = y[1] + p[1]*y[1]^2 # Note how the incoming vector is modified in-place
y[2] = abs(y[2])
end
Please note, `y = f(y)` does not change `y` in-place, but creates a new vector `y` and assigns it to the variable `y`. This is not what we want here.
The second argument to `input_nonlinearity` and `output_nonlinearity` is an (optional) vector of parameters that can be optimized. To use this option, pass the keyword argument `nlp` to `newpem` with a vector of initial guesses for the nonlinear parameters. The nonlinear parameters are shared between output and input nonlinearities, i.e., these two functions will receive the same vector of parameters.
The result of this estimation is the linear system without the nonlinearities.
Example
The following simulates data from a linear system and estimates a model. For an example of nonlinear identification, see the documentation.
using ControlSystemIdentification, ControlSystemsBase, Plots
G = DemoSystems.doylesat()
T = 1000  # Number of time steps
Ts = 0.01 # Sample time
sys = c2d(G, Ts)
nx = sys.nx
nu = sys.nu
ny = sys.ny
x0 = zeros(nx) # actual initial state
sim(sys, u, x0 = x0) = lsim(sys, u; x0)[1]
σy = 1e-1 # Noise covariance
u = randn(nu, T)
y = sim(sys, u, x0)
yn = y .+ σy .* randn.() # Add measurement noise
d = iddata(yn, u, Ts)
sysh, x0h, opt = ControlSystemIdentification.newpem(d, nx, show_every=10)
plot(
    bodeplot([sys, sysh]),
    predplot(sysh, d, x0h), # Include the estimated initial state in the prediction
)
The returned model is of type `PredictionStateSpace` and contains the field `sys` with the system model, as well as covariance matrices and an estimated Kalman gain for a Kalman filter.
See also `nonlinear_pem`.
Extended help
This implementation uses a tridiagonal parametrization of the A-matrix that has been shown to be favourable from an optimization perspective.¹ The initial guess `sys0` is automatically transformed to a special tridiagonal modal form.
[1]: McKelvey, Tomas & Helmersson, Anders. (1997). State-space parametrizations of multivariable linear systems using tridiagonal matrix forms.
The parameter vector used in the optimization takes the following form
p = [trivec(A); vec(B); vec(C); vec(D); vec(K); vec(x0)]
where `ControlSystemIdentification.trivec` vectorizes the `-1, 0, 1` diagonals of `A`. If `focus = :simulation`, `K` is omitted, and if `zeroD = true`, `D` is omitted.
ControlSystemIdentification.noise_model — Method
noise_model(sys::AbstractPredictionStateSpace)
Return a model of the noise driving the system, `v`, in
x(t+1) = A x(t) + B u(t) + K v(t)
y(t)   = C x(t) + D u(t) + v(t)
The model neglects `u` and is given by
x(t+1) = A x(t) + K v(t)
y(t)   = C x(t) + v(t)
Also called the "innovation form". This function calls `ControlSystemsBase.innovation_form`.
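A hedged usage sketch, assuming `sys` is a prediction model returned by, e.g., `subspaceid` or `newpem`:
Gv = noise_model(sys) # innovation-form model from v to y
bodeplot(Gv)          # inspect the disturbance spectrum (requires Plots)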
ControlSystemIdentification.okid — Function
H = okid(d::AbstractIdData, nx, l = 5nx; p = 1, λ=0, estimator = /)
Observer Kalman filter identification. Returns the Markov parameters (impulse response) `H` of size `ny × nu × (l+1)`.
The parameter `l` is likely to require tuning; a reasonable starting point is to choose `l` large enough for the impulse response to have mostly dissipated.
Arguments:
- `nx`: Model order
- `l`: Number of Markov parameters to estimate (length of impulse response).
- `λ`: Regularization parameter
- `smooth`: If true, the regularization given by `λ` penalizes curvature in the estimated impulse response. If `era` is to be used after `okid`, favor a small `λ` with `smooth=true`, but if the impulse response is to be inspected by eye, a larger smoothing can yield a visually more accurate estimate of the impulse response.
- `p`: Optionally, delete the first `p` columns in the internal Hankel matrices to account for initial conditions != 0. If `x0 != 0`, try setting `p` around the same value as `l`.
- `estimator`: Function to use for estimating the Markov parameters. Defaults to `/` (least squares), but can also be a robust option such as `TotalLeastSquares.irls / flts` or `TotalLeastSquares.tls` for a total least-squares solution (errors in variables).
ControlSystemIdentification.plr — Method
G, Gn = plr(d::AbstractIdData, na, nb, nc; initial_order = 20)
Perform pseudo-linear regression to estimate a model of the form `Ay = Bu + Cw`.
The residual sequence is estimated by first estimating a high-order ARX model, whereafter the estimated residual sequence is included in a second estimation problem. The return values are the estimated system model and the estimated noise model. `G` and `Gn` will always have the same denominator polynomial.
ControlSystemIdentification.prediction_error — Method
e = prediction_error(sys::AbstractStateSpace, d::AbstractIdData, args...; kwargs...)
Return the prediction errors `d.y - predict(sys, d, ...)`.
ControlSystemIdentification.prediction_error_filter — Method
prediction_error_filter(sys::AbstractPredictionStateSpace; h=1)
prediction_error_filter(sys::AbstractStateSpace, R1, R2; h=1)
Return a filter that takes `[u; y]` as input and outputs the prediction error `e = y - ŷ`. See also `innovation_form` and `noise_model`. `h ≥ 1` is the prediction horizon. See the function `predictiondata` to generate an `iddata` that has `[u; y]` as inputs.
ControlSystemIdentification.predictiondata — Method
predictiondata(d::AbstractIdData)
Add the output `y` to the input: `u_new = [u; y]`.
ControlSystemIdentification.predplot — Function
predplot(sys, data, x0=:estimate; ploty=true, plote=false, h=1, sysname="")
Plot the system's `h`-step prediction together with the measured output to compare them.
By default, the initial condition `x0` is estimated using the data. To start the simulation from the origin, provide `x0 = :zero` or `x0 = zeros(sys.nx)`.
- `ploty` determines whether or not to plot the measured signal
- `plote` determines whether or not to plot the residual
- `h` is the prediction horizon.
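A minimal usage sketch, assuming `sys` and `d` are an estimated model and the corresponding iddata:
predplot(sys, d)                      # one-step-ahead prediction
predplot(sys, d, h=10, sysname="PEM") # 10-step-ahead prediction with a legend label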
ControlSystemIdentification.prefilter — Method
prefilter(f, d::InputOutputData)
Apply filter coefficients to identification data
ControlSystemIdentification.prefilter — Method
prefilter(d::AbstractIdData, responsetype::FilterType)
Filter both input and output of the identification data using zero-phase filtering (`filtfilt`). Since both input and output are filtered, linear identification will not be affected in any other way than to focus the fit on the selected frequency range, i.e., the range that has high gain in the provided filter. Note: if the system that generated `d` is nonlinear, identification might be severely impacted by this transformation. Verify linearity with, e.g., `coherenceplot`.
ControlSystemIdentification.prefilter — Method
prefilter(d::AbstractIdData, l::Number, u::Number)
Filter input and output with a bandpass filter between `l` and `u` Hz. If `l = 0` a lowpass filter will be used, and if `u = Inf` a highpass filter will be used.
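A minimal usage sketch, assuming `d` is an existing iddata object sampled fast enough for the chosen band:
dp = prefilter(d, 0.1, 10) # focus the fit on the band 0.1 to 10 Hz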
ControlSystemIdentification.ramp_in — Method
ramp_in(d::InputOutputData, h::Int; rev = false)
Multiply the initial `h` samples of input and output signals with a linearly increasing ramp.
ControlSystemIdentification.ramp_out — Method
ramp_out(d::InputOutputData, h::Int)
Multiply the final `h` samples of input and output signals with a linearly decreasing ramp.
ControlSystemIdentification.residualplot — Function
residualplot(model, data)
Plot residual autocorrelation and input-residual correlation.
ControlSystemIdentification.schur_stab — Method
schur_stab(A::AbstractMatrix{T}, ϵ = 0.01)
Stabilize the eigenvalues of the discrete-time matrix `A` by transforming `A` to complex Schur form and projecting unstable eigenvalues `1-ϵ < λ ≤ 2` into the unit disc. Eigenvalues with magnitude greater than 2 are set to 0.
ControlSystemIdentification.simplot — Function
simplot(sys, data, x0=:estimate; ploty=true, plote=false, sysname="")
Plot the system simulation together with the measured output to compare them.
By default, the initial condition `x0` is estimated using the data. To start the simulation from the origin, provide `x0 = :zero` or `x0 = zeros(sys.nx)`.
- `ploty` determines whether or not to plot the measured signal
- `plote` determines whether or not to plot the residual
ControlSystemIdentification.subspaceid — Method
model, x0 = subspaceid(frd::FRD, Ts, args...; estimate_x0 = false, bilinear_transform = false, kwargs...)
If a frequency-response data object is supplied:
- The FRD will be automatically converted to an `InputOutputFreqData`.
- `estimate_x0` is by default set to 0.
- `bilinear_transform` transforms the frequency vector to discrete time, see the note below.
Note: if the frequency-response data comes from a frequency-response analysis, a bilinear transform of the data is required before estimation. This transform will be applied if `bilinear_transform = true`.
ControlSystemIdentification.subspaceid — Method
subspaceid(
data::InputOutputData,
nx = :auto;
verbose = false,
r = nx === :auto ? min(length(data) ÷ 20, 50) : nx + 10, # the maximal prediction horizon used
s1 = r, # number of past outputs
s2 = r, # number of past inputs
W = :MOESP,
zeroD = false,
stable = true,
focus = :prediction,
svd::F1 = svd!,
scaleU = true,
Aestimator::F2 = \,
Bestimator::F3 = \,
weights = nothing,
)
Estimate a state-space model using subspace-based identification. Several different subspace-based algorithms are available and can be chosen using the `W` keyword. Options are `:MOESP, :CVA, :N4SID, :IVM`.
Ref: Ljung, Theory for the user.
Resistance against outliers can be improved by supplying a custom factorization algorithm and replacing the internal least-squares estimators. See the documentation for the keyword arguments `svd`, `Aestimator`, and `Bestimator` below.
The returned model is of type `N4SIDStateSpace` and contains the field `sys` with the system model, as well as covariance matrices for a Kalman filter.
Arguments:
- `data`: Identification data `iddata`
- `nx`: Rank of the model (model order)
- `verbose`: Print stuff?
- `r`: Prediction horizon. The model may perform better on simulation if this is made longer, at the expense of more computation time.
- `s1`: past horizon of outputs
- `s2`: past horizon of inputs
- `W`: Weight type, choose between `:MOESP, :CVA, :N4SID, :IVM`
- `zeroD`: Force the `D` matrix to be zero.
- `stable`: Stabilize unstable system using eigenvalue reflection.
- `focus`: `:prediction` or `:simulation`
- `svd`: The function to use for `svd`. For resistance against outliers, consider using `TotalLeastSquares.rpca` to preprocess the data matrix before applying `svd`, like `svd = A -> svd!(rpca(A)[1])`.
- `scaleU`: Rescale the input channels to have the same energy.
- `Aestimator`: Estimator function used to estimate `A, C`. The default is `\`, i.e., least squares, but robust estimators, such as `irls, flts, rtls` from TotalLeastSquares.jl, can be used to gain resistance against outliers.
- `Bestimator`: Estimator function used to estimate `B, D`. Weighted estimation can be achieved by passing `wls` from TotalLeastSquares.jl together with the `weights` keyword argument.
- `weights`: A vector of weights can be provided if the `Bestimator` is `wls`.
Extended help
A more accurate prediction model can sometimes be obtained using `newpem`, which is also unbiased for closed-loop data (`subspaceid` is biased for closed-loop data, see the example in the docs). The prediction-error method is iterative and generally more expensive than `subspaceid`, and uses this function (by default) to form the initial guess for the optimization.
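A hedged usage sketch, assuming `d` is an existing iddata object; the subspace estimate is used as the starting point for `newpem`:
model = subspaceid(d, 2)                  # N4SIDStateSpace; the statespace model is model.sys
sysh, x0h, opt = newpem(d, 2, sys0=model) # refine with the prediction-error method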
ControlSystemIdentification.subspaceid — Method
model, x0 = subspaceid(data::InputOutputFreqData,
Ts = data.Ts,
nx = :auto;
cont = false,
verbose = false,
r = nx === :auto ? min(length(data) ÷ 20, 20) : 2nx, # Internal model order
zeroD = false,
estimate_x0 = true,
stable = true,
svd = svd!,
Aestimator = \,
Bestimator = \,
weights = nothing
)
Estimate a state-space model using subspace-based identification in the frequency domain.
If results are poor, try modifying `r`, in particular if the amount of data is low.
See the docs for an example.
Arguments:
- `data`: A frequency-domain identification data object.
- `Ts`: Sample time at which the data was collected
- `nx`: Desired model order, an integer or `:auto`.
- `cont`: Return a continuous-time model? A bilinear transformation is used to convert the estimated discrete-time model; see the function `d2c`.
- `verbose`: Print stuff?
- `r`: Internal model order, must be ≥ `nx`.
- `zeroD`: Force the `D` matrix to be zero.
- `estimate_x0`: Estimation of extra parameters to account for initial conditions. This may be required if the data comes from the fft of time-domain data, but may not be required if the data is collected using frequency-response analysis with exactly periodic input and proper handling of transients.
- `stable`: Force the model to be stable (uses `schur_stab`).
- `svd`: The `svd` function to use.
- `Aestimator`: The estimator of the `A` matrix (and initial `C` matrix).
- `Bestimator`: The estimator of the `B`/`D` and `C`/`D` matrices.
- `weights`: An optional vector of frequency weights of the same length as the number of frequencies in `data`.
ControlSystemIdentification.tfest — Function
tfest(
data::FRD,
p0,
link = log ∘ abs;
freq_weight = sqrt(data.w[1]*data.w[end]),
refine = true,
opt = BFGS(),
opts = Optim.Options(
store_trace = true,
show_trace = true,
show_every = 1,
iterations = 100,
allow_f_increases = false,
time_limit = 100,
x_tol = 0,
f_tol = 0,
g_tol = 1e-8,
f_calls_limit = 0,
g_calls_limit = 0,
),
)
Fit a parametric transfer function to frequency-domain data.
The initial phase of the optimization solves a simplified fitting problem to obtain a starting point, and the second stage (if `refine=true`) minimizes the discrepancy between the model and the data response `l` through the chosen link function, i.e., `abs2(link(B/A) - link(l))`.
Arguments:
- `data`: An `FRD` object with frequency-domain data.
- `p0`: Initial parameter guess. Can be a `NamedTuple` or `ComponentVector` with fields `b, a` specifying numerator and denominator as they appear in the call to `tf`, i.e., `(b = [1.0], a = [1.0, 1.0, 1.0])`. Can also be an instance of `TransferFunction`.
- `link`: By default, phase information is discarded in the fitting. To include phase, change to `link = log`.
- `freq_weight`: Apply weighting with the inverse frequency. The value determines the cutoff frequency before which the weight is constant, after which the weight decreases linearly. Defaults to the geometric mean of the smallest and largest frequency.
- `refine`: Indicate whether or not a second optimization stage is performed to refine the results of the first.
- `opt`: The Optim optimizer to use.
- `opts`: `Optim.Options` controlling the solver options.
See also `minimum_phase` to transform a possibly non-minimum-phase system to minimum phase.
ControlSystemIdentification.tfest — Function
H, N = tfest(data, σ = 0.05, method = :corr)
Estimate a transfer function model using the Correlogram approach (default), using the signal model `y = Gu + v`.
Both `H` and `N` are of type `FRD` (frequency-response data).
- `σ` determines the width of the Gaussian window applied to the estimated correlation functions before FFT. A larger `σ` implies less smoothing.
- `H` = Syu/Suu: Process transfer function
- `N` = Sy - |Syu|²/Suu: Estimated noise PSD (also an estimate of the variance of `v`). Note that a PSD is related to the "noise model" `H_noise` used in the system-identification literature as `N = |H_noise|²`. The magnitude curve of the noise model can be visualized by plotting `√(N)`.
- `method`: `:welch` or `:corr`. `:welch` uses the Welch method to estimate the power spectral density, while `:corr` (default) uses the Correlogram approach. If `method = :welch`, the additional keyword arguments `n`, `noverlap` and `window` determine the number of samples per segment (default 10% of data), the number of samples to overlap between segments (default 50%), and the window function to use (default `hamming`), respectively.
Extended help
This estimation method is unbiased if the input is uncorrelated with the noise `v`, but is otherwise biased (e.g., for identification in closed loop).
ControlSystemIdentification.tfest — Method
tfest(data::FRD, basis::AbstractStateSpace;
freq_weight = 1 ./ (data.w .+ data.w[2]),
opt = BFGS(),
metric::M = abs2,
opts = Optim.Options(
store_trace = true,
show_trace = true,
show_every = 50,
iterations = 1000000,
allow_f_increases = false,
time_limit = 100,
x_tol = 1e-5,
f_tol = 0,
g_tol = 1e-8,
f_calls_limit = 0,
g_calls_limit = 0,
)
Fit a parametric transfer function to frequency-domain data using a pre-specified basis.
Arguments:
- `data`: An `FRD` object with frequency-domain data.
- `basis`: A basis for the estimation. See, e.g., `laguerre, laguerre_oo, kautz`.
- `freq_weight`: A vector of weights per frequency. The default is approximately `1/f`.
- `opt`: The Optim optimizer to use.
- `opts`: `Optim.Options` controlling the solver options.
ControlSystemIdentification.wtls_estimator — Function
wtls_estimator(y, na, nb, σu=0)
Create an estimator function for estimation of ARX models in the presence of measurement noise. If the noise variance on the input `σu` (model errors) is known, this can be specified for increased accuracy.
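A hedged usage sketch, assuming `d` is an existing iddata object (the output is accessed as `d.y`) and model orders `na = nb = 2`:
na, nb = 2, 2
Gest = arx(d, na, nb, estimator = wtls_estimator(d.y, na, nb))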
DSP.Filters.resample — Method
DSP.resample(sys::AbstractStateSpace{<:Discrete}, Qd::AbstractMatrix, newh::Real)
Change the sample time of the covariance matrix `Qd` belonging to `sys` to `newh`. This function does not handle the measurement covariance; how to do this depends on context. If the faster sampled signal has the same measurement noise, no change should be made. If the slower sampled signal was downsampled with filtering, the measurement covariance should be increased if the system is changed to a faster sample rate. To maintain the frequency response of the system, the measurement covariance should be modified accordingly.
Arguments:
- `sys`: A discrete-time system that has dynamics-noise covariance matrix `Qd`.
- `Qd`: Covariance matrix of dynamics noise.
- `newh`: The new sample time.
DSP.Filters.resample — Method
resample(sys::AbstractStateSpace{<:Discrete}, newh::Real)
Change the sample time of `sys` to `newh`.
DSP.Filters.resample — Method
dr = resample(d::InputOutputData, f)
Resample iddata `d` with fraction `f`, e.g., `f = fs_new / fs_original`.
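A minimal usage sketch, assuming `d` is an existing iddata object:
dr = resample(d, 0.5) # fs_new = 0.5 * fs_original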
StatsAPI.predict — Method
predict(ARX::TransferFunction, d::InputOutputData)
One-step-ahead prediction for an ARX process. The length of the returned prediction is `length(d) - max(na, nb)`.
Example:
julia> predict(tf(1, [1, -1], 1), iddata(1:10, 1:10))
9-element Vector{Int64}:
2
4
6
8
10
12
14
16
18
StatsAPI.residuals — Method
residuals(ARX::TransferFunction, d::InputOutputData)
Calculates the residuals `v = Ay - Bu` of an ARX process and InputOutputData `d`. The length of the returned residuals is `length(d) - max(na, nb)`.
Example:
julia> ARX = tf(1, [1, -1], 1)
TransferFunction{Discrete{Int64}, ControlSystemsBase.SisoRational{Int64}}
1
-----
z - 1
Sample Time: 1 (seconds)
Discrete-time transfer function model
julia> u = 1:5
1:5
julia> y = lsim(ARX, u, 1:5)[1][:]
5-element Vector{Float64}:
0.0
1.0
3.0
6.0
10.0
julia> d = iddata(y, u)
InputOutput data of length 5 with 1 outputs and 1 inputs
julia> residuals(ARX, d)
4-element Vector{Float64}:
0.0
0.0
0.0
0.0
DelimitedFiles.writedlm — Function
DelimitedFiles.writedlm(io::IO, d::AbstractIdData, args...; kwargs...)
Write identification data to disk.
ControlSystemsBase.c2d — Function
c2d(w::AbstractVector{<:Real}, Ts; w_prewarp = 0)
c2d(frd::FRD, Ts; w_prewarp = 0)
Transform a continuous-time frequency vector `w` or frequency-response data `frd` from continuous to discrete time using a bilinear (Tustin) transform. This is useful in cases where a frequency response is obtained through frequency-response analysis, and the function `subspaceid` is to be used.