extend lpc to complex inputs, other fixes #517

Merged Feb 8, 2024 (25 commits)
4 changes: 2 additions & 2 deletions docs/src/lpc.md
@@ -1,6 +1,6 @@
# `LPC` - Linear Predictive Coding
```@docs
lpc
arburg
levinson
```
53 changes: 37 additions & 16 deletions src/dspbase.jl
@@ -764,18 +764,24 @@ end


"""
xcorr(u; padmode::Symbol=:none, scaling::Symbol=:none)
xcorr(u, v; padmode::Symbol=:none, scaling::Symbol=:none)

With two arguments, compute the cross-correlation of two vectors, by calculating
the similarity between `u` and `v` with various offsets of `v`. Delaying `u`
relative to `v` will shift the result to the right. If one argument is provided,
calculate `xcorr(u, u; kwargs...)`.

The size of the output depends on the `padmode` keyword argument: with `padmode =
:none` the length of the result will be `length(u) + length(v) - 1`, as with `conv`.
With `padmode = :longest`, the shorter of the arguments will be padded so they
are of equal length. This gives a result with length `2*max(length(u), length(v))-1`,
with the zero-lag condition at the center.

The `scaling` keyword argument controls normalization. Possible values are the
default `:none`, which applies no scaling, and `:biased`, which divides the result
by `length(u)`. `:biased` is valid only if the vectors have the same length, or
if only one vector is provided.

# Examples

```jldoctest
@@ -789,19 +795,34 @@
julia> xcorr([1,2,3],[1,2,3])
```
"""
function xcorr(
u::AbstractVector, v::AbstractVector; padmode::Symbol=:none, scaling::Symbol=:none
)
su = size(u, 1); sv = size(v, 1)

if scaling == :biased && su != sv
throw(DimensionMismatch("scaling only valid for vectors of same length"))
end

if padmode == :longest
if su < sv
u = _zeropad_keep_offset(u, sv)
elseif sv < su
v = _zeropad_keep_offset(v, su)
end
elseif padmode != :none
throw(ArgumentError("padmode keyword argument must be either :none or :longest"))
end

res = conv(u, dsp_reverse(conj(v), axes(v)))
if scaling == :biased
res = _normalize!(res, su)
end

return res
end

_normalize!(x::AbstractArray{<:Integer}, sz::Int) = (x ./ sz) # does not mutate x
_normalize!(x::AbstractArray, sz::Int) = (x ./= sz)

# TODO: write specialized (r/)fft-ed autocorrelation functions
xcorr(u::AbstractVector; kwargs...) = xcorr(u, u; kwargs...)
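As a quick usage sketch of the one-argument form and the new `scaling` keyword (not part of the diff; the input vector is an arbitrary choice):

```julia
using DSP

x = [1.0, 2.0, 3.0]

# One argument computes the autocorrelation xcorr(x, x);
# :biased divides the result by length(x)
r = xcorr(x; scaling=:biased)
# r == [3, 8, 14, 8, 3] ./ 3, with the zero-lag value 14/3 at the center
```

Since `xcorr(x, x)` has length `2length(x) - 1`, the zero-lag term always sits at index `length(x)`, which is why the `lpc` code below slices with `[length(x):end]`.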
174 changes: 121 additions & 53 deletions src/lpc.jl
@@ -3,89 +3,157 @@ module LPC

using ..DSP: xcorr

using LinearAlgebra: dot
using LinearAlgebra: dot, BlasComplex, BLAS

export lpc, LPCBurg, LPCLevinson
export lpc, arburg, levinson, LPCBurg, LPCLevinson

# Dispatch types for lpc()
mutable struct LPCBurg; end
mutable struct LPCLevinson; end
struct LPCBurg end
struct LPCLevinson end

"""
lpc(x::AbstractVector, p::Integer, [LPCBurg()])

Given input signal `x` and prediction order `p`, returns IIR coefficients `a`
and average reconstruction error `prediction_err`. Note that this method does
NOT return the leading ``1`` present in the true autocorrelative estimate; it
omits it as it is implicit in every LPC estimate, and must be manually
reintroduced if the returned vector should be treated as a polynomial.

The algorithm used is determined by the last optional parameter, and can be
either `LPCBurg` ([`arburg`](@ref)) or `LPCLevinson` ([`levinson`](@ref)).
"""
function lpc end
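The docstring above can be exercised like this (a sketch mirroring the package tests; exact numbers vary with the random noise):

```julia
using DSP

# AR(3) process: white noise filtered by 1 / (1 + 0.5z⁻¹ + 0.3z⁻² + 0.2z⁻³)
coeffs = [1, 0.5, 0.3, 0.2]
x = filt(1, coeffs, randn(50_000))

a, err = lpc(x, 3)  # Burg method by default
# a should be close to coeffs[2:end], and err close to 1 (the driving-noise variance)
```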

function lpc(x::AbstractVector{<:Number}, p::Integer, ::LPCBurg)
a, prediction_err = arburg(x, p)
popfirst!(a)
return a, prediction_err
end

"""
arburg(x::AbstractVector, p::Integer)

LPC (Linear Predictive Coding) estimation, using the Burg method.
This function implements the mathematics published in [^Lagrange], and
the recursion relation as noted in [^Vos], in turn referenced from [^Andersen].

[^Lagrange]: Enhanced Partial Tracking Using Linear Prediction.
[DAFX 2003 article, Lagrange et al]
(http://www.sylvain-marchand.info/Publications/dafx03.pdf)

[^Vos]: [A Fast Implementation of Burg’s Method]
(https://www.opus-codec.org/docs/vos_fastburg.pdf).
© 2013 Koen Vos, licensed under [CC BY 3.0]
(https://creativecommons.org/licenses/by/3.0/)

[^Andersen]: N. Andersen. Comments on the performance of maximum entropy algorithms.
Proceedings of the IEEE 66.11: 1581-1582, 1978.
"""
function arburg(x::AbstractVector{T}, p::Integer) where T<:Number
# Initialize prediction error with the variance of the signal
unnormed_err = abs(dot(x, x))
prediction_err = unnormed_err / length(x)
R = typeof(prediction_err)
F = promote_type(R, Base.promote_union(T))

ef = collect(F, x) # forward error
eb = copy(ef) # backwards error
a = zeros(F, p + 1); a[1] = 1 # prediction coefficients
rev_buf = similar(a, p) # buffer to store a in reverse
reflection_coeffs = similar(a, p) # reflection coefficients

den = convert(R, 2unnormed_err)
ratio = one(R)

@views for m in 1:p
cf = pop!(ef)
cb = popfirst!(eb)
den = muladd(ratio, den, -(abs2(cf) + abs2(cb)))

k = -2 * dot(eb, ef) / den
reflection_coeffs[m] = k

rev_buf[1:m] .= a[m:-1:1]
@. a[2:m+1] = muladd(k, conj(rev_buf[1:m]), a[2:m+1])

# update prediction errors
for i in eachindex(ef, eb)
ef_i, eb_i = ef[i], eb[i]
ef[i] = muladd(k, eb_i, ef[i])
eb[i] = muladd(conj(k), ef_i, eb[i])
end

ratio = one(R) - abs2(k)
prediction_err *= ratio
end

return conj!(a), prediction_err, reflection_coeffs
end
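A small sketch of how the new `arburg` might be called (illustrative only; the tone signal is an arbitrary choice). Unlike `lpc`, it returns the leading `1` and the reflection coefficients:

```julia
using DSP

x = sin.(2π * 0.05 .* (0:255))    # a pure tone
a, err, ks = arburg(x, 4)

# a has length p + 1 and keeps the leading 1 that `lpc` strips;
# Burg reflection coefficients satisfy |k| ≤ 1, so err stays non-negative
```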

"""
lpc(x::AbstractVector, p::Int, LPCLevinson())

LPC (Linear-Predictive-Code) estimation, using the Levinson method. This
function implements the mathematics described in [1].
function lpc(x::AbstractVector{<:Number}, p::Integer, ::LPCLevinson)
R_xx = xcorr(x; scaling=:biased)[length(x):end]
a, prediction_err = levinson(R_xx, p)
return a, prediction_err
end

[1] - The Wiener (RMS) Error Criterion in Filter Design and Prediction
(N. Levinson, Studies in Applied Mathematics 25(1946), 261-278,
https://doi.org/10.1002/sapm1946251261)
"""
function lpc(x::AbstractVector{<:Number}, p::Int, ::LPCLevinson)
R_xx = xcorr(x,x)[length(x):end]
a = zeros(p,p)
prediction_err = zeros(1,p)

levinson(x::AbstractVector, p::Integer)

Implements Levinson recursion, as described in [^Levinson], to find
the solution `a` of the linear equation
```math
\\mathbf{T} (-\\vec{a})
=
\\begin{bmatrix}
x_2 \\\\
\\vdots \\\\
x_{p+1}
\\end{bmatrix}
```
in the case where ``\\mathbf{T}`` is Hermitian and Toeplitz, with first column `x[1:p]`.
This function can be used for LPC (Linear Predictive Coding) estimation,
by providing `LPCLevinson()` as an argument to `lpc`.

[^Levinson]: The Wiener (RMS) Error Criterion in Filter Design and Prediction.
N. Levinson, Studies in Applied Mathematics 25(1946), 261-278.\\
<https://doi.org/10.1002/sapm1946251261>
"""
function levinson(R_xx::AbstractVector{U}, p::Integer) where U<:Number
# for m = 1
k = -R_xx[2] / R_xx[1]
F = promote_type(Base.promote_union(U), typeof(k))
prediction_err = real(R_xx[1] * (one(F) - abs2(k)))
R = typeof(prediction_err)
T = promote_type(F, R)

a = zeros(T, p)
reflection_coeffs = zeros(T, p)
a[1] = reflection_coeffs[1] = k
rev_a = similar(a, p - 1) # buffer to store a in reverse

@views for m = 2:p
rev_a[1:m-1] .= a[m-1:-1:1]
k = -(R_xx[m+1] + dotu(R_xx[2:m], rev_a[1:m-1])) / prediction_err
@. a[1:m-1] = muladd(k, conj(rev_a[1:m-1]), a[1:m-1])
a[m] = reflection_coeffs[m] = k
prediction_err *= (one(R) - abs2(k))
end

# Return autocorrelation coefficients, error estimate, and reflection coefficients
return a, prediction_err, reflection_coeffs
end
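For concreteness, a worked call hand-checked against the recursion above (the input is just an arbitrary Toeplitz first column, not a true autocorrelation, so the error estimate is not meaningful here):

```julia
using DSP

R = collect(1.0:10.0)        # first column of the Hermitian Toeplitz matrix
a, err, ks = levinson(R, 3)
# a ≈ [-1.25, 0.0, -0.25], matching the real part of the test added in this PR
```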

# for convenience, define dotu as dot for real vectors
dotu(x::AbstractVector{<:Real}, y::AbstractVector{<:Real}) = dot(x, y)
dotu(x::AbstractVector{T}, y::AbstractVector{T}) where T<:BlasComplex = BLAS.dotu(x, y)
function dotu(x::AbstractVector{T}, y::AbstractVector{V}) where {T,V}
dotprod = zero(promote_type(T, V))
for i in eachindex(x, y)
dotprod = muladd(x[i], y[i], dotprod)
end
dotprod
end
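The distinction from `dot` matters for complex data: `dot` conjugates its first argument, while `dotu` does not, which is what the complex Levinson recursion needs. A quick sketch of the semantics (`dotu` itself is internal to the `LPC` module):

```julia
using LinearAlgebra: dot

x = [1 + 2im]
y = [3 + 4im]

sum(x .* y)   # unconjugated product, what dotu computes: (1+2im)(3+4im) = -5 + 10im
dot(x, y)     # conjugated: conj(1+2im) * (3+4im) = 11 - 2im
```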

# Default users to using Burg estimation as it is in general more stable
lpc(x, p) = lpc(x, p, LPCBurg())
4 changes: 4 additions & 0 deletions test/dsp.jl
@@ -264,6 +264,10 @@ end
@test xcorr(off_a, off_b, padmode = :longest) == OffsetVector(vcat(0, exp), -3:1)

@test_throws ArgumentError xcorr([1], [2]; padmode=:bug)
@test_throws DimensionMismatch xcorr([1], [2,3]; scaling=:biased)

# check that :biased doesn't throw InexactError
@test xcorr([1, 2], [3, 4]; scaling=:biased) == [2.0, 5.5, 3.0]
end

@testset "deconv" begin
21 changes: 16 additions & 5 deletions test/lpc.jl
@@ -1,10 +1,21 @@
# Filter some noise, try to learn the coefficients
@testset "$method" for method in (LPCBurg(), LPCLevinson())
@testset "$T" for T in (Float64, ComplexF64)
coeffs = T <: Complex ? [1, 0.5 + 0.2im, 0.3 - 0.5im, 0.2] : [1, .5, .3, .2]
x = filt(1, coeffs, randn(T, 50000))

# Analyze the filtered noise, attempt to reconstruct the coefficients above
ar, e = lpc(x[1000:end], length(coeffs)-1, method)

@test all(<(0.01), abs.(ar .- coeffs[2:end]))
@test isapprox(e, 1; rtol=0.01)
end
end

# test dotu, levinson with complex Int coefficients
@test isapprox(levinson(complex(1:10), 3)[1] |> real, -[1.25, 0, 0.25])

# test that lpc defaults to Burg
@test let v = rand(1000)
lpc(v, 20) == lpc(v, 20, LPCBurg())
end