Basics: Transforms
Rank-nullity: dim(R(A)) + dim(N(A*)) = m (since R(A) = N(A*)^⊥), dim(R(A*)) + dim(N(A)) = n (since R(A*) = N(A)^⊥).

Sylvester's inequality: If A is an M × N matrix and B is an N × K matrix, then rank(A) + rank(B) − N ≤ rank(AB).

Invertibility (equivalences): For A a square matrix: N(A) = {0} (trivial nullspace), full-rank matrix, no 0 eigenvalue, determinant not equal to 0.

Cholesky decomposition: Any hermitian PD matrix A can be expressed as A = LL*, where L is an invertible lower triangular matrix with real and positive diagonal entries.

Vandermonde matrix: A Vandermonde matrix V of size M × N has entries of the form V_{i,j} = t_i^j for i = 0, . . . , M − 1, j = 0, . . . , N − 1, and it is of full rank if t_l ≠ t_m when l ≠ m, since det(V) = Π_{0≤l<m≤N−1} (t_l − t_m).
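A minimal NumPy sketch (the nodes t_l are arbitrary; np.vander is used for convenience) checking that a square Vandermonde matrix with distinct nodes is full rank and that its determinant matches the product formula above, up to the sign fixed by the index ordering.

```python
import numpy as np

# Arbitrary distinct nodes t_0, ..., t_{N-1}; any distinct values give full rank.
t = np.array([0.3, -1.2, 2.0, 0.7])
N = len(t)

V = np.vander(t, N, increasing=True)      # V[i, j] = t_i ** j

# Product formula for the Vandermonde determinant (sign depends on index ordering).
prod_formula = np.prod([t[l] - t[m] for l in range(N) for m in range(l + 1, N)])

print(np.linalg.matrix_rank(V) == N)                          # True: full rank
print(np.isclose(abs(np.linalg.det(V)), abs(prod_formula)))   # True, up to sign
```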
Circulant matrix: Implements circular convolution. Its rows are circular rotations of a sequence. They are diagonalized by the DFT matrix.

Toeplitz matrix: Constant coefficients along diagonals. Implements linear convolution. In a matrix representation of a linear and shift-invariant system, the matrix will be Toeplitz.
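A short NumPy/SciPy sketch (filter and signal are arbitrary) checking the two statements above: a circulant matrix implements circular convolution and is diagonalized by the DFT matrix, while a Toeplitz matrix built from the same sequence implements linear convolution.

```python
import numpy as np
from scipy.linalg import circulant, toeplitz

h = np.array([1.0, 2.0, 0.5, -1.0])        # arbitrary filter / generating sequence
x = np.array([0.3, -0.2, 1.5, 0.7])
N = len(h)

# Circulant matrix: multiplication is circular convolution, and the DFT diagonalizes it.
C = circulant(h)
print(np.allclose(C @ x, np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))))  # True
F = np.fft.fft(np.eye(N))                   # DFT matrix
print(np.allclose(F @ C @ np.linalg.inv(F), np.diag(np.fft.fft(h))))            # True

# Toeplitz matrix: multiplication is linear convolution (here the full convolution).
T = toeplitz(np.concatenate([h, np.zeros(N - 1)]), np.zeros(N))   # (2N-1) x N
print(np.allclose(T @ x, np.convolve(h, x)))                                    # True
```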
Unitary matrix: U^{−1} = U* and its eigenvalues satisfy |λ_j| = 1. Unitary matrices are isometries, i.e. ‖Ux‖ = ‖x‖.

Normal matrices: AA* = A*A (so need to be square).

Eigenvalue decomposition: Ax = λx (characteristic equation), ‖x‖ = 1, λ ∈ F. λ verifies det(λI − A) = 0. A square matrix is diagonalizable if ∃ an n × n invertible S and a diagonal matrix Λ s.t. A = SΛS^{−1}. For all square matrices, A = URU* with R upper triangular and U unitary (Schur decomposition). For normal matrices, R = Λ, with Λ diagonal.
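A small SciPy sketch (the random matrix is arbitrary) illustrating the Schur decomposition A = URU* and the fact that R becomes diagonal for a normal matrix; a hermitian matrix is used here as the normal example.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))  # arbitrary square matrix

# Complex Schur decomposition A = U R U*, R upper triangular, U unitary.
R, U = schur(A, output='complex')
print(np.allclose(A, U @ R @ U.conj().T))        # True
print(np.allclose(np.tril(R, -1), 0))            # True: R is upper triangular

# A hermitian matrix is normal, so its Schur form is diagonal (up to round-off).
B = A + A.conj().T
Rb, Ub = schur(B, output='complex')
print(np.allclose(Rb, np.diag(np.diag(Rb))))     # True
```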
Singular value decomposition: A = USV*, U, V unitary, S (e.g. for a tall matrix) a hermitian diagonal block and a 0 block, U = [U1 U0]. This leads to the thin SVD A = U1 S1 V*; U1, U0 form orthonormal bases for R(A), N(A*) resp. U, V are resp. the eigenvectors of the normal matrices AA* and A*A. A† = V S† U* (S† is S^T with inverted singular values).

Determinant of a matrix: Oriented volume of the hyper-parallelepiped defined by the column vectors of A. |det(U)| = 1 for unitary matrices (e.g. rotation and reflection).

Geometric series: For r ≠ 1, Σ_{k=n_0}^{n} r^k = (r^{n_0} − r^{n+1})/(1 − r).
Pseudo-inverse: The pseudo-inverse of A, denoted A†, satisfies all four statements: AA†A = A, A†AA† = A†, (AA†)* = AA† and (A†A)* = A†A. If A has linearly independent columns (A*A non-singular), A*A is invertible and A† = (A*A)^{−1}A* is a left-inverse. If A has linearly independent rows (AA* non-singular), AA* is invertible and A† = A*(AA*)^{−1} is a right-inverse.

Building projections: AA† is the orthogonal projection onto R(A). A†A is the orthogonal projection onto R(A*) (so the orthogonal projection onto N(A) is I − A†A). For U an orthonormal set of vectors, U† = U*.
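A NumPy sketch (random tall matrix, purely illustrative) checking the left-inverse formula and that AA† is the orthogonal projection onto R(A): idempotent, self-adjoint, and fixing vectors already in the range.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))            # tall matrix; columns linearly independent a.s.

A_pinv = np.linalg.inv(A.T @ A) @ A.T      # left-inverse formula (A*A)^{-1} A*
print(np.allclose(A_pinv, np.linalg.pinv(A)))   # matches the Moore-Penrose pseudo-inverse
print(np.allclose(A_pinv @ A, np.eye(3)))       # A† A = I (left-inverse)

P = A @ A_pinv                              # candidate orthogonal projection onto R(A)
print(np.allclose(P @ P, P))                # idempotent
print(np.allclose(P, P.T))                  # self-adjoint
print(np.allclose(P @ A, A))                # fixes every vector in R(A)
```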
Spectral norm: ‖A‖ := sup_{x≠0} ‖Ax‖/‖x‖ = sup_{‖x‖=1} ‖Ax‖ = σ_max(A) = √(λ_max(A*A)). The spectral norm is submultiplicative: ‖AB‖ ≤ ‖A‖ ‖B‖.

Change of coordinates: ∫∫_C f(x, y) dx dy = ∫∫_P f(r, φ) |det [∂x/∂r  ∂x/∂φ; ∂y/∂r  ∂y/∂φ]| dr dφ. In case of switching from cartesian to polar coordinates, x = r cos(φ), y = r sin(φ), r = √(x² + y²), φ = arctan(y/x) and the determinant becomes r.
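A quick numerical sanity check of the polar change of coordinates (the Gaussian integrand and grid sizes are arbitrary illustration choices): a plain Riemann sum of exp(−x²−y²) in cartesian coordinates and of exp(−r²)·r in polar coordinates both approach π.

```python
import numpy as np

# Cartesian grid: integrate f(x, y) = exp(-(x^2 + y^2)) with a plain Riemann sum.
x = np.linspace(-6, 6, 1201)
y = np.linspace(-6, 6, 1201)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, y)
cart = np.sum(np.exp(-(X**2 + Y**2))) * dx * dx

# Polar grid: same integrand becomes exp(-r^2); the Jacobian contributes the factor r.
r = np.linspace(0, 6, 1201)
phi = np.linspace(0, 2 * np.pi, 1201)
dr, dphi = r[1] - r[0], phi[1] - phi[0]
R, PHI = np.meshgrid(r, phi)
polar = np.sum(np.exp(-R**2) * R) * dr * dphi

print(cart, polar, np.pi)   # all three agree to a few decimal places
```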
Dual basis: Φ̃ = Φ(ΦΦ*)^{−1} = ΦG^{−1}, G the Gram matrix.

Change of basis: x, y ∈ H, y = Ax, x = Φα, y = Ψβ, β = Γα, Γ_{i,j} = ⟨Aφ_j, ψ̃_i⟩.

Gram-Schmidt procedure: Want to convert an original set {s^{(k)}} into an orthonormal set {u^{(k)}}. At each step k, p^{(k)} = s^{(k)} − Σ_{n=0}^{k−1} ⟨u^{(n)}, s^{(k)}⟩ u^{(n)}, then u^{(k)} = p^{(k)} / ‖p^{(k)}‖.
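A compact NumPy implementation of the procedure above (the input vectors are the columns of an arbitrary random matrix; function and variable names are mine).

```python
import numpy as np

def gram_schmidt(S):
    """Orthonormalize the columns s^(k) of S, returning columns u^(k)."""
    U = np.zeros_like(S, dtype=float)
    for k in range(S.shape[1]):
        # p^(k) = s^(k) - sum_{n<k} <u^(n), s^(k)> u^(n)
        p = S[:, k].copy()
        for n in range(k):
            p -= np.dot(U[:, n], S[:, k]) * U[:, n]
        U[:, k] = p / np.linalg.norm(p)   # u^(k) = p^(k) / ||p^(k)||
    return U

S = np.random.default_rng(2).standard_normal((5, 3))   # linearly independent a.s.
U = gram_schmidt(S)
print(np.allclose(U.T @ U, np.eye(3)))   # True: columns are orthonormal
```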
Hilbert spaces and projection operators

Vector space: Set of vectors (R^N, functions, ...), field of scalars (real, complex), vector addition, scalar multiplication. Satisfy x + y = y + x (commutativity), (x + y) + z = x + (y + z) and (αβ)x = α(βx) (associativity), α(x + y) = αx + αy and (α + β)x = αx + βx (distributivity), ∃ 0 s.t. x + 0 = x (additive identity), ∃ −x s.t. x + (−x) = 0 (additive inverse), 1x = x (multiplicative identity).

Subspace: S ⊆ V, with V a vector space and S itself one as well. S ≠ ∅, closed under vector addition (x + y ∈ S ∀x, y ∈ S) and scalar multiplication (αx ∈ S ∀x ∈ S, α ∈ F, so {0} ∈ S).

span(S): {Σ_{k=0}^{N} α_k φ_k : α_k ∈ F, φ_k ∈ S, N ∈ N \ {∞}}. Smallest vector space containing the set of vectors S. For S infinite, span(S) still contains only finite linear combinations (o.w. some vectors cannot be represented with a finite linear combination of the φ_k). Always a subspace.

Inner product: ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩ (distributivity), ⟨αx, y⟩ = α⟨x, y⟩ (linearity in the first argument), ⟨x, y⟩* = ⟨y, x⟩ (hermitian symmetry), ⟨x, x⟩ ≥ 0 with equality iff x = 0 (PD). On C^R, ⟨x, y⟩ = ∫_{−∞}^{∞} x(t) y*(t) dt.

Norm: ‖x‖ ≥ 0 with equality iff x = 0 (PD), ‖αx‖ = |α| ‖x‖ (positive scalability), ‖x + y‖ ≤ ‖x‖ + ‖y‖ with equality iff y = αx (triangle inequality). ⟨x, y⟩ = ‖x‖ ‖y‖ cos α.

Pythagorean theorem: For {x_k} orthogonal, ‖Σ_k x_k‖² = Σ_k ‖x_k‖².

Cauchy-Schwarz: |⟨x, y⟩| ≤ ‖x‖ ‖y‖. Equality when x = αy (collinear).

Parallelogram law: ‖x + y‖² + ‖x − y‖² = 2(‖x‖² + ‖y‖²).

Linear operator: A(x + y) = Ax + Ay (additivity), A(αx) = α(Ax) (scalability). Operator norm ‖A‖ = sup_{‖x‖=1} ‖Ax‖.

Adjoint: For A : H_0 → H_1, ⟨Ax, y⟩_{H_1} = ⟨x, A*y⟩_{H_0}. Exists and is unique. ‖A*‖ = ‖A‖. An operator is unitary iff A^{−1} = A*. If a matrix is self-adjoint (hermitian), its eigenvectors define an orthogonal basis. If A is invertible, (A^{−1})* = (A*)^{−1}.
Projection: Bounded linear operator (‖P‖ < ∞; linear operators with finite-dimensional domains are always bounded) that is idempotent, P² = P (in such case ‖P‖ ≥ 1). If self-adjoint, P* = P, it is an orthogonal projection and all eigenvalues are real valued (o.w. oblique). A bounded linear operator P satisfies ⟨x − Px, Py⟩ = 0 ∀x, y ∈ H iff P is an orthogonal projection. If S, T are closed subspaces s.t. H = S ⊕ T, ∃ a projection P on H s.t. S = R(P), T = N(P). In an orthogonal projection, all eigenvalues are either 0 or 1, and we have that ‖Px‖ ≤ ‖x‖ (an orthogonal projection is a contraction). Moreover, if the range space is not trivial, ‖P‖ = 1.

Projection theorem: For S a closed subspace of a Hilbert space H and x ∈ H, ‖x − x̂‖ ≤ ‖x − s‖ ∀s ∈ S iff x − x̂ ⊥ S; x̂ = Px, P the orthogonal projection.

Basis: Φ = {φ_k}_{k∈K} ⊂ V, V = span(Φ), so any x ∈ V can be expressed as x = Σ_{k∈K} α_k φ_k and the expansion coefficients α_k are unique.
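A small NumPy illustration of the projection theorem above (the subspace, test point, and competing points are random, purely for illustration): the orthogonal projection x̂ of x onto S = R(A) is at least as close to x as any other element of S, and the residual is orthogonal to S.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 3))             # S = R(A), a 3-dimensional subspace of R^8
x = rng.standard_normal(8)

P = A @ np.linalg.pinv(A)                   # orthogonal projection onto S
x_hat = P @ x

print(np.allclose(A.T @ (x - x_hat), 0))    # residual x - x_hat is orthogonal to S

# Any other point of S is at least as far from x as x_hat is.
dists = [np.linalg.norm(x - A @ rng.standard_normal(3)) for _ in range(1000)]
print(np.linalg.norm(x - x_hat) <= min(dists))   # True
```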
Discrete Systems

Delta function: ∫_{−∞}^{∞} δ(t) dt = 1, δ(t) = 0 ∀t ≠ 0, ∫_{−∞}^{∞} x(t)δ(t) dt = x(0), ∫_{−∞}^{∞} x(τ)δ(t − τ) dτ = x(t). δ(at) = δ(t)/|a|.

Impulse response: Let y_k(t) = A(x_k(t)). If A is a linear (A(Σ_k α_k x_k(t)) = Σ_k α_k y_k(t)) and shift-invariant (A(x_k(t − τ)) = y_k(t − τ)) system, we can express A as convolution with the impulse response h(t) = A(δ(t)); A(x(t)) = (x ∗ h)(t).

Filter as projection: A filter is said to be a projection if h_n = (h ∗ h)_n, or equivalently H(e^{jω}) = H²(e^{jω}). In case of being a projection, it is an orthogonal projection if h_n = h*_{−n}, or equivalently H(e^{jω}) = H*(e^{jω}).
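A finite, periodic illustration of the filter-as-projection conditions (everything lives on an N-point DFT grid, which is my simplification of the DTFT statement; the band edges are arbitrary): a frequency response taking only the values 0 and 1 gives h = h ⊛ h, and keeping it real gives the orthogonal-projection symmetry h_n = h*_{−n}.

```python
import numpy as np

N = 32
H = np.zeros(N)
H[:5] = 1.0          # ideal low-pass response taking only the values 0 and 1
H[-4:] = 1.0         # make H symmetric as well, so the impulse response is real

h = np.fft.ifft(H)   # impulse response on the N-point grid

# Projection condition h = h (*) h, via circular convolution computed in frequency.
h_conv_h = np.fft.ifft(np.fft.fft(h) * np.fft.fft(h))
print(np.allclose(h, h_conv_h))                      # True, since H^2 = H

# Orthogonal projection condition h_n = conj(h_{-n}) <=> H real.
print(np.allclose(h, np.conj(np.roll(h[::-1], 1))))  # True
```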
Transforms

                      Time domain            Frequency domain
  Fourier Transform   Continuous aperiodic   Continuous aperiodic
  Fourier Series      Continuous periodic    Discrete aperiodic
  DTFT                Discrete aperiodic     Continuous aperiodic
  DFT                 Discrete periodic      Discrete periodic
Fourier Transform

Definition: X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt, x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω.

Continuous-time convolution: (x ∗ y)(t) = ∫_{−∞}^{∞} x(τ) y(t − τ) dτ. Eigenfunctions e^{jωt}.

Parseval: ⟨x(t), y(t)⟩ = (1/2π) ⟨X(ω), Y(ω)⟩.

Properties: x(αt) ↔ (1/|α|) X(ω/α), (x ∗ y)(t) ↔ X(ω)Y(ω), x(t)y(t) ↔ (1/2π)(X ∗ Y)(ω), x(t − τ) ↔ e^{−jωτ} X(ω), e^{jω_0 t} x(t) ↔ X(ω − ω_0), x(−t) ↔ X(−ω), x*(t) ↔ X*(−ω), d^n x(t)/dt^n ↔ (jω)^n X(ω), (−jt)^n x(t) ↔ d^n X(ω)/dω^n, ∫_{−∞}^{t} x(τ) dτ ↔ X(ω)/(jω) (for X(0) = 0).

Poisson sum formula: Σ_{n∈Z} x(t − nT) = (1/T) Σ_{m∈Z} X(2πm/T) e^{j2πmt/T}. Proof: LHS is T-periodic, so compute its Fourier Series and anti-transform the result.

Common transforms: sinc(ω_0 t) = sin(ω_0 t)/(ω_0 t) ↔ (π/ω_0) 1{ω ∈ [−ω_0, ω_0]}. 1{t ∈ [−t_0/2, t_0/2]} ↔ t_0 sinc(t_0 ω/2). Σ_{n∈Z} δ(t − nT) ↔ (2π/T) Σ_{k∈Z} δ(ω − 2kπ/T). 1 − |t|, |t| < 1 ↔ sinc²(ω/2). 1 ↔ 2πδ(ω). δ(t) ↔ 1.
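A numerical check of the Poisson sum formula above (the Gaussian test signal, period T, evaluation points, and truncation of the sums are arbitrary; the Gaussian is convenient because its FT is known in closed form, e^{−t²/2} ↔ √(2π) e^{−ω²/2}).

```python
import numpy as np

T = 1.5
t = np.linspace(-0.7, 0.7, 5)                           # a few evaluation points

x = lambda t: np.exp(-t**2 / 2)                         # x(t) = exp(-t^2/2)
X = lambda w: np.sqrt(2 * np.pi) * np.exp(-w**2 / 2)    # its Fourier transform

n = np.arange(-50, 51)                                  # truncation of both sums
lhs = sum(x(t - k * T) for k in n)
rhs = sum(X(2 * np.pi * m / T) * np.exp(1j * 2 * np.pi * m * t / T) for m in n) / T

print(np.allclose(lhs, rhs.real), np.max(np.abs(lhs - rhs)))   # True, tiny error
```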
Polynomials (finite interval)

Approximate x(t) in a finite interval [a, b] by a polynomial of order K, p_K(t) = Σ_{k=0}^{K} a_k t^k. Approximation error e_K(t) = x(t) − p_K(t), t ∈ [a, b]. Smooth approximation. Can approximate continuous functions arbitrarily well over finite intervals (Weierstrass theorem). Polynomials are infinitely differentiable. Approximating continuous functions with high degree polynomials tends to be problematic (e.g. at the ending points of the interval). Cannot approximate discontinuous functions, or functions over infinite intervals, well.

Least square minimization

Minimize ‖e_K‖₂² = ∫_a^b |x(t) − p_K(t)|² dt. Since P_K([a, b]) = span({1, t, . . . , t^K}) ⊂ L²([a, b]), the solution (by the projection theorem) is p_K(t) = Σ_{k=0}^{K} ⟨x, φ_k⟩ φ_k(t), with {φ_k}_{k=0}^{K} an orthonormal basis of P_K([a, b]). These inner products with basis functions are not always easy to obtain (e.g. if we only have samples). Gibbs phenomenon: No matter how large K is, the absolute error stays the same at the ripple near the boundary.

Legendre polynomials: Orthonormal basis of P_K([−1, 1]). The Legendre polynomial of degree n, L_n(t) = (1/(2^n n!)) d^n/dt^n (t² − 1)^n, has n real distinct zeros in the interior of the interval [−1, 1].
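A sketch of this least-squares projection onto P_K([−1, 1]) using normalized Legendre polynomials (the target function |t|, the degree, and the quadrature order are arbitrary choices; numpy.polynomial.legendre supplies the basis and the Gauss-Legendre nodes used to approximate the inner products).

```python
import numpy as np
from numpy.polynomial import legendre

K = 8
x = lambda t: np.abs(t)                         # target function on [-1, 1]

# Gauss-Legendre nodes/weights to approximate the inner products <x, phi_k>.
nodes, weights = legendre.leggauss(64)

approx = np.zeros_like(nodes)
for k in range(K + 1):
    Lk = legendre.Legendre.basis(k)(nodes)      # Legendre polynomial of degree k
    phi_k = np.sqrt((2 * k + 1) / 2) * Lk       # normalized so that ||phi_k||_2 = 1
    alpha_k = np.sum(weights * x(nodes) * phi_k)   # <x, phi_k> on [-1, 1]
    approx += alpha_k * phi_k                   # p_K = sum_k <x, phi_k> phi_k

print(np.max(np.abs(x(nodes) - approx)))        # residual of the degree-8 fit (small but nonzero)
```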
B-spline as generator: β^{(K)} is a generator of the shift-invariant subspace S_{K,Z}, in fact the one with shortest support. span({β^{(K)}(t − k)}_{k∈Z}) = S_{K,Z} for odd K and S_{K,Z+0.5} for even K.

Differentiation: For x(t) = Σ_{k∈Z} α_k β_+^{(K)}(t − k), d/dt x(t) = Σ_{k∈Z} α'_k β_+^{(K−1)}(t − k), α'_k = α_k − α_{k−1}.

Integration: ∫_{−∞}^{t} x(τ) dτ = Σ_{k∈Z} α_k^{(1)} β_+^{(K+1)}(t − k), α_k^{(1)} = Σ_{m=−∞}^{k} α_m.

Canonical Dual Spline Basis: For the dual basis of {β_+^{(1)}(t − k)}_{k∈Z}, need ⟨β̃_+^{(1)}(t − i), β_+^{(1)}(t − k)⟩ = δ_{i−k}. For the canonical dual, need β̃_+^{(1)} ∈ S_{1,Z}. x̂(t) = Σ_{k∈Z} ⟨x(t), β̃_+^{(1)}(t − k)⟩ β_+^{(1)}(t − k). The dual spline of degree 1 has infinite support but decays exponentially.

Polynomial reproduction (Strang-Fix theorem)

If ∫_{−∞}^{∞} (1 + |t|^K)|φ(t)| dt < ∞ for K ∈ N and φ is a function with FT Φ, the following are equivalent:
(i) p_K(t) = Σ_{k∈Z} α_k φ(t − k) for p_K a polynomial of degree ≤ K.
(ii) Φ and its first K derivatives satisfy Φ(0) ≠ 0 and Φ^{(k)}(2πl) = 0 for k = 0, 1, . . . , K, l ∈ Z \ {0}.

Partition of unity (case of Strang-Fix): φ_1(t) = Σ_{n∈Z} φ(t − n) = 1 (periodized version with period 1 of φ ∈ L¹(R)) iff Φ(2πk) = δ_k, k ∈ Z.
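A numerical illustration of both statements for the degree-1 B-spline (the hat function, written here in its centered form; using the causal β_+^{(1)} instead only relabels the coefficients): it satisfies the Strang-Fix conditions up to K = 1, so its integer shifts sum to 1 and reproduce degree-1 polynomials. Grid and number of shifts are arbitrary.

```python
import numpy as np

def hat(t):
    """Centered degree-1 B-spline (hat function), support [-1, 1]."""
    return np.maximum(1 - np.abs(t), 0)

t = np.linspace(-3, 3, 601)
shifts = np.arange(-10, 11)

# Partition of unity: sum_n phi(t - n) = 1.
ones = sum(hat(t - n) for n in shifts)
print(np.allclose(ones, 1.0))          # True

# Polynomial reproduction of degree 1: sum_n n * phi(t - n) = t.
ramp = sum(n * hat(t - n) for n in shifts)
print(np.allclose(ramp, t))            # True
```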