Method of Least Squares
- Assuming a multiple linear regression model, the method chooses the tap weights to minimize the sum of squared errors (a minimal numerical sketch follows this list).
- When the error process is white and zero-mean, the least-squares estimate is the best linear unbiased estimate (BLUE).
- When the error process is white Gaussian and zero-mean, the least-squares estimate achieves the Cramér-Rao lower bound (CRLB) for unbiased estimates and is hence a minimum-variance unbiased estimate (MVUE).
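To make this concrete, here is a minimal NumPy sketch of the batch least-squares solution. The filter length, data, and variable names (M, N, w_true, U, d) are illustrative assumptions, not part of the original notes.

```python
import numpy as np

# Hypothetical setup: a length-M FIR model d(n) = u(n)^T w + e(n),
# fit by minimizing the sum of squared errors over N samples.
rng = np.random.default_rng(0)
M, N = 4, 200                                   # taps, samples (illustrative)
w_true = np.array([1.0, -0.5, 0.25, 0.1])       # assumed "true" tap weights

U = rng.standard_normal((N, M))                 # rows are input vectors u(n)
d = U @ w_true + 0.01 * rng.standard_normal(N)  # desired response with white, zero-mean noise

# Solve the normal equations Phi w = z, where Phi = U^T U is the
# time-averaged correlation matrix and z = U^T d; lstsq does this stably.
w_ls, *_ = np.linalg.lstsq(U, d, rcond=None)
print(w_ls)  # close to w_true; under white zero-mean noise this estimate is BLUE
```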
Recursive Least Squares (RLS)
- Allows one to update the tap weights recursively as new input samples become available.
- Can incorporate additional constraints, such as weighted squared errors or a regularizing term (commonly applied because the problem is often ill-posed).
- By the matrix inversion lemma, the inversion of the correlation matrix is replaced by a simple scalar division (see the sketch after this list).
- The initial correlation matrix provides a means to specify the regularization.
- The fundamental difference between RLS and LMS:
  - The step-size parameter μ in LMS is replaced by Φ⁻¹(n), the inverse of the correlation matrix of the input u(n), which has the effect of whitening the inputs.
- The rate of convergence of RLS is invariant to the eigenvalue spread of the ensemble-average input correlation matrix R.
- The excess mean-square error converges to zero if a stationary environment is assumed and the exponential weighting factor is set to λ = 1.
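Below is a minimal sketch of the standard exponentially weighted RLS recursion. The forgetting factor lam, the initialization constant delta, and the synthetic data are assumptions for illustration; only the update equations themselves are standard.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 500                                # taps, samples (illustrative)
w_true = np.array([1.0, -0.5, 0.25, 0.1])    # assumed "true" tap weights
lam = 0.99                                   # exponential weighting factor λ
delta = 1e-2                                 # small positive constant

w = np.zeros(M)                              # tap-weight estimate
P = np.eye(M) / delta                        # P(n) = Φ⁻¹(n); this initialization sets the regularization

for n in range(N):
    u = rng.standard_normal(M)                      # input vector u(n)
    d = u @ w_true + 0.01 * rng.standard_normal()   # desired response d(n)

    # Gain vector: the matrix inversion lemma turns the update of Φ⁻¹ into a
    # rank-one correction whose only "inversion" is the scalar division below.
    k = (P @ u) / (lam + u @ P @ u)
    e = d - w @ u                            # a priori estimation error
    w = w + k * e                            # tap-weight update
    P = (P - np.outer(k, u @ P)) / lam       # update of the inverse correlation matrix

print(w)  # converges toward w_true
```

Note that the step direction k plays the role that μ·u(n) plays in LMS, with Φ⁻¹(n) supplying the whitening described above.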