The MMSE estimator is the mean $E(x|y)$ of the posterior pdf of $x$ given the observation $y$.
- The estimator is unbiased.
- The covariance is reduced compared to the a priori information.
- Commutes with affine transformations.
- Additivity property for independent data sets.
- Linear in the Gaussian case.
- The estimation error is orthogonal to the space spanned by all $Y$-measurable functions (affine functions being a subset).
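A minimal Monte Carlo sketch of the first two properties in the scalar Gaussian case (all numbers here are illustrative assumptions, not from the text): with prior $x \sim N(\mu_0, \sigma_0^2)$ and observation $y = x + w$, $w \sim N(0, \sigma_w^2)$, the posterior mean shrinks $y$ toward the prior mean, and the resulting error is (approximately) zero-mean with variance below the prior variance.

```python
import numpy as np

# Scalar Gaussian sketch: prior x ~ N(mu0, s0^2), observation y = x + w,
# noise w ~ N(0, sw^2).  The posterior mean is
#   E(x|y) = mu0 + s0^2/(s0^2 + sw^2) * (y - mu0)
# with posterior variance s0^2*sw^2/(s0^2 + sw^2) < s0^2.
rng = np.random.default_rng(0)
mu0, s0, sw = 1.0, 2.0, 1.0
n = 200_000

x = rng.normal(mu0, s0, n)          # draws from the prior
y = x + rng.normal(0.0, sw, n)      # noisy observations

gain = s0**2 / (s0**2 + sw**2)      # shrinkage toward the prior mean
x_hat = mu0 + gain * (y - mu0)      # MMSE estimate E(x|y)

err = x - x_hat
print("mean error   :", err.mean())   # ~ 0  (unbiased)
print("error var    :", err.var())    # ~ s0^2*sw^2/(s0^2+sw^2) = 0.8
print("prior var    :", s0**2)        # 4.0 (covariance is reduced)
```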
The MAP estimator is $\hat{\theta}_{\mathrm{MAP}} = \arg\max_\theta p(\theta|x)$ given the observation $x$.
- In the jointly Gaussian case, MAP = MMSE (the posterior is Gaussian, hence unimodal and symmetric, so mean = mode = median).
- Does not commute with nonlinear transformations (the invariance property does not hold, unlike ML).
- Commutes with linear transformations.
MAP tends to ML when
- The prior is uninformative
- The data carry a large amount of information compared to the prior
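A short sketch of the second effect on an assumed toy model (all numbers illustrative): for $x_i \sim N(\theta, \sigma^2)$ with Gaussian prior $\theta \sim N(\mu_0, \tau^2)$, MAP has a closed form that blends the sample mean with the prior mean; widening the prior ($\tau \to \infty$) makes it converge to the ML estimate $\bar{x}$.

```python
import numpy as np

# Assumed model: x_i ~ N(theta, sigma^2), prior theta ~ N(mu0, tau^2).
# Closed-form MAP (posterior mean = mode here, since the posterior is
# Gaussian):
#   theta_MAP = (n*xbar/sigma^2 + mu0/tau^2) / (n/sigma^2 + 1/tau^2)
# The ML estimate is just the sample mean xbar.
rng = np.random.default_rng(1)
sigma, mu0 = 1.0, 0.0
x = rng.normal(5.0, sigma, 50)   # data generated with true theta = 5.0
xbar = x.mean()                  # ML estimate
n = len(x)

for tau in (0.1, 1.0, 10.0, 100.0):
    theta_map = (n * xbar / sigma**2 + mu0 / tau**2) / (n / sigma**2 + 1 / tau**2)
    print(f"tau={tau:6.1f}  MAP={theta_map:.4f}  ML={xbar:.4f}")
# As tau grows (prior becomes uninformative), MAP approaches ML.
```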
Gaussian linear model
Let the observed samples follow the model
$x = H\theta + w$, with prior $\theta \sim N(\mu_\theta, C_\theta)$ and noise vector $w \sim N(0, C_w)$ independent of $\theta$. Then the posterior is Gaussian with mean
$E(\theta|x) = \mu_\theta + C_\theta H^T (H C_\theta H^T + C_w)^{-1} (x - H\mu_\theta)$ and covariance $C_{\theta|x} = C_\theta - C_\theta H^T (H C_\theta H^T + C_w)^{-1} H C_\theta$. Contrary to the classical Gaussian linear model, $H$ does not need to be full rank.
In alternative form,
$E(\theta|x) = \mu_\theta + (C_\theta^{-1} + H^T C_w^{-1} H)^{-1} H^T C_w^{-1} (x - H\mu_\theta)$ and $C_{\theta|x} = (C_\theta^{-1} + H^T C_w^{-1} H)^{-1}$
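The two forms are equivalent by the matrix inversion lemma. The sketch below checks this numerically on assumed toy dimensions and random positive-definite covariances, deliberately using a rank-deficient $H$ (rank 1), which the Bayesian model tolerates.

```python
import numpy as np

rng = np.random.default_rng(2)
p, m = 3, 5
# Rank-deficient H (rank 1): allowed here, unlike in the classical model.
H = np.outer(rng.normal(size=m), rng.normal(size=p))

def spd(n):
    # Random symmetric positive-definite covariance matrix.
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

C_th, C_w = spd(p), spd(m)
mu_th = rng.normal(size=p)
x = rng.normal(size=m)

# Form 1: gain (innovation) form.
S = H @ C_th @ H.T + C_w
K = C_th @ H.T @ np.linalg.inv(S)
mean1 = mu_th + K @ (x - H @ mu_th)
cov1 = C_th - K @ H @ C_th

# Form 2: information (precision) form.
cov2 = np.linalg.inv(np.linalg.inv(C_th) + H.T @ np.linalg.inv(C_w) @ H)
mean2 = mu_th + cov2 @ H.T @ np.linalg.inv(C_w) @ (x - H @ mu_th)

print(np.allclose(mean1, mean2), np.allclose(cov1, cov2))  # True True
```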
LMMSE estimator $E^*[X|Y]$
- A function of first- and second-order statistics only: $E^*[X|Y] = \mu_x + \Sigma_{xy}\Sigma_{yy}^{-1}(y - \mu_y)$ (the inverse can be replaced with a pseudo-inverse if necessary)
- In the jointly Gaussian case, $E^*[X|Y] = E[X|Y]$
- Error orthogonal to the subspace spanned by $Y$
- Additivity property: $E^*[X|Y_1,\dots,Y_k] = \sum_{j=1}^{k} E^*[X|Y_j] - (k-1)\mu_x$
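A sketch of the LMMSE formula and the additivity property on an assumed toy model where the observations are uncorrelated (the setting in which additivity holds, matching the "independent data sets" condition above): with $Z_1, Z_2$ independent unit-variance variables, take $X = \mu_x + Z_1 + Z_2$, $Y_1 = Z_1$, $Y_2 = Z_2$, so the exact second-order statistics are known in closed form.

```python
import numpy as np

# Assumed toy model: Z1, Z2 independent unit-variance, X = mu_x + Z1 + Z2,
# Y1 = Z1, Y2 = Z2, so Y1 and Y2 are uncorrelated.
mu_x = 3.0
S_xy = np.array([1.0, 1.0])       # Cov(X, [Y1, Y2])
S_yy = np.eye(2)                  # Cov([Y1, Y2]) is block diagonal
y = np.array([0.7, -1.2])         # an arbitrary observation (mu_y = 0)

# Joint LMMSE: E*[X|Y1,Y2] = mu_x + S_xy S_yy^{-1} (y - mu_y).
joint = mu_x + S_xy @ np.linalg.solve(S_yy, y)

# Per-observation LMMSE: E*[X|Yj] = mu_x + Cov(X,Yj)/Var(Yj) * yj.
e1 = mu_x + 1.0 * y[0]
e2 = mu_x + 1.0 * y[1]

# Additivity holds because Y1 and Y2 are uncorrelated:
print(joint, e1 + e2 - (2 - 1) * mu_x)   # both 2.5
```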