Monday, September 12, 2016

Properties of MMSE and MAP estimator (Bayesian)

The MMSE estimator is the mean $E(x\mid y)$ of the posterior pdf of $x$ given the observation $y$.
  1. The estimator is unbiased.
  2. The posterior covariance is reduced compared to the a priori covariance.
  3. It commutes over affine transformations.
  4. Additivity property for independent data sets.
  5. It is linear in the jointly Gaussian case.
  6. The estimation error is orthogonal to the space spanned by all $Y$-measurable functions (affine functions being a subset).
The MAP estimator is $\arg\max_\theta p(\theta\mid x)$ given the observation $x$.
  1. In the jointly Gaussian case, MAP = MMSE (the posterior is Gaussian, hence unimodal and symmetric, so mean = mode = median).
  2. It does not commute over nonlinear transformations (the invariance property does not hold, unlike ML).
  3. It commutes over linear transformations.
MAP tends to ML when
  • The prior is uninformative
  • The data carry a large amount of information relative to the prior
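The second bullet can be seen in the scalar Gaussian case. Below is a minimal sketch (all numbers and names are illustrative assumptions): with $\theta \sim \mathcal{N}(\mu_0, \sigma_0^2)$ and observations $x_i \sim \mathcal{N}(\theta, \sigma^2)$, the posterior is Gaussian, so MAP equals the posterior mean, a precision-weighted average of the prior mean and the sample mean. As the prior variance grows (uninformative prior), MAP tends to the ML estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar setting: theta ~ N(mu0, s0sq), x_i ~ N(theta, ssq).
mu0 = 0.0                  # prior mean (assumed)
ssq = 4.0                  # known noise variance (assumed)
theta_true = 2.0
n = 5
x = rng.normal(theta_true, np.sqrt(ssq), size=n)

xbar = x.mean()            # ML estimate of theta

def map_est(s0sq):
    # Posterior mean = precision-weighted combination of data and prior;
    # for a Gaussian posterior this is also the MAP estimate.
    return (n / ssq * xbar + mu0 / s0sq) / (n / ssq + 1.0 / s0sq)

print("ML:", xbar)
print("MAP, informative prior (s0^2 = 1):   ", map_est(1.0))
print("MAP, uninformative prior (s0^2 = 1e6):", map_est(1e6))
# With a nearly flat prior, the MAP estimate approaches the ML estimate.
```

The informative prior pulls the estimate toward $\mu_0$; the nearly flat prior leaves it essentially at the sample mean.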

Gaussian linear model

Let the observed samples follow the model
$x = H\theta + w$, with prior $\theta \sim \mathcal{N}(\mu_\theta, C_\theta)$ and noise vector $w \sim \mathcal{N}(0, C_w)$ independent of $\theta$. Then the posterior is Gaussian with mean
$E(\theta\mid x) = \mu_\theta + C_\theta H^T (H C_\theta H^T + C_w)^{-1} (x - H\mu_\theta)$
and covariance
$C_{\theta\mid x} = C_\theta - C_\theta H^T (H C_\theta H^T + C_w)^{-1} H C_\theta.$
Contrary to the classical Gaussian linear model, $H$ does not need to be full rank.
In alternative form,
$E(\theta\mid x) = \mu_\theta + (C_\theta^{-1} + H^T C_w^{-1} H)^{-1} H^T C_w^{-1} (x - H\mu_\theta)$
and
$C_{\theta\mid x} = (C_\theta^{-1} + H^T C_w^{-1} H)^{-1}.$
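The two forms are related by the matrix inversion lemma, and which one is cheaper depends on whether the observation dimension or the parameter dimension is smaller. A quick numerical sketch (dimensions and random matrices here are illustrative assumptions) computes the posterior both ways and checks that they agree:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed dimensions: n = 4 observations, p = 3 parameters.
n, p = 4, 3
H = rng.normal(size=(n, p))
mu_theta = rng.normal(size=p)
A = rng.normal(size=(p, p))
C_theta = A @ A.T + np.eye(p)       # prior covariance (positive definite)
B = rng.normal(size=(n, n))
C_w = B @ B.T + np.eye(n)           # noise covariance (positive definite)

# Simulate one draw from the model x = H theta + w.
theta = rng.multivariate_normal(mu_theta, C_theta)
x = H @ theta + rng.multivariate_normal(np.zeros(n), C_w)

# Form 1: gain built from the n x n innovation covariance.
S = H @ C_theta @ H.T + C_w
K = C_theta @ H.T @ np.linalg.inv(S)
mean1 = mu_theta + K @ (x - H @ mu_theta)
cov1 = C_theta - K @ H @ C_theta

# Form 2: information (precision) form, inverting a p x p matrix instead.
prec = np.linalg.inv(C_theta) + H.T @ np.linalg.inv(C_w) @ H
cov2 = np.linalg.inv(prec)
mean2 = mu_theta + cov2 @ H.T @ np.linalg.inv(C_w) @ (x - H @ mu_theta)

print(np.allclose(mean1, mean2), np.allclose(cov1, cov2))  # True True
```

Form 1 inverts an $n \times n$ matrix, form 2 a $p \times p$ one, so the information form is attractive when there are many more observations than parameters.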

LMMSE estimator $\hat{E}[X\mid Y]$
  1. A function of first- and second-order statistics only: $\hat{E}[X\mid Y] = \mu_x + \Sigma_{xy}\Sigma_{yy}^{-1}(y - \mu_y)$ (the inverse can be replaced with a pseudo-inverse if necessary).
  2. In the jointly Gaussian case, $\hat{E}[X\mid Y] = E[X\mid Y]$.
  3. The error is orthogonal to the subspace spanned by $Y$.
  4. Additivity property: $\hat{E}[X\mid Y_1,\dots,Y_k] = \sum_{j=1}^{k} \hat{E}[X\mid Y_j] - (k-1)\mu_x$.
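The formula and the orthogonality property can be checked numerically. The sketch below (joint covariance and dimensions are illustrative assumptions) draws jointly Gaussian samples, forms the LMMSE estimate from the known first- and second-order statistics, and verifies that the error is empirically uncorrelated with each component of $Y$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed joint statistics for (X, Y1, Y2), with X first.
mu = np.array([1.0, 0.0, 2.0])          # [mu_x, mu_y1, mu_y2]
L = np.array([[1.0, 0.0, 0.0],
              [0.7, 1.0, 0.0],
              [0.3, 0.5, 1.0]])
Sigma = L @ L.T                          # joint covariance
Sxy = Sigma[0, 1:]                       # cross-covariance Sigma_xy
Syy = Sigma[1:, 1:]                      # Sigma_yy

N = 200_000
Z = rng.multivariate_normal(mu, Sigma, size=N)
x, Y = Z[:, 0], Z[:, 1:]

# LMMSE estimate: mu_x + Sigma_xy Sigma_yy^{-1} (y - mu_y)
xhat = mu[0] + (Y - mu[1:]) @ np.linalg.solve(Syy, Sxy)
err = x - xhat

# Orthogonality: the error is uncorrelated with every component of Y.
print(np.abs(err @ (Y - mu[1:]) / N))   # both entries near 0
```

Since $(X, Y)$ is jointly Gaussian here, this same estimate is also the conditional mean $E[X\mid Y]$ (property 2).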
