- Asymptotically efficient (attains CRLB as \(N\rightarrow\infty\))
- Asymptotically Gaussian (asymptotic normality)
- Asymptotically Unbiased
- Consistent (weakly and strongly)
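The properties above can be seen in a quick Monte Carlo sketch (my own illustration, not part of the notes; the exponential-rate example and all variable names are assumptions). For i.i.d. Exponential(\(\lambda\)) data the MLE of the rate is \(1/\bar{x}\); its spread around the true value shrinks as \(N\) grows, illustrating consistency and asymptotic unbiasedness.

```python
import numpy as np

# Monte Carlo sketch: for Exponential(lam) data, the MLE of the rate
# is 1 / sample mean. Its sampling distribution tightens around the
# true value as N grows (consistency), and its small-sample bias
# vanishes (asymptotic unbiasedness).
rng = np.random.default_rng(0)
lam = 2.0          # true rate (assumed value for the demo)
trials = 5000
spread = {}
for N in (10, 100, 10_000):
    samples = rng.exponential(scale=1 / lam, size=(trials, N))
    est = 1.0 / samples.mean(axis=1)   # MLE of lam in each trial
    spread[N] = est.std()
# spread[10_000] is far smaller than spread[10]: the estimates
# concentrate at lam as N increases.
```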
The MLE of a transformed parameter \(\alpha = g(\theta)\), where the PDF \(p(x;\theta)\) is parameterized by \(\theta\), is given by the invariance property:
\[ \hat{\alpha} = g(\hat{\theta})\] where \(\hat{\theta}\) is the MLE of \(\theta\).
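As a concrete sketch of the invariance property (my own example; the Bernoulli/odds setup is an assumption, not from the notes): for Bernoulli data the MLE of \(p\) is the sample mean, so the MLE of the odds \(\alpha = p/(1-p)\) is obtained by simply plugging in \(\hat{p}\).

```python
import numpy as np

# Invariance sketch: MLE of p for Bernoulli data is the sample mean;
# by invariance, the MLE of the odds g(p) = p / (1 - p) is g(p_hat).
rng = np.random.default_rng(1)
p_true = 0.3
x = rng.binomial(1, p_true, size=100_000)

p_hat = x.mean()                  # MLE of p
odds_hat = p_hat / (1 - p_hat)    # MLE of the odds, via invariance
# odds_hat is close to the true odds 0.3 / 0.7 for this sample size.
```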
Consistency (as defined in class) is the weak convergence (convergence in probability) of the sequence of estimates to the true parameter as \(N \to \infty\).
If \(g(\theta)\) is continuous in \(\theta\), the convergence properties (in particular, convergence in probability) carry over, i.e. \(g(\hat{\theta})\) is a consistent estimator of \(g(\theta)\).
However, unbiasedness of the estimator \(g(\hat{\theta})\) does not carry over from \(\hat{\theta}\): by Jensen's inequality, if \(g\) is strictly convex (or concave), then \(g(\hat{\theta})\) is biased even when \(\hat{\theta}\) is unbiased.
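A short numerical check of this Jensen-inequality effect (my own example, not from the notes): with Bernoulli data, \(\bar{x}\) is unbiased for \(p\), but the convex transform \(g(\theta) = \theta^2\) gives \(E[\bar{x}^2] = p^2 + p(1-p)/N > p^2\), so \(g(\bar{x})\) is biased upward.

```python
import numpy as np

# Jensen's inequality sketch: x_bar is unbiased for p, but the convex
# map g(t) = t**2 yields E[x_bar**2] = p**2 + p*(1-p)/N > p**2.
rng = np.random.default_rng(2)
p, N, trials = 0.5, 20, 200_000
x_bar = rng.binomial(N, p, size=trials) / N   # one sample mean per trial

bias_mc = (x_bar**2).mean() - p**2   # Monte Carlo estimate of the bias
bias_exact = p * (1 - p) / N         # exact bias = Var(x_bar) = 0.0125
# bias_mc agrees with bias_exact and is strictly positive.
```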
Other properties of MLE
- If an efficient estimator exists, the ML method will produce it.
- Unlike the MVU estimator, the MLE can be biased
- Note: the CRLB applies only to unbiased estimators, so a biased estimator can have variance smaller than \(I^{-1}(\theta)\)
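A classic instance of that last note is the Gaussian variance (my own illustration; the specific numbers are assumptions): with unknown mean, the MLE \(\hat{\sigma}^2 = \frac{1}{N}\sum(x_i-\bar{x})^2\) is biased, and its variance \(2(N-1)\sigma^4/N^2\) sits *below* the CRLB \(2\sigma^4/N\) for unbiased estimators.

```python
import numpy as np

# The biased MLE of a Gaussian variance (divide by N, not N-1) has
# variance 2*(N-1)*sigma^4 / N^2, which is below the CRLB 2*sigma^4 / N
# that bounds unbiased estimators.
rng = np.random.default_rng(3)
sigma2, N, trials = 1.0, 10, 400_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))

s2_mle = x.var(axis=1)            # NumPy default ddof=0: the biased MLE
crlb = 2 * sigma2**2 / N          # CRLB for unbiased estimators (= 0.2)
var_mle = s2_mle.var()            # theory: 2*9/100 = 0.18 < 0.2
```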