Towards the finite-sample variance bounds for unbiased estimators
Description
The inverse of the Fisher information matrix in a likelihood problem is i) the variance-covariance matrix of the asymptotic distribution of the maximum likelihood (ML) estimator; ii) the dominant term in the expansion of the finite-sample variance of the ML estimator; and iii) the "lowest" variance-covariance matrix that an unbiased estimator can achieve. "Lowest" here means that the difference between the variance-covariance matrix of any unbiased estimator and the inverse Fisher information is a positive semi-definite matrix. These three characterizations, together with the asymptotic unbiasedness of the ML estimator, are key justifications for its widespread use in statistical practice. For example, standard regression software typically reports the ML estimates alongside estimated standard errors obtained by inverting the Fisher information matrix at the estimates. Nevertheless, using that pair of estimates and estimated standard errors for inference implicitly assumes, amongst other things, that the information about the parameters in the sample is large enough for the estimator to be almost unbiased and for its variance to be well approximated by the inverse of the Fisher information matrix.

In this talk, we present results from work in progress on a novel estimation framework that aims to bridge the finite-sample gap between estimates and the estimated variance-covariance matrix. We also show results from inferential settings that are widely used in statistical practice.
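As a rough illustration of the practice the abstract refers to (not part of the talk itself), the Python sketch below fits a one-parameter exponential model by maximum likelihood on a small sample and reports a Wald-type standard error obtained from the inverse of the information at the estimate. The model, sample size, and seed are assumptions chosen purely for illustration.

    # Minimal sketch: ML estimate plus a standard error from the inverse of the
    # information at the estimate (the pairing discussed in the abstract).
    # The exponential model and the sample size n = 30 are illustrative choices.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    y = rng.exponential(scale=2.0, size=30)  # small sample from Exp(rate = 0.5)

    def negloglik(theta, y):
        # Negative log-likelihood of an exponential model with rate theta[0]
        rate = theta[0]
        return -(len(y) * np.log(rate) - rate * np.sum(y))

    fit = minimize(negloglik, x0=[1.0], args=(y,), bounds=[(1e-8, None)])
    rate_hat = fit.x[0]

    # For this model the observed information at the estimate is n / rate^2;
    # its inverse is the usual variance approximation behind the reported
    # standard error.
    obs_info = len(y) / rate_hat**2
    std_err = np.sqrt(1.0 / obs_info)

    print(f"ML estimate of the rate: {rate_hat:.3f}, Wald standard error: {std_err:.3f}")

With only 30 observations the adequacy of this variance approximation, and of the near-unbiasedness of the estimator, is exactly the kind of finite-sample assumption the talk examines.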
Please register for this event.
More about Ioannis Kosmidis
Ioannis Kosmidis is a Reader in Data Science at the Department of Statistics, University of Warwick, and a Turing Fellow at The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence.