Abstract:
This paper introduces a new method for deriving covariance matrix estimators that are decision-theoretically optimal. The key is to employ large-dimensional asymptotics: the matrix dimension and the sample size go to infinity together, with their ratio converging to a finite, nonzero limit. As the main focus, we apply this method to Stein's loss. Compared to the estimator of Stein (1975, 1986), ours has five theoretical advantages: 1. it asymptotically minimizes the loss itself, instead of an estimator of the expected loss; 2. it does not necessitate post-processing through an ad hoc algorithm (called isotonization) to restore the positivity or the ordering of the covariance matrix eigenvalues; 3. it does not ignore any terms in the function to be minimized; 4. it does not require normality; and 5. it is not limited to applications where the sample size exceeds the dimension. In addition to these theoretical advantages, our estimator also improves upon Stein's estimator in terms of finite-sample performance, as evidenced by extensive Monte Carlo simulations. To further demonstrate the effectiveness of our method, we show that some previously suggested estimators of the covariance matrix and its inverse are decision-theoretically optimal with respect to the Frobenius loss function.
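For concreteness, here is a minimal sketch in LaTeX of the two quantities the abstract invokes, under the standard convention that p denotes the matrix dimension and n the sample size (neither symbol appears in the abstract itself), and assuming the usual textbook definition of Stein's loss; the paper's exact normalization may differ.

% Stein's loss for an estimator \widehat{\Sigma} of the true covariance matrix \Sigma
% (standard definition; normalizations vary across the literature)
L\bigl(\Sigma, \widehat{\Sigma}\bigr)
  = \operatorname{tr}\bigl(\Sigma^{-1}\widehat{\Sigma}\bigr)
  - \log\det\bigl(\Sigma^{-1}\widehat{\Sigma}\bigr) - p

% Large-dimensional asymptotic regime: the dimension and the sample size
% diverge together, with their ratio converging to a finite, nonzero limit
\frac{p}{n} \;\longrightarrow\; c \in (0, \infty)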