Enhancing Students’ Understanding of Variance Estimation through the Lens of the Bias-Variance Trade-off: From Sample Variance to Improved Insight

Authors

DOI:

https://doi.org/10.46328/ijonest.6142

Keywords:

Sample variance, Unbiasedness, Mean squared error, Bias-Variance trade-off, ChatGPT

Abstract

Understanding variance estimation is a cornerstone of statistical education. While the unbiasedness of the sample variance is a valuable property, it should not be the sole criterion for selecting an estimator. This paper advocates incorporating mean squared error (MSE) considerations into the teaching of variance estimation in statistics classrooms. In contemporary applications, estimators with lower MSE are often preferred, even when they are biased. In this study, we first examine the relationship between the minimum-MSE variance estimators (among those based on the sum of squared deviations) and the kurtosis of the underlying population distribution. Furthermore, we demonstrate that, particularly for skewed distributions, alternative estimators can substantially outperform the sample variance in terms of MSE. By using variance estimation as a framework, instructors can effectively introduce students to the bias-variance trade-off, a foundational concept in statistical estimation and model selection. To support classroom implementation, we provide a set of R scripts for simulation-based visualizations that foster students' intuition about the interplay between bias and variance.
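The paper's classroom materials are in R and are not reproduced here. As a rough illustration of the abstract's central point, the following Python sketch (all function names are ours, not the authors') compares the Monte Carlo MSE of the usual unbiased divisor n − 1 against the minimum-MSE divisor among estimators of the form c · Σ(xᵢ − x̄)². A standard calculation gives that optimal divisor as n + 1 + (n − 1)γ₂/n, where γ₂ is the population excess kurtosis; for a normal population (γ₂ = 0) this is the familiar n + 1.

```python
import numpy as np

def optimal_divisor(n, excess_kurtosis):
    # Standard result: among estimators c * sum((x - xbar)^2), MSE is
    # minimized by dividing by n + 1 + (n - 1) * gamma2 / n, where gamma2
    # is the population excess kurtosis (gamma2 = 0 gives n + 1, the
    # well-known answer for a normal population).
    return n + 1 + (n - 1) * excess_kurtosis / n

def mse_of_divisor(divisor, n, sigma2, reps, rng):
    # Monte Carlo MSE of sum-of-squared-deviations / divisor as an
    # estimator of sigma^2, here for a normal population.
    x = rng.standard_normal((reps, n)) * np.sqrt(sigma2)
    ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    return np.mean((ss / divisor - sigma2) ** 2)

rng = np.random.default_rng(0)
n, sigma2, reps = 10, 1.0, 200_000
mse_unbiased = mse_of_divisor(n - 1, n, sigma2, reps, rng)
mse_optimal = mse_of_divisor(optimal_divisor(n, 0.0), n, sigma2, reps, rng)
# The biased n + 1 divisor trades a little bias for a larger reduction
# in variance, so its MSE comes out smaller than that of n - 1.
print(mse_unbiased, mse_optimal)
```

Repeating the comparison with a skewed, heavy-tailed population (e.g. lognormal draws, plugging its excess kurtosis into `optimal_divisor`) makes the bias-variance trade-off discussed in the abstract visible to students in a few lines.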

References

Balanda, K. P., & MacGillivray, H. L. (1988). Kurtosis: A critical review. The American Statistician 42, 111–119. https://doi.org/10.1080/00031305.1988.10475539

Garfield, J., & Ahlgren, A. (1988). Difficulties in learning basic concepts in probability and statistics: implications for research. Journal for Research in Mathematics Education 19, 44–63. https://doi.org/10.2307/749110

Casella, G., & Berger, R. L. (2002). Statistical Inference, 2nd ed, Duxbury.

Cho, E., & Cho, M. (2009). Variance of Sample Variance With Replacement. International Journal of Pure and Applied Mathematics 52, 43–47. http://www.ijpam.eu/contents/2009-52-1/5/5.pdf

Darkwah, K. A., Nortey, E. N. N., & Lotsi, A. (2016). Estimation of the Gini coefficient for the lognormal distribution of income using the Lorenz curve. SpringerPlus 5, 1196. https://doi.org/10.1186/s40064-016-2868-z

Diez, D. M., Barr, C. D., & Çetinkaya-Rundel, M. (2019). OpenIntro Statistics, 4th ed, OpenIntro, Inc., USA. https://doi.org/10.5070/T573020084

Evans, M., Hastings, N., & Peacock, B. (2000). Statistical Distributions, 3rd ed, John Wiley & Sons, New York.

Hara, H. (2007). Improved estimation of the MSEs and the MSE matrices for shrinkage estimators of multivariate normal means and their applications. arXiv preprint. https://arxiv.org/abs/0710.1171

Hastie, T., Tibshirani, R., & Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed, Springer.

Johnson, N. L., Kotz, S., & Balakrishnan, N. (1995). Continuous Univariate Distributions, Volume 2, Wiley.

Kubokawa, T., & Srivastava, M. S. (2002). Estimating risk and the mean squared error matrix in Stein estimation. Journal of Multivariate Analysis 80, 102–132. https://doi.org/10.1006/jmva.2001.2020

Lakshmi, R., & Sajesh, T. A. (2025). Comparing ridge regression estimators: Exploring both new and old methods. Stochastics and Quality Control 40, 85–103. https://doi.org/10.1515/eqc-2024-0043

Lehmann, E. L. (1983). Theory of Point Estimation, Wiley Series in Probability and Statistics, Wiley, New York.

Levy, J. M. (2006). Is unbiasing estimators always justified? arXiv preprint. arXiv:hep-ph/0604133v2

Longford, N. T. (2009). Inference with the lognormal distribution. Journal of Statistical Planning and Inference 139, 2329–2340. https://doi.org/10.1016/j.jspi.2008.10.015

Louzada, F., Ramos, P. L., & Ramos, E. (2019). A note on bias of closed-form estimators for the gamma distribution derived from likelihood equations. The American Statistician, 73, 195–199. https://doi.org/10.1080/00031305.2018.1513376

Mann, P. S. (2010). Introductory Statistics, 7th ed, John Wiley & Sons, Hoboken, NJ.

Mood, A. M., Graybill, F. A., & Boes, D. C. (1974). Introduction to the Theory of Statistics, 3rd ed, McGraw-Hill.

Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective, MIT Press, Cambridge, MA.

Okamoto, M. (2022). Lorenz and polarization orderings of the double-Pareto lognormal distribution and other size distributions. Sankhya B 84, 548–574. https://doi.org/10.1007/s13571-021-00264-z

Rosenthal, J. S. (2015). The kids are alright: Divide by ???? when estimating variance. IMS Bulletin 44(8).

Shen, H., Brown, L. D., & Zhi, H. (2006). Efficient estimation of log-normal means with application to pharmacokinetic data. Statistics in Medicine 25(17), 3023–3038. https://doi.org/10.1002/sim.2456

Stigler, S. M. (1981). Gauss and the invention of least squares. The Annals of Statistics 9(3), 465–474. https://doi.org/10.1214/AOS/1176345451

Ye, Z.-S., & Chen, N. (2017). Closed Form Estimators for the Gamma Distribution Derived from Likelihood Equations. The American Statistician, 71, 177–181. https://doi.org/10.1080/00031305.2016.1209129

Published

2025-12-31

Issue

Section

Science

How to Cite

Enhancing Students’ Understanding of Variance Estimation through the Lens of the Bias-Variance Trade-off: From Sample Variance to Improved Insight. (2025). International Journal on Engineering, Science and Technology, 7(2), 160-192. https://doi.org/10.46328/ijonest.6142