
Abstract

New technologies have led to vast troves of large and complex data sets across many scientific domains and industries. People routinely use machine learning techniques not only to process, visualize, and make predictions from these big data, but also to make data-driven discoveries. These discoveries are often made using interpretable machine learning, or machine learning models and techniques that yield human-understandable insights. In this article, we discuss and review the field of interpretable machine learning, focusing especially on the techniques as they are often employed to generate new knowledge or make discoveries from large data sets. We outline the types of discoveries that can be made using interpretable machine learning in both supervised and unsupervised settings. Additionally, we focus on the grand challenge of how to validate these discoveries in a data-driven manner, which promotes trust in machine learning systems and reproducibility in science. We discuss validation both from a practical perspective, reviewing approaches based on data-splitting and stability, and from a theoretical perspective, reviewing statistical results on model selection consistency and uncertainty quantification via statistical inference. Finally, we conclude by highlighting open challenges in using interpretable machine learning techniques to make discoveries, including gaps between theory and practice for validating data-driven discoveries.
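
To make the stability-based validation mentioned above concrete, the following is a minimal Python sketch in the spirit of stability selection: fit a sparse model (here, scikit-learn's Lasso) on many random subsamples of the data and retain only features selected in a large fraction of the fits. The function name stability_scores, the regularization level alpha=0.1, and the 0.6 stability threshold are illustrative assumptions for this sketch, not prescriptions from the article.

    # Minimal sketch of stability-based validation for feature selection.
    # All names and parameter values here are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import Lasso

    def stability_scores(X, y, alpha=0.1, n_subsamples=100, frac=0.5, seed=0):
        """For each feature, return the fraction of random subsamples
        on which the lasso selects it (its selection stability)."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        counts = np.zeros(p)
        for _ in range(n_subsamples):
            # Refit the sparse model on a random half of the observations
            idx = rng.choice(n, size=int(frac * n), replace=False)
            coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
            counts += (coef != 0)
        return counts / n_subsamples

    # Example: deem features stable if selected on >= 60% of subsamples.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 20))
    y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)  # only features 0 and 1 matter
    scores = stability_scores(X, y)
    print("stable features:", np.where(scores >= 0.6)[0])

The design rationale is that features whose selection survives perturbations of the data (here, subsampling) are more likely to reflect real structure than noise; this is the sense in which stability supports trust in a data-driven discovery.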

