Abstract

The most popular methods for measuring importance of the variables in a black-box prediction algorithm make use of synthetic inputs that combine predictor variables from multiple observations. These inputs can be unlikely, physically impossible, or even logically impossible. As a result, the predictions for such cases can be based on data very unlike any the black box was trained on. We think that users cannot trust an explanation of the decision of a prediction algorithm when the explanation uses such values. Instead, we advocate a method called cohort Shapley, which is grounded in economic game theory and uses only actually observed data to quantify variable importance. Cohort Shapley works by narrowing the cohort of observations judged to be similar to a target observation on one or more features. We illustrate it on an algorithmic fairness problem where it is essential to attribute importance to protected variables that the model was not trained on.
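The cohort-narrowing idea described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `cohort_shapley`, the `similar` callback, and the exact-match notion of similarity are all choices made for the sketch, and the exhaustive enumeration of feature subsets is only feasible for a small number of features.

```python
from itertools import combinations
from math import comb
import numpy as np

def cohort_shapley(X, preds, t, similar):
    """Shapley attribution for target row t using only observed data.

    X: (n, d) array of observed features; preds: length-n array of the
    black box's predictions on those same rows; t: index of the target
    observation; similar(j): boolean mask of rows judged similar to
    X[t] on feature j. The value of a feature subset S is the mean
    prediction over the cohort of rows similar to the target on every
    feature in S -- no synthetic inputs are ever formed.
    """
    n, d = X.shape

    def value(S):
        mask = np.ones(n, dtype=bool)
        for j in S:
            mask &= similar(j)       # narrow the cohort on feature j
        return preds[mask].mean()    # cohort always contains row t itself

    # Exact Shapley value: average marginal contribution of feature j
    # over all subsets S of the remaining features, with the usual
    # combinatorial weights 1 / (d * C(d-1, |S|)).
    phi = np.zeros(d)
    for j in range(d):
        others = [k for k in range(d) if k != j]
        for r in range(d):
            for S in combinations(others, r):
                w = 1.0 / (d * comb(d - 1, r))
                phi[j] += w * (value(S + (j,)) - value(S))
    return phi
```

With exact-match similarity, `similar = lambda j: X[:, j] == X[t, j]`, the attributions satisfy the efficiency property: they sum to the target's prediction minus the mean prediction over all observed rows. Because the cohort is defined by similarity on observed features rather than by substituting feature values, importance can also be attributed to protected variables the model was never trained on.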

doi: 10.1146/annurev-statistics-040722-045325
2024-04-22
2024-06-25

  • Article Type: Review Article