
Abstract

This article reviews the recent literature on algorithmic fairness, with a particular emphasis on credit scoring. We discuss human versus machine bias, bias measurement, group versus individual fairness, and a collection of fairness metrics. We then apply these metrics to the US mortgage market, analyzing Home Mortgage Disclosure Act data on mortgage applications between 2009 and 2015. We find evidence of group imbalance in the dataset for both gender and (especially) minority status, which can lead to poorer estimation and prediction for female and minority applicants. Loan applicants are treated mostly fairly at both the group and the individual level, though we find that some local male (nonminority) neighbors of otherwise similar rejected female (minority) applicants were granted loans, a pattern that warrants further study. Finally, modern machine learning techniques substantially outperform logistic regression (the industry standard), though at the cost of being substantially harder to explain to denied applicants, regulators, or the courts.
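The group-fairness metrics the abstract refers to can be illustrated with a toy calculation. Below is a minimal sketch (not from the article; the data and function names are hypothetical) of two standard metrics applied to lending decisions, demographic parity and equal opportunity:

```python
# Two common group-fairness metrics, computed on toy lending data.
# y_hat: model approve/deny decisions; y: true repayment outcomes;
# group: protected attribute (e.g., 0 = male, 1 = female). All toy values.

def demographic_parity_gap(y_hat, group):
    """Difference in approval rates between the two groups."""
    def rate(g):
        members = [p for p, a in zip(y_hat, group) if a == g]
        return sum(members) / len(members)
    return rate(0) - rate(1)

def equal_opportunity_gap(y_hat, y, group):
    """Difference in true-positive rates (approval rate among
    applicants who would actually repay) between the two groups."""
    def tpr(g):
        qualified = [p for p, t, a in zip(y_hat, y, group) if a == g and t == 1]
        return sum(qualified) / len(qualified)
    return tpr(0) - tpr(1)

# Eight hypothetical applicants, four per group.
y_hat = [1, 1, 0, 1, 0, 1, 0, 0]   # model decisions
y     = [1, 0, 1, 1, 1, 1, 0, 1]   # true repayment
group = [0, 0, 0, 0, 1, 1, 1, 1]

print(round(demographic_parity_gap(y_hat, group), 3))   # 0.5
print(round(equal_opportunity_gap(y_hat, y, group), 3)) # 0.333
```

A gap of zero on either metric would indicate parity between the groups; the article's point is that these criteria generally cannot all be satisfied at once, so the choice of metric matters.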

DOI: 10.1146/annurev-financial-110921-125930
Published: 2023-11-01

  • Article Type: Review Article