In 2023, Federal Reserve Vice Chair for Supervision Michael Barr released an official statement cautioning banks against fair lending violations arising from their use of artificial intelligence (AI).
As artificial intelligence continues to gain global momentum, its use in data analytics has become both powerful and dangerous. In theory, machine learning models are free of human biases and draw conclusions only from actual numerical trends. In practice, this is far from the truth. AI-derived data trends may be accurate, insightful, and even profitable, but closer analysis often reveals a darker reality. The common pattern of AI banking models is that they can be productive on paper while reinforcing discriminatory biases that are invisible to the naked eye.
Ethical Implications
Finance and banking have a long history of discriminatory practices. Whether on the basis of race, religion, gender, or age, certain populations have been gatekept from the financial benefits of banking. Algorithmic bias occurs when an AI model or algorithm discovers a historically discriminatory trend and carries it into its decisions, calculations, and outputs. The truth is, AI models are only as good as the training data they are given: if that data reflects systemic discrimination, notably systemic racism, the machine will pick up on those patterns and treat them as just and uniform. The stains of “redlining,” the 1930s–60s practice of drawing boundaries around neighborhoods based on residents' race and depriving them of resources and opportunities, remain visible today. For example, while a loan approval AI model will not literally take an applicant's race as a factor, it may penalize applicants who live in a historically redlined area, thereby implicitly discriminating against them because of their race. In a perfect world free of systemic biases, AI models might be flawless; that is not the reality, and for now their valid role is as a tool to aid, not replace, human banking decisions.
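To make this proxy-discrimination mechanism concrete, here is a minimal sketch in Python. All of the data is synthetic and every coefficient is invented for illustration; the point is only to show how a model trained on historically biased approval records can learn to penalize a redlined ZIP code even though race never appears as a feature.

```python
# Hypothetical sketch of proxy discrimination: a loan model that never
# sees race can still encode redlining through a ZIP-code feature.
# All data below is synthetic and the coefficients are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: standardized income and a flag marking a
# historically redlined ZIP code.
income_z = rng.normal(0, 1, n)        # income, standardized
redlined_zip = rng.integers(0, 2, n)  # 1 = historically redlined area

# Biased historical labels: past approvals penalized redlined ZIPs
# independently of income, mimicking discriminatory lending records.
true_logit = 1.5 * income_z - 1.5 * redlined_zip
approved = rng.random(n) < 1 / (1 + np.exp(-true_logit))

# Race is never a feature, yet the model reproduces the ZIP penalty.
X = np.column_stack([income_z, redlined_zip])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical (average) incomes, differing only in ZIP:
same_income = [[0.0, 0], [0.0, 1]]
print(model.predict_proba(same_income)[:, 1])
# ~[0.50, 0.18]: the redlined-ZIP applicant is far less likely to be approved.
```

Nothing in this sketch ever tells the model about race; the discrimination rides in entirely on the training labels, which is exactly why such bias is so hard to see from the outside.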
Potential Efficiency and Profitability
Many customers wait days or weeks to receive a verdict on their loan application. One thing is certain: AI can provide a faster, more efficient loan approval process. On top of this speed, using specific machine learning algorithms, AI loan approval models can identify and weed out potentially high-risk applicants with a calculative precision that would be hard for a human to match. For example, by taking into account alcohol purchases on the applicant's bank statements, factoring in consistency of employment, and analyzing credit reports, an AI banking model can derive a calculated and serviceable risk assessment.
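As a rough illustration of the kind of automated scoring described above, here is a minimal Python sketch. The features, weights, and normalization thresholds are all invented for this example; a real lender's model would be learned from data rather than hand-written like this.

```python
# A hypothetical, hand-weighted risk score combining the three factors
# mentioned above. All weights and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Applicant:
    credit_score: int           # e.g. FICO-style score, 300-850
    months_employed: int        # consecutive months at current employer
    alcohol_spend_ratio: float  # share of bank-statement spending on alcohol

def risk_score(a: Applicant) -> float:
    """Return a 0-1 risk estimate (higher = riskier) from weighted factors."""
    credit_risk = 1 - (a.credit_score - 300) / 550         # map 300-850 onto 1-0
    employment_risk = max(0.0, 1 - a.months_employed / 24)  # stable after ~2 years
    spending_risk = min(1.0, a.alcohol_spend_ratio / 0.10)  # caps at 10% of spending
    return 0.5 * credit_risk + 0.3 * employment_risk + 0.2 * spending_risk

applicant = Applicant(credit_score=680, months_employed=18, alcohol_spend_ratio=0.03)
print(f"Risk score: {risk_score(applicant):.2f}")  # an instant verdict, not weeks
```

Even a toy rule like this one returns a verdict in milliseconds, which is the efficiency appeal; the danger, as the previous section argued, is in which factors end up carrying the weight.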
Opinions and Takeaways on the Future of AI in Banking
While it may seem appealing that AI banking models can uncover risk factors through patterns of alcohol consumption and credit history, they may also (as explained earlier) implicitly take race into account. That is a moral trade-off that cannot be made. For now, AI banking models should be consistently monitored and used only as reference tools for humans. And importantly, for the foreseeable future, we should rely on human judgment to guarantee equality in finance.
Works Cited
Lim, Paul. “Yet Another Warning From Banking Regulators About AI Bias.” arnoldporter.com, 1 Aug. 2023, https://www.arnoldporter.com/en/perspectives/advisories/2023/08/another-warning-about-ai-bias.
Riley, James. “Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI.” hbr.org, 29 Sept. 2023, https://hbr.org/2023/09/eliminating-algorithmic-bias-is-just-the-beginning-of-equitable-ai.
“A Brief History of Redlining.” nyc.gov, 6 Jan. 2021, https://a816-dohbesp.nyc.gov/IndicatorPublic/data-stories/redlining/.