Artificial intelligence algorithms are increasingly used in financial services, but they come with serious risks of discrimination.
AMSTERDAM — Artificial intelligence has a problem with racial prejudice.
From biometric identification systems that disproportionately misidentify black and minority faces to voice recognition software that fails to distinguish voices with pronounced regional accents, AI has a lot to answer for when it comes to discrimination.
And the problem of reinforcing existing prejudices may be much more serious in the case of banking and financial services.
Deloitte notes that AI systems are ultimately only as good as the data they are given: incomplete or unrepresentative datasets can limit AI's objectivity, while biases in the development teams that train such systems can perpetuate that cycle of bias.
AI can be dumb
Nabil Manji, head of crypto and Web3 at Worldpay by FIS, said a key thing to understand about AI products is that the strength of the technology depends heavily on the source material used to train it.
“How good an AI product is, there are kind of two variables,” Manji told CNBC in an interview. “One is the data it has access to, and the other is how good the large language model is. That’s why on the data side, you see companies like Reddit and others come out publicly and say, we’re not going to allow companies to scrape our data, you’re going to have to pay us for it.”
As for financial services, Manji said that many backend data systems are fragmented across different languages and formats.
“None of it is consolidated or harmonized,” he added. “That makes AI-based products far less successful in financial services than in other industries or other companies where they have uniform, more modern systems or access to data.”
Manji suggested that blockchain, or distributed ledger technology, could serve as a way to get a clearer view of the disparate data buried in the cluttered systems of traditional banks.
However, he added that banks, being heavily regulated and slow-moving institutions, are unlikely to move at the same speed as their more technologically agile counterparts in adopting new AI tools.
“You’ve got Microsoft and Google, who over the last decade or two have been seen as driving innovation. They can’t keep up with that speed. And then you think about financial services. Banks are not known for being fast,” Manji said.
The problem of AI in banking
Rumman Chowdhury, former head of machine learning ethics, transparency and accountability at Twitter, said that lending is a prime example of how an AI system’s bias against marginalized communities can rear its head.
“Algorithmic discrimination is actually very tangible in lending,” Chowdhury said during a panel at Money20/20 in Amsterdam. “Chicago had a history of literally denying those [loans] to predominantly black neighborhoods.”
In the 1930s, Chicago was known for the discriminatory practice of “redlining,” in which the creditworthiness of properties was heavily determined by the racial demographics of a given neighborhood.
“There would be a giant map of all the neighborhoods in Chicago hanging on the wall, and they would draw red lines through all of the neighborhoods that were predominantly African American, and they would not give them loans,” she added.
“Fast forward a few decades later, and you are developing algorithms to determine the riskiness of different districts and individuals. And while you may not include the data point of someone’s race, it is implicitly picked up.”
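This proxy effect is easy to demonstrate. The minimal sketch below uses synthetic data and invented feature names, purely for illustration: a lending model is trained without any race variable, yet it reproduces the group-level disparity because a correlated feature such as ZIP code stands in for the excluded attribute.

```python
# Illustrative sketch with synthetic data: race is never given to the model,
# but a correlated proxy (ZIP code) lets it reconstruct a redlined pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic world: group membership (the protected attribute, never a feature)
# correlates strongly with ZIP code, and historical approvals held group 1
# to a stricter income bar than group 0.
group = rng.integers(0, 2, n)
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)  # ~90% correlated proxy
income = rng.normal(55, 10, n)
approved = ((group == 0) & (income > 50)) | ((group == 1) & (income > 60))

# Train on income + ZIP code only; race is excluded, the proxy is not.
X = np.column_stack([income, zip_code])
model = LogisticRegression().fit(X, approved)

# Predicted approval rates diverge by group even though 'group' was never a feature.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```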
Indeed, Angle Bush, founder of Black Women in Artificial Intelligence, an organization dedicated to empowering black women in the AI sector, told CNBC that when AI systems are used specifically for loan approval decisions, there is a risk of replicating existing biases present in the historical data used to train the algorithms.
“This might lead to automatic loan denials for people from marginalized communities, reinforcing racial or gender disparities,” Bush added.
“It is important for banks to acknowledge that implementing AI as a solution can inadvertently perpetuate discrimination,” she said.
Frost Li, a developer who has been working on artificial intelligence and machine learning for more than a decade, told CNBC that the “personalization” dimension of AI integration can also be problematic.
“The interesting thing about AI is how we choose the ‘base features’ for training,” said Li, who founded and runs Loup, a company that helps online retailers integrate AI into their platforms. “Sometimes, we choose features unrelated to the outcome we want to predict.”
When AI is applied to banking, Li says, it is harder to identify the “culprit” behind bias when everything is convoluted in the calculation.
“A good example is how many fintech startups cater specifically to foreigners, because a University of Tokyo graduate won’t be able to get any credit cards even if he works at Google; yet a person can easily get one from a community college credit union because bankers know the local schools better,” Li added.
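When the “culprit” is buried somewhere in the feature set, one standard audit step is to check how well each individual input feature predicts a protected attribute on a labeled audit sample. The sketch below is illustrative only; the `proxy_scores` helper and the per-feature AUC approach are assumptions for the example, not a method described by Li.

```python
# Illustrative audit sketch: flag features that act as proxies for a protected
# attribute by testing how well each feature alone predicts that attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_scores(X: np.ndarray, protected: np.ndarray,
                 feature_names: list[str]) -> dict[str, float]:
    """Per-feature cross-validated AUC for predicting the protected attribute.

    Scores near 0.5 mean the feature carries little information about the
    attribute; scores near 1.0 mark likely proxies worth investigating.
    """
    return {
        name: cross_val_score(
            LogisticRegression(), X[:, [j]], protected,
            cv=5, scoring="roc_auc",
        ).mean()
        for j, name in enumerate(feature_names)
    }
```

Even then, dropping a single high-scoring proxy does not guarantee fairness, since combinations of the remaining features can still reconstruct the attribute, which is exactly why the culprit is hard to isolate.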
Generative AI is not typically used to create credit scores or assess consumer risk.
“That is not what this tool was built for,” said Niklas Guske, COO at Taktile, a startup that helps fintechs automate decision-making.
Instead, Guske said the most powerful applications are in pre-processing unstructured data such as text files, for example to classify transactions.
“These signals can then be fed into a more traditional underwriting model,” Guske said. “Therefore, generative AI improves the quality of the data underlying such decisions rather than replacing common scoring processes.”
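As a concrete illustration of the pipeline Guske describes, the sketch below turns free-text transaction descriptions into structured category counts that a conventional scoring model could consume. The keyword-based `classify_transaction` is a hypothetical stand-in for a generative-AI classifier; the category names and helper functions are invented for the example.

```python
# Illustrative sketch: unstructured transaction text is classified into
# categories, which become structured features for a traditional
# underwriting model. A keyword matcher stands in for an LLM-based classifier.
from collections import Counter

CATEGORIES = {
    "rent": ("rent", "landlord"),
    "gambling": ("casino", "bet", "poker"),
    "salary": ("payroll", "salary"),
}

def classify_transaction(description: str) -> str:
    """Toy stand-in for a generative-AI classifier over free-text descriptions."""
    text = description.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return category
    return "other"

def transaction_features(descriptions: list[str]) -> dict[str, int]:
    """Aggregate category counts into structured features for a scoring model."""
    counts = Counter(classify_transaction(d) for d in descriptions)
    return {f"n_{c}": counts.get(c, 0) for c in [*CATEGORIES, "other"]}

features = transaction_features([
    "ACME PAYROLL DEPOSIT", "RENT TO LANDLORD LLC", "LUCKY CASINO ONLINE",
])
print(features)  # {'n_rent': 1, 'n_gambling': 1, 'n_salary': 1, 'n_other': 0}
```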
Proving such discrimination, though, is hard. Apple and Goldman Sachs, for example, were accused of giving women lower limits for the Apple Card. But those claims were dismissed by the New York Department of Financial Services after the regulator found no evidence of gender discrimination.
According to Kim Smouter, director of the anti-racism group European Network Against Racism, the problem is that it can be difficult to substantiate whether AI-based discrimination has actually taken place.
“One of the difficulties in the mass deployment of AI,” he said, “is the opacity in how these decisions come about and what redress mechanisms exist were a racialized individual to even notice that there is discrimination.”
“Individuals have little knowledge of how AI systems work and that their individual case may be the tip of a systems-wide iceberg. Accordingly, it’s also difficult to detect specific instances where things have gone wrong,” he added.
Smouter cited the example of the Dutch child welfare benefits scandal, in which thousands of benefit claims were wrongly accused of being fraudulent. The Dutch government was forced to resign after a 2020 report found that victims were “treated with an institutional bias.”
This, Smouter said, “demonstrates how quickly such dysfunctions can spread and how difficult it is to prove them and get redress once they are discovered, while in the meantime significant, often irreversible damage is done.”
Policing AI’s biases
Chowdhury says there is a need for a global regulatory body, like the United Nations, to address some of the risks surrounding AI.
While AI has proven to be an innovative tool, some technologists and ethicists have expressed doubts about the moral and ethical soundness of the technology. Among the top concerns industry insiders have expressed are misinformation; racial and gender bias embedded in artificial intelligence algorithms; and “hallucinations” generated by ChatGPT-like tools.
“I worry quite a bit that, with generative AI, we are entering this post-truth world where nothing we see online is trustworthy – not any of the text, not any of the video, not any of the audio. But then how do we get our information? And how do we ensure that information has a high amount of integrity?” Chowdhury said.
Now is the time for meaningful AI regulation to come into force – but knowing how long it takes for regulatory proposals such as the European Union’s Artificial Intelligence Act to take effect, some fear this won’t happen fast enough.
“We call for greater transparency and accountability of algorithms and how they operate, a layman’s declaration that allows individuals who are not AI experts to judge for themselves, proof of testing and publication of results, an independent complaints process, periodic audits and reporting, and the involvement of racialized communities when the technology is being designed and considered for deployment,” Smouter said.
The Artificial Intelligence Act, the first regulatory framework of its kind, incorporates a fundamental-rights approach and concepts such as redress, according to Smouter, who added that the regulation will become enforceable in roughly two years.
“It would be great if this period could be shortened to make sure transparency and accountability are at the heart of innovation,” he said.