UK Banks Get the First Taste of What AI Regulation May Look Like 


Banks in the U.K. have been warned by financial regulators not to use artificial intelligence systems to approve loan applications unless they can prove that their algorithm isn’t biased against minorities, according to the Financial Times. 

For the time being, the warning doesn't come in the form of an official ruling, and it leaves banks ample discretion to continue using AI or machine learning systems, as long as lenders make sure that neither the data used to feed the algorithms nor the outcomes discriminate against people who are already struggling to borrow. This message against algorithmic bias may be a preview of the government's upcoming white paper on AI regulation.

Financial institutions around the world are using AI and machine learning models to decide whether to grant loan applications based on the data they can collect, which in many instances includes postcodes, default rates in the area, employment benefits, salaries and more.

The concern with using this data and letting an algorithm make the final decision without human supervision is that the data fed into the algorithm may already be biased, skewing the decision toward a discriminatory outcome. Some of the information may be based on historical data rather than the applicant's individual record, and this can unfairly affect an application. For instance, living in a postcode where people are likely to default on a loan may lower your score and reduce your chances of approval, even if your personal situation differs from the area average. Other demographic information may have a similar effect unless you can demonstrate that you don't fall into that category, and without human supervision, such a demonstration is no longer possible.
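To make the postcode problem concrete, here is a minimal Python sketch. Everything in it, the scoring rule, the weights, the threshold and the applicant data, is invented for illustration; no bank's actual model works this way:

```python
# Illustrative only: a toy scoring rule, not any bank's actual model.
# All numbers and field names below are invented for this example.

def loan_score(income: float, prior_defaults: int, area_default_rate: float) -> float:
    """Toy credit score: personal factors plus an area-level penalty."""
    score = 600.0
    score += min(income / 1_000, 150)   # reward income, capped
    score -= prior_defaults * 120       # penalize the applicant's own defaults
    score -= area_default_rate * 400    # penalty driven by postcode, not the person
    return score

APPROVAL_THRESHOLD = 620  # hypothetical cutoff

# Two applicants with identical personal records, different postcodes.
applicant_a = loan_score(income=40_000, prior_defaults=0, area_default_rate=0.02)
applicant_b = loan_score(income=40_000, prior_defaults=0, area_default_rate=0.15)

for name, score in [("A (low-default area)", applicant_a),
                    ("B (high-default area)", applicant_b)]:
    decision = "approved" if score >= APPROVAL_THRESHOLD else "declined"
    print(f"{name}: score={score:.0f} -> {decision}")
```

In this toy example, applicant A is approved and applicant B is declined on identical personal records; the only difference is the area-level default rate, which is exactly the kind of outcome regulators want banks to watch for.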

One way to improve an AI system is to include more data points to "customize" the decision as much as possible, trying to eliminate or minimize the risk of biased decisions. Perhaps the best example of this is Alibaba's Ant Group. Ant's artificial intelligence system automatically sets credit limits and interest rates, and even makes decisions based on usage history across Alibaba's services. Before making a decision, Ant analyses up to 3,000 data points for each consumer, including phone bills, consumer behavior and demographic data. As a result, Ant's decisions are arguably less prone to bias. But if a company in Europe or the U.S. tried to gather similar data, it would likely face privacy concerns, as consumers probably won't feel comfortable giving away that much information.

Banks argue that it is the human factor, not the AI system, that is more prone to subjectivity and unfair outcomes. Both the AI system and the human reviewer may have flaws, but in combination they may also produce the best outcome, at least for the time being. In October, the Bank of England and the Financial Conduct Authority discussed an ethical framework and training around AI, including a degree of human oversight and a requirement that banks be able to explain the decisions taken by automated systems.
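To illustrate what such an explainability requirement might look like in practice, here is a simplified Python sketch. It assumes a plain linear scoring model, which is not necessarily what any bank uses; the feature names, weights and threshold are invented. The point is that each feature's contribution can be surfaced as a per-decision "reason code" for a human reviewer:

```python
# Illustrative sketch: with a linear scoring model, each feature's
# contribution (weight * value) can be reported alongside the decision,
# the kind of per-decision explanation regulators discussed.
# Weights, features and the threshold here are invented for the example.

WEIGHTS = {
    "income_thousands": 1.0,
    "prior_defaults": -120.0,
    "months_employed": 0.5,
}
BASE_SCORE = 600.0
THRESHOLD = 650.0

def explain_decision(applicant: dict) -> None:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BASE_SCORE + sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "declined"
    print(f"score={score:.0f} -> {decision}")
    # Sort features by absolute impact so a human reviewer sees
    # the main drivers of this specific decision first.
    for feature, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {contrib:+.0f}")

explain_decision({"income_thousands": 42, "prior_defaults": 1, "months_employed": 36})
```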

Read More: UK Seeks Its Place to Shape Global Standards in Artificial Intelligence 

This is exactly what regulators are asking banks to do now: continue improving AI systems, as they have clear benefits for consumers and for banks, but pay attention to the data sets used and the outcomes produced. 
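As a simplified illustration of the kind of outcome monitoring regulators describe, the Python sketch below compares approval rates across two hypothetical groups (the data is invented) and applies the widely cited "four-fifths" disparate-impact heuristic:

```python
# Minimal outcome-audit sketch: compare approval rates across groups
# and flag a gap using the "four-fifths" disparate-impact heuristic.
# Group labels and decisions below are invented for illustration.

from collections import defaultdict

decisions = [
    ("group_x", True), ("group_x", True), ("group_x", False), ("group_x", True),
    ("group_y", False), ("group_y", True), ("group_y", False), ("group_y", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, was_approved in decisions:
    total[group] += 1
    approved[group] += was_approved

rates = {g: approved[g] / total[g] for g in total}
print("approval rates:", rates)

# Rule of thumb: flag if any group's approval rate falls below
# 80% of the highest group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"potential disparate impact: {group} rate {rate:.0%} vs best {best:.0%}")
```

A real audit would be far more involved, but the principle is the same: monitor the decisions the system actually produces, not just the data that goes in.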

This warning may be the first step toward regulating AI in the U.K., which so far is taking a soft approach: more recommendations, less regulation. However, the government is expected to publish a white paper on regulating AI in early 2022. The white paper, which may eventually become law, will likely provide more detail on how to reduce bias in AI. Algorithmic bias is arguably the top concern about AI among regulators around the world. The European Commission proposed legislation in 2021 that aims to limit this problem, and the U.S. Federal Trade Commission also announced last year that it may take action to reduce discriminatory outcomes when companies use AI.

Read More: FTC Mulls New Artificial Intelligence Regulation to Protect Consumers 

 
