Christian Steiner
Do only people have prejudices? – On the use of AI in lending
Polanyi's paradox
How does a bank employee decide whether or not to grant a customer a loan? A number of important indicators probably come to mind immediately: the loan amount, collateral, income, and so on. These indicators are complemented by a wealth of contextual information and personal perceptions: Does the applicant seem nervous? Is the weather good? Such apparent trivialities have been shown to influence economically relevant decisions. The result is a credit decision that bank employees themselves often describe as a gut feeling and whose exact origins no one can reconstruct.
This problem of explanation is known as Polanyi's paradox, after the Hungarian-British philosopher and polymath Michael Polanyi. It rests on the observation that our minds often draw on subconscious, intuitive knowledge derived from experience and evolutionary instincts. The paradox appears in many areas of life: even chess grandmasters are often unable to explain their moves rationally and point out that they simply felt right.

Until recently, this paradox also marked the limits of software development: software could only execute what a human being was able to express in a programming language. Despite this limitation, many processes could be automated and rationalised with rule-based systems. Others, such as recognising objects or interpreting language, are simply too complex to be programmed by hand. For such challenges, machine learning is currently the focus of public attention. Machine learning is a sub-area of artificial intelligence (AI). It comprises a variety of methods that allow computers to learn without being explicitly programmed for the task, typically by detecting relationships in high-dimensional data. Combined with ever-increasing computing power and the rapidly growing amount of available data, these techniques already make it possible to perform tasks that until recently sounded like science fiction, such as autonomous driving or voice assistants.

A major problem with many machine learning techniques, however, is that, unlike classical software, their behaviour is difficult to explain in a comprehensible way. The result is a situation similar to the Polanyi paradox: the program reliably executes a process, for example a credit decision, but not even its developer understands how the result came about. The opacity of many machine learning methods, especially the popular neural networks, leads to a number of legal and economic challenges that still have to be solved.
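To make the contrast concrete, consider the following minimal sketch. All feature names, thresholds and data in it are invented for illustration and are not taken from any real bank: a hand-written rule can always be read back as an explanation, while a small neural network trained on synthetic data gives answers whose reasons are buried in its weights.

```python
# A minimal sketch (not any bank's actual model): a small neural network
# learns credit decisions from synthetic data. Unlike the hand-written
# rule below, its learned weights do not map to human-readable reasons.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(3000, 800, n)        # hypothetical monthly income
loan = rng.normal(15000, 5000, n)        # hypothetical requested amount
collateral = rng.normal(10000, 4000, n)  # hypothetical collateral value

# Synthetic "ground truth": repayment depends nonlinearly on the features.
repaid = (income * 8 + collateral - 0.9 * loan + rng.normal(0, 2000, n)) > 20000

X = np.column_stack([income, loan, collateral])

def rule_based_decision(income, loan, collateral):
    """Classical software: every condition was written down by a human
    and can therefore be read back as an explanation."""
    return income > 2500 and loan < 4 * income * 12 and collateral > 0.3 * loan

# Machine learning: the mapping from features to decision is induced
# from the data; no human ever wrote these rules down.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X, repaid)

applicant = np.array([[2800.0, 12000.0, 6000.0]])
print("rule-based decision:", rule_based_decision(2800, 12000, 6000))
print("learned decision   :", model.predict(applicant)[0])
# The learned answer may be more accurate, but the "why" is spread across
# thousands of weights: a software-side version of Polanyi's paradox.
```

Asking the rule for a justification is trivial; asking the network requires dedicated explanation techniques, which is exactly the transparency problem discussed next.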
Legal basis
As the legal basis for the transparency of AI, the GDPR (in Germany: DSGVO) is of particular importance, above all Article 22, "Automated individual decision-making, including profiling". Under the GDPR, companies that use AI to make decisions with significant legal effects on their customers must be able to explain their processes. The granting of credit falls into this category, and in some cases BaFin demands such explanations as well. It is still unclear whether transparency of the procedure is sufficient or whether more far-reaching transparency of the individual decision is required. Both terms will be explained in more detail.

Furthermore, such decisions must not discriminate (Article 9 GDPR); in other words, demographic factors such as the origin or gender of the applicants may not play a role in a credit decision. Discrimination can enter a machine learning algorithm via the training data. If a bank has historically granted only a few loans to members of ethnic minorities, for example, the algorithm may struggle to make correct decisions for such groups. Discrimination can also occur indirectly, when a variable that is unproblematic at first glance, such as place of residence, correlates strongly with membership of an ethnic group.
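This proxy effect can be made visible with a simple check. The following sketch uses synthetic data and the illustrative variables `group` and `residence` (both hypothetical, not drawn from the article): a model that never sees the protected attribute still reproduces the historical disparity, because residence correlates with group membership.

```python
# A minimal sketch of an indirect-discrimination check on synthetic data.
# "residence" looks neutral but correlates with the protected attribute,
# so a model trained only on neutral-looking features inherits the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10000
group = rng.integers(0, 2, n)              # protected attribute (e.g. ethnic group)
residence = np.where(group == 1,
                     rng.normal(1.0, 0.3, n),   # district score, shifted for group 1
                     rng.normal(0.0, 0.3, n))
income = rng.normal(3000, 800, n)

# Historical labels carry the bias: at equal income, group 1 was
# approved less often.
approved = (income - 600 * group + rng.normal(0, 400, n)) > 2700

# The model never sees `group`, only the seemingly neutral features
# (income rescaled for numerical stability).
X = np.column_stack([income / 1000, residence])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

# Demographic parity difference: the gap in approval rates between groups.
rate0 = pred[group == 0].mean()
rate1 = pred[group == 1].mean()
print(f"approval rate group 0 : {rate0:.2f}")
print(f"approval rate group 1 : {rate1:.2f}")
print(f"parity gap            : {rate0 - rate1:.2f}")
print(f"corr(residence, group): {np.corrcoef(residence, group)[0, 1]:.2f}")
```

A substantial parity gap despite the protected attribute being absent from the training data is the statistical signature of the indirect discrimination described above; such checks are one way to detect it before a model is deployed.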