When AI says ‘no’ to your mortgage application

Imagine this scenario:

A married couple, after working hard for a few years, finally save up enough for a deposit on a small house. At last, they think, we can move out of this rented property into our own home. They fill out the necessary forms for a mortgage and send them off. For a couple of weeks afterwards, they eagerly await the postman every morning, hoping that the brown envelope of acceptance will fall through the door onto their mat. Eventually something does arrive, but unfortunately it’s the brown envelope of rejection. They look disbelievingly at the letter and at each other. Why was their application rejected? The wife phones the lender to ask.

Under the European Union’s General Data Protection Regulation, with its so-called “right to explanation”, they must be given this information, but the man on the other end of the phone doesn’t know the answer. The mortgage application was processed by an artificial intelligence system that the company had just installed at great expense, and it offers no clues. The application goes in and the answer comes out. Still, if it’s the law, then he must find out and tell the clients.

He goes to the computer scientists and asks for the reason for the rejection. They cannot give him one: the AI program learned from previous applications and made the decision on its own. No one knows why.

When the manager calls the couple back and tells them this, they can’t believe it. They’re rather angry. Computers are meant to help people, but now they are controlling them in ways that were never explicitly programmed but rather learnt. This lack of transparency not only affects industries that rely heavily on autonomous decision making, but also undermines people’s trust and can let possible bias in the data go unnoticed.

My work addresses this exact problem. I’m a computer scientist specialising in artificial intelligence (AI), and I want to increase people’s trust in the decisions that algorithms make for them. I give domain experts, such as financial advisors, tools they can use to understand why and how an algorithmic decision is made. This can also help spot potential biases and eliminate them.

There’s a wealth of literature on the link between explanation and trust. It shows that the easier it is to explain the output of a system, the more likely humans are to trust it. But not all outputs of the algorithms used in decision making today can be explained. For example, black-box methods such as neural networks – artificial networks inspired by the human brain that are capable of “learning” from examples without being explicitly programmed to do so – are not transparent.

I develop methods that turn black-box models into transparent ones. This makes it possible to explain the output of the decision-making process in a human-like way. Here is how I do it.

Convincing by argumentation

When seeking to reach an agreement, it is common for humans to reason by exchanging arguments in favour of or against a certain decision. One of the methods I use to provide human-like reasoning is argumentation. Suppose an algorithm makes a decision or a recommendation for you. As a user, you can engage in a dialogue and challenge the decision or recommendation by putting forward arguments against it. For the algorithm to convince you, it has to counter your arguments with further arguments, until you run out of arguments.

As an example, assume you are a clinician and there is a decision support system in which you can specify a cancer patient’s information and conditions (imaging or clinical data) to get advice on the best course of treatment. If the recommended treatment is chemotherapy, your argument could be “why not radiotherapy?” The counterargument could be a recent finding that radiotherapy is ineffective for patients with this clinical profile.
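To give a flavour of how such an exchange can be checked mechanically, here is a minimal, illustrative sketch (not my actual system) that models the chemotherapy example as an abstract argumentation framework: arguments attack one another, and the recommendation is accepted only if every attack on it is itself counterattacked. The argument names and the attack relation are hypothetical, chosen purely to mirror the dialogue above.

```python
# Illustrative sketch of abstract argumentation (arguments and attacks are made up).

def grounded_extension(arguments, attacks):
    """Iteratively accept every argument whose attackers are all counterattacked
    by already-accepted arguments (the grounded semantics, computed naively)."""
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            # a is acceptable if every attacker is attacked by an accepted argument
            if all(any((d, x) in attacks for d in accepted) for x in attackers):
                accepted.add(a)
                changed = True
    return accepted

arguments = {
    "recommend_chemo",         # the system's recommendation
    "why_not_radio",           # the clinician's challenge
    "radio_ineffective_here",  # counterargument: evidence for this patient's profile
}
attacks = {
    ("why_not_radio", "recommend_chemo"),
    ("radio_ineffective_here", "why_not_radio"),
}

print(grounded_extension(arguments, attacks))
# The recommendation and the counterargument are accepted; the challenge is defeated.
```

In this toy framework the clinician’s challenge is counterattacked, so the recommendation survives; if no counterargument existed, the recommendation would not be defended and the system would have failed to convince.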

Convincing using diagrams

After working on explanation through argumentation, I joined Mateja Jamnik’s group in the Computer Laboratory in Cambridge to work on a project that investigates the explanatory power of diagrams. These diagrams represent concepts, individuals and the relationships between them. Using diagrams for human-like reasoning is motivated by findings in neuroscience showing that people find reasoning significantly easier to follow when it is presented diagrammatically. By constructing a series of diagrams corresponding to each reasoning step the algorithm took, we can explain its outcome.

As in the previous scenario, assume we have a decision support system that recommends a certain drug for a patient. The clinician may wish to investigate the reasoning the algorithm went through to match the patient’s symptoms with this drug. For instance, a simple reason for its recommendation could be the similarity of the patient’s symptoms to the symptoms of another disease for which this drug has been shown to be effective.
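As a rough illustration of that idea (and not the actual diagrammatic system), the sketch below matches a patient’s symptoms against known diseases by overlap and records each reasoning step as it goes; it is exactly this kind of step-by-step trace that could then be rendered as a sequence of diagrams. All disease, symptom and drug names are made up.

```python
# Illustrative sketch: symptom overlap as a stand-in for "similar disease",
# recording each reasoning step so it could later be drawn as a diagram.
# Every name below is hypothetical.

disease_symptoms = {
    "disease_A": {"fever", "rash", "joint_pain"},
    "disease_B": {"cough", "fever", "fatigue"},
}
effective_drug = {"disease_A": "drug_X", "disease_B": "drug_Y"}

def recommend(patient_symptoms):
    steps = []  # human-readable trace of the reasoning
    best, best_overlap = None, set()
    for disease, symptoms in disease_symptoms.items():
        overlap = patient_symptoms & symptoms
        steps.append(f"Compared patient symptoms with {disease}: shared {sorted(overlap)}")
        if len(overlap) > len(best_overlap):
            best, best_overlap = disease, overlap
    drug = effective_drug[best]
    steps.append(f"{best} is the closest match, and {drug} is known to be effective for it")
    steps.append(f"Therefore recommend {drug}")
    return drug, steps

drug, steps = recommend({"fever", "rash", "headache"})
for step in steps:
    print(step)
```

Each printed step corresponds to one picture in the explanation: which symptoms the patient shares with each disease, which disease is the closest match, and why the drug follows from that match.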

This is my personal and professional journey, geared towards reclaiming control over AI and the changes it is making to our lives. My hope is that research like this gets acknowledged, rather than the conversation focusing only on the negative aspects of AI, such as reinforcing existing biases. We have a long way to go, but AI practitioners do care!

Image created by Ali Baydoun.