Are you mindful of discrimination in (your) models?

August 28, 2020


Financial institutions cannot simply accept every application for, say, a mortgage loan or a non-life insurance product. To avoid harmful commitments for both bank and client, acceptance is based on selection criteria, often with the help of mathematical models. In these processes, discrimination must be prevented: we consider it unethical, and it is also strictly prohibited. Until recently, preventing discrimination in a model was relatively straightforward: by omitting the variables mentioned in Article 1 of the Dutch Constitution from the model, discrimination could be avoided.

However, the use of advanced machine learning algorithms is becoming more widespread. While such models achieve high performance, for example as selection criteria, their complexity reduces their transparency. As a model becomes more of a ‘black box’, the chance of ‘latent’ discrimination increases. Neural networks with multiple layers, for example, can identify very complex patterns.

In order not to make this blog post overly complex, regulations other than Article 1 of the Dutch Constitution (such as the GDPR) are not taken into account. Furthermore, several methods are known in the literature to guarantee ‘algorithmic fairness’. These will be discussed in a future blog.



Discrimination

Discrimination is (pre)judging an individual based on the group to which the individual belongs. There should be equal opportunities for all without bias.

Mind you, there may also be latent discrimination. For example, pregnancy is inextricably linked to the female sex (a direct relationship), and shopping in a halal supermarket is strongly linked to religion (a derived characteristic). This shows that latent discrimination can appear in unexpected forms.

It seems that a simple way to avoid discrimination is to exclude potentially directly or indirectly discriminating variables from the model. However, advanced machine learning algorithms can find very complex patterns which could result in hidden, or latent, discrimination.
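To make this concrete, below is a minimal sketch of a so-called proxy check, using synthetic data and hypothetical variable names (postcode_area, shop_profile): if a simple classifier can reconstruct a protected characteristic from the variables that are allowed into the model, those variables act as a proxy, and excluding the protected variable itself offers no guarantee.

```python
# Minimal proxy check (illustrative only): can the "allowed" inputs
# reconstruct a protected characteristic? Variable names and data are
# hypothetical; in practice you would use your own application data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Protected characteristic (never used as a model input).
protected = rng.integers(0, 2, size=n)

# "Allowed" behavioral inputs that happen to correlate with the
# protected characteristic (e.g. postcode area, type of supermarket).
postcode_area = protected * 0.8 + rng.normal(0, 1, size=n)
shop_profile = protected * 0.6 + rng.normal(0, 1, size=n)
X = np.column_stack([postcode_area, shop_profile])

X_train, X_test, p_train, p_test = train_test_split(
    X, protected, test_size=0.3, random_state=0
)

# Train a classifier to predict the protected characteristic from the
# allowed inputs only.
clf = LogisticRegression().fit(X_train, p_train)
auc = roc_auc_score(p_test, clf.predict_proba(X_test)[:, 1])

# An AUC well above 0.5 signals that the allowed inputs act as a proxy,
# so dropping the protected variable alone does not remove the risk.
print(f"Proxy AUC for protected characteristic: {auc:.2f}")
```

This is a sketch under simplified assumptions, not a complete fairness test; it only flags that a proxy relationship exists, not whether the model actually uses it.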

Input control

While discrimination is obviously prohibited, an individual may be subject to a decision on the basis of their behavior, insofar as this involves a free choice for which they can be held accountable. A free choice is defined here as a choice that cannot be traced back to protected characteristics.

When deciding whether to accept a new client, it is important to motivate which criteria are relevant, i.e. show a causal (and not just statistical) relationship with the purpose of the decision. For example, the historical payment behavior of an applicant is very relevant to a mortgage application.

Thus, two dimensions are important in determining variables that form the input of a model: voluntariness and relevance.

Below are some examples of the different groups of variables:

A. Characteristic that is not a free choice but is relevant:
The sex of an applicant for a car insurance policy could potentially be relevant for predicting future car damage[1], but this is clearly not allowed.

B. Characteristic that is partly a free choice and is relevant:
This category is the most interesting for the discussion, because the variable is considered relevant and, in some sense, a behavioral characteristic of the individual applicant. Consider, for example, the street where someone lives. This often appears to be relevant, but at the same time may betray a certain cultural background of the applicant. It is not always a completely free choice.

C. Characteristic that is a free choice and is relevant:
The fact that someone is a member of a mountaineering association can potentially be relevant for the purpose of a decision. Furthermore, such a membership can be labeled as a free choice. The same goes for whether someone smokes. That is a free choice that is very relevant for a life insurance policy.

D. Characteristic that is a free choice but is irrelevant:
The number of hours someone spends watching TV per week is a completely free choice. However, it does not seem very relevant to assessing his or her behavior as a motorist. The guiding principle would therefore be to omit such a variable, since a decision based on it would not be justifiable.

E. Characteristic that is neither a free choice nor relevant:
Variables in this category are not allowed, nor are they relevant for the purpose of the selection. For example, political affiliation does not seem to be causally related to the risk of accidents. Apart from the legal prohibition on including these variables, there are also no justified grounds for inclusion, since we cannot argue for a causal relationship.

In fact, variables in category B should lead to the most discussion, because they are considered relevant yet are only partly the result of free choice and partly traceable to a protected characteristic from Article 1. Is the street where someone lives a completely free choice? Or the country where people go on vacation? Is it fair to hold someone accountable for these choices?

Output of the model

One could argue that if protected or non-behavioral variables are excluded from the model a priori, the risk of discrimination is smaller. After all, what does not go in, cannot come out. However, this is not necessarily true. Deep learning algorithms can find very complex patterns that may still correlate with protected characteristics.

It could therefore happen, even if all protected data is excluded, that the use of behavioral variables still leads to a model which discriminates. For example, suppose all Cretans lie[2] on their income statements. This could result in a mortgage acceptance model which rejects all applications from Cretans.

If the model works correctly and all Cretans lie, there could be justified grounds to reject their applications. At the same time, this result can be undesirable and call for a debate about algorithmic fairness. Also, the bank could suffer reputational damage, regardless of justifiability.
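A simple output-side check along these lines is sketched below, with hypothetical group labels and synthetic decisions: it compares acceptance rates per group, even though the group label is never a model input. A large gap is a signal to investigate and debate, not by itself proof of unjustified discrimination.

```python
# Illustrative output check (hypothetical data and group labels): compare
# acceptance rates per group. The group label is not a model input; it is
# only used afterwards to monitor the model's decisions.
import numpy as np

def acceptance_rate_gap(accepted: np.ndarray, group: np.ndarray) -> dict:
    """Acceptance rate per group plus the largest pairwise difference."""
    rates = {str(g): float(accepted[group == g].mean()) for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap}

# Hypothetical model decisions (1 = accepted) and group membership.
rng = np.random.default_rng(1)
group = rng.choice(["Cretan", "other"], size=1_000)
accepted = np.where(group == "Cretan",
                    rng.random(1_000) < 0.35,   # lower acceptance rate
                    rng.random(1_000) < 0.70).astype(int)

result = acceptance_rate_gap(accepted, group)
print(result["rates"], f"gap = {result['gap']:.2f}")
```

Which gap, if any, is acceptable is ultimately a policy question; the code can only surface the difference for discussion.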

Conclusion

Financial institutions may assess applications based on the individual behavior of the applicant, and in doing so may still discriminate. The central question is to what extent behavioral variables are the result of completely free choice and to what extent they can be traced back to characteristics referred to in Article 1 of the Dutch Constitution.

The discussion about algorithmic fairness and which variables can lead to discrimination is certainly not black and white, but gray. We therefore encourage an industry-wide discussion on this topic to establish what is permissible and required.

The discussion is becoming increasingly relevant with the introduction of advanced machine learning algorithms, where model transparency can decline. It is therefore very important that financial institutions introduce proper monitoring and controls to prevent discrimination in their models.

RiskQuest has experience in quantifying and counteracting discrimination in models. If you would like to receive more information, please contact us.