New best practice guidelines for the use of supervised machine learning

On 10 July 2019, the Danish Financial Supervisory Authority (the "FSA") published new guidelines on the use of supervised machine learning. The guidelines were published on the basis of, inter alia, the FSA's experience with machine learning technology in the FT Lab, the FSA's "regulatory sandbox".

As the development and use of artificial intelligence, including machine learning, has gained traction both in the financial sector as a whole and specifically among fintech companies, the FSA's recommendations can serve as guidance for companies seeking to ensure that they use the technology in accordance with what the FSA deems to be "best practice".

Therefore, we will focus on the new guidelines in this Plesner Insight.

Background

The guidelines were developed as a result of the FSA's experience in the FT Lab with the company e-nettet A/S, which used supervised machine learning in the form of a neural network to price residential real estate.

The purpose of the FSA guidelines is to highlight some of the risks related to supervised machine learning as well as to guide and inspire companies to use the technology in accordance with what the FSA deems to be best practice.

What is supervised machine learning?
The FSA defines machine learning as a sub-category of artificial intelligence, which can be described as algorithms that process data, learn from that data and thereafter utilise what they have learned to make well-informed decisions.

Further, the FSA defines supervised machine learning as a special form of machine learning in which both the input and output variables are known. Based on these variables, the optimal relationship between the input and output variables is deduced. This relationship can then be used to describe new examples.
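To make the definition concrete, the following is a minimal sketch of supervised learning in that sense: a model is fitted on known input/output pairs and then applied to new examples. The data, feature names and choice of a simple linear model are illustrative assumptions only; the FT Lab case mentioned above used a neural network.

```python
# Minimal sketch of supervised machine learning: both the input variables
# (hypothetical property characteristics) and the output variable (observed
# sale prices) are known, and the algorithm deduces the relationship.
import numpy as np
from sklearn.linear_model import LinearRegression

# Known inputs: [floor area in m2, number of rooms, construction year]
X_train = np.array([
    [85, 3, 1962],
    [120, 4, 1988],
    [64, 2, 1935],
    [140, 5, 2005],
])
# Known outputs: observed sale prices in DKK (hypothetical figures)
y_train = np.array([2_100_000, 3_400_000, 1_650_000, 4_200_000])

# Deduce the relationship between inputs and outputs...
model = LinearRegression().fit(X_train, y_train)

# ...and apply it to a new, previously unseen example.
print(model.predict(np.array([[100, 3, 1975]])))
```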

The FSA recommendations

The FSA guidelines contain nine overall recommendations for the use of supervised machine learning. The recommendations span a number of subjects, including governance, responsibility, explainability and data ethics. Below, we have briefly set out some of the most significant recommendations contained in the FSA guidelines.

Governance
The FSA recommends that financial companies, which are already subject to regulation and rules requiring them to develop policies and procedures, ensure that machine learning is incorporated into the usual governance structures already in place within the business areas where the technology will be used. Furthermore, the FSA recommends that companies actively review the new processes and potential risks that the use of machine learning may entail, and ensure that these processes and risks are integrated into the existing policies and procedures.

In many companies, machine learning models are likely to be developed and optimised on an ongoing basis. The FSA therefore recommends that companies ensure that the development of the models is continuously documented and validated. Changes to machine learning models should be logged in a systematic manner, and companies must be able to document changes made during the development, operation and updating of a specific model.
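As an illustration of what systematic logging might look like in practice, the following is a minimal sketch, assuming the model artifact is serialised to bytes (e.g. with pickle) and that a simple append-only log file suffices. All names and the log format are hypothetical; the FSA guidelines do not prescribe any particular tooling.

```python
# Minimal sketch of systematic, append-only change logging for a model.
import hashlib
import json
from datetime import datetime, timezone

def log_model_change(model_bytes: bytes, version: str, description: str,
                     log_path: str = "model_changes.jsonl") -> None:
    """Append a documented record of a model change (hypothetical helper)."""
    entry = {
        "version": version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Fingerprint of the exact artifact, so earlier results can later be
        # traced back to the precise model version that produced them.
        "artifact_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "description": description,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```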

Data processing and data ethics
A substantial part of the general machine learning debate has been dedicated to questions of data processing and data ethics. These themes are also touched upon in the FSA guidelines. The FSA recommends that companies ensure that data quality and stability are kept at a satisfactory level and that companies are able to document their data processing at all times.

Furthermore, the FSA recommends that companies actively take a view on potential bias in the data set and consider how undesirable outcomes can be avoided, in order to ensure that data is used in a responsible manner.
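One simple, concrete way to "actively take a view" on potential bias could be to compare model outcomes across customer segments, as in the sketch below. The column names, the binary outcome and the 80% threshold are illustrative assumptions, not requirements drawn from the FSA guidelines.

```python
# Minimal sketch of a segment-level outcome check for potential bias.
import pandas as pd

def approval_rate_by_segment(df: pd.DataFrame,
                             segment_col: str = "segment",
                             outcome_col: str = "approved") -> pd.Series:
    """Approval rate per customer segment; large gaps flag potential bias."""
    return df.groupby(segment_col)[outcome_col].mean()

# Hypothetical model outcomes for two customer segments.
df = pd.DataFrame({
    "segment":  ["A", "A", "B", "B", "B", "A"],
    "approved": [1, 1, 0, 1, 0, 1],
})
rates = approval_rate_by_segment(df)
print(rates)
# A common heuristic (an assumption here, not an FSA rule): flag any segment
# whose rate falls below 80% of the best-served segment's rate.
print((rates / rates.max()) < 0.8)
```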

In connection thereto, the FSA notes that responsible use of data is built on the principles already contained within the personal data regulatory regime. Thus, financial institutions may be able to find inspiration within these rules when assessing whether their use of machine learning measures up to good data ethics.

Remarkably, the FSA also recommends that companies consider what the FSA calls reasonableness, which the FSA describes as "the general understanding in society of what is right and wrong from time to time". In that connection, the FSA notes that, even where a model is free of bias from both the data sources and the model development, the model may nevertheless lead to outcomes that can be considered unfair to, e.g., certain customer segments. Accordingly, the FSA considers it best practice that companies actively assess reasonableness and are able to document this assessment.

The concept of reasonableness as defined by the FSA is a rather elastic and subjective concept that may change drastically over time. Financial companies should therefore bear this in mind when developing new machine learning models.

Performance and robustness
The FSA recommendations also deal with what the FSA calls performance and robustness. The term "performance" relates to the ability of machine learning models to improve data results and precision compared with traditional statistical models, including the model's ability to correctly estimate a given parameter. The FSA notes that it is important for companies to ensure that a model is robust and can withstand changes to the data input and other external influences, such as malicious actors trying to influence the model's results.

The term robustness also covers the model's ability to handle new and updated data, so that the output does not change drastically from one version of the model to the next. To ensure this, the FSA recommends that companies document and retain model versions so that earlier results can be recreated, thereby making it possible to evaluate why a considerable change in the model has occurred.
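A minimal sketch of such a version-to-version comparison follows, assuming both saved versions follow the scikit-learn predict() convention and are evaluated on a fixed evaluation set. The 5% tolerance is an illustrative assumption only.

```python
# Minimal sketch: detect drastic output changes between two model versions.
import numpy as np

def outputs_within_tolerance(old_model, new_model, X_eval: np.ndarray,
                             tolerance: float = 0.05) -> bool:
    """True if the new version's outputs stay within the chosen tolerance."""
    old_pred = old_model.predict(X_eval)
    new_pred = new_model.predict(X_eval)
    # Relative change per evaluation example (the small epsilon avoids
    # division by zero).
    rel_change = np.abs(new_pred - old_pred) / (np.abs(old_pred) + 1e-9)
    print(f"mean change: {rel_change.mean():.2%}, "
          f"max change: {rel_change.max():.2%}")
    return bool(rel_change.max() <= tolerance)
```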

Explainability
One of the potential risks related to the use of machine learning is that the processes and results of the model are harder to explain compared to traditional statistical methods.

Thus, the FSA considers it best practice that companies using supervised machine learning are able to explain how a model works and the reasoning behind its results. Furthermore, companies should be able to explain the rationale for decisions based on machine learning models to individuals, to the extent that the result has an impact on such individuals.
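By way of illustration, one common, model-agnostic explainability technique is permutation importance, which measures how much predictive performance drops when each input variable is shuffled. The sketch below uses hypothetical data and feature names; the FSA guidelines do not prescribe any particular explainability technique.

```python
# Minimal sketch of permutation importance as one way to explain which
# input variables drive a model's results.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # hypothetical inputs, e.g. [area, rooms, year]
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(["area", "rooms", "year"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
```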

In that connection, companies should note that the GDPR contains rules regulating decisions based solely on automated processing which have legal effects or similarly significant effects on the individual concerned. As a general rule, such decisions may only be made in certain circumstances, for instance with the explicit consent of the data subject.

The future use of supervised machine learning

With the FSA guidelines, companies now have, for the first time, some guidance on the use of machine learning. As machine learning technology is further introduced within financial companies - thereby rendering the technology more important across the financial sector as a whole - it is not unlikely that the legislator will have an increased focus on regulating the use of machine learning.

Thus, for financial companies currently considering or already using supervised machine learning, these guidelines may be of significant interest in ensuring compliance with what the FSA deems to be best practice for the use of supervised machine learning.
