Artificial intelligence in finance could pose new systemic risk


The Financial Stability Board believes that the development of artificial intelligence in finance will increase the efficiency of the financial system, but warns that it may also create serious risks to financial stability.

In its report on artificial intelligence (AI) and machine learning in financial services, the Financial Stability Board (FSB) notes that these technologies are finding ever wider application in the financial system. They are changing how entities in the financial market make decisions concerning, among other things, transactions, capital allocation and lending.

There are many applications of artificial intelligence and machine learning in the financial system, driven by both supply factors (such as technological advances and the availability of financial-sector data and infrastructure) and demand factors (such as profitability, competition and regulatory demands). These technologies are used, among other things, to assess the credit quality of counterparties, price and sell insurance contracts, automate interactions with clients, optimize the use of capital and commercial transactions, manage risk, prevent fraud, and assess data quality and regulatory compliance.

Artificial intelligence is the theory and practice of creating intelligent computer systems that can assist or replace human intellectual work and enable a deeper understanding of human cognition. Machine learning, in turn, concerns the analysis of learning processes and the creation of systems that improve their own performance on the basis of past experience, using self-optimizing algorithms that drive the learning process.
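"Improving based on past experience" can be made concrete with a toy example (purely illustrative, not from the FSB report; the data, learning rate and model here are invented): a one-parameter model that adjusts itself after each mistake, so its prediction error shrinks as observations accumulate.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = 2.0   # the relationship hidden in the data
w = 0.0        # the model's current estimate, before any experience
lr = 0.05      # learning rate: how strongly each mistake adjusts the model
errors = []

for _ in range(200):
    x = rng.normal()
    y = true_w * x + 0.1 * rng.normal()   # a noisy observation
    pred = w * x                          # prediction from current experience
    err = pred - y
    w -= lr * err * x                     # self-correction after the mistake
    errors.append(err ** 2)

early = np.mean(errors[:20])    # average error while inexperienced
late = np.mean(errors[-20:])    # average error after 200 observations
```

After the loop, `late` is far smaller than `early` and `w` is close to the true value: the system has "learned" the pattern from its accumulated mistakes.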

The most advanced AI technologies can process large amounts of data, making it possible to identify trends or signals across an entire data set. The FSB believes that these technologies will make the financial system more efficient. The changes could also bring about new market players (including systemically important ones), new interconnections, and new types of risk.

The complexity of AI systems could mean that they become incomprehensible to humans. The machine learning technique known as deep learning is especially difficult to interpret because of the way it processes data. Whereas many traditional machine learning models apply a single, relatively simple transformation to the data, deep learning models are arranged in hierarchical layers of increasing complexity and abstraction, and the data pass through several layers of processing. This makes it difficult for the user to follow the process on an ongoing basis.
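The layered processing described above can be sketched in a few lines of NumPy (a deliberately tiny toy, not anything from the report): each layer re-represents the previous layer's output, so the final score cannot be traced back to individual input features by simple inspection.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied between layers
    return np.maximum(0.0, x)

def linear_model(x, w):
    # One weight matrix: the output is directly attributable to the inputs
    return x @ w

def deep_model(x, layers):
    # Several stacked layers: each one re-represents the previous output,
    # making the path from inputs to final score hard to interpret
    h = x
    for w in layers:
        h = relu(h @ w)
    return h

x = rng.normal(size=(1, 8))              # e.g. 8 features of a transaction
layers = [rng.normal(size=(8, 16)),
          rng.normal(size=(16, 16)),
          rng.normal(size=(16, 1))]      # three successive processing layers

score = deep_model(x, layers)            # a single opaque output score
```

In the linear case a user can read each weight as the contribution of one input; in the deep case the intermediate representations have no such direct meaning, which is the interpretability problem the FSB describes.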

As a result, the FSB believes that the lack of interpretability and of ongoing verification of the artificial intelligence and machine learning methods in use could make these technologies a source of risk for the financial system. Their widespread use could have adverse and unintended consequences, which matters particularly during a market downturn. In such a situation it could be difficult to predict the behavior of artificial intelligence systems, and the only remedy might be to switch them off entirely. This, in turn, could have macroeconomic consequences of unknown type and scale.

In this context, we should also keep in mind that existing AI systems have so far been tested only under conditions of historically low volatility in the financial markets. It is not known whether they will remain resilient and effective when market volatility increases.

In addition, AI systems blur the question of liability: it is not entirely clear whether responsibility for the technology should be borne by the entity using it or by its creator. Supervisory authorities, for their part, face the challenge of understanding how the algorithms inside AI systems work. On the other hand, supervisors could also use these technologies themselves: machine learning techniques are effective at detecting repetitive patterns (e.g. of behaviour), which could be useful, for example, in applications that detect fraud and other irregularities. AI could also increase the effectiveness of the financial supervisory authorities and enable better analysis of systemic risk.
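The idea of flagging deviations from a repetitive pattern can be illustrated with a deliberately simple sketch (a robust z-score rule in NumPy; a hypothetical toy, far cruder than any real fraud-detection model a supervisor would use):

```python
import numpy as np

def flag_anomalies(amounts, threshold=3.5):
    """Flag amounts that deviate strongly from the historical pattern,
    using a median-based (robust) z-score. Illustrative only."""
    x = np.asarray(amounts, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))     # median absolute deviation
    if mad == 0:
        return np.zeros(len(x), dtype=bool)
    z = 0.6745 * np.abs(x - med) / mad   # robust z-score
    return z > threshold

# Eight routine transactions and one outlier
history = [100, 102, 98, 101, 99, 97, 103, 100, 5000]
flags = flag_anomalies(history)          # only the last amount is flagged
```

Real fraud-detection systems learn far richer behavioural patterns, but the principle is the same: model what is repetitive, then surface what breaks the pattern.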

The FSB indicates that, at the present stage, it is important to evaluate applications based on AI and machine learning for possible risks, including compliance with the relevant rules on data privacy, business risk and cybersecurity. Progress in AI and machine learning applications should be matched by progress in interpreting the algorithms' decisions and results.

Milena Kabza is an economist in the Financial Stability Department of NBP, Poland's central bank.
