Introduction
In recent years, artificial intelligence systems have become prominent tools across a multitude of industries, including banking. They play a key role in improving decision-making, influencing business results and increasing operational efficiency.[1] These sophisticated algorithms have enabled unprecedented advances in areas such as fraud detection, customer service automation[2] and risk monitoring and assessment.[3]
However, alongside these transformative benefits, the adoption of artificial intelligence systems has introduced challenges that can increase risk for firms throughout the entire lifecycle of these systems. Compared with classical software, artificial intelligence and machine learning systems carry an inherent complexity due to their non-deterministic nature.[4]
Given the rapid and inexorable spread of these techniques, the European legislator has therefore worked in recent years, in step with regulatory initiatives outside the EU, to establish a legal framework. The result is Regulation (EU) 2024/1689 (the EU AI Act), published on 12 July 2024 and in force since 1 August 2024, whose aim is to promote the uptake of human-centred and trustworthy artificial intelligence.[5]
Although the regulatory requirements of the EU AI Act apply squarely to high-risk systems, the regulator itself encourages extending the regulation's principles to non-high-risk AI systems as well. Given the pervasiveness of the phenomenon, it is therefore essential to have a robust, industrialised approach with clear rules and a resilient operating model. The approach observed in the market is to leverage the existing model risk framework, adapting its processes and methodologies to handle the specificities of AI systems.
Organisational model for AI risk management
After a thorough understanding of the regulatory principles, the first step is to adopt an appropriate governance model by answering questions such as: has a centralised or decentralised approach been chosen? Have roles and responsibilities within the bank been assigned? Have appropriate internal policies been defined? Has a risk and quality management system been adopted?
On the first question, the Swiss banking system is still at the stage of understanding artificial intelligence and its consequences for the organisational model. Only in some banking contexts has the Hub-and-Spoke model been adopted, which is considered the target to strive for; at the other market players, individual units each build their own artificial intelligence systems, introducing potential inefficiencies and duplication.
The Hub-and-Spoke model consists of a central unit (the Hub) that brings order by setting the rules, and distributed units (the Spokes) that contribute specific skills. The Hub is responsible for setting a clear strategy, establishing the rules for managing the needs of the bank's different units and maintaining a clear view of the current and future artificial intelligence roadmap, avoiding unnecessary duplication and addressing needs in the best way. The Spokes provide the sector and process expertise needed to develop vertical solutions.
Development of an AI risk management policy
The second step, once the organisational model has been identified, is to define a robust policy that sets out the guiding principles and operating methods to be followed in order to maintain adequate control over AI.
To be effective, the policy must describe the organisational model and the rules adopted at group level, give a clear definition of what is meant by an artificial intelligence system, and correctly assign roles and responsibilities, involving not only the technical functions but also the strategic bodies and the control functions, each within its remit.[6] The policy must also set out how the census and classification of artificial intelligence systems are to be carried out, and regulate the choice between an in-house implementation ('make') and the adoption of a solution developed by an external provider ('buy').
As the regulator imposes stricter requirements for releasing AI systems into production, it is crucial to have a robust framework documenting how each system was built and how it was tested. These rules must also be provided for in the policy and laid down in predefined documentation standards. In line with the regulation, the policy must also contain clear rules for the periodic monitoring of these systems, to prevent unfair usage practices from spreading and to catch errors caused by degradation of the system as it evolves.
Embedding the target operating model also requires promoting an artificial intelligence culture within the bank.[7] After a careful assessment of the literacy level of all stakeholders, both internal and external, it is crucial to set up a solid training programme: specific induction sessions for top management and, in parallel, a training plan for staff upskilling and reskilling. This plan must address the changes artificial intelligence introduces into operational processes and people's ways of working, in order to overcome the natural fear of being replaced by intelligent systems.
Another fundamental aspect is the proper involvement of the business functions. One naturally thinks of the Innovation, Data & Technology functions, but the Control and Organisation & People functions must increasingly play an important role too. In this context, the Chief Risk Officer's area has the task of extending the bank's risk appetite to cover artificial intelligence, of defining its limits of use and, above all, of classifying the systems in accordance with the regulation and assigning each a precise, calibrated risk index. With the involvement of the organisation function, a structured process must therefore be designed for collecting and classifying AI systems, for example by preparing concise, effective questionnaires to be adopted at group level, as sketched below.
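By way of illustration only, the following is a minimal Python sketch of what such a group-level questionnaire and its scoring might look like; the questions, answer scales and weights are hypothetical assumptions, not an actual supervisory or market template.

```python
# Hypothetical group-level classification questionnaire: the questions,
# answer options and weights are illustrative assumptions only.
CLASSIFICATION_QUESTIONNAIRE = [
    {
        "id": "Q1",
        "text": "Does the system make or support decisions affecting customers?",
        "answers": {"no": 0, "supports a human decision": 1, "decides autonomously": 2},
    },
    {
        "id": "Q2",
        "text": "Does the system process personal or sensitive data?",
        "answers": {"no": 0, "personal data": 1, "sensitive data": 2},
    },
    {
        "id": "Q3",
        "text": "Could a system error expose the bank to regulatory or reputational harm?",
        "answers": {"negligible": 0, "moderate": 1, "severe": 2},
    },
]


def questionnaire_score(responses: dict[str, str]) -> int:
    """Sum the weights of the selected answers into a first, coarse risk indication."""
    weights = {q["id"]: q["answers"] for q in CLASSIFICATION_QUESTIONNAIRE}
    return sum(weights[qid][answer] for qid, answer in responses.items())


# Example: a fully automated, customer-facing model scores near the maximum.
score = questionnaire_score(
    {"Q1": "decides autonomously", "Q2": "personal data", "Q3": "severe"}
)  # -> 5
```

A short, weighted questionnaire of this kind keeps the census burden low for sponsors while giving the risk function a comparable first indication across the group.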
Best practices for long-term risk management
A good practice observed in the market is to make the sponsor of the AI system responsible for its census which, to ease the management of the system's life cycle, can take place through an IT tool.[8] IT solutions exist on the market that manage this workflow with agility, allowing the functions involved to collaborate and keeping everyone aware of the model's nature and its approval and usage status, while highlighting the need for revision and facilitating the planning of corrective actions. At the time of the census, it is important that the sponsor assign the system an initial risk assessment, the so-called synthetic risk, which is then duly checked and, where appropriate, endorsed by the Chief Risk Officer's area. At this point the system can be classified in accordance with internal governance and, above all, in line with the requirements of the AI Act.
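As a purely illustrative sketch (the article does not prescribe any particular tool or data model), a census entry and its workflow states could be represented as follows; every name, field and state below is a hypothetical placeholder.

```python
from dataclasses import dataclass, field
from enum import Enum


class SyntheticRisk(Enum):
    """Initial risk tiers, loosely mirroring the EU AI Act categories."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


class ApprovalState(Enum):
    """Lifecycle states tracked by the census workflow."""
    CENSUSED = "censused"              # registered by the sponsor
    UNDER_REVIEW = "under_review"      # synthetic risk being checked by the CRO area
    ENDORSED = "endorsed"              # classification confirmed
    IN_PRODUCTION = "in_production"    # released and in use
    UNDER_REVISION = "under_revision"  # flagged for corrective action


@dataclass
class AISystemRecord:
    """One census entry; all field names are illustrative."""
    system_id: str
    name: str
    sponsor: str
    make_or_buy: str               # "make" (in-house) or "buy" (external provider)
    synthetic_risk: SyntheticRisk  # sponsor's initial assessment
    state: ApprovalState = ApprovalState.CENSUSED
    review_notes: list[str] = field(default_factory=list)
```

Whatever tool is chosen, the point is that sponsor, risk function and control functions all read and update the same record, so the approval status is never ambiguous.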
The synthetic risk allows cases that are unacceptable under the regulation to be excluded immediately; adding two further dimensions of analysis then yields a specific risk assessment that combines the impact a system error could have on the bank with the complexity of the system. By way of example, the impact dimension must take into account the regulatory, reputational and economic consequences that may fall both on the bank internally and on third parties and customers.
The second dimension is the complexity of the system: aspects such as the system's objective, the methodologies and technologies it adopts, its interconnections and dependencies with other systems, the nature of the data it uses, the ethical and fairness implications and, last but not least, the extent to which it interacts with human beings and must explain the choices it makes.
Considering the synthetic risk, complexity and impact together, the system can then be positioned within a compliance matrix. Priority should go to models with high impact, high complexity and low regulatory compliance, for which a validation process and continuous monitoring over time should be initiated.
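The positioning logic can be sketched as follows; the ordinal scales (1 = low to 3 = high) and thresholds are assumptions chosen for the example, not values taken from the regulation or the article.

```python
def specific_risk(impact: int, complexity: int) -> int:
    """Combine the two analysis dimensions into a single specific-risk index."""
    return impact * complexity


def matrix_action(synthetic_risk: str, impact: int, complexity: int,
                  compliance: int) -> str:
    """Map a system's position in the compliance matrix to a follow-up action.

    All scores are ordinal, 1 (low) to 3 (high); thresholds are illustrative.
    """
    if synthetic_risk == "unacceptable":
        return "exclude: prohibited under the regulation"
    if impact == 3 and complexity == 3 and compliance == 1:
        return "priority: initiate validation and continuous monitoring"
    if specific_risk(impact, complexity) >= 6:
        return "elevated: schedule validation and periodic review"
    return "standard: monitor according to plan"


# Example: a high-impact, high-complexity system with low compliance is prioritised.
print(matrix_action("high", impact=3, complexity=3, compliance=1))
```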
Beyond continuous monitoring over time, systems positioned in the riskiest part of the compliance matrix should also undergo periodic review and validation, at a frequency that depends on the specific risk level and is planned ex ante.
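For instance, an ex-ante review calendar could be as simple as the following; the frequencies shown are illustrative assumptions, not regulatory requirements.

```python
from datetime import date, timedelta

# Hypothetical ex-ante review calendar: frequency scales with specific risk.
REVIEW_INTERVAL_DAYS = {
    "high": 182,    # roughly semi-annual, for the riskiest matrix cells
    "medium": 365,  # annual
    "low": 730,     # biennial
}


def next_review_date(last_review: date, risk_level: str) -> date:
    """Plan the next periodic review from the risk-dependent frequency."""
    return last_review + timedelta(days=REVIEW_INTERVAL_DAYS[risk_level])


print(next_review_date(date(2025, 1, 15), "high"))  # 2025-07-16
```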
Conclusion
In conclusion, the management of artificial intelligence systems requires a multidisciplinary approach and robust governance to constantly monitor the risks and opportunities these technologies entail. Only through a well-defined policy, appropriate training and continuous risk management can banks fully exploit the potential of AI while ensuring regulatory compliance and safeguarding the health, safety and fundamental rights of users. The adoption of good risk management practices and a corporate culture open to innovation will be key elements for long-term success.
References
- NSA Polireddi, 'An Effective Role of Artificial Intelligence and Machine Learning in Banking Sector', Measurement: Sensors, vol. 33, 2024, 101135, https://doi.org/10.1016/j.measen.2024.101135.
- BF Maseke, 'The Transformative Power of Artificial Intelligence in Banking Client Service', South Asian Journal of Social Studies and Economics, vol. 21, issue 3, 2024, pp. 93–105, https://doi.org/10.9734/sajsse/2024/v21i3787.
- H Xu et al., 'Leveraging Artificial Intelligence for Enhanced Risk Management in Financial Services: Current Applications and Future Prospects', Academic Journal of Sociology and Management, vol. 2, issue 5, 2024, pp. 38–53, https://doi.org/10.5281/zenodo.13765819.
- B Fabrègue, 'Artificial Intelligence Governance in Smart Cities: A European Regulatory Perspective', Journal of Autonomous Intelligence, vol. 7, issue 2, 2024, https://doi.org/10.32629/jai.v7i2.672.
- B Fabrègue, 'Artificial Intelligence Governance in Smart Cities: A European Regulatory Perspective', Journal of Autonomous Intelligence, vol. 7, issue 2, 2024, https://doi.org/10.32629/jai.v7i2.672.
- R Feldman and K Stein, 'AI Governance in the Financial Industry', Stanford Journal of Law, Business & Finance, vol. 27, 2022, p. 94.
- R Feldman and K Stein, 'AI Governance in the Financial Industry', Stanford Journal of Law, Business & Finance, vol. 27, 2022, p. 94.
- R Feldman and K Stein, 'AI Governance in the Financial Industry', Stanford Journal of Law, Business & Finance, vol. 27, 2022, p. 94.