The purpose of risk management in financial services is usually defined as to ‘protect and enable’. The ‘protect’ dimension can refer to the franchise value of the business, but is mainly about protecting against regulatory intervention. ‘Enable’ takes a perspective of value (however defined) and of achieving company objectives.
AI-based solutions, leveraging vast amounts of data, are already a reality in the world of financial services, and they are only likely to become more prevalent over the next ten years. What are the implications of AI developments for a Board Risk Committee?
The simple ‘protect and enable’ approach suggests a number of points for discussion:
- How would your company evidence that AI systems comply with relevant legislation, e.g. anti-discrimination laws?
- How would the wider data needs of an AI system sit with data protection legislation? What about the so-called ‘right to explanation’? What would be the impact of these wider data needs on cyber-security?
- What is the business purpose of introducing an AI system? Does the business seek to enhance operational efficiencies? Does it aim to enhance business performance? How would you ensure that this purpose is achieved?
- What would be the operational impact of deploying specific AI tools in the business? Would deployment also alter the overall risk profile of the business, or the profile of certain risks?
- What are the implications for risk governance, the risk management function and other oversight functions?
These are not simple questions that can be covered in a single meeting of the Risk Committee. In some cases, the answers may not be clear-cut. For example, an AI-based underwriting system can be deployed to enhance business performance or to seek operational efficiencies. In other cases, addressing the issues would require the development of appropriate monitoring systems rather than a point-in-time consideration.
However, it is also worth bearing in mind that, unless you operate in a start-up business, there is a fair amount of technology available that is not necessarily based on AI and can be applied to improve existing business processes and reflect a (more) customer-centric perspective. So perhaps the main question about AI systems is really whether there is an adequate understanding of technology in the business to ensure that AI is the appropriate choice.
So where should a Risk Committee start? It may be useful to treat these as discussions outside the usual calendar of Risk Committee meetings and to develop a programme that considers them over time.
This article was originally published on Crescendo Advisors’ blog. You can reach it through www.crescendo-erm.com by selecting “thought leadership”.
Isaac Alfon, senior risk practitioner and NEDonBoard member