Bank of England and Financial Conduct Authority Publish Report on Machine Learning in the UK Financial Services Sector

Alert
October 28, 2019

On 16 October 2019, the Bank of England (“BoE”) and the Financial Conduct Authority (“FCA”) published a joint report on the use of machine learning (“ML”) in the UK financial services industry (the “Report”). The Report’s findings will be of interest to those operating in the financial services sector, which increasingly uses ML to assist with a wide variety of commercial and compliance-related functions, including anti-money laundering (“AML”) compliance, sanctions screening, fraud detection and, for investment funds, identifying investment opportunities.

The Report summarises the results of a survey conducted by the BoE and FCA involving 106 respondents from a group of almost 300 banks, credit brokers, e-money institutions, financial market infrastructure firms, investment managers, insurers, non-bank lenders and principal trading firms. It reflects the BoE and FCA’s intention to better understand how an increasingly data-driven economy is driving dramatic changes to the structure and nature of the financial system that supports it. In particular, the Report emphasises the need to strike a balance between supporting the development of innovative and transformative technology and addressing the risks such developments pose to consumers and to the UK financial system as a whole.

Key Findings

Although the Report cautions that its conclusions cannot be considered representative of the entire UK financial services industry, it cites the following among its “key findings”:

  • Firms in the financial services sector are using ML with increasing frequency. Two-thirds of respondents reported using ML in some form, with most firms expecting usage to increase significantly in the coming years.
  • The insurance and banking sectors use ML most extensively. Overall, ML is deployed most often in relation to AML and fraud detection, as well as in customer-facing applications such as customer services and marketing.
  • Firms regard improvements in AML, fraud detection and overall efficiency as the biggest benefits of using ML. They identified risks, including a lack of explainability, inadequate controls or governance, data quality issues and poor model performance. To mitigate those risks, firms implement alert systems and so-called “human-in-the-loop” mechanisms to flag when the ML model is not working as intended (an illustrative sketch of such a mechanism follows this list).
  • Firms do not consider regulation to be an unjustified barrier to ML deployment, but some believe there should be additional guidance to clarify existing regulations. Respondents noted that, because ML is a relatively new technology, it may not always be obvious how the existing regulatory framework applies to it. Some firms cited, for example, the difficulty of explaining decision-making, as required by current regulation, when using black box ML models, which by their very nature are difficult to explain. As such, several respondents suggested it would be helpful to have clear guidance on regulators’ expectations around ML use to inform the development of best practices in this area.
  • Firms do not believe that ML necessarily creates new risks, but they consider that it could amplify existing ones. Respondents recognised that governance and controls processes will need to keep pace with technological development to manage those risks appropriately.
  • Although most firms reported using their existing risk management frameworks to address risks posed by ML, they noted that these frameworks might have to evolve as ML becomes increasingly mature and sophisticated.
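The Report does not prescribe any particular implementation of the “human-in-the-loop” controls mentioned above. Purely by way of illustration, such a mechanism typically combines escalation of low-confidence model outputs to a human reviewer with an alert when escalations spike. In the Python sketch below, every name and threshold (HumanInTheLoopModel, CONFIDENCE_FLOOR, ALERT_WINDOW and so on) is our own hypothetical assumption rather than anything drawn from the survey:

    # Illustrative only: a minimal "human-in-the-loop" wrapper around an ML model.
    # All names and thresholds are hypothetical assumptions.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Tuple

    CONFIDENCE_FLOOR = 0.80     # hypothetical: below this, a human must review
    ALERT_WINDOW = 100          # hypothetical: number of recent decisions tracked
    MAX_ESCALATION_RATE = 0.20  # hypothetical: alert if >20% of decisions escalate

    @dataclass
    class HumanInTheLoopModel:
        # predict() returns a (label, confidence) pair for one transaction record
        predict: Callable[[Dict], Tuple[str, float]]
        review_queue: List[Dict] = field(default_factory=list)
        recent_escalations: List[bool] = field(default_factory=list)

        def decide(self, transaction: Dict) -> str:
            label, confidence = self.predict(transaction)
            escalated = confidence < CONFIDENCE_FLOOR
            self.recent_escalations.append(escalated)
            del self.recent_escalations[:-ALERT_WINDOW]  # keep the last N decisions
            if escalated:
                # Low-confidence output: hold the case for a human analyst
                self.review_queue.append(transaction)
                label = "PENDING_HUMAN_REVIEW"
            window = self.recent_escalations
            if len(window) == ALERT_WINDOW and sum(window) / ALERT_WINDOW > MAX_ESCALATION_RATE:
                # A sustained spike in escalations suggests the model is not
                # working as intended, so alert the model's owners.
                print("ALERT: escalation rate above threshold; model review needed")
            return label

    # Example usage with a stand-in model that is always confident:
    model = HumanInTheLoopModel(predict=lambda tx: ("CLEAR", 0.95))
    print(model.decide({"amount": 120.0}))  # prints "CLEAR"

The design point of such a pattern is that the automated system never silently overrides its own uncertainty: doubtful cases are parked for human judgment, and a rising escalation rate itself becomes a signal of model degradation.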

Next Steps

As stated in the Report, the BoE and FCA regard this survey as a first step towards gaining a better understanding of the impact of ML on the UK financial services sector. The survey’s findings will inform the BoE and FCA’s development of policy relating to ML. To that end, they have announced that they will establish a public-private working group on artificial intelligence (“AI”) to further discuss ML issues and will consider repeating the survey in 2020.

The Report also forms part of the FCA’s broader acknowledgement that, given the pace and scale of technological advancement in the financial services sector, it may become necessary for regulation to move away from a narrow and prescriptive rules-based approach to a more flexible framework that focuses on desired outcomes for consumers. This approach was articulated most recently by Christopher Woolard, the FCA’s Executive Director of Strategy and Competition, who, in a 21 October 2019 speech, acknowledged that “our rules feel increasingly analogue in a digital world” and spoke of the need for “regulation that is agile and doesn’t become outdated as domestic and global markets evolve”.

Moreover, the FCA has made clear that it views innovative technology such as ML as a useful tool – for use by both firms and regulators – in the detection and disruption of financial crime. In a 23 October 2019 speech, Megan Butler, the FCA’s Executive Director of Supervision – Investment, Wholesale and Specialists, said that, although the FCA is “technology neutral”, it will welcome innovative technology that ultimately reduces harm. As a particular example, Ms. Butler highlighted the potential for ML to monitor transactions in real time and flag potentially fraudulent patterns with greater accuracy than is possible using existing technologies. She also pointed to ML applications developed under the auspices of the FCA’s regulatory sandbox – which supports firms in piloting innovative technologies and approaches in the real market – as further examples of the potential for ML to assist in combating financial crime.
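To make the kind of real-time monitoring Ms. Butler describes concrete: one common approach is to train an anomaly detector on historical transactions and score each new transaction as it arrives. The sketch below uses scikit-learn’s IsolationForest for this purpose; the features, data and thresholds are invented for this example and do not come from the FCA or the Report.

    # Illustrative sketch of real-time transaction scoring with an anomaly
    # detector. Features, data and thresholds are hypothetical.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Hypothetical historical feature matrix: [amount, hour_of_day, merchant_risk]
    history = np.column_stack([
        rng.lognormal(3.0, 1.0, 5000),  # transaction amount
        rng.integers(0, 24, 5000),      # hour of day
        rng.random(5000),               # merchant risk score in [0, 1]
    ])

    # Train on historical data; contamination is the assumed fraud base rate.
    detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

    def score_transaction(amount: float, hour: int, merchant_risk: float) -> bool:
        """Return True if the transaction should be flagged for investigation."""
        features = np.array([[amount, hour, merchant_risk]])
        return detector.predict(features)[0] == -1  # -1 marks an outlier

    # Example: a large small-hours payment to a high-risk merchant
    if score_transaction(amount=25_000.0, hour=3, merchant_risk=0.95):
        print("Transaction flagged for human investigation")

In practice, any flag raised by such a detector would feed into exactly the kind of human review process described in the Report’s key findings, rather than triggering automated action on its own.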

What is clear, therefore, is that regulators and regulated firms alike recognise the opportunities and challenges posed by innovative technologies such as ML, as well as the need for an agile, flexible regulatory framework that keeps pace with technological advances. With the BoE and FCA’s commitment to establishing a joint public-private working group on AI matters, we anticipate further developments in this area in the coming months.