From code-breaking to code-writing: new UK initiatives to encourage the development of AI-enabled technologies in the life sciences sector

Viewpoints
November 10, 2023
3 minutes

Last week, pioneers of the AI industry convened at the home of Allied code-breaking during the Second World War, Bletchley Park, for the UK’s AI Safety Summit. Attendees included tech leaders, AI experts, and representatives from leading AI nations such as the US, France, Germany, Italy, Japan and China.

The summit, a passion project of Prime Minister Rishi Sunak, aimed to facilitate discussion on the future of AI and to work towards a shared understanding of its risks.

With attention focussed on AI, the UK authorities have taken the opportunity to announce various AI initiatives, certain of which seek to address challenges which impede the uptake of these technologies in the life sciences sector. Notably, and in contrast to the European Union, the UK will – for now, at least – not introduce legislation governing AI. Rather, the Government has adopted a non-statutory, principles-based regulatory framework that will be supported by existing regulators, such as the Information Commissioner’s Office and the Medicines and Healthcare products Regulatory Agency. 

AI is currently being used in the life sciences sector for a wide range of purposes, including the interpretation of scan results and the discovery of new antibiotics. As interest in its potential applications accelerates, the challenges associated with its development and uptake are becoming more pronounced.

The fundamental issues centre on how to regulate a technology which continuously evolves without human oversight and which raises novel and highly complex privacy, data protection and intellectual property issues, as well as whether payers will be prepared to fund the use of these technologies in their healthcare systems. The uncertainty surrounding these factors has undoubtedly affected how these technologies are viewed by investors.

In an effort to alleviate these challenges, and against the backdrop of the AI Safety Summit, the UK authorities have recently announced three new AI-specific initiatives. In particular: 

Government funding: The establishment of a new £100 million government investment, the AI Life Sciences Accelerator Mission, which will fund research into how AI can contribute to the attainment of the eight critical healthcare missions contained in the UK government’s Life Sciences Vision.

Prime Minister Rishi Sunak explained in a press release that the funding will be targeted towards areas where rapid deployment of AI has the greatest potential to create transformational breakthroughs in treatments for previously incurable diseases. This builds on the UK Government’s “UK Science and Technology Framework”, announced earlier this year.

Regulatory sandbox: In 2024, the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) intends to establish a ‘regulatory sandbox’, the AI-Airlock, which will provide a “regulator-monitored virtual area for developers to generate robust evidence for their advanced technologies”. The MHRA’s press release explains that the regulatory sandbox model will assist developers in anticipating and addressing the regulatory challenges that they would otherwise confront when seeking to bring their products to market.

It’s hoped that, by facilitating early engagement with the various stakeholders involved (including the MHRA, Approved Bodies and the NHS), the AI-Airlock will accelerate market access, which will ultimately benefit patients.

Medical devices: To the extent that standalone software performs a medical purpose, it will be regulated as a medical device, meaning that it must meet comprehensive requirements. Traditional regulatory frameworks for medical devices assess safety and performance at discrete points in the product lifecycle.

However, much like the Enigma code which was decrypted at Bletchley Park over 80 years ago, AI and machine learning-enabled devices are constantly changing. This poses the question of whether, and if so when, the need for a fresh conformity assessment will be triggered.

In this regard, last month the U.S. Food and Drug Administration, MHRA and Health Canada proposed five guiding principles for the development of predetermined change control plans. These seek to alleviate the regulatory burden for developers by allowing them to demonstrate which changes and updates would be made, and how they would maintain the safety and effectiveness of their devices.

For those looking to develop or invest in AI-enabled technologies within the life sciences sector, these initiatives should provide encouragement. They demonstrate the UK’s ambition to establish a pro-innovation regulatory environment, which will hopefully translate to improved uptake within the healthcare system. However, as always, the success of these initiatives will turn on how they are implemented in practice.