Bryan Lister

Update on the use of AI in medical devices in Europe

The first six months of 2023 have seen Artificial Intelligence, or ‘AI’, enter everyday vocabulary, with the media trying to identify which jobs are at risk of being replaced by AI, and GCSE students turning to ChatGPT to write a Grade 7 piece for their Shakespeare-themed homework. AI has been here for decades and already exists in some medical devices, but exponential increases in processing power, data storage and networking are key drivers behind its recent growth. So how will this Big Brotheresque digital technology be regulated?




It is clearly going to be difficult. For those of us who work in regulatory affairs, especially with medical devices, it is all too common to see technical developments outpace the regulations. Even the latest EU Medical Devices Regulation (2017/745), although it recognises ‘medical device software’ as a type of device (what our USA cousins term ‘Software as a Medical Device’), does not have the granularity to define different types of software. For example, there are software products whose intended purpose is to actually treat individuals, which have become known as ‘Digital Therapeutics’. Add AI into the equation, and understanding how to achieve regulatory compliance can soon become a very complex proposition.


There is a clear need for AI regulation, and the EU has been attempting to address this through the AI Act, which received the approval of the EU Parliament on 14th June 2023 https://tinyurl.com/49a9cbdu . It is a complex piece of legislation, and it makes a start; however, it potentially falls short of sufficiently governing all of the threats posed by generative AI. What it does do is seek to tighten up rules surrounding data quality, human oversight, transparency and accountability. Whilst the AI Act applies across all sectors, e.g. finance, energy and education, it is the healthcare sector that is the most relevant to us.

A key foundation of the EU AI Act is a classification system that determines the level of risk that AI could present to a person’s health, safety and fundamental rights: unacceptable, high, limited and minimal. Medical devices that use AI are most likely to be classified as ‘high risk’, but will also continue to be regulated under existing medical device regulation.


In the UK, there is a national AI strategy that aims to take a pragmatic, light-touch approach, and the MHRA is preparing guidance for the regulation of AI as a medical device. This does not therefore embrace new regulation, but is most likely to present clarifications that build upon existing legislation.


If you are a company developing software for healthcare purposes, such as digital healthcare assessments, mental health assessments, digital therapeutics, diagnostics or decision support tools, whether or not they use AI/ML, please get in touch with us if you require regulatory assistance.
