Artificial intelligence in radiology: what is the potential?

Artificial intelligence (AI) is widely recognised as having the potential to transform health care. Of its possible uses, radiology presents one of the biggest opportunities for the application of AI.
In this free online event, our panel explored how AI can be incorporated into a radiologist’s day to improve patient care and address current challenges around workload. We discussed how AI can alleviate the pressure presented by growing volumes of patient data and how its ability to increase departmental capacity could be used to prevent employee burnout or missed findings.

Event partner

This event was held in partnership with IBM Watson Health. 

If you're interested in partnering with us on an online event please email Chloe Smithers or call her on 020 7307 2482.




Matthew Honeyman

Researcher, Policy, The King’s Fund


Dr Neelam Dugar

Consultant Radiologist, Doncaster and Bassetlaw Teaching Hospitals NHS Foundation Trust, and Informatics Adviser and Chair of the Radiology Informatics Committee, Royal College of Radiologists


Dr Mark Davies

Chief Medical Officer (EMEA), IBM Corporation and IBM Watson Health


Professor Erika Denton

Medical Director and Honorary Professor of Radiology, Norfolk and Norwich University Hospital, and National Advisor for Imaging, NHS Improvement

Questions from this event

Our online audience submitted questions to the panel during this event. A couple of our speakers answered some of the questions that the panel weren't able to get to on the day.

Professor Erika Denton: There’s some information about this in the webinar. CT scanners, for example, already have AI embedded in them for image optimisation. As an example, there may be an opportunity for AI to flag when a radiographer may need to refer the image to a radiologist for interpretation – essentially an early alert system. With brain CTs, for example, AI can detect fresh haemorrhage reliably. These images can then be passed to a radiologist for urgent review. 

Matthew Honeyman: It depends what kinds of applications of AI we're talking about. It varies – and will vary – across different services and across different technologies. Back-end operational implementations of AI tools might come through new tools, or as plug-ins offered to healthcare providers through app-store-style arrangements or third-party integrations. Applications of AI in diagnostics and screening might be commissioned as part of complete new service models and pathways. For the centre's latest thinking, I would recommend reading the NHSX policy document Artificial intelligence: how to get it right. This report talks about the centre's new flagship AI Lab, which will undertake national-level research and procurement, fund R&D and pilot projects, and give providers legal and commercial support.

Professor Erika Denton: This is the subject of considerable debate. The most likely scenario is that there will be umbrella providers with a number of AI-based solutions in their portfolios. One of the difficulties we have in the NHS with the uptake of such solutions is the sheer number of small providers out there with single-point solutions all using a different basis for their algorithms. This makes it very difficult in terms of procurement in the NHS, so the expectation is that there will need to be a single platform or umbrella provider where upgrades and solution enhancements are provided as part of the contract. An additional challenge with using multiple suppliers is the question of interoperability and how different solutions can work together in the context of a radiologist's workflow.

Matthew Honeyman: Giving providers the support to make informed, intelligent decisions about partnerships for this kind of work is really important – for example, ensuring people know the right questions to ask about a proposed data-sharing partnership or deal. Of course, there is a set of risks with data-sharing, but these can be managed and mitigated by choosing partners that meet the right standards of data protection and governance in law and regulation. And when considering these kinds of projects, the products being built need to be designed to meet the needs of patients and the system – cost-effective systems for improving health. There is a host of options for how NHS organisations and their partners could share benefits commercially, and they are neatly summarised in Imperial's Global Health Institute white paper here.

Matthew Honeyman: Regulators recognise the challenges associated with 'adaptive' systems that can change the way they respond to given cases over time. Particularly important for the class of products mentioned in this question are post-market surveillance and plans to have clinicians review cases. For further deep reading on the issues for health care regulation thrown up by adaptive systems and the interpretability problem in ML, I would recommend PHG Foundation's recent work and this paper in particular.

Matthew Honeyman: I think there are two separate issues here. First, systems that use ML-based algorithms are not always adaptive – that is, systems whose response to a given input changes over time. Many applications of these techniques involve training an algorithm until it achieves a certain performance on a test set. This is the traditional supervised machine learning paradigm. These algorithms are then deployed as a 'fixed' part of the software, responding to new cases, but the response to a given input does not vary. Adaptive systems involve the retraining and 'tuning' of the algorithm over time, and they raise additional regulatory challenges.
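The fixed/adaptive distinction drawn above can be sketched in a few lines of code. This is a purely hypothetical illustration (a toy threshold 'model', not a clinical system or any real product): the fixed model's response to a given input never changes after deployment, while the adaptive model retrains on each new case, so its response to the same input can drift.

```python
class FixedModel:
    """Trained once; the decision threshold is frozen at deployment."""
    def __init__(self, training_scores):
        # 'Training' here is simply taking the mean of the training data.
        self.threshold = sum(training_scores) / len(training_scores)

    def predict(self, score):
        # The same input always yields the same output.
        return score > self.threshold


class AdaptiveModel(FixedModel):
    """Retrains ('tunes') itself on every new case it sees."""
    def __init__(self, training_scores):
        super().__init__(training_scores)
        self.seen = list(training_scores)

    def predict(self, score):
        result = score > self.threshold
        # Incorporate the new case and update the threshold,
        # so future responses to the same input may differ.
        self.seen.append(score)
        self.threshold = sum(self.seen) / len(self.seen)
        return result


fixed = FixedModel([1.0, 2.0, 3.0])      # threshold frozen at 2.0
adaptive = AdaptiveModel([1.0, 2.0, 3.0])

print(fixed.predict(2.5), fixed.predict(2.5))  # same answer every time
print(adaptive.predict(2.5))                    # answer at this moment...
print(adaptive.threshold)                       # ...but the model has now shifted
```

It is this drift in the adaptive model's behaviour after deployment – invisible to a one-off pre-market evaluation – that creates the extra regulatory challenge.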

However, bias and error will be present to some degree in all systems, and it is about adopting the best approaches to minimise these. Robust evaluation of digital health and AI systems is really important. Some of the challenges associated with this are summarised by experts at the Health Foundation in the NHSX AI policy paper (section authored by Deeney et al).

I also recommend reading this BMJ paper on how to ensure safety and address bias from a systematic clinical practice perspective. 

Professor Erika Denton: The Royal College of Radiologists has produced guidance in this area. You can read it here.