Long read

Infrastructure for innovation: getting the NHS and social care ready for AI


Basic infrastructure challenges are holding back the potential for AI to improve health and care services. These challenges erode staff enthusiasm for AI and contribute to a demoralised workforce. Those staff who are positive about AI recognise its possibilities but in their day-to-day work they often find that poor infrastructure (such as computers crashing and unstable internet connections) adds to their workload, and that the technology is outdated and unable to support AI.  

AI tools promise huge improvements to people’s care, as well as more efficient use of finite resources. But if that huge potential is to be realised, basic technological infrastructure needs to be in place. Unfortunately, this is often not the case.   

Here we try to answer the question: what infrastructure needs to be in place to ensure that AI can be used in the health and care sectors? Informed by conversations across health and social care with staff, suppliers, researchers and patients, we consider what’s needed for not only the technical infrastructure, but also the wider environment and system capabilities.  

Technical infrastructure 

Access to suitable devices  

For the health system to be able to use AI, devices – everything from computers, tablets and smartphones to more sophisticated medical equipment – must be suitable and available. We heard from one interviewee: 

Our IT infrastructure is really crumbling. The computers we have are old. They run on outdated operating systems. They struggle with very basic tasks, like keeping Outlook open at the same time as a web browser.  

AHP digital lead  

However, the quote above isn’t true everywhere. Other providers have better digital capability and are developing and using AI.  

We're an oasis within the NHS. I think we're ranked third in our specialty in Europe and so we're relatively well resourced. There's nothing they have in the Mayo Clinic that we don't have.

Neurosurgeon 

Different parts of the NHS have varying levels of old or unsuitable technology systems; it’s estimated that between 10% and 50% of systems need to be modernised. Unfortunately, this doesn’t just relate to old devices but also to newer devices that were purchased without considering their future functionality. Although hospitals have programmes to replace and improve devices, this still takes time.  

They do try and replace these machines. But we're talking about five hospitals with thousands of staff members, that's not a quick process.

AHP digital lead 

Old devices are unsuitable because they are unable to support AI. But if the wider IT infrastructure isn’t considered, even updated devices will quickly become obsolete and unusable; given the pace of AI development, devices bought without forward planning will become outdated sooner, wasting money and resources. Infrastructure capable of supporting AI means having both suitable devices and a sufficient number of them. Unfortunately, staff often have to share IT equipment, and if they cannot access a device, they cannot make use of AI capabilities.  

We've got some mobile devices and some static devices. But if they're all in use, it's much easier to grab a [paper] fluid chart out of the drawer and use that.  

AHP digital lead 

Recommendations  

  • Providers need to ensure that devices are updated regularly and that every staff member has at least one device.  

  • As part of this regular refresh, providers need to carry out an audit of their IT equipment to determine the age of the equipment and estimate when it will need replacing. This estimation of device use duration should incorporate modern and future functionality requirements needed to support AI.  

  • National bodies, such as the Department of Health and Social Care, need to track these audit figures to enable accurate forecasting for budgets, and provide protected funding to purchase modern IT equipment.  
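The audit-and-forecast cycle described above can be sketched in code. The following is a minimal illustration (not an NHS tool): given an inventory of devices with purchase years and expected service lives, it groups devices by the year they fall due for replacement, which is the kind of figure national bodies would need for budget forecasting. The field names and lifespans are illustrative assumptions.

```python
# Hypothetical audit records: device name, purchase year and expected
# service life in years. These values are illustrative, not NHS standards.
DEVICES = [
    {"name": "ward-pc-01", "purchased": 2016, "service_life": 5},
    {"name": "ward-pc-02", "purchased": 2021, "service_life": 5},
    {"name": "tablet-03", "purchased": 2023, "service_life": 4},
]

def replacement_forecast(devices, this_year):
    """Group devices by the year they fall due for replacement.

    Devices already past their service life are counted as due now,
    so the forecast never schedules replacements in the past.
    """
    forecast = {}
    for d in devices:
        due = max(d["purchased"] + d["service_life"], this_year)
        forecast.setdefault(due, []).append(d["name"])
    return dict(sorted(forecast.items()))
```

Run against the sample inventory for 2025, this would flag `ward-pc-01` as overdue and schedule the others for 2026 and 2027, giving a provider a simple year-by-year replacement budget line.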

Ensuring adequate connectivity 

Connectivity is critical for AI. A fast, high-capacity and stable internet connection is essential in all workplaces, but particularly for AI tools in health and care. Some AI can run locally on devices (desktops), but some only works in the cloud. For example, clinicians can send medical images to an AI algorithm in the cloud to create a 3D simulation, which they can then use to plan surgery better, but getting those images to the cloud requires good connectivity. It is not yet clear how AI usage will develop: whether it will sit on devices, in protected cloud data centres, or a combination of both. Providers therefore need to plan for both types of AI capability, or specific decisions need to be made at a national level as to which route AI use will take. 

Unfortunately, in 2023 provider connectivity was assessed as inadequate, and it remains insufficient even for current daily demands.  

The wi-fi in our hospital trust is dreadful, even just basics really need a big upgrade. 

AHP digital lead 

Many providers have invested in wi-fi to meet present-day requirements, but this is unlikely to be sufficient to meet the increasing demand of AI applications.  

Recommendation 

  • The Department of Health and Social Care needs to work with those providers testing AI applications to estimate the connectivity needs for future AI functionality and ensure there is appropriate funding. 

Developing a cloud strategy 

Despite the 2013 government Cloud First Strategy, which mandated that government departments and public organisations should use cloud services as their first option, the NHS continues to use a combination of both on-premises data centres and cloud-based servers for data storage and processing. Cloud is believed to be cheaper but there are different risks and benefits, such as how much budgetary control providers have and how easy it is to increase or decrease digital storage. Digital leaders have mixed opinions on the use of cloud systems based on factors including information governance risk and budgetary control. 

On-premises data centres could hold back AI development, for example if they lack physical space for more processors and cooling or have insufficient electrical supply. Cloud data centres offer more flexibility to scale up or down without having to manage the physical infrastructure, but a wholesale shift to cloud-first would need sustained funding and digital leaders who can advocate for cloud as the best option.  

However, many NHS providers leading AI development consider cloud computing to be expensive, particularly for computer-vision forms of AI that use large imaging datasets. As a result, these providers are not investing in cloud but installing more on-premises server capability. Providers are also building partnerships with academia to access more powerful computing capability, but this approach comes with its own challenge of increased complexity to navigate data-sharing agreements.  

The health and care system needs a comprehensive cloud and on-premises computing strategy that leverages the best of both. This should support and build on the AI opportunities action plan.  

Recommendation 

  • The Department of Health and Social Care needs to work with NHS providers to develop an updated cloud strategy that takes account of the changing data and AI landscape. This should include considerations around national purchasing opportunities (using multiple suppliers) and building workforce and skills in the health and care system.  

Reducing permissions constraints to allow for innovation 

The use of virtual machines (computers with limited capabilities that run in the cloud) and stringent IT permissions are substantially limiting the capabilities of devices as well as clinicians’ ability to innovate with AI tools.  

I can’t install anything on my NHS computer without authorisation. In order for me to jump the hurdle of authorisation, I would need to get my DPO [Data Protection Officer] involved, which means filling in a long form, setting up communication channels between the supplier and them, and then eventually a wing and a prayer, and it might turn up. 

GP doctor 

Similarly, GPs are limited in what they can do on their computers, which prevents experimentation with and testing of AI tools.  

They [central IT desk] have to review what the software is, and often they won't know because they've never seen it before, log on remotely to a computer as administrator, then allow install and that's every time. A good example is I can't update my printer driver.  

GP doctor 

It’s not just the NHS but social care, too.  

Care workers have restrictions, like they can't use the camera on their phone. The way they connect into systems is just archaic. Notifications are turned off so they can't receive push notifications on their phone. 

Social care AI supplier 

Striking the right balance between centralised and individual control of technology is difficult. Limiting the functionality of devices can help improve security and avoid overloading devices’ capabilities. However, this significantly restricts the potential utility of a device, preventing the use of AI and suppressing innovation.  

Recommendations 

  • Provider leaders need to work with staff and digital champions to understand the constraints staff face when testing AI tools. These constraints should be reduced in line with digital, innovation and research strategies.  

  • Digital strategies should be updated to incorporate approaches to adjusting device constraints to enable greater innovation and AI capabilities.  

  • Technical, clinical and leadership staff should explore alternative options, such as the creation of sandboxes (protected digital spaces) that allow people to properly utilise and experiment with technology in a safe environment with minimal risk.  

Data, governance and AI monitoring 

Consistent and integrated data information systems 

Data information systems, for example electronic health records, digital care records, case management systems, radiology information systems, and picture archiving and communication systems, are essential infrastructure for AI development and implementation. These systems store essential medical information, including the data needed to train AI models, data for analysis by AI and the outputs generated by AI models. However, to do this the data needs to be easily accessible.  

A common frustration among many health and social care providers, as well as AI suppliers, is the struggle to access this data. In the experience of staff leading AI development, there is a lack of conformity when it comes to standards, or the data information systems are simply not designed for data access. There is a perception that data information system suppliers are uninterested in helping innovators to access data and plug in new technologies. Not all suppliers behave the same way, but many fall into three main groups: those that don’t allow data access; those that allow access but offer no support; and those that charge to enable access. The reasons are thought to vary, including lack of resources, avoiding additional costs, protecting business interests, preventing competition and generating income. The result is a hidden cost: paying for support, or investing in technical staff, just to access data before AI development and implementation are even considered.  

The other element of the infrastructure is the case management systems, particularly in social care, so the suppliers are horrendous to deal with. They charge providers a fortune, they're not user friendly and they don't enable integration with other suppliers. 

Social care AI supplier 

This is a problem across both health and social care.  

If you are in a closed proprietary system the data belongs to you, but the database belongs to the company. So how do you get data out of a database that you don't own? They say you can get it out but we're not going to help you, or we will help you but we will charge you, or they just say no.  

Neurologist 

Circumventing these challenges involves additional cost to NHS and social care providers. One option is to use a supplier who can offer data integration with electronic patient records. Another is for providers and integrated care systems to invest in the technical staff and expertise to engineer the requisite access to the data. A third option is to establish a renewed agreement with suppliers. Without the appropriate resources or leadership, providers are unable to overcome barriers to data access, hindering the development and use of AI tools.  

As the system continues to digitalise and legacy systems are modernised, data accessibility needs to be considered or there is a risk that data will not be able to be used for AI purposes. Even recent initiatives such as imaging networks face challenges.  

Look at the imaging networks as an example, the technical architecture's different so you can't use the same product in all the networks so then you need external companies to help with data transfer. There are significant costs associated with trusts needing to access their own data. It's just cost upon cost before you've actually had a look at the data and what's happening with the tool.  

Health care technology assessor 

Data information systems are necessary to digitalise the NHS. But there’s an incorrect assumption that digitalisation automatically makes information more accessible and usable. In reality, either significant costs are incurred by the system, or health and care providers remain unable to access the data for AI purposes.  

It’s notoriously difficult to integrate with EPRs [electronic patient records] – for example if you want to do an automated solution with them it's almost impossible. It's so hard when the NHS buys into these systems that stops us from being able to deliver health tech.  

Local authority AI supplier 

Effective AI development and use need federated access to data, the existence of a shared care record, access to national data infrastructure (for example, GP Connect), approval to share data across providers, and approval to use data for secondary purposes of service planning and research. Good data infrastructure creates protected and ethically approved environments to bring data together to develop and test AI models.  

The data landscape, the legal basis for getting the variables, the infrastructure you're going to use, the people that built the software, the platforms on which it sits – all of that stuff preceded the AI modelling.  

Anaesthetist 

Recommendations 

  • National and system leaders need to remove technical barriers to data sharing by ensuring development and enforcement of data standards in line with the Data (Use and Access) Bill.  

  • There needs to be national direction on the mechanism and expectation for data access through software systems. These approaches for data flows should link to a cloud strategy and use existing data-sharing mechanisms, such as the Federated Data Platform and the Secure Data Environment, where suitable.  

Improving data quality 

Accessible data is essential for AI to be developed, run and monitored, but it’s not just about accessing the data – it also needs to be of sufficient quality and completeness. Real-world data in the NHS is often incomplete, mislabelled or unsuitable for a particular AI tool. This limits the NHS’s ability to develop, evaluate and use AI tools.  

There are so many issues with the way that data is stored, annotated and labelled. Sometimes you can open a scan image that says chest, but it will actually be pelvis because staff were imaging the abdomen and pelvis and they just added an extra view at the end because they noticed something. Those decisions are made every day for every patient.  

AI clinical scientist 

Addressing the quality of data means having systems, tools and processes that allow datasets to be cleaned, labelled, shared and stored. This starts with overcoming the challenges to accessing data within data information systems (see above) then developing the workforce, skills and tools that can clean, label and assess the quality of the data. A feedback and quality improvement loop is also needed to identify opportunities to improve data quality by supporting staff training and changing processes, where necessary.  
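A first step in that feedback loop is an automated quality report over record metadata. The sketch below is purely illustrative: the field names (`patient_id`, `body_part`, `modality`) and the allowed-value list are assumptions, not a real NHS schema, but the pattern of counting missing fields and out-of-vocabulary labels is a common starting point for data-quality work.

```python
# Required metadata fields and the controlled vocabulary for one field.
# Both are illustrative assumptions, not a real NHS or DICOM schema.
REQUIRED = ("patient_id", "body_part", "modality")
ALLOWED_BODY_PARTS = {"chest", "abdomen", "pelvis", "head"}

def quality_report(records):
    """Count records with missing required fields or invalid labels."""
    missing, invalid = [], []
    for i, rec in enumerate(records):
        if any(not rec.get(f) for f in REQUIRED):
            missing.append(i)          # a required field is absent or empty
        elif rec["body_part"] not in ALLOWED_BODY_PARTS:
            invalid.append(i)          # label outside the agreed vocabulary
    total = len(records)
    clean = total - len(missing) - len(invalid)
    return {
        "total": total,
        "missing_fields": missing,
        "invalid_labels": invalid,
        "clean_fraction": clean / total if total else 1.0,
    }
```

A report like this cannot catch the deeper problem the clinical scientist describes, such as a scan labelled "chest" that is actually pelvis, but it surfaces the records that need human review and gives a measurable baseline for improvement.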

Recommendations 

  • National and local leaders need to prioritise creating the environment to enable data quality improvement. They should provide clear direction on accessing data, developing tools (eg, CogStack) to improve data quality and nurturing the workforce skills to be able to use these tools and processes. 

  • National and local leaders need to develop and support initiatives that incorporate approaches to improving data quality as part of existing service delivery without increasing staff workloads. 

Making information governance work for AI 

Information governance is often viewed as a blocker to change but it’s important for ensuring that data is used appropriately and privacy protections are in place. Good information governance balances data sharing to enable better care with protecting privacy and maintaining trust. For example, clinicians can view hospital and GP records for the purposes of care provision, but if they wish to use that data for testing, developing or implementing AI, there needs to be a legal basis and different information governance agreements in place.  

The NHS, local authorities and social care providers have often put information governance in place on a case-by-case basis for each locality, creating increasing complexity. NHS and social care organisations trying to navigate information governance requirements are often frustrated by this lack of clarity, which then creates complex negotiations at a local level, as well as significant national variation.  

No central government department is willing to stand up and say ‘It's OK for you to use this data for these purposes’. What they say is ‘If it's appropriate in these broad guidelines, we'll say it's OK’. So what that means is at a local level, you're into quite complex negotiations on data access and use with local variation. We can have the data for this area but we can't have the same data for another area. 

NHS AI supplier 

The variation in local interpretation of national guidance can make it more challenging to combine datasets, slowing the development and use of AI. Where we see progress is when staff are supported by local leaders. This helps staff to navigate information governance challenges and often involves conversations with communities and people to continue to build trust in data use.  

Recommendations 

  • System leaders and provider leaders need to ensure sufficient governance staff and capacity across all providers. These staff need to be provided with training and peer support to become more familiar and confident with the information governance changes required to develop, test, implement and monitor AI tools.  

  • Appropriate bodies such as the Information Commissioner’s Office should facilitate and support clarifying guidance and training provision.  

Post-deployment monitoring 

Where AI is potentially very different from previous software capabilities is that it needs to be constantly monitored for changes in performance – also known as drift. This is when the AI response changes over time and could result in incorrect care decisions. Drift could occur due to changes in the input data (eg, if a scanner is upgraded or replaced the quality of outputs may change), processes (eg, variation in tissue sample preparation), patients/environment (eg, demographic shifts) or technology (eg, the AI model changes). For example, changes in the medical imaging equipment may result in a change in the image quality, and hence the AI response.  

If they swap out the machines that take the [x-ray] image that might cause drift. 

Director of clinical innovation 

Drift means an AI response is no longer acceptably accurate, which could result in harm, so there need to be processes for detecting drift and rectifying it.  

As mentioned in the section above on data information systems, these systems have led to siloed data, which prevents a complete picture of the patient. This can make monitoring for drift across pathways a challenge. As AI is still quite unfamiliar, there need to be concerted national and local approaches to ensure appropriate monitoring for drift is in place. 

Providers don't have standard operating procedures, processes for discrepancy, feedback on the management of that process and understanding of how much to automate. There needs to be recognition that a smaller organisation may not have the capability to do these things. So I think there's going to be a requirement for companies to do post-deployment surveillance.  

Director of clinical innovation, Foundation Trust 

Monitoring AI when it’s in use should include automated quality checks and agreed thresholds beyond which the AI is judged to be no longer performing well. This monitoring will have cost implications for the NHS, which will need to create the capacity and workforce to ensure it is robust. 

AI monitoring has not been implemented in most of the NHS, largely because the NHS is very product driven and they think they can buy the AI model and go. That's a product, it's done, it's off the shelf. I don't need to do any configuring and maintenance.  

Neurologist 
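To make the automated checks concrete, here is a minimal sketch of one common drift-detection technique: comparing the distribution of a model's recent output scores against a baseline using the population stability index (PSI). This is one of several possible approaches, not a prescribed NHS method, and the bin count and alert thresholds are conventions rather than standards.

```python
import math

def psi(expected, observed, bins=10, lo=0.0, hi=1.0):
    """Population stability index between two score distributions.

    Common heuristic thresholds (conventions, not standards):
    PSI < 0.1 stable, 0.1-0.25 worth investigating, > 0.25 significant drift.
    """
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            idx = min(int((x - lo) / (hi - lo) * bins), bins - 1)
            counts[idx] += 1
        # Smooth bins slightly so empty bins don't make the log undefined.
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]

    e, o = hist(expected), hist(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))
```

In use, a provider would store the score distribution from the validation period as the baseline, recompute PSI over each recent window of live scores, and raise an alert for human review whenever the index crosses the agreed threshold, for example after a scanner swap of the kind the quote describes.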

Leaders should be aware of the need for post-deployment monitoring and work with technical specialist staff and suppliers to develop appropriate monitoring technologies, tools and processes. They should also ensure that procurement processes include requirements for post-deployment monitoring to comply with regulation and clinical safety requirements. 

Recommendation 

  • Regional, provider and department leadership need to develop the capabilities, tools and processes to monitor AI tools and respond if a tool becomes non-compliant. 

Organisational and system capabilities

Supporting IT staff and teams  

Historically, NHS hospitals had IT support staff on site linked to each department. This meant that staff understood the IT systems and the needs of the department, but budget cuts and centralised IT have changed the configuration of these IT teams. There also tends to be a high turnover of IT staff, which erodes institutional memory and knowledge, and results in limited awareness of the processes and systems. The knock-on effect of all this is that it takes longer to do the necessary changes to enable systems to use AI.  

Nobody wants to fund IT what it's actually worth. The way we experience that is that every time we log a call, the person who helped us before is no longer in post. So now you're constantly retraining people to do basic things which are commonly needed. 

Clinical AI scientist 

IT teams are essential to manage permissions and support the linking of IT systems so that new tools can be used. But too often their time is devoted to immediate needs in order to keep services operational, and continued underinvestment means that these teams have little capacity to carry out the exploratory work needed to support and use AI systems. This means that innovators can find IT teams risk averse and reluctant to change existing systems, as their focus is more on mitigating cyber and hacking risks.  

Upskilling, training and educating the IT team on the infrastructure needs is needed. I find that they really want to support and help, but they are always under-resourced and battered because all they get is complaints.  

Director of digital, Foundation Trust 

Recommendations

  • Providers and system leaders should invest in developing and growing the capacity of IT teams.  

  • Digital leaders need to work collaboratively across integrated care systems and regions to support the standardisation of governance and processes to reduce demands on IT teams, while investing in improvements to infrastructure reliability.  

Securing and protecting funding  

More often than not, technology budgets are raided for other uses, meaning that after buying the technology there is little left to fund implementation. This results in insufficient funding for staff recruitment, training and upskilling, changing processes and workflows, and patient engagement. Without funding for implementation, the technology is unlikely to deliver the anticipated benefits and staff are unlikely to be ready to use the AI. Some suppliers and providers are using research grants to update legacy infrastructure in order to get the provider into a more AI-ready state.  

Technology funding announcements often give insufficient notice and have short application periods. Providers then respond hurriedly, making it more likely that there will be poorly co-ordinated decisions about how to allocate funding. Leaders should ensure that digital strategies take a proactive and collaborative approach to AI with regard to strategic aims and patient needs. When funding opportunities arise, a strategically aligned approach can then be taken, continuing to build progress towards organisational and system aims. 

Recommendations

  • National leaders need to create multi-year funding settlements that cover technology and implementation to enable iterative and long-term transformation.  

  • As part of AI development funding, system and provider leaders need to protect technology budgets, ensure budgets incorporate implementation costs not just purchasing technology, and seek opportunities to update legacy infrastructure.  

  • Provider leaders need to work collaboratively to identify AI applications so that short-term funding calls can be accessed. 

Assessing and evaluating AI opportunities  

Providers can find themselves overwhelmed by the number of AI suppliers wanting to work with them. In response, providers are creating methods to sift and assess AI tools.  

Some providers have developed internal decision-making guidance, outlining the necessary steps and considerations.  

Maybe it's free for us to access the AI tool as a proof of concept and then the suppliers get the results, but actually it takes five clinicians’ time and two radiologists to actually make it work.

Hospital director of digital 

This internal guidance takes into consideration the provider’s resources and what is needed to evaluate the AI. The process of evaluating an AI product starts with ascertaining whether a tool is a solution for the provider’s most pressing problems and the improvements they seek to make. A multi-disciplinary group that includes patients and clinical leaders establishes the utility of a tool, ensuring that it meets a need, solves a problem or provides a clear improvement. This process ensures that tools align with organisational and system priorities.  

Next, there needs to be a process to check the infrastructure requirements, data use and flow, compliance with data protection and information governance, how much testing has already happened, how well the AI has performed to date, and how well the tool integrates into existing systems. The provider also needs to consider evaluation and monitoring approaches. This information should be shared with procurement staff to support informed procurement approaches.  

After this comes the assessment of a tool using historic data from the provider itself. This helps staff to understand whether the tool would work for their particular patient demographics and processes. Using retrospective data, the AI results can be compared with historic clinical staff results to understand any differences. Some tools can have very specific tolerances. For example, one provider evaluated an AI tool for medical-imaging analysis and found that it wouldn’t work for approximately a quarter of their scans because the tool was too specific about how the patient should be orientated within the scanner. The tool would therefore require significant investment in time and training to reduce the number of rejected images, and even this would not guarantee that the rejection rate would be low enough to reduce workload rather than increase it through repeated scans.  
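The retrospective step can be expressed as a small sketch. The shape below is illustrative: the `ai_tool` callable, the record fields and the convention that the tool returns `None` for a rejected image are all assumptions, but the two outputs, agreement with historic clinician findings and the rejection rate, are exactly the quantities the example above turns on.

```python
def retrospective_assessment(cases, ai_tool):
    """Compare an AI tool's outputs against recorded clinician findings.

    `cases` is a list of dicts with "input" (whatever the tool consumes)
    and "clinician_finding" (the historic ground truth). `ai_tool` is a
    callable that returns a finding, or None if it rejects the input
    (e.g. a scan with an orientation it cannot handle).
    """
    rejected = agree = accepted = 0
    for case in cases:
        result = ai_tool(case["input"])
        if result is None:
            rejected += 1              # tool could not process this case
            continue
        accepted += 1
        if result == case["clinician_finding"]:
            agree += 1
    return {
        "rejection_rate": rejected / len(cases),
        "agreement_rate": agree / accepted if accepted else None,
    }
```

A rejection rate around 25%, as in the provider's experience above, would signal that the tool may add workload through repeated scans even if its agreement rate on accepted cases looks strong.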

Last is the testing of the AI in actual care pathways with staff. This phase presents the opportunity to assess the technology and develop workflows and processes with the clinical teams, ensuring the updated pathway is evaluated against key measures. 

Recommendations

  • Provider and system leaders should develop stages of assessment of AI tools, including awareness, needs matching, retrospective testing, pilot and scale, to identify the benefits and implications of an AI tool.  

  • National and regional leaders should support providers to share and learn how to use a staged assessment process. This should include exploring national and regional tools that could accelerate the sifting and assessment processes.  

Summary 

For AI tools to benefit patients, people who draw on services, staff, and health and care providers, it’s essential that the correct infrastructure is put in place. This includes technical infrastructure but also facilitating teams, permissions, governance and leadership. The NHS and the social care sector face a complex journey: developing operational infrastructure alongside shifts in workforce and culture. Digital transformation needs the right technology to be available, but it also depends on changes to the wider environment, including infrastructure, skills, roles, workflows, processes and leadership. 

AI progress is racing ahead, but the lack of investment and focus on infrastructure is an anchor on the health and care system’s ability to use AI to benefit patients and staff. Everyday struggles with crashing computers and poor connectivity erode the belief that technology can help solve problems. AI recasts what infrastructure is needed for these tools to be effective, but infrastructure itself remains an essential necessity.
