Ahead of The King’s Fund’s festival of ideas to inspire and challenge the future of health care, Hugh Harvey, Clinical Artificial Intelligence and Predictive Analysis Researcher at King's College London, imagines a world in which artificial intelligence has transformed frontline care.
In the GP’s office, speech-recognition software quietly pulls key phrases from the doctor–patient consultation, and sends them to an aggregator. The tone of the patient’s voice, their heart rate and breathing rhythms, temperature, blinking and micro-movements are analysed by a camera in the corner of the room. The patient’s medical history, drug history, surgical history, psychiatric history, family history, travel history… their history since birth is pulled from the national data bank, and data is constantly processed, sorted and coded.
The GP performs her examination, dictating the findings as she goes; the system updates its knowledge base in real time. She pulls over a small camera to analyse a rash; a photo is taken and a 3D micro-surface image pings off into the pattern-recognition system. The aggregator sifts and then dispatches select information to the intelligence servers; the probabilistic engine gets to work calculating likelihoods across a near-infinite differential diagnosis, and pharmacological algorithms crunch through thousands of potential treatment variations.
Instantly, on her screen, the GP has available some suggested follow-up questions to ask, alongside most probable diagnoses and their treatments, all bespoke and specific to her patient. Suggested follow-up, specialist referrals, lab tests and recommended medical imaging tests appear, as and when relevant, and all this information is sent through to the patient’s device, alongside an artificial nurse avatar to guide them through the treatment process and answer any questions that may develop in the coming weeks. The GP takes time to explain the findings to her patient, and together they discuss options and select an appropriate course of action. The patient, satisfied, thanks the GP and leaves the consultation room. Without having to write a single medical note, the doctor calls in her next patient, and the process begins once more.
In many ways this scenario sounds like implausible futuristic wishful thinking.
Yet clinical artificial intelligence has been ‘happening’ since the 1970s, and the digital revolution in health care – when it finally arrives – will come slowly, surely and powerfully.
The reason for the apparent snail’s pace is simple: feeding any artificial intelligence with enough knowledge to deal with the infinity of nuances, intricacies and quirks of the human body (and all that may weaken it) requires oceans of data. More data than has ever been created. More data than we can, at present, deal with, let alone collect or organise. However, that hasn’t stopped us starting the process off. Indeed, the slow crawl has recently become a gallop.
First, we need a common language. Computers need a binary language – hard data points, facts, numbers and measurements – to perform the most basic functions, and yet our human bodies remain in an altogether analogue form, currently impossible for computers to read. We inhabit a biological, three-dimensional, ever-changing environment and our internal machinations reflect this in a constant biochemical dance. Converting these physiological analogue signals into binary is the first step in teaching an artificial brain about us. Stutteringly, we have already taken the first primitive steps towards doing this.
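That analogue-to-binary step is, at its simplest, sampling and quantisation. The sketch below is purely illustrative: the sine wave standing in for a physiological signal, and the sampling rate and bit depth chosen, are invented for the example.

```python
import math

def sample_signal(signal, duration_s, rate_hz):
    """Sample a continuous (analogue) signal at discrete time points."""
    n = int(duration_s * rate_hz)
    return [signal(i / rate_hz) for i in range(n)]

def quantise(samples, bits=8, lo=-1.0, hi=1.0):
    """Map each sample onto one of 2**bits integer levels (the 'binary' form)."""
    levels = 2 ** bits - 1
    return [round((s - lo) / (hi - lo) * levels) for s in samples]

# An illustrative stand-in for a physiological waveform: a 1 Hz 'pulse'.
pulse = lambda t: math.sin(2 * math.pi * t)

samples = sample_signal(pulse, duration_s=1.0, rate_hz=50)
digital = quantise(samples)  # 50 integers in the range 0-255
```

Real monitoring hardware does exactly this, only with far higher sampling rates and with calibration the toy example omits.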
Since the arrival of the desktop computer, hospitals and medical practices have been labelling, sorting and storing patient data, and armies of clinical coders have been converting doctors’ scrawls and lab test results into medical data-point entries, depositing these in vast digital storage systems. Validated clinical coding languages mean there is a code for every possible medical event, test, outcome, or possibility. These are stored in electronic health records (EHRs). This data is a fraction of the available medical data we could collect, but represents our current knowledge base and a computer’s window into our bodies. It is, possibly, the most powerful toolset the NHS has.
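In spirit, clinical coding maps free text onto entries from a validated terminology. A minimal sketch of that idea follows; the code values and lookup table are invented placeholders, not real SNOMED CT or ICD-10 codes, and real coding involves trained coders and far richer context.

```python
# Illustrative coding table; real systems use validated terminologies
# such as SNOMED CT or ICD-10. These codes are placeholders.
CODE_TABLE = {
    "rash": "C001",
    "fever": "C002",
    "cough": "C003",
}

def code_findings(free_text):
    """Turn a doctor's free-text note into coded data points."""
    text = free_text.lower()
    return [code for term, code in CODE_TABLE.items() if term in text]

entry = code_findings("Patient presents with fever and a widespread rash")
```

Once findings exist as codes rather than scrawl, they become exactly the kind of hard data points a computer can aggregate and learn from.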
However, dividing human illness into these digital entities requires human effort: doctors and nurses who tick boxes, answer seemingly pointless questions, and negotiate pop-up warnings galore. EHRs are not user-friendly, yet they persist because they store valuable information and are the very foundation of the artificial intelligence revolution.
In the next decade or so we will start to see more autonomous machines collecting and sorting data, freeing up medical staff to do their clinical jobs. For example, voice recognition paired with a clinical coding algorithm has the potential to document consultations without the need for note-taking. Remote patient monitoring could automatically record our vital signs. Robots that can take blood and other physiological measurements are already being developed. Once the automation of data collection is solved, progress towards a true artificial intelligence will be exponential.
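To make the voice-to-record idea concrete, here is a toy sketch of pulling structured vital signs out of a dictated sentence. The patterns and field names are invented for illustration; a real system would use a proper speech-to-text engine and clinical natural language processing, not two regular expressions.

```python
import re

def extract_vitals(transcript):
    """Pull structured vital signs from a dictated sentence (toy patterns)."""
    vitals = {}
    hr = re.search(r"heart rate (?:is )?(\d+)", transcript)
    if hr:
        vitals["heart_rate_bpm"] = int(hr.group(1))
    temp = re.search(r"temperature (?:is )?(\d+(?:\.\d+)?)", transcript)
    if temp:
        vitals["temperature_c"] = float(temp.group(1))
    return vitals

note = extract_vitals("heart rate is 72, temperature 37.2, chest clear")
```

The point is the shape of the pipeline: spoken findings in, coded data points out, with no note-taking in between.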
Thousands of companies around the world are working to train machines in pattern recognition, deep-learning algorithms, natural language processing, remote patient monitoring, visual image analysis and processing of medical data. A cornucopia of solutions is being built, and at its nexus a three-stage awakening will occur: scattered systems will be connected via a common language; there will be an abundance of labelled, curated clinical data; and artificial intelligence will be allowed to grow its neural connections. The results will transform frontline health care.
A longer version of this blog was originally submitted as an entry to The NHS if… essay competition earlier this year.
- Sign up to attend Ideas that change health care, a festival of ideas to inspire and challenge the future of health care on 6 October 2017.