Mustafa Suleyman: New ways for technology to enhance patient care

Featuring:

Mustafa Suleyman

Mustafa Suleyman of Google DeepMind talks about their clinician-led approach to developing health technology. In particular, he describes their work with the Royal Free managing acute kidney injury with the Streams app and with Moorfields Eye Hospital using machine learning to improve diagnosis of serious eye conditions.


This presentation was filmed at our Digital Health and Care Congress on 6 July 2016.

Transcript

Let me briefly tell you how we attempt to go about building general purpose learning algorithms. Everything starts with an agent. You can think of an agent as a control system for a robotic arm, a self-driving car or a recommendation engine, and that agent has some goal that it's trying to optimise. We hand-code that goal; it's the only thing we give the agent. We say, "These are the things that you should find rewarding in this environment." The environment can also be very general: it could be a simulator for training a self-driving car, or it could be YouTube, where you're really trying to recommend videos that people find entertaining and engaging. The agent is able to take a set of actions in the environment, interacting experimentally, independently and autonomously, and the environment then provides back a set of observations about how its state has changed as a result of those actions. Of course, the environment also passes back a reward, which the agent is able to learn from. So it's really learning through feedback, through the reinforcement learning process.
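To make that loop concrete, here is a minimal sketch of the agent–environment cycle he describes, using a toy grid world and tabular Q-learning. The GridWorld and EpsilonGreedyAgent classes are illustrative assumptions for this sketch, not DeepMind's code.

```python
# Minimal sketch of the reinforcement learning loop described above.
# GridWorld and EpsilonGreedyAgent are illustrative, not DeepMind's code.
import random

class GridWorld:
    """Toy environment: the agent walks a 1-D track towards a goal cell."""
    def __init__(self, length=10):
        self.length = length
        self.position = 0

    def reset(self):
        self.position = 0
        return self.position  # initial observation

    def step(self, action):
        # action: -1 (step left) or +1 (step right)
        self.position = max(0, min(self.length - 1, self.position + action))
        reward = 1.0 if self.position == self.length - 1 else 0.0
        return self.position, reward, reward > 0  # observation, reward, done

class EpsilonGreedyAgent:
    """Tabular Q-learning: learns action values from observed rewards."""
    def __init__(self, n_states, actions, lr=0.1, gamma=0.9, epsilon=0.1):
        self.q = {(s, a): 0.0 for s in range(n_states) for a in actions}
        self.actions, self.lr, self.gamma, self.epsilon = actions, lr, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:  # occasionally explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.lr * (target - self.q[(state, action)])

env = GridWorld()
agent = EpsilonGreedyAgent(n_states=env.length, actions=[-1, 1])
for episode in range(200):
    state, done = env.reset(), False
    while not done:
        action = agent.act(state)                       # act in the environment
        next_state, reward, done = env.step(action)     # observe the result
        agent.learn(state, action, reward, next_state)  # learn from feedback
        state = next_state
```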

The remarkable thing about health was that there's an incredible margin for improvement if we're successful in deploying cutting-edge, modern technology systems. I mean, there really is no other sector I can think of in the world that is so far behind the cutting edge in terms of technology, and if we're successful that represents a massive opportunity for us to have a beneficial impact. As many people have already pointed out, there's a graveyard of failed technology efforts over the last 20 years, so I think in that context we really had to think about what we were going to bring that would be very different. Clearly we have machine learning and artificial intelligence, but I think a lot of this is about the approach we take to developing software and how you put both patients and clinicians at the very forefront of that.

So the approach that we take is to frame everything as starting with an observation of what a user does on a day-to-day basis. We spend lots of time immersing ourselves in wards and with nurses in the mess rooms, trying to observe what they do, define their challenges and gather as many insights as we can, and then immediately start to build something. As fast as possible we want to show what a rough design might look like, here are some wireframes, and then develop that a little bit further, test it, and then, as we start to develop a solution, try to measure, build and learn, and then just rinse and repeat. We try to do that in very, very quick iterative cycles. So within three weeks or so of meeting our first nurses and signing our agreements with the Royal Free back in September and October, we had a working prototype, obviously not connected to any data, but one that nurses and doctors could actually point to and say: this button is in the wrong place, this colour is difficult to read, this menu hierarchy is in the wrong order. So we can instantly get feedback and deliver pretty much what nurses and doctors tell us they want to see.

So this is our mantra, ABC, always be clinician-led, and every single project that we will work on, and every project we've worked on so far, has been brought to us by a nurse or a doctor who has some idea, some insight, into how they can change the behaviour in their day-to-day operation and how a technology solution might actually work.

So how might patient care be better supported by technology? Well, I think obviously there's an enormous opportunity for improvement. One in ten patients experiences some kind of harm in hospital, and half of that is completely preventable or avoidable harm. In 50% of those cases, detection of the patient deterioration in question has actually been delayed, and that's really a communication and coordination issue. Largely I think this is because of the current limitations: most of the really valuable data sits on paper and on charts and isn't logged or tracked or recorded. There's no auditable log that you can verify of the pager messages that have been sent or the reminders that have been sent.

So I think there are two core patient safety challenges that have framed everything we do in DeepMind Health. The first is: how can we do a better job of identifying which patients are at risk of deterioration, largely in real time? The second is: once we've identified which patients are at risk, how do we actually intervene? We don't want this to end up as just a report that advises on some reorganisation of facilities on a ward; we actually want to deploy technology in real time that enables clinicians to do a better job of escalation and intervention.

So on our patient safety challenge number one, how do we do better detection, we've looked at acute kidney injury over the last twelve months or so, and this is a remarkably important problem: 25% of all admissions present with some kind of evidence of an AKI, and there are 40,000 or so deaths in England due to AKI alone. It's estimated that something like 20% of these are actually preventable and that the cost could be as much as a billion and a half pounds. So a couple of years ago, in 2014, NHS England issued a patient safety alert mandating that the acute kidney injury algorithm be deployed in hospitals.
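For context, the NHS England algorithm he refers to flags possible AKI by comparing a patient's current serum creatinine with a baseline derived from their earlier results. The sketch below is a simplified rendering of that ratio-based logic: the staging thresholds follow the published alert, but the baseline-selection rules are condensed and the function names are my own, so treat it as illustrative rather than the deployed algorithm.

```python
# Simplified sketch of a creatinine-ratio AKI alert, loosely following the
# NHS England algorithm. Staging thresholds match the published alert;
# baseline selection is condensed and the names are illustrative.
from statistics import median

def aki_stage(current_umol_l, prior_results):
    """
    current_umol_l: latest serum creatinine (micromol/L)
    prior_results: list of (days_ago, value) tuples for earlier tests
    Returns 0 (no alert) or AKI stage 1-3.
    """
    last_week = [v for d, v in prior_results if d <= 7]
    last_year = [v for d, v in prior_results if 8 <= d <= 365]

    # Baseline: lowest value in the past week, else median over the past year.
    if last_week:
        baseline = min(last_week)
    elif last_year:
        baseline = median(last_year)
    else:
        return 0  # no baseline available, so no ratio can be computed

    ratio = current_umol_l / baseline
    if ratio >= 3 or current_umol_l >= 354:
        return 3
    if ratio >= 2:
        return 2
    if ratio >= 1.5:
        return 1
    # A rise of more than 26 micromol/L within 48 hours also triggers stage 1.
    recent = [v for d, v in prior_results if d <= 2]
    if recent and current_umol_l - min(recent) > 26:
        return 1
    return 0

# Example: creatinine roughly doubled against a baseline of 60 micromol/L.
print(aki_stage(130, [(3, 60), (30, 58), (90, 62)]))  # -> 2
```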

So once again, the first thing that we did was try to observe users in their day-to-day setting. We went into the Royal Free and we mapped out the pathway: what is the experience from a patient's perspective today? It turns out it's actually really, really complicated. There are lots of different stages to the path a patient might go through, and what we noticed is that there is a whole series of life-threatening and complicated stages in that pathway which seem to be exactly where the key signs of deterioration are being missed. So what we wanted to do is take a step back and see how we could intervene earlier to do better risk assessment and more real-time prevention and monitoring, and then hopefully redirect patients through the pathway towards a full recovery and a discharge. Once we'd broken it down into these sorts of steps, we had a shared visualisation between us and the clinicians of where all of the key intervention opportunities actually sit.

So in response to this we developed Streams, our AKI alerting platform for blood test results. That's the very simple intervention we've built so far, keeping it really focused on one very specific condition using blood test results, but in the future I think there's a real opportunity for us to go much, much further and extend this into a broader patient-centric collaboration platform. This essentially puts in the palm of our hand the ability to detect, in real time, patients who are at risk of deterioration, but that's only one part of the challenge. The next key thing we need to be able to do is escalate and intervene better, and this is where messaging and commenting become so important. Take, for example, the x-ray we looked at just now: here we see that a registrar is able to make a comment on that report and then 'plus in' a respiratory consultant to get an expert view, and that exchange can happen in an auditable way that allows us to verify retrospectively, if needed, what the senior clinician actually said and what action was subsequently taken.
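To illustrate what such an auditable exchange might look like in data terms, here is a minimal sketch of an append-only event log for comments and escalations. The structure, field names and example users are entirely illustrative; this is not the Streams data model.

```python
# Illustrative append-only audit log for clinical comments and escalations.
# A sketch of the general idea only, not the Streams data model.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEvent:
    timestamp: str
    author: str   # e.g. the registrar or consultant involved
    case_id: str  # the report or result being discussed
    action: str   # "comment", "escalate", "acknowledge", ...
    detail: str

@dataclass
class AuditLog:
    events: list = field(default_factory=list)

    def record(self, author, case_id, action, detail):
        # Events are only ever appended, never edited or deleted,
        # so the exchange can be verified retrospectively.
        event = AuditEvent(
            timestamp=datetime.now(timezone.utc).isoformat(),
            author=author, case_id=case_id, action=action, detail=detail,
        )
        self.events.append(event)
        return event

    def history(self, case_id):
        return [e for e in self.events if e.case_id == case_id]

log = AuditLog()
log.record("registrar_jones", "xray-4711", "comment", "Possible consolidation, left base.")
log.record("registrar_jones", "xray-4711", "escalate", "Plus in respiratory consultant.")
log.record("consultant_patel", "xray-4711", "comment", "Agree; repeat film in 48 hours.")
for event in log.history("xray-4711"):
    print(event.timestamp, event.author, event.action, event.detail)
```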

Quite separately to this, we're also embarking on a research programme to see if our machine learning and AI technologies can actually help with some aspects of diagnosis. The remarkable thing is that if you have diabetes you are 25 times more likely to suffer some kind of sight loss, but most interestingly, the very most severe types of sight loss due to diabetic retinopathy can actually be prevented through earlier detection. So one of the things we've been thinking about is how we could help with better real-time classification of the scans coming through, to enable a more sensible triage of which patients require a more immediate response.

So the current reality is that, with human performance, there's a great deal of backlog in reporting, which means that results potentially aren't available in clinic for weeks. There's also a lack of consistency between different graders, and sometimes reporters will miss some of the subtle changes of diabetic retinopathy and AMD. With machine learning, one of the things we hope we might be able to do is deliver much faster, near-instant results, but also more consistent and more standardised performance, and I think this will also help us to understand and adjust for some of the normal variation that we see, which will allow us to increase our specificity.
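As a reminder of the metric he mentions, sensitivity and specificity fall straight out of a grader's confusion matrix. The sketch below computes both; the counts are made up purely to illustrate the arithmetic.

```python
# Sensitivity and specificity from a confusion matrix.
# The counts below are hypothetical, purely to show the arithmetic.
def sensitivity(tp, fn):
    return tp / (tp + fn)  # fraction of diseased eyes correctly flagged

def specificity(tn, fp):
    return tn / (tn + fp)  # fraction of healthy eyes correctly cleared

tp, fn, tn, fp = 90, 10, 160, 40  # hypothetical grading outcomes
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.90
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.80
```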

This is very much early work, but we're committed to publishing all of the results of our work, including our algorithms, our methodologies and our technical implementations, so hopefully, when we're ready, you'll hear more from us on the results of that research towards the back end of this year.
