Charles Tallack: Evidence emerging from evaluation of the new care models

Charles Tallack, Head of NHS Operational Research and Evaluation, NHS England, presents data and findings from evaluation of the new care models.

This presentation was recorded at our conference on Mainstreaming primary and acute care systems and multispecialty community providers on 21 March 2017.

Transcript

Thank you very much, Chris, and thank you very much for inviting me here. I always enjoy speaking about evaluation and I think, particularly for the new care models programme, we have been extremely fortunate in having a very strong commitment to evaluation. I hope that, during the programme and by the end of it, we will have really demonstrated the value that evaluation can bring to this kind of change.

This is really about the evidence emerging from evaluation. Sam has already talked a little bit about some of that. The top question people always ask when you talk about evaluation is: what is the impact of the new care models? Of course, that is the ultimate question, but answering that question alone would not really tell you everything you need to know. All it would tell you is that the new care models programme was perhaps reducing emergency admissions compared to not having a new care model; it would not tell you anything about why, and it would not be much help for people trying to follow in the wake of the new care models. You want to know how they have done it. What are the elements which have contributed to the success? Assessing impact is an important aspect of evaluation, but the evaluation has also been set up to address a number of other questions that people have asked. These questions on the left-hand side are the questions that we have heard during the course of the work we have been doing. There is an early question, which Sam alluded to: what is the model and how is it intended to work? That was the question Sam talked about, when someone said, that is wonderful, but I do not really know what you are talking about. There was an early stage of evaluation which was really trying to crystallise what these new care models were. How were the things being proposed intended to work to produce the change? That is a very, very important part of it. Then there is obviously an important question – what changes are being made and what is happening as a result? So, a vanguard might be talking about a programme of social prescribing. But what does that really mean? What changes are being made, and what are the consequences for patients, staff and other people?

The next question was: what is the impact? We have talked about that. Also important: what is the cost? You might be having a large impact, but it may actually be a more costly system than before. Or it could be a cheaper system. You need to know something about cost and the resources going in. What is causing the impact? Now, that might sound like a straightforward question, but obviously there is a multitude of changes going on. Working out which of those are making a positive contribution and which are making much less of a contribution is important, but quite tricky to unpack.

Then the final two questions are really the ones around spreading. What should be replicated and spread? Some of the things which might be causing the impacts might be contextual factors within the vanguard – things that other vanguards cannot easily copy, to do with the configuration of services, the population, the rurality, all those kinds of things, which are much harder to copy. What should be replicated and spread? What can be taken up and copied by others? Then there are also some very practical questions around how we can implement these things. What are the barriers? What are the enablers? If we want to turbocharge implementation, what do we have to do?

I am not going to go through all the things at the top, but we have several different aspects to our evaluation, and the reason we have got all of those is that these questions all require different kinds of tools and techniques to answer them.

Everyone probably knows about logic models. They are a very important part of the early stage of a care model. They are basically a diagrammatic way of setting out how the changes you are making are intended to lead to the outcomes you are aiming at. We have got a dashboard. We have got local metrics and national metrics. We have got an independent evaluation: Manchester University, together with Kent University, has just been appointed to undertake a four-year evaluation, which is looking at things that we cannot easily look at within our rapid, ongoing evaluation. They will look at the wider impact of these care models. They will look at how other areas are taking up the care models. They will look at impacts on things like inequalities and a variety of other wider impacts, but also at the programme as a whole as a method of change. What they said to us when we met them last week at Manchester University was that this is a different way for the NHS to do change, and that is an important question for us to evaluate, so that we can think about this as a method in the future. Then we have got some other learning and impact studies. I will talk about some of these in the rest of what I am going to say.

So, the question of impact and progress. Even that is not a simple question. You can talk about impact at a PACS or MCP cohort level: overall, are the multispecialty community providers having an impact? The impact of a cohort depends on the impact of individual vanguards, so you also want to know what is happening in individual vanguards, and of course the impact in individual vanguards is the result of the interventions they are putting in place. I have just set out some vanguards here: the first three might be doing intervention one, while the fourth vanguard might be at a much earlier stage in implementing intervention one. We use different approaches to look at the impact at these different levels. At the cohort level, we are doing national analysis of the time series. We have got some core metrics we are looking at, and we are tracking their progress against what would have happened otherwise – the counterfactual. At the vanguard level, we have got a huge, rich variety of data. We have obviously got the data from these national metrics, but we have also got data from local metrics. Each vanguard has now got a local evaluation partner, funded by the national programme, to do their own evaluation. They are collecting data on those local metrics, but they are also doing their own pieces of analysis and producing reports to look at particular interventions in more depth. Then, at the third level down, we have got evaluations of individual interventions. The evaluators in the vanguards are doing evaluations of particular interventions, but we also have a partnership with the Health Foundation, which we have called the improvement analytics unit, which is using sophisticated statistical techniques to do really robust evaluations. They have got a couple of pilots going on at the moment in the vanguards: they are looking at the Principia MCP, where they are looking at the care home intervention, and they are also looking at the Northumberland vanguard. We are going to have some really high-quality studies of interventions.

Just looking at the top level, what can we tell from national analysis? We set out about six different metrics. The ones which we can track most rapidly are the ones based on HES data relating to activity in hospitals. All vanguards talked about reducing emergency admissions and bed days. This is the kind of approach we are using. So, the green line – actually, I will talk about the red line first. The red line is the trend in emergency admissions per head of population, by quarter, for multispecialty community providers. This axis is the change from the base. If you look at the top line over here, it is at about 3.8%, so what that is saying is that admissions in MCPs have grown, so that the quarterly figure for quarter three 2016/17 was 3.8% higher than the same quarter in the base year, the base year being 2014/15. PACS are slightly lower, at about 2.2% higher than the quarter in the base year. The question is: have they done better than they would have done otherwise? The comparator there is the green line, which is the non-new care models. If you look at this data and do some further processing of it, you find that for the PACS and the MCPs, over the last year, the growth from the base year has been lower than for non-new care models. This is the data Simon Stevens was using when he talked to the Public Accounts Committee a couple of weeks ago. That is the kind of aggregate-level analysis but, as I said, the important thing in understanding it is to understand what is going on within vanguards themselves.
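
As a purely illustrative aside, the arithmetic behind that chart – indexing each group's quarterly series to the same quarter of the base year – can be sketched in a few lines of Python. The figures, group labels and column names below are invented for illustration; they are not the actual national metrics:

```python
import pandas as pd

# Illustrative quarterly emergency admissions per 1,000 population for
# MCP, PACS and non-new-care-model (comparator) areas. Not real data.
data = pd.DataFrame({
    "group": ["MCP"] * 3 + ["PACS"] * 3 + ["non-NCM"] * 3,
    "quarter": ["2014/15 Q3", "2015/16 Q3", "2016/17 Q3"] * 3,
    "admissions_per_1000": [
        25.0, 25.4, 25.95,   # MCP: +3.8% by 2016/17 Q3
        24.0, 24.2, 24.53,   # PACS: +2.2%
        26.0, 26.7, 27.4,    # comparator grows faster
    ],
})

# Index each group's series to the same quarter of the 2014/15 base year,
# mirroring the "% change from base" axis described in the talk.
base = data[data["quarter"] == "2014/15 Q3"].set_index("group")["admissions_per_1000"]
data["pct_change_from_base"] = data.apply(
    lambda row: 100 * (row["admissions_per_1000"] / base[row["group"]] - 1),
    axis=1,
)

# The comparison of interest: vanguard growth vs the non-vanguard trend.
print(data.pivot(index="quarter", columns="group", values="pct_change_from_base").round(1))
```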

So, I am going to talk about a particular vanguard, which is Dudley. Dudley is a good example because there is some really rich evaluation data coming out and they are extremely open about sharing their data; they have embodied the programme's spirit of openness. This is their logic model. I am not going to talk about it in detail, but just to show the structure of it: you have got your rationale for the programme; you have got your priority themes and actions; you have got your intermediate outcomes; you have got your outcomes; and then you have got the overall impacts, and these overall impacts are really the gaps in the Five Year Forward View – improved quality and quantity of life for all people in Dudley, and an integrated and self-improving system of care. Zooming in on the priority themes and actions, they have basically grouped their interventions into three themes – access, continuity and coordination. If you look at the coordination line, under there we have got some of the things they are doing. They have got multi-disciplinary teams to extend and roll out, including risk stratification. They have got an end of life care intervention. A training programme. Engaging care homes. Care plans. New types of workers, and a frail elderly team without walls. So, putting the evidence together on Dudley: this is telling the whole story, from the national metrics down to what we are getting from the local evaluations. Emergency admissions per capita – a bit of an odd picture here. Dudley has increased since the base year by 12.9%, compared to an increase for the rest of England of about 3.3%. Strangely, bed days have actually been reducing slightly more than in the rest of England. What this tells us, I think, is that just relying on the national metrics does not really tell us what is going on; we need local interpretation. So, this is the change in emergency admissions – the percentage contribution to emergency admissions by different lengths of stay. This is zero length of stay, the blue line at the top. As you can see, its contribution to the overall number of emergency admissions has increased over time.

What we think is going on in Dudley is that there have been some coding changes, which have meant that what in the past would have been A&E attendances are now being classified as short-term admissions. The headline data basically raises further questions; it is not definitive in itself.
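
To make that length-of-stay decomposition concrete, here is a minimal sketch assuming a hypothetical admissions extract. The field names and numbers are illustrative stand-ins, not HES data:

```python
import pandas as pd

# Illustrative admissions extract: one row per emergency admission, with
# the financial-year quarter and length of stay in days. Not real data.
admissions = pd.DataFrame({
    "quarter": ["2014/15 Q1"] * 6 + ["2016/17 Q3"] * 6,
    "los_days": [0, 0, 1, 3, 5, 9,
                 0, 0, 0, 0, 2, 7],
})

# Band lengths of stay, then compute each band's percentage contribution
# to all emergency admissions in each quarter. A rising zero-day share,
# as in Dudley, can flag coding changes rather than genuine growth.
admissions["los_band"] = pd.cut(
    admissions["los_days"],
    bins=[-1, 0, 2, float("inf")],
    labels=["0 days", "1-2 days", "3+ days"],
)
counts = admissions.groupby(["quarter", "los_band"], observed=True).size()
share = counts.groupby(level="quarter").transform(lambda s: 100 * s / s.sum())
print(share.round(1))
```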

I mentioned earlier that each of the vanguards is submitting local metrics to us on a quarterly basis. We have now had two submissions: one in October and one in January. The October one felt something like a trial run, and we worked with each of the vanguards to improve the quality of the data coming in for the second submission in January. Vanguards were asked to provide data that they themselves would use to judge the success of their programme. This should not feel like a new burden that the national team is placing on them: we are asking them to say how they are measuring success, and please will you send us the data which shows how you, yourselves, are measuring that.

So, this is one of Dudley's many local metrics: unplanned hospitalisation for ambulatory care sensitive conditions, in this case asthma. Dudley are using some quite sophisticated approaches. They are using time series analysis with statistical process control, and they are also using funnel plots – each of these dots here is a GP practice. Of the three themes – access, continuity, coordination – this sits in the coordination theme, as a result of the multi-disciplinary teams and risk stratification that they had down as one of their interventions.
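
A rough sketch of the funnel plot technique described here, with randomly generated practice-level data standing in for Dudley's real figures:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative practice-level data: list size and count of unplanned
# asthma admissions for 40 GP practices. Randomly generated, not real.
rng = np.random.default_rng(0)
list_size = rng.integers(2_000, 15_000, size=40)
events = rng.binomial(list_size, 0.003)
rate = events / list_size

# Funnel plot: the overall rate with approximate 2- and 3-sigma control
# limits that narrow as the practice list size grows.
overall = events.sum() / list_size.sum()
n = np.linspace(list_size.min(), list_size.max(), 200)
se = np.sqrt(overall * (1 - overall) / n)

plt.scatter(list_size, rate, label="GP practices")
plt.axhline(overall, color="black", label="overall rate")
for k, style in [(2, "--"), (3, ":")]:
    plt.plot(n, overall + k * se, style, color="grey")
    plt.plot(n, overall - k * se, style, color="grey")
plt.xlabel("practice list size")
plt.ylabel("unplanned asthma admissions per head")
plt.legend()
plt.show()
```

Practices falling outside the limits warrant investigation; the widening funnel at small list sizes stops small practices being flagged on chance variation alone.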

Going down – you remember that chart that had the cohort level, the vanguard level and the intervention level – they have also done an evaluation of their multi-disciplinary teams. Since April 2014, they have added over 7,000 patients to their registers, and they have done some interrupted time series analysis which looks at the trends in bed days, length of stay and non-elective admissions. Interestingly, what this shows is that this is when the intervention started: the blue line is the continuation of the trend and the red line is what has actually been achieved, on average. They think there has been a reduction in average length of stay as a result of this intervention – that is where the time series started to show a change. For admissions themselves, there was not a change, which might seem strange at first glance. But when you look at what the multi-disciplinary teams are actually doing, they are probably focusing less on proactive identification of patients at the moment; they are working with patients who are probably already under the care of a GP, and helping with the hospital discharge elements of that. Again, this is showing how the evaluation can really form part of the learning, because I think this will probably show them that, to have further impact, they will want to do more on the proactive case-finding aspects.
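
Interrupted time series analysis of this kind is commonly implemented as a segmented regression. A minimal sketch on simulated data – the numbers and variable names are illustrative, not Dudley's evaluation:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative monthly average length of stay: 24 months before and
# 12 months after an MDT-style intervention begins. Simulated data.
rng = np.random.default_rng(1)
month = np.arange(36)
post = (month >= 24).astype(float)
los = 6.0 + 0.01 * month - 0.5 * post + rng.normal(0, 0.1, size=36)

# Segmented regression: a pre-existing trend, a step change at the
# intervention, and a change in slope afterwards. The fitted pre-trend
# projected forward plays the role of the blue "continuation" line.
X = sm.add_constant(np.column_stack([
    month,                # underlying trend
    post,                 # level shift when the intervention starts
    post * (month - 24),  # slope change after the intervention
]))
fit = sm.OLS(los, X).fit()
print(fit.params)  # the coefficient on the level-shift term estimates the drop
```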

That has covered one particular vanguard. This final slide is about how we hope the evaluation will help STPs and areas wanting to introduce new care models. I have already talked about how it is helping vanguards themselves, and helping us shape the programme by learning what is working. I think there are two main products. The first is a set of toolkits aimed at helping people evaluate and monitor change themselves, which is an incredibly important ingredient of the kind of improvement that needs to go on in developing and implementing a care model.

We have already started collecting together a set of guidance on logic models. We have got some really good learning from the vanguards on how they have used logic models, and what they found the barriers and the enablers to be. There is a lot of rich learning around the metrics that people are using. The national data sets available to us are fairly limited for this kind of real-time evaluation. For example, the GP Patient Survey is great as a survey, but there is a very, very long lag between when the survey happens and when the results come out, and it is not terribly sensitive to the types of changes being implemented within vanguards. Lots of the vanguards are developing their own measures of patient outcomes and patient experience, and there is a lot of learning that others can pick up from that. That is just one example. Then there is constructing counterfactuals: what would have happened otherwise? One of my observations, looking at vanguards, is that there is still quite a long way to go on this. Some of them are very, very good at identifying change compared to what would have happened otherwise, and I think there are some relatively straightforward things that we can help people learn in that area. Then there is a series of implementation toolkits which we will be putting together, drawing on various aspects of the work we are undertaking: learning and impact studies for the elements of the care model that Sam talked about – for example, risk stratification, the extensivist model, social prescribing. What we are doing is working with groups of vanguards, typically around five or six, who are really doing something quite active in an area, and then looking at what they are doing, what we can learn and what the impact is, so that we can distil that learning, write it up and pass it on.
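
One of the simplest ways of constructing the counterfactual mentioned above is a difference-in-differences comparison against a matched area. A minimal sketch with invented numbers, not any vanguard's actual data:

```python
import pandas as pd

# Illustrative before/after emergency admission rates per 1,000 for a
# vanguard and a matched comparator area. Invented numbers.
df = pd.DataFrame({
    "area":   ["vanguard", "vanguard", "comparator", "comparator"],
    "period": ["before", "after", "before", "after"],
    "rate":   [25.0, 25.4, 25.1, 26.0],
})

# Difference-in-differences: the comparator's before-to-after change
# stands in for the counterfactual, i.e. what would have happened anyway.
pivot = df.pivot(index="area", columns="period", values="rate")
change = pivot["after"] - pivot["before"]
effect = change["vanguard"] - change["comparator"]
print(f"estimated effect vs counterfactual: {effect:+.2f} per 1,000")
```

Here the vanguard's admissions still rise, but by less than the comparator's, so the estimated effect is negative – the pattern described earlier for the MCP and PACS cohorts.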

I mentioned the high-quality impact studies from the Health Foundation and NHS England improvement analytics unit and the vanguard evaluators; the MDT study is one example of that. There will be some really nice, high-quality studies which can show people that a given intervention can have a positive impact. Then there is a wider set of evidence summaries that we will be putting together and sharing. That is the end of what I have got to say, so thank you very much for listening.
