Last week, Jeremy Hunt announced the government’s plans to introduce Ofsted-style ratings for clinical commissioning groups (CCGs), to help fill what he called the ‘transparency gap’ in the NHS. He said the ratings would help the public to understand the quality of their local services and how this compares with other places. But are more ratings really what the NHS needs?
Drawing on the findings of our recent review for the Department of Health on measuring the performance of local health systems, we would suggest there are at least five important questions to be asked about these plans.
Will the focus be on CCGs or local health systems?
This might sound like quibbling over semantics, but it’s worth being clear that CCGs and local health systems aren’t the same thing. CCGs commission some (but not all) health services in their area. They are just one part of a broader health system that includes all those commissioning and providing health services for the local population.
This distinction has implications for how performance is assessed. Assessing CCGs means using indicators that can be attributed to their performance as organisations, while local health systems need to be assessed through a wider lens – for example, looking at how NHS services work with social care and public health services and assessing their collective impact.
Taking a broader approach and looking at local health system performance as a whole has the potential to encourage commissioners and providers to work together to improve care for the population they collectively serve, an argument we put forward in our review. And does the public really care about CCGs anyway?
What’s the aim of CCG ratings?
Measuring quality in health services isn’t simple. Choices have to be made about what to measure and how to report results – and both require clarity about why performance is being measured in the first place.
Jeremy Hunt’s speech suggested two main aims for CCG ratings: providing information to the public about local services and offering a judgement on CCG performance (from ‘inadequate’ to ‘outstanding’). He has also talked about transparent reporting of data – ‘intelligent transparency’ – to support improvements in NHS care.
The problem is that these aims require different approaches. Take judgement and improvement. Judgements require assessments to be made based on indicators which are attributable markers of performance, while indicators to support improvements in care can be less robust and leave more room for interpretation.
How will the aggregate ratings be constructed?
Aggregate ratings – summary ratings combining different measures – are not new in the NHS. Evidence of their impact over the last 15 years is mixed, at best.
While ratings can sometimes help to improve quality, they have also had perverse effects. These include NHS organisations manipulating data, taking actions that are not in the best interests of patients, and paying less attention to areas not covered by ratings. Ratings have also distorted local priorities and damaged organisational culture, staff morale and recruitment.
These issues – alongside many others – led us to advise against using aggregate scores based on performance metrics in our review. Put simply, they risk hiding far more than they reveal. If aggregate scores are going to be produced for CCGs, they should at least draw on ‘softer’ intelligence too, relating to leadership, culture and other factors. The suggestion of ‘expert committees’ to interpret performance data may not be quite the same thing.
How does this fit with everything else being used or developed to assess NHS performance?
The way that performance is reported and assessed in the NHS is complicated and confusing. The danger is that a new way of measuring performance is being layered on top of what we already have.
How will the new approach fit with the existing outcomes frameworks and the Care Quality Commission’s plans for place-based ratings? And how will local health systems know what their shared priorities are? There needs to be simplification and alignment of the way that performance is assessed in the NHS – not more confusion.
Do we know what information the public really wants?
If one aim of the ratings is to provide information to the public, then the public should first be asked what kind of information they want. This is particularly true for information about CCGs or local health systems – things that people have no real choice over.
In his speech, Jeremy Hunt said that ratings in six clinical areas will be produced alongside overall CCG ratings. The challenge in doing this is that not all parts of the population are covered by the areas chosen, and that some clinical areas are prioritised over others. Asking the public what they think will be essential.
These are just some of the questions that will need to be considered as the plans for CCG ratings are developed. We strongly support Jeremy Hunt’s aim of ‘intelligent transparency’ in the NHS to support improvements in care – but it will be important to ensure that the approach taken to achieving it is just as ‘intelligent’ as the language used to describe it.
- Read the review: Measuring the performance of local health systems
- See the press release: NHS performance frameworks need radical simplification and alignment, The King’s Fund review finds
- More about the review: Measuring the performance of local health systems