The quest for quality in the NHS: still searching?

John Appleby
Publication:  BMJ
Reference: BMJ, Jul 2005; 331: 63-64

It must be incontestable that patients want the highest possible quality of health care, given the necessarily limited budgets allowed by governments and social or private insurers. Quality, reputation, and similar concepts often top the public's lists of desirable features of health services, frequently in preference to specific criteria such as shorter waiting times and greater choice. But what quality of care is being provided by health services? And is the considerable extra funding now flowing into the United Kingdom's NHS improving quality?

According to Sheila Leatherman and Kim Sutherland, whose report answers a question Richard Smith posed in the BMJ two years ago when reviewing their earlier look at this issue, the NHS is improving, but patchily. Waiting times in England are now at a historic low, mortality is decreasing, and patient satisfaction ratings are high. However, the prevalence of methicillin resistant Staphylococcus aureus (MRSA) is up, and waiting times in Northern Ireland and Wales remain very long. No real surprises here.

In their report The Quest for Quality in the NHS, Leatherman and Sutherland provide their view on the quantifiable measures that constitute the somewhat slippery concept of quality of care. We may know high quality health care when we see it, but can we measure it, benchmark it, and put it into graphs? Leatherman and Sutherland do just that: in 110 graphs divided into six domains (effectiveness, access, capacity, safety, patient centredness, and disparities) they set out their quantification of quality in health care. Their stated aim is to 'provide a coherent and compelling summary of the state of health care quality in the UK'.

In one sense this report is indeed a summary, in that it omits some factors that might be considered important measures of quality. Not least among these is some measure of health itself, which should feature prominently on any list of criteria of quality of care. Examples include a patient reported outcome measure (PROM) or a self assessed rating of health related quality of life, as measured by generic instruments such as the SF-36 or the EQ-5D. The NHS does not measure these routinely, but it could do so. The report also omits some potentially useful sources of data on public attitudes, such as the health component of the British Social Attitudes survey.

In another sense, however, the figures collated by Leatherman and Sutherland, mainly from official government or government agency sources, are a desultory collection of facts; little actual summarising takes place in the report. Nor does the report acknowledge that there may be trade-offs between different aspects of quality of care. Budgets are limited, so do we go for shorter waiting times, perhaps at the expense of investment in interventions to treat childhood leukaemia? Furthermore, despite the authors' aim to provide an independent dataset to challenge the often contested statistics from the government, most of the data come from government sources, which sits uneasily with any claim to independence.

Another important issue is the weight or value society might place on achieving the different aspects of quality set out by Leatherman and Sutherland. Again, given the inescapable opportunity costs of pursuing one policy rather than another, do we get more value from improving some aspects of quality (a reduced prevalence of MRSA, say) than from others, such as choice of doctor?

What Leatherman and Sutherland have produced is laudable; as a self proclaimed graph obsessive I am pleased to see such a collection of data. I also know how much effort is needed to produce even the most straightforward of comparisons and time trends from often rather wobbly official sources.

But this is only a first step in the long and arduous process of answering the deceptively simple question of what we get for our healthcare investment and, more specifically, of how we identify poor performance by both organisations and individuals. Leatherman and Sutherland's collection of statistics is one starting point, but much more analysis is needed. For example, Harley and colleagues recently reported an interesting use of hospital episode statistics (HES), a UK national database of patient records, to identify the rogue gynaecologist Rodney Ledward, who was suspended in 1996 and was the subject of the Ritchie inquiry into quality and practice within the NHS.

Now Lakhani and colleagues have demonstrated further uses of the hospital episode statistics database to tackle questions of variation in performance across the NHS. While also finding improvements (and some failures) in NHS care, Lakhani et al acknowledge that, as well as wider use of this database, more and different data need to be collected if we are to get a better grip on what our hard earned taxes are buying in health care. They cite a study from the London School of Hygiene and Tropical Medicine, commissioned by the UK Department of Health, which attempts to bring together and summarise, through modelling, disparate data on quality and other measures that link inputs to outcomes.

Despite Lakhani and colleagues' caveat that such models may trade off scientific rigour against transparency, there is no other way to discover what we get for our money. This work suggests that the NHS needs to develop the health care equivalent of the Bank of England's model of the economy, in which evidence based relationships between key variables (unemployment and interest rates, for example) allow analysts to judge the potential economic impact of changes in policy. An equivalent model for the NHS could indicate by how much the quality indicators compiled by Leatherman and Sutherland would improve for every additional pound invested.