Will this new incarnation provide the definitive measure of a hospital's quality and will it put to bed the arguments about the merits and evils of HSMRs? The answer to both questions is 'probably not'.
The experts on the review panel, of which I was a member, debated long and hard over the methodology for producing this indicator. But statistics cannot entirely overcome the complexities of adjusting for the many factors – some unrelated to the quality of care – that contribute to deaths in hospital. The result is an indicator that is inevitably subject to caveats, which the Information Centre sets out alongside the data. No one, not even its ardent supporters, would maintain that this indicator is an unequivocal marker of hospital quality. No single indicator ever is, given the complexity of an acute trust's patient mix and clinical activities, and this one comes with many ifs and buts.
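For readers unfamiliar with how such indicators are built, the underlying arithmetic is a ratio of observed to expected deaths, where the "expected" figure comes from a statistical model of each patient's risk. The sketch below is purely illustrative – the risk values and trust figures are invented, and the real SHMI model adjusts for case mix in far more detail – but it shows why the caveats matter: the ratio is only as good as the risk model behind it.

```python
# Illustrative sketch of an observed-vs-expected mortality ratio.
# The per-admission risks and death counts here are hypothetical,
# not drawn from the actual SHMI methodology.

def mortality_ratio(observed_deaths, expected_risks):
    """Ratio of observed deaths to the sum of model-predicted risks."""
    expected = sum(expected_risks)
    return observed_deaths / expected

# A hypothetical trust: 5,000 admissions, each with a modelled 2% risk
# of death (so 100 expected deaths), against 120 observed deaths.
risks = [0.02] * 5000
print(round(mortality_ratio(120, risks), 2))  # prints 1.2
```

A ratio above 1 means more deaths than the model predicted; below 1, fewer. The critical point of the debate is that a high ratio may reflect poor care, or simply factors the risk model fails to capture.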
There has been fierce debate among professionals about the validity of this indicator, and whether it is even sensible to try to use hospital-wide mortality as a measure of quality, when most deaths in hospital are unavoidable or inevitable. This debate will not go away and could even be stoked further by the SHMI. Critics may say this is window dressing for an inherently flawed concept.
So why have the SHMI at all? First, such an indicator has been around in one guise or another for many years, in NHS Choices and other publicly available sources of information; withdrawing it from public view would be difficult to explain. Second, the government's open data policy aims to make more, not less, information publicly available, to support accountability, transparency and patient choice. The Freedom of Information Act also makes publication unavoidable. Third, hospitals should make use of all available information, even if imperfect, to scrutinise the quality of their care. Mortality rates can be a useful prompt for further investigation by providers and regulators – this indicator has, on occasion, helped to identify organisations where there have been serious failures of care. An analogy I have heard is that mortality indicators serve as a smoke alarm: when a smoke alarm goes off, it doesn't necessarily mean the kitchen is on fire, but it does mean you should check whether there is a problem.
On the other hand, there is a risk that the data may be used inappropriately, without regard to the accompanying health warnings, and that some organisations may incorrectly be categorised as providing poor-quality care. This can mislead patients and the public, and erode confidence in NHS staff and organisations.
Does this mean it is impossible to measure the quality of hospital care, and where do we go from here? We have two practical suggestions. First, the focus should be on developing clinical indicators that relate to particular conditions or procedures; these are less prone to the technical problems that come with an SHMI-type summary indicator. Second, the Department of Health and the Information Centre should speed up the process of using national clinical audit data as the basis for measuring and reporting on quality. Such data provides a more robust basis for informing health care professionals, patients and the public about the quality of care provided by an organisation. The time will come when NHS data on the quality of its services is a useful and valued currency that serves many audiences and purposes. In the meantime, boards should focus on reviewing all available data to understand how they are performing and how they can drive improvement.