Most of the measures available are reported at the organisation level, covering many service areas, and sometimes several hospitals. Evidence suggests there is no such thing as a 'good' hospital – an organisation that performs well in one service area does not necessarily perform well across the board.
Such measures may be useful for holding boards to account for the overall performance of their organisation, but these composite measures are much less useful for the purposes of patient choice, or to drive improvements in clinical care. It's important that we are clear about why we are collecting and reporting data – for patients (to choose), the public (to hold local organisations to account), clinicians (to improve their practice) and regulators (to assure us of minimum standards).
The public are now faced with a bewildering array of assessments and information sources. Hospitals may be rated differently by the Care Quality Commission and the Dr Foster Guide; there is increasingly detailed information from the National Patient Safety Agency on patient safety incidents; and patients can share and read opinions of their care via sites such as Patient Opinion and NHS Choices.
On top of this, from next year all hospitals will have to publish quality accounts. Our recent research on choice at the point of referral suggests that most people still rely on personal experience and that of friends and family rather than consulting other sources of quality information when choosing a hospital. When patients are given information to support a choice of provider, they want information about the particular service they need. Some even want information on the individual doctor who will treat them. Until now, only cardio-thoracic surgeons have published individual-level data on their surgical outcomes. Other clinicians should follow their lead.
Publishing data has its problems and limitations. Data quality in the NHS is generally poor, and this has a number of negative consequences across the health care system, as the Audit Commission recently reported. It is also extremely variable within and between hospital trusts. This can significantly undermine the credibility of performance information presented to the public, and it has been at the heart of much of the debate around the recent Dr Foster mortality figures. Publishing can also lead to gaming (massaging the numbers to make an organisation look good), and not everything that matters can be measured. That said, publishing data can motivate organisations and those who record data to pay more attention to accuracy. As with targets, measures focus attention, and publishing comparative benchmarked data puts pressure on providers to improve.
Measuring the quality of health care is a complicated business, but we must not shy away from the challenge or the need for greater transparency. It's important that the NHS is honest with the public about variations in the quality of care, and that people understand that health care cannot always be completely safe. Rather than leading to a collapse in confidence, greater openness should be channelled positively to empower patients and the public to put pressure on local organisations to improve, as many non-executives on boards and patient representatives already do.
We are committed to supporting the delivery of high-quality care in the NHS through a range of projects focused on quality measurement, patient-reported outcome measures, quality accounts and information for patient choice, all of which we will report on over the next few months. I hope this work will contribute to what continues to be a challenging but necessary debate within the health service.