The Dr Foster data debate

A mass of data exists within the NHS and has done for years, but it is only recently that more of this information has been put in the public domain. For that we have in part to thank Dr Foster, who thrust the work of academic researchers onto the public stage. But their latest Good Hospital Guide, which focuses on safety, has stirred up controversy about how to measure quality, what should be reported, and how to interpret such information.

Most of the measures available are reported at the organisation level, covering many service areas, and sometimes several hospitals. Evidence suggests there is no such thing as a 'good' hospital – an organisation that performs well in one service area does not necessarily perform well across the board.

Such measures may be useful for holding boards to account for the overall performance of their organisation, but these composite measures are much less useful for the purposes of patient choice, or to drive improvements in clinical care. It's important that we are clear about why we are collecting and reporting data – for patients (to choose), the public (to hold local organisations to account), clinicians (to improve their practice) and regulators (to assure us of minimum standards).

The public are now faced with a bewildering array of assessments and information sources. Hospitals may be rated differently by the Care Quality Commission and the Dr Foster Guide; there is increasingly detailed information from the National Patient Safety Agency on patient safety incidents; and you can view patient opinion via sites like Patient Opinion and NHS Choices.

On top of this, from next year all hospitals will have to publish quality accounts. Our recent research on choice at the point of referral suggests that most people still rely on personal experience and that of friends and family rather than consulting other sources of quality information when choosing a hospital. When patients are given information to support a choice of provider, they want information about the particular service they need. Some even want information on the individual doctor who will treat them. Until now, only cardio-thoracic surgeons have published individual-level data on their surgical outcomes. Other clinicians should follow their lead.

Publishing data has its problems and limitations. Data quality in the NHS is generally poor and this has a number of negative consequences across the health care system as reported recently by the Audit Commission. It is also extremely variable within and between hospital trusts. This can significantly undermine the credibility of performance information presented to the public and has been the basis of much of the debate around the recent Dr Foster mortality figures. Publishing can also lead to gaming (massaging the numbers to make an organisation look good) and not everything that matters can be measured. That said, publishing data can motivate organisations and those who record data to pay more attention to data accuracy. As with targets, measures can focus attention and publishing comparative benchmarked data puts pressure on providers to improve.

Measuring the quality of health care is a complicated business, but we must not shy away from the challenge or the need for greater transparency. It's important that the NHS is honest with the public about variations in the quality of care and that people understand that health care cannot always be completely safe. Rather than leading to a collapse in confidence, this should be channelled positively to empower patients and the public to put pressure on local organisations to improve, as many non-executives on boards and patient representatives already do.

We are committed to supporting the delivery of high-quality care in the NHS through a range of projects focused on quality measurement, patient-reported outcome measures, quality accounts and information for patient choice, all of which we will report on over the next few months. I hope this work will contribute to what continues to be a challenging but necessary debate within the health service.


Matthew Coleman

Comment date
10 December 2009
Balanced, comparable data, and the sharing of similar experiences (including outcomes) between individuals and organisations, is welcomed as a fair means to quality improvement. However, inaccurate, erroneous and misleading data, with subsequent comparisons and checking, produce very negative influences throughout healthcare. Under normal circumstances we would test the theory that the "intervention of current data comparison" is beneficial rather than harmful. Where is this?

Mary E Hoult

Comment date
05 December 2009
In most cases of reporting the 80/20 rule applies: 80% cheer-lead the process and the other 20% (mainly the patient's voice) is lost. I have attended the NHS IC board meetings for the last six months and found them to be very professional in their reporting. The last meeting, in the absence of the CEO, appeared to fall apart, with the Chair cautioning the executive members of the board that this was a meeting held in public and that he would abandon the meeting if they continued with their behaviour! I do wonder now if they knew what was about to hit the press. Clearly a great deal of public confidence has been eroded by events. I do hope NHS IC can recover.