Using statistics to detect poor practice

John Appleby explores the extent to which poor performance by individual clinicians could be reflected in data available from routine NHS statistics.

Publication:  Health Service Journal
Reference:  HSJ vol 115, no 5958, p 21

In 2000 the Department of Health published the Ritchie report, an inquiry into the medical career of gynaecological surgeon Dr Rodney Ledward. Covering a 16-year period of his career, the report was damning: it raised nearly 300 criticisms of his clinical practice, involving nearly 150 patients.

The report's criticisms of Dr Ledward included unnecessary operations, excessive numbers of surgical errors, poor care in general and undue pressure on some patients to get private care.

Many women suffered permanent physical damage as a result of his malpractice.

Dr Ledward was unusual, both in terms of his behaviour and in the detailed record of (part of) his practice that the Ritchie inquiry pieced together.

But an obvious question – and one asked by Birmingham University researchers in a BMJ article in April – is the extent to which his excessively poor performance was reflected in data available from routine NHS statistical systems, in particular hospital episode statistics.

The answer, as Dr Mike Harley and his university colleagues showed, was that it is possible to spot Dr Ledward using a carefully selected set of basic statistics (length of stay, proportion of hysterectomies on women under 40 etc) and some fancy statistical work to identify clinical outliers.

As the researchers note, there are problems with this analysis and care needs to be taken in interpreting the results. In a way, it is like a screening tool with a degree of imprecision: there will be false positives (outliers who are not poorly performing) and false negatives (poorly performing doctors not identified as outliers).
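The logic of such a screening approach can be sketched in miniature: compare each clinician's value on a routine indicator against the group, and flag anyone far from the norm. The sketch below uses invented clinician labels and length-of-stay figures purely for illustration; it is not the method or data of the BMJ study.

```python
import statistics

# Hypothetical indicator values for ten clinicians: mean length of stay
# in days. The labels and figures are invented for illustration only.
length_of_stay = {
    "A": 3.1, "B": 2.8, "C": 3.4, "D": 2.9, "E": 3.0,
    "F": 3.2, "G": 2.7, "H": 3.3, "I": 3.0, "J": 6.5,
}

mean = statistics.mean(length_of_stay.values())
sd = statistics.stdev(length_of_stay.values())

# Flag clinicians more than two standard deviations from the group mean.
# Like any screening threshold, this trades false positives (unusual but
# sound practice) against false negatives (poor practice near the mean).
outliers = {c: round((v - mean) / sd, 2)
            for c, v in length_of_stay.items()
            if abs(v - mean) > 2 * sd}
print(outliers)
```

A single indicator like this proves nothing on its own; the point, as with the study's richer set of measures, is to direct further investigation, not to deliver a verdict.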

However, lack of perfection should not hamper further investigation of such an approach.

Apart from delving further into the statistics for gynaecology, work could be done in other specialties. Although the Society of Cardiothoracic Surgeons has been instrumental in promoting the publication of data on cardiac surgeons' performance, there has been little from other specialties. The work of Dr Harley and his colleagues shows that it is possible to investigate performance using more sophisticated measures than, for example, mortality alone.

There will always be medical resistance to this, but as Jan Chalmers, a nurse member of the Ritchie inquiry, told the BMJ: 'In retrospect, if the kind of statistical analysis of hospital data that [Dr Harley et al] now report had been presented in an easily understood format during the early 1990s, it seems likely that fewer women would have suffered.'

© 2005 Emap