While I think information-sharing initiatives like this work best with the support and confidence of the individuals involved (or at least with the blessing of their representative organisations), I shared Stephen Dorrell’s surprise that, as public employees, doctors could keep this data unseen in the first place.
But I can understand why some doctors might object to their professional data being made public: perhaps they have doubts about data quality, the risk-adjustment methodology or how the data might be misrepresented. Numbers, red/amber/green ratings and the like are powerful influencers, as anyone who, like me, has a friend with a suspiciously convenient statistic to hand to back up any argument will know.
Getting data right is not always easy. Clinical coding remains imperfect; risk adjustment is complex and can never capture every relevant factor; and small numbers can lead to confidence intervals so large as to render observed variation meaningless. These and other data issues need to be (and are being) addressed as comprehensively as possible, and the caveats around the data need to be presented and explained clearly.
Given these issues, and the long (if slowly changing) tradition of autonomy and self-regulation we afford our doctors, I was, if anything, heartened that only 4 per cent of this first tranche had objected to their outcomes data being published, especially as this is essentially a government-led, not a profession-led, initiative.
But in principle, is publishing outcomes like this the right thing to do?
We should be honest about the evidential and theoretical basis for this policy. Will it improve quality? We have evidence from profession-led initiatives in cardiac surgery that associates the publication of data with improved outcomes, with no negative consequences. But of course, no causal relationship between publication and improved outcomes has been proved. There are other studies on hospital-level data, but that is a different issue, one we explore in a previous blog on the health and social care ratings review, and there the findings are mixed. However, limited evidence of impact is usually all you get to go on when you innovate, as is happening in this case.
Is there a theory for how publishing this outcomes data might improve quality? There are several, although we don’t know which will work in practice. Data publication might give the poor-performing ‘knaves’ among clinicians no hiding place and force them to improve or stop carrying out certain procedures and operations. It might give the naturally competitive ‘knights’ a push to improve to be the best among their peers. It might genuinely reveal differences in relative performance that doctors were simply not paying sufficient attention to through existing audit systems, and, once they are made aware of these differences, their professional duty to patients will make them want to improve. It might give managers and commissioners the ammunition they need to tackle the poor or outlier performers that they have, up to now, failed to hold to account. And, if we’re speculating, it might even work through patients using the information to make choices about who to be treated by.
But is this the only way to improve quality? Of course not, and neither is it likely to be the best. But we need multiple approaches to tackling poor quality and to improving performance, and on balance this seems like a plausible one to try.
Perhaps most importantly though, transparency has an inherent moral value that makes this a good idea, despite the weak evidence and the multiple competing theories about how it works. In our publicly funded health care system, which aims to put patients' interests first and foremost, this data should be in the public domain. It should be part of our modern interpretation of medical ethics and professionalism. Having 100 per cent of the doctors involved happy about it straightaway is not realistic, but transparency has to be the right thing to aim for.