Overview
A great deal of mental health data is collected but gaps remain, especially around outcomes and patient experience.
Of the data that is collected, quality and completeness are often poor.
Data collection currently places a large burden on staff yet provides them with limited value in their daily roles.
Lack of good quality data means it is hard to understand variations in services and how the sector is performing.
A great deal of mental health data is collected but gaps remain
In this Mental health 360 we have relied on published data from various organisations and national bodies. This includes prevalence data, national surveys of patient and staff experience, data associated with delivery of national targets and plans, and operational data shared through bodies such as NHS Benchmarking. However, when turning data into insights, a consistent challenge has been the need to piece together disparate datasets, and the notable gaps that make it difficult for policy-makers, practitioners and patients to understand what is going on.
The Mental Health Services Data Set (MHSDS), established in 2015, aimed to provide a single mechanism for submitting data for providers of mental health care. It has expanded over time but is largely focused around areas of care where there are specific standards or delivery commitments, serving as a mechanism to track implementation. Given that these standards only cover a limited number of service areas, published data leaves significant gaps, and provides limited insights into pathways of care.
The data available also focuses primarily on access, which is only one section of the patient pathway. For example, there is no centralised data on treatments people receive, so trusts have to rely on national or local clinical audits, which are not an effective or viable mechanism for assessing routine interventions for the populations being served. As others have highlighted (UK Parliament, Office for Statistics Regulation (OSR) and Getting It Right First Time (GIRFT) programme), without effective and standardised outcomes it is impossible to assess the effectiveness of a service and the treatment it provides. This gap has been recognised; NHS England commissioned guidance on use of patient-reported outcome measures (PROMs) in services for adults and older adults and has been reporting nationally on outcomes metrics since April 2023.
A key issue encountered during the research for this report has been the timeliness of data. The Adult Psychiatric Morbidity Survey is supposed to run every seven years (it was last published in 2016; the next survey, conducted in 2022, is due to be published during 2024). Lack of timely data limits the ability of commissioners and providers to plan for changes in demand. Also, national targets that are based on population levels of prevalence as a component of expected access are unlikely to reflect current levels of need.
A further area where there is a gap in the data is patient experience. Nationally, people who have used community mental health services are surveyed annually to capture their experience of care, but mental health inpatients are excluded from the annual adult inpatient survey. The recent rapid review into data in mental health inpatient settings identified gaps in capturing ‘the voice of people in hospital and their carers about the safety and quality of care and their involvement in care’. This has contributed to failures in being able to identify and address instances of poor care.
Data completeness and quality are both lacking
The data that is collected is sometimes poor, both in terms of completeness (whether all providers submit data) and quality. Quality of data is described as ‘very variable across every metric that we examined’ (GIRFT programme), while a recent National Audit Office (NAO) report states that ‘many of the datasets that are important for managing and developing services and measuring improvement, including access to services and costs, still lack the required completeness and quality’.
The Mental Health Services Monthly Statistics are not classed as official statistics because they do not reach the quality or completeness level required for this classification. The data quality report on this dataset shows that in November 2023, only 64% of all requested data was collected; once invalid or 'not known' responses are excluded, the proportion of accurate data falls to 50%. This varies by data category; ethnicity had 97% completeness and 78% accurate data, but disability had only 17% completeness and 11% accurate data. This problem is particularly acute for independent sector providers, who scored just 45% on the Data Quality Maturity Index (an NHS England measure of data quality) in August 2023 compared with 92.8% for NHS providers.
With the continual expansion of this dataset, newer data collections are expected to improve over time. However, since the decision was taken to retire data collected by the Department of Health and Social Care on the number of detentions under the Mental Health Act in favour of the MHSDS, that data has been incomplete. As the annual report on Mental Health Act statistics states, 'not all providers submit data, some submit incomplete data'. Trend comparisons are also affected by changes in data quality. The result is that, for the only group of patients subject to involuntary detention, the state is unable to account for how the Act is being used.
Mental health is often seen as the 'poor relation' to physical health, and lack of parity is reflected in data collection, both in gaps in data as discussed in the previous section, and in the completeness and quality of the data that is collected. For example, data on out-of-area placements in mental health services had a 75% response rate in October 2023, which contrasts with acute datasets such as the elective care waiting list and diagnostic waits, which had 100% and 96% response rates respectively in their most recent (at the time of writing) publications.
The NHS Mental Health Implementation Plan committed to substantially improving quality in mental health data. Some progress has been made; the number of providers submitting data to the core mental health service dataset increased from 85 to 364 between 2016 and 2022, and further work is planned.
Data collection is a substantial burden for staff
The mental health workforce currently faces a significant data collection burden. A recent rapid review into data in inpatient services stated that 'some clinicians reported spending half or more of their time entering data'. This is worsened by duplication of data entry and outdated systems. Staff reported that 'data was often not fed back or [was] reported in a timescale so far from "real time" as to lack utility'. This is not only burdensome for staff, but also reduces the likelihood that trusts will submit data. The review found that independent providers are on average less likely to submit data; those providers reported the data burden to be very high and said they did not get value from the data they provide.
Poor data makes it difficult to understand variation and performance
High levels of variation are a consistent feature of data collected in mental health. As already explained, gaps in data and issues with data quality and completeness mean that it is difficult to understand what is going on in the sector – what variation there is, what that variation means, and what ‘good’ looks like. In an NAO survey of integrated care boards, only 2 out of 29 said they had all or most of the data needed to assess variations in patients’ access, experiences and outcomes.
For example, looking at ethnic inequalities in care, the Care Quality Commission (CQC) found that an increasing proportion of entries are recorded as ‘not known’ and ‘not stated’, making it more difficult for organisations to interrogate and use data to address potential inequalities and ensure that services are meeting the needs of individuals. For mental health services, this will reduce their ability to understand variation in referrals, treatments and deaths by ethnicity. This was echoed in a 2022 review by the NHS Race and Health Observatory that found ‘few national datasets with sufficiently high-quality ethnic monitoring data to allow for robust analysis to investigate ethnic inequalities’.
This lack of comprehensive and robust data means it is difficult for providers to tackle variation. The GIRFT programme analysis stated that ‘inconsistency in data quality and reporting, including the extremely limited use of outcome measures... is a major factor behind unwarranted variation across mental health services’.
It is also challenging for providers to plan and provide care more generally and assess their relative performance. For example, the NHS Long Term Plan contained several ambitions to increase access to mental health services, but inconsistent data collection makes it difficult to assess whether the NHS is on track to achieve those ambitions.