The article by van Veghel et al. [1] describes a national initiative in the Netherlands termed ‘Meetbaar Beter’, or in English ‘Measurably Better’. According to its website, this multicentre effort ‘aims to improve quality and transparency of care for patients with heart diseases by measuring limited patient-relevant outcome measures’ (http://www.meetbaarbeter.com/). Those outcomes include survival, degree of health/recovery, time to recovery and return to normal activity, disutility of the care or treatment process, sustainability of health/recovery and nature of recurrences and long-term consequences of the therapy. By doing this, they propose to implement a ‘Value Based Healthcare Theory’ by ‘measuring patient relevant outcomes and sharing and adopting each others best practices’.

This programme, which began in 2012 and is termed the Netherlands Joint Outcomes and Transparency Initiative, is a voluntary cooperative of 14 of the 16 heart centres in the country. The initial results of that effort, in 86 000 patients treated at 12 of those hospitals for one of three diseases, are reported in the accompanying article. The conclusion of this study is that ‘annual data collection of patient relevant outcomes appears to be feasible’. Furthermore, the authors conclude that transparency drives quality improvement and that using a limited set of outcome measures enables comparisons and ‘exposes the quality of decision-making’. Lastly, they conclude that transparent communication is feasible, safe and cost-effective, and that it stimulates professional decision-making and disease management.

While the lofty goals of this programme are laudable, the obstacles to actually effecting meaningful change are huge, and the degree to which this effort has made disease-based healthcare ‘measurably better’ is not immediately apparent from this report. The evidence in this report to support the stated conclusions is somewhat lacking. While we are totally supportive of the transparency of outcomes and of effecting change towards patient-centric, value-based care, it is not clear that this programme, which has been in place for 4 years, has made significant progress to date in achieving those goals. Everyone agrees that accurate registries are useful because ‘knowledge is power’. But this initiative collects data from different centres, and from different databases within those centres, without clear quality control or data monitoring, at a significant centre-borne cost. A report is then produced without clear statistical methodology and with what appears to be a large amount of missing data.

A few specific examples that raise these concerns are as follows:

Firstly, the results of a number of disease-specific outcomes are reported, yet we are given no information on the rate of missing data, which raises the potential for reporting bias. There are 14 hospitals participating, and data on two diseases, coronary artery and aortic valve disease, are presented from 12 hospitals. Any data with >10% missing fields were excluded. Only patients who were alive and completed a pre- and post-information survey were included, and heart centres with more than one missing condition were excluded. No information is given on exactly what proportion of patients was excluded. Data are available for 5 years for coronary artery bypass grafting (CABG), 3 years for percutaneous coronary intervention (PCI) and none for medical therapy. No significant differences were reported in PCI outcomes on the basis of 1-year mortality (8 centres) or target vessel revascularization (TVR) (5 centres); yet we are told that one centre initiated a project to lower TVR. One is left not knowing how this national programme made PCI outcomes ‘measurably better’. Survival at 120 days after CABG is reported from seven centres, with one outlier of statistically better performance and the remainder statistically the same. The authors state ‘the observed variance between the other heart centers should be interpreted as natural variance. It was hypothesized that the use of a stringent perioperative safety check was the most striking difference in surgical practice amongst the centers’. One is left wondering how this conclusion is drawn from the data presented!

Secondly, the programme collects data on outcome measures of three medical conditions: coronary artery disease (CAD), aortic valve disease and atrial fibrillation. The authors state that outcomes analyses ‘should be based on patient groups with the same medical condition’. They measure the performance of surgery, catheter-based intervention and medical therapy for these three diseases. It is not clear how they combined both inpatient and outpatient databases, without common definitions, to arrive at disease-based metrics of success. It is unclear how monitoring by disease state can yield meaningful information. How does one analyse the treatment of CAD by three different treatments (CABG, PCI, medical therapy) and arrive at a value-based outcome analysis? How does one know that the Heart Team is making the right or wrong decision in choosing each of the three treatment options?

Thirdly, it is not clear how this programme differs from any other public reporting initiative. The authors state that there is a national registry that has reported all in-hospital mortality and morbidity after cardiac surgery since 2013. It is unclear what additional value the Meetbaar Beter programme has added to this registry, and it is not readily apparent how it has affected decision-making. Regarding PCI, it is stated that an in-depth analysis of outcomes resulted in process improvement in several heart centres, leading to increases in prehydration and in the need for target vessel revascularization within a year. Yet no evidence is cited to support that claim.

Another example of this lack of clarity concerns how change has been effected in 120-day mortality, which is presented as a funnel plot for consolidated aortic valve disease [patients treated with transcatheter aortic valve implantation (TAVI) or surgical aortic valve replacement (AVR) at seven centres]. All centres lie within the funnel, meaning that there is no significant difference in outcomes at any of the centres. What do we learn here? We do not know the proportion of TAVI versus AVR per centre. We do not know whether outcomes at some centres are better for TAVI and at others for AVR. Has there been any change at any centre as a result of ‘measuring better’?
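To make the funnel-plot logic concrete: a centre is flagged only if its observed mortality rate falls outside control limits drawn around the pooled rate, limits that widen as centre volume shrinks. The sketch below, with entirely hypothetical centre data and a simple normal-approximation limit (the published analysis may use exact binomial or over-dispersed limits), shows why seven centres can all sit ‘within the funnel’ and yield no actionable signal.

```python
import math

def funnel_limits(pooled_rate, n, z=1.96):
    """Approximate 95% control limits for a centre treating n patients,
    using the normal approximation to the binomial around the pooled rate."""
    se = math.sqrt(pooled_rate * (1 - pooled_rate) / n)
    return (max(0.0, pooled_rate - z * se), min(1.0, pooled_rate + z * se))

def classify_centres(centres):
    """centres: list of (name, deaths, volume) tuples.
    Returns a dict mapping centre name to its funnel-plot status."""
    pooled = sum(d for _, d, _ in centres) / sum(n for _, _, n in centres)
    status = {}
    for name, deaths, n in centres:
        lo, hi = funnel_limits(pooled, n)
        rate = deaths / n
        if rate > hi:
            status[name] = "worse than expected"
        elif rate < lo:
            status[name] = "better than expected"
        else:
            status[name] = "within natural variance"
    return status

# Hypothetical 120-day mortality data (name, deaths, volume) for seven centres;
# these numbers are illustrative only, not taken from the article.
centres = [("A", 12, 400), ("B", 9, 350), ("C", 15, 500), ("D", 8, 300),
           ("E", 20, 600), ("F", 10, 380), ("G", 11, 420)]
print(classify_centres(centres))
```

With these illustrative figures every centre lands inside the limits, mirroring the article's finding: the plot excludes gross outliers but, by itself, says nothing about whether any centre changed its practice.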

The epidemiologist Ioannidis has written: ‘On occasion, changes may indeed improve mortality rates. It is almost equally likely that these changes may worsen mortality rates, e.g., if performance and quality data are misguided and lead to changes that deteriorate aspects of care that matter, while focusing attention on improving trivia’ [2]. Or these changes may simply cost money and effort to set up and maintain the performance evaluation and quality audit machinery, but achieve nothing for patient outcomes. By acting on outcome differences between hospitals that are not statistically significant, the quality of care could also deteriorate rather than improve. In the absence of (adequate) data, improvement initiatives are better based on qualitative analyses such as process analysis and analysis of communication or work culture issues.

While this is an ambitious national multicentre initiative aimed at achieving patient-centred healthcare value, it is not clear from this study exactly how much the ‘needle has moved’. It is also not apparent how much of any change that may have occurred is truly causation and not merely association. Don Berwick, former administrator of the Centers for Medicare and Medicaid Services in the USA, has recently stated, as the first of nine steps to improve healthcare, that we must stop excessive measurement [3]. Stated another way, just because you step on a scale does not mean you lose weight. While the goals of this programme are laudable and beyond reproach, the evidence that change is actually occurring is not evident from this report. We would encourage the leaders of this national initiative to provide greater clarity on the results actually achieved and on the quality improvement in patient-relevant outcomes that they can directly credit to the programme. They are stepping on the scales and weighing a lot, but is any weight really being lost?

REFERENCES

1. van Veghel D, Marteijn M, de Mol B. First results of a national initiative to enable quality improvement of cardiovascular care by transparently reporting on patient-relevant outcomes. Eur J Cardiothorac Surg 2016;49:1660–9.

2. Freedman DH. Lies, Damned Lies, and Medical Science. The Atlantic [published November 2010].

3. Stempniak M. Don Berwick Offers Health Care 9 Steps to End Era of ‘Complex Incentives’ and ‘Excessive Measurement’. Hospitals & Health Networks [published 11 December 2015].