Faye Forsyth, Liesbet Van Bulck, Bo Daelman, Philip Moons, When the computer says yes, but the healthcare professional says no: artificial intelligence and possible ethical dilemmas in health services, European Journal of Cardiovascular Nursing, Volume 23, Issue 8, November 2024, Pages e165–e166, https://doi.org/10.1093/eurjcn/zvae059
Introduction
In the last 3 months in the UK, the media and public have been gripped by the ‘Post Office Horizon scandal’. It is a heartbreaking story of the nationwide implementation of faulty accountancy software, known as Horizon, across Post Office branches (the Post Office is the nationwide network of branches offering postal, Government, and financial services).1 Poor coding and bugs within the system resulted in accounting errors, which were blamed on individual postmasters.2 Despite protesting their innocence, many were suspended, dismissed, prosecuted, or imprisoned; tragically, some even took their own lives.1
While this might seem far removed from health, at its heart lies one crucial point: the presumption that computer evidence is reliable.3 As we move into an era of increasing integration of artificial intelligence (AI) systems within nursing research and practice, the Horizon scandal serves as a stark reminder that computers are not always right.4
Artificial intelligence: a brief overview
As previously outlined in this journal, AI is an umbrella term that describes ‘techniques used to teach computers to mimic human-like cognitive functions like reasoning, communicating, learning, and decision-making’.5 The potential for AI is immense, given its ability to process and integrate complex and heterogeneous data at significant speed.6 This computational capability has been deemed particularly relevant to delivering on aspirations for personalized or precision medicine.7
For example, during the Covid-19 pandemic, researchers were able to harness machine learning (ML) techniques to accurately predict how many ventilators would be required at a hospital level, and even which patients might require a ventilator following admission.8 Further, AI is not affected by the unconscious psychological factors that can negatively influence human decision-making,9 potentially creating a more equitable system. Again using the Covid-19 example, AI models trained to analyse large volumes of data could make ‘emotion-free’ decisions about how best to use scarce resources, like ventilators, to deliver the maximum benefit.8
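To make this concrete, the sketch below shows the general shape of such a prediction task: a classifier trained on admission data, whose predicted probabilities can be summed to estimate cohort-level ventilator demand. It is a minimal illustration on synthetic data; the features, model, and numbers are assumptions for demonstration and are not those of the cited studies.8

```python
# Illustrative sketch only: predicting ventilator need from routine
# admission data using synthetic features, then aggregating predicted
# probabilities to estimate cohort-level demand. Not the cited models.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Synthetic admission features: age, SpO2, respiratory rate, CRP
X = np.column_stack([
    rng.normal(65, 15, n),    # age (years)
    rng.normal(93, 4, n),     # oxygen saturation (%)
    rng.normal(22, 5, n),     # respiratory rate (breaths/min)
    rng.gamma(2.0, 40.0, n),  # C-reactive protein (mg/L)
])

# Synthetic outcome loosely tied to hypoxia and inflammation
logit = 0.04 * (X[:, 0] - 65) - 0.3 * (X[:, 1] - 93) + 0.01 * X[:, 3] - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print(f"Discrimination (AUC): {roc_auc_score(y_test, probs):.2f}")
print(f"Estimated ventilators needed in test cohort: {probs.sum():.0f}")
```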
Techniques such as ML are just one of many AI approaches that have been touted for their potential to deliver more effective and equitable healthcare services. Other methods, explained in detail elsewhere,5,10 include natural language processing (NLP), deep learning, robotics, computational simulations, and extended, virtual, or augmented reality.
The potential role of artificial intelligence in nursing research and practice in cardiovascular disease
Common AI approaches, like ML and NLP, could improve risk prediction, decision-making, patient care, service delivery, healthcare professional and patient education, and research methods.11–13 However, as systematic reviews of AI are quick to highlight, these approaches are in their infancy,11 particularly in the field of nursing.14 Indeed, most reports on the application of AI, in either a research or a clinical practice setting, focus on describing the development of these tools or evaluating their potential to perform a task, rather than on randomized trials of their efficacy and impact.11
Despite this, we are beginning to see publications that describe the results of empirical testing. Within this journal, a number of articles have demonstrated the potential role of AI in cardiovascular disease (CVD) nursing research and clinical practice. Van Bulck and Moons15 demonstrated that ChatGPT-generated responses to potential patient questions about CVD were generally considered trustworthy and valuable by clinicians. When the responses were compared with information retrieved by an equivalent Google search, clinicians judged the ChatGPT responses superior, highlighting the potential value of an NLP algorithm in supporting patients to access good clinical information.
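For readers unfamiliar with how such systems can be queried programmatically, the sketch below poses a patient-style question to a chat model via the OpenAI Python SDK. The model name and prompts are illustrative assumptions; the cited study15 evaluated responses from the public ChatGPT interface rather than the API.

```python
# Minimal sketch of querying a chat model with a patient question,
# loosely mirroring the workflow described above. Model name and
# prompts are illustrative assumptions, not the cited study's setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model, for illustration only
    messages=[
        {"role": "system",
         "content": "Provide clear, patient-friendly health information."},
        {"role": "user",
         "content": "What lifestyle changes help after a heart attack?"},
    ],
)
print(response.choices[0].message.content)
```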
The same authors went on to examine the abilities of NLP systems like ChatGPT and Google Gemini (formerly Google Bard) to simplify patient information materials published within medical journals. Plain-language communication has long been a priority for healthcare settings.16 However, despite best intentions to make texts ‘plain’, ‘lay’, or ‘jargon free’, and to pitch them at a reading level equivalent to national averages, most efforts appear to fall wide of the mark.17 In their analysis, the authors demonstrated the superior ability of some NLP systems to produce simpler, more readable texts compared to human efforts.18
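Readability of this kind is typically quantified with standard formulas such as Flesch-Kincaid. The sketch below shows one way to score an original and a simplified text using the third-party textstat package; the example sentences are invented, and the cited analysis18 may have used a different set of metrics.

```python
# Minimal sketch of scoring text readability with standard formulas,
# using the third-party `textstat` package (pip install textstat).
# The example texts are invented for illustration.
import textstat

original = (
    "Patients exhibiting refractory hypertension despite optimal "
    "pharmacotherapy may be candidates for renal denervation."
)
simplified = (
    "If your blood pressure stays high even with the right medicines, "
    "a procedure on the kidney nerves may help."
)

for label, text in [("Original", original), ("Simplified", simplified)]:
    # Flesch-Kincaid grade approximates the US school grade needed to
    # read the text; reading ease is higher for easier text (0-100)
    grade = textstat.flesch_kincaid_grade(text)
    ease = textstat.flesch_reading_ease(text)
    print(f"{label}: grade level {grade:.1f}, reading ease {ease:.1f}")
```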
Another example from this journal that demonstrates the potential value of AI comes from Turchioe et al.19 Using an NLP algorithm applied to the electronic health records of 1293 patients undergoing ablation for atrial fibrillation, they were able to identify symptom clusters that resolved or persisted post-ablation. Via logistic regression models, they were further able to interrogate symptom prevalence by specific demographic and clinical characteristics. The authors postulate that this information, if replicated in larger samples, may be useful in personalizing medicine, as it might help patients decide whether or not to proceed with an ablation if their goal is symptom relief.19
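The analytical pattern is sketched below: flag symptom mentions in free-text notes (here with crude keyword matching, a simplistic stand-in for the study's NLP algorithm) and then summarize prevalence by patient characteristics. All data and symptom terms are invented; the cited study19 additionally fitted logistic regression models on its full cohort.

```python
# Illustrative sketch: extract symptom mentions from clinical notes with
# simple keyword matching (a crude stand-in for a real NLP algorithm),
# then tabulate symptom prevalence by patient characteristics.
# All data below are invented.
import pandas as pd

SYMPTOMS = ["palpitations", "fatigue", "dyspnea", "dizziness"]

notes = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "female": [0, 1, 1, 0],
    "post_ablation_note": [
        "Reports ongoing palpitations and fatigue.",
        "No dyspnea; occasional dizziness on standing.",
        "Symptom free since the procedure.",
        "Mild fatigue, otherwise well.",
    ],
})

# Flag each symptom term appearing in the post-ablation note.
# Note: naive matching ignores negation ("No dyspnea" still matches),
# one reason real systems use proper NLP rather than keywords.
for symptom in SYMPTOMS:
    notes[symptom] = (
        notes["post_ablation_note"]
        .str.contains(symptom, case=False)
        .astype(int)
    )

# Symptom prevalence by sex; a full analysis would use regression models
print(notes.groupby("female")[SYMPTOMS].mean())
```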
The possibility for ethical dilemmas
Given the potential of AI to transform health services, it is unsurprising that healthcare professionals are calling for their professions to embrace it.20,21 However, as our opening story reminds us, not all technological innovation is an improvement. Some leaders in the field of AI in nursing have already asserted that critical discourse has been limited.6 Other experts caution that there is ‘the risk of greasing the slippery slope’ and have called for greater regulation.22 Most practitioners can probably remember a time in their clinical career when they performed a clinical assessment because they were mandated to do so, even though it ran contrary to their clinical judgement. Herein lies the rub with AI: there are multiple unintended consequences that are only now emerging or that we have yet to negotiate, for example, the potential erosion of human contact, the loss of professional autonomy, the perpetuation of historical biases, and/or other issues with the validity of decision algorithms.6,22
What’s next?
As papers in this journal have noted,5,21 with the advent of AI, we are entering a new era in healthcare. All healthcare professionals can and should be at the forefront of creating and testing AI systems for healthcare settings, particularly where the downstream effects may challenge their autonomy or professional integrity. Those involved in educating future healthcare professionals must embed AI within teaching, to ensure future professionals have the knowledge and skills to navigate these imminent technologies. All healthcare professionals should be encouraged to build the technological capability to develop, apply, and interpret AI-generated outputs relevant to their practice. Lastly, they should be supported in fostering the critical thinking and advocacy skills that would protect against potential scandals like the one that unfolded at the Post Office.
Data availability
Not applicable.
Author notes
The opinions expressed in this article are not necessarily those of the Editors of the European Journal of Cardiovascular Nursing or of the European Society of Cardiology.
Conflict of interest: In line with the journal’s conflict of interest policy, this paper was handled by Jeroen Hendriks.