Introduction

In the last three months in the UK, the media and public have been gripped by the ‘Post Office Horizon scandal’. It is a heart-breaking story of the nationwide implementation of faulty accounting software, known as Horizon, across Post Office branches (the Post Office is the nationwide network of branches offering postal, government, and financial services).1 Poor coding and bugs within the system resulted in accounting errors that were blamed on individual postmasters.2 Despite protesting their innocence, many were suspended, dismissed, prosecuted, or imprisoned; tragically, some even took their own lives.1

While this might seem far removed from health, at its heart is one important point: the presumption of the reliability of computer evidence.3 As we move into an era of increasing integration of artificial intelligence (AI) systems within nursing research and practice, the Horizon scandal serves as a timely reminder that computers are not always right.4

Artificial intelligence: a brief overview

As previously outlined in this journal, AI is an umbrella term that describes ‘techniques used to teach computers to mimic human-like cognitive functions like reasoning, communicating, learning, and decision-making’.5 The potential for AI is immense, given that it possesses the ability to process and integrate complex and heterogeneous data at significant speed.6 This computational capability has been deemed particularly relevant to delivering on aspirations for personalized or precision medicine.7

For example, during the COVID-19 pandemic, researchers were able to harness machine learning (ML) techniques to accurately predict how many ventilators would be required at the hospital level and even which patients might require a ventilator following admission.8 Further, AI is not affected by unconscious psychological factors that might negatively influence human decision-making,9 potentially creating a more equitable system. Returning to the COVID-19 example, AI models trained to analyse large volumes of data could make ‘emotion-free’ decisions regarding the best use of scarce resources, like ventilators, that would deliver the maximum benefit.8

ML is just one of many AI approaches that have been touted for their potential to deliver more effective and equitable healthcare services. Other methods, explained in detail elsewhere,5,10 include natural language processing (NLP), deep learning, robotics, computational simulations, and extended, virtual, or augmented reality.

The potential role of artificial intelligence in nursing research and practice in cardiovascular disease

Common AI approaches, like ML and NLP, could improve risk prediction, decision-making, patient care, service delivery, healthcare professional and patient education, and research methods.11–13 However, as systematic reviews of AI are quick to highlight, these approaches are in their infancy,11 particularly in the field of nursing.14 Indeed, most reports relating to the application of AI, in either a research or clinical practice setting, focus on illustrating their development or evaluating their potential to perform a task, as opposed to randomized trials of their efficacy and impact.11

Despite this, we are beginning to see publications that describe the results of empirical testing. Within this journal, there have been a number of articles demonstrating the potential role of AI in cardiovascular disease (CVD) nursing research and clinical practice. Van Bulck and Moons15 demonstrated that ChatGPT-generated responses to potential patient questions about CVD were generally considered trustworthy and valuable by clinicians. When compared to information retrieved by an equivalent Google search, clinicians indicated ChatGPT responses were superior, thereby highlighting the potential value of an NLP algorithm in supporting patients to access good clinical information.

The same authors went on to examine the ability of NLP systems such as ChatGPT and Google Gemini (formerly Google Bard) to simplify patient information materials that had been published within medical journals. Plain language communication has long been a priority for healthcare settings.16 However, despite best efforts to make texts ‘plain’, ‘lay’, or ‘jargon free’ and at a reading level equivalent to national averages, most efforts appear to fall short of the mark.17 In their analysis, the authors demonstrated the superior ability of some NLP systems to produce simpler, more readable texts compared with human efforts.18
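The readability gap described above is typically quantified with standard formulas such as Flesch Reading Ease. As a minimal, self-contained sketch (not the method used in the cited studies), the following Python snippet scores two hypothetical versions of the same clinical message using a crude vowel-group syllable heuristic; both example sentences are invented for illustration.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as groups of consecutive vowels (crude heuristic)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores indicate easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

# Hypothetical examples: jargon-heavy text vs. a plain-language rewrite
clinical = ("Patients demonstrating refractory symptomatology following "
            "pharmacological intervention may necessitate catheter ablation.")
plain = "If your symptoms do not improve with medicine, you may need a procedure."

print(round(flesch_reading_ease(clinical), 1))
print(round(flesch_reading_ease(plain), 1))
```

On these two sentences, the plain-language rewrite scores markedly higher, which is the kind of difference the cited proof-of-concept study reports between human-written and AI-simplified texts.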

Another example from this journal that demonstrates the potential value of AI comes from Turchioe et al.19 Using an NLP algorithm applied to the electronic health records of 1293 patients undergoing ablation for atrial fibrillation, they were able to identify symptom clusters that resolved or persisted after ablation. They were further able, via logistic regression models, to interrogate symptom prevalence by specific demographic and clinical characteristics. The authors postulate that this information, if replicated in larger samples, may be useful in personalizing medicine, as it might help patients decide whether or not to proceed with an ablation if their goal is symptom relief.19
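The authors' actual models are more elaborate, but the logistic-regression step can be illustrated with a minimal, self-contained sketch on simulated data. Everything below is hypothetical: a binary clinical characteristic (e.g. presence of heart failure) is generated so that symptom persistence after ablation is more likely when it is present, and a one-predictor logistic regression fitted by gradient descent recovers the association as an odds ratio.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit a one-predictor logistic regression by batch gradient descent.
    Returns (intercept, coefficient) on the log-odds scale."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(b0 + b1 * x)))  # predicted probability
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

random.seed(1)
# Simulated cohort: x = 1 if the (hypothetical) characteristic is present,
# y = 1 if symptoms persisted after ablation; persistence is likelier when x = 1.
xs = [int(random.random() < 0.4) for _ in range(500)]
ys = [int(random.random() < (0.6 if x else 0.2)) for x in xs]

b0, b1 = fit_logistic(xs, ys)
print(f"odds ratio for persistent symptoms given the characteristic: {math.exp(b1):.2f}")
```

A positive coefficient (odds ratio above 1) indicates that patients with the characteristic are more likely to report persistent symptoms, which is the kind of output that could inform shared decision-making about proceeding with ablation.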

The possibility for ethical dilemmas

Given the potential of AI to transform health services, it is unsurprising that healthcare professionals are calling for their professions to embrace it.20,21 However, as our opening story reminds us, not all technological innovation delivers improvement. Some leaders in the field of AI in nursing have already asserted that there has been limited critical discourse.6 Other experts caution that there is ‘the risk of greasing the slippery slope’ and have called for greater regulation.22 Most practitioners can probably remember a time in their clinical career when they performed a clinical assessment because they were mandated to do so, even though it was contrary to their clinical judgement. Herein lies the rub with AI: there are multiple unintended consequences that are only now emerging or that we have yet to negotiate, for example the potential erosion of human contact, the loss of professional autonomy, the perpetuation of historical biases, and/or other issues with the validity of decision algorithms.6,22

What’s next?

As papers in this journal have noted,5,21 with the advent of AI, we are entering a new era in healthcare. All healthcare professionals can and should be at the forefront of creating and testing AI systems for healthcare settings, particularly if the downstream effects challenge their autonomy or professional integrity. Those involved in educating future healthcare professionals must embed AI within teaching to ensure future professionals have the knowledge and skills to navigate emerging technologies. All healthcare professionals should be encouraged to build the technological capabilities needed to develop, apply, and interpret AI-generated outputs relevant to their practice. Lastly, they should be supported to foster the critical thinking and advocacy skills that would protect against potential scandals like the one that unfolded at the Post Office.

Data availability

Not applicable.

References

2. Hearn A. How the Post Office's Horizon system failed: a technical breakdown. The Guardian. 2024. https://www.theguardian.com/uk-news/2024/jan/09/how-the-post-offices-horizon-system-failed-a-technical-breakdown (15 March 2024).

3. Christie J. The Post Office Horizon IT scandal and the presumption of the dependability of computer evidence. Digit Evid Electron Signat Law Rev 2020;17:49–70.

4. Computers make mistakes and AI will make things worse—the law must recognize that. Nature 2024;625:631.

5. Van Bulck L, Couturier R, Moons P. Applications of artificial intelligence for nursing: has a new era arrived? Eur J Cardiovasc Nurs 2023;22:e19–e20.

6. Ronquillo CE, Peltonen LM, Pruinelli L, Chu CH, Bakken S, Beduschi A, et al. Artificial intelligence in nursing: priorities and opportunities from an international invitational think-tank of the Nursing and Artificial Intelligence Leadership Collaborative. J Adv Nurs 2021;77:3707–3717.

7. Johnson KB, Wei W-Q, Weeraratne D, Frisse ME, Misulis K, Rhee K, et al. Precision medicine, AI, and the future of personalized health care. Clin Transl Sci 2021;14:86–93.

8. van der Schaar M. Progress using COVID-19 patient data to train machine learning models for healthcare. https://www.cam.ac.uk/research/news/progress-using-covid-19-patient-data-to-train-machine-learning-models-for-healthcare (16 April 2024).

9. Du M. Machine vs. human, who makes a better judgment on innovation? Take GPT-4 for example. Front Artif Intell 2023;6:1206516.

10. Samant S, Bakhos JJ, Wu W, Zhao S, Kassab GS, Khan B, et al. Artificial intelligence, computational simulations, and extended reality in cardiovascular interventions. JACC Cardiovasc Interv 2023;16:2479–2497.

11. O'Connor S, Yan Y, Thilo FJS, Felzmann H, Dowding D, Lee JJ. Artificial intelligence in nursing and midwifery: a systematic review. J Clin Nurs 2023;32:2951–2968.

12. Hobensack M, Von Gerich H, Vyas P, Withall J, Peltonen L-M, Block LJ, et al. A rapid review on current and potential uses of large language models in nursing. Int J Nurs Stud 2024;154:104753.

13. Eftekhari H. Transcribing in the digital age: qualitative research practice utilizing intelligent speech recognition technology. Eur J Cardiovasc Nurs 2024;23:553–560.

14. Mitha S, Schwartz J, Hobensack M, Cato K, Woo K, Smaldone A, et al. Natural language processing of nursing notes: an integrative review. Comput Inform Nurs 2023;41:377–384.

15. Van Bulck L, Moons P. What if your patient switches from Dr. Google to Dr. ChatGPT? A vignette-based survey of the trustworthiness, value and danger of ChatGPT-generated responses to health questions. Eur J Cardiovasc Nurs 2023;23:95–98.

16. Warde F, Papadakos J, Papadakos T, Rodin D, Salhia M, Giuliani M. Plain language communication as a priority competency for medical professionals in a globalized world. Can Med Educ J 2018;9:e52–e59.

17. Shiely F, Daly A. Trial lay summaries were not fit for purpose. J Clin Epidemiol 2023;156:105–112.

18. Moons P, Van Bulck L. Using ChatGPT and Google Bard to improve the readability of written patient information: a proof of concept. Eur J Cardiovasc Nurs 2024;23:122–126.

19. Turchioe MR, Volodarskiy A, Guo W, Taylor B, Hobensack M, Pathak J, et al. Characterizing atrial fibrillation symptom improvement following de novo catheter ablation. Eur J Cardiovasc Nurs 2024;23:241–250.

20. Rony MKK, Parvin MR, Ferdousi S. Advancing nursing practice with artificial intelligence: enhancing preparedness for the future. Nurs Open 2024;11.

21. Woo B, Huynh T, Tang A, Bui N, Nguyen G, Tam W. Transforming nursing with large language models: from concept to practice. Eur J Cardiovasc Nurs 2024;23:549–552.

22. Thamman R, Yong CM, Tran AH, Tobb K, Brandt EJ. Role of artificial intelligence in cardiovascular health disparities. JACC Adv 2023;2:100578.

Author notes

The opinions expressed in this article are not necessarily those of the Editors of the European Heart Journal or of the European Society of Cardiology.

Conflict of interest: In line with the journal’s conflict of interest policy, this paper was handled by Jeroen Hendriks.

This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://dbpia.nl.go.kr/pages/standard-publication-reuse-rights)
