This response refers to ‘Letter to the editor – Dr. ChatGPT in cardiovascular nursing: a deeper dive into trustworthiness, value, and potential risks’ by P.P. Ray and P. Majumder, https://doi.org/10.1093/eurjcn/zvad047.

In the letter to the editor entitled ‘Dr. ChatGPT in Cardiovascular Nursing: A Deeper Dive into Trustworthiness, Value, and Potential Risks’,1 a critical appraisal was provided regarding our recently published study on the trustworthiness, value, and danger of ChatGPT-generated responses to health questions.2 In the letter, the value of our survey was acknowledged and some points of critique were given. We want to thank Ray and Majumder for their interest in our work and for their critical reflections.

Ray and Majumder’s argument that our study did not directly compare ChatGPT with other sources, such as Google or traditional patient education materials, is valid. Furthermore, we indeed did not assess the impact of using ChatGPT on patient behaviour or outcomes. Such extensive evaluations would certainly provide deeper insights into the potential benefits and/or drawbacks of patients’ use of ChatGPT. However, this was beyond the aim of our snapshot evaluation. Our objective was to provide a first, brief evaluation of the value of ChatGPT responses for patients, to prompt reflection and debate about the value and risks of using such a language model. Such a snapshot evaluation is by no means conclusive; rather, it stimulates discussion and generates new research questions. Several other brief vignette-based evaluations of ChatGPT have been conducted and published over the past weeks.3–5 The contribution of Ray and Majumder to the scientific debate is highly valuable and offers suggestions for further research.

The authors correctly note that AI technology is advancing rapidly: technology launched today may be outdated within a month. ChatGPT, which was launched on 30 November 2022, was initially based on GPT-3.5; GPT-4 was released on 14 March 2023.6 On 4 May 2023, Microsoft made its new search engine Bing publicly available. The New Bing is a search engine that incorporates GPT technology. Its chat feature is similar to ChatGPT, but it has already eliminated some of ChatGPT’s critical limitations. As discussed in our editorial and research letter, ChatGPT was only trained on data up to 2021, and its transparency was limited by the absence of references.2,7 The New Bing provides references for the information it supplies and offers up-to-date information. Hence, it would be worthwhile to replicate our vignette-based methodology using information provided by the New Bing. Nevertheless, it is very likely that new advancements will appear in this lightning-fast-evolving field in the coming weeks.

As Ray and Majumder pointed out, our results might have been susceptible to confirmation bias, since the experts were selected from our professional network. However, the experts were asked to provide an honest assessment of the answers, and no suggestions were made regarding the accuracy of ChatGPT. Furthermore, if confirmation bias had been present, the findings would have been less favourable, since we are sceptical towards the clinical use of the current chatbots by either clinicians or patients. Prior to our survey, and even today, we believe that the use of these chatbots in clinical practice should not yet be endorsed. Our hesitation is corroborated by recent evaluations.3

In conclusion, we concur with most of the criticisms raised by Ray and Majumder. However, as also mentioned in our research letter, our study was only a first step in exploring the usefulness of language models, such as ChatGPT, for patients. This is just the beginning of exciting developments of AI in healthcare in general, and in nursing specifically.8

Data availability

This letter does not include original data.

References

1. Ray PP, Majumder P. Dr. ChatGPT in cardiovascular nursing: a deeper dive into trustworthiness, value, and potential risks. Eur J Cardiovasc Nurs 2024;23:e11–e12.

2. Van Bulck L, Moons P. What if your patient switches from Dr. Google to Dr. ChatGPT? A vignette-based survey of the trustworthiness, value and danger of ChatGPT-generated responses to health questions. Eur J Cardiovasc Nurs 2024;23:95–98.

3. Lee P, Bubeck S, Petro J. Benefits, limits, and risks of GPT-4 as an AI chatbot for medicine. N Engl J Med 2023;388:1233–1239.

4. Hirosawa T, Harada Y, Yokose M, Sakamoto T, Kawamura R, Shimizu T. Diagnostic accuracy of differential-diagnosis lists generated by generative pretrained transformer 3 chatbot for clinical vignettes with common chief complaints: a pilot study. Int J Environ Res Public Health 2023;20:3378.

5. Au Yeung J, Kraljevic Z, Luintel A, Balston A, Idowu E, Dobson R, et al. AI chatbots not yet ready for clinical use. Front Digit Health 2023;5:1161098.

6. Sanderson K. GPT-4 is here: what scientists think. Nature 2023;615:773.

7. Moons P, Van Bulck L. ChatGPT: can artificial intelligence language models be of value for cardiovascular nurses and allied health professionals? Eur J Cardiovasc Nurs 2023;22:e55–e59.

8. Van Bulck L, Couturier R, Moons P. Applications of artificial intelligence for nursing: has a new era arrived? Eur J Cardiovasc Nurs 2023;22:e19–e20.

Author notes

Conflict of interest: None declared.

This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://dbpia.nl.go.kr/pages/standard-publication-reuse-rights)
