Liesbet Van Bulck, Philip Moons, Response to the letter to the editor – Dr. ChatGPT in cardiovascular nursing: a deeper dive into trustworthiness, value, and potential risk, European Journal of Cardiovascular Nursing, Volume 23, Issue 1, January 2024, Pages e13–e14, https://doi.org/10.1093/eurjcn/zvad049
This response refers to ‘Letter to the editor – Dr. ChatGPT in cardiovascular nursing: a deeper dive into trustworthiness, value, and potential risks’ by P.P. Ray and P. Majumder, https://doi.org/10.1093/eurjcn/zvad047.
In the letter to the editor entitled ‘Dr. ChatGPT in Cardiovascular Nursing: A Deeper Dive into Trustworthiness, Value, and Potential Risks’,1 a critical appraisal was provided regarding our recently published study on the trustworthiness, value, and danger of ChatGPT-generated responses to health questions.2 In the letter, the value of our survey was acknowledged and some points of critique were given. We want to thank Ray and Majumder for their interest in our work and for their critical reflections.
Ray and Majumder’s argument that our study did not directly compare ChatGPT with other sources, such as Google or traditional patient education materials, is valid. Nor did we assess the impact of using ChatGPT on patient behaviour or outcomes. Such extensive evaluations would certainly provide deeper insights into the potential benefits and/or drawbacks of the use of ChatGPT by patients. However, this was beyond the aim of our snapshot evaluation. Our objective was to provide a first, brief evaluation of the value of ChatGPT responses for patients, in order to prompt reflection and debate about the value and risks of using such a language model. Such a snapshot evaluation is by no means conclusive, but rather stimulates discussion and generates meaningful research questions. Several other brief vignette-based evaluations of ChatGPT have been conducted and published over the past weeks.3–5 The contribution of Ray and Majumder to the scientific debate is highly valuable and offers suggestions for further research.
The authors correctly note that AI technology is advancing rapidly. The technology that is launched today may be outdated within a month. ChatGPT, launched on 30 November 2022, was initially based on GPT-3.5; GPT-4 was released on 14 March 2023.6 On 4 May 2023, Microsoft made its new Bing search engine publicly available. The new Bing is a search engine that incorporates GPT technology. Its chat feature is similar to ChatGPT, but it has already eliminated some critical limitations of ChatGPT. As discussed in our editorial and research letter, ChatGPT was only trained on data up to 2021, and its transparency was limited due to the absence of references.2,7 The new Bing provides references for the information it supplies and features up-to-date information. Hence, it would be worthwhile to replicate our vignette-based methodology using information provided by the new Bing. Nevertheless, it is very likely that new advancements will emerge in this lightning-fast-evolving field in the upcoming weeks.
As Ray and Majumder pointed out, our results might have been susceptible to confirmation bias, since the experts were selected from our professional network. However, the experts were asked to provide an honest assessment of the answers, and no suggestions were made regarding the accuracy of ChatGPT. Furthermore, if confirmation bias had been present, the findings would have been less favourable, since we are sceptical about the clinical use of the current chatbots, whether by clinicians or patients. Before our survey, and still today, we believe that the use of these chatbots for clinical purposes should not yet be endorsed. Our hesitation is corroborated by recent evaluations.3
In conclusion, we concur with most of the criticisms raised by Ray and Majumder. However, as also mentioned in our research letter, our study was only a first step in exploring the usefulness of language models, such as ChatGPT, for patients. This is just the beginning of exciting developments in AI in healthcare in general and in nursing specifically.8
Data availability
This letter does not include original data.
References
Author notes
Conflict of interest: None declared.