ChatGPT and conversational artificial intelligence: Ethics in the eye of the beholder

We thank Dr. Mungmunpuntipantip and Dr. Wiwanitkit for their interest in and response to our article entitled “ChatGPT and conversational artificial intelligence: Friend, foe, or future of research?” [1]. In that article, we highlighted the potential applications of conversational artificial intelligence (AI) software, such as ChatGPT, in research while also raising awareness of its potential risks and limitations. In their letter, Dr. Mungmunpuntipantip and Dr. Wiwanitkit discuss the importance of human oversight in AI. We agree and wish to emphasize the critical need for human oversight when using this software.

This should begin with the decision of whether it is appropriate to use conversational AI for a given application. While conversational AI offers many advantages and efficiencies, it is not a universal solution. Researchers must thoughtfully consider the benefits and limitations of the software and avoid overreliance on AI when it is inappropriate. When it is used, researchers must ensure the data they are providing are accurate (much as one would when performing statistical analyses of study findings). Peer reviewers must also assess the writing to ensure that it is of sufficient quality, reflecting the nuance of the findings and avoiding incorrect statements or an overly positivist interpretation of the results [2]. Because conversational AI programs are algorithms built upon published information, they run a high risk of plagiarism, and both authors and journals should be particularly attentive in assessing for this. ChatGPT and other conversational AI programs have also been reported to provide fabricated references (often referred to as “hallucinations”). To combat this, both authors and reviewers must pay particular attention to the references and ensure that they are accurate and reflect the statements to which they are attributed. Finally, authors should disclose the use of conversational AI, including the software, the version, and the capacity in which it was used [3,4].

Conversational AI programs have the ability to significantly enhance research efficiency and dissemination. While these programs will continue to advance and become more sophisticated, it is ultimately the end-user who is responsible for the outcomes that result from their use. As such, we cannot ask these programs to be the ethical end-point; it is the users (e.g., author, reviewer, editor, reader) who must serve in that role.

Prior presentations

None.

Financial support/disclosures

None.

CRediT authorship contribution statement

Michael Gottlieb: Writing – original draft, Writing – review & editing. Jeffrey A. Kline: Writing – original draft, Writing – review & editing. Alexander J. Schneider: Writing – original draft, Writing – review & editing. Wendy C. Coates: Writing – original draft, Writing – review & editing.

Declaration of Competing Interest

We have no conflicts of interest to declare, and this manuscript has not been submitted elsewhere.

Acknowledgements

None.

References

  1. Gottlieb M, Kline JA, Schneider AJ, Coates WC. ChatGPT and conversational artificial intelligence: friend, foe, or future of research? Am J Emerg Med. 2023;70:81-3. https://doi.org/10.1016/j.ajem.2023.05.018.
  2. World Association of Medical Editors. Chatbots, ChatGPT, and scholarly manuscripts: WAME recommendations on ChatGPT and chatbots in relation to scholarly publications. Available at: https://wame.org/page3.php?id=106. Last accessed 5/29/2023.
  3. Elsevier. Publishing ethics: the use of AI and AI-assisted technologies in scientific writing. Available at: https://www.elsevier.com/about/policies/publishing-ethics. Last accessed 5/29/2023.
  4. JAMA. Instructions for Authors. Updated January 30, 2023. Available at: https://jamanetwork.com/journals/jama/pages/instructions-for-authors. Last accessed 5/29/2023.

Michael Gottlieb, MD

Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, United States of America

*Corresponding author.

E-mail address: [email protected]

Jeffrey A. Kline, MD

Department of Emergency Medicine, Wayne State University School of Medicine, Detroit, MI, United States of America

E-mail address: [email protected]

Alexander J. Schneider, BS

NantGames, Inc, Los Angeles, CA, United States of America

E-mail address: [email protected]

Wendy C. Coates, MD

Department of Emergency Medicine, University of California, Los Angeles, David Geffen School of Medicine, Los Angeles, CA, United States of America

E-mail address: [email protected]

5 June 2023

https://doi.org/10.1016/j.ajem.2023.06.023

