Should ChatGPT Be an Author on Scientific Papers Written with Its Help?
(Thursday, June 15, 2023)
As generative artificial intelligence (AI) tools like ChatGPT become popular, authors will inevitably use them to write, edit, or otherwise assist in the creation of manuscripts. To address the use of AI in scientific publication, the International Committee of Medical Journal Editors (ICMJE) revised its instructions to authors: AI programs may not be listed as authors, and authors must disclose how AI-assisted technologies were used. In essence, only individuals who reviewed the manuscript “critically for important intellectual content” can be authors.
AI-assisted technologies such as Large Language Models (LLMs), chatbots, and image generators will inevitably play a role in writing about scientific work. One can use chatbots such as ChatGPT to create and edit written text. However, ChatGPT “cannot be responsible for the accuracy, integrity and originality of the work”. More importantly, AI can “generate authoritative-sounding output that can be incorrect, incomplete, or biased.” It is the responsibility of the human authors submitting an article to a journal for review to carefully review the results and assert that there is no plagiarism in the paper, including in text and images produced by the AI. “Referencing AI-generated material as the primary source is not acceptable”. “Humans must ensure there is appropriate attribution of all quoted material including full citations”, say the current instructions from ICMJE.
The ICMJE also requires that reviewers of submitted manuscripts not use AI technologies to review the articles. Reviewers are prohibited from uploading a manuscript to AI technologies where confidentiality cannot be assured. Just like authors, reviewers are required to disclose to the journal if AI technologies were used during the review, and they remain responsible for reviewing and editing any comments generated using AI.
In a very short time, ChatGPT and thousands of related AI technologies have created a challenge for all aspects of science. While such technologies offer obvious benefits in increasing the efficiency of certain tasks, particularly in generating textual content, they also encourage plagiarism and a circular thought process in which the same ideas can be rewritten and re-presented as original. AI technologies are also severely limited (currently) by the content used to train them. For example, ChatGPT reportedly has only about 25 jokes that it repeats over and over, so it cannot generate a sustainable comedy routine, which thrives on creative thinking and new material. But these limitations are short-lived. As AI technologies get exposed to a broad range of new material from their hundreds of millions of users, they will overcome these deficiencies.
In the end, scientific publishing depends on an honor system in which journals primarily trust the honest disclosures of authors and reviewers for integrity. And they are in for challenging times, as too many freely available tools now entice those same individuals to cheat the system.
Dr. Mukesh Kumar
Founder & CEO, FDAMap
Linkedin: Mukesh Kumar, PhD, RAC