Using ChatGPT to Write Informed Consent for Clinical Practice and Clinical Trials
(Thursday, January 18, 2024) Informed consent from patients and trial participants is critical in both clinical care and clinical trials. However, most consent documents are written in language too complex for the average patient to understand adequately, undermining the consent process. A study published this week describes a clinical practice's experience using ChatGPT to revise its informed consent documents, which could serve as a model for others looking to improve theirs.

In clinical trials, the informed consent document (ICD) is mandated to be written at an 8th-grade reading level so that most participants can adequately understand it. In practice, however, most clinical trial consent forms are written at a 12th-grade or higher level. Simplifying an ICD takes significant resources: an average medical or technical writer may need 2-4 hours per page, and with most ICDs running 10-12 pages, a single revision cycle can take more than a week. Applying appropriate quality-control steps to the revision further delays finalizing the document. Large language model (LLM)-based artificial intelligence programs, such as ChatGPT, are designed to edit documents efficiently and adequately in far less time. The key to getting an acceptable outcome from ChatGPT is the instruction, or "prompt," given to it for a given job.

To test this, a group of clinicians at Lifespan, Rhode Island's largest healthcare system and the primary medical teaching affiliate of Brown University, used GPT-4 to rewrite an informed consent form used in its surgical practice and found that it worked remarkably well. GPT-4 generated a simplified draft of the surgical consent form with a Flesch-Kincaid reading level of 6.7. There were two concerns with using GPT-4.
First, there was concern about the introduction of stereotypes and discrimination stemming from the training used to create GPT-4. This was addressed by having multiple human reviewers with diverse backgrounds check the simplified draft for such issues. The second concern was whether GPT-4 would use appropriate wording; this was addressed through an institutional review of the simplified consent.

The experiment was successful in its primary goal. GPT-4 generated the simplified draft of a large document in less than a minute. Because it was used to modify existing written material rather than write de novo, the risk of "hallucinations" by GPT-4 was mitigated. The prompt used for this task, "While preserving content and meaning, convert this consent to the average American reading level," worked well because it instructed the software not to analyze the content or draw conclusions, but simply to simplify it. Human review of the simplified document further ensured that the modified text accurately represented the original.

Overall, the experiment demonstrated that ChatGPT can be used effectively for tasks such as simplifying ICDs, provided there are appropriate "guard rails" to mitigate its risks. This does not mean we can simply feed a clinical protocol into ChatGPT and ask it to create an ICD de novo; rather, once an ICD has been written, the software can be used to simplify it. This report indicates the promise of the approach. The true test of this tool will come when it is used to simplify larger documents describing complex clinical trials. Now that we have seen a demonstration of what is possible, it is up to us to put it into practice.

AUTHOR
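As context for the reading levels discussed above, the Flesch-Kincaid grade level is computed from average sentence length and average syllables per word. The short Python sketch below illustrates the formula; the syllable counter is a crude vowel-group heuristic assumed here for illustration, and production readability tools use more refined rules.

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels,
    # subtracting one for a likely-silent trailing 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

simple = "The cat sat on the mat. It was a warm day."
dense = ("Comprehensive pharmacological interventions necessitate "
         "extraordinarily meticulous documentation procedures.")
print(flesch_kincaid_grade(simple))   # low grade level
print(flesch_kincaid_grade(dense))    # much higher grade level
```

A score of 6.7, as reported for the simplified consent, corresponds roughly to text a student midway through 7th grade could read.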
Dr. Mukesh Kumar
Founder & CEO, FDAMap
Email: [email protected]
LinkedIn: Mukesh Kumar, PhD, RAC
Instagram: mukeshkumarrac
Twitter: @FDA_MAP
YouTube: MukeshKumarFDAMap