FDA-Acceptable AI-Driven Clinical Summaries
(Thursday, February 22, 2024) ChatGPT could revolutionize medical note-taking for clinical practices, but it can also exert unpredictable effects on clinical decision-making by introducing subtle nuances into the record. A perspective recently published in the Journal of the American Medical Association (JAMA) offers pointers on what developers and practitioners must plan for when using ChatGPT or similar artificial intelligence tools for note-taking. Large Language Models (LLMs) can summarize clinical data from audio notes and from electronic health records (EHR) to create an up-to-date clinical "snapshot" for the treating physician to review in preparation for a patient appointment, or as a follow-up after the visit. The convenience and productivity gains from offloading such seemingly mundane tasks have made these tools very popular, but they carry risks.

The report discusses how variations in the length, organization, and tone of the summaries could alter the medical interpretation of the data and the subsequent medical decisions, whether intentionally or unintentionally, through the selection of training data or bias in the algorithm. LLMs are probabilistic, can introduce small errors that can collectively cause harm, and suffer from "sycophancy" bias, the tendency to tailor output to a user's apparent expectations. The authors recommend comprehensive standards, clinical testing of the algorithms under FDA supervision, and regulatory oversight before approval. Overall, the piece makes a compelling argument for policymakers and developers alike.
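The probabilistic behavior flagged above is easy to see in practice. The following is a minimal sketch, not code from the JAMA piece, assuming the OpenAI Python SDK with an API key set in the environment; the model name, prompt wording, and EHR excerpt are all illustrative placeholders.

```python
# Hypothetical sketch: generating a pre-visit clinical "snapshot" with an LLM.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative EHR excerpt (fabricated for demonstration purposes).
ehr_excerpt = (
    "72yo M, T2DM, HTN. Metformin 1000mg BID. "
    "A1c 8.2 (up from 7.4). BP 148/92. Reports occasional dizziness."
)

def summarize(temperature: float) -> str:
    """Ask the model for a brief summary; temperature > 0 makes output non-deterministic."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice, not specified by the article
        temperature=temperature,
        messages=[
            {"role": "system",
             "content": "Summarize this patient record as a brief pre-visit snapshot."},
            {"role": "user", "content": ehr_excerpt},
        ],
    )
    return response.choices[0].message.content

# Running the identical input twice can yield summaries that differ in length,
# ordering, and emphasis -- exactly the variability the authors warn could
# shift a physician's interpretation.
print(summarize(0.8))
print(summarize(0.8))
```

Even at temperature 0, minor prompt or model-version changes can shift which details a summary foregrounds, which is why the authors argue for standardized testing rather than ad hoc validation.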
AUTHOR
Dr. Mukesh Kumar
Founder & CEO, FDAMap
Email: [email protected]
LinkedIn: Mukesh Kumar, PhD, RAC
Instagram: mukeshkumarrac
Twitter: @FDA_MAP
YouTube: MukeshKumarFDAMap