ChatGPT = “Weapon of Mass Disinformation” for Healthcare Information
(Thursday, November 16, 2023)

ChatGPT can generate volumes of credible-looking misinformation in minutes, raising concerns about its potential to amplify disinformation campaigns by malicious actors. Such disinformation is almost impossible to counter, since accurate, credible information can easily be drowned out by a seemingly unlimited stream of false and misleading content, creating a nightmare for health authorities and the public alike.

In an experiment, ChatGPT was used to generate 102 distinct blog articles containing more than 17,000 words of disinformation related to vaccines and vaping. The content was coercive, contained fake patient and clinician testimonials and references to fake scientific-looking publications, and could be written to target diverse groups such as young adults, young parents, older persons, pregnant people, and those with chronic health conditions. The software was also used by untrained individuals to create 20 realistic images in minutes. ChatGPT was more prolific in creating such articles than Google's Bard and Microsoft's Bing, which use similar long-form generative algorithms, but this is a minor consolation given that those tools could also be prompted to generate disinformation.

This should not be surprising to anyone even remotely familiar with the versatility of ChatGPT. The algorithm promises human-like generative text on any topic, drawing on publicly available information and text-generation functionality that mimics human writing. There are practically no guardrails on these online algorithms, and there are no easy solutions either. We live in a new world where disinformation and misinformation will be more common than accurate information, and where what readers accept as true is shaped by their prejudices and beliefs rather than by facts.

AUTHOR
Dr. Mukesh Kumar
Founder & CEO, FDAMap
Email: [email protected]
LinkedIn: Mukesh Kumar, PhD, RAC
Instagram: mukeshkumarrac
Twitter: @FDA_MAP
YouTube: MukeshKumarFDAMap