Critical Deficiencies in AI-Enabled Medicine Limit Their Practical Application
(Thursday, October 27, 2022)
A key application of digital medical tools is in diagnostics that rely on artificial intelligence (AI) algorithms. Experts opine that there are critical deficiencies in the methods used to train and validate AI tools that may over-estimate the accuracy of these products and hence make them less reliable. These deficiencies must be addressed for AI to be trustworthy for clinical use.

Digital medicine has been predicted to be the next big improvement in overall healthcare management. Medical processes that involve well-defined, standardized methods can be easily automated, which has enabled wide use of digital tools that improve the patient experience and ease workload management for medical staff. However, automation of the areas of medical practice that involve human judgment driven by analysis of available information is still in its infancy. For example, diagnosing a disease requires a physician or other medical staff to review the results of a test and draw conclusions.

The current practice for validating AI algorithms is for developers to collect a single dataset: a subset of this data is used to train the algorithm, and the rest is used to test the model for accuracy. Experts opine that this practice leads to "data leakage" that inflates the estimates of the model's accuracy. To get a better measure of accuracy, the model should be tested on an independent dataset, separate from the one used to train the algorithm. Seems simple enough. The reliability and accuracy of an AI algorithm depend entirely on the quality of its training and testing, which in turn depend on the data used for both steps. The experts also suggest a holistic approach to testing and validating AI algorithms that includes using larger datasets, involving clinicians and patients in the design process, and discussing the development program with FDA at every step. AI algorithms are predicted to supplement human analysis in the short term and eventually to replace the need for human interaction altogether. It is important for developers to avoid shortcuts when validating their AI products.
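To make the distinction concrete, below is a minimal Python sketch (using scikit-learn, with hypothetical data arrays and a stand-in logistic-regression model) that contrasts accuracy measured on a hold-out split of the developer's own dataset with accuracy measured on an independently collected dataset. It illustrates the general idea only, not any particular product's validation method.

# Minimal sketch (hypothetical data and model): comparing accuracy reported
# on a hold-out split of the developer's own dataset with accuracy on an
# independently collected dataset.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def evaluate(X_dev, y_dev, X_independent, y_independent):
    # Common practice: split the developer's own dataset into training and
    # test subsets and report accuracy on the held-out subset.
    X_train, X_test, y_train, y_test = train_test_split(
        X_dev, y_dev, test_size=0.2, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Internal accuracy: can be inflated when the train and test subsets
    # share patients, sites, or preprocessing steps ("data leakage").
    internal_accuracy = accuracy_score(y_test, model.predict(X_test))

    # External accuracy: measured on a dataset collected independently of
    # the training data, the sterner test recommended before clinical use.
    external_accuracy = accuracy_score(
        y_independent, model.predict(X_independent)
    )
    return internal_accuracy, external_accuracy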

AUTHOR

Dr. Mukesh Kumar
Founder & CEO, FDAMap

Email: mkumar@fdamap.com
LinkedIn: Mukesh Kumar, PhD, RAC
Instagram: mukeshkumarrac
Twitter: @FDA_MAP
YouTube: MukeshKumarFDAMap

