Artificial intelligence is increasingly powering your doctor’s decisions. But as AI advice becomes a staple in modern healthcare, a surprising trend is emerging: patients aren’t quite ready to trust it.
Artificial intelligence (AI) is making its mark on healthcare, offering unprecedented support in diagnostics, treatment planning, and even behind-the-scenes administrative work. But new research reveals that patients in the U.S. are still wary of doctors who rely on AI, especially when that reliance is mentioned explicitly in their care.
A January 2025 study surveyed 1,276 American adults using realistic mock advertisements for family doctors. The ads were identical except for one detail: whether the doctor mentioned using AI for administrative, diagnostic, or therapeutic tasks, or not at all. The results were telling. Across the board, doctors who mentioned using any type of AI were rated significantly lower in competence, trustworthiness, and empathy, and patients were less likely to book an appointment with them, whether the AI was handling paperwork or suggesting treatments. The biggest drop in confidence came when the AI was said to influence diagnosis or treatment decisions, the core responsibilities patients expect from their human physicians.
This resistance to AI, even in 2025, points to deep-rooted perceptions about what makes a “good doctor.” For many patients, good care isn’t just about accuracy; it’s about feeling heard, seen, and cared for. AI, however brilliant, doesn’t exactly evoke warmth and compassion. The fear that doctors may “offload” judgment to machines is real, and it is eroding trust.
So, what can be done?
To bolster public confidence, the FDA can take a more active role in setting and communicating standards for AI use in medicine. Establishing a recognizable “FDA Approved AI” seal, similar to existing approvals for drugs and devices, could help patients distinguish between safe, validated AI tools and untested tech. Public education campaigns explaining how FDA-reviewed AI supports, rather than replaces, clinical decision-making could go a long way in reducing skepticism.
For physicians, transparency is key. Rather than letting AI quietly inform decisions, doctors should proactively explain how AI is being used to enhance patient care rather than to replace their judgment. Patients need to hear that the final call still rests with their human provider. Emphasizing that AI can reduce errors, cut wait times, and personalize treatments can reshape the narrative from “cold machine” to “powerful partner.”
Ultimately, AI is here to stay in healthcare. But for patients to embrace it, the system must double down on trust. By making AI less of a black box and more of a visible, FDA-endorsed tool in the physician’s arsenal, both regulators and healthcare providers can ensure that innovation doesn’t come at the cost of patient confidence.
AI might be brilliant at crunching data and producing human-like output, but winning over hearts is still the doctor’s job. The future of healthcare is human and machine, but only if patients believe in both.