AI Adoption Accelerates in Healthcare, but Trust and Training Still Lag Behind

Clinician of the Future 2025 report highlights how AI is reshaping clinical practice globally

by Pieter Werner

Artificial intelligence is becoming a standard part of clinical practice, according to the Clinician of the Future 2025 report by Elsevier. Based on input from over 2,200 clinicians across 109 countries, the report outlines rapid growth in AI use, with nearly half of respondents (48%) now using AI tools in their professional work—almost double the percentage from 2024.

This acceleration is driven by the promise of improved efficiency and patient care. Over half of the clinicians surveyed say AI tools help save time (57%), empower them (53%), and offer greater flexibility in how they deliver care. However, these benefits are unevenly distributed and often limited by gaps in institutional readiness and lingering concerns over trust and transparency.

From ChatGPT to Medical Imaging

While general-purpose AI tools such as ChatGPT dominate current usage (97% of AI-using clinicians), 76% have also engaged with clinical-specific tools. Common applications include identifying drug interactions, analyzing medical images, and summarizing medication data. Administrative support is another major area of adoption, with clinicians using AI to draft patient letters, write clinical notes, and complete insurance authorizations.

Despite these gains, AI use in direct clinical decision-making remains limited. Just 16% of clinicians currently rely on AI tools to inform medical decisions, although 48% would like to do so in the future. This gap reflects broader skepticism, particularly in Europe and North America, where a significant proportion of clinicians say they would prefer not to use AI for diagnosis or treatment support.

Optimism for the Near Future

Looking ahead, clinicians are largely optimistic. Within the next two to three years, 70% expect AI will save them time, 58% believe it will enable faster diagnoses, and 54% anticipate more accurate diagnostics. Over half (55%) think AI will improve patient outcomes, and 41% predict that clinicians who use AI will provide higher-quality care than those who do not.

Yet the report also warns of emerging challenges, including the potential for more patients to bypass clinicians entirely. Around 38% of respondents expect most patients will soon use AI tools for self-diagnosis, raising concerns about the quality and reliability of online medical advice.

Trust and Institutional Support Lacking

While clinicians are eager to embrace AI, many say their institutions are not keeping pace. Only 30% rate their employers highly in providing AI training, and just 29% believe their institutions have effective AI governance frameworks in place. Meanwhile, 57% say better guidance on how to use AI would increase their trust in these tools.

The survey underscores the importance of transparency: 68% of clinicians want AI tools to automatically cite sources, 65% require assurance of data confidentiality, and the majority want models to be trained on high-quality, peer-reviewed content. The call for independent review and ethical oversight is strong, particularly among doctors in regions with lower AI adoption rates.

A Role for All Stakeholders

The report concludes with a call to action for technology developers, healthcare institutions, and governments. For AI to fulfill its potential in healthcare, clinicians must be equipped with reliable tools, robust training, and clear guidelines on responsible use. Without this support, the gap between the promise of AI and its practical impact may continue to widen.

Elsevier’s Clinician of the Future 2025 report paints a clear picture: AI is no longer a future consideration in healthcare—it is already reshaping the profession. The challenge now is to ensure it does so safely, ethically, and equitably.