Jan Beger’s Post

Global Head of AI Advocacy @ GE HealthCare

This paper explores how AI is shifting from a promising concept to practical application in clinical medicine, highlighting its transformative potential, existing limitations, and future needs.

1️⃣ AI now rivals expert clinicians in diagnostic tasks—deep convolutional neural networks match dermatologists in classifying skin lesions, and ML improves cancer prognosis prediction accuracy.
2️⃣ LLMs like ChatGPT support emergency care decisions, generate clinical notes, and aid surgical workflows with up to 90% instrument recognition accuracy.
3️⃣ AI enhances operational efficiency by automating documentation, enabling real-time translation, and optimizing EHR management through autoML.
4️⃣ Core limitations include lack of transparency ("black box" AI), bias in training data, poor generalizability, usability gaps in clinical settings, and weak regulatory oversight.
5️⃣ Ethical concerns focus on accountability, clinician overreliance, patient privacy, and informed consent in data use, especially affecting marginalized groups.
6️⃣ Explainable AI (XAI) is essential to gain clinician trust—tools must align with clinical reasoning, not just technical transparency.
7️⃣ Bias mitigation requires more than diverse datasets; adaptive learning and real-time fairness audits are needed for equitable outcomes.
8️⃣ Real-world adoption challenges persist—future studies must evaluate AI’s impact on workload, decision-making, and patient outcomes in dynamic settings.
9️⃣ Regulatory evolution is critical—unlike drugs, AI tools often bypass RCTs. Continuous post-deployment monitoring is needed to ensure safety and accountability.
🔟 The paper calls for interdisciplinary collaboration and deliberate implementation strategies to ensure AI enhances care rather than widens healthcare inequities.

✍🏻 Ariana Genovese, Sahar Borna, Cesar Abraham Gomez Cabello, MD, Syed Ali Haider, Prabha Srinivasagam, Maissa Trabilsy, Antonio Jorge de Vasconcelos Forte. From Promise to Practice: Harnessing AI’s Power to Transform Medicine. Journal of Clinical Medicine. 2025. DOI: 10.3390/jcm14041225

✅ Sign up for our newsletter to stay updated on the most fascinating studies related to digital health and innovation: https://lnkd.in/eR7qichj

  • Image: sunburst chart
Dr. Nawal Amoudi

PHYSICIAN | FICM | Passionate About Critical Care | Clinical Educator | Member of ESBICM

1w

Jan Beger Ashley Varol, PhD After reading your post about how LLMs like ChatGPT can support emergency care decisions, generate clinical notes, and even recognize surgical equipment, I was intrigued by the idea and decided to test ChatGPT’s capabilities myself. I asked it a simple clinical question: ‘Tremor that is not relieved with distraction, high-frequency tremor of both hands, and prominent on finger-to-nose testing?’ ChatGPT initially misclassified the tremor type, which highlights a major flaw. While AI has made impressive advancements, this instance shows that it can still make errors in basic clinical reasoning. ChatGPT’s mistake is a reminder that, despite its potential, machines are not reliable enough to support emergency care decisions. Human brains are essential in diagnosing and making life-saving decisions. Machines will never fully replace the critical thinking, intuition, and expertise that medical professionals bring to the table. Look at the screenshot below and IMAGINE USING THIS IN EMERGENCY CASES 👀

  • Image: screenshot of ChatGPT’s response
Felipe Nakayama-Burattini

Digital Transformation Leader | AI Strategy & Implementation | Driving Business Growth with Data Intelligence

2w

Fantastic summary, Jan! AI in healthcare is indeed moving from promise to real-world impact, but as you pointed out, adoption comes with significant hurdles. The role of explainable AI (XAI) in building trust resonates deeply—black-box models may achieve superior accuracy, but without clinician confidence, integration into high-stakes environments remains slow. On regulatory evolution, I find the contrast with drug approval fascinating—AI tools often bypass traditional RCTs, yet their real-world impact can be just as consequential! Do you think post-deployment monitoring will become the gold standard for AI safety, or do we need a new framework altogether? Would love to hear your take!

Danny Lieberman

I help people 45-60 in life science turn their expertise into freedom.

1w

Jan Beger Nice infographic. Except that there is a huge conceptual bug - a product of corporate thinking. It's based on the US sick-care system. You assume that the model will not change. But it will.

Torsten Rehder

Trend Analyst & Strategic Foresight Consultant

1w

One of the most underrated topics is the impact of AI use in professional education. For health personnel, of course, but mostly for medtech companies’ employees.

Rene Anand

Chief Executive Officer and Founder of Neurxstem Inc.

1w

👌

Anna Barker

Healthcare Innovator | Executive | Non-Executive Director | Advisor | Clinician

1w

Brilliant Jan Beger

Seth van der Meer

Experienced business and market developer and strategist in (Digital) Health, Customer Experience and Technology | Supervisor | Innovator

2w

This graph is flawed at best. What about all the nurse/clinician-patient interactions? There is a huge and highly underrated potential in narrative medicine. With AI, the conversations, nurse notes, family observations, and all other narratives become rich context that can help us understand the cause, the chances of relapse, the potential success of discharge or readmission, and so much more.

Zhaohui Su

VP, Biostatistics, Data Science, Epidemiology, Real-World Analytics, AI

2w

Exciting!
