AI in medical diagnosis: AI prediction

Authors: Dóra Göndöcs, Viktor Dörfler
Publisher: Elsevier BV
Publication date: March 2024
ISSN: 0933-3657 DOI: 10.1016/j.artmed.2024.102769

There’s a key issue that’s hard to overlook. The entire study is based on interviews with doctors who haven’t actually worked with AI systems in clinical practice. Even so, the paper goes on to build conceptual models, such as the prediction–judgment split and the framing of AI as a tool, assistant, or colleague, as if these ideas reflect what actually happens when doctors and AI interact.

What the study really captures is how people imagine things might work, not how they actually work. These views are shaped by personal beliefs, professional culture, or what people have heard or read, not by direct experience. So when the paper presents these ideas as insights into human-AI collaboration, it blurs the line between perception and practice.

This matters because belief doesn’t always match behavior. We’ve seen in plenty of clinical settings that what professionals say they’d do with new tools often differs from what they actually do once those tools are in place. If these imagined scenarios are taken at face value, there’s a risk that we overstate what’s really happening or what’s even feasible right now.

To be fair, the perspectives in the paper are valuable, but they should be clearly framed as expectations or assumptions, not as grounded evidence of how AI is changing diagnostic work today. That distinction is important if we want to have an honest and useful conversation about how these technologies are (or aren’t) being used.
