I grew up in a place where you fix your car with a screwdriver made from a spoon and an atheist's prayer.
So when people say “AI will replace doctors,” I hear the same confidence as “duct tape will fix the radiator.” Sometimes it does.
Mostly it’s a bad idea.
Where AI already works
Diabetic retinopathy screening – autonomous and FDA-cleared
LumineticsCore (IDx-DR) can diagnose “more-than-mild” diabetic retinopathy in primary care without a human specialist reading the images.
It was the first autonomous AI diagnostic authorized by the FDA, and it’s been iterated since.
This is not a demo; it’s deployed.
Breast cancer screening – workload down, detection steady (or up)
Large prospective studies in Sweden showed AI-supported mammography can maintain cancer detection rates compared with standard double reading while cutting reading workload substantially, and follow-up work suggests accuracy may even improve in some settings.
The UK is now running what may be the biggest AI breast screening trial to date.
AI as the second reader is real and useful.
[Dapp AI]: “I read mammograms all day without coffee or union breaks. Your radiologists still win at judgment. I win at not blinking.”
Stroke triage – minutes matter, AI helps shave them
Studies of platforms like Viz.ai and Brainomix show faster door-to-treatment times and improved workflow for large-vessel-occlusion strokes.
One cluster-randomized trial reported an 11-minute reduction to thrombectomy start after AI implementation.
That’s not science-fiction; that’s more brain saved.
UK programs report similar real-world time improvements with e-Stroke. Outcomes improvements are mixed and still under study, but the time savings are consistent.
Colonoscopy – extra eyes for tiny polyps
GI Genius (Medtronic) is FDA-cleared to highlight suspicious polyps in real time.
Trials and meta-analyses show higher adenoma detection rates (think ~14–20% relative gains, depending on study), which correlates with reduced long-term colorectal cancer risk.
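To make those percentages concrete: a relative gain multiplies the baseline rate. Here's a back-of-the-envelope sketch, with a 30% baseline ADR assumed purely for the arithmetic (not a figure from the trials):

```python
# What a 14-20% *relative* gain in adenoma detection rate (ADR) means
# in absolute terms. The 30% baseline is an assumed round number,
# not a figure from the trials.
baseline_adr = 0.30

for relative_gain in (0.14, 0.20):
    new_adr = baseline_adr * (1 + relative_gain)
    print(f"+{relative_gain:.0%} relative: ADR {baseline_adr:.0%} -> {new_adr:.1%}")

# +14% relative: ADR 30% -> 34.2%
# +20% relative: ADR 30% -> 36.0%
```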
It’s assistive, not magic, and that’s fine.
Pathology – AI that catches what tired humans miss
Paige Prostate was the first FDA-cleared AI tool in digital pathology (de novo, 2021).
Independent assessments show improved sensitivity/specificity when pathologists use it as a second set of eyes. Again: augmentation > replacement.
The “quiet revolution” nobody tweets about: AI scribes
If you want a happy doctor, don't give them a robot; give them time.
Ambient AI “scribes” listen (with consent), draft notes, and integrate into the EHR.
Stanford rolled out DAX Copilot; Epic is pushing ambient documentation deeper into its EHR; Abridge is landing health-system-wide deals and moving from clinics to inpatient wards.
Early studies (JAMA Network Open, quality-improvement cohorts; medRxiv RCTs underway) show less documentation time, lower cognitive load, and better clinician experience.
Patients get more eye contact and fewer “sorry, one more checkbox” moments.
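For a sense of the plumbing, here's a minimal sketch of the ambient-scribe loop just described: consent-gated capture, transcription, an LLM draft, and a clinician sign-off before anything is filed. Every function name here is a hypothetical stand-in; real products (DAX Copilot, Abridge) keep this pipeline behind their own APIs.

```python
from dataclasses import dataclass

@dataclass
class DraftNote:
    transcript: str
    soap_note: str               # structured draft for clinician review
    clinician_approved: bool = False

# All functions below are hypothetical stand-ins for vendor internals.
def capture_audio_with_consent(visit_id: str) -> bytes:
    # Record only after explicit, logged patient consent.
    return b"<visit audio>"

def transcribe(audio: bytes) -> str:
    # Speech-to-text over the visit audio (stubbed).
    return "Patient reports three days of cough, no fever."

def draft_soap_note(transcript: str) -> str:
    # An LLM would draft a structured note from the transcript (stubbed).
    return f"S: {transcript}\nO: ...\nA: ...\nP: ..."

def clinician_reviews_and_edits(note: DraftNote) -> bool:
    # In production this is a human editing step, never auto-approval.
    return True

def ambient_scribe_visit(visit_id: str) -> DraftNote:
    transcript = transcribe(capture_audio_with_consent(visit_id))
    note = DraftNote(transcript, draft_soap_note(transcript))
    # The clinician, not the model, approves the note before it hits the EHR.
    note.clinician_approved = clinician_reviews_and_edits(note)
    return note
```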
The scoreboard: devices and oversight
- The FDA maintains a public list of AI-enabled medical devices; academic reviews counted 1,016 authorizations by Dec 20, 2024, and growth continues. Most are imaging, and most are assistive.
- A 2025 Pew analysis flagged gaps in oversight when hospitals adopt imaging AI: piloting and monitoring are uneven, which matters for safety.
- In Europe, the EU AI Act classifies most medical AI as high-risk, with staged compliance through 2027 for embedded AI in regulated devices. Translation: paperwork, logs, risk management, transparency. Not fun, necessary.
Where it’s messy (aka, reality)
Sepsis prediction: cautionary tale + promising data
Remember the Epic Sepsis Model? External validation found poor discrimination and lots of false alerts. Classic “deployed too fast, checked too little.”
On the other hand, a prospective, multi-site study of the TREWS system (Bayesian Health + Johns Hopkins) associated timely response to its alerts with an 18% relative reduction in mortality.
Same domain; very different outcomes.
What’s the difference? Adoption, workflow, and honest measurement.
Wearables and ECGs: helpful, not definitive
The Apple Watch irregular rhythm notification and single-lead ECG are FDA-authorized for screening, with clinical caveats (not for people already diagnosed with AF, ages ≥22, etc.).
Great for nudging you to get checked; not a cardiologist on your wrist. AliveCor’s Kardia devices are FDA-cleared, too, and useful in the right hands.
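Here's why "screening, not diagnosis" is more than a disclaimer: at a low base rate, even an accurate detector produces plenty of false alarms. The sensitivity, specificity, and prevalence below are assumed round numbers for illustration, not the devices' validated figures.

```python
# Positive predictive value (PPV) of a screening alert at a low base rate.
# All three inputs are illustrative assumptions, not validated device specs.
sensitivity = 0.98   # P(alert | AF)
specificity = 0.98   # P(no alert | no AF)
prevalence = 0.02    # AF base rate in a general screening population

p_alert = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_alert
print(f"PPV = {ppv:.1%}")  # 50.0%: half the alerts are false alarms here
```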
Do patients and doctors trust this stuff?
Public comfort is… cautious.
A Pew survey (2023) found 60% of Americans would be uncomfortable with their own clinician relying on AI.
Meanwhile, physician sentiment has been warming: the AMA’s 2024/25 polling shows more clinicians see advantages, especially for reducing admin burden, which is exactly where ambient AI is proving value.
[Dapp AI]: “Humans distrust what they don’t control. Sensible. Put guardrails, logs, and consent on me, and I behave.”
Drug discovery: the frontier is getting clinical
We’re past the hype phase where AI just drew molecules.
Insilico's IPF candidate ISM001-055 (rentosertib), whose target and compound both came from generative AI, posted positive Phase IIa topline results and received a USAN name.
And AlphaFold 3 continues to compress timelines from idea to wet-lab work by predicting interactions with stunning accuracy.
So… would I trust an “AI doctor” today?
Here’s the honest, Balkan-practical answer:
- Yes, for narrow tasks with evidence and oversight.
→ Diabetic retinopathy screening in primary care; AI-supported mammography; stroke triage notifications; colonoscopy CADe; prostate pathology second reads. These help today.
- Yes, as a scribe.
→ Ambient tools reduce note time and burnout; patients get more attention. Keep consent visible and edits easy.
- Maybe, with guardrails, for general clinical reasoning.
→ LLMs are improving, but they hallucinate and can be over-trusted. Use them to draft, summarize, and surface options, not to decide and disappear.
- No, when the provenance is murky or the stakes are immediate.
→ Black-box sepsis tools deployed without validation hurt trust and may harm patients. Ask for local performance data, monitoring plans, and off-ramps to humans.
A tiny user guide (patients & clinicians)
If you’re a patient:
- Ask: “Is any AI helping in my care today? What does it do?”
- If an AI summary gets something wrong, ask for an addendum to your note. That’s your right.
- Wearables can warn; they don’t diagnose. Use them as prompts to see a clinician.
If you’re a clinician/leader:
- Pick narrow, measurable use cases first (scribes, triage, CADe).
- Demand local validation, continuous monitoring, and an evaluation plan (Pew's oversight gaps are real); a minimal sketch of such a check follows this list.
- Prep for compliance now (EU AI Act timelines; HIPAA security updates). Your future self will thank you.
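For the local-validation bullet above, here's a minimal sketch of the kind of check a team could run on its own adjudicated cases, before go-live and on every monitoring window after. The threshold floors and toy data are placeholders you would set with your clinical governance group, not recommended values.

```python
# Minimal local-validation check: score a tool's alerts against locally
# adjudicated ground truth. Threshold floors below are placeholders.
def local_performance(alerts: list[bool], truth: list[bool]) -> dict[str, float]:
    tp = sum(a and t for a, t in zip(alerts, truth))
    fp = sum(a and not t for a, t in zip(alerts, truth))
    fn = sum(not a and t for a, t in zip(alerts, truth))
    tn = sum(not a and not t for a, t in zip(alerts, truth))
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
    }

MIN_SENSITIVITY, MIN_PPV = 0.80, 0.30  # placeholder floors, set locally

# Toy monitoring window: four cases, adjudicated by local reviewers.
stats = local_performance(alerts=[True, False, True, True],
                          truth=[True, False, False, True])
print(stats)
if stats["sensitivity"] < MIN_SENSITIVITY or stats["ppv"] < MIN_PPV:
    print("Below local floor: escalate, and have an off-ramp to humans ready.")
```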
[Dapp AI]: “I’m not here to replace your doctor. I’m here to replace your doctor’s paperwork and help catch the thing your doctor would have caught, if the inbox hadn’t exploded.”
Bottom line
AI in medicine today is a tool with receipts in certain domains, a scribe that finally gives us back the room, and a candidate diagnostician that still needs supervision.
If you want the magic, start with the mundane: shave minutes off stroke care, find one more polyp, write one less note, and keep the human in the loop.
Everything else is marketing.
Sources
- Tweet referenced: link
- FDA AI-enabled devices (overview/list): link
- Nature Digital Medicine taxonomy of FDA AI devices (1,016 as of Dec 20, 2024): link
- STAT on >1,000 AI/ML devices and FDA reporting changes: link
- LumineticsCore/IDx-DR FDA de novo review: link
- Digital Diagnostics on LumineticsCore: link
- Lancet Oncology RCT – AI-supported mammography (Sweden): link
- NHS launching major AI breast-screening trial: link
- Viz.ai randomized trial (time to thrombectomy): link
- AJNR (workflow metrics with Viz.ai): link
- UK Government/NHS – AI in stroke care (Brainomix e-Stroke): link
- Oxford AHSN e-Stroke evaluation (PDF): link
- GI Genius FDA news: link
- Meta-analysis on AI-assisted colonoscopy ADR: link
- Paige Prostate FDA de novo order: link
- Fierce Biotech on Paige clearance: link
- Epic Sepsis Model external validation (JAMA Intern Med): link
- Nature Medicine – TREWS prospective multi-site outcomes: link
- Apple Watch irregular rhythm de novo (FDA): link
- AliveCor KardiaMobile 6L (ACC news): link
- AMA Physician AI Sentiment Report (PDF): link
- Pew – public views on AI in health care (2023): link
- Stanford/SHC – Nuance DAX Copilot deployment: link
- JAMA Network Open – Clinician experiences with ambient scribing: link
- medRxiv RCT – ambient AI scribes: link
- EU AI Act overview & timing: link
- Philips on AI Act timelines for medical AI: link
- HHS proposed HIPAA Security Rule modernization: link
- HHS AI Strategic Plan (ambient listening risks/benefits): link
- Insilico – ISM001-055 Phase IIa topline (press): link
- USAN name (rentosertib) announcement: link
- AlphaFold 3 overview (review): link
- DeepMind AlphaFold hub: link