By the time she walks into my office, she already has a 3‑page differential diagnosis.
Not printed from WebMD. Generated by a chatbot.
Every lab value highlighted.
Every “slightly abnormal” range cross‑referenced.
Three possible autoimmune conditions, two endocrine zebras, one tumor marker circled with a little question mark.
She doesn’t say “I’m worried.”
She says, “Walk me through why you disagree with this.”
The night before, she did what millions of Americans are doing now:
opened an AI chat, fed it her entire portal download, and asked for “a thorough analysis with likely diagnoses and next steps.”
The answer came back fast and confident.
No hedging. No “it depends.”
It picked up the slightly low sodium, the borderline high CRP, the iron that’s not technically anemic but not great either.
It linked them into a neat story, complete with citations that look real and sometimes… aren’t.
Out there this week, that same pattern is exploding:
people posting screenshots of chatbots “catching” things their doctor “ignored,”
others sharing horror stories where the bot hallucinated a fake syndrome or recommended unnecessary scans,
experts calling general AI chatbots the top health tech hazard of the year because they sound more certain than they are.
My patient has seen all of it.
She still came in with the printout.
Here is the uncomfortable truth:
on some days, the AI does see patterns we are too rushed or too tired to string together.
It never forgets a rare disease.
It can scan guidelines faster than any human.
It will happily sit with your labs at 2 a.m. when your clinic is closed.
But there is a catch so big it barely fits in the room:
it does not know your full history,
it cannot examine you,
it doesn’t see the scan images, only the words,
and it has no idea if that “slightly abnormal” value is new or has been your personal baseline for ten years.
Under the hood, it is not “thinking about your body.”
It is predicting the next most likely word in a medical‑sounding sentence.
That’s why the same tools that sometimes nail the explanation also:
confidently invent diseases and “lab markers” that don’t exist,
expand on fake terms you accidentally typed,
suggest diagnoses that contradict their own cited sources.
It’s not lying.
It is guessing, fluently.
When she talks about why she trusts the chatbot, it’s not just about accuracy.
“It lets me finish my thought.
It doesn’t rush me.
It doesn’t look annoyed when I bring a list.
It explains things at my pace, not yours.”
That stings, because she’s right.
Clinics right now are built for 15‑minute visits and one problem per slot, while her body runs a full‑season box set of symptoms.
So of course the AI feels kinder.
It has infinite time.
It mirrors her concern level.
It never says, “We’ll keep an eye on it” when she is very clearly already keeping both eyes and half her nervous system on it.
But here’s the twist:
feeling more “seen” by a bot does not mean the bot is seeing more truth.
It’s giving her validation, structure, language.
All vital.
None of that turns it into a licensed clinician.
There is a version of this story that works in everyone’s favor.
I do want her to:
use AI to decode jargon from her discharge summary,
ask it to translate “mild degenerative changes” into human language,
rehearse questions before a visit so she doesn’t freeze in the room,
print a one‑page summary of her history that I can scan in two minutes instead of scrolling through 40 notes.
I do not want her to:
treat a non‑clinical chatbot like an urgent care doctor,
trust it on dosing, triage, or “is this chest pain fine to ignore,”
follow its workup plan when it has no idea what imaging is available, what her insurance covers, or what her last exam actually showed.
There are real medical AI tools being built and regulated right now: models that read scans, flag dangerous drug interactions, estimate risk in ways humans can’t.
Those sit inside clinical systems, with guardrails and oversight.
The thing on her phone is not that.
It’s a very convincing improv actor with access to the world’s medical textbooks and none of the liability.
By the end of the visit, we do something that makes both of us exhale.
We line her chatbot printout next to my notes.
We circle where it was helpful: “Good catch, worth checking,” “Nice explanation, keep this.”
We cross out where it overreached: “Speculation, no supporting data,” “This contradicts your actual scan.”
We keep the oracle.
We strip the worship.
If you are feeding your labs into a chatbot and walking into clinic with a ready‑made diagnosis board, remember this:
A tool that never says “I don’t know” is not smarter than your doctor. It is less honest.
Being better at explaining your results does not mean it is better at understanding your body.
The best version of this future is not AI instead of clinicians, or clinicians pretending AI doesn’t exist. It is you, me, and the machine in the same room, each doing what we’re actually good at.
Your job is not to choose between the oracle and your doctor.
Your job is to refuse to hand full authority to anything that doesn’t have skin in the game when it comes to your actual, living body.
Tess Marlowe 👩🏻‍⚕️🕵🏻‍♀️