6 Comments

I'd like for there to be a Dr Robot that was FREE, for things like "I've had an ear infection for 2 weeks now that seems to still be getting worse, and I really think it might be time to consider antibiotics" type stuff. For the uninsured working poor, that would just be an absolute godsend.

That would, in my utopia, free up resources to better pay human family physicians (and other MDs) for the more complicated stuff, where you really need hands, eyeballs, and a real organic brain to figure out how to best help the patient. The chatbot can be a wonderful *assistant to* a physician there.

I also can't imagine that a Dr Robot would be good at working with patient preferences when it comes to treatments. Having another human who knows more than you help you figure out what sort of plan of action is best FOR YOU can't be replaced. Side effects that are borderline intolerable for one patient are no big deal at all for another. I don't envision the chatbots understanding "I have X to do, and I HAVE to be in the sun to do it, or it's seriously going to mess up my quality of life" type issues.


I have very little confidence in physicians. They seem flawed, rushed and not that bright. That sounds bad, I know. Perhaps I'm projecting. I've long wanted AI to come into the field. I suspect that AI medicine will be heavily controlled by the same outside forces that control so much of what physicians do and say.


I admit I haven’t tried ChatGPT. But I do wonder what it says if you tell it “I’ve had a fever for 4 days with a skin rash.”

I suspect you get prompts for more clarity, which, like a real clinician, could help narrow down the issue. But I’d like to see outcome data before assuming that chatbots will replace medical consults.

I’m not surprised that it would ace that question prompt either. There’s enough detail there that I’m sure the top hit is a test bank or textbook entry with post-streptococcal glomerulonephritis in the title. I’m sure that with an encyclopedia open in front of me I could dissect those answer choices astutely as well.

In this sense it may be a step up from Google, given the efficiency with which it parses queries for the user. On the other hand, it’s a black box. And while I didn’t check whether any of the answers it gave to your prompt are fabrications, there’s plenty of evidence now that, when faced with technical knowledge, these chat AI systems often fabricate information without knowing it. And if the user is equally ignorant of the truth, then ChatGPT results could be a major step down from Google, which at least shares its sources and requires humans to parse the results.

Also, as a clinician I think it’s important to share my own uncertainty with patients. ChatGPT in its present iteration can’t do that if it produces false statements with the same authority as true ones.


Loved your article!


Well written, with many valid points. Thank you for writing it from the physician's point of view; I look at it from the patient's point of view. Reading it, I can tell it was written by someone who strives to be a good physician, so I understand where you are coming from in this part: "Still — I think that even a disappointing encounter with a human is better than a leisurely conversation with a robot, in terms of its healing potential." This is where I cannot agree with you. Sadly, as a patient, I have too many times been put in the position of a disappointing encounter with a physician: not believed, not taken seriously, dismissed despite a medical record clearly showing that I have lived for decades with a chronic disease that has already required 4 surgeries. In such cases I would prefer AI for one reason alone: my symptoms would not be disregarded. Yet doctors too often do exactly that, because they think they know better. Many times they put their ego before my needs as a patient.

Arrogance, besides ignorance, is another bothersome issue that prevents the kind of good encounter with healing potential that can come from the simple acknowledgment of being heard and believed. Trust must go both ways. Too often a physician puts him- or herself on a pedestal of theoretical knowledge and sets it against a patient's years of experience living with a condition, disregarding that experience because it isn't supposed to be that way, because it isn't like that in the medical books. And when a physician stops listening, watches the patient's anxious face, decides the problem is psychological in nature, and is unable to consider that the psychological manifestation might originate in something physiological, the distress that causes the patient far outweighs whatever is gained from a neutral, polite exchange with a sentient being. As important as the placebo effect is, so is recognizing the nocebo effect that comes with bad treatment from a physician.

The problem I see in most countries is that, under a huge amount of work, doctors have started to lose the personal touch with their patients, and we are becoming more and more disappointed, seeking refuge in Dr. Google because we cannot reach the less and less available physicians, who, at least in my country, work harder at raising their status and already high pay relative to the rest of the nation than at improving their communication skills, empathy, and compassion. Those with such traits are hard to find, and the last 3 years have made the problem worse. What I see is a lot of people needing help and good physicians being rare. The rest, with their poorer and poorer diagnostic knowledge and mostly impersonal, cold communication, pave the path for AI to gain momentum among the many desperate people who need healthcare, and this is where the danger lies: when we are desperate we accept less, and this is where all the dangers of handing things over to AI, which you wrote about so excellently, come to light.

To conclude: the personal exchange is of utmost importance, and for it to be of quality, trust is essential. Yet too many physicians abuse our trust by disregarding our symptoms and worries, by not apologizing for or even admitting the mistakes they made (which would show us that they too are imperfect but willing to keep improving), and so they basically push us in the direction of Dr. Google or AI, in the end at their own expense. People are simple: we go where we get more and better, and we stay where we feel validated, trusted, and safe.

So if a physician cannot do more with intelligence and data, he or she certainly can with human relations. But are they willing? At this moment, where I live, the answer is mostly no. It may just happen that AI will do what we as humans could not: push us toward appreciating the authentic, quality, respectful human connection of those willing to offer it.
