
Dr. Robert Pearl

Surgeon, CEO, author, and professor Dr. Robert Pearl, a trailblazer in the medical community, talks about generative AI and the role it could play in the future of medicine. Can generative AI fill in the gaps within healthcare for patients and clinicians alike? Who is liable as AI enters the workforce? Dr. Pearl dives into the numerous scenarios where generative AI could improve the medical world as we know it and how it could affect patient outcomes.

Here are five takeaways from the conversation with Dr. Robert Pearl:

1. The Role of AI in Healthcare

AI, particularly generative AI like ChatGPT, is seen as a transformative tool in healthcare. It has the potential to improve chronic disease management, reduce medical errors, and empower patients with more knowledge and control over their health. Dr. Pearl emphasizes that AI can fill gaps in care, especially in underserved areas, by providing expertise and support that might not be otherwise accessible.

2. Challenges and Opportunities with AI Integration

While AI offers significant benefits, there are challenges in its integration into healthcare systems. Dr. Pearl discusses the need for changes in reimbursement models, such as moving from fee-for-service to capitation, to fully realize AI’s potential. He also highlights the importance of addressing biases and ensuring equitable access to AI tools to prevent widening healthcare disparities.

3. AI as an Augmentative Tool for Clinicians

AI is not seen as a replacement for healthcare professionals but as an augmentation to their work. It can help manage workloads, reduce burnout, and improve patient outcomes by providing continuous monitoring and data analysis. Dr. Pearl envisions a future where AI, empowered patients, and skilled clinicians work together to achieve better health outcomes.

4. Educational Implications for Medical Trainees

The integration of AI into healthcare education is crucial. Dr. Pearl suggests that medical training should evolve to include AI tools, teaching students how to leverage these technologies effectively. This shift will prepare future clinicians to use AI in enhancing patient care and making informed decisions.

5. Addressing Healthcare Inequities with AI

AI has the potential to bridge gaps in healthcare access, particularly for vulnerable populations. Dr. Pearl advocates for investments in technology to ensure that all patients, regardless of socioeconomic status or language barriers, can benefit from AI-driven healthcare solutions.

Transcript

Dr. Robert Pearl:

Then all of a sudden, ChatGPT, generative AI, came out. And I looked at it and I said, this is exactly the tool that's needed.

Dr. Michael Jerkins:

Welcome back to the podcast for Doctors by Doctors. I’m Dr. Michael Jerkins.

Dr. Ned Palmer:

I’m Dr. Ned Palmer.

MJ:

Dr. Palmer, nice to have you physically here in the studio today.

NP:

Excited to be here in the studio today with you. This is the first time we’re recording live together.

MJ:

Are you really excited?

NP:

I actually am excited. This is really fun. Yeah, yeah. I’m really excited to be talking to Dr. Robert Pearl. It’s also kind of nice to be here in the studio.

MJ:

That’s the added bonus. Of course. Icing on top. Yeah. Now, it is going to be very interesting.

As you know, and a lot of people listening know, AI has been at the forefront of the conversations in many sectors. But where you and I live in healthcare, of course, everyone’s talking about it. So this is, I think, one of the thought leaders on this topic right now. He actually co-wrote a book. I’ll talk about who the co-author was a little bit later. But I’m really excited to dive in and really see how is AI going to affect our lives as doctors.

NP:

Absolutely. I’m really excited to talk to Dr. Pearl today because I think AI is still this vast unknown in its intersection with healthcare and what’s going to come out of the next five years. I don’t know. That’s why I’m excited to speak to somebody who’s really dove deep into this space and talk about all things about licensure, billing, note writing, everyday aspects of practicing medicine.

MJ:

And the basic thing that I think we hear a lot is it’s a universal tool that can do everything.

NP:

But I think the master of none.

MJ:

Yeah, that’s right. As doctors, we’re trained to be a little skeptical. So I think it’s nice to try to fill out what is real and what’s maybe fantasy right now. But it’s hard to see what’s going to happen.

NP:

You couldn’t have gone with what’s real and what’s artificial in that moment. That would have been an opportunity.

MJ:

That was a softball. Next time. That’s why I’m glad to have you here in studio. Thank you, Dr. Palmer. But let’s get into it with our interview with Dr. Robert Pearl.

MJ:

All right, we are excited to have Dr. Robert Pearl with us today. And he is quite the physician and well-accomplished doctor. He’s done a lot of work throughout the years for fellow physicians and patients alike. He was the former CEO of the Permanente Medical Group from 1999 to 2017.

MJ:

In this role, he led 12,000 physicians and 42,000 staff and was responsible for the nationally recognized medical care of over 5 million Kaiser Permanente members on the West and East Coasts. He's been named one of Modern Healthcare's 50 Most Influential Physician Leaders. He also serves as a clinical professor of plastic surgery at Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business.

He’s authored several books including Mistreated, Why We Think We’re Getting Good Healthcare and Why We’re Usually Wrong, which is a Washington Post bestseller, and Uncaring, How the Culture of Medicine Kills Doctors and Patients, they published back in 2021, and most recently, Chat GPT-MD. It’s notable that all the profits of these books goes to Doctors Without Borders. He also hosts popular podcasts including Fixing Healthcare and Coronavirus, The Truth.

He also publishes a newsletter called Monthly Musings on American Health Care and is a regular contributor to Forbes. He received his medical degree from Yale University School of Medicine and completed his residency at Stanford University. You can also connect with him on Twitter or X at RobertPearlMD, on LinkedIn, or at his website, robertpearlmd.com. Dr. Pearl, welcome to the podcast.

RP:

Thank you so much for having me today and look forward to our conversation.

MJ:

Absolutely. So I must say, your book, ChatGPT, MD, has definitely been something I've been seeing on my social media feed over the last several weeks. Tell us a little bit about what this book is about and what inspired your work.

RP:

Thank you so much. The book is already number one on Amazon’s Best New Book List. So I’m glad you’re seeing it on your social.

NP:

Congratulations. Yeah.

RP:

Thank you so much. And it supports, as you mentioned, a great charity; Doctors Without Borders does remarkable work around the world. As you know, when I was the CEO in Kaiser Permanente, I was very focused on value-based care.

I was a big believer that the American healthcare system is struggling today: that it's fragmented, a 19th-century cottage industry with doctors and hospitals spread across communities, unconnected with each other; that it's paid on a piecemeal, fee-for-service basis, where the more you do, the more you get paid. Whether it does any good or not doesn't matter, you still get paid. In fact, if you have a complication, you often get paid twice. And it uses technology from the last century, actually from the century before that: an 1843 invention, the fax machine, is how doctors most commonly communicate with each other. Burnout affects 60% of physicians.

 

Healthcare is unaffordable. We could spend a whole lot of time talking about that. And that was sort of the origin of my first book and what led to my third book. But going back to my first book, Mistreated: Why We Think We're Getting Good Health Care and Why We're Usually Wrong. It begins with the story of my dad, who died from a preventable medical error. And at that time, having led Kaiser Permanente, which is an integrated organization paid through capitation, I just believed that the biggest problems were systemic.

RP:

And that once I told the world about the systemic problems that existed, change would happen and life would become halcyon once again. Obviously, as you know, not much changed between my writing the book in 2015 and 2020. As a professor at the business school, I asked myself: if the logical thing doesn't happen, why is that? Something has to account for that.

I spent some time researching that question and wrote my second book, Uncaring: How the Culture of Medicine Kills Doctors and Patients, because it had to be the medical culture. You know, what we learn in medical school and residency we carry with us across our careers, and maybe something there was holding us back. And again, three years later, nothing much had changed. So I asked myself a third time, what's wrong? Why is change so hard when it comes to moving from pay-for-volume to pay-for-value?

I decided it was because we didn’t have the tool. And actually I met with the leadership of Apple and Google. And I said, you guys have these devices, wearable watches for atrial fibrillation. Why don’t you attach an artificial intelligence tool to that watch so that people with atrial fibrillation can know how they’re doing and whether they need to have a change in medication or whether they’re at risk for some type of major problem. And of course they didn’t want to do that.

The liability seemed too great. Then all of a sudden, ChatGPT, generative AI, came out. And I looked at it and I said, this is exactly the tool that's needed. In particular, we can talk about all the different applications, but the one that to me is most interesting and exciting is chronic disease. Chronic disease affects 60% of Americans and accounts for 70 to 80% of medical costs. And we know we do a pretty poor job controlling it.

RP:

We control hypertension, the leading cause of stroke and a major contributor to kidney failure and heart disease, 55 to 60% of the time. When I was CEO in Kaiser Permanente, we controlled it 90% of the time, but it was very expensive. We used a lot of nurses and we spent a lot of physician time doing it. Diabetes, which affects one in three Americans, is the leading cause of kidney failure and peripheral amputation and a major contributor to heart disease. We control that 30% of the time. How is that possible?

 

And I want to stress something you know very well. I teach at Stanford, which has a great engineering school, and they have invented band-aid-size monitors that patients can wear at home to measure blood pressure, pulse, temperature, blood glucose, and blood oxygen. Highly reliable, 99.9% reliable. And I defy you, or any of the listeners to the show, most of whom are clinicians, to say

why a tool like this is not being used. I would defy people to come up with a place in which doctors are actually using the information. And when I dove a little deeper into it, what I realized is that the data is knowledge, but not expertise. By that, I mean that, let's say, you see a clinician, you have hypertension, you start on a medication, and three times a day you get readings.

At the end of the month, you have a hundred readings. Ninety of them are normal, ten of them are abnormal. Are you doing great because 90 are normal? Are you doing terribly because 10 of them are abnormal? And as you well know, no clinician wants a hundred blood pressure readings. No clinician wants them filling up their EHR. So we have a source of tremendous information, and it's not being used. And this is what generative AI can do, and will do, to empower that patient.
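As a rough illustration of the kind of triage Dr. Pearl is describing, here is a minimal sketch (hypothetical code, not any vendor's product) that condenses a month of home blood-pressure readings into the trend a clinician actually needs rather than 100 raw numbers:

```python
# Hypothetical sketch: summarize a month of home blood-pressure readings
# into a short trend report instead of 100 raw data points.
from datetime import date, timedelta

def summarize_bp(readings, threshold=140):
    """readings: list of (day, systolic mmHg) tuples from a home monitor."""
    abnormal = [(d, s) for d, s in readings if s >= threshold]
    return {
        "total_readings": len(readings),
        "abnormal_readings": len(abnormal),
        "last_abnormal_day": max((d for d, _ in abnormal), default=None),
    }

# Example: 30 days of readings, elevated only during the first week, then
# controlled -- the pattern that, as discussed later, needs no intervention.
start = date(2024, 5, 1)
month = [(start + timedelta(days=i), 152 if i < 7 else 122) for i in range(30)]
print(summarize_bp(month))
# {'total_readings': 30, 'abnormal_readings': 7,
#  'last_abnormal_day': datetime.date(2024, 5, 7)}
```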

That’s why the subtitle of the book isn’t called, ChatGPT MD, how AI-empowered patients and doctors can take back control of American medicine. This is what I think should happen. As you’re well aware, the CDC has shown that if we can effectively manage chronic disease to the extent that we know that we could, not the impossible, the things that we know we could do if we did everything we’re supposed to do, that we would reduce heart attacks, strokes, cancer, kidney failure, peripheral amputation 30 to 50%.

RP:

And I challenge you to think about what would happen if 30 to 50% of heart attacks, strokes, cancers, kidney failures, and peripheral amputations didn't happen. What would happen to the health of our nation? What would happen to the cost of medical care? It was that realization that led me to write the book ChatGPT, MD.

MJ:

That is phenomenal. I appreciate your backstory and explaining that. I think all of us that see patients are constantly trying to think through what improvements can be made, how can things be better for our systems and ultimately our patients. I did want to just ask, because I want to confirm this, did you actually use generative AI to also help with the writing of the book? Is that correct?

RP:

I absolutely did. You know, for my first two books, I followed the very traditional publishing route of an author, and it's a two-year process. It takes about a year to write the book, then about six months to work with your editor, and then another six months while they print it and distribute it. I said, you know, this technology is advancing, doubling every single year. That exponential growth means that in five years, it's going to be 32 times more powerful than today. In 10 years, it will be a thousand times more powerful.
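To make the arithmetic behind that claim concrete, here is a quick sketch assuming the simple annual doubling Dr. Pearl describes:

```python
# The doubling claim, made concrete: capability after n years of
# annual doubling is 2**n times today's (an assumed growth rate).
for years in (5, 10):
    print(f"After {years} years: {2 ** years}x today's capability")
# After 5 years: 32x today's capability
# After 10 years: 1024x today's capability
```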

 

That’s the equivalent of your car in five years going as fast as an airplane and by a decade as fast as a rocket ship. I said, if I wait two years, everything I write will be out of date. How can I speed this up? That was more than that. This whole book was really an adventure into the unknown, into the dawning of a new era. And so, yes, I wrote the book with ChatGPT. I started by taking the 1.2 million words that I had written and downloaded it into ChatGPT.

So it knew exactly the way I think; it knew my voice. It was able to come up with some research and areas I hadn't thought about that I could look at. It was able to come up with some rough drafts, and it was able to take my rough drafts and refine them. I approached it the same way you would write a paper with a medical student or a resident. It's not that they're going to write the paper. Their name might go on it, but they're not going to write the paper. Your job as a mentor is to help them and make sure they're on the right path.

RP:

The book itself has a 30-page bibliography, because I fact-checked every single thing that ChatGPT came up with. And most of it was right, 99%. But it did hallucinate an exploration of the North Pole, for which I could find no reference to it ever happening. So I don't want to tell you it's perfect. And that's what I really want to leave you and the listeners thinking: don't judge this by where it is today. Today we're at dawn. We're looking at the beginning of this process, and it's going to get better. Between GPT-3.5, the first version of ChatGPT, and GPT-4, 82% of hallucinations were eliminated.

By the time GPT-5 comes out, which OpenAI is already talking about, I'm going to guess it's 95%, and then 98%. We should be looking to the future and thinking about what we need to do today for when that technology is as good as we know it will be. And not focus on exactly what's wrong today, unless we don't think it can be corrected, but ask ourselves: what will we do with it when it is that accurate and reliable?

MJ:

Well, it’s interesting. This might be a little meta, but actually I use chat GPT a lot and different aspects of my life. I don’t really use it clinically admittedly at all, but I did ask it for a summary of your book. And one of the themes that had mentioned that you already kind of referenced this, but I wanted to really try to drill down if we can is the reduction of medical errors.

 

MJ:

You know, as two doctors, we still see patients, still practice. What does this look like on a day-to-day, practical basis? Because I think a lot of us think high level, and you obviously have as well, about what it can do. But broken down, how does it actually change our day-to-day practice to reduce medical errors in the short term and maybe the long term? How do you think about that?

RP:

Well, first of all, we should be looking at what the data say about how well we do today. Again, let's not compare the technology to the perfect. And we don't do so well today. 400,000 people die annually from misdiagnosis. Another 400,000 people on top of that have permanent disability from misdiagnosis. I'm sure you saw the article in the New England Journal of Medicine about six or 12 months ago that pointed out that of all the patients who die in a hospital or are sent from the ward to an ICU bed, one in four have had a major misdiagnosis.

We don’t do very well when it comes to diagnoses on a lot of complex cases. So, this tool is going to help us. You can put all the information, the more information you put in about the history, about the medical, about the medications, about the laboratory results, the entire genome of the patient, whatever, the more you put in, the better and more exact the answer is going to be.

RP:

And it’s going to allow us to come up with diagnoses that we might not have thought about. It’s going to allow us to avoid the cognitive biases that exist. You know, in the book, I talk about a surgeon who did a very complex neck operation.

And at the end of the operation, the anesthesiologist said, we've got a problem. The patient's vocal cords are adducted. I know there are podcasts where I'd have to explain what adducted means, but I know all of your listeners understand. She said, I can't extubate the patient because she'll have an obstruction. And the surgeon knew she hadn't done any damage to the nerves, the muscles, or the larynx. So she pulled out her phone and put the information into ChatGPT.

RP:

And it pointed out some case studies where a very similar thing had happened after similar types of procedures. It hypothesized that the reason was, in some people, a more rapid and more widespread diffusion of the local anesthetic, which is often injected with epinephrine to control hemostasis. And it said that 30 minutes later, when the anesthetic wore off, the patient would be fine. So rather than putting the patient in the ICU, rather than thinking about doing a tracheostomy or anything else, they waited

30 minutes in the OR, the cords started to separate, and the patient was extubated and went home. That's the kind of information I mean. Medical information is doubling every 72 days; the advances are happening so fast, and this tool can keep up with them. And then there are the obscure diagnoses. You know, I talk in the book about a mistake that I made when I didn't contemplate a rare possibility because I was so confident. You know the expression we use: when you hear hoofbeats, think horses, not zebras. And sometimes it's a zebra.

And so that’s the kind, and once you assume it’s a horse, then anything that says it’s got some stripes on it, you ignore because you’re sure it has a tail. So, it must be a horse. And you go on like that and those opportunities that exist, but it’s not just for the clinicians. I think it’s important for clinicians to understand that generative AI, and I use chat GPT-MD because the five letters all flow together so beautifully.

But it’s really all the generative AI tools. And these generative AI tools are able to provide expertise. We know that you can paint in the style of Rembrandt even though you never took an art course. You can write a song like Drake and The Weeknd, even though you never play an instrument. You can program a computer even though you never had an IT course. And you can make a diagnosis on a podcast that I did last month.

I’ve luckily had a chance to do a 55 podcast, but one of them, a host said to me at the end of the program, who done recording, she said, my husband fell skiing two months ago. Dr. Pearl, you’re a skier and you know medicine. He slid down the mountain about a hundred feet, his arm was over his head, and the shoulder still hurt him. He can’t use it as well as the other side. What’s going on? I said, I think I know exactly what’s going on.

RP:

But why don’t you put everything you can think about into the application. She’d never used, by the way, Chat GPT or any other generative AI tool. Put all the information in place that you can think about. Describe the pain, describe his lack of motion. The more information, the better and see what it says. Five days later, she sent me a message. She said, you know, thank you so much. First of all, I made the diagnosis of a torn rotator cuff. It said that he’s going to need an MRI.

And it said that he should contact an orthopedic surgeon because he most likely needed surgery. And that's what happened. I had made the same diagnosis; they got the MRI, and the surgeon did the surgery. And the surgeon said, if you'd waited three or four more months, I might not have been able to reattach that torn tendon, because the muscle would have contracted. Now think about that compared to Google. Had she Googled shoulder pain, or Googled weakness of an arm or lack of range of motion in an arm, she would have gotten a lot of knowledge. She would have gotten a differential diagnosis of a hundred items. She would have gotten things to consider. It probably would have told her to see an orthopedic surgeon, but why? It wouldn't have given her the rationale, the expertise. The question for us as clinicians is going to be, number one, are we willing to accept an empowered, knowledgeable patient, not as knowledgeable as we are, but much more knowledgeable than patients have ever been across the course of history?

RP:

And if so, how are we going to change our medical practice to leverage that skill our patient now has, so we can get better clinical outcomes? And, I want to add, how are we going to let the technology do 20 to 30% of what clinicians do in their office today, as a means to diminish burnout and to free up time so that, with the patients we do see, we can delve deeper and actually improve, not undermine, the doctor-patient relationship?

NP:

I think that’s, I have so many questions I’m tripping over where to start, Dr. Pearl. I think one of the more interesting things that have landed in the physician community in the last three to five years was the Cures Act, democratizing access for patients to their own actual hard copy records. I think even different than what you’re describing of symptomatology, where they’ve always had access to their own history and symptoms, it’s now lab results, imaging results, note results.

Have you seen any cases where patients are starting to merge their actual records, taking that frankly more direct, even objective, medical data and leveraging AI themselves?

RP:

I’ve heard anecdotal stories of people doing that, putting all their information into chat GPT, particularly with the most recent upgrades to the size of memory that can be retained by the application. I haven’t heard anything being done broadly, but there’s, I call, I like plugins is I call them. Other people do too, because it’s a nice visual, you actually don’t plug it in. It’s actually done with Bluetooth, but GPT is what it’s technically called.

RP:

These let you enter that information. It's not widely known how to do it, not widely done, but I could see that happening. And once it is done, that's exactly what you're describing. It's also interesting: there's a company called NVIDIA. NVIDIA is a manufacturer of the most advanced chips in the world; it was also a great stock buy a couple of years ago, since the stock is up many times over. They and Hippocratic AI, which is an AI company, are going to be creating a nurse bot, call it a nurse AI bot, that will have the ability to take the information from the doctor's notes and explain it to you in detail, that will be able to take your symptoms and answer questions about your medications, and will do it for a cost of $9 an hour. Think about that. So the answer is yes, I think a lot of people are experimenting, a lot of people are trying it.

 

The challenge is still going to be that they need, I'll call it, a guide. And I think that is what is coming. I'll point out also, I don't know how closely you follow the technology world, but about two weeks ago OpenAI released what they call GPT-4o, and this is going to have two things, actually three things. Number one, it's voice, so you don't have to type.

It’s not a text application; it’s a conversational one. Number two, it responds in regular human time. Just like we have a slight pause between questions and responses, so it has, but no more than that. And number three, it had the voice of a very famous actress from…Scarlett Johannsen.

NP:

Certain Marvel movies, yeah.

RP:

Yeah, yeah. Her. So it had all those. Now, why did they release that? These are the kinds of questions I pose in my professorial endeavors. Because what they recognize is that if you look at generative AI, the two generations using it a lot right now are Gen Z and millennials. Who's not using it a lot? Gen X and baby boomers.

Who has chronic disease? Who has the most medical problems? It's the latter generations. And think about it. If you're a baby boomer, a little reticent about technology, and you had to do all this typing and looking things up, you might not do it. But what if all you have to do is talk to it, the way you talk to a doctor or a nurse, and you can get the answers that you want?

All of a sudden it changes things. And by the way, who just stepped into the ring? Apple. Apple stepped into the ring because they're going to take ChatGPT and make it part of Siri. And now Siri, which people have grown accustomed to asking about the weather or the sports scores or some specific event off the internet…

RP:

Now, suddenly, you can ask Siri: my doctor prescribed this medication and I'm not quite sure about some of the side effects. Can you tell me about the side effects and whether the drug is indicated for this specific disease? And that, to me, is going to start a process where the traditional barriers between doctors and patients come down: the traditional barrier between the clinician, who was at the very top of the hierarchy, and the patient, who wasn't even in the number two or number three spot but way down below.

Now that gap is going to narrow. I’ll ask you that same question I asked a couple of minutes ago. How are we going to change the way that we practice in the context of what’s about to occur?

NP:

The one question that I keep coming back to is, now that you're starting to ask for actionable items, and I just saw Google rolled out their version of AI, I think they've called it Gemini, very similar to the addition of GPT to Siri, who's the risk holder then? Because we've gone from answering some basic information, answering some questions, to actually, like you're describing, analytics, synthesis, reading back, and coaching. So where does the risk live?

Whereas, like, who’s the bag holder when it comes to the liability?

RP:

Yeah. I actually just wrote an article for Forbes on this question of who has the liability, and any listeners who want the information around that can go to my website, robertpearlmd.com, which you mentioned earlier, and access that particular publication. As part of that, I consulted Michelle Mello, who is a professor at the law school at Stanford, probably the world's expert in liability when it comes to healthcare matters.

And she stressed a couple of things. Number one, she said, we don't really know; there has not been any case yet. She had to be honest about the reality of what exists. But she said, if a clinician is going to simply blindly follow what AI says, the clinician is going to be in trouble, because clinicians are expected to have expertise.

RP:

But if they rely on it in some way, they're not going to get judged on that component. If it adds value, they're going to be praised. I mean, when we go to a textbook or journal article and there's a mistake or a problem there, we're expected to overcome that. We can't use the excuse, well, it was published somewhere in a journal, because that's why we trained for an entire decade, to have the ability to do so. But if we encourage patients to use it, and in particular we give them specific ways they can accomplish that, give them the training around it,

allow them to understand that they'll have more insights and ideas, then in that situation the chances of liability are very low. I know both of you practice pediatric as well as adult medicine, but think about it this way. It's 11:30 at night and a child has a fever of 102 or 103.

They're new parents without a background in medicine. What do they do? Do they take the child to the ER, where they know there's going to be an hour wait and they'll be surrounded by people coughing with infectious diseases? Do they wait until the morning? Well, they clicked in Google, and they saw that meningitis can happen and the child could be dead by the time the sun rises the next day. How do they know what to do? So what do they do? They call the doctor's office, and what do they get? A recording that says go to the ER.

 

Why does it say go to the ER? It's not because they really recommend going to the ER; it's because it protects the doctor. It's not an answer to what the patient needs. They call the answering service. They say, the doctor will call you back in the morning. The parents say, we don't want to be called back in the morning. We need to know now. We don't want to wait until the child's dead to speak to this clinician. What do they do? This is where I think this type of application will be incredibly powerful, because it's going to ask questions.

RP:

Is the child lying flaccid? Is the child riding his bike around the living room? Does the child complain of headaches or neck stiffness? It's going to ask the kinds of questions that we would. And in that situation, assuming that what it asks is reasonable, the liability would be no greater than if we responded, or a nurse in our office responded, to a patient calling with that particular problem. So I don't want to tell people there's no risk involved.

But as we’ll probably talk about when we address areas around security and privacy and misinformation and bias, it’s not gonna be any greater than that which exists today, unless we somehow completely default to it, which I don’t think that as clinicians committed to our patients, we’re going to do.

MJ:

Can I ask a question as it relates to trainees? We have a lot of residents, fellows, and students who listen. How do you see this affecting trainees and their education as they become practicing doctors?

RP:

I’m sure that every trainee understands that the way that we teach medicine today is a relic of the last century. The idea to test people on examinations around obscure facts about the Nile River or about some fifth level pathways sitting somewhere in the human body that they’re never gonna use again is an absurdity, but it’s the way that generations ago, could be differentiated because in the 20th century, if you wanted to carry around with you all of medical information, you had to have a 50 pound backpack.

And even then, you probably were not going to have everything you needed to research it. And although there have been slight modifications, that's still the way we accept medical students, figure out who's best for a given residency, hand out grades, and so on down the line.

About a decade ago, I was being interviewed by Malcolm Gladwell and he asked me this exact question. And I thought about it at the time and I said, you know, what we should be doing is forcing people to bring an iPhone to the exam. Because that’s what they carry with them in clinical practice. Give them scenarios and let them use their iPhone to come up with the right answer.

 

RP:

Of course, nothing happened. No one seemed to do that at the national licensing exam level. But now I believe generative AI must be built in in that particular way. I think when we deal with the kinds of complex questions you get asked on these examinations, particularly ones that might hinge on an obscure piece of information, we need to be able to go to the generative AI, understand it, and put all the information in place. And with the information in place, be able to then figure out:

How are we going to use it? Are we going to follow it, or are we going to question it? Are we going to ask more prompts around it? Now, in most of medicine, a lot of things we do become repetitive. After you've seen a problem a hundred times or a thousand times in your practice, you're not going to go and ask the application about it every single time. But that opportunity to use that tool to become better, I think that's something we have to start teaching.

In medical school, you know, for the medical students listening right now, by the time they graduate this application is going to be 50 times as powerful. How are we going to use that? How are we going to train our patients? How are we going to rely on them? What sort of information do we need to put into their applications, the so-called GPTs, to be sure that if, after a month, blood pressure is not controlled on that new medication, we don't wait for the four-month follow-up?

RP:

We want to know about it right then and there so we can make a change. We don't want one blood pressure reading in our office, where, if it's high, everyone says white coat syndrome or something else. We want 100 at-home measurements, but we don't want 100 pieces of data. We want something, I would start to say someone, but ChatGPT, MD is still probably not a person, but a technology.

NP:

Achieve personhood status, yeah.

RP:

We want this technology, this application, to be able to access the data and tell us: here's the trend. Yes, there were 10 abnormal readings, but they all happened in the first two weeks, and it's been totally normal for the past two weeks, so we don't need to do anything. Or, on the other hand, it turns out that nothing seems better: every week there are three abnormal results, and they're occurring at various times of the day, so we can't pin it down.

In that case, we probably need to make a medication adjustment. The same is going to be true for blood glucose. We don't need to wait three months for the hemoglobin A1C. We can have information at that particular moment, and we need to learn how to act on it. If someone has diabetes, we want them to have a different diet, but we don't really teach them how to do that.

They can put information into the application and get a weekly shopping list with recipes. They can say: I only have 20 minutes a day because I work two jobs. I have two kids. The kids like to eat this kind of food. Give me recipes. And don't give it to me in English; I want it in Spanish. I want it in Japanese. These are the kinds of tools and possibilities we have.

And if students don’t learn in medical school, the application of it, I’m absolutely convinced that technology won’t be a problem for them. But the application of it, the doctor-patient relationship, how you use it to earn trust, how you use it to be certain that when you promise the patient you’re gonna be there, should a problem arise, that you will be there. And you’re gonna be willing to do it because the totality of times that you’re being asked actually is less than today because the application has solved the majority of them and left them to know with expertise when a clinician needs to be brought on in.

RP:

That’s a skill I think that needs to be learned. The challenge of course is that our professors don’t know how to do it. So we’re going to have to figure it all out together. But I do believe that it’s going to be the next evolution of the, I’ll call it the relationship between doctors and patients. And I believe in the end, that a dedicated skilled clinician plus an empowered patient plus a generative AI, that triad will give outcomes by three to five years from now that are gonna be dramatically better than any of the three alone can do today.

MJ:

Wow, so let me ask you this. You mentioned the development of a nurse bot at $9 an hour. We've talked about increased efficiencies in the management and diagnosis of chronic disease. Do you see generative AI, in this three-to-five-year timeframe, being a net job creator or a net job eliminator in healthcare?

RP:

The first thing I point out when people ask me this question is that we have a massive shortage of both doctors and nurses today. 71,000 physicians left medicine last year alone. It's projected that we're going to have a shortage of over 100,000 clinicians in the future. There's no way we can train that number of people. And by the way, the cost of doing so would be unaffordable.

RP:

Already it’s projected that healthcare costs are going to rise by $3 trillion over the next seven years. I’m not worried about, in quote, replacing clinicians. I’m worried about helping clinicians who today are overwhelmed, that have more work that they can possibly complete in a given day. As you well know, the research has been done that said if a primary care physician did all the things that were recommended in patient care, it would take 27 hours.

a day. It’s not possible. When I diagnose burnout, don’t, the things we talk about, the bureaucratic tasks, the EHR, et cetera, they are problems. Don’t get me wrong. But the biggest problem I think is that clinicians are just given a job that can’t humanly be done. It takes too much time. And in the end, it compromises their professional life and their personal life. I think if we could find the ways to augment it, augment what clinicians do. So that’s the first thing that I would say.

RP:

I’m not worried about it. Certain jobs will never get replaced. The nurse in the hospital, that’s not gonna get replaced. But the nurse in an advice center, that could be augmented. So, I see it augmenting what we do. And the second thing, and I mentioned it a little bit early, but I explain it in great detail now, is to fill in the gaps. I said before, you know, we see a patient with chronic disease, we make a medication change, we say, back in four months. We have no idea what happens in those four months.

This starts to fill it in and gives us a fuller picture. It allows things to be done sooner for the patients who need it. And by the way, if everything's going well, we don't need to see you in four months; we could see you at eight or twelve months. Think about a patient in the hospital. The nurse comes by and rounds at 8 a.m. and doesn't come back until noon. What happens in those four hours, unless the patient is in the ICU or on a telemonitor? Or think about hospital at home.

We could actually care for people in the home, because we can fill in those gaps and address the shortage. We have a massive shortage today of skilled nurses in our hospitals. So I'm not worried that it's going to replace people. I think it's going to expand our expertise. Or take psychiatry. There is no way we have enough expertise for behavioral medicine, for mental health, for psychological assistance.

RP:

And there’s no way we’re going to have it anytime soon. In fact, I think the problems are getting worse far more rapidly than the number of clinicians are improving. You need a clinician because certain diagnoses have to be made and ruled out. But instead of seeing a patient every two or three weeks, not that you want to see them more often, you don’t have any time. How do we fill it in, with a interaction, with an AI application, with a generative AI application?

I think those are all the possibilities that are there. And I see it as being an asset, as an aid, as an augmentation to what we do today as doctors. Today, we ignore the fact that we are in this totally dark space between office visits, between nurse visits, between the patient going home and seeing them back in the office in a week. This fills that in and identifies problems early on so we can address them and it avoids unnecessary repetitive care that isn’t needed because the patient is doing so well.

NP:

How can we prevent, you know, let me wind this back. We're talking about potentially bio-wearables, more data, access to generative AI models, GPT-4o, there's even, I think, a premium ChatGPT. How do we prevent vulnerable populations from suffering a deeper divide in their healthcare than they already experience? Limited-English-proficiency populations, lower-socioeconomic-status patients who may not have access to some of these tools. How do we prevent that deepening divide and actually use this to bridge the gap and provide fairer, more equitable healthcare delivery?

RP:

Well, I really appreciate your raising this. It's a question I've been asked on a lot of different podcasts and in a lot of different keynotes. And again, I want to start with the foundation: we have a big problem today. It's not as if we provide equitable care now. We know that Black women die nine times as often as patients in other countries, and probably two to three or two to four times more often than white patients in the United States.

We know that there’s bias in medicine. We know that people in socioeconomic areas don’t get the care that they need. And I want to say something that may sound a little harsh. We ignore it. We give it lip service. We don’t change anything that I can find of any significance in those areas. So the question that really you’re posing is how do we apply this tool to address those problems that currently exist?

Problems that we should be embarrassed about as a profession, not as individuals, because we don't individually, intentionally do it, but as a profession. How do we do that? The first thing I would say is that this is the population that would be most served, because they don't have a concierge doctor. In fact, they often don't even have a primary care doctor, because there's none in the community in which they live. Now they can start to get expertise, not just knowledge. They don't have to go to the ER

RP:

in order to see a clinician; they can actually get information more directly. And they're often working two jobs, and the doctor's office is closed at 10 p.m., so until now they couldn't get expertise. Now they can get that expertise. Now they can have someone who can coach them, a lifestyle medicine coach, someone who can help them with the management of chronic disease.

And this application can speak any language; it's not specific to English. So now people who have language barriers can get that expertise as well. I think we need to make investments as a society. I think Medicaid should cover access to this information technology as part of Medicaid, and Medicare should cover it as part of Medicare. Rural areas, these are all areas of tremendous underservice today. We have the ability to do it.

RP:

We just have to decide to make what I'll say is a relatively small investment. You know, if you look at something like giving people an EHR, those are overpriced and often under-deliver. We're not talking about that. GPT-3.5 is free, and now it's been upgraded with the 4o version. What we know is that even if you want to get the most expensive one, it's $20 a month.

It’s a latte a week for those of us who have the privilege of being more middle-class. And again, I think the government should provide it to the socioeconomic challenged individuals that exist. And I would even argue that in doing so, we’re going to advance the socioeconomics of inner city areas. I think that we’re going to actually advance the economics of rural areas. And this technology, if prompted well.

You put in a prompt that says, I don't want to provide care that is less good for one population solely based upon their race. As an example, it will tell you, when you prescribe, as happens in the U.S. today, 30% less pain medication for a patient simply because the patient is Black, that that may be the reason you're doing it.

That’s the reason you’re doing it. It will tell you during COVID that when two patients walk in the door and you order a COVID test of the white patient, but not the black patient even though they have identical symptoms, that you have to consider the possibility that you’re being biased. It’s almost to me an anti-discriminatory tool if applied well.

And if we ask ourselves why it is that Black women die nine times more often, it's really post-delivery. That's when most women die, quote, in childbirth. They don't die in childbirth; they die the next day or the day after.

I think it comes down to the fact that we, subconsciously, and in this case "we" being nurses, don't take the complaints, don't take the results, as seriously. And it will say: with this amount of bleeding or with this degree of hypertension, you would normally call the attending physician to assess the patient. It's possible there's a reason you're not doing it in this case, but is it also possible that a discriminatory factor is driving it, and that you probably should call?

RP:

Those are the opportunities that exist, and I'm actually optimistic that if we do it well, if we make what I'd call minimal added investments, we can start to address what today I think is a major scar on our healthcare delivery system, which is the inequity that we tolerate and, as I say, give lip service to, but don't make any of the major changes that would be necessary to improve and eradicate.

MJ:

Wow, I think what’s so great hearing your perspective on this is a lot of people are scared of AI. A lot of people are fearful at the worst-case scenarios, but as a fellow doctor, people who care very deeply about healthcare, it’s refreshing to hear the hope that it could bring to improvement of a very complex and sometimes very ineffective system that we have. So, thank you for shedding light on that.

And you've spent a lot of time thinking about it; it's great to hear your insights based on your experience. I did want to ask you one last question, because I know you are a busy man and have other things to tend to. The question I wanted to wrap up with is not necessarily about AI, but about healthcare systems, healthcare delivery, and your experience. What is one opinion you hold that a majority of doctors may disagree with?

RP:

It’s a great last question to a conversation about Chat GPT MD. And that is, and I talk about that in the book in detail, that any technology that either slows clinicians down or undermines their income will never become commonplace. Not because we’re bad people, but because we are people. And that’s the way that things do.

We tolerate the EHR, despite the fact that it slows us down, because we don't have any choice. We've got to do it for billing. But it's not something any of us would say, this is so great, I can't wait to use more of it. And the income part worries me.

And if you’d ask me the question, is there a big, tall hurdle that you worry that we may not get over? It’s that if we do not change the reimbursement system in the United States, and to your point, most doctors would say fee for service is a good thing. You do two procedures; you get paid twice as much. You do three procedures; you get paid three times as much.

They don’t see that essentially it is the driver for why they have to run faster and faster on the treadmill. It is the reason why they’re seeing more and more patients spending on average 17 minutes per patient and going home, they did their day feeling they rushed too much. They didn’t do the best job that they could. So in the book I talk.

RP:

in great detail about the need to move from fee-for-service to capitation: pre-payment for a group of patients, evaluated on the outcomes, benefiting when you reduce heart attacks and strokes and kidney failure and cancer and amputations. If 30% of those events go away, 30% of the cost goes down, and under fee-for-service, if 30% of the income goes down, it's not going to work. It's not going to happen. It'll be another wasted technology. That's why we need to move to capitation.
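A back-of-the-envelope sketch of the incentive problem being described here, using made-up numbers purely for illustration: under fee-for-service, revenue falls with the events you prevent; under capitation, it does not.

```python
# Illustrative only: hypothetical volumes, fees, and capitation rates.
def fee_for_service_revenue(events, fee_per_event):
    # Paid per procedure: fewer billable events means less revenue.
    return events * fee_per_event

def capitation_revenue(members, per_member_per_month, months=12):
    # Pre-paid per patient: revenue is independent of utilization.
    return members * per_member_per_month * months

baseline_events, fee = 1_000, 500   # assumed annual billable events and fee
members, pmpm = 1_000, 50           # assumed panel size and monthly rate

print(fee_for_service_revenue(baseline_events, fee))             # 500000
print(fee_for_service_revenue(int(baseline_events * 0.7), fee))  # 350000, a 30% drop
print(capitation_revenue(members, pmpm))                         # 600000 either way
```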

 

And now, all of a sudden, we're seeing fewer patients but earning as much or more money, because we've kept our patients healthier, because they haven't had a heart attack that needed to be reversed, or a cancer that needed to be resected or treated with chemotherapy, or a leg that had to be amputated and a prosthesis provided. No, we did the things that were necessary, things that we couldn't do without the technology but now we can.

So what I believe is that, fundamentally, we will never fix the American healthcare system without that change. And the fear word you used is so vital: people are afraid because they say, what will happen if the capitation rate doesn't go up as fast as I want it to? Well, let me tell you something. The reimbursement rate in fee-for-service is not going to go up as fast as you think either, because we can't spend $3 trillion more. Let's look at what is going to happen and compare it to what could happen.

RP:

What I believe will be best, not just for patients but for clinicians too, will be to make this move from fee-for-service to capitation, to use generative AI technology to accomplish it, and to maximize the health of people in this country. I think we can close the five-year gap in life expectancy between us and the other 11 most industrialized nations.

Our life expectancy in the US has been stagnant from 2010 to today; we can change that. I think if we can change the course of chronic disease, we can make the epidemic of diabetes start to recede. There's so much that we can do if we lead the way. And if we don't lead the way, what I can tell every listener, whether in medical school or residency or in full practice today,

Is that if we don’t lead the way, it’s not that it won’t happen. Someone will do it. We will be left behind. And rather than being empowered and being able to take back control of American medicine, we’ll become more and more simply a worker and potentially ultimately replaced by less expensive technology.

MJ:

Wow. Well, Dr. Pearl, I feel like we could do a whole episode just on those closing comments of yours. We get a lot of questions on this, especially related to doctors and income and healthcare delivery. But I appreciate your time. Thank you so much for being on the podcast, enlightening us, and hopefully giving a lot of our listeners things to think about and hope for an improvement in our healthcare system.

RP:

And if they read the book, with the money going to Doctors Without Borders, I'd like to hear their thoughts about it. You know, I don't have all the answers, and I may be wrong in a lot of different areas, but at least I think I'm onto the important questions, just as you asked the important questions. And I'd like to hear from any of the listeners who read the book, their thoughts, and particularly their concerns and criticisms. And thank you once again for educating the next generation of American medicine, and the current generation, and for having me on as a guest today. Thank you again.

MJ:

Wow, that was a fantastic time getting to learn a lot about healthcare and how AI could potentially change things. What'd you think?

NP:

I think it’s pretty incredible all the different areas in which we’re gonna see AI’s effect in healthcare over the next several years. I think we talked about things as wide reaching as preventative care, documentation, billing, physician extenders, medical education. Dr. Pearl really hit it all. Well, I’m impressed.

MJ:

Let me ask you this. Where do you actually see it changing your day to day?

NP:

In the most near term?

MJ:

Putting you on the spot here, Dr. Palmer.

NP:

In the near term, it doesn’t. In the near term, doesn’t. I think there is, in the near term, I see it, I heard it described as a helpful medical student by Dr. Pearl. I will tell you that most days when I’m running around the hospital as a hospitalist, I think we’ve all joked about it. This may even go back to House of God, but like show me a medical student that doesn’t add three times to your three times the amount of work to your workload.

MJ:

You could be offending some people listening to this.

NP:

Yeah, I could. But that's what you take on when you want to educate. Having a medical student around is a value to the community of healthcare. Slowing down to work with AI as it stands today is interesting and novel, but I don't necessarily know if it's helpful in the short term.

MJ:

Yeah.

NP:

As he describes the hallucinations going down by another order of magnitude, the power and the capability going up by another order of magnitude, I’ll eat my words in probably six to 12 months and describe a totally different system.

MJ:

Maybe six to 12 weeks, the way things are speeding up.

NP:

Totally fair. Totally fair. And I look forward to that. Probably by the time this is published, I will be wrong, maybe by the end of this sentence. And I'm uncomfortable with that.

MJ:

Right. I mean, the big thing is, if you try to limit someone's job or limit someone's income, and what I mean by that is nurses, physicians, dentists, whoever, you're going to face some resistance. So it's going to be interesting to see how that plays out. Hopefully, like we talked about, it increases efficiency and everybody wins: a rising tide lifts all boats, all that stuff. But there will still naturally be some people who are probably going to resist change.

NP:

Completely. I think that’s going to happen on the patient side. I think that’s happened on the provider side. I think Dr. Pearl identified that there are already massive generational differences in the engagement with generational AI. And I think that generational difference will carry through from the patients to the providers, to administration, to billing, to the insurance companies and what the allowance and tolerance is going to be for the inclusion in your healthcare practice.

MJ:

Yeah. And we talked a little bit offline, though: physicians are likely not going to get reimbursed for using this under a fee-for-service model.

NP:

Very unlikely to be able to bill insurance for using it, for transcription or, no, I'm sorry, you can't even bill for translation services right now, and that's critical to the delivery of healthcare.

MJ:

But it seems unlike it’s into the capitation model. Very much quality delivery of a service.

NP:

And I think when you have something like this that is relatively cheap and relatively accessible, all things considered, when it comes to healthcare and new healthcare inventions, then you really do have to look at what your models are motivating toward. Quality-based models like capitation, and there are others, but capitation has done such great things for Kaiser out West under Dr. Pearl's leadership, you see why that's kind of a natural attraction.

MJ:

Absolutely.

NP:

Cheap interventions that you don't necessarily have to bill for in order to find value in them completely invert your quality schema. Yep. And how you define it. I'm learning a lot just listening to you now.

MJ:

Thanks for that. We should have you as a guest.

NP:

What a sarcastic outro. Fantastic.

MJ:

But seriously, I appreciated the conversation. Hopefully you guys did too.

NP:

You can catch The Podcast for Doctors (By Doctors) on Apple, Spotify, YouTube, and all major platforms. If you enjoyed this episode, please rate and subscribe. Next time you see a doctor, maybe prescribe this podcast. See you next time.

Check it out on Spotify, Apple, Amazon Music, and iHeart.

Have guest or topic suggestions?

Send us an email at [email protected].
