Pediatric nephrologist, educator, and “Sheriff of Sodium” Dr. Bryan Carmody joins Dr. Ned Palmer for a candid deep dive into the world of medical education and residency selection. Known for his sharp analysis and willingness to challenge convention, Dr. Carmody offers a four-star review of the medical education system, praising the outcomes, but questioning the inefficiencies baked into the process.
Together, they explore the ripple effects of the USMLE Step 1 exam’s move to pass/fail, the evolving competitiveness of specialties like anesthesiology and pediatrics, and how international medical graduates fit into the shifting landscape. Dr. Carmody also weighs in on the growing presence of AI in residency applications.
As the conversation turns toward the future, Dr. Carmody highlights the urgent need to align medical education with real-world competencies and the expectations of residency programs. How can medical education evolve to meet the needs of both students and patients? And what happens when AI, equity, and efficiency collide in the residency match? This episode unpacks the complexities shaping the next generation of physicians.
Here are five takeaways from the conversation with Dr. Bryan Carmody:
1. Efficiency of the Medical Education System
Dr. Carmody rates the U.S. medical education system four out of five stars, acknowledging the high quality of its graduates but pointing out inefficiencies in the process. He highlights unnecessary steps like excessive CV padding and irrelevant coursework that do not directly contribute to patient care.
2. Trends in Residency Specialties
The episode explores the shifting popularity of medical specialties, noting that anesthesiology and radiology are becoming more competitive, while family medicine and pediatrics face challenges in attracting applicants. Dr. Carmody discusses the role of International Medical Graduates (IMGs) in filling gaps in primary care specialties.
3. Impact of USMLE Step 1 Pass/Fail Change
The change to a pass/fail system for the USMLE Step 1 exam was intended to reduce stress and inequities among students. Dr. Carmody explains that while the change has not significantly altered the residency selection dynamics, it has sparked a debate on fairness and the value of scored exams.
4. Role of AI in Residency Selection
Dr. Carmody observes an increase in AI-generated applications, raising concerns about authenticity and the potential for an “arms race” between applicants and programs. He discusses the implications of AI in screening applications and the challenges it presents in maintaining a genuine assessment of candidates.
5. Alignment Between Medical Education and Residency Programs
Dr. Carmody emphasizes the need for better alignment between medical schools and residency programs, advocating for a focus on teaching and evaluating competencies that are truly valuable in medical practice. The discussion highlights systemic challenges, including misaligned incentives between educational institutions and residency programs.
Transcript
Dr. Ned Palmer:
Good afternoon, and welcome to The Podcast for Doctors (By Doctors). My name is Dr. Ned Palmer, and I’m going solo today—unfortunately, my co-host, Dr. Jerkins, isn’t here. But I’m very excited because we’re bringing back a guest who’s always a favorite: Dr. Bryan Carmody, a pediatric nephrologist from Virginia.
Dr. Carmody is one of the foremost experts when it comes to discussing the Match—not just from a data perspective, but with deep insight into what it means for medical education, medical students, and even residents applying again for fellowship. These are the kinds of conversations I really look forward to.
So please join me in welcoming Dr. Bryan Carmody.
Today on The Podcast for Doctors (By Doctors), we’re joined again by Dr. Carmody, who practices pediatric nephrology at the Children’s Hospital of The King’s Daughters in Norfolk, Virginia. He also teaches second-year medical students and serves as an associate pediatric residency program director at the Macon and Joan Brock Virginia Health Sciences Center at Old Dominion University.
However, he’s probably best known as an analyst and commentator on medical education and residency selection. His social media alter ego is The Sheriff of Sodium, and you can find him on X at @JBCarmody, on YouTube as Sheriff of Sodium, and on his website, thesheriffofsodium.com.
Welcome back to the podcast, Dr. Carmody. It’s great to have you. Let’s start with a few rapid-fire questions to break the ice—even though we already know each other.
If medical education had a Yelp page, what would its star rating be out of five—and why?
Dr. Bryan Carmody:
It’s great to be here. That’s a good question. I think, in many ways, medical education works well. The end result of the U.S. medical education system is doctors who are trained at a world-class level—so I’d say four stars.
NP:
Fair. So, if we have room to improve, how do we get that last star?
BC:
There are plenty of areas for improvement. The end result may be good, but we don’t always get there efficiently. There are years of CV padding, organic chemistry, and coursework that don’t end up being directly relevant to patient care. For example, if you’re an interventional cardiologist, how much hand anatomy do you really need to know? It’s an inefficient system in many ways—and there’s more we could talk about there.
NP:
Inefficiency in care delivery has always plagued medicine, right? Even where doctors are compared to where patients are—there’s been misalignment for a long time. That’s definitely something we can dig into. But first, if you could make one medical education buzzword disappear forever, which one gets the axe?
BC:
I don’t know—what’s yours?
NP:
Mine is grit. It comes from that wellness and resilience space, which is critically important—but the training around it is so inefficiently delivered. It’s never really received the way it’s intended.
BC:
Yeah, I feel like we’re in the post-peak grit era. I hear that word less now than I did three to five years ago. That was peak grit.
NP:
Maybe that’s why I’m still suffering from it. Any others for you?
BC:
Honestly, I’ve probably built up immunity to all the buzzwords. They roll off my back like water over stone.
NP:
Maybe I need to channel some of your zen when it comes to that. Okay, next question: do you remember a moment from med school or residency that would make it into your blooper reel—something funny or so absurd that it just broke everyone in the room?
BC:
I’ve got plenty, but one comes to mind. I was a senior resident admitting patients to our pediatric service. A medical student went to take the initial history, and when they came back, I asked, “What’s their diet?” The student said sheepishly, “They said he doesn’t eat anything.”
I thought, well, that can’t be right—he’s a human being, he has to eat something. So I went in to clarify. The grandmother was clearly exhausted—it was late at night, she’d been asked the same questions by everyone. When I asked, “What does the baby eat?” she glared at me and said, “He doesn’t eat nothing. He drinks bottles.”
NP:
You definitely stepped on a landmine someone else laid before you. Totally understandable from her perspective, though—I can’t imagine being asked the same question seven times in a row.
But let’s transition. I always enjoy our conversations, and especially now—we’re about five months away from the Match. Before we look ahead, I want to talk about this past year’s data. What was the biggest surprise to you, as someone who studies the Match so closely year over year?
BC:
Honestly, very little surprised me. The overall statistics were pretty much what I expected.
NP:
What were some of those expectations that panned out as predicted?
BC:
A lot of what happens in the Match can be anticipated through application statistics. It was clear that it was going to be a lean year for family medicine and pediatrics. Applicants have increasingly pursued specialties that are more prestigious or higher paying, while primary care fields continue to expand residency positions faster than applicants can keep up.
NP:
Why hasn’t the application volume kept up? Historically, IMGs filled that gap in many of the primary care specialties—and there are more international programs than ever. Why aren’t they backfilling those positions?
BC:
It’s worth noting that all the positions eventually fill—either in the SOAP or shortly after. But yes, international medical graduates (IMGs) have real opportunities in family medicine and pediatrics, since the growth in IMG applicants hasn’t been as strong as the growth in positions.
Not all programs recruit IMGs, but as U.S. applicant numbers decline, many programs are rethinking their strategies. We saw this with emergency medicine. It used to be a specialty that was not very IMG-friendly, mostly because the positions were filled by U.S. MD and DO graduates, and the recruitment system relied heavily on standardized letters and evaluations that are hard to get outside the U.S. system.
But after a few years of declining U.S. applicants, we reached a tipping point—now application numbers have rebounded, largely because of more IMGs and osteopathic graduates. The number of U.S. MD applicants has only rebounded modestly.
NP:
And that decline in U.S. applicants for emergency medicine—did that line up with the workforce report about oversaturation? There was a lot of noise a few years ago about private equity, job market saturation, and shifts in the field, all around the same time you started to see interest drop off.
BC:
I think there are a few factors, and people will weigh them differently. But yes, that workforce report played a big role—it suggested that we were training more emergency medicine physicians than we probably needed. That’s not what an applicant wants to hear when choosing a specialty for the next three or four decades.
Emergency physicians also bore the brunt of COVID. Seeing what those doctors went through wasn’t exactly inspiring for students choosing a specialty. Once the “all doctors are heroes” moment passed, the reality of that workload probably became a recruiting deterrent.
There are also studies showing that emergency medicine physicians have higher rates of burnout compared to other fields. It’s important to talk about that, but it doesn’t exactly help recruitment. Add to that the increase in private equity–owned groups and changes in the working environment—those factors make the field less appealing for some applicants.
All of this contributed to a perception shift among U.S. MD students. The kind of person who would have chosen emergency medicine a decade ago might now choose something else.
NP:
Yeah, definitely a mix of factors. And you’re right—COVID had a major impact. None of us were spared, but emergency medicine especially still feels some of that trauma.
We’ve talked about family medicine, pediatrics, and internal medicine. Who came out as the biggest winner this year? Any specialties that stood out as particularly strong?
BC:
The one that comes to mind is anesthesiology—it’s become increasingly popular. In fact, the decrease in interest in emergency medicine has been offset by a rise in MD applicants to anesthesiology and radiology. Anesthesiology has been a competitive match for several years now, and they’re not having any trouble filling positions.
NP:
That makes sense. I can see why applicants who might’ve been drawn to emergency medicine could find anesthesia appealing—it’s procedural, usually hospital-based, and offers some flexibility with outpatient or ASC work. Really interesting.
Is there one stat from the Match data that you think people are misunderstanding? Something you see being misinterpreted in headlines?
BC:
The big thing that always bothers me is when a specialty has a rough year in the Match, and people immediately link that to a “workforce shortage.” That’s just not accurate.
Positions might go unfilled initially, but they always fill—whether through the SOAP or afterward. By July 1st, those spots are taken. The Match data don’t support claims of long-term shortages; they only show short-term application trends.
NP:
That’s a great point—it’s kind of like confusing weather with climate. People look at one year’s data for pediatrics or family medicine and treat it like a trend, when in reality, it’s just one small part of a larger picture.
One last Match question for now: if you had to give this year’s Match a headline, what would it be?
BC:
That’s a good one. I’d say something like “The Battle Intensifies.” We’re seeing increasing competition in specialties like dermatology and neurosurgery—that’s clear from the preliminary data.
NP:
I like that—competitiveness has inertia. Once it starts, it tends to build.
So, what’s one thing about the residency selection process—not just the Match itself—that you wish every med student understood sooner?
BC:
I’d tell them that their future happiness in medicine isn’t really tied to which specialty they choose. In my experience, people who are happy in medicine would probably be happy in multiple fields, and those who are unhappy would likely feel that way regardless of specialty.
Different people have different expectations—what they think being a doctor means, what salary they need to feel fulfilled—and those expectations drive their Match choices. But if you check in with those same people five, ten, or twenty years later, the specialty itself usually isn’t what determines their overall satisfaction. That’s what I’d want students to understand.
NP:
That’s such an important perspective. I also hear from students who make decisions based purely on averages. And averages are a wonderful thing, but nobody truly represents the average. Frequently, it’s a multimodal or bimodal distribution—meaning that “average” really represents no one. Especially when you have a deep public-versus-private practice split. We hear that all the time.
I know dermatologists making $200,000 and family medicine doctors making $500,000–$600,000 depending on how they structure their work. But students make their decisions based on those averages that get published by the AMA each year. That’s a big challenge, like you described.
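A quick toy example (with made-up salary figures, not AMA data) of how the mean of a bimodal distribution can describe almost no one:

```python
# Hypothetical two-mode salary market: half earn $200k, half earn $550k.
salaries = [200_000] * 50 + [550_000] * 50
mean = sum(salaries) / len(salaries)
print(f"mean = ${mean:,.0f}")  # $375,000: a figure nobody in the sample earns
```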
BC:
Correct.
NP:
So, shifting to the actual process of the match and student preparation—how important are USMLEs right now? And do you think they’re appropriately weighted, or are they over- or under-valued?
BC:
Good question. For a reason that may surprise some people, I think the USMLEs are becoming less important in some ways. One thing I often discuss is the trend in USMLE performance—and as you know, those scores are rising.
The mean Step 2 CK score rises by about one point per year. But the other factor is that the distribution has become more compressed: the difference between someone at the 75th percentile and someone at the 20th percentile is smaller than it used to be. That makes the score less useful as a discriminating factor for programs.
Now, that said, when programs receive far more applications than they have spots for, they need some way to stratify applicants. The USMLE remains the most convenient and universal tool for that. So it’s still quite important, but I think it’s becoming a bit less important.
That’s partly because of the math of what’s happening—and partly because in the years ahead, there will be less manual screening by program directors and more automated screening by AI systems. How much those systems will weight USMLE scores versus other factors will depend on the people training and deploying the AI.
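To make the compression point concrete, here is a minimal sketch using hypothetical means and standard deviations (not actual NBME figures) showing how a tighter distribution shrinks the usable gap between percentiles:

```python
from statistics import NormalDist

def percentile_gap(mean: float, sd: float, hi: float = 0.75, lo: float = 0.20) -> float:
    """Score gap between the hi and lo percentiles of a normal score distribution."""
    dist = NormalDist(mean, sd)
    return dist.inv_cdf(hi) - dist.inv_cdf(lo)

# Hypothetical numbers: the mean drifts up while the spread compresses.
print(round(percentile_gap(245, 15), 1))  # wider spread -> ~22.7 points
print(round(percentile_gap(250, 10), 1))  # compressed   -> ~15.2 points
```

With the same percentile cutoffs, a smaller standard deviation leaves programs fewer points of separation to screen on.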
NP:
The Luddite part of me is horrified by that—having gone through the match trying to stand out to a human, not a robot. But I do wonder if you could feed resident performance back into the system and create an end-to-end model that’s more objective and powerful—maybe even less subjective than the current process. Residency performance can be pretty subjective and hard to quantify. Maybe I shouldn’t be so terrified of AI taking over the match process.
BC:
On its face, there are potential advantages. It depends how you train your bot and what you ask it to do. But in a world where screening cutoffs let a single data point determine whether an application is even reviewed, at least having something that reads your application might be an improvement.
Of course, that opens up a new type of gamesmanship—now applicants are writing essays and activity descriptions not for humans, but to optimize for the bot. What keywords signal that you have “grit” or “leadership”? How do we teach the bot? You can imagine residency selection devolving into another bot war—bots writing applications competing against bots reading applications.
NP:
Exactly—and that arms race is something you’ve written about before. Now we’ve got bots on bots. Does having a better bot make me a better doctor, or just someone with better resources? That ties into what we talked about before—resource disparities and access.
When Step 1 went to pass/fail, the debate was split between those who said it would worsen inequities and those who said it would level the playing field. Where do you think we actually landed? Has the pass/fail change made a difference?
BC:
Sadly, not much has changed.
Let me step back. If you liked the scored Step 1, you could argue that scoring was fairer—and that removing the score disadvantages certain applicants. That’s probably true. But if you didn’t like it, you could argue that keeping the score disadvantaged a different group—and that’s also true.
There’s no USMLE scoring policy that advantages all applicants. Mathematically, half of applicants will always fall below the median. So deciding what’s “fair” requires a value judgment about which outcomes we care about.
When the Step 1 pass/fail change was announced—February 2020, right before the world shut down—it was a necessary first step. But we didn’t replace the score with anything better. There’s still poor alignment between what medical schools are trying to accomplish and what residency programs want.
One of the biggest issues is that undergraduate and graduate medical education share the same goal—to train competent, capable physicians—but the incentives and reward systems are totally misaligned.
Medical schools don’t see their mission as “ranking students for residency selection,” and residency programs want the best applicants, so students do whatever programs say matters most.
The holy grail would be for residency programs to clearly articulate what competencies they value—and for medical schools to teach and assess those competencies rigorously. If the competition is meaningful, where the “winners” are truly those who’ve mastered the right skills, then everyone benefits. Someone will always win or lose—but at least the winners would be those best equipped to serve patients and society.
NP:
That’s a brilliant take. Full transparency, I used to believe that having a numerical Step 1 score was valuable. You’ve helped shake that philosophical bias for me.
But building on what you said—it feels like every part of the journey has its own siloed focus. Med school is focused on one thing, residency another, fellowship another. Each step has its own priorities and metrics, and by the time you adjust, the rules change again.
How do we create better alignment across those phases? Is anyone working to integrate or at least recognize that full career path?
BC:
It’s difficult, because the players involved—medical schools, hospitals, licensing boards—all have different incentives. Historically, medical education began with schools, then evolved to emphasize graduate medical education and legal requirements for licensure. Those later requirements came mostly from hospitals, not universities, and their incentives differ.
Right now, no one is measuring a medical school’s success by looking 20 years down the line to see if its graduates are doing good for society. Schools are judged by graduation and match rates instead.
So what incentive does that create? Schools want everyone to graduate—because attrition looks bad on paper. Admissions committees become “infallible”: if you got in, you have to make it through. And to protect match rates, schools might avoid being fully transparent about challenges.
On the GME side, hospitals have their own incentive structures—chiefly, the need for inexpensive resident labor. So again, the systems don’t align. Everyone defines success differently, and each applies pressure to deans, program directors, and others to serve their own metrics.
If we want medical education to function as a true continuum—with an end goal that benefits society—we have to clearly define what that goal is and then build incentives that align with it.
NP:
I love that. Medical education has been thoughtful about reexamining how we measure students and residents because we know metrics shape behavior—teaching to the test and all that. But zooming out, the system itself still struggles with those same metric-based pitfalls.
It’s wild that a school can proudly say it graduates 100% of students who enroll. That’s not necessarily a good thing—it might even be alarming. But since students are also consumers, schools have to market that success rate as a selling point: “If you get in, you’ll finish.” That’s a fundamental misalignment.
So, back to the match—which really drives so much of the medical ecosystem. Why do we still have July 1 contract dates for everyone, even attending physicians? There are so many ripple effects from the match process. Is it broken? Misunderstood? Working as intended? It’s hard to tell sometimes.
BC:
I think the language around this is sometimes a little bit messy. From my standpoint, there’s the Match—meaning the official Match with the capital letters and the little “R” in a circle—you know, the matching algorithm. And I think that works pretty well. As you know, there was even a Nobel Prize in Economics awarded to the people who came up with the idea of using this kind of deferred acceptance algorithm to match up the two sides of a market. So, that system—algorithmically speaking—works great. You could argue with a straight face that it can’t really be improved upon.
But then there’s “the match” in air quotes — the whole process writ large. Everything that goes into it: the metric chasing, the CV buffing, the publishing, the networking. And in that version of “the match,” there are a lot of calories burned and dollars spent chasing things that don’t necessarily matter all that much. At the end of the day, you still end up with the same number of winners and losers you would have had if you just pulled names out of a hat. So you start to wonder: are we really getting the value out of the residency selection system?
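For readers curious about the mechanics, here is a toy sketch of the applicant-proposing deferred acceptance algorithm that underlies the Match (hypothetical names; the NRMP’s production version also handles couples, multi-position programs, and partial rank lists):

```python
# A toy sketch of the applicant-proposing deferred acceptance algorithm.
# Simplified and hypothetical: one position per program, no couples,
# and every applicant is ranked by every program they apply to.

def deferred_acceptance(applicant_prefs, program_prefs):
    """applicant_prefs: {applicant: [programs, most preferred first]}
    program_prefs: {program: [applicants, most preferred first]}
    Returns {program: applicant}, a stable matching."""
    rank = {p: {a: i for i, a in enumerate(prefs)}
            for p, prefs in program_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}  # index of next program to try
    held = {}                                      # program -> tentative applicant
    free = list(applicant_prefs)                   # applicants still proposing
    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                               # rank list exhausted: unmatched
        p = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        if p not in held:
            held[p] = a                            # tentatively accept
        elif rank[p][a] < rank[p][held[p]]:
            free.append(held[p])                   # displace weaker tentative match
            held[p] = a
        else:
            free.append(a)                         # rejected: try next choice
    return held

print(deferred_acceptance(
    {"Avery": ["Peds", "FM"], "Blake": ["Peds", "FM"]},
    {"Peds": ["Blake", "Avery"], "FM": ["Avery", "Blake"]},
))  # {'Peds': 'Blake', 'FM': 'Avery'}
```

The resulting assignment is stable: no applicant and program both prefer each other to what they ended up with, which is the property the Nobel-winning work formalized.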
NP:
Right, the fluff is where it gets complicated.
BC:
Exactly—the problem isn’t the algorithm; it’s really the residency selection process around it.
NP:
And fairness is such a tricky part. We’ve talked about this in relation to pass/fail metrics too. There will always be winners and losers, but are we being fair in how we decide who ends up where? Because picking random names out of a hat would, in some ways, level the playing field pretty substantially. So, do you believe the residency match process lacks fairness? And if so, how do we bridge that?
BC:
You’re right — a lottery would be perfectly fair in one sense. Everyone who wants to be an orthopedic surgeon throws their name in the hat, and we draw the number of slots available. That would be fair statistically, but it would offend a lot of people’s moral sensibilities because it’s not a system that inspires excellence. If all you have to do is get your name in the hat, why would anyone go the extra mile?
If that were the case, I wouldn’t be writing another one of those case reports for a pay-to-play journal, and I wouldn’t be grinding over USMLE Step 2 CK. And honestly, much of that work doesn’t add value anyway—a lot of it contributes to what I’d call “research pollution.” It fills the literature with low-utility material that makes it harder to find meaningful research. So some of it might have negative value.
But at the same time, we don’t want to create a world where people stop studying or pushing themselves altogether. So what we need is a system that rewards the right things — the things we actually want to see in good doctors.
If, for example, your program values physicians who are exceptional at interviewing patients, then medical schools should focus on teaching and rigorously evaluating that skill. Some applicants will excel more than others, and that’s fine — it means programs can fairly prioritize those candidates. Other programs might care more about procedural skills or diagnostic reasoning, and they can select accordingly.
The problem right now is that graduate medical education — and really medicine as a whole — has a hard time defining what a “good doctor” is. If we can’t agree on that, how can we possibly build fair systems to measure or select for it? That’s a criterion problem. And while it’s true that some of the things we do value are hard to measure, we can’t just default to the easiest metrics because they’re convenient.
NP:
Exactly. That kind of reductive logic — focusing on what’s easiest to measure — creates perverse incentives. I remember when ER performance was measured purely on time metrics, like throughput. The fastest ER often had the worst patient outcomes. We see that again and again: when we optimize for speed or scores, we often lose quality.
You mentioned AI earlier. Since AI can handle unstructured data and find signals in noise, where do you see it fitting into the match process? Is it going to be used more on the applicant side or the reviewer side? Have you seen any programs already using it?
BC:
I’ve definitely seen a lot of applications that felt like they were written by ChatGPT. You can tell — they’re polished but oddly inauthentic. It’s the same with some letters of recommendation. You get that uncanny valley sense that they were at least partially generated by AI.
As programs start using AI for screening and scoring, we’ll see an arms race. Applicants will be using their best bots to beat the programs’ bots. It’ll be AI versus AI, trying to outgame each other.
NP:
I’ll be curious to see what becomes the “objective” measure in all that. Because if the AI is being trained, someone has to decide what to train it on. What are you telling it to look for? “Find a high-quality candidate”? Okay — but what defines “quality”? You can see how that becomes a snake eating its own tail.
BC:
Exactly. Most residents are great — they’re competent, capable, and hard-working. You could probably swap one resident for another, and the system would still function fine. But the problem residents — the ones who struggle to finish — program directors never forget them. And those issues are rarely about medical knowledge. They’re usually professionalism, attitude, or character issues.
Can AI detect that in an application? I’m not sure.
NP:
Yeah, when I think back on residents who didn’t finish in our program, it was almost always professionalism-related, not knowledge-based. Some were incredibly bright and well-published, but they just couldn’t manage the interpersonal or professional side.
It makes me wonder if program directors are really trying to find the best residents — or if they’re just trying to avoid another bad experience. That avoidance mentality can easily shape their choices year after year. Do you see that psychology at play?
BC:
Absolutely. For many program directors, their biggest fear is having their colleagues come to them saying, “Why did you take this guy? He’s the worst.” So, to protect themselves, they rely on defensible metrics — glowing letters, high scores, dean’s praise — anything that lets them say, “Hey, it wasn’t me who made the mistake. Everyone said this applicant was great.”
It’s a very human instinct — but it’s one that shows just how tangled our incentives have become in the residency selection process.
NP:
Before we start some rapid-fire true/false questions, what’s one prediction you have for Match Day 2026? You called 2025 pretty well — any surprises for the field?
BC:
I’ve been puzzling over the initial application data—it seems like fewer U.S. MDs are entering the match, even though more schools are opening and numbers should be rising. Some hypotheses: pass/fail Step 1 causing students to delay graduation, or more grads skipping residency entirely to pursue industry, tech, or “doctorpreneurship” paths. We’ll see how the match numbers compare to last year once the dust settles.
NP:
Those alternative careers, like biotech or tech, are becoming more common.
BC:
Yes — especially since COVID, I’ve seen students enter med school with the plan to eventually pursue non-clinical careers. The private insurance industry is booming, for example, and it offers a decent salary with 9-to-5 hours and no call.
NP:
Okay, let’s move into rapid-fire true/false questions.
The match works exactly as intended
BC:
True
NP:
Step 1 going pass/fail made things better for students
BC:
True
NP:
Prestige still beats performance in residency selection
BC:
False
NP:
Most students overestimate how competitive their specialty is
BC:
True
NP:
The best predictor of being a good doctor is not your Step score
BC:
True
NP:
Do students really understand their competitiveness?
BC:
They do. Students may say they have no idea, but behavior tells a different story: they send preference signals to programs where they are realistically competitive. The median student knows where they stand; misjudgment tends to hurt only a few.
NP:
Why do applicants rank so many programs? Isn’t there a point after which additional ranks don’t improve their chance of matching?
BC:
Correct. NRMP charting data show that match probability plateaus after a certain number of interviews. You only need one program that ranks you highly. Some applicants fail to match despite many interviews because their interview performance left them ranked low everywhere. There’s also a perception effect: if the average applicant is doing X interviews, everyone feels they need to do slightly more, and the “average” keeps rising.
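As a rough illustration of that plateau, assume (unrealistically) that each interview converts to a match independently with the same probability; the p used here is illustrative, not NRMP data:

```python
def match_probability(k: int, p: float = 0.3) -> float:
    """Chance of matching with k interviews, each an independent p chance."""
    return 1 - (1 - p) ** k

for k in (4, 8, 12, 16):
    print(f"{k:>2} interviews -> {match_probability(k):.1%}")
# 4 -> 76.0%, 8 -> 94.2%, 12 -> 98.6%, 16 -> 99.7%: the curve flattens fast.
```

Real interviews are correlated (an applicant who interviews weakly tends to do so everywhere), which is why extra interviews help even less than this toy model suggests.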
NP:
Interesting. Last question: what’s one thing you’ve changed your mind about regarding the match or medical education?
BC:
I remain bullish on the match algorithm — it works well overall. Some applicants might benefit from a system without a match, where they could negotiate earlier or “sell at a lower price,” but in general, the match is better. Match rates mostly depend on applicant numbers vs. positions, with fluctuations largely driven by international graduates.
NP:
Fair enough.
BC:
Thanks — happy to do this anytime.
NP:
You can catch The Podcast for Doctors (By Doctors) on Apple, Spotify, YouTube, and all major platforms. If you enjoyed this episode, please rate and subscribe. Next time you see a doctor, maybe prescribe this podcast. See you next time.
Check it out on Spotify, Apple, Amazon Music, and iHeart.
Have guest or topic suggestions?
Send us an email at [email protected].