Med-Peds hospitalist, residency program director, and PhD candidate Dr. Ben Kinnear discusses how time-variable training affects learning among students and residents, and why medical training must evolve to ensure residents are prepared to provide high-quality care for patients. Is time-variable learning the best path forward for quality medical training? Will unionization of residents become the norm in the foreseeable future? Dr. Kinnear walks us through his thoughts on competency-based learning and how it can lead to more effective results for learners.
Here are five takeaways from the conversation with Dr. Ben Kinnear:
1. Competency-Based Education
Dr. Kinnear discusses the shift towards competency-based education in medical training, emphasizing the need to focus on educational outcomes rather than standardized processes. He argues that training should be flexible to accommodate different learning speeds, suggesting that some residents may need more or less time to achieve competency.
2. Time-Variable Training
The episode explores the concept of time-variable training, where the duration of residency is adjusted based on individual progress. Dr. Kinnear highlights pilot programs in the U.S. and Canada that are experimenting with this model, aiming to align training with individual readiness rather than fixed timelines.
3. Toxic Quizzing
Dr. Kinnear critiques the practice of “toxic quizzing” in medical education, where learners are put under stress through aggressive questioning. He argues that this method is not supported by evidence as an effective teaching tool and can contribute to burnout and negative learning experiences.
4. Psychological Safety in Learning
The importance of creating psychologically safe learning environments is emphasized, where learners can be challenged without fear of punishment or hierarchy. Dr. Kinnear advocates for environments where it is safe to admit not knowing something, which fosters better learning and professional growth.
5. Unionization and Workforce Treatment
The discussion touches on the push for unionization among medical trainees in the U.S., driven by the need for better working conditions and rights. Dr. Kinnear suggests that treating learners as workforce rather than students has led to this movement, highlighting the need for systemic change in how medical education is structured.
Transcript
Dr. Ben Kinnear:
I think that’s one of the reasons why you’re seeing the push toward unionization in the United States because people are starting to say, if you’re going to treat me like workforce and not a learner, I’m going to act like a worker and I’m going to fight for better conditions and better rights. So, it makes total sense when you think about it.
Dr. Michael Jerkins:
Welcome back to The Podcast for Doctors (By Doctors). I’m Dr. Michael Jerkins. And Ned, how’s things going? How’s life?
Dr. Ned Palmer:
I’m Dr. Ned Palmer.
NP:
It’s going great. It’s wonderful. It’s summer. It’s transition season for our interns, new interns and new residents out there. So, we’re seeing a lot of new faces in the hospital. It’s a really vibrant time in hospital-based medicine.
Yeah, and even outpatient—don’t have any bias towards us outpatient doctors—but it’s also an exciting time with folks we’re seeing. I talked with a medical student just today who just went into their fourth year, and they were starting to think about residency applications and stuff. So it’s a fun time. I will say it’s probably more fun being on this side of it than when we were going through it. It wasn’t quite…
The word fun is not what I would have used to describe it.
I had a lot of words that started with variations of F and I don’t think fun was one of them. Like stressful is an F.
Exactly. Totally. What’s interesting about that though, as a good segue, is our guest today: Dr. Ben Kinnear, who spends a lot of time thinking about what is stressful to trainees. How do we educate and train trainees to become great doctors? And I think what’s really fascinating about his work is he focuses on what’s broken and could be improved in our current training system and things that maybe you and I dealt with in training—actively finding out…
MJ:
Hey, actually, maybe we should rethink some of this.
I think my other favorite piece about Dr. Kinnear’s work is around pümpfrage—another F word, just saying—and really diving into actionable changes that are happening in medicine today. These aren’t happening in an academic session on a whiteboard somewhere. These are actual residency programs, training programs, and fellowship programs that are putting these changes into place, learning, iterating, and trying to improve healthcare overall. It’s really not just an academic exercise. This is tangible.
There are programs out there that are making hours-based changes, competency-based changes, time-based changes to their residency programs and exploring what all the different downstream effects are that we’d need to solve for if we were ever to roll this out more broadly.
Ned, am I going to learn what a pump frogger means? Do I have to keep listening? Or am I just… I’m still guessing. What is that?
NP:
You gotta stay tuned. I think Dr. Kinnear describes it better.
MJ:
Okay, I guess we’ll have to talk to him. Excellent. We are happy to invite to the podcast Dr. Kinnear. Let’s get to it.
We welcome Dr. Ben Kinnear to the podcast. I’m very excited to have him here as a guest. A few things about Ben: Ben is a Med-Peds hospitalist based in Cincinnati, Ohio. He’s also the program director for the Med-Peds residency program there. He has lots of interests, including competency-based medical education, novel assessment strategies, and coaching. He’s also a PhD candidate studying validity argumentation and argumentation theory, which I can’t wait for him to explain to me.
He’s also well-balanced and does a lot of things outside of medicine. He spends much of his free time with his wife and two daughters hiking, playing board games, and traveling. I think one of the most important things about Ben is that not only is he a St. Louis Cardinals fanatic, but he also believes that mint-flavored ice cream is the best ice cream flavor that exists in the world today.
Dr. Ben Kinnear, welcome to the podcast. Did I get that right?
BK:
Yeah, so great to be here with you guys. Boy, that one hurt. That one hurt. I do have to offer that correction. I think that mint ice cream makes no sense. I think mint should be reserved for toothpaste and gum. And why anybody would ever want to have it as a dessert makes no sense to me. Candy canes don’t make sense to me beyond decoration. It just doesn’t make any sense. Yeah, everything else was great.
Good content, Ben. Thanks for—
MJ:
I gotta get with our team that writes these intros. I’m so sorry I messed that up. But seriously though, Ben, we are very excited to have you here. I think a lot of our listeners deal with a lot of issues related to medical education, maybe not currently, but how they were educated to become doctors. And I couldn’t think of someone who has more interesting takes and more insights than yourself on this topic. So we’re happy you’re here.
BK:
I’m so excited to be here and I don’t know if the listeners know, but I go way back with you all. I was your Associate Program Director during training. So speaking of medical education, I got to see you two through your highs and your lows of training when you were in the thick of it and have many a story that I probably am not allowed to share because this is your podcast and I’m sure it would get cut if I told them. Yeah, it’s so fun to see you all doing what you’re doing and changing the world and excited to have this conversation with you.
MJ:
Well, I’m just excited that there were some highs in there. So thank you for that. I thought it was mostly lows. So that’s very, very exciting.
So one of the things I wanted to start talking about with you, Ben—something we get a lot of questions about—is this concept of competency-based education. I could explain it in a very dumbed-down way, and then you can tell me how wrong I am. Maybe that’s a good way to introduce this. It’s basically the idea that in training doctors, not everybody learns at the same speed. So why do we have a set time that everyone has to complete, after which we all of a sudden say, you’re ready? Is that a dumbed-down way to think about it?
BK:
No, let me reframe a little bit. So competency-based education, which I’m sure many of your listeners have heard that term, has kind of turned into a nonsense buzzword. At its core it is a philosophy of training that says: let’s focus on the outcomes of education, which sounds so obvious, right? But it’s interesting—that’s actually not how education has worked for most of modern civilization.
Most of the time, education is focused on standardizing the process. So let’s get a standard curriculum, everybody does that. And then if you come out the other side of it, we presume that you can do what you say you can do. Competency-based education, which started sometime in the mid-20th century in the United States, says, let’s make sure that the outcomes of training are getting where we want to go.
And so once you start with that foundational principle, you start getting to tensions like: okay, if we want everybody to hit a certain outcome, the only way training can last the same amount of time for everyone is if everybody progresses at the same rate. And now that we’ve studied that, we know it’s not true. Anybody who’s tried to learn a skill knows that we don’t all learn at the same rate.
If the three of us tried to learn to juggle or learn to solve a Rubik’s Cube or really anything, we would progress at different rates. And medical education is no different. People come in at different starting points and they have different rates of growth and different struggles. And so if you want to standardize the outcomes of training, the only way to do that is to be flexible in other places.
That’s where I’ve spent a lot of time thinking about in recent years—if we really want to make sure that every single person who graduates medical training is truly ready to provide high quality care for patients, then we probably should rethink how training is structured. Because right now there’s some pretty good evidence that medical education, whether it’s medical school or residency or beyond, not infrequently graduates people that we don’t fully trust to do the job.
Part of the reason for that is because we feel obligated with these time-based structures. So I’ve spent a lot of time thinking about what would it look like if training were more flexible in timing, where, for example, maybe some people need three years of residency, maybe some people need four, maybe some people need two—even if they’re doing the same type of training. And that obviously brings up a whole host of challenges and questions.
So competency-based education is all about the outcomes. And once you pin yourself to that, there are lots of downstream effects that follow.
NP:
What areas have you seen the most variability in as you’ve peeled back competency-based training? Where do you see the variability? Because we went through very structured training—two months of NICU, a month of whatever. There were certain blocks on social determinants of health. There were certain blocks on QA/QI, which I remember doing with you personally. Where’s the variability? Is it clinical? Is it the health-adjacent things that are critical to being a doctor but not necessarily clinical?
BK:
Yeah, I think it’s everything. For some people it’s the big picture—are you ready, are you not, those kinds of holistic judgments we make. But there are people who’ve done studies on things like learning to read an EKG or learning to read an ankle x-ray. There’s a guy who’s currently at Harvard, Martin Pusic, a pediatric emergency medicine doctor.
He and others essentially study learning curves—what does it look like in terms of the skill you gain over time with repeated practice. And he’s shown even with specific skills, there’s a lot of variability between people. That idea translates across specific skills, holistic things, all sorts of stuff. Whether it’s communication skills, specific procedural skills, clinical reasoning—I think across all of it, there’s a wide range of variability.
I think what’s challenging is how fast we grow is really context dependent. You can’t always look back at how somebody did in medical school or undergraduate and say, this is how they’re going to grow in my context. So much of how we do depends on the context we’re in. Like you two—we really thought both of you were going to thrive here. And then, as Michael alluded to, once you got here, it was just a disaster.
So hard, yeah.
BK:
It was a disaster. Who could have predicted that? You both looked great on paper when we were taking you into the program.
MJ:
I hope you learned a lesson in there for your selection process.
So, I guess there have been some medical schools that have varied the time, the duration at which they graduate students. Have you seen anybody in the residency space actually take up this cause and experiment with it?
BK:
Yeah, there are a few in the United States. There are a few pilots—Mass General and Brigham and Women’s, and Boston Children’s up in Boston have a pilot going. It’s called Promotion in Place. There’s another one that actually involved multiple pediatric institutions called EPAC where they did some time-variable training that bled into residency as well. We did one here in Cincinnati with our internal medicine residency.
And there are a few others out there—one in surgery, one in plastic surgery. So there are little pockets of people trying to figure out how to make this work. One of the challenges is that so much of how we do education is based on time. How we pay for education is based on time in training, and it’s very rigid. How we staff our hospitals is based on time in training, because we tend to use learners as workforce. And if you start to build flexibility in, it gets really challenging to staff things like clinics, wards, and consult teams.
So basically our entire world is based around this time-based education model. And when you try to deviate from that, it gets really tricky to do beyond little pilots. They exist and they’re out there, and I think it’s a good first step in trying to understand the educational effects of time variability—what it does to the mind of a learner, the practical implications, the challenges, all sorts of things.
I will say Canada is a little bit ahead of us in this. They have something called Competency by Design, which is the broad term for their competency-based initiatives.
NP:
Zed.
BK:
Nope, nope—Canada did their CBD before we did the CBD thing, so I think they own that. But yeah, they’ve been doing more time variability longer than we have, on a broader kind of national scale. But it just makes so much sense when you think about how humans work. Why would we ever build a system that presumes we all grow at the same rate? It doesn’t make sense.
And I love sports, so I tend to think in those terms. Anybody who knows how professional sports work—when you get pulled into a minor league system to prepare for a professional team, you’re not on a rigid time-based track. You train, progress, and grow until you’re ready. And some people are never ready, and they don’t make it. I think those are two things we could do better with medical education. One, give people the time they need—shorter or longer. And two, do a better job of helping people when they’re not going to make it.
As you all probably see given what you do, there’s a huge financial risk people take going into medicine. That can make you feel trapped when you start to realize you might not make it. And then there are conflicting pressures.
Medical schools and residency programs see attrition as a negative mark against them. Failing people out is seen as a flaw in the program, as though you ought to have perfect selection. It’s almost laughable in medical schools, where they’re looking at individuals in their early 20s. You have no idea what their competency or capabilities are beyond undergrad. Maybe you’re a little bit better at it in residency—two notable exceptions being you two—but really it’s all based on this assumption of perfect selection.
So the metrics end up being the tail wagging the dog. If your metric is “no attrition” and you optimize toward it, then failing people hurts your program’s ability to recruit trainees next year.
MJ:
100%. I think that medical schools and residency programs are under tremendous pressure not to fail people and to get it right. There’s program selection of students, but there’s also learners selecting their career. Once you make that leap and take out loans, you’re committed.
BK:
There’s this idea in education called the compassionate off-ramp—how do we give people an exit once they realize they’re not going to make it, or maybe they just don’t want to do it? Maybe you get in and think, “This isn’t what I want to do with my life. Maybe I want to start a financial company.” But how do we do a better job than just saying, “Sorry, you didn’t make it, good luck with all your debt”?
So I think there’s selection pressure both ways. Institutions and programs feel like they have to get it right and they’re judged on whether people fail or progress to residency, fellowship, or prestigious careers. Learners feel tremendous pressure to pick correctly, and once they’ve chosen, they’re locked in.
So that’s why I think it’s so amazing—I’ve heard about your program, that Panacea Financial has guaranteed every person who decides they don’t want to do medical school anymore that you’ll forgive their loans. Is that true?
MJ:
That is a very interesting take and unfortunately not true, but I appreciate the suggestion, Dr. Kinnear.
I did have a question, though. Dr. Carmody, who was on the show earlier, mentioned some talk of lowering work hours—from 80 hours a week averaged over four weeks to something less. The reason I bring this up is that it introduces another time factor. If you lower duty hours, could you limit growth for certain individuals in time-variable training, and maybe even extend the time it takes to be fully trained? Have you seen that? I don’t know whether Canada has duty-hour restrictions and what the impact is.
BK:
Yeah, I actually don’t know about duty hours in Canada either. But you’re already seeing this in the U.S. The American Board of Family Medicine is looking to potentially extend training programs, and emergency medicine has gone from three-year toward four-year programs.
I do think there’s always a tradeoff. If you reduce work hours, can you get the experience and training you need in the same number of years? For some people, yes. For others, almost certainly no. So it does become a tradeoff.
Imagine if we had a system with enough flexibility that you could choose your track. Do you want the pressure cooker of 80-hour weeks to shorten training by a year? Or do you want to work fewer hours, have time for family, other interests, even your mental and physical health, and extend your training longer?
That would be a wonderful choice to give people. But there are two fundamental pressures making that hard. One, graduate medical education is funded through government funds, tied to spots and institutional caps. It’s a very rigid system. Two, we use learners as workforce. Hospitals rely on residents to provide patient care. If you start varying how many residents are available, it messes with the whole system.
I think that’s one reason you’re seeing the push toward unionization in the United States. People are starting to say: if you’re going to treat me like workforce and not a learner, I’m going to act like a worker and fight for better conditions and rights. It makes total sense. And if you make the argument around education, maybe we should start treating learners more like learners, not just workforce.
NP:
I feel like we’ve been slowly rebelling against the Osler model for more than a hundred years. House staff were called that because they lived in house. Interns were interned. Residents lived there. With the rise of unionization and the focus on workforce, are unions tackling work hours specifically? I’ve seen their talking points on work quality and labor conditions, but I’m not sure about hours. Have you seen unions going after that as a metric?
BK:
That’s a great question. To be completely honest, I’m not an expert in the world of unionization. We aren’t unionized here in Cincinnati yet. I say that because I think most places will be unionized in the future. So I don’t know exactly what specific gains or benefits different institutions have gotten from that.
I do think that many institutions, unless there are other pressures, will continue to schedule residents to the maximum allowable degree based on whatever cap the ACGME sets. Right now, the ACGME—the accrediting body—says you cannot work more than 80 hours per week averaged over four weeks. And you see a lot of residency programs bumping right up against that—not on every clinical rotation, but certainly on the busy ones. Until there’s an incentive to move away from that, you’ll keep seeing it.
Unionization, in my mind, could push institutions to move away from that, but that’s probably very case-by-case. I haven’t heard of any organized labor unions advocating directly to accrediting bodies. Maybe that exists—I just haven’t heard about it.
And as long as residents are revenue-generating—or maybe more accurately, revenue-protecting—you’re going to have that incentive to push them. Residents help keep the lights on, keep the wards alive, and prevent the need to staff with more expensive providers.
That’s a really interesting question, though. I actually tried to dive into this—do residents save institutions money or not? And it’s harder to answer than you’d think. In fact, one of your past guests, Dr. Carmody, had a great article about this on his Sheriff of Sodium blog. If people haven’t read his blog, I highly recommend it.
When you look at all the money put into a residency program, and then try to estimate the savings from having residents staff services—rather than attendings or advanced practice providers—it’s really complicated. Some places say yes, residents save money. Other places actually say they cost money.
So my takeaway has been: it’s hard to tell. And I don’t want to speak for Dr. Carmody, but I think his takeaway was similar. Your gut reaction is that residents must be saving tons of money. But when you add in the costs of running a training program, it often comes out in the wash—or is highly dependent on the service line. Internal medicine may be different than surgical or interventional specialties, for example.
At our program, when you two were here, I remember you refused to do ICU rotations unless you got a weekly mani-pedi. We had to actually pay a company to come into the resident lounge and start hacking away at your cuticles. And Ned, your cuticles—I’ve heard they’re like concrete.
NP:
That was the best they ever looked. I’ve never gotten back to that level, man. I needed institutional support.
BK:
This is an advertisement for Cincinnati Med-Peds: come here and we’ll get you personalized mani-pedis while you’re on call.
MJ:
Somehow no one told me about this on interview day. Ben, I think you were the one who showed us around. I’m a little disappointed.
BK:
Well, I saw your cuticles—you didn’t need them. But have you seen Ned’s toenails? They’re…
NP:
Thank you.
MJ:
So I’m curious: since we left, have you had residents who go four or five months without reporting their duty hours? Hypothetically, maybe someone on this call once did that. Not saying who.
BK:
Yes, there’s always at least one person who doesn’t log them. And I’ll be honest—duty hours are one of my least favorite things as a program director. The idea behind them is wonderful: protecting residents. But in practice, at almost every program, it turns into a nag session.
You’ve got residents working hard, exhausted, focusing on patient care, and then we’re reminding them: “Don’t forget to log your hours.” It’s soul-sucking. Even when logged, the data are usually inaccurate. Some programs are experimenting with automation, but then you risk creating a “nanny state” where you’re tracking logins and movements. That’s a balance too.
MJ:
Yeah, instead of punching a clock, I just put in 8–5, Monday through Friday, and no one ever asked. Not advocating that if any residents are listening—but it worked out okay.
BK:
Please don’t do that. But when duty hours are done well, they’re actually a powerful tool. Programs can take the data back to the institution or a rotation and say, “Our residents are breaking duty hours here. Something has to change.” We’ve had that happen and seen real improvements. The problem is the reliance on residents to self-report, which makes the data weak.
MJ:
So, in the programs experimenting with time-variable training here in the U.S., what’s the actual measure of success?
BK:
Great question. What I’ve seen—and from our own pilot here—it’s clear that it’s possible educationally. Assessment systems are advanced enough that you can make defensible decisions about readiness for practice without relying solely on time in training.
That said, time does matter. Even if someone performs at a high level after three months, it’s hard to say they’re ready to be an attending. But we can introduce variability. And many learners respond really well—they seek out assessment, crave feedback, and appreciate being pushed to the edge of their ability.
Honestly, both of you could probably have advanced faster than four years. If you’re one of those advanced learners, progressing at your own pace is motivating and fruitful. On the flip side, if someone needs more time, we should allow that without stigma.
From our pilot, we learned two other interesting things. First: when you put time variability on the table, some learners focus too much on speed. Residency is hard—you want to finish. But that can shift focus away from growth and curiosity, and make people less willing to be vulnerable or ask questions. Not everyone did this, but some did, and it’s something to balance.
Second: the role of competency committees. These are the groups that decide if someone is ready to graduate. There’s evidence that…
BK:
A lot of times, graduation decisions aren’t very active. Competency committees don’t necessarily look for evidence that you are ready. Instead, they look to make sure there isn’t evidence that you’re not ready. In other words, they’re satisfied with an absence of red flags rather than requiring proof of competence.
That’s a consequence of our time-based system. Once we moved to a time-variable model, that changed. Suddenly, our competency committee had to say, “We need evidence that this person is ready.” If a learner hadn’t demonstrated it, they weren’t going to move forward.
That shift—removing time as the safety net—forced committees to make active, deliberate decisions. And I think that’s exactly what we want. If my child is being cared for in a hospital, I don’t want their doctor to have graduated simply because there were no red flags. I want them to have shown they are competent.
So reframing the mindset from “they’ve been here long enough without problems, they’re probably fine” to “we need clear evidence they can do the job” really had a profound effect.
It’s like you wouldn’t want a pilot announcing, “I just graduated flight school—they told me there were no red flags.” That’s not very reassuring.
The problem in medicine is that we’ve relied on time as the measuring stick for so long that the idea of demonstrating active competence has withered on the vine. It’s only when you take away that time-based crutch that people start to engage seriously with evidence of readiness.
And for listeners interested in the scholarship here, Dan Schumacher has published some really interesting—and honestly, alarming—studies showing that many places really are graduating people based on nothing more than the absence of red flags.
NP:
That makes me think—has anyone tried layering in other opportunities if time-variable advancement isn’t practical? Like in Med-Peds, even if someone is ahead of the curve, it’s still a four-year program. Could that extra time be redirected into resume-building “carrots”—say research, a certificate, or an additional degree—while still finishing on schedule?
BK:
That’s a really thoughtful idea. I haven’t seen much explicitly bridging people into fellowship, but we already have a version of this in physician-scientist training programs.
For example, instead of doing the full three years of internal medicine, you can sign up from the beginning to do two years and then move directly into your cardiology fellowship. You start your fellowship earlier and commit to a research-focused career. But that’s a decision you have to make upfront, not something that flexes dynamically during training.
The main barrier here is regulatory. Many time requirements are tied directly to board eligibility—you must spend a certain number of months in accredited training. Unless you’re in one of those specialized physician-scientist tracks, you can’t just decide mid-course that you’re ready and skip ahead.
But if certifying and accrediting bodies loosened those restrictions, there’s no reason you couldn’t create more flexible models—where accelerated learners use their “extra” time for advanced scholarship, degrees, or other meaningful pursuits.
MJ:
That reminds me of a funny moment—I once led a team where my attending was an MD/PhD, my intern was an MD/PhD, and both of my med students were PhDs. I was the “senior,” but probably the least credentialed person in the room. At the end of the month they gave me a certificate that said I was the most important senior on the team. Very kind, but it really drove home the whole physician-scientist theme.
On a related note, Ben, you’ve been active in producing scholarship yourself. One of your recent papers stirred up quite a debate on X (formerly Twitter). Colloquially, the practice is called “pimping”—asking learners questions on the spot, often aggressively. You and your colleagues published a piece reframing this as “toxic quizzing.” Could you talk a little about what you found?
BK:
Yes—thanks for bringing that up. And let’s use “toxic quizzing,” which is the term we used in our paper. Interestingly, some people try to rationalize the word “pimping” by tracing it back to the German prüfen or Pümpfrage—meaning to question aggressively. But in fact, the way the term arose in medical education really does have misogynistic, power-laden overtones.
So we wrote about this in the Journal of Hospital Medicine in their “Things We Do For No Reason” series. That section usually focuses on clinical practices, but we applied it to education. We reviewed the literature and showed how toxic quizzing has negative effects on learners, while the supposed benefits are largely anecdotal.
I give lectures on how to ask clinical questions well, and whenever I present this literature, without fail there are people—often senior physicians—who insist toxic quizzing worked for them. They’ll say, “I’ll never forget the histology of Barrett’s esophagus because someone humiliated me about it.” I found it remarkable how deeply ingrained that belief is—that distress somehow equals learning.
So we pitched the paper, it was accepted, and once it came out the same debate exploded online. Some people embraced it, but many reacted angrily: “I was toxically quizzed, it was good for me, look at me now.” That mindset really misses the point, but at least it sparks reflection.
NP:
I remember in our own residency orientation we were explicitly taught about eustress versus distress—the idea that a certain level of pressure can be motivating, but humiliation crosses the line. And as we transitioned from overwhelmed interns to senior residents, there was still an expectation that turning the screws during questioning had some value.
BK:
The fact that you remember that, Ned, is the reason why you’re my favorite.
NP:
Yep, the evidence is piling up. We all have our strengths, okay? We all have our strengths. I'm the rule follower between Michael and me. That's, I think, the other important takeaway. Yeah.
So, Ben, these people who are dissenting—do they have any valid points? Is there any validity to their argument, or is it just a disagreement over the definition of what we're talking about?
BK:
Yeah, I think it’s both. There’s a rationale to it, right? It’s not an irrational argument. The idea is that if I put some pressure on you, maybe you’ll pay attention a little more, and maybe it’ll stick in your brain better because you’re in a heightened emotional state. There is some evidence that in life, when you have a highly emotional moment, it imprints in your memory better—there’s good cognitive science to back that up. And it makes a kind of rational sense that if I get you terrified of me, you’re going to pay attention to me, right? Because you’re afraid.
And so there’s a rationale there. But the downside is, number one, when you look at actual learning, not just memories but actual learning, once you put somebody in a state of threat, they actually don’t learn as well. The whole “flashbulb memory” thing with traumatic moments is true. But it doesn’t hold when you’re trying to learn information—it can actually hinder learning.
And what a lot of these people don’t think about are the downstream effects of putting people in a constant state of stress. So Ned just said the words distress and eustress. One of the things that makes this so hard is everybody has a different line of how much stress is too much stress, how much threat is too much threat. And that is not a line we can see in one another. For those of you listening in medicine, it’s like a Frank-Starling curve for heart failure. There’s a sweet spot, but once you go beyond it, performance falls.
There are so many factors that dictate where that optimal point is for people, that when you start intentionally causing distress and reinforcing hierarchy, I think more often than not, it causes harm. It contributes to burnout and regret for being in medicine. There are qualitative studies out there that show this. So if it’s not effective and it has a high chance of causing harm, why are we doing it?
Instead, we should create environments that are psychologically safe. Psychological safety doesn’t mean easy or soft—it means you can be challenged and fail without punishment or lasting damage. We always say: safe, but not soft. You’ll be challenged. You’ll get asked questions. But we do it without hierarchy and harm. It’s safe to say you don’t know. It’s safe to get things wrong. That’s different from toxic quizzing.
To your point about definitions, Michael, people conflate things. Some people on X read the article and said, “Oh, you can’t ask questions.” But that’s not what we said. We said you should ask questions and apply some stress, but do it in a way that is psychologically safe—which is very different from toxic quizzing. And I want to be clear: some people believe making learners upset is good. This paper was written for them to say, I think you’re wrong.
I’ve thought about this a lot, especially working with interns. Not just interns, but even practicing doctors. There’s this ingrained training that saying “I don’t know” is weakness. But what does that mean when you’re out of training, caring for a patient, and you’ve internalized that you can’t admit uncertainty? You might reflexively give a half-baked plan to project confidence instead of pausing to research or discuss. That can put patients in danger.
That’s why your approach limits some of those maladaptive behaviors.
BK:
I 100% agree with you. Research in med-ed shows that when people are asked what attribute matters most before someone can care for patients independently—knowledge, efficiency, communication, accountability—they say all are important. But the number one is: do you know your limits, and will you seek help? That’s it.
You have to be smart enough, efficient enough, a good enough communicator—but all of that can be made up for if you know your limits and ask for help. Even the best resident will eventually see something brand new in practice. You need humility to say, “I don’t know what’s going on, I need help.”
If we model that as supervisors, learners will carry it forward. Then, in practice, they won’t feel like they have to fake it. And I do think younger attendings are getting better at this. That performative bravado still exists but is slowly washing out of medicine.
The “I don’t know” issue extends outside clinical settings too. Sometimes the competency-to-confidence ratio is off. Doctors think “I don’t know” signals weakness everywhere—even financially. We’ve seen people make poor financial decisions because they didn’t seek help. That affects where they practice, their patients, their well-being.
I say this all the time—the most important phrase is: “Don’t know, but let me find out.” That applies clinically and beyond.
NP:
Yeah, I don’t know if that was me, but it sounds smart. So I’ll take credit. You’re right—even in finance I feel pressure to pretend I know what I’m doing. My wife and I were house hunting recently. I reached out to Michael with questions and honestly, I was embarrassed. I’m a 40-year-old attending of 10 years—I felt like I should know. But I’ve only bought one house, 15 years ago. I’m not financially savvy like you all.
This happens in so many parts of life. If we’d just put down our pride and ask for help, we’d live better lives—and others would too. Because when you admit what you don’t know, you realize lots of others don’t know either.
I can’t tell you how many first-time homebuyers I see—residents especially—who say, “This is complicated, I don’t know what I’m doing.” And I tell them, “Me neither. I’ve only done this once. It’s hard. Reach out to people who know what they’re doing.” That’s important to hear.
I was also invited to speak to my faculty group in Boston recently about retirement and investment planning. I’m far from an expert there. But they wanted a 101-level talk to cover the basics. It was a great chance to say, no matter where we are in our careers, we can all pause and ask, what don’t I know? How do I explore that? It was really fun for me.
BK:
I get the message, Ned. We’ll invite you back to Cincinnati. Boston loves you more than we do. I get it.
NP:
Pretty obvious. I missed the chili more than anything, if I’m honest. Sorry—almost said St. Louis.
Let me, for fun, take a contrary position. I don’t necessarily believe this, but let’s go here for a second. How do you balance creating a safe space without it becoming soft? There’s an argument that if you cut work hours, they’ll keep cutting. If you create safe spaces, they’ll get “too safe.” How do you balance that—still meeting objectives without creating softness for softness’ sake?
BK:
Yeah, this is a great question. I won’t give a definitive answer, because I don’t have one, and it’s a hot-button issue in medical education right now. But there are two things I’ll refer people to if they want more reading. One is a widely discussed article by Dr. Lisa Rosenbaum in the New England Journal of Medicine called “Being Well while Doing Well,” or something along those lines.
BK:
And it starts to get into the idea of, as our understanding of harm and trauma in the learning space evolves over time, how do we navigate the boundary of what are necessary discomforts, what are unnecessary discomforts, and what is trauma? Some things are clearly trauma—racial microaggressions, gender microaggressions or macroaggressions. But then there comes a point where it’s like, okay, if we make training too easy or bend everything toward comfort, do we lose something? More importantly, are we not preparing people for the real world? Because as both of you know, while residency is hard, being an attending is just as hard.
That line is not clear, and the discussion is a challenging one. I think it’s in tension with the wellbeing movement we’ve been trying to navigate the past several years. Promoting wellbeing is crucial. But so is rigorous training. Sometimes those feel in conflict, but we should make it “and,” not “either/or.” That paper navigates that tension. There’s no clear line—it’s different for everyone, which makes it challenging.
Another thing that makes this difficult is we’re anchored in the paradigm in which we trained. What was “enough discomfort” for me is tied to what I experienced. You all are probably the same. But the window of what counts as necessary discomfort versus softness shifts over time, and we risk being left behind—becoming the old man yelling at the clouds. The best approach is to remain humble, talk to learners where they are, track training outcomes, and balance wellbeing with rigor. If people aren’t getting where they need to be—that’s when competency-based approaches matter. Then you have to ask: if we’re reducing work hours or work compression, how else do we ensure readiness? Extending training? Shorter bursts of intensity? I don’t know, but we need humility and curiosity in those conversations.
MJ:
How do you, in time-variable training, reconcile that with the rigid calendar of medicine? July 1st and June 30th are such conserved dates. And contracts are often signed six to twelve months ahead of presumed graduation.
BK:
In the pilot programs, some used “promotion in place.” Learners stayed at the same institution but were granted more autonomy, functioning as attendings before formally graduating. Other programs did let people graduate and move out faster. During COVID, many schools let students graduate as early as February or March 2020 so they could help in the pandemic, then start residency in July. That raised the question: if we could do that then, why not all the time?
A big part of the rigidity comes from the match—the NRMP match. For those unfamiliar, it’s a once-a-year process where everyone applies and matches to residency at the same time. Even med school admissions are annual. But it doesn’t have to be that way.
In other countries, there are rolling admissions and rolling matches—on-ramps and off-ramps throughout the year. That makes variability easier. It also reduces the comparison mindset—if everyone starts at different points, you’re not racing against peers in the same way. If we fundamentally rethought the education system—including funding models—we could have a more flexible system with rolling entry points, instead of this high-stakes, once-a-year moment.
MJ:
Can I go back to toxic quizzing? Hypothetically, if studies showed it actually made you a more competent doctor, would you adopt it? I never did it and still don’t. But if evidence said it was best practice, I just don’t think people would change. On the flip side, many can’t teach outside of it no matter what the evidence says.
BK:
I would rapidly adopt it—mostly because I’m filled with rage and love screaming at people. Getting my PhD is really just about having a sense of superiority. If there were evidence that making learners feel worse improved outcomes, that would unleash my inner Hulk.
But seriously, it’s an interesting question. If studies showed making learners uncomfortable led to better learning—within HR-allowable bounds—I’d want to see two things: one, did they measure the negative consequences (stress, burnout, depression, attrition)? Two, how big was the effect? How much benefit compared to the harm? That would be an important conversation.
When I wrote the paper, I wasn’t starting from “I want to be nice.” I started from seeing harm caused by toxic quizzing, often done under the guise of kindness. We cite an article by a surgeon who described toxic quizzing as a vaccine—painful now, but good long-term. I think that’s flawed thinking.
But if Panacea Financial wants to fund the study, I’m happy to find some residents to scream at.
MJ:
So what I hear is you’re not a fan of Kelly Clarkson’s song “What Doesn’t Kill You Makes You Stronger.”
NP:
No, she coined that phrase—it really is a Clarksonism.
I am calling Kelly Clarkson a liar. She’s a liar. I think she should be taken off the air. She’s charming, wonderful on-air. But she’s gone too far, and I’m glad someone is finally calling her out.
This is news—we’re breaking it here on The Podcast for Doctors (By Doctors).
I’m coming for you, Kelly. I’m coming for you.
No, it’s not true.
MJ:
I’m sure we’ll get a response. Maybe we’ll have her on the podcast to debate. I know we’re wrapping up, and you’re busy, so I want to respect your time. But I did have one question, and Ned, I think you had another too.
One of my favorite questions to ask—especially of thoughtful folks like you—is: what’s one opinion you hold as a doctor that you think most doctors would disagree with? Maybe you don’t have any, but if something comes to mind that most doctors would say is crazy, I’d love to hear it.
BK:
Does mint chocolate chip count?
MJ:
That’s probably majority opinion—so I won’t allow it.
BK:
Okay. Well, the toxic quizzing thing is one where I get the most pushback. Otherwise, most of my views align with what people think. But here’s one: a lot of doctors hold what you’d call a positivist or post-positivist worldview—that there’s an objective truth about the world we can measure. As an educator, I don’t think that’s true. I don’t think there’s an objective truth about people’s competence or intelligence. Those are socially constructed.
That has big implications for how you approach education. Some educators agree with me, but most doctors don’t think that way. It’s a very social sciences lens. And I’m a social sciences kind of guy.
MJ:
Interesting. Doesn’t sound too crazy to me.
BK:
The other one: learning styles. Did you all ever learn about those?
MJ:
I did. I even recorded lectures to listen to—though I don’t know if I was actually learning.
BK:
Exactly. Learning styles are not meaningful. People say “I’m a visual learner” or “I’m an auditory learner,” but that doesn’t predict how well you’ll learn. The evidence just doesn’t support it.
NP:
Spicy take, Dr. Kinnear.
MJ:
Since you’re a social science guy, here’s one: the whole white cloud versus dark cloud thing. Dr. Warm used to say, “Whether you believe you’re a white or dark cloud, you are.” The idea is your belief shapes your happiness and sense of workload. I know you’ve looked into this. What’s your spiciest take?
BK:
So first, some terminology issues: white = good, black = bad, which is problematic. But the basic idea is that white clouds get lighter workloads, black clouds always get crushed. It’s superstition—the belief that the universe piles more work on you if you’re a black cloud.
Studies have asked: do self-identified black clouds actually get more admissions, codes, or pages? Do peers identify them as having worse luck? The answer: no. No measurable difference.
So why do we hang onto it? I think it’s Pascal’s Wager. It’s better to believe in cloudness because if you’re wrong, the universe punishes you. When I presented this to residents, after showing the evidence, almost no one raised their hand saying they still believed in cloudness. But then I asked, “How many of you will go back and say, ‘We’re going to have a quiet night tonight’?” Nobody raised their hand.
We know it’s not real, but we’re still afraid of tempting fate. That fascinates me—why our psychology clings to superstition even when disproven.
MJ:
Well, Dr. Kinnear, we could talk to you for hours, but we’ll let you go. We’re so grateful for your time. Where can people find you if they want to follow your work?
BK:
I’m on X, formerly Twitter. Technically on Instagram, though not much. Otherwise, not a big online footprint.
But really, it’s been a treat chatting with you both. You were incredible physicians and advocates when you were in our program, and we miss you. The program’s proud of you, but I’m personally proud too. It’s been amazing to see what you’ve built and how many people you’ve helped. Thanks for having me on.
MJ:
We recorded this whole thing just for that validation—we’ve been waiting years.
NP:
Exactly. Thanks, Ben. Always great talking with you.
MJ:
I always learn a ton from him. This conversation really made me think about where medicine could go, how we avoid pitfalls, and how we actually create better doctors—not just ones who’ve “served their time.”
NP:
Right. Residency isn’t incarceration. It should be training.
MJ:
Exactly. And it all comes down to outcomes—what works, what doesn’t, and measuring it honestly. That’s science, and we should apply it to education too. Take time-variable training: can someone graduate early once they’re competent? It’s controversial, but it could save future doctors huge amounts of debt. That’s worth exploring.
And toxic quizzing—it’s messy. The definition is fuzzy. What one person sees as toxic, another might see as normal. That makes it hard to measure or standardize.
NP:
There’s also the engagement piece. We’ve all seen when a learner gets shut down by toxic questioning—the wall goes up, they disengage. Maybe they memorize a fact, but the synthesis and clinical reasoning vanish. We need environments where people feel safe saying, “I don’t know, but I’ll look it up.” That’s how you get engaged, capable doctors.
MJ:
Yes—and ultimately, we want competent doctors who also stay in medicine. Less burnout, more satisfaction. That serves patients and communities.
NP:
Exactly.
MJ:
Well, lots to think about. And the exciting part is, it’s always evolving.
Thanks for joining us this episode. You can catch The Podcast for Doctors (By Doctors) on Apple, Spotify, YouTube, and all the other major podcasting platforms. If you enjoyed this episode or learned anything here today, please take a moment to give us a rating and subscribe so that you don’t miss a single episode release.
To submit topic suggestions, guest suggestions, or questions, you can reach us at [email protected]. As always, thanks for listening—and the next time you see a doctor, maybe you should prescribe this podcast. See you next time.