Girl Doc Survival Guide

EP202: Understanding Cancer Diagnosis: An Expert Discussion with Dr. Kathleen Kerr

Christine J Ko, MD · Season 1, Episode 202

Overdiagnosis and Medical Decision Making with Dr. Kathleen Kerr

In this episode of The Girl Doc Survival Guide, Dr. Kathleen Kerr, a Professor of Biostatistics at the University of Washington, discusses overdiagnosis and medical decision making. Dr. Kerr delves into how patients perceive mammogram results, the difference between overdiagnosis and overcalling, and the challenges pathologists face in diagnosing cancer. She also shares findings from her research on the influence of prior opinions on second diagnoses and the cognitive processes involved in interpreting pathology images. The discussion highlights the complexities and subjectivity in pathology diagnoses and the implications for patient care.

00:00 Introduction and Guest Welcome

00:33 Personal Anecdote on Mammograms

01:25 Understanding Screening and Its Limitations

02:24 Exploring Overdiagnosis

05:59 Research on Dermatopathologists' Perceptions

08:20 Second Opinions in Medical Decision Making

12:26 Pathologists' Diagnostic Process

15:42 Final Thoughts on Diagnostic Criteria

Christine Ko: [00:00:00] Welcome back to The Girl Doc Survival Guide. Today I'm very happy to be with Dr. Kathleen Kerr. Dr. Kerr, PhD, is a Professor in the Department of Biostatistics at the University of Washington, where she also directs the Biostatistics MS Capstone Program, as well as the Summer Institute in Statistics for Clinical and Epidemiological Research. She has researched overdiagnosis and medical decision making, as well as how pathologists interpret an image and make a diagnosis. Welcome, Katie. 

Katie Kerr: Thanks very much for having me. 

Christine Ko: Can you first share a personal anecdote? 

Katie Kerr: Yeah. Being a woman of the age that I am means that over the past 10 years or so, many female friends have started to have mammograms as part of recommended screening. And very often, after a friend had her first mammogram, she would say, Great news, everybody, I don't have breast cancer. As someone who works in the area [00:01:00] of imperfect tests, it really struck me. It was a good reminder for me of the way that regular people think about this stuff: that cancer is something you either have or you don't, and you have a test that either tells you that you have it when you have it, or tells you that you don't have it when you don't. I think the way real patients actually think about these things is important to keep in mind. 

Christine Ko: That's a really good point. What exactly is incorrect about that general perception of, Oh, I had a screening mammogram and I'm great, I don't have cancer?

Katie Kerr: I'm certainly no expert in mammography, but the point is just that these tests are imperfect. Screening is especially good at catching the slow-growing cancers, and slow-growing cancers are the ones that are less important to catch early. Screening isn't so good at catching the very aggressive, fast-growing cancers, which are of course the ones you really want to catch early. So there's a mismatch between what screening is good for and [00:02:00] what you would ideally like to be able to do.
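
A minimal simulation sketch of the idea Dr. Kerr describes, often called length-time bias: a screen at a single point in time preferentially catches cancers that stay screen-detectable longer. All numbers here (the single screen, the 10-to-1 difference in detectable windows) are invented for illustration, not taken from the episode.

```python
import random

# Length-time bias sketch: one screening exam at a fixed time
# preferentially catches tumors with long preclinical windows.
# All parameters are invented for illustration.
random.seed(0)

SCREEN_TIME = 50.0  # the single screening exam (arbitrary time units)
N = 100_000

detected = {"slow": 0, "fast": 0}
total = {"slow": 0, "fast": 0}

for _ in range(N):
    onset = random.uniform(0, 100)  # when the tumor becomes screen-detectable
    kind = random.choice(["slow", "fast"])
    # Assumption: slow-growing tumors stay screen-detectable ~10x longer.
    mean_window = 10.0 if kind == "slow" else 1.0
    window = random.expovariate(1 / mean_window)
    total[kind] += 1
    if onset <= SCREEN_TIME <= onset + window:  # screen falls in the window
        detected[kind] += 1

for kind in ("slow", "fast"):
    print(f"{kind}-growing: screen-detected in {detected[kind] / total[kind]:.1%} of cases")
# Even though both types are equally common here, the screen catches the
# slow-growing type roughly ten times as often.
```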

Christine Ko: Yes. Screening detects more slow-growing things, and it's not really meant to catch the very aggressive cancers. It could, if something just happens to start growing rapidly right before you're about to get your screening exam. So yes.

Some of your research relates to overdiagnosis. Can you talk about overdiagnosis?

Katie Kerr: The standard definition of overdiagnosis is the diagnosis of a disease that is not going to cause morbidity or mortality in a patient's lifetime. For example, a cancer that is so slow growing that, particularly if it's diagnosed in an older person, it is very unlikely to actually harm that patient during their lifetime. 

Christine Ko: Yes. That's the definition I've heard more [00:03:00] recently. The way I always perceived overdiagnosis, I was actually equating it with overcalling. But overdiagnosis, on the pathologic side, is a true diagnosis. It's not an error.

Katie Kerr: So to me, and I would love to hear your perspective, overcalling is basically an error, in the sense that that diagnosis should not have been made; a less severe or even a benign diagnosis should have been made. Whereas overdiagnosis isn't an error. It's the correct diagnosis, yet the patient is not helped by that diagnosis. In fact, the patient is probably harmed by it, because they would have been just fine if that disease had never been diagnosed. 

Christine Ko: Yes. I recently learned that overdiagnosis is actually, pathologically, cancer. I think a lot of my colleagues in pathology and dermatopathology, and also dermatology, and maybe all over medicine, think those [00:04:00] two terms are synonymous. If I overdiagnose a cancer on someone, it meant that I'm overcalling it; that a hundred other people who looked at it would've called it something less than that, and not cancer, but I overcalled it, and I'm wrong. But overdiagnosis means it's the right thing to call it from the pathology perspective, even though, as you said, in terms of the outcome for the patient, it's not going to harm the patient: cause symptoms, morbidity, or mortality. The hard thing for me when I'm looking at a microscopic slide is that I don't have a crystal ball. How do I know how symptomatic something is already, or might be in a week or a month or a year, and how would I know if it's going to eventually kill the patient? I really do think it's a misconception that overdiagnosis is the same thing as overcalling. The problem is we don't actually have a gold [00:05:00] standard of what really is a cancer; it's not black and white. And we also don't have a crystal ball for what's really going to happen to the patient in the future. Whether something is overdiagnosis or overcalling, or maybe both, we can't tell the difference from the path side, at least.

Katie Kerr: Absolutely. It's a really thorny problem, because once you diagnose something, it's very hard to not do anything. It's hard for the physician, and it's hard for the patient. We can very rarely identify an individual case of overdiagnosis, because when there is a diagnosis, it's followed by treatment. So we have no way of knowing if the patient would have been fine had the patient not been treated. The evidence for overdiagnosis comes very much from population-level, epidemiological data. It doesn't come from saying, Oh, look, here's a patient; this patient was overdiagnosed. 

Christine Ko: Yes.

Katie Kerr: Yeah, it's tricky. The first [00:06:00] paper I did on overdiagnosis actually started out as a very simple look at dermatopathologists' perceptions of overdiagnosis. Do they think, for example, that overdiagnosis of melanoma is a public health issue in the US? In our study of dermatopathologists, we found that two-thirds thought overdiagnosis of atypical nevi is a public health issue, about half thought overdiagnosis of melanoma in situ is a public health issue, and about a third thought that overdiagnosis of melanoma is a public health issue. We thought it would be really interesting to look at associations between their perceptions of whether overdiagnosis is an issue and how they diagnosed study cases. For example, I wondered whether those who thought that melanoma is overdiagnosed and is a public health issue in the US were maybe a little [00:07:00] more reserved about giving that diagnosis to our study cases than those who didn't perceive overdiagnosis to be an issue. And actually, our results on that particular question were basically null, which I think is interesting in itself. Those who perceived melanoma overdiagnosis to be a public health issue were no less likely to give that diagnosis to a study case.

Christine Ko: Yeah. I read that paper of yours. That makes sense to me. As we just talked about, overdiagnosis still means that the pathological diagnosis is actually correct. So even though I do think overdiagnosis is a public health problem, when I'm signing out a melanoma that is a thin melanoma, I'm like, Oh, it'd be nice maybe for the patient to not call this melanoma, but it really is a melanoma.

Katie Kerr: Some of my pathology colleagues have similar thoughts that maybe just changing the names would help somewhat. [00:08:00] Because again, to regular people, anything cancer is, for the most part, just terrifying.

Christine Ko: In addition to your research in overdiagnosis, you've written some articles on when a pathologist or dermatopathologist gives a second opinion on a case that already has a first opinion, that already has a diagnosis. Can you talk a little bit about that medical decision making? 

Katie Kerr: Yeah. We did two studies, but I'll talk about the one in dermatopathology. Our study subjects were dermatopathologists, and they interpreted melanocytic lesions. They had a set of lesions to interpret in Phase 1 of the study, interpreting them without any information about how anyone else had diagnosed each lesion. That was basically to set up a baseline for that pathologist and that case. Then there was a washout period, so they should have forgotten their Phase 1 diagnoses because it was some time ago. And then they did another set of [00:09:00] case interpretations. In that second phase, unbeknownst to them, they were interpreting the same cases as in Phase 1, but in Phase 2 we randomly assigned them to see a prior diagnosis of the case by another dermatopathologist. For some of the cases in Phase 2, we said, We don't have another diagnosis, so just give your first opinion. So some of the cases were mimicking giving a second opinion with knowledge of what the first opinion was, and other cases were mimicking giving a second opinion where you don't know what the first opinion was. And again, this was randomly assigned, so it was a fairly strong study design.

When they did see a first opinion during their Phase 2 interpretations, it was always either less severe or more severe than their own diagnosis in Phase 1 of the study. So [00:10:00] we saw a very clear signal that they were influenced in their diagnoses when they saw a diagnosis of another physician. For example, they were about 50% more likely to give a more severe diagnosis in Phase 2, compared to their original diagnosis in Phase 1, if they had seen a first opinion that was more severe than their Phase 1 diagnosis. And we also saw they were influenced towards less severe diagnoses: they were 38% more likely to give a less severe diagnosis in Phase 2 when they saw a less severe prior diagnosis, compared to not seeing any first-opinion diagnosis.

At the very beginning of the study, we had asked them some questions. One of the questions we asked them was, When you give a second opinion, are you influenced by what the first opinion of a case was? And we had a subset who said, I'm not at all influenced by the first [00:11:00] opinion when I give a second opinion. But we saw basically very similar effects even in that subset of dermatopathologists.
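
Figures like "50% more likely" are relative risks. A hypothetical calculation with invented counts, not the study's actual data, shows how such a number arises:

```python
# Hypothetical counts illustrating how "50% more likely to give a more
# severe diagnosis" is computed as a relative risk. Numbers are invented.

# Phase 2 cases shown a first opinion MORE severe than the pathologist's
# own Phase 1 diagnosis:
upgraded_with_prior, n_with_prior = 30, 100   # 30% gave a more severe dx

# Phase 2 cases shown no first opinion at all:
upgraded_no_prior, n_no_prior = 20, 100       # 20% gave a more severe dx

relative_risk = (upgraded_with_prior / n_with_prior) / (upgraded_no_prior / n_no_prior)
print(f"Relative risk: {relative_risk:.2f}")  # 1.50 -> "50% more likely"
```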

Christine Ko: I'm definitely influenced by a prior opinion on a case. What I try to do, when a case has a prior opinion, is look at the case and decide what I think about it before looking at that prior opinion. But then I look at how it's been signed out, and I do take that into account, actively take that into account.

Katie Kerr: That's what my pathology colleagues recommend. On one hand, when you want a second opinion, you want independent assessments. But on the other hand, you want to learn from your colleagues. So I think the ideal situation they've described is just what you described: when you're giving a second opinion, first do it independently, and then look at the prior diagnosis and adjust as you feel is warranted. 

Christine Ko: You might have an opinion on this or not, but the hard thing is that [00:12:00] pathology diagnosis is actually pretty subjective. I think that's something that non-healthcare individuals don't really know. Cancer is just black and white to them: it is or it isn't. But cancer is definitely not black and white from the pathology standpoint. So aside from overdiagnosis, where the pathologist is actually right, there's a fair number of cases where, who is really right? I wanted to talk to you about this because you wrote a paper about how pathologists come to a diagnosis from an image, looking at sources of error in that process of looking at an image, like a microscopic slide, and then coming to the diagnosis. Can you comment on that research? 

Katie Kerr: I'm happy to describe it a little bit. That research was a big collaboration of which I was only one small part; it included cognitive psychologists, pathologists, and so on. It was a study in breast [00:13:00] pathology. We were collecting eye-tracking data on the pathologists as they interpreted cases. The conceptual model the cognitive psychologists were using was to think about four stages of interpretation. The first stage, or the first opportunity for error, was, Does the pathologist detect the critical region of a case? We were able to look at, Do the pathologist's eyes fixate on the critical region? Was it their focus at any point during the review? The second stage was, Do they recognize the relevance of the critical region? We were using digital images, and we asked them to mark, Where's the critical region of the case? We had pre-identified it with our expert pathologists, so that's another opportunity for error: did their critical region overlap with the critical region identified by the expert pathologists, which they couldn't see, of course? Then they were asked to describe what they saw in their critical region, [00:14:00] to describe the features. We had the pathologists on our [research team] reviewing their descriptions and comparing them against what they had marked on the image: are they, at some basic level, describing correctly what's in there? And the fourth stage was the final diagnostic decision. We had both trainees, so residents, as well as experienced pathologists in the study. We pretty much found there were high rates of success for the first two stages. They were detecting the critical region; they weren't missing something in the image. And they usually recognized its relevance; they identified the same critical region as the experts had identified. It was the third stage, correctly describing the features in the region, where we saw more errors. That was in fact the only aspect that was significantly associated with diagnostic accuracy. If they gave [00:15:00] inaccurate feature descriptions, that was followed by an accurate diagnosis only 13% of the time, whereas if they gave accurate feature descriptions, they gave an accurate diagnosis 61% of the time. There's no super simple story there, but if you do want to simplify it: in breast pathology, it's not an issue of finding the right place or recognizing what the critical region is; it's more about interpretation of what's in that region. 
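
The 13% and 61% figures are conditional probabilities, P(accurate diagnosis | quality of the feature description). A tiny sketch with invented tallies, chosen only to reproduce those rates, makes the conditioning explicit:

```python
# P(accurate diagnosis | feature-description quality).
# Tallies are invented to reproduce the quoted 61% and 13% rates;
# they are not the study's actual counts.
tallies = {
    # feature description -> (accurate diagnoses, interpretations)
    "accurate":   (61, 100),
    "inaccurate": (13, 100),
}

for quality, (accurate_dx, total) in tallies.items():
    print(f"P(accurate dx | {quality} features) = {accurate_dx / total:.0%}")
```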

Christine Ko: That's interesting, because everyone saw it, but you can come to a different conclusion on it, linked to words, the actual cognitive description of what you think you're seeing there.

Okay. Great. Do you have any final thoughts? 

Katie Kerr: Yeah, just something I think about. I feel like pathological diagnostic criteria were developed in a context where things were most often biopsied if they were possibly already clinically [00:16:00] manifesting in symptoms or had gotten very bad. Over the decades, the threshold for biopsy has lowered, yet I think pathologists are still applying the same diagnostic criteria. And any diagnostic test is imperfect; it has imperfect sensitivity or imperfect specificity. There's no universal way to say its performance is good enough without the context, because a test with a certain sensitivity and specificity might perform well in one context, maybe a high-prevalence setting, but not be useful in a different context, like a low-prevalence setting.

So I could make a rough analogy: my sense of what's happened with pathology and overdiagnosis is that a lower threshold to biopsy has developed over the decades, but the pathological criteria have stayed the same. Maybe they [00:17:00] made a hundred percent sense back in the day, but maybe not now. And maybe it's a matter of adjusting the language a little bit, so we don't scare patients unnecessarily. 
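
Dr. Kerr's point that "good enough" depends on context is standard Bayes' rule: with sensitivity and specificity fixed, the positive predictive value (the chance a positive result reflects real disease) falls as prevalence falls. A short worked sketch with assumed test characteristics, not numbers from the episode:

```python
# Same test, different settings: PPV depends on prevalence.
# Sensitivity/specificity values are assumed for illustration.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(disease | positive test), by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

SENS, SPEC = 0.90, 0.95  # a fixed, fairly good test (assumed)

for prev in (0.20, 0.01):  # high- vs. low-prevalence setting
    print(f"prevalence {prev:.0%}: PPV = {ppv(SENS, SPEC, prev):.0%}")
# prevalence 20%: PPV = 82%  -- most positives are real disease
# prevalence 1%:  PPV = 15%  -- most positives are false alarms
```

Under her analogy, a lower biopsy threshold means a lower prevalence of serious disease among biopsied lesions, so the same criteria yield proportionally more positives that would never have harmed the patient.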

Christine Ko: Great. Yes. That makes sense. A way to combat overdiagnosis is perhaps not doing biopsies, just watching instead, especially in the skin. We can take a photograph and watch it, and wait to do a biopsy. Something to ponder. 

Thank you so much for your time.

Katie Kerr: Thanks for having me.