The Vetrospective

Artificial Intelligence

The Vetrospective Season 1 Episode 7

Dr. Michael Kent speaks with Dr. Stefan Keller about Artificial Intelligence in the practice of Veterinary Medicine. 

Dr. Keller: I'm very excited about the opportunities, but also a bit concerned about us getting complacent and not checking things as thoroughly as we surely should.

Dr. Kent: Shortcuts are dangerous.

Dr. Keller: Yeah, absolutely.

Dr. Kent: Hello, this is the Vetrospective podcast and I'm Dr. Michael Kent, a professor in radiation oncology at the UC Davis School of Veterinary Medicine and your host. Artificial intelligence. It's in the news daily and seems to be integrating into our lives, with most of us only having a vague understanding of what it is and what impact it holds for us. Well, no one really knows what impact it's going to have on all our futures. Is AI going to be no more than a hyped up productivity tool, or is it going to cause massive disruptions in healthcare and other industries? Will this be a productive disruption, ending in better outcomes for patients, or will it erode the humanity in our healthcare system?

On today's episode, we are talking about AI and how it's being used in veterinary medicine and what the future might hold for all of us. I've invited Dr. Stefan Keller to join us today to take a look at this. Dr. Keller is an associate professor in anatomic pathology here at UC Davis. He did his veterinary school in Berlin and did his residency work both in Zurich and here at UC Davis, where he also received his PhD. He is a European diplomate in anatomic pathology. And Dr. Keller works to bridge, really, diagnostic pathology, immunology, and the rapidly evolving field of artificial intelligence, which is why I invited him here today. His research group develops and deploys machine learning tools to improve diagnostic precision in veterinary medicine, including ANNA, an open-source analytics platform that links AI models directly with an electronic medical record system to support real-time interpretation of clinical and pathology data. Dr. Keller has contributed to areas such as clonality testing, immune repertoire sequencing, and an AI-assisted histopathology platform. He also teaches immunology and pathology, mentors graduate students, and collaborates with clinicians and computer scientists to help integrate AI thoughtfully and responsibly into everyday diagnostic workflows. So, Dr. Keller, thank you for joining me today on the Vetrospective.

Dr. Keller: Thanks for having me, Michael.

Dr. Kent: Of course. So first, I want to ask you why you became a veterinarian and what made you interested in pathology?

Dr. Keller: So, I guess I started out thinking I could be a wildlife veterinarian and travel to cool places and see exotic animals. So the combination of being a veterinarian and an adventurer, I think, initially lured me in. But then as you go through the DVM program, you kind of learn to see some realities, and you explore or find things that you haven't really paid attention to before. And so I came across pathology, and pathology really intrigued me because it gives you a chance to look at the cause of disease and then the molecular and cellular events that then lead to a clinical manifestation. So in other words, it allows you to kind of dig deep and understand disease on a deeper level. I think as practitioners, we are oftentimes restrained by the financial resources of owners, for example, and our own technical abilities. And so pathology... For those of you who are not familiar with what pathologists do: I'm an anatomic pathologist, which is a subspecialty of pathology. We look at tissue samples under a microscope. So biopsies.

Dr. Kent: Biopsies or post-mortem exams, what we would call a necropsy.

Dr. Keller: Correct, absolutely, yes. So, sitting at the microscope, looking at the cellular level of tissues is just very rewarding and oftentimes offers an insight into why a certain disease developed.

Dr. Kent: Interesting. And so in a sense, this deeper diving in and trying to figure out the causes of disease leads us over to your work in artificial intelligence as well. So, I've seen artificial intelligence, the concept, broken into many different classification systems, ranging from its capabilities in thinking, to tasks it can carry out and how it learns. I think what many of us have interacted with are the online AI platforms using specific applications that can do specific tasks, such as creating an image or writing an e-mail or writing a letter. Now, can you explain to me the different types of AI that are available to us? Specifically, maybe, what is the difference between, let's say, generative AI versus machine learning, or maybe I'm missing the categories totally?

Dr. Keller: No, absolutely, it's a confusing terminology and field because there are so many aspects and angles to look at it from. AI started out in the 1950s, essentially, and just basically means that we have a machine that does things akin to what humans would do. So AI is just a very general umbrella term for any computer system that mimics human intelligence. And then there's machine learning. Initially, AI was done by creating rules, meaning that we could say, for example, in the field of veterinary medicine, if we wanted to interpret blood work, we could create a rule that says if, you know, the hematocrit is below a certain value, then we call it anemia.

Dr. Kent: So hematocrit meaning what percentage of red blood cells you have in your circulation.

Dr. Keller: Exactly, right. So it's just essentially a mathematical equation, an if-then statement, if you will. And that became more sophisticated with machine learning, which essentially refers to the fact that the machine can create some of those rules, or learn some of those rules, by itself. So we don't need a human to explicitly program an if-then statement. But if you give enough data to a machine, especially if you provide a label, where a label in this context might mean this record is from a dog that is anemic, or this record is from a dog that is not anemic, then the machine can figure out the rule by itself. And then you can take, let's say, blood work that the machine has never seen before, and you can classify it with respect to whether or not the dog is anemic, based on a rule that the machine figured out by itself.
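(Editor's note: to make the distinction concrete, here is a minimal Python sketch. The 37% cutoff, the midpoint rule, and the sample records are all invented for illustration; real reference ranges vary by laboratory, breed, and species.)

```python
# Hand-written if-then rule: a human picks the cutoff.
# The 37% cutoff is an assumed value for illustration only.
def is_anemic_rule(hematocrit_pct):
    return hematocrit_pct < 37.0

# Machine learning version: learn the cutoff from labeled records
# instead of hard-coding it (a one-feature "decision stump").
def learn_cutoff(records):
    """records: list of (hematocrit_pct, is_anemic) pairs."""
    anemic = [h for h, label in records if label]
    healthy = [h for h, label in records if not label]
    # Place the threshold midway between the two groups.
    return (max(anemic) + min(healthy)) / 2

# Toy labeled data: (hematocrit %, anemic?)
records = [(25, True), (30, True), (34, True),
           (40, False), (45, False), (52, False)]
cutoff = learn_cutoff(records)   # the machine "figures out the rule"
print(cutoff)                    # 37.0 for this toy data
```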

Dr. Kent: Which may not only be the hematocrit, it may take other things into account, right?

Dr. Keller: Absolutely, yes. So, the beauty of machine learning or artificial intelligence is that you can essentially feed it as much data as you want to. And usually, the more data you feed it, the better the model gets, in general. And then kind of the third level of complexity that has evolved only more recently, I would say in the last 10 years, is what's called deep learning. So deep learning is a bit of a different, we call it architecture. So basically, the computer system that lies at the base of it, often also referred to as a neural network, is more complex in how it works and usually needs a lot more data. But with deep learning, essentially, what we can do is analyze data that is much more complicated than anything we've done before. For example, I'm an anatomic pathologist. I look through a microscope. At least in the olden days; now we're switching over to using digital images instead of the microscope.

Dr. Kent: So basically you photograph the glass slide that you would normally look at, and digitize that.

Dr. Keller: Absolutely, that's how it works. The key difference, or there's two key differences with, let's say, radiology, for example: one is that pathology or histopathology requires you to zoom in quite a bit. So you could imagine looking at a world map where you can see the whole world, but then you need to be able to zoom into California and look at, let's say, the Davis city map as well, right? So, what we need are these large image files that allow you to zoom in and zoom out, which is a feature that radiographic images, for example, don't have, right?

Dr. Kent: So there's also the 3D aspect to it too, right? I mean, almost a topographic map, because, yeah, where we're sitting right now is sea level and very flat, but let's say we move over to the Sierras, the focus will be different. Do you have to take that into account as well?

Dr. Keller: Absolutely. So usually, a tissue section that we look at is 3 microns thick. And the microscope that we use to photograph that is not essentially very... There's slight variations in what we call the z-planes, or the third elevation, if you will.

Dr. Kent: Kind of up-down elevation.

Dr. Keller: Up-down, correct, right? So there's some of that focus variability as well, which is usually accounted for at the scanner level, right? So the image that we receive usually has that third dimension taken out of play. For us, the third dimension is more the zoom level, how much we focus in, whether we're on the world view versus the Davis city map. So in other words, it's a huge file that we look at, usually gigabytes in size. And so we need a computer system that can actually handle that type of image format. And before deep learning came around, we weren't able to really do that efficiently, or at least not with the kinds of tasks that we can tell it to do nowadays. And so that's been really a game changer. So, looping back to your initial question.

Dr. Kent: I was about to re-ask you, let's say, you know, what's the difference when I go on and I put in a photo and I say, make a cartoon out of me.

Dr. Keller: Yes. So what?

Dr. Kent: Which I've done, by the way.

Dr. Keller: Correct. So if you look at generative AI, so if you look at ChatGPT or things like that, they're usually now the third level that I explained, so the deep learning type of programs that require huge computational resources, right? And so underlying that is what's called a neural network. And GenAI refers to generative, as the name says, so we can use it to generate text or we can use it to generate images, for example.

Dr. Kent: So now that seems almost... a waste of me using a neural network that's just been really developed to do something pretty superficial, but it can be fun. So now we've got these fun aspects, but we also have these, what people are really thinking about how AI may disrupt industries in a positive or negative sense, is these deeper machine learning things, right?

Dr. Keller: Yeah. So in veterinary medicine, there are multiple ways we could be, and are, integrating AI into our diagnostic workflow. For example, here at Davis, we're just now launching a scribe technology. So what that does is it essentially listens to the patient-clinician conversation

Dr. Kent: And do you mean the patient, or do you mean their owners? Owners, right? Yeah.

Dr. Keller: Well, these owner-clinician conversations, and transcribes that and modifies it into a transcript that we can now include in our clinical record. So previously we had to take notes. We had to synthesize that into a paragraph or multiple paragraphs that make sense. And now AI can do that for us. So the advantage of that is obviously it could be a huge savings in time.

Dr. Kent: So the productivity tool. 

Dr. Keller: Productivity, absolutely. And the clinician can focus; they don't have to write down every word. And from what I've seen, the summaries are pretty good, right? The downsides of that we have to consider are multiple. So one is, at this point, we have no experience with how good the tool is, right? So a lot of it is probably fairly good, but without really checking that, we don't know what the accuracy or the quality is.

Dr. Kent: And that could have huge implications if we write down in the history that the dog was vomiting for three days and the computer writes that it has been vomiting intermittently for three months. I'll know it was three days during my visit, but the next doctor who picks this up might be misinformed.

Dr. Keller: Absolutely. Yeah. So that.

Dr. Kent: And then what about also, so that's a little bit of an ethical issue too. Like I can't just record a phone conversation, you know, that's someone's privacy. So how do we safeguard that?

Dr. Keller: Yeah. The rules that we're setting in place are that we need to have explicit owner consent to record these conversations, and they're obviously not shared. They're removed after a period of time. So we need owner consent for that. Yeah.

Dr. Kent: So is this considered generative AI, if we are actually taking the inputs from what we're saying and synthesizing the conversation? It's also, I know, linked to the transcript and you can click out to fact-check it. But is that considered generative AI, since it's creating the summary?

Dr. Keller: Yes, it is.

Dr. Kent: Okay, so, but there's obviously machine learning or neural network behind it that's allowing it to do this synthesization.

Dr. Keller: Yes, correct.

Dr. Kent: Is synthesization a word? I wonder. But we can ask AI. So where else in veterinary medicine is it being used? How is this being looked at besides the productivity tool of making it easier for me to do medical records, which, by the way, is the bane of every clinician's existence: having to sit down at night and work on your records for hours.

Dr. Keller: Yeah. So one kind of more superficial AI tool that we're trying to implement here at UC Davis is to interpret blood work. So routine blood tests that you might do if you go to your veterinarian around the corner, or here at our VMTH. So things like a complete blood count or chemistry panel, we can use to check whether a dog has a certain disease, yes or no, for example. And so colleagues of mine have developed three classifiers, as we call them, three of these AI tools. They're able to tell whether or not a, in this case, dog has a certain disease. And so we are in the process of trying to incorporate those into our clinical workflow as well, but there are several hurdles that we're trying to manage before we can do that.

Dr. Kent: But now if I'm like a halfway decent doctor and I just look at the blood work, shouldn't I be able to tell that? Do I need a computer? And you called it superficial. So is this something that's easy, well, easy to do and easy to diagnose?

Dr. Keller: Yeah, so I called it superficial because the underlying algorithm is compared to deep learning or neural networks, it would be considered superficial.

Dr. Kent: Almost an if-then statement.

Dr. Keller: Not quite. It's the machine learning level, the level two that we talked about before, in between, right? So we don't need as many computational resources. And it gives us a simple yes/no answer with a probability attached to it, about how certain the AI is about the diagnosis it spits out. With respect to your question about whether a veterinarian shouldn't be able to diagnose that: absolutely. It depends a bit on what the disease is, right? So as you know yourself, we have those types of diseases or patients that are pretty straightforward, slam-dunk diagnoses, and there are others that are more complicated. Second, we're all just humans, so we make errors. So we still train our veterinarians, our vet students, to recognize and diagnose the disease. But it is nice to have a backup in case we do have a bad day or, you know, something else happens and we miss it. And so having a kind of backup copilot AI that will help us to not miss diseases, I think, is desirable. And then third, if you're in a rush, no matter how good you are, sometimes things fall through.

Dr. Kent: You would miss it. 

Dr. Keller: You can miss it.

Dr. Kent: So it just will flag it for the attending veterinarians so that they can then go and check better.

Dr. Keller: Correct, yeah. So, it would run in the background, and then we have a section that says machine learning algorithms, or decision support tools, and at the bottom you can look at that, or you can choose to ignore it if you don't want to use it.
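(Editor's note: a sketch of the kind of yes/no-with-probability output Dr. Keller describes. The features, weights, and the use of a simple logistic score are invented for illustration; the actual UC Davis classifiers are not published in this form.)

```python
import math

# Invented weights for a toy linear model; a real classifier would
# learn these from labeled bloodwork. Inputs are z-scores
# (standard deviations from the lab's reference mean).
WEIGHTS = {"sodium": -0.8, "potassium": 1.2, "glucose": -0.3}
BIAS = -0.5

def disease_probability(bloodwork):
    """Linear score passed through a sigmoid -> P(disease)."""
    score = BIAS + sum(WEIGHTS[k] * v for k, v in bloodwork.items())
    return 1.0 / (1.0 + math.exp(-score))

# A hypothetical patient: low sodium, high potassium, slightly low glucose.
case = {"sodium": -2.0, "potassium": 2.5, "glucose": -0.5}
p = disease_probability(case)
print(f"P(disease) = {p:.2f}")  # flagged with high probability
```

The clinician sees the probability alongside the yes/no flag, which is what lets them decide how much weight to give the suggestion.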

Dr. Kent: So now, I know at least one of the diseases that's been worked on is Addison's disease, which can be really difficult to diagnose at first and can have very non-specific clinical signs, in other words, what the dog is showing to the owner. So, for this kind of disease, is this what AI is kind of made for in a sense for us?

Dr. Keller: Absolutely, yeah. Oftentimes it's used in the very initial stages to help us guide further diagnostic workup, right? So, things like Addison's disease, we would follow up with further diagnostic testing.

Dr. Kent: Yeah, and just so people understand, Addison's disease is hypoadrenocorticism. You don't have your mineralocorticoids, which come out of your adrenal glands and balance the salts in your bloodstream, right? So you may have some very non-specific signs at first, but it can be life-threatening.

Dr. Keller: Absolutely, yeah. Another one we've been working on is leptospirosis, which is a bacterial disease that affects the kidney and the liver. And those animals usually present at least fairly sick. And so it is helpful at the beginning if we know, or at least have a rough idea, whether that disease is due to a bacterial infection, like in this case, or some other cause of kidney disease.

Dr. Kent: So if the computer flags it and you're still waiting for your, let's say, urine culture to come back, you may decide to go ahead and start treating it just in case because this is life-threatening.

Dr. Keller: Correct. Yeah.

Dr. Kent: And now, so I guess, what other areas in veterinary medicine am I missing any? I've heard things like there's now companies out there that are reading x-rays or radiographs, what we would call them. So images of, let's say, a dog's chest and coming up with a diagnosis with AI. So where does that integrate in? What do we need to do to make sure this is safe? How do you test these things? How do you know that the computers, you know, you've all, I think, heard the term hallucination. How do we know it's not making something up?

Dr. Keller: Yeah, that's an excellent point. And so, what we should say at the beginning is that there's no official agency that checks these types of algorithms. In humans, there's the FDA, and every algorithm that comes out is essentially a device that has to be approved by the FDA. There's no quality control in veterinary medicine analogous to that. And so in that regard, it is a bit more of a wild west situation. There are certain colleges, for example radiology, that have proposed rules and guidelines on how to identify good or bad algorithms, or at least best practices for developing algorithms and then disclosing their details. But they're not legally binding. So it basically comes down to the veterinarian, if you will, to decide whether or not something is worthwhile. What we've started to see, and will see more of, is papers that check a certain algorithm to, let's say, read out x-rays. The problem with, or one thing to consider with, artificial intelligence is that there are usually two stages. One is where we develop the algorithm; we call it training the algorithm. And the second one is where we actually deploy it, meaning we give it something that the algorithm has never seen before, like an x-ray, and then ask it to answer a question. And one of the tricky things about that is that the setting you use the AI tool in has to be very similar to the conditions in which it was created. In the case of histopathology, we know that very simple things, like the scanner we use to take a picture of our histology slide having a slightly different color output, can vastly offset the result that the algorithm provides you with. Which means that if the model was developed using scanner A and then we deploy it in a clinic using scanner B, the algorithm might perform very differently, right?

Dr. Kent: It might misdiagnose. 

Dr. Keller: Might misdiagnose, right? So with AI, it is really crucial to make sure that if you use an algorithm, you properly validate it, meaning you have to make sure that the algorithm performs as you would expect in the current environment you're using it in.
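(Editor's note: the validation Dr. Keller describes, checking that an algorithm still performs in your own environment, boils down to measuring agreement with a local reference standard. The case labels below are invented to illustrate a performance drop after a scanner change.)

```python
# Compare an algorithm's calls against a pathologist's reference calls.
def accuracy(model_calls, reference_calls):
    agree = sum(m == r for m, r in zip(model_calls, reference_calls))
    return agree / len(reference_calls)

# The same algorithm applied to slides digitized on two scanners.
# Invented labels: the model was trained on scanner A's color output.
scanner_a_model = ["tumor", "normal", "tumor", "normal", "tumor"]
scanner_a_truth = ["tumor", "normal", "tumor", "normal", "tumor"]
scanner_b_model = ["normal", "normal", "tumor", "tumor", "tumor"]
scanner_b_truth = ["tumor", "normal", "tumor", "normal", "tumor"]

print(accuracy(scanner_a_model, scanner_a_truth))  # 1.0 on the development scanner
print(accuracy(scanner_b_model, scanner_b_truth))  # 0.6 after a scanner change
```

A real validation would use far more cases and report more than raw accuracy (sensitivity, specificity, confidence intervals), but the principle is the same: measure before you trust.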

Dr. Kent: So, you know, I'm... Full disclosure, I'm a member of the American College of Veterinary Radiology and I am aware of kind of the guidelines that they put forth. So let's say I was deciding to start a business and I wanted to be able to read x-rays. Can I train it off 10? How do you classify them or how do you go about building this kind of model first?

Dr. Keller: Yeah, so radiology is not my field of expertise, and even if it was, I probably wouldn't give you precise numbers as to how many cases you would need to train it. As a rule of thumb, the more subtle the pattern that you're trying to recognize, the more images or training data you need, right? In other words, if you need the classifier to identify a big honking mass that's super easy to recognize, you need fewer cases to train on than if you have very subtle differences.

Dr. Kent: But given all the differences you can have, let's say, in a chest x-ray, or thoracic radiograph as we would say, and you want to look at the difference between, let's say, bronchitis and asthma and pneumonia, and a large lymph node or a big heart or pulmonary edema, you might need hundreds or more of really well documented or annotated radiographs to do it.

Dr. Keller: Absolutely, yes.

Dr. Kent: And then you have to test it on a different set.

Dr. Keller: Correct, yes.

Dr. Kent: And maybe radiographs taken from different machines.

Dr. Keller: Ideally, yes.

Dr. Kent: Yeah, so this can get very complex, and there are no rules on how this gets done at this point, is what you're saying, at least on the veterinary side. So, what about the human side? Where has this been integrated in? Are you familiar with that? Or is this something that's not your area?

Dr. Keller: I have some knowledge, but I think not enough knowledge to broadly comment on that in the podcast.

Dr. Kent: No, that's okay. That's okay. Yeah. And I've also heard people worrying about AI replacing doctors, particularly, let's say, radiologists or pathologists, people who use pattern recognition, because really what we train our residents and our vet students to do is recognize patterns, right? You see this pattern again and again, and so this means this dog has pneumonia on the chest x-ray; this means this is a carcinoma or sarcoma under the microscope. So where do you think we're at? Are we looking at replacing doctors at this point, or, you know, do you see that happening?

Dr. Keller: Very good question. So, one of the things to note with most of the AI algorithms that we're implementing is that they're fairly narrow in scope, meaning that they are meant to diagnose disease X or distinguish between disease A and disease B, right? The scope is narrow enough that if you trained the AI to, let's say, differentiate between inflammation and tumor, and then you give it a third group or type of case, it'll perform fairly poorly. So I think where humans shine at this point is that you can give me any type of biopsy. You know, it might be any disease process, most of the species: mammals, reptiles, amphibians. And I can come up with a reasonably close diagnosis. It might not be very deep with respect to, you know, brain tumor classification into a lot of sub-entities.

Dr. Kent: It's not your area.

Dr. Keller: Exactly. But I can make an educated diagnosis on a broad range of different cases. Again, with AI tools, if it's not within the narrow scope that it has been trained for, it'll perform fairly poorly. So, going back to your question about replacing veterinarians with AI: at this point, I think we will use AI to look at very specific parts of our expertise and replace those, but I don't see any of us being replaced right away, right? That's one consideration. The other consideration is, again, we need to figure out how good those algorithms are. And so, what we try to propagate is what's called a human in the loop, which means that the AI might initially make the diagnosis, but we still need the human who will ultimately sign off on the case, right? Because you have responsibility for the case at the end of the day. And so you want to make sure that what the AI tool tells you is really correct. So there's always that consideration. Having said that, for radiology, I think you can already submit radiographs online and have them read out purely by an AI algorithm, with no true human in the loop anymore. And that is kind of a tricky thing. If you can be sure that the algorithm is really good, then maybe it'll work. But as I said before, just because there's a paper published that says the performance of this algorithm is excellent or good, it doesn't mean that in a different scenario it performs as well, essentially. So there's...

Dr. Kent: As a radiologist who's trained. So at this point, the neural networks that are out there in the computers maybe aren't as good as your neural network.

Dr. Keller: It depends. I mean, ultimately, I'm a firm believer that the machine is the better pattern recognizer. And I also believe that if we get to the point where a machine is better at diagnosing the disease than me, then we shouldn't be concerned about my job. We should let whoever makes the best diagnosis do the job. And if the machine's better than me, then, you know, we have to look at retraining pathologists to do something else. At this stage of the game, however, we're not there yet. And we need, at least in the interim phase, pathologists and radiologists to make sure that those algorithms actually perform the way we intend them to perform.

Dr. Kent: So I've heard the term, let's say, robotic surgery. And when I told a friend of mine that I was going to be doing this podcast, they basically said to me: so, is a robot going to be doing my dog's surgery tomorrow or next year? Now, I immediately said no. What are your thoughts on where we're headed there?

Dr. Keller: Yeah, I'm not sure about surgery. I mean, the way I understand it is, price is always a big determinant, right? And I know nothing about surgery, or where we are in that regard.

Dr. Kent: In robotics.

Dr. Keller: In robotics, yeah and whatnot. I can really only speak to data that's being analyzed rather than actual mechanics involved.

Dr. Kent: So the other question I got, which I also thought was really interesting, is: do you see a time when, maybe instead of Google Translate, which can read anything, you'll be able to translate your dog? You know, with the computer taking inputs of what your dog's looking like, maybe the sounds they're making, things like that, and getting an output that you can use to communicate better with your dog, or understand what they're saying.

Dr. Keller: Boy.

Dr. Kent: He's rolling his eyes on this one.

Dr. Keller: You mean with respect to just general communication or actually a pathology where we're trying to figure out what the problem is?

Dr. Kent: Maybe more just general communication there. Again, maybe this is the generative AI where we are taking a picture and making it a cartoon. But for pet owners, this might be something that would be really interesting.

Dr. Keller: Yeah, I think over time, we'll have more and more devices that measure various things. Like, a lot of us wear some kind of a watch that tracks our heartbeat and whatnot. And you can do similar things now with pets.

Dr. Kent: Activity monitors. 

Dr. Keller: Activity monitors, you can use the litter box to derive certain data. And so I think as these tools become more mainstream, there's going to be a lot more data to analyze. And I think it's a cool and interesting field that's really worthwhile. Having said that, some types of input might be more worthwhile than others.

Dr. Kent: Fair. So now, I know one of the things you've been working on specifically is trying to distinguish between inflammatory bowel disease in cats and lymphoma in cats. And I know that's really tricky sometimes, you know, for a pathologist and a clinician, to figure out which the cat has. You know, both are going to cause gastrointestinal signs, diarrhea, some vomiting, and they're almost a continuum of a disease. So, what have you been doing in your lab to try to figure this problem out?

Dr. Keller: Yeah, as you see, it's a pretty tricky field. So, the basic dilemma that we have as pathologists is we get what we call the slides, a section of tissue, and we have to look at that. And these types of diseases that we talked about are essentially determined by how many lymphocytes, which are a type of white blood cell, are present in a certain tissue section, and where are they located, and what is their morphology? So how big are they, and what size is their nucleus, and whatnot. The issue with that is, going back to the analogy of a world map and the city map of Davis: in order to look at a lymphocyte, you have to zoom in to the level of Davis, right?

Dr. Kent: And there's different types of lymphocytes. 

Dr. Keller: There's different types of, yes, yeah. Light microscopically, we mostly only distinguish one. But yeah, there are different morphologies within that, right? So what we are tasked to do as humans is essentially look at, you know, a world map and then try to figure out what is the distribution of people in continent A versus continent B, or city A versus city B, which means we're constantly zooming in, zooming out, zooming in, zooming out, and then we're trying to summarize what we see across this world map. And as you can imagine, humans are not very good at estimating or gauging how many lymphocytes are in a specific field of view, and then trying to summarize that across a large slide like that is difficult too. So not surprisingly, if you give three different pathologists the same slide, they sometimes come up with vastly different guesses or estimates, or grades, as we call them, essentially. And that has led to the fact that histopathology is still kind of a gold standard, but it's not...

Dr. Kent: But it's an art also, right?

Dr. Keller: It's an art as well, depending on how experienced you are, obviously. And so what my lab is trying to do is to take the human factor out of play here. So, we trained an algorithm to recognize lymphocytes, and then we can measure how big each lymphocyte is, where it is located, how many we have. And so we get a whole bunch of data, and we take essentially the location and the size of the different lymphocytes. And we can do that across a lot of different cases. So traditionally, if you look at studies, they might have a two-digit, at most three-digit, number of cases. What we did is we essentially went back through our archive, all the way back to the 1990s, and we pulled every single cat biopsy that we could find that roughly fits that entity. We came up with literally thousands of tissue fragments, and we ran them through our software, and we can now determine, or we can basically say, what is normal and what is abnormal with respect to how many lymphocytes we have, where they are located, how big they are. And then, which is really cool, we can take these data and put them into AI again, in something called unsupervised learning, and say, okay, can you find patterns in here? In other words, we don't have the human expert define the pattern; we let the machine try to find patterns

Dr. Kent: that we may not have seen.

Dr. Keller: That we may not have seen, exactly. And so it's a pretty powerful tool where you can get rid of a lot of the subjectivity that humans bring into the game here. And what we found, and we're about to publish this, is that there are distinct subgroups with respect to how the lymphocytes are organized: their number, their spatial distribution. But it's a continuum, right? So it's not surprising that if you have different humans look at that, they come to different results. What the tool allows us to do now, though, is take a new case that the AI has never seen and classify it based on the algorithm we have. And if we repeat that ten times, the machine will always classify it exactly the same way.
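
[Editor's note: the workflow described here, extracting per-cell features and letting an unsupervised algorithm find subgroups, can be sketched roughly as below. This is a minimal illustration, not the lab's actual pipeline; the feature set (size plus x/y location) and the cluster count are assumptions for the example.]

```python
import numpy as np

def kmeans(features, k=3, iters=50, seed=0):
    """Tiny k-means: group cases by feature similarity without
    a human defining the categories (unsupervised learning)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each case to the nearest cluster center
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned cases
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return centers, labels

def classify(case_features, centers):
    """Assign a never-seen case to an existing cluster. With fixed
    centers this always yields the same answer, no matter how
    often it is repeated."""
    return int(np.linalg.norm(case_features - centers, axis=1).argmin())

# toy data: hypothetical (size, x, y) summaries for 120 cases
rng = np.random.default_rng(1)
cases = np.vstack([rng.normal(loc, 0.3, (40, 3))
                   for loc in (0.0, 2.0, 4.0)])
centers, labels = kmeans(cases, k=3)
new_case = np.array([2.1, 1.9, 2.0])
print(classify(new_case, centers) == classify(new_case, centers))  # prints True
```

The point of the sketch is the last two lines: once the model is fixed, re-classifying the same new case is fully deterministic, which is the consistency being discussed.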

Dr. Kent: So that's consistency. 

Dr. Keller: Consistency, absolutely. And so what we're trying to do now is add outcome data to that, because at the end of the day, what the pathologist calls it is not as important as

Dr. Kent: how this cat's doing at home. 

Dr. Keller: Exactly. Does it respond to treatment? How does the cat do? How long does it survive? The tricky part with that is, A, getting that information, and B, as you know, treatment varies, right? Cat A will get treatment A, cat B will get treatment B. So getting enough data to make conclusions or predictions about how a cat will respond to a certain treatment, given a certain histopathology phenotype, is tricky at this point. That's our bottleneck for sure, where we need to get more data moving forward.

Dr. Kent: And that's our real bottleneck in implementing a lot of these things is having these quote unquote curated data sets where you've proven at the end what that case actually is.

Dr. Keller: Exactly. Yep.

Dr. Kent: Yeah. So what haven't I asked you that I should have asked you about AI, sir?

Dr. Keller: So personally, because we are a vet school, I find the aspect of training veterinary students, and the effect AI has on our skills as diagnosticians, a very interesting one. Right.

Dr. Kent: How so?

Dr. Keller: So, for example, if I know that my AI tool will always pick up Addison's disease, or at least at a higher rate than I will, I might not look at the blood work as thoroughly as I do right now. And as more and more of these tools are developed, not today and not tomorrow, but a couple of decades from now, or earlier, we'll probably have a machine that can diagnose most diseases more efficiently than humans can. So what does that mean for a veterinarian? Do we keep training veterinarians as long as we need a human in the loop? Do we relinquish certain aspects of that training? Going back to the example we talked about before with the scribe technology: right now, veterinarians go through a program that teaches them how to take notes and how to create a proper medical record. Now we're introducing an AI tool that can potentially do that as well as or better than humans do. At what point do we say we're no longer training our veterinary students to learn this skill, right? And that now goes from a simple scribe technology to diagnosing a disease. And so ultimately,

Dr. Kent: it's a big step. 

Dr. Keller: It's a big step, right? So we're opening the floodgates here, where we say, okay, we might not want or need to teach that anymore. And here at the vet school, we currently don't have a real decision process to deal with that, right? At what point do we say we're no longer teaching that? And our standpoint so far is...

Dr. Kent: that the human doesn't have to be in the loop anymore.

Dr. Keller: Exactly, yes. So for the scribe technology, it looks like we're still going to require that veterinarians learn the skill. But on the other hand, it is important, I think, that the vet students learn the tools they will encounter in practice. We can't just say we're not going to let them use scribe technology, because once they get out into practice, they will use it, and we want them to be critical users of AI technology. So they have to understand its limits and pitfalls.

Dr. Kent: Yeah, and I would argue, just if we're chatting about this, that when I'm teaching the vet students to take a history, I'm not just teaching them to type into a computer; they know how to do that already. It's what questions do you ask, how do you ask them, and how do you gauge whether the person understands or not, so that you can get the answers that help you understand the problem and decide what diagnostic tests to do. So it's not just being a scribe. That's easy; they already know how to do that when they get here. It's how you actually ferret out the problem. We have this saying in vet med: if you hear hoofbeats, think horses, not zebras. But occasionally the zebras are there. So the computer probably finds the hoofbeats of horses really easily, but can it find the zebras?

Dr. Keller: Yeah, I mean, ultimately, I think it might be able to. Right now, I'm not sure. Which leads us to the next problem, I think, that I haven't touched upon: the validation and ongoing monitoring of those tools. Because, as we said, the environment they were trained in might be different from the one we are using them in.

Dr. Kent: And that may shift over time.

Dr. Keller: And that may shift over time, right?

Dr. Kent: We get a new CT scanner that's got better resolution. And do we throw out all the old algorithms?

Dr. Keller: Correct.

Dr. Kent: Or do we stop learning and building new machines?

Dr. Keller: Absolutely. And so that requires a whole new compute and personnel infrastructure: to be on top of these AI devices as we deploy them, but then also, on an ongoing basis, to make sure, as you said, once we get a new CT scanner, that they still work adequately. And in a resource-constrained environment right now, with the state budget, it is hard to do that. Our IT crew is really awesome, but they have their hands full as is just keeping the ship afloat, and now we come in and say, hey, we have three new classifiers where we need to monitor how they perform, and...

Dr. Kent: Integrate it into our medical record and have it notify the clinicians and don't let it hallucinate.

Dr. Keller: Exactly, all of these things. So there's essentially a whole new field added to IT now, one they previously hadn't been doing but will be expected to do moving forward. And because that's certainly constrained here, my lab has been working towards, for example, integrating these classifiers into our electronic medical record system ourselves, because our IT group does not have the bandwidth to do that. But moving forward, we need more funding for these types of things if we say we're using AI.
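
[Editor's note: the ongoing monitoring described here can be sketched as below. This is a hypothetical illustration, not the school's actual system; the class name, the agreement-rate metric, and the window and tolerance thresholds are all assumptions for the example.]

```python
from collections import deque

class ClassifierMonitor:
    """Track a deployed classifier's agreement with pathologist
    sign-off over a sliding window and flag possible drift, e.g.
    after a scanner change alters the input data."""

    def __init__(self, baseline_accuracy, window=200, tolerance=0.05):
        self.baseline = baseline_accuracy   # accuracy at validation time
        self.tolerance = tolerance          # allowed drop before flagging
        self.recent = deque(maxlen=window)  # rolling agreement record

    def record(self, model_label, reviewed_label):
        self.recent.append(model_label == reviewed_label)

    def drifting(self):
        # not enough reviewed cases yet: withhold judgment
        if len(self.recent) < self.recent.maxlen:
            return False
        observed = sum(self.recent) / len(self.recent)
        return observed < self.baseline - self.tolerance

mon = ClassifierMonitor(baseline_accuracy=0.92, window=100)
for i in range(100):
    # toy stream: the model disagrees with review on 1 case in 5
    mon.record("lymphoma", "lymphoma" if i % 5 else "hyperplasia")
print(mon.drifting())  # prints True: 0.80 < 0.92 - 0.05
```

The design choice worth noting is the sliding window: it compares recent live performance against the frozen validation baseline, so a new scanner or shifting case mix shows up as a flag rather than going unnoticed.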

Dr. Kent: So that's obviously going to be really important, the validation and the like. So just to wrap us up, where is all this headed? Where are we going to be in five years, ten years? Or, I know you can't predict that, as you laugh when I ask the question, but what do you think?

Dr. Keller: Yeah, I mean, I'm very excited about the opportunities, but also a bit concerned about us getting complacent and not checking things as thoroughly as we should. 

Dr. Kent: Shortcuts are dangerous.

Dr. Keller: Yeah, absolutely. So I think what we should do, or will do, here at the vet school in Davis is introduce things in a controlled way and test them thoroughly, to make sure that our vet students still learn the skills they need to learn, but also become critical users of AI technology. They need to be able to use those tools and be familiar with them once they get out. I think we have a bit of a luxury here at the vet school in that we can still teach the old diagnostic ways. Once our graduates get out into private practice, there might be more pressure on them with respect to using those tools. They obviously have to perform in very stressful situations. So there is more of a danger, I think, of missing certain diseases by relying more on these AI tools, right?

Dr. Kent: Yeah.

Dr. Keller: So ultimately, I think it's inevitable those AI tools will be there, they will be used, they will create some damage, they will hopefully create more positive consequences than damage. But it's so difficult to predict and it's so hard to keep up with the field. It's moving at such a fast pace.

Dr. Kent: Yeah, and we can't lose the humanity of medicine either. I know we teach doctoring, you know, and you still need the person who cares. And I don't think we're there yet with computers.

Dr. Keller: Yes, and hopefully we'll never get there. We need the person.

Dr. Kent: We don't need to replace us as a race, as the human race. 

Dr. Keller: Yes. 

Dr. Kent: As a species, I guess, is a better way to say it.

Dr. Keller: Well, I think most people probably do want that human to interact with. The question is, how much does that human have to know about veterinary medicine, right? If you transcribe our conversation in real time, you could also have the computer put out real-time recommendations, therapies, diagnoses, things like that. So theoretically, I can imagine a world, in the far future, where you still have the human interface, but that human doesn't understand a lot about veterinary medicine.

Dr. Kent: I hope that we're a long way off from that, because it just seems to me we'll lose that caring aspect, at least at this point.

Dr. Keller: Yeah.

Dr. Kent: Well, Stefan, Dr. Keller, thanks so much for joining me today on The Vetrospective. It's been a very enlightening conversation, and hopefully not a cautionary tale but a way forward to make sure we're not giving up too much in veterinary medicine.

Dr. Keller: Thank you for having me.

Dr. Kent: Of course. 

The Vetrospective, as with life, takes a village. I want to thank those who suggested I start this project and everyone who has encouraged and supported me along the way. Particularly, I want to thank our producer and director, Danae Blythe-Unti, Nancy Bei, who is our program coordinator, our sound mixer, Andy Cowitt, and theme music was composed and produced by Tim Gahagan. Thank you all, and we'll see you next time.