Building a framework for responsible AI in veterinary medicine
Story and photo by R. Scott Nolen
The pace of artificial intelligence (AI) development is outstripping the availability of clear professional guidance.
"This technology is moving so fast that we can’t afford a leadership vacuum," said Dr. Petra Harms, founder and CEO of VetMaite, a veterinary AI consultancy and education platform. "Clinics are looking to professional bodies like the AVMA for direction on what appropriate and inappropriate use looks like."
She was one of three participants on the "Tech-Enabled Tomorrow" panel at the 2025 AVMA Veterinary Business and Economic Forum, held October 8-9 in Denver.
The discussion also featured John J. Craig, PhD, CEO of MetronMind, a veterinary AI radiology report software company, and Dr. Jeremy Redmond, director of clinical skills and assistant professor of equine medicine at Louisiana State University (LSU) School of Veterinary Medicine.
Dr. Harms and Dr. Craig called for best-practice guidelines that can adapt as the technology matures.
"Transparency is fine, but what matters is action. We need to start defining standards now," Dr. Harms said.
To that end, the AVMA has created a Task Force on Emerging Technologies and Innovation.
The task force has been charged with developing a strategy that the AVMA can use to support practitioners faced with the opportunities and challenges of emerging technologies such as AI.
The task force met in late September to begin identifying and prioritizing the resources it plans to develop for veterinary teams.
Meanwhile, the AVMA journals have published multiple articles on AI, including a paper in the April 2025 issue of JAVMA that calls for "a collaborative approach that integrates the expertise of AI researchers, veterinary professionals, and other stakeholders to navigate the evolving landscape of AI in veterinary medicine."
Imaging leads the way
Dr. Craig described radiology as the proving ground for AI in veterinary medicine. Eight companies worldwide currently offer AI software that reviews radiographs and produces everything from simple "yes or no" diagnoses to detailed narrative reports, he said.
At MetronMind, Dr. Craig's team is embedding AI directly into digital radiography units.
"As soon as the image appears, it classifies, rotates, crops, and calibrates automatically," he said. "It can even detect poor alignment and tell the technician. That saves time, improves image quality, and raises the whole clinic's game."
At the same time, vendors share responsibility for keeping veterinarians in control, according to Dr. Craig.
"Every report our system generates comes with a watermark that says, 'Ready for review.'' The veterinarian has to clear or edit it before it's final. The doctor stays in control," he explained.
But serious issues remain. Dr. Craig said the radiologists he works with are frustrated by so-called black-box programs that do not explain how a diagnosis was reached, especially those that give only a binary outcome.
"They want to see how the algorithm reached its conclusion," he said, adding that the American College of Veterinary Radiology's AI committee, on which Dr. Craig serves, has yet to endorse any AI radiographic software.
The committee has urged vendors to make outputs explainable and editable. It is also pushing for performance metrics, possibly developed in academia, he said.
Risky conveniences
As the profession continues to navigate the use of AI, some confusion has emerged over who is responsible for educating veterinary students about it: the veterinary college or the university more broadly? That indecision has led to a lack of standardized training on AI and its use, Dr. Redmond said, often leaving students and educators to navigate the topic on their own.
For example, the University of Illinois created an AI in Medicine Certificate program through an interdisciplinary partnership among the College of Veterinary Medicine, the Department of Bioengineering at The Grainger College of Engineering, and the Carle Illinois College of Medicine.
At LSU's veterinary school, where class sizes are approaching 200 students, Dr. Redmond noted that faculty are using AI to help manage grading and feedback and to simulate client interactions.
"I can't meet with all 200 students one-on-one every week, but AI helps structure feedback so students still get useful guidance between those conversations," he explained.
AI programs can now analyze video interactions between students and simulated clients, flag common issues, and generate feedback from instructor prompts.
"It's about efficiency," Dr. Redmond said. "But it's also about teaching communication and judgment, skills that prevent malpractice claims and burnout later on."
Nevertheless, veterinary educators must guard against overreliance on the technology, especially when many veterinary students trust it more than their own expertise. "They Google as much as their clients do," Dr. Redmond said.
"My biggest concern is that students and new graduates will be too confident relying on AI. Have we prepared them to make their own decisions?" he said. "If we don't teach AI literacy, we risk a generation of veterinarians who can’t tell when a model is wrong."
Dr. Harms pointed to examples in aviation and human medicine, including the Boeing 737 MAX crashes, to illustrate how "automation bias" can override judgment.
"Humans are bad at standing their ground against automated systems," she said. "It's not enough to have a 'human in the loop.' It has to be an educated, confident human in the loop."
This message is echoed in a paper recently published by the European Data Protection Supervisor, the European Union's independent data protection authority.
"Human Oversight of Automated Decision-Making" examines common assumptions about how humans interact with and monitor decision-making systems, "highlighting the overly optimistic nature of many of these assumptions. Accepting these assumptions uncritically can lead to inadequate or flawed implementations, posing significant risks—including harm to individuals and potential violations of fundamental rights."
Another concern with the use of AI is data privacy. Dr. Harms pointed out that, until a few months ago, OpenAI offered a ChatGPT feature that let users who clicked the "share" button make a conversation publicly discoverable, and some of those conversations were indexed by search engines such as Google and Bing.
Specific to veterinary medicine, she urged caution around software that records and transcribes examination room conversations.
"When we put a surveillance tool into an exam room, we have to be sure that data isn't being used for something else, like [being] sold to a drug company or mined to monitor veterinarian performance. The shift from care provider to data generator is a real risk," she said.
AI in the future
Asked to imagine what the average appointment might look like a decade from now, panelists described a clinic where AI quietly manages background tasks—including triaging patients, drafting records, generating cost estimates, and streamlining communication—while veterinarians focus on diagnosis, surgery, prescribing, and, most importantly, empathy.
"The goal isn't to replace the human element," Dr. Craig said. "It's to automate the boring parts so veterinarians can spend their time on care and communication."
Dr. Harms envisioned a seamless data flow among clients, insurers, and clinics.
"AI should make the experience smoother, not colder," she said. "When done right, it lets us see more patients and take better care of them."
Dr. Redmond agreed.
"It's about maintaining clinical judgment," he said. "AI can help us be more efficient, but it can't replace compassion or critical thinking."
A version of this story appears in the January 2026 print issue of JAVMA.
Learn more about the interface between artificial intelligence and veterinary medicine by reading a virtual collection of scientific articles from the AVMA journals.
Also, AVMA Axon features webinars on artificial intelligence, including: