Artificial intelligence (AI) and machine learning, a subset of AI that includes deep learning, which learns representations of data at multiple levels of abstraction, are emerging technologies with the potential to change how veterinary medicine is practiced. These tools have been developed to improve predictive analytics and diagnostic performance, supporting practitioners' decision-making when they analyze medical images. But unlike in human medicine, no premarket screening of AI tools is required in veterinary medicine.
This raises important ethical and legal considerations, particularly for conditions with a poor prognosis, where an interpretation may lead to a decision to euthanize. It also makes it even more vital for the veterinary profession to develop best practices to protect care teams, patients, and clients.
That's according to Dr. Eli Cohen, a clinical professor of diagnostic imaging at the North Carolina State College of Veterinary Medicine. He presented the webinar "Do No Harm: Ethical and Legal Implications of A.I.," which debuted in late August on AVMA Axon, AVMA's digital education platform.
During the presentation, he explored the potential of AI to increase efficiency and accuracy throughout radiology, but also acknowledged its biases and risks.
The use of AI in clinical diagnostic imaging practice will continue to grow, largely because much of the data—radiographs, ultrasound, CT, MRI, and nuclear medicine—and their corresponding reports are in digital form, according to a Currents in One Health paper published in JAVMA in May 2022.
Dr. Ryan Appleby, assistant professor at the University of Guelph Ontario Veterinary College, who authored the paper, said artificial intelligence can be a great help in expediting tasks.
For example, AI can be used to automatically rotate or position digital radiographs, produce hanging protocols—which are instructions for how to arrange images for optimal viewing—or call up report templates based on the body parts included in the study.
More generally, AI can triage workflows by taking a first pass at various imaging studies and prioritize more critical patients to the top of the queue, said Dr. Appleby, who is chair of the American College of Veterinary Radiology's (ACVR) Artificial Intelligence Committee.
That said, for AI to be useful in interpreting radiographs, it must not only identify common presentations of a disease but also flag borderline cases, so that patients are diagnosed and treated accurately.
"As a specialist, I'm there for the subset of times when there is something unusual," said Dr. Cohen, who is co-owner of Dragonfly Imaging, a teleradiology company, where he serves as a radiologist. "While AI will get better, it's not perfect. We need to be able to troubleshoot it when it doesn't perform appropriately."
Challenges with artificial intelligence
Developers of medical devices for humans must gain Food and Drug Administration (FDA) approval before selling their products in the U.S., and the FDA classifies artificial intelligence- and machine learning-enabled tools for human medicine as medical devices.
However, companies developing medical devices for animals are not required to undergo a premarket screening, unlike those developing devices for people. The ACVR has expressed concern about the lack of oversight for software used to read radiographs.
"It is logical that if the FDA provides guidelines and oversight of medical devices used on people, that similar measures should be in place for veterinary medical devices to help protect our pets," said Dr. Tod Drost, executive director of the American College of Veterinary Radiology. "The goal is not to stifle innovation, but rather have a neutral third party to provide checks and balances to the development of these new technologies."
Massive amounts of data are needed to train machine-learning algorithms, and training images must be annotated manually. Because AI developers and companies are not regulated, they are not required to disclose how their products were trained or validated. Such algorithms are often described as operating in a "black box."
"That raises pretty relevant ethical considerations if we're using these to make diagnoses and perform treatments," Dr. Cohen said.
Because AI doesn't have a conscience, he said, those who are developing and using AI need to have a conscience and can't afford to be indifferent. "AI might be smart, but that doesn't mean it's ethical," he said.
In the case of black-box medicine, "there exists no expert who can provide practitioners with useful causal or mechanistic explanations of the systems' internal decision procedures," according to a study published July 14, 2022, in Frontiers.
Dr. Cohen says, "As we adopt AI and bring it into veterinary medicine in a prudent and intentional way, the new best practice ideally would be leveraging human expertise and AI together as opposed to replacing humans with AI."
He suggested having a domain expert involved in all stages of AI—from product development, validation, and testing to clinical use, error assessment, and oversight of these products.
The consensus of multiple leading radiology societies, including the American College of Radiology and Society for Imaging Informatics in Medicine, is that ethical use of AI in radiology should promote well-being and minimize harm.
"It is important that veterinary professionals take an active role in making medicine safer as use of artificial intelligence becomes more common. Veterinarians will hopefully learn the strengths and weaknesses of this new diagnostic tool by reviewing current literature and attending continuing education presentations," Dr. Appleby said.
Dr. Cohen recommends veterinarians obtain owner consent before using AI in decision making, particularly if the case involves a consult or referral. And during the decision-making process, practitioners should be vigilant that an AI-provided diagnosis does not reinforce human cognitive biases.
"We need to be very sure that when we choose to make that decision, that it is as validated and indicated as possible," Dr. Cohen said.
According to a 2022 Veterinary Radiology & Ultrasound article written by Dr. Cohen, if not carefully overseen, AI has the potential to cause harm. For example, an AI product could produce a false-positive diagnosis, leading to unnecessary tests or interventions, or a false-negative result, possibly delaying diagnosis and care. It could also be applied to inappropriate datasets or populations, such as applying an algorithm trained on small animal cases to an ultrasound study of a horse.
He added that veterinary professionals need to consider whether it is ethical to shift responsibility to general practitioners, emergency veterinarians, or non-imaging specialists who use a product whose accuracy is not published or otherwise known.
"How do we make sure there is appropriate oversight to protect our colleagues, our patients, and our clients, and make sure we're not asleep at the wheel as we usher in this new tech and adopt it responsibly?" Dr. Cohen asked.