Equine Veterinary Journal 2024; doi: 10.1111/evj.14087

Comparison of veterinarians and a deep learning tool in the diagnosis of equine ophthalmic diseases.

Abstract: The aim was to compare ophthalmic diagnoses made by veterinarians with those of a deep learning (artificial intelligence) software tool developed to aid in the diagnosis of equine ophthalmic diseases. As equine ophthalmology is a highly specialised field within equine medicine, the tool may help in diagnosing equine ophthalmic emergencies such as uveitis. Study design: In silico tool development and assessment of diagnostic performance. Methods: Convolutional neural networks (CNNs) were trained on 2346 photographs of equine eyes, which were augmented to 9384 images; 261 separate, unmodified images were used to evaluate the trained network. The trained deep learning tool was then tested on 40 photographs of equine eyes (10 healthy, 12 uveitis, 18 other diseases). The same data set was shown to different groups of veterinarians (equine, small animal, mixed practice, other) in an opinion poll to compare the results and evaluate the performance of the programme; in total, the diagnostic performance of 148 veterinarians was compared with that of the software tool. Results: The probability of a correct answer was 93% for the AI programme. Equine veterinarians answered correctly in 76% of cases, whereas other veterinarians reached a 67% probability of a correct diagnosis. Main limitations: Diagnosis was based solely on images of equine eyes, without the possibility of evaluating the inner eye. Conclusions: The deep learning tool proved to be at least equivalent to veterinarians in assessing ophthalmic diseases from photographs. We therefore conclude that the software tool may be useful in detecting potential emergency cases. In this context, blindness in horses may be prevented, as the horse can promptly receive accurate treatment or be referred to an equine hospital. Furthermore, the tool gives less experienced veterinarians the opportunity to differentiate between uveitis and other ocular anterior segment diseases and supports their decision-making regarding treatment.
Publication Date: 2024-04-03 | PubMed ID: 38567426 | DOI: 10.1111/evj.14087
The Equine Research Bank provides access to a large database of publicly available scientific literature. Inclusion in the Research Bank does not imply endorsement of study methods or findings by Mad Barn.
  • Journal Article

Summary

This research summary has been generated with artificial intelligence and may contain errors and omissions. Refer to the original study to confirm details provided.

The research paper presents a comparative study: veterinarians' ability to diagnose equine ophthalmic diseases is contrasted with that of a deep learning tool developed specifically to aid in diagnosing such conditions. The goal is to assess whether this technology can reliably assist in emergency situations, improve disease identification, and support less experienced practitioners in making treatment decisions.

Study Methodology

  • The research used a deep learning tool trained to classify equine ophthalmic diseases.
  • Convolutional neural networks (CNNs) were trained on 2346 photographs of horse eyes, expanded to 9384 images through augmentation. The trained network was validated on 261 separate, unedited images.
  • The tool was then evaluated on a set of 40 photographs of equine eyes (10 healthy, 12 showing uveitis, and 18 showing other diseases).
  • The performance of the AI tool was compared to 148 veterinarians from different fields (including equine, small animal, mixed practice, etc.) by showing them the same dataset and comparing their diagnoses.
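The 4x expansion reported above (2346 photographs augmented to 9384 training images) is a standard data-augmentation step. The paper does not state which transforms were applied, so the flips and rotation below are purely illustrative assumptions; this is a minimal sketch of how each photograph could yield four training images.

```python
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Return the original image plus three geometric variants.

    Illustrative only: the study reports a 4x expansion (2346 -> 9384
    images) but does not specify the transforms; these are assumptions.
    """
    return [
        image,             # original photograph
        np.fliplr(image),  # horizontal mirror
        np.flipud(image),  # vertical mirror
        np.rot90(image),   # 90-degree rotation
    ]

# Tiny stand-in "photo" (height x width x RGB) instead of a real eye image.
photo = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
variants = augment(photo)
print(len(variants))         # 4 variants per photograph
print(2346 * len(variants))  # 9384, matching the augmented set size
```

In practice such transforms are usually applied through a framework's augmentation pipeline rather than by hand, but the arithmetic is the same: four variants per source photograph reproduces the reported data-set size.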

Key Findings

  • Using only images of horse eyes, the AI tool had a 93% likelihood of delivering the correct diagnosis.
  • Equine veterinarians achieved a correct diagnosis rate of 76%, whereas veterinarians from other fields had a 67% chance of diagnosing correctly.
  • Despite the limitation of only using images (and not being able to assess the inside of the eye), the AI tool was found to be at least as good as, if not better than, the veterinarians in diagnosing ophthalmic diseases.

Conclusions and Implications

  • The results suggest that the deep learning tool could be beneficial in identifying emergency cases where swift yet accurate diagnosis is of the essence, potentially averting blindness in horses by enabling effective treatments or timely referrals to a specialized facility.
  • For less experienced veterinary professionals, the AI tool could provide supportive insights to differentiate between uveitis and other anterior segment diseases, aiding them in deciding the best course of treatment.

Cite This Article

APA
Scharre A, Scholler D, Gesell-May S, Müller T, Zablotski Y, Ertel W, May A. (2024). Comparison of veterinarians and a deep learning tool in the diagnosis of equine ophthalmic diseases. Equine Vet J. https://doi.org/10.1111/evj.14087

Publication

ISSN: 2042-3306
NlmUniqueID: 0173320
Country: United States
Language: English

Researcher Affiliations

Scharre, Annabel
  • Equine Clinic, Ludwig Maximilians University, Oberschleissheim, Germany.
Scholler, Dominik
  • Equine Clinic, Ludwig Maximilians University, Oberschleissheim, Germany.
Gesell-May, Stefan
  • Center for Equine Ophthalmology, Munich, Germany.
Müller, Tobias
  • anirec, Munich, Germany.
Zablotski, Yury
  • Clinic for Ruminants, Ludwig Maximilians University, Oberschleissheim, Germany.
Ertel, Wolfgang
  • Institute for Artificial Intelligence, Ravensburg-Weingarten University, Weingarten, Germany.
May, Anna
  • Equine Clinic, Ludwig Maximilians University, Oberschleissheim, Germany.

References

This article includes 28 references
  1. Gerding JC, Gilger BC. Prognosis and impact of equine recurrent uveitis. Equine Vet J 2016;48:290–298.
  2. Gilger B, Hollingsworth S. Diseases of the uvea, uveitis, and recurrent uveitis. In: Gilger B, editor. Equine ophthalmology. 3rd ed. Ames, Iowa: Wiley‐Blackwell; 2016. p. 369–415.
  3. Gilger BC. Equine recurrent uveitis: the viewpoint from the USA. Equine Vet J 2010;42(S37):57–61.
  4. May A, Gesell‐May S, Müller T, Ertel W. Artificial intelligence as a tool to aid in the differentiation of equine ophthalmic diseases with an emphasis on equine uveitis. Equine Vet J 2022;54(5):847–855.
  5. Yala A, Lehman C, Schuster T, Portnoi T, Barzilay R. A deep learning mammography‐based model for improved breast cancer risk prediction. Radiology 2019;292:60–66.
  6. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016;316:2402–2410.
  7. Quellec G, Charrière K, Boudi Y, Cochener B, Lamard M. Deep image mining for diabetic retinopathy screening. Med Image Anal 2017;39:178–193.
  8. Alyoubi WL, Shalash WM, Abulkhair MF. Diabetic retinopathy detection through deep learning techniques: a review. Inform Med Unlocked 2020;20:100377.
  9. Goodfellow I, Bengio Y, Courville A. Deep learning. Cambridge, MA: MIT Press; 2016. p. 329ff. Accessed 17 September 2023. www.deeplearningbook.org.
  10. Li Z, He Y, Keel S, Meng W, Chang RT, He M. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology 2018;125:1199–1206.
  11. Lee CS, Baughman DM, Lee AY. Deep learning is effective for the classification of OCT images of normal versus age‐related macular degeneration. Ophthalmol Retina 2017;1:322–327.
  12. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM. Dermatologist‐level classification of skin cancer with deep neural networks. Nature 2017;542:115–118.
  13. Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A. A comparison of deep learning performance against health‐care professionals in detecting diseases from medical imaging: a systematic review and meta‐analysis. Lancet Digit Health 2019;1:e271–e297.
  14. Becker AS, Marcon M, Ghafoor S, Wurnig MC, Frauenfelder T, Boss A. Deep learning in mammography: diagnostic accuracy of a multipurpose image analysis software in the detection of breast cancer. Invest Radiol 2017;52:434–440.
  15. Nagendran M, Chen Y, Lovejoy CA, Gordon AC, Komorowski M, Harvey H. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ 2020;368:m689.
  16. Bertram CA, Marzahl C, Bartel A, Stayt J, Bonsembiante F, Beeler‐Marfisi J. Cytologic scoring of equine exercise‐induced pulmonary hemorrhage: performance of human experts and a deep learning‐based algorithm. Vet Pathol 2023;60:75–85.
  17. Kim E, Fischetti AJ, Sreetharan P, Weltman JG, Fox PR. Comparison of artificial intelligence to the veterinary radiologist's diagnosis of canine cardiogenic pulmonary edema. Vet Radiol Ultrasound 2022;63:292–297.
  18. Zhai S, Wang H, Sun L, Zhang B, Huo F, Qiu S. Artificial intelligence (AI) versus expert: a comparison of left ventricular outflow tract velocity time integral (LVOT‐VTI) assessment between ICU doctors and an AI tool. J Appl Clin Med Phys 2022;23:e13724.
  19. Graf M, Knitza J, Leipe J, Krusche M, Welcker M, Kuhn S. Comparison of physician and artificial intelligence‐based symptom checker diagnostic accuracy. Rheumatol Int 2022;42:2167–2176.
  20. Aubreville M, Bertram CA, Marzahl C, Gurtner C, Dettwiler M, Schmidt A. Deep learning algorithms out‐perform veterinary pathologists in detecting the mitotically most active tumor region. Sci Rep 2020;10:16447.
  21. Ter Riet G, Bachmann LM, Kessels AG, Khan KS. Individual patient data meta‐analysis of diagnostic studies: opportunities and challenges. Evid Based Med 2013;18:165–169.
  22. Li Z, Jiang J, Qiang W, Guo L, Liu X, Weng H. Comparison of deep learning systems and cornea specialists in detecting corneal diseases from low‐quality images. iScience 2021;24:103317.
  23. Tamori H, Yamashina H, Mukai M, Morii Y, Suzuki T, Ogasawara K. Acceptance of the use of artificial intelligence in medicine among Japan's doctors and the public: a questionnaire survey. JMIR Hum Factors 2022;9:e24680.
  24. Bisdas S, Topriceanu CC, Zakrzewska Z, Irimia A‐V, Shakallis L, Subhash J. Artificial intelligence in medicine: a multinational multi‐center survey on the medical and dental students' perception. Front Public Health 2021;9:795284.
  25. Fraiwan MA, Abutarbush SM. Using artificial intelligence to predict survivability likelihood and need for surgery in horses presented with acute abdomen (colic). J Equine Vet Sci 2020;90:102973.
  26. Alexeenko V, Jeevaratnam K. Artificial intelligence: is it wizardry, witchcraft, or a helping hand for an equine veterinarian? Equine Vet J 2023;55:719–722.
  27. Darcy AM, Louie AK, Roberts LW. Machine learning and the profession of medicine. JAMA 2016;315:551–552.
  28. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M. A survey on deep learning in medical image analysis. Med Image Anal 2017;42:60–88.

Citations

This article has been cited 0 times.