Abstract: Once a mare experiences parturition abnormalities, the outcome can shift rapidly between a live foal and a stillbirth. Automated detection of mare parturition and timely human intervention are crucial to reducing risks for both mare and foal during parturition. This paper addresses the challenges of manually monitoring parturition in large-scale equine facilities, where the timing of mare parturition is unpredictable, by proposing an algorithm for detecting mare parturition through balanced multi-scale feature fusion based on an improved Libra RCNN. Initially, a ResNet101 backbone network incorporating the CBAM attention module was used to enhance parturition feature extraction; subsequently, a balanced content-aware feature reassembly feature pyramid, CARAFE-BFP, was employed to mitigate the effects of data imbalance while enhancing the quality of feature map upsampling; finally, the GRoIE module was utilized to merge CARAFE-BFP's multi-scale features, improving the model's perception of multi-scale objects and minor feature changes. The model achieved a mean average precision of 86.26% under imbalanced positive and negative samples, subtle parturition feature differences, and multi-scale data distribution, with a detection speed of 15.06 images per second and an average recall rate of 98.17%. Moreover, this study employed a statistical method combined with a sliding window mechanism to assess the algorithm's performance in continuous video stream monitoring, achieving an accuracy rate of 92.75% for mare parturition detection. The proposed algorithm achieves non-contact, stress-free, intensive, and automated detection of mare parturition, demonstrating the considerable potential of artificial intelligence technology in animal production management.
The Equine Research Bank provides access to a large database of publicly available scientific literature. Inclusion in the Research Bank does not imply endorsement of study methods or findings by Mad Barn.
This research summary has been generated with artificial intelligence and may contain errors and omissions. Refer to the original study to confirm details provided.
Overview
This research presents a new artificial intelligence algorithm designed to automatically detect when a mare (female horse) is about to give birth, improving monitoring and reducing risks associated with equine parturition.
The proposed method enhances feature extraction and balances multi-scale data using an improved Libra RCNN framework, achieving high accuracy and real-time detection speeds.
Background and Motivation
Parturition abnormalities in mares can critically influence whether a foal is born alive or stillborn.
Timely detection of mare labor onset is essential for prompt human intervention to reduce health risks for both mare and foal.
Current monitoring in large-scale equine facilities often relies on manual observation, which is labor-intensive and inefficient because the timing of parturition is unpredictable.
Automated, non-contact, and real-time detection systems based on video data offer a solution to these challenges.
Research Problem and Challenges
Detecting mare parturition from video involves several challenges:
Imbalanced datasets: far fewer positive parturition samples versus negatives.
Subtle feature differences: the visual cues of parturition onset differ only slightly from normal behavior.
Multi-scale data distribution: mares and the relevant body regions appear at varying scales across frames.
Proposed Method
The improved Libra RCNN framework combines three components:
ResNet101 Backbone with CBAM (Convolutional Block Attention Module):
Strengthens extraction of parturition-related features by attending to informative channels and spatial regions.
CARAFE-BFP (Balanced Feature Pyramid with Content-Aware Reassembly of Features):
This component handles multi-scale features, improving upsampling quality in feature maps.
It also balances the feature content to mitigate dataset imbalance problems, helping to detect both common and rare features effectively.
GRoIE Module (Generic RoI Extractor):
Designed to merge multi-scale features from CARAFE-BFP.
Enhances the model's sensitivity to multi-scale objects and subtle changes related to parturition.
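To make the GRoIE idea concrete: unlike the standard FPN rule that assigns each region of interest (RoI) to a single pyramid level, GRoIE pools the same RoI from every level and aggregates the results. The sketch below is a minimal single-channel NumPy illustration under stated assumptions (hypothetical helper names, nearest-neighbour resize, plain summation as the aggregation step); it is not the paper's implementation.

```python
import numpy as np

def crop_resize(fmap, box, size):
    """Nearest-neighbour crop-and-resize of one single-channel feature map.
    `box` is (x0, y0, x1, y1) in normalised [0, 1] image coordinates."""
    h, w = fmap.shape
    ys = np.linspace(box[1] * (h - 1), box[3] * (h - 1), size).round().astype(int)
    xs = np.linspace(box[0] * (w - 1), box[2] * (w - 1), size).round().astype(int)
    return fmap[np.ix_(ys, xs)]

def groie_pool(pyramid, box, size=4):
    """GRoIE-style aggregation: crop the same RoI from EVERY pyramid level
    and sum the equal-size results, so all scales contribute to the RoI
    feature instead of a single heuristically chosen level (as in FPN)."""
    return sum(crop_resize(fmap, box, size) for fmap in pyramid)
```

The actual GRoIE additionally applies per-level and post-aggregation convolutions; summing equal-size crops is simply the smallest aggregation consistent with the idea of letting all scales contribute.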
Performance and Results
The algorithm achieved a mean average precision (mAP) of 86.26% despite difficult conditions of imbalanced datasets and subtle feature differences.
Detection speed reached 15.06 images per second, suitable for near real-time monitoring.
Recall rate was 98.17%, indicating very effective detection of true parturition events.
Additionally, a statistical method combined with a sliding window mechanism was proposed to analyze continuous video streams, improving detection reliability over time.
In video monitoring scenarios, the method achieved a 92.75% accuracy rate for mare parturition detection.
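The sliding-window step can be illustrated concretely. The summary does not spell out the statistic used over the window, so the sketch below assumes a simple positive-fraction vote over the most recent per-frame detections (all names hypothetical); the point is that a single spurious frame-level detection cannot trigger an alert.

```python
from collections import deque

def parturition_monitor(frame_preds, window=30, threshold=0.8):
    """Slide a fixed-size window over per-frame detections (1 = parturition
    detected, 0 = not) and raise an alert once the fraction of positive
    frames in a full window reaches `threshold`. This smooths out isolated
    false positives/negatives in a continuous video stream."""
    buf = deque(maxlen=window)
    alerts = []
    for pred in frame_preds:
        buf.append(pred)
        full = len(buf) == window
        alerts.append(full and sum(buf) / window >= threshold)
    return alerts
```

At the reported detection speed of roughly 15 images per second, a window of 30 frames would correspond to about a two-second decision horizon.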
Significance and Implications
The study introduces a non-contact and stress-free automated system for intensive monitoring of mare parturition, reducing reliance on laborious manual observation.
Demonstrates the potential of AI-based object detection technologies in improving animal production management and welfare.
The balanced multi-scale feature fusion approach addresses common challenges in medical and veterinary image-based detection tasks, suggesting applicability to other domains with imbalanced and subtle visual data.
Summary
The research advances automated detection of critical animal physiological events utilizing deep learning and enhanced multi-scale feature fusion techniques.
It offers a practical solution for monitoring mare labor with high accuracy and speed, supporting timely intervention and thus improved outcomes for mares and foals.
Cite This Article
APA
Wang, B., Duan, W., Zhao, J., & Bai, D. (2025). Detection of mare parturition through balanced multi-scale feature fusion based on improved Libra RCNN. PLoS One, 20(3), e0318498. https://doi.org/10.1371/journal.pone.0318498