Sensors (Basel, Switzerland) 2021; 21(17): 5818; doi: 10.3390/s21175818

Cross-Modality Interaction Network for Equine Activity Recognition Using Imbalanced Multi-Modal Data.

Abstract: With the recent advances in deep learning, wearable sensors have increasingly been used in automated animal activity recognition. However, there are two major challenges in improving recognition performance: multi-modal feature fusion and imbalanced data modeling. In this study, to improve classification performance for equine activities while tackling these two challenges, we developed a cross-modality interaction network (CMI-Net) involving a dual convolution neural network architecture and a cross-modality interaction module (CMIM). The CMIM adaptively recalibrated the temporal- and axis-wise features in each modality by leveraging multi-modal information to achieve deep intermodality interaction. A class-balanced (CB) focal loss was adopted to supervise the training of CMI-Net to alleviate the class imbalance problem. Motion data was acquired from six neck-attached inertial measurement units from six horses. The CMI-Net was trained and verified with leave-one-out cross-validation. The results demonstrated that our CMI-Net outperformed the existing algorithms with high precision (79.74%), recall (79.57%), F1-score (79.02%), and accuracy (93.37%). The adoption of CB focal loss improved the performance of CMI-Net, with increases of 2.76%, 4.16%, and 3.92% in precision, recall, and F1-score, respectively. In conclusion, CMI-Net and CB focal loss effectively enhanced the equine activity classification performance using imbalanced multi-modal sensor data.
Publication Date: 2021-08-29
PubMed ID: 34502709
PubMed Central: PMC8434387
DOI: 10.3390/s21175818
The Equine Research Bank provides access to a large database of publicly available scientific literature. Inclusion in the Research Bank does not imply endorsement of study methods or findings by Mad Barn.
  • Journal Article

Summary

This research summary has been generated with artificial intelligence and may contain errors and omissions. Refer to the original study to confirm details provided.

This research develops a new network, called Cross-Modality Interaction Network (CMI-Net), aimed at improving the recognition of activities in horses using data from wearable sensors. By addressing challenges with imbalanced data and multi-modal feature fusion, the network provides more accurate results than existing algorithms.

Introduction

  • The article discusses the use of deep learning and wearable sensors in animal activity recognition, specifically equine activities. The focus is on overcoming two key challenges in this field: multi-modal feature fusion and handling imbalanced data.

Cross-Modality Interaction Network (CMI-Net)

  • To improve the classification of equine activities, the researchers developed a Cross-Modality Interaction Network (CMI-Net). This network combines a dual convolutional neural network architecture with a cross-modality interaction module (CMIM).
  • The CMIM makes use of multi-modal information to recalibrate the temporal- and axis-wise features of each modality, enabling deeper intermodality interaction.
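The paper's exact CMIM design is not reproduced here; as a rough illustration of the idea, the sketch below derives temporal- and axis-wise gates from a descriptor pooled over both modalities and uses them to recalibrate each branch. All names and the specific gating form are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cmim_recalibrate(acc_feat, gyr_feat, w_t=1.0, w_a=1.0):
    """Hypothetical sketch of cross-modality recalibration.

    acc_feat, gyr_feat: feature maps of shape (axes, time) for the
    accelerometer and gyroscope branches.
    w_t, w_a: stand-in scalar weights (learned projections in practice).
    """
    # Joint descriptors pooled over BOTH modalities, so each branch's
    # gates are informed by the other modality (the "interaction").
    joint_t = (acc_feat.mean(axis=0) + gyr_feat.mean(axis=0)) / 2  # (time,)
    joint_a = (acc_feat.mean(axis=1) + gyr_feat.mean(axis=1)) / 2  # (axes,)

    gate_t = sigmoid(w_t * joint_t)   # temporal-wise attention, in (0, 1)
    gate_a = sigmoid(w_a * joint_a)   # axis-wise attention, in (0, 1)

    recal = lambda f: f * gate_t[None, :] * gate_a[:, None]
    return recal(acc_feat), recal(gyr_feat)
```

Because both gates depend on the pooled joint descriptor, informative time steps and axes in one modality can amplify or suppress the corresponding features in the other.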

Class-Balanced Focal Loss

  • The researchers used a class-balanced (CB) focal loss to supervise the training of the CMI-Net, in order to tackle the class imbalance problem. CB focal loss re-weights the importance of different classes based on their frequency, thus countering the negative effects of imbalanced class distribution.
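Following the class-balanced weighting of Cui et al. combined with focal loss (both cited in the reference list), a minimal single-sample sketch of CB focal loss, not the authors' exact implementation, could look like:

```python
import numpy as np

def cb_focal_loss(probs, label, samples_per_class, beta=0.999, gamma=2.0):
    """Class-balanced focal loss for one sample (illustrative sketch).

    probs: predicted class probabilities (softmax output).
    label: integer ground-truth class.
    samples_per_class: training-set counts n_y per class.
    """
    # Effective number of samples: E_n = (1 - beta^n) / (1 - beta).
    effective_num = (1.0 - beta ** samples_per_class[label]) / (1.0 - beta)
    alpha = 1.0 / effective_num          # larger weight for rarer classes
    p_t = probs[label]
    # Focal term (1 - p_t)^gamma down-weights easy, confident examples.
    return alpha * (1.0 - p_t) ** gamma * -np.log(p_t)
```

With these weights, a correctly confident prediction on a frequent activity contributes far less to the loss than the same prediction on a rare one, which is what counters the imbalanced class distribution during training.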

Experimental Setup

  • The team collected motion data from six neck-attached inertial measurement units on six horses. The gathered data was used to train and verify the CMI-Net, using a well-known validation scheme called leave-one-out cross-validation.
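With six horses, leave-one-out cross-validation here presumably holds out all data from one horse per fold (leave-one-subject-out), so the model is always tested on an unseen animal. A minimal sketch of that split logic (an assumption about the fold structure, not the study's code):

```python
def leave_one_subject_out(horse_ids):
    """Yield (held_out, train_idx, test_idx) splits, one fold per horse.

    horse_ids: per-sample subject labels, e.g. ['h1', 'h1', 'h2', ...].
    """
    subjects = sorted(set(horse_ids))
    for held_out in subjects:
        train = [i for i, h in enumerate(horse_ids) if h != held_out]
        test = [i for i, h in enumerate(horse_ids) if h == held_out]
        yield held_out, train, test
```

Splitting by subject rather than by random sample avoids leaking one horse's highly correlated windows into both train and test sets.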

Results and Conclusion

  • Compared with existing algorithms, the CMI-Net achieved higher precision (79.74%), recall (79.57%), F1-score (79.02%), and overall accuracy (93.37%).
  • The adoption of CB focal loss further improved the CMI-Net's performance, raising precision, recall, and F1-score by 2.76%, 4.16%, and 3.92%, respectively.
  • In sum, the study concludes that the proposed CMI-Net, trained with CB focal loss, effectively improved the classification of equine activities from imbalanced multi-modal sensor data.
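The reported precision, recall, and F1-score are averages over the activity classes; a short sketch of computing such macro-averaged metrics from a confusion matrix (a standard formulation, not code from the study):

```python
import numpy as np

def macro_metrics(conf):
    """Macro-averaged precision, recall, and F1 from a confusion matrix.

    conf[i, j] = number of samples with true class i predicted as class j.
    """
    tp = np.diag(conf).astype(float)
    prec = tp / np.maximum(conf.sum(axis=0), 1)   # per-class precision
    rec = tp / np.maximum(conf.sum(axis=1), 1)    # per-class recall
    f1 = np.where(prec + rec > 0, 2 * prec * rec / (prec + rec), 0.0)
    return prec.mean(), rec.mean(), f1.mean()
```

Macro averaging weights every class equally, which is why these metrics are more informative than plain accuracy when the activity classes are imbalanced.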

Cite This Article

APA
Mao A, Huang E, Gan H, Parkes RSV, Xu W, Liu K. (2021). Cross-Modality Interaction Network for Equine Activity Recognition Using Imbalanced Multi-Modal Data. Sensors (Basel), 21(17), 5818. https://doi.org/10.3390/s21175818

Publication

ISSN: 1424-8220
NlmUniqueID: 101204366
Country: Switzerland
Language: English
Volume: 21
Issue: 17
PII: 5818

Researcher Affiliations

Mao, Axiu
  • Department of Infectious Diseases and Public Health, Jockey Club College of Veterinary Medicine and Life Sciences, City University of Hong Kong, Hong Kong, China.
Huang, Endai
  • Department of Computer Science, City University of Hong Kong, Hong Kong, China.
Gan, Haiming
  • Department of Infectious Diseases and Public Health, Jockey Club College of Veterinary Medicine and Life Sciences, City University of Hong Kong, Hong Kong, China.
  • College of Electronic Engineering, South China Agricultural University, Guangzhou 510642, China.
Parkes, Rebecca S V
  • Department of Veterinary Clinical Sciences, Jockey Club College of Veterinary Medicine and Life Sciences, City University of Hong Kong, Hong Kong, China.
  • Centre for Companion Animal Health, Jockey Club College of Veterinary Medicine and Life Sciences, City University of Hong Kong, Hong Kong, China.
Xu, Weitao
  • Department of Computer Science, City University of Hong Kong, Hong Kong, China.
Liu, Kai
  • Department of Infectious Diseases and Public Health, Jockey Club College of Veterinary Medicine and Life Sciences, City University of Hong Kong, Hong Kong, China.
  • Animal Health Research Centre, Chengdu Research Institute, City University of Hong Kong, Chengdu 610000, China.

MeSH Terms

  • Algorithms
  • Animals
  • Horses
  • Neural Networks, Computer

Grant Funding

  • 9610450 / City University of Hong Kong

Conflict of Interest Statement

The authors declare no conflict of interest.

References

This article includes 46 references
  1. Eerdekens A, Deruyck M, Fontaine J, Martens L, De Poorter E, Plets D, Joseph W. A framework for energy-efficient equine activity recognition with leg accelerometers. Comput. Electron. Agric. 2021;183:106020.
  2. Parkes RSV, Weller R, Pfau T, Witte TH. The Effect of Training on Stride Duration in a Cohort of Two-Year-Old and Three-Year-Old Thoroughbred Racehorses. Animals (Basel) 2019 Jul 22;9(7).
    doi: 10.3390/ani9070466; pmc: PMC6680649; pubmed: 31336595
  3. van Weeren PR, Pfau T, Rhodin M, Roepstorff L, Serra Bragança F, Weishaupt MA. Do we have to redefine lameness in the era of quantitative gait analysis? Equine Vet J 2017 Sep;49(5):567-569.
    doi: 10.1111/evj.12715; pubmed: 28804943
  4. Bosch S, Serra Bragança F, Marin-Perianu M, Marin-Perianu R, van der Zwaag BJ, Voskamp J, Back W, van Weeren R, Havinga P. EquiMoves: A Wireless Networked Inertial Measurement System for Objective Examination of Horse Gait. Sensors (Basel) 2018 Mar 13;18(3).
    doi: 10.3390/s18030850; pmc: PMC5877382; pubmed: 29534022
  5. Astill J, Dara RA, Fraser EDG, Roberts B, Sharif S. Smart poultry management: Smart sensors, big data, and the internet of things. Comput. Electron. Agric. 2020;170:105291.
  6. Rueß D, Rueß J, Hümmer C, Deckers N, Migal V, Kienapfel K, Wieckert A, Barnewitz D, Reulke R. Equine Welfare Assessment: Horse Motion Evaluation and Comparison to Manual Pain Measurements. Proceedings of the Pacific-Rim Symposium on Image and Video Technology, PSIVT 2019 Sydney, Australia. 18–22 November 2019; pp. 156–169.
  7. Kamminga JW, Meratnia N, Havinga PJM. Dataset: Horse Movement Data and Analysis of its Potential for Activity Recognition. Proceedings of the 2nd Workshop on Data Acquisition to Analysis, DATA 2019 Prague, Czech Republic. 26–28 July 2019; pp. 22–25.
    doi: 10.1145/3359427.3361908
  8. Kumpulainen P, Cardó AV, Somppi S, Törnqvist H, Väätäjä H, Majaranta P, Gizatdinova Y, Hoog Antink C, Surakka V, Kujala MV. Dog behaviour classification with movement sensors placed on the harness and the collar. Appl. Anim. Behav. Sci. 2021;241:105393.
  9. Tran DN, Nguyen TN, Khanh PCP, Trana DT. An IoT-based Design Using Accelerometers in Animal Behavior Recognition Systems. IEEE Sens. J. 2021.
    doi: 10.1109/JSEN.2021.3051194
  10. Maisonpierre IN, Sutton MA, Harris P, Menzies-Gow N, Weller R, Pfau T. Accelerometer activity tracking in horses and the effect of pasture management on time budget. Equine Vet J 2019 Nov;51(6):840-845.
    doi: 10.1111/evj.13130; pubmed: 31009100
  11. Nweke HF, Teh YW, Al-garadi MA, Alo UR. Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges. Expert Syst. Appl. 2018;105:233–261.
  12. Noorbin SFH, Layeghy S, Kusy B, Jurdak R, Bishop-hurley G, Portmann M. Deep Learning-based Cattle Activity Classification Using Joint Time-frequency Data Representation. Comput. Electron. Agric. 2020;187:106241.
  13. Peng Y, Kondo N, Fujiura T, Suzuki T, Ouma S, Wulandari, Yoshioka H, Itoyama E. Dam behavior patterns in Japanese black beef cattle prior to calving: Automated detection using LSTM-RNN. Comput. Electron. Agric. 2020;169:105178.
  14. Bocaj E, Uzunidis D, Kasnesis P, Patrikakis CZ. On the Benefits of Deep Convolutional Neural Networks on Animal Activity Recognition. Proceedings of the 2020 International Conference on Smart Systems and Technologies (SST) Osijek, Croatia. 14–16 October 2020; pp. 83–88.
  15. Eerdekens A, Deruyck M, Fontaine J, Martens L, de Poorter E, Plets D, Joseph W. Resampling and Data Augmentation for Equines’ Behaviour Classification Based on Wearable Sensor Accelerometer Data Using a Convolutional Neural Network. Proceedings of the 2020 International Conference on Omni-layer Intelligent Systems (COINS) Barcelona, Spain. 31 August–2 September 2020; pp. 1–6.
  16. Chambers RD, Yoder NC, Carson AB, Junge C, Allen DE, Prescott LM, Bradley S, Wymore G, Lloyd K, Lyle S. Deep Learning Classification of Canine Behavior Using a Single Collar-Mounted Accelerometer: Real-World Validation. Animals (Basel) 2021 May 25;11(6).
    doi: 10.3390/ani11061549; pmc: PMC8228965; pubmed: 34070579
  17. Liu N, Zhang N, Han J. Learning Selective Self-Mutual Attention for RGB-D Saliency Detection. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020 14–19 June 2020; pp. 13753–13762.
  18. Ha S, Choi S. Convolutional neural networks for human activity recognition using multiple accelerometer and gyroscope sensors. Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN) Vancouver, BC, Canada. 24–29 July 2016; pp. 381–388.
  19. Mustaqeem, Kwon S. MLT-DNet: Speech emotion recognition using 1D dilated CNN based on multi-learning trick approach. Expert Syst. Appl. 2021;167:114177.
  20. Mustaqeem, Kwon S. Optimal feature selection based speech emotion recognition using two-stream deep convolutional neural network. Int. J. Intell. Syst. 2021;36:5116–5135.
    doi: 10.1002/int.22505
  21. Xu X, Li W, Duan Q. Transfer learning and SE-ResNet152 networks-based for small-scale unbalanced fish species identification. Comput. Electron. Agric. 2021;180:105878.
  22. Zhang S, Li Z, Yan S, He X, Sun J. Distribution Alignment: A Unified Framework for Long-tail Visual Recognition. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021 19–25 June 2021; pp. 2361–2370.
  23. Tan J, Wang C, Li B, Li Q, Ouyang W, Yin C, Yan J. Equalization loss for long-tailed object recognition. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020 14–19 June 2020; pp. 11659–11668.
  24. Khan SH, Hayat M, Bennamoun M, Sohel FA, Togneri R. Cost-Sensitive Learning of Deep Feature Representations From Imbalanced Data. IEEE Trans Neural Netw Learn Syst 2018 Aug;29(8):3573-3587.
    doi: 10.1109/TNNLS.2017.2732482; pubmed: 28829320
  25. Cui Y, Jia M, Lin TY, Song Y, Belongie S. Class-balanced loss based on effective number of samples. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019 Long Beach, CA, USA. 16–20 June 2019; pp. 9260–9269.
    doi: 10.1109/CVPR.2019.00949
  26. Lin TY, Goyal P, Girshick R, He K, Dollar P. Focal Loss for Dense Object Detection. IEEE Trans Pattern Anal Mach Intell 2020 Feb;42(2):318-327.
    doi: 10.1109/TPAMI.2018.2858826; pubmed: 30040631
  27. Wang T, Zhu Y, Zhao C, Zeng W, Wang J, Tang M. Adaptive Class Suppression Loss for Long-Tail Object Detection. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021 19–25 June 2021; pp. 3103–3112.
  28. Mao AX, Huang ED, Xu WT, Liu K. Cross-modality Interaction Network for Equine Activity Recognition Using Time-Series Motion Data. Proceedings of the 2021 International Symposium on Animal Environment and Welfare (ISAEW) Chongqing, China. 20–23 October 2021; in press.
  29. Zhang Z, Lin Z, Xu J, Jin WD, Lu SP, Fan DP. Bilateral Attention Network for RGB-D Salient Object Detection. IEEE Trans Image Process 2021;30:1949-1961.
    doi: 10.1109/TIP.2021.3049959; pubmed: 33439842
  30. Woo S, Park J, Lee JY, Kweon IS. CBAM: Convolutional block attention module. Proceedings of the European Conference on Computer Vision, ECCV 2018 Munich, Germany. 8–14 September 2018; pp. 3–19.
  31. Mustaqeem, Kwon S. Att-Net: Enhanced emotion recognition system using lightweight self-attention module. Appl. Soft Comput. 2021;102:107101.
  32. Kamminga JW, Janßen LM, Meratnia N, Havinga PJM. Horsing around—A dataset comprising horse movement. Data 2019;4:131.
    doi: 10.3390/data4040131
  33. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2016 Las Vegas, NV, USA. 27–30 June 2016; pp. 770–778.
    doi: 10.1109/CVPR.2016.90
  34. Kamminga JW, Le DV, Havinga PJM. Towards deep unsupervised representation learning from accelerometer time series for animal activity recognition. Proceedings of the 6th Workshop on Mining and Learning from Time Series, MiLeTS 2020 San Diego, CA, USA. 24 August 2020.
  35. Nair V, Hinton GE. Rectified Linear Units Improve Restricted Boltzmann Machines. Proceedings of the 27th International Conference on Machine Learning, ICML 2010 Haifa, Israel. 21–24 June 2010.
  36. Joze HRV, Shaban A, Iuzzolino ML, Koishida K. MMTM: Multimodal transfer module for CNN fusion. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020 14–19 June 2020; pp. 13286–13296.
  37. Casella E, Khamesi AR, Silvestri S. A framework for the recognition of horse gaits through wearable devices. Pervasive Mob. Comput. 2020;67:101213.
  38. Zeng M, Nguyen LT, Yu B, Mengshoel OJ, Zhu J, Wu P, Zhang J. Convolutional Neural Networks for human activity recognition using mobile sensors. Proceedings of the 6th International Conference on Mobile Computing, Applications and Services, MobiCASE 2014 Austin, TX, USA. 6–7 November 2014; pp. 197–205.
  39. Wolpert DH, Macready WG. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997;1:67–82.
    doi: 10.1109/4235.585893
  40. Wei J, Wang Q, Li Z, Wang S, Zhou SK, Cui S. Shallow Feature Matters for Weakly Supervised Object Localization. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021 19–25 June 2021; pp. 5993–6001.
  41. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015 May 28;521(7553):436-44.
    doi: 10.1038/nature14539; pubmed: 26017442
  42. Van der Maaten L, Hinton G. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008;9:2579–2605.
  43. de Cocq P, van Weeren PR, Back W. Effects of girth, saddle and weight on movements of the horse. Equine Vet J 2004 Dec;36(8):758-63.
    doi: 10.2746/0425164044848000; pubmed: 15656511
  44. Geng C, Huang SJ, Chen S. Recent Advances in Open Set Recognition: A Survey. IEEE Trans Pattern Anal Mach Intell 2021 Oct;43(10):3614-3631.
    doi: 10.1109/TPAMI.2020.2981604; pubmed: 32191881
  45. Yoshihashi R, You S, Shao W, Iida M, Kawakami R, Naemura T. Classification-Reconstruction Learning for Open-Set Recognition. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019 Long Beach, CA, USA. 16–20 June 2019; pp. 4016–4025.
  46. Cardoso DO, Gama J, França FMG. Weightless neural networks for open set recognition. Mach. Learn. 2017;106:1547–1567.
    doi: 10.1007/s10994-017-5646-4

Citations

This article has been cited 8 times.
  1. Aguilar-Lazcano CA, Espinosa-Curiel IE, Ríos-Martínez JA, Madera-Ramírez FA, Pérez-Espinosa H. Machine Learning-Based Sensor Data Fusion for Animal Monitoring: Scoping Review. Sensors (Basel) 2023 Jun 20;23(12).
    doi: 10.3390/s23125732; pubmed: 37420896
  2. Mao A, Huang E, Gan H, Liu K. FedAAR: A Novel Federated Learning Framework for Animal Activity Recognition with Wearable Sensors. Animals (Basel) 2022 Aug 21;12(16).
    doi: 10.3390/ani12162142; pubmed: 36009732
  3. Mao A, Giraudet CSE, Liu K, De Almeida Nolasco I, Xie Z, Xie Z, Gao Y, Theobald J, Bhatta D, Stewart R, McElligott AG. Automated identification of chicken distress vocalizations using deep learning models. J R Soc Interface 2022 Jun;19(191):20210921.
    doi: 10.1098/rsif.2021.0921; pubmed: 35765806
  4. Wu Y, Liang W, Fang J, Zhou C, Sun X. VTC-Net: A Semantic Segmentation Network for Ore Particles Integrating Transformer and Convolutional Block Attention Module (CBAM). Sensors (Basel) 2026 Jan 24;26(3).
    doi: 10.3390/s26030787; pubmed: 41682303
  5. Ding L, Zhang C, Yue Y, Yao C, Li Z, Hu Y, Yang B, Ma W, Yu L, Gao R, Li Q. Wearable Sensors-Based Intelligent Sensing and Application of Animal Behaviors: A Comprehensive Review. Sensors (Basel) 2025 Jul 21;25(14).
    doi: 10.3390/s25144515; pubmed: 40732643
  6. Akbarein H, Taaghi MH, Mohebbi M, Soufizadeh P. Applications and Considerations of Artificial Intelligence in Veterinary Sciences: A Narrative Review. Vet Med Sci 2025 May;11(3):e70315.
    doi: 10.1002/vms3.70315; pubmed: 40173266
  7. Lovatti JVR, Dijkinga KA, Aires JF, Garrido LFC, Costa JHC, Daros RR. Validation and interdevice reliability of a behavior monitoring collar to measure rumination, feeding activity, and idle time of lactating dairy cows. JDS Commun 2024 Nov;5(6):602-607.
    doi: 10.3168/jdsc.2023-0467; pubmed: 39650024
  8. Ahn SH, Kim S, Jeong DH. Unsupervised Domain Adaptation for Mitigating Sensor Variability and Interspecies Heterogeneity in Animal Activity Recognition. Animals (Basel) 2023 Oct 20;13(20).
    doi: 10.3390/ani13203276; pubmed: 37894000