Year: 2019 Volume: 27 Issue: 4 Page Range: 2395 - 2411 Text Language: English DOI: 10.3906/elk-1808-130 Indexing Date: 18-05-2020

Elimination of useless images from raw camera-trap data

Abstract:
Camera-traps are motion-triggered cameras that are used to observe animals in nature. The number of images collected from camera-traps has increased significantly with their widening use, thanks to advances in digital technology. Grouping and labeling these images requires a great workload from wildlife researchers. We propose a system that decreases the amount of time spent by researchers by eliminating useless images from raw camera-trap data. These images are too bright, too dark, blurred, or contain no animals. To eliminate bright, dark, and blurred images, we employ techniques based on image histograms and the fast Fourier transform. To eliminate images without animals, we propose a system combining convolutional neural networks and background subtraction. We experimentally show that the proposed approach keeps 99% of photos with animals while eliminating more than 50% of photos without animals. We also present a software prototype that employs the developed algorithms to eliminate useless images.
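The histogram- and FFT-based filters described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the threshold values and the spectral-energy metric are assumptions chosen for demonstration.

```python
import numpy as np

def is_too_bright_or_dark(gray, low=0.15, high=0.85):
    """Flag images whose mean intensity falls outside a plausible range.

    `gray` is a 2-D array of pixel intensities in [0, 255]. The
    thresholds are illustrative, not taken from the paper.
    """
    mean = gray.mean() / 255.0
    return mean < low or mean > high

def blur_score(gray, radius_frac=0.25):
    """Fraction of spectral energy outside a low-frequency disc.

    Sharp images retain more energy at high spatial frequencies, so a
    small score suggests blur. A sketch of an FFT-based metric in the
    spirit of the no-reference blur detectors cited in the paper.
    """
    f = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    power = np.abs(f) ** 2
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    low_mask = r <= radius_frac * min(h, w) / 2
    total = power.sum()
    return float(power[~low_mask].sum() / total) if total > 0 else 0.0
```

An image would be discarded if `is_too_bright_or_dark` returns true or if `blur_score` falls below a tuned threshold; the surviving images would then pass to the CNN and background-subtraction stage for animal detection.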
Keywords:

Subjects: Engineering, Electrical and Electronic; Computer Science, Software Engineering; Computer Science, Cybernetics; Computer Science, Information Systems; Computer Science, Hardware and Architecture; Computer Science, Theory and Methods; Computer Science, Artificial Intelligence
Document Type: Article Article Type: Research Article Access Type: Open Access
  • [1] Boom BJ, He J, Palazzo S, Huang PX, Beyan C, Chou HM, Lin FP, Spampinato C, Fisher RB. A research tool for long-term and continuous analysis of fish assemblage in coral-reefs using underwater camera footage. Ecological Informatics 2014; 23: 83-97.
  • [2] Song D, Xu Y. A low false-negative filter for detecting rare bird species from short video segments using a probable observation data set-based EKF method. IEEE Transactions on Image Processing 2010; 19: 2321-2331.
  • [3] Weinstein BG. MotionMeerkat: Integrating motion video detection and ecological monitoring. Methods in Ecology and Evolution 2015; 6: 357-362.
  • [4] Hernández-Serna A, Jiménez-Segura LF. Automatic identification of species with neural networks. PeerJ 2014; 2:e563. doi: 10.7717/peerj.563
  • [5] Yu X, Wang J, Kays R, Jansen PA, Wang T, Huang T. Automated identification of animal species in camera trap images. EURASIP Journal on Image and Video Processing 2013; 52. doi: 10.1186/1687-5281-2013-52
  • [6] Chen G, Han TX, He Z, Kays R, Forrester T. Deep convolutional neural network based species recognition for wild animal monitoring. In: IEEE International Conference on Image Processing (ICIP); Paris, France; 2014. pp. 858-862.
  • [7] Gomez-Villa A, Salazar A, Vargas F. Identification of animal species in camera-trap images using very deep convolutional neural networks. Ecological Informatics 2017; 41: 24-32.
  • [8] Norouzzadeh MS, Nguyen A, Kosmala M, Swanson A, Palmer MS, Packer C, Clune J. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proceedings of the National Academy of Sciences of the United States of America (PNAS) 2018; 115: E5716-E5725. doi: 10.1073/pnas.1719367115
  • [9] Nguyen H, Maclagan SJ, Nguyen TD, Nguyen T, Flemons P, Andrews K, Ritchie EG, Phung D. Animal recognition and identification with deep convolutional neural networks for automated wildlife monitoring. In: IEEE International Conference on Data Science and Advanced Analytics (DSAA); Tokyo, Japan; 2017. pp. 40-49.
  • [10] Krishnappa YS, Turner WC. Software for minimalistic data management in large camera trap studies. Ecological Informatics 2014; 24: 11-16.
  • [11] Fegraus EH, Lin K, Ahumada JA, Baru C, Chandara S, Youn C. Data acquisition and management software for camera trap data: A case study from the TEAM network. Ecological Informatics 2011; 6: 345-353.
  • [12] Niedballa J, Sollmann R, Courtiol A, Wilting A. camtrapR: An R package for efficient camera trap data management. Methods in Ecology and Evolution 2016; 7 (12): 1457-1462.
  • [13] Pavlovic G, Tekalp AM. Maximum likelihood parametric blur identification based on a continuous spatial domain model. IEEE Transactions on Image Processing 1992; 1 (4): 496-504.
  • [14] Narvekar ND, Karam LJ. A no-reference image blur metric based on cumulative probability of blur detection (CPBD). IEEE Transactions on Image Processing 2011; 20 (9): 2678-2683.
  • [15] Tong H, Li M, Zhang H, Zhang C. Blur detection for digital images using wavelet transform. In: IEEE International Conference on Multimedia and Expo (ICME); Taipei, Taiwan; 2004. pp. 17-20.
  • [16] Dosselmann RW, Yang XD. No-reference noise and blur detection via the Fourier transform. Technical Report, University of Regina, Regina, Canada, 2012.
  • [17] Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: International Conference on Neural Information Processing Systems; Lake Tahoe, Nevada, USA; 2012. pp. 1097- 1105.
  • [18] Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC. ImageNet large scale visual recognition challenge. International Journal of Computer Vision 2015; 115 (3): 211-252.
  • [19] Sermanet P, Eigen D, Zhang X, Mathieu M, Fergus R, LeCun Y. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint 2013. arXiv:1312.6229.
  • [20] Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems (NIPS); Montreal, Canada; 2015. pp. 91-99.
  • [21] Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: unified, real-time object detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, Nevada, USA; 2016. pp. 779-788.
  • [22] Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, Berg AC. SSD: Single shot multibox detector. In: European Conference on Computer Vision (ECCV); Amsterdam, Netherlands; 2016. pp. 21-37.
  • [23] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, Nevada, USA; 2016. pp. 770-778.
  • [24] Orhan S, Bastanlar Y. Training CNNs with image patches for object localisation. Electronics Letters 2018; 54 (7): 424-426. doi: 10.1049/el.2017.4725
  • [25] Sobral A, Vacavant A. A comprehensive review of background subtraction algorithms evaluated with synthetic and real videos. Computer Vision and Image Understanding 2014; 122: 4-21.
  • [26] Zivkovic Z. Improved adaptive Gaussian mixture model for background subtraction. In: International Conference on Pattern Recognition (ICPR); Cambridge, UK; 2004. pp. 28-31.
  • [27] Tabak MA, Norouzzadeh MS, Wolfson DW, et al. Machine learning to classify animal species in camera trap images: Applications in ecology. Methods in Ecology and Evolution 2019; 10 (4): 585-590.
  • [28] Ju C, Bibaut A, van der Laan MJ. The relative performance of ensemble methods with deep convolutional neural networks for image classification. Journal of Applied Statistics 2018; 45 (15): 2800-2818.
  • [29] Islam J, Zhang Y. An ensemble of deep convolutional neural networks for Alzheimer’s disease detection and classification. In: Machine Learning for Health Workshop at Neural Information Processing Systems (NIPS); Long Beach, CA, USA; 2017.
  • [30] Swanson A, Kosmala M, Lintott C, Simpson R, Smith A, Packer C. Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna. Scientific Data 2015; 2:150026. doi: 10.1038/sdata.2015.26
APA: TEKELİ U, BAŞTANLAR Y (2019). Elimination of useless images from raw camera-trap data. Turkish Journal of Electrical Engineering and Computer Sciences, 27(4), 2395-2411. doi: 10.3906/elk-1808-130
Chicago: TEKELİ, Ulaş, and BAŞTANLAR, Yalın. "Elimination of useless images from raw camera-trap data." Turkish Journal of Electrical Engineering and Computer Sciences 27, no. 4 (2019): 2395-2411. doi: 10.3906/elk-1808-130
MLA: TEKELİ, Ulaş, and BAŞTANLAR, Yalın. "Elimination of useless images from raw camera-trap data." Turkish Journal of Electrical Engineering and Computer Sciences, vol. 27, no. 4, 2019, pp. 2395-2411. doi: 10.3906/elk-1808-130
AMA: TEKELİ U, BAŞTANLAR Y. Elimination of useless images from raw camera-trap data. Turkish Journal of Electrical Engineering and Computer Sciences. 2019; 27(4): 2395-2411. doi: 10.3906/elk-1808-130
Vancouver: TEKELİ U, BAŞTANLAR Y. Elimination of useless images from raw camera-trap data. Turkish Journal of Electrical Engineering and Computer Sciences. 2019; 27(4): 2395-2411. doi: 10.3906/elk-1808-130
IEEE: TEKELİ U, BAŞTANLAR Y, "Elimination of useless images from raw camera-trap data," Turkish Journal of Electrical Engineering and Computer Sciences, vol. 27, no. 4, pp. 2395-2411, 2019. doi: 10.3906/elk-1808-130
ISNAD: TEKELİ, Ulaş - BAŞTANLAR, Yalın. "Elimination of useless images from raw camera-trap data". Turkish Journal of Electrical Engineering and Computer Sciences 27/4 (2019), 2395-2411. https://doi.org/10.3906/elk-1808-130