Semi-automated detection of tagged animals from camera trap images using artificial intelligence. Santangeli, A., Chen, Y., Boorman, M., Sales Ligero, S. and Albert García, G. 2022. Ibis. doi: 10.1111/ibi.13087 VIEW
Remote monitoring of wildlife has been greatly facilitated by technological advances over the past decades. Among these, camera traps represent a very popular non-invasive means of monitoring species presence and abundance (Wearn and Glover-Kapfer 2019). However, with the opportunity comes a challenge: camera traps typically gather massive numbers of photographs, which require an equally massive processing effort before the data can be used for science and conservation.
To aid the processing of large numbers of images, artificial intelligence can prove a very valuable tool. Artificial intelligence is becoming an integral part of modern society, and the number of applications that recognize individual human faces, objects and animals from photographs is growing exponentially (Willi et al. 2019). In ecology and conservation, artificial intelligence has even been developed to automatically identify Lapwing (Vanellus vanellus) nests from drone-borne thermal images (Santangeli et al. 2020).
So far, the identification of individually marked birds from camera trap images has received little research attention. This application is, however, highly relevant: population demography is typically studied by capturing and individually marking animals, which are then released and later resighted (an approach called Capture-Mark-Recapture, CMR). Among raptors, and especially vultures and condors, individuals are often marked with a cattle ear-tag applied to the wing and bearing a unique code. These tagged animals can later be resighted with the help of camera traps.
Figure 1 Lappet-faced Vultures captured by a camera trap in the Namib Desert, Namibia, bearing a patagial cattle ear-tag for individual identification. The square box around the tag marks the algorithm's detection, with the algorithm's confidence that the object is a tag shown as a percentage.
A few years ago, together with colleagues at Vultures Namibia, we aimed to quantify survival of a large raptor, the Lappet-faced Vulture (Torgos tracheliotos), by gathering Capture-Mark-Recapture data on a large number of tagged birds resighted with camera traps placed at water points (Santangeli et al. 2020). That project required a large manual effort to identify tagged animals and read their codes from hundreds of thousands of camera trap photos. Most of the effort went into filtering out the thousands of images capturing animals other than the target species.
Figure 2 A set of camera trap photographs of animals other than the target species (the Lappet-faced Vulture). Such images represented over 99% of the material and required a large manual filtering effort.
Therefore, in this latest study we applied artificial intelligence to streamline the processing of camera trap photographs of tagged birds. Specifically, we developed an algorithm that automatically identifies photographs containing a vulture bearing a tag and separates them from the rest (i.e. all irrelevant photographs, as in Figure 2).
We trained the algorithm on over 900 images of Lappet-faced Vultures bearing a tag. This step allows the algorithm to learn the relevant visual features of a tag, so that it can later detect tags in new photographs. Performance was good: the algorithm correctly classified images containing a tag in 95% of cases. Interestingly, classification accuracy was higher when the tag code in the photograph was readable than when it was not (as in Figure 1). This matters because the ultimate goal of these resighting efforts is to read the tag code and thereby identify individual marked birds.
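The filtering step described above can be sketched in a few lines: an object detector returns, for each image, a set of detections with confidence scores, and images are kept only if a "tag" is detected above some threshold. The function name, the threshold value and the example detections below are purely illustrative stand-ins for the trained model, not the study's actual code.

```python
# Minimal sketch of the semi-automated filtering step, assuming the trained
# detector has already been run and returns per-image "tag" confidence scores.

CONFIDENCE_THRESHOLD = 0.5  # assumed cut-off; the study's threshold may differ

# Hypothetical detector output: image name -> list of tag confidence scores
fake_detections = {
    "IMG_0001.jpg": [0.97],  # vulture with a clearly visible tag
    "IMG_0002.jpg": [],      # non-target animal at the water point, no tag
    "IMG_0003.jpg": [0.42],  # partially occluded tag, low confidence
}

def images_with_tag(detections, threshold=CONFIDENCE_THRESHOLD):
    """Return the images in which at least one detection exceeds the threshold."""
    return [name for name, scores in detections.items()
            if any(score >= threshold for score in scores)]

print(images_with_tag(fake_detections))  # only IMG_0001.jpg passes the filter
```

Only the retained images would then go to a human observer for reading the tag code, which is where the manual time savings reported below come from.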
We quantified the time saved by using the algorithm, compared with fully manual processing, at 11 to 24 full days for a single camera trap operating year-round and collecting 100 photos per day.
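As a back-of-the-envelope check on these numbers, the savings can be expressed per photograph. The calculation below assumes a "full day" means an 8-hour working day; that interpretation is our assumption, not something stated in the text.

```python
# Per-photo time savings implied by the reported figures, assuming
# a "full day" is an 8-hour working day (assumption for illustration).
PHOTOS_PER_DAY = 100
DAYS_PER_YEAR = 365
WORK_SECONDS_PER_DAY = 8 * 3600

photos_per_year = PHOTOS_PER_DAY * DAYS_PER_YEAR  # 36,500 photos

for saved_days in (11, 24):
    per_photo = saved_days * WORK_SECONDS_PER_DAY / photos_per_year
    print(f"{saved_days} days saved -> ~{per_photo:.0f} s saved per photo")
# 11 days -> ~9 s per photo; 24 days -> ~19 s per photo
```

Under that assumption, the algorithm saves roughly 9 to 19 seconds of manual inspection per photograph, which accumulates quickly across a multi-camera, multi-year deployment.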
Overall, this study underscores the value of artificial intelligence for processing big data recorded by camera traps, ultimately facilitating ecological studies and biodiversity conservation.
Wearn, O.R. and Glover-Kapfer, P. 2019. Snap happy: camera traps are an effective sampling tool when compared with alternative methods. Royal Society Open Science 6: 181748. VIEW
Willi, M., Pitman, R.T., Cardoso, A.W., et al. 2019. Identifying animal species in camera trap images using deep learning and citizen science. Methods in Ecology and Evolution 10: 80–91. VIEW
Santangeli, A., Chen, Y., Kluen, E., Chirumamilla, R., Tiainen, J. and Loehr, J. 2020. Integrating drone-borne thermal imaging with artificial intelligence to locate bird nests on agricultural land. Scientific Reports 10(1): 10993. VIEW
Santangeli, A., Pakanen, V.-M., Bridgeford, P., Boorman, M., Kolberg, H. and Sanz-Aguilar, A. 2020. The relative contribution of camera trap technology and citizen science for estimating survival of an endangered African vulture. Biological Conservation 246: 108593. VIEW
Top right: © vultures-namibia
If you want to write about your research in #theBOUblog, then please see here.