Agouti: A platform for processing and archiving of camera trap images

Author(s):  
Jim Casaer ◽  
Tanja Milotic ◽  
Yorick Liefting ◽  
Peter Desmet ◽  
Patrick Jansen

Camera traps placed in the field photograph warm-bodied animals that pass in front of an infrared sensor. The imagery represents a rich source of data on mammals larger than ~200 grams, providing information at the level of species and communities. Camera-trap surveys generate observations of specific mammals at a certain location and time, including photo evidence that experts can evaluate to map species distribution patterns. The imagery also provides information on the species composition of local communities, identifying which species co-occur and in what proportions. Moreover, the images contain information on activity patterns and other interesting aspects of animal behaviour. Because surveys can be standardized relatively easily, camera traps are well suited for documenting shifts in behaviour, distribution and community composition, for example in response to climate and land-use change. Imagery from camera traps can thus serve as a baseline for subsequent surveys. In less than two decades, camera traps have become the standard tool for surveying mammals. They are simple to use and non-invasive, requiring no special permits. As a consequence, they are widely used by professionals and hobbyists alike. Together, tens of thousands of users have the potential to form a huge sensor network. Unfortunately, however, the imagery and data collected are currently rarely integrated. Rather, they are lost at a massive scale. Users tend to retain only a subset of the photos and discard the rest, or the material ends up on an external hard disk that will at some point fail or be erased, as these scientific data tend to be used only within the scope of specific projects. Very little of this wealth of material becomes available for scientific research and monitoring. Moreover, joint projects are rare and there is little coordination between camera-trap users.
A solution to this problem is provided by Agouti, a platform for the organization, processing and storage of camera-trap imagery (www.agouti.eu). The aim of Agouti is, on the one hand, to standardize and facilitate collaborative camera-trap surveys, and on the other hand to compile and secure imagery and data for scientific research and monitoring by encouraging users to share their material. Agouti provides an interface that allows users to collaborate on projects, organize and manage their surveys, upload and store imagery, and annotate images with species identifications and characteristics. Images can also be annotated through basic image recognition and crowdsourcing via a connection with the citizen science platform Zooniverse, which creates the potential to reach new audiences. Exporting data and imagery in the Camera Trap Metadata Standard (Forrester et al. 2016) will be supported in the near future. This will allow data to be archived outside of Agouti in research repositories such as Zenodo and, through further mapping to Darwin Core, to be made discoverable on the Global Biodiversity Information Facility (GBIF). Agouti provides both professionals and the public with a practical solution for retaining camera-trap surveys and simultaneously engages people in contributing data to science in a standardized and organized manner, to the benefit of science and conservation.
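The Darwin Core mapping mentioned above can be illustrated with a minimal sketch. The Darwin Core term names used here (scientificName, eventDate, decimalLatitude, decimalLongitude, basisOfRecord) are standard terms, but the input record layout and the helper function are hypothetical, not Agouti's actual export code:

```python
# Hypothetical internal record layout; the Darwin Core term names on
# the right-hand side are standard terms, the mapping itself is a sketch.
def to_darwin_core(obs):
    """Map one camera-trap observation to a Darwin Core occurrence row."""
    return {
        "scientificName": obs["species"],
        "eventDate": obs["timestamp"],          # ISO 8601 date/time
        "decimalLatitude": obs["lat"],
        "decimalLongitude": obs["lon"],
        "basisOfRecord": "MachineObservation",  # standard value for sensor data
        "occurrenceRemarks": "Camera trap deployment " + obs["deployment_id"],
    }

row = to_darwin_core({
    "species": "Vulpes vulpes",
    "timestamp": "2019-05-04T21:13:00Z",
    "lat": 51.08, "lon": 3.73,
    "deployment_id": "dep-042",
})
```

Once records are in this flat term-per-column shape, publishing to GBIF is a matter of packaging them as a Darwin Core Archive.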

2019 ◽  
Author(s):  
Sadoune Ait Kaci Azzou ◽  
Liam Singer ◽  
Thierry Aebischer ◽  
Madleina Caduff ◽  
Beat Wolf ◽  
...  

Summary: Camera traps and acoustic recording devices are essential tools to quantify the distribution, abundance and behavior of mobile species. Varying detection probabilities among device locations must be accounted for when analyzing such data, which is generally done using occupancy models. We introduce a Bayesian Time-dependent Observation Model for Camera Trap data (Tomcat), suited to estimating relative event densities in space and time. Tomcat allows us to learn about the environmental requirements and daily activity patterns of species while accounting for imperfect detection. It further implements a sparse model that deals well with a large number of potentially highly correlated environmental variables. By integrating both spatial and temporal information, we extend the notion of the overlap coefficient between species to time and space to study niche partitioning. We illustrate the power of Tomcat through an application to camera-trap data of eight sympatrically occurring duiker (Cephalophinae) species in the savanna-rainforest ecotone in the Central African Republic and show that most species pairs show little overlap. Exceptions are those for which one species is very rare, likely as a result of direct competition.
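The coefficient of overlap between two species' daily activity patterns can be sketched with the classic estimator Δ̂ = ∫ min(f, g), where f and g are circular kernel density estimates of detection times. This is a minimal frequentist sketch, not Tomcat's Bayesian model, and the bandwidth and kernel choices are assumptions:

```python
import numpy as np

def activity_overlap(times_a, times_b, grid_points=288, bandwidth=0.3):
    """Coefficient of overlap between two daily activity patterns.

    times_a, times_b: detection times mapped onto the circle (radians,
    0..2*pi). Returns Delta-hat = integral of min(f, g), where f and g
    are von Mises kernel density estimates; 0 = no overlap, 1 = identical.
    """
    grid = np.linspace(0.0, 2.0 * np.pi, grid_points, endpoint=False)
    step = 2.0 * np.pi / grid_points

    def kde(times):
        # von Mises kernel; kappa = 1/bandwidth**2 is a common choice.
        kappa = 1.0 / bandwidth ** 2
        d = grid[:, None] - np.asarray(times)[None, :]
        dens = np.exp(kappa * np.cos(d)).sum(axis=1)
        return dens / (dens.sum() * step)  # normalise to integrate to 1

    f, g = kde(times_a), kde(times_b)
    return float(np.minimum(f, g).sum() * step)
```

A strictly diurnal and a strictly nocturnal species yield Δ̂ near 0; two species with the same activity curve yield Δ̂ near 1.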


2020 ◽  
Vol 47 (4) ◽  
pp. 326 ◽  
Author(s):  
Harry A. Moore ◽  
Jacob L. Champney ◽  
Judy A. Dunlop ◽  
Leonie E. Valentine ◽  
Dale G. Nimmo

Abstract. Context: Estimating animal abundance often relies on being able to identify individuals; however, this can be challenging, especially when applied to large animals that are difficult to trap and handle. Camera traps have provided a non-invasive alternative by using natural markings to individually identify animals within image data. Although camera traps have been used to individually identify mammals, they are yet to be widely applied to other taxa, such as reptiles. Aims: We assessed the capacity of camera traps to provide images that allow for individual identification of the world's fourth-largest lizard species, the perentie (Varanus giganteus), and demonstrate other basic morphological and behavioural data that can be gleaned from camera-trap images. Methods: Vertically orientated cameras were deployed at 115 sites across a 10,000 km2 area in north-western Australia for an average of 216 days. We used spot patterning located on the dorsal surface of perenties to identify individuals from camera-trap imagery, with the assistance of freely available spot ID software. We also measured snout-to-vent length (SVL) by using image-analysis software, and collected image time-stamp data to analyse temporal activity patterns. Results: Ninety-two individuals were identified, and individuals were recorded moving distances of up to 1975 m. Confidence in identification accuracy was generally high (91%), and estimated SVL measurements varied by an average of 6.7% (min = 1.8%, max = 21.3%) of individual SVL averages. Larger perenties (SVL > 45 cm) were detected mostly between dawn and noon, and in the late afternoon and early evening, whereas small perenties (SVL < 30 cm) were rarely recorded in the evening. Conclusions: Camera traps can be used to individually identify large reptiles with unique markings, and can also provide data on movement, morphology and temporal activity. Accounting for uneven substrates under cameras could improve the accuracy of morphological estimates. Given that camera traps struggle to detect small, nocturnal reptiles, further research is required to examine whether cameras miss smaller individuals in the late afternoon and evening. Implications: Camera traps are increasingly being used to monitor reptile species. The ability to individually identify animals provides another tool for herpetological research worldwide.


Animals ◽  
2020 ◽  
Vol 10 (12) ◽  
pp. 2200
Author(s):  
Fructueux G. A. Houngbégnon ◽  
Daniel Cornelis ◽  
Cédric Vermeulen ◽  
Bonaventure Sonké ◽  
Stephan Ntie ◽  
...  

The duiker community in Central African rainforests includes a diversity of species that can coexist in the same area. The study of their activity patterns is needed to better understand habitat use or association between the species. Using camera traps, we studied the temporal activity patterns, and quantified for the first time the temporal overlap and spatial co-occurrence between species. Our results show that: (i) two species are strongly diurnal: Cephalophus leucogaster and Philantomba congica; (ii) two species are mostly diurnal: C. callipygus and C. nigrifrons; (iii) one species is strongly nocturnal: C. castaneus; and (iv) one species is mostly nocturnal: C. silvicultor. Analyses of temporal activities (for five species) identified four species pairs that overlapped highly (Δ̂ ≥ 0.80) and six pairs that overlapped weakly (Δ̂ between 0.06 and 0.35). Finally, co-occurrence tests reveal a truly random co-occurrence (p_lt > 0.05 and p_gt > 0.05) for six species pairs, and a positive co-occurrence (p_gt < 0.05) for four pairs. Positive co-occurrences are particularly noted for pairs formed by C. callipygus with the other species (except C. nigrifrons). These results are essential for a better understanding of the coexistence of duikers and the ecology of poorly known species (C. leucogaster and C. nigrifrons), and provide clarification on the activity patterns of C. silvicultor, which were subject to controversy. Camera traps thus proved to be a powerful tool for studying the activity patterns of free-ranging duiker populations.


2016 ◽  
Vol 32 (2) ◽  
pp. 170-174 ◽  
Author(s):  
Marcelo Lopes Rheingantz ◽  
Caroline Leuchtenberger ◽  
Carlos André Zucco ◽  
Fernando A.S. Fernandez

Abstract: Circadian use of time is an important, but often neglected, part of an animal's niche. We compared the activity patterns of the Neotropical otter Lontra longicaudis in two different areas in Brazil using camera traps placed at the entrance of holts. We obtained 58 independent photos in the Atlantic Forest (273 camera-trap-days) and 46 photos in the Pantanal (300 camera-trap-days). We observed different kernel density probabilities in these two areas (45.6% and 14.1% overlap between the 95% and 50% density isopleths, respectively). We observed plasticity in Neotropical otter activity behaviour, with different activity patterns in the two areas. In the Pantanal, the Neotropical otter selected daylight (Ivlev = 0.23) and avoided night (Ivlev = −0.44), while in the Atlantic Forest it selected dawn (Ivlev = 0.24) and night (Ivlev = 0.14), avoiding daylight (Ivlev = −0.33). We believe that this pattern can be due to human activity or shifts in prey activity.
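The Ivlev values reported above follow the electivity index E = (r - p)/(r + p). A minimal sketch, assuming r is the proportion of detections falling in a diel period and p is the proportion of the 24-h day that the period occupies (the exact period definitions used in the study are not given here):

```python
def ivlev_electivity(records_in_period, total_records, period_hours):
    """Ivlev's electivity index E = (r - p) / (r + p).

    r: proportion of detections in a diel period (e.g. daylight);
    p: proportion of the 24-h day that the period occupies.
    E ranges from -1 (complete avoidance) to +1 (exclusive selection);
    0 means the period is used in proportion to its availability.
    """
    r = records_in_period / total_records
    p = period_hours / 24.0
    return (r - p) / (r + p)
```

For example, 30 of 46 photos in an 11-hour daylight period gives a positive E (selection), while zero photos in any period gives E = -1 (complete avoidance).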


2015 ◽  
Vol 42 (1) ◽  
pp. 1 ◽  
Author(s):  
J. L. Read ◽  
A. J. Bengsen ◽  
P. D. Meek ◽  
K. E. Moseby

Context Automatically activated cameras (camera traps) and automated poison-delivery devices are increasingly being used to monitor and manage predators such as felids and canids. Maximising visitation rates to sentry positions enhances the efficacy of feral-predator management, especially for feral cats, which are typically less attracted to food-based lures than canids. Aims The influence of camera-trap placement and lures was investigated to determine optimal monitoring and control strategies for feral cats and other predators in two regions of semi-arid South Australia. Methods We compared autumn and winter capture rates, activity patterns and behaviours of cats, foxes and dingoes at different landscape elements and with different lures in three independent 6 km × 3 km grids of 18 camera-trap sites. Key results None of the visual, olfactory or audio lures increased recorded visitation rates by any predator, although an audio lure and a scent-based lure both elicited behavioural responses in predators. Cameras set on roads yielded an eight times greater capture rate for dingoes than did off-road cameras. Roads and resource points also yielded the highest capture rates for cats and foxes. All predators were less nocturnal in winter than in autumn, and fox detections at the Immarna site peaked in months when dingo and cat activity were lowest. Conclusions Monitoring and management programs for cats and other predators in arid Australia should focus on roads and resource points where predator activity is highest. Olfactory and auditory lures can elicit behavioural responses that render cats more susceptible to passive monitoring and control techniques. Dingo activity appeared to be inversely related to fox but not cat activity during our monitoring period. Implications Optimised management of feral cats in the Australian arid zone would benefit from site- and season-specific lure trials.


Author(s):  
Sara Beery ◽  
Dan Morris ◽  
Siyu Yang ◽  
Marcel Simon ◽  
Arash Norouzzadeh ◽  
...  

Camera traps are heat- or motion-activated cameras placed in the wild to monitor and investigate animal populations and behavior. They are used to locate threatened species, identify important habitats, monitor sites of interest, and analyze wildlife activity patterns. At present, the time required to manually review images severely limits productivity. Additionally, ~70% of camera trap images are empty, due to a high rate of false triggers. Previous work has shown good results on automated species classification in camera trap data (Norouzzadeh et al. 2018), but further analysis has shown that these results do not generalize to new cameras or new geographic regions (Beery et al. 2018). Additionally, these models will fail to recognize any species they were not trained on. In theory, it is possible to re-train an existing model in order to add missing species, but in practice, this is quite difficult and requires just as much machine learning expertise as training models from scratch. Consequently, very few organizations have successfully deployed machine learning tools for accelerating camera trap image annotation. We propose a different approach to applying machine learning to camera trap projects, combining a generalizable detector with project-specific classifiers. We have trained an animal detector that is able to find and localize (but not identify) animals, even species not seen during training, in diverse ecosystems worldwide. See Fig. 1 for examples of the detector run over camera trap data covering a diverse set of regions and species, unseen at training time. By first finding and localizing animals, we are able to: drastically reduce the time spent filtering empty images, and dramatically simplify the process of training species classifiers, because we can crop images to individual animals (and thus classifiers need only worry about animal pixels, not background pixels). 
With this detector model as a powerful new tool, we have established a modular pipeline for on-boarding new organizations and building project-specific image processing systems. We break our pipeline into four stages:

1. Data ingestion. First we transfer images to the cloud, either by uploading to a drop point or by mailing an external hard drive. Data comes in a variety of formats; we convert each data set to the COCO Camera Traps format, i.e. we create a JavaScript Object Notation (JSON) file that encodes the annotations and the image locations within the organization's file structure.

2. Animal detection. We next run our (generic) animal detector on all the images to locate animals. We have developed an infrastructure for efficiently running this detector on millions of images, dividing the load over multiple nodes. We find that a single detector works for a broad range of regions and species. If the detection results (as validated by the organization) are not sufficiently accurate, it is possible to collect annotations for a small set of the organization's images and fine-tune the detector. Typically these annotations would be fed back into a new version of the general detector, improving results for subsequent projects.

3. Species classification. Using species labels provided by the organization, we train a (project-specific) classifier on the cropped-out animals.

4. Applying the system to new data. We use the general detector and the project-specific classifier to power tools that facilitate accelerated verification and image review, e.g. visualizing the detections and selecting images for review based on model confidence.
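The JSON file produced in the ingestion stage can be sketched as follows. The top-level keys (info, images, annotations, categories) follow the COCO Camera Traps layout, but all file names, IDs, locations and species labels here are hypothetical:

```python
import json

# A minimal COCO Camera Traps-style export; values are illustrative only.
coco_ct = {
    "info": {"version": "1.0", "description": "Example survey export"},
    "images": [
        {"id": "img_0001", "file_name": "site_A/IMG_0001.JPG",
         "location": "site_A", "datetime": "2019-06-01 06:32:00"},
    ],
    "annotations": [
        # One annotation per detected animal; bbox is [x, y, width, height]
        # in pixels, as produced by the animal detector.
        {"id": "ann_0001", "image_id": "img_0001",
         "category_id": 1, "bbox": [410, 220, 350, 280]},
    ],
    "categories": [
        {"id": 0, "name": "empty"},  # convention: 0 is reserved for empty images
        {"id": 1, "name": "animal"},
    ],
}

with open("dataset_coco_ct.json", "w") as f:
    json.dump(coco_ct, f, indent=2)
```

Keeping every data set in one schema is what lets the same detector infrastructure and training scripts run unchanged across projects.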
The aim of this presentation is to present a new approach to structuring camera trap projects, and to formalize discussion around the steps that are required to successfully apply machine learning to camera trap images. The work we present is available at http://github.com/microsoft/cameratraps, and we welcome new collaborating organizations.


2019 ◽  
Vol 11 (4) ◽  
pp. 13478-13491
Author(s):  
Karen Anne Jeffers ◽  
Adul ◽  
Susan Mary Cheyne

We present an update on the photographic detections from camera traps and the activity patterns of Borneo's four small cats, namely, the Sunda Leopard Cat Prionailurus javanensis, Flat-headed Cat P. planiceps, Marbled Cat Pardofelis marmorata, and Bay Cat Catopuma badia, at two sites in Central Kalimantan, Indonesia. Camera-trap survey data from 10 years (2008–2018) at the first site, Sebangau, provide details about the temporal partitioning of these small cats from each other, but overlap with the Sunda Clouded Leopard Neofelis diardi. The activity of the Flat-headed Cat was higher after midnight, and that of the Leopard Cat was higher at night with no clear preference before or after midnight. The Marbled Cat is predominantly diurnal, but the remaining three cats have flexible activity periods. While limited data are available from Rungan, the second site, we confirmed the presence of all four small cat species found on Borneo, though we have insufficient data to comment on the Bay Cat. The cat sightings, however, are intermittent and may reflect the unprotected status of this forest. Leopard Cats appear relatively unaffected by habitat disturbance, based on encounter rates on camera traps. Conservationists, both NGOs and the government, must pay particular attention to specialists like the Flat-headed Cat and Bay Cat when assessing habitat suitability for long-term cat conservation.


2020 ◽  
Vol 42 (3) ◽  
pp. 312 ◽  
Author(s):  
Christopher Davies ◽  
Wendy Wright ◽  
Fiona E. Hogan ◽  
Hugh Davies

Introduced sambar deer (Rusa unicolor) are increasing in abundance and distribution across much of south-eastern Australia and causing damage to native ecosystems. However, the current paucity of knowledge surrounding many aspects of sambar deer ecology is limiting our capacity to make informed management decisions, and properly gauge the extent of deer impacts. Here we investigate correlates of sambar deer detectability and describe activity patterns of sambar deer in Baw Baw National Park (BBNP) to inform control operations. Camera traps were deployed in BBNP between October and December 2016. We used an occupancy modelling framework to investigate sambar deer detectability and camera trap record time stamps to determine sambar deer activity patterns. Sambar deer were found to be significantly more detectable near roads and in areas of sparse tree density and displayed strong crepuscular activity patterns. Control operations carried out along roads at dawn and dusk could be effective, at least in the short term. Likewise, aerial culling could be an effective control option for sambar deer populations in BBNP. This study highlights the utility of camera trap data to inform the application of control operations for cryptic invasive species.


2019 ◽  
Vol 41 (2) ◽  
pp. 283 ◽  
Author(s):  
Stephanie K. Courtney Jones ◽  
Katarina M. Mikac

Activity levels of spotted-tailed quolls were investigated using camera traps over 12 months. There were 33 independent camera-trap photos, with 17 individual quolls identified. Latency to initial detection was 40 days. Quolls were nocturnal/crepuscular, being active for 35% of the day on days when they were detected. The highest activity levels were recorded in summer.


2018 ◽  
Vol 45 (8) ◽  
pp. 706 ◽  
Author(s):  
Helen R. Morgan ◽  
Guy Ballard ◽  
Peter J. S. Fleming ◽  
Nick Reid ◽  
Remy Van der Ven ◽  
...  

Context When measuring grazing impacts of vertebrates, the density of animals and time spent foraging are important. Traditionally, dung pellet counts are used to index macropod grazing density, and a direct relationship between herbivore density and foraging impact is assumed. However, rarely are pellet deposition rates measured or compared with camera-trap indices. Aims The aims were to pilot an efficient and reliable camera-trapping method for monitoring macropod grazing density and activity patterns, and to contrast pellet counts with macropod counts from camera trapping, for estimating macropod grazing density. Methods Camera traps were deployed on stratified plots in a fenced enclosure containing a captive macropod population, and the experiment was repeated in the same season in the following year after population reduction. Camera-based macropod counts were compared with pellet counts, and pellet deposition rates were estimated using both datasets. Macropod frequency was estimated, activity patterns were developed, and the variability between resting and grazing plots and the two estimates of macropod density was investigated. Key results Camera-trap grazing density indices initially correlated well with pellet count indices (r2 = 0.86), but were less reliable between years. Site stratification enabled a significant relationship to be identified between camera-trap counts and pellet counts in grazing plots. Camera-trap indices were consistent for estimating grazing density in both surveys but were not useful for estimating absolute abundance in this study. Conclusions Camera trapping was efficient and reliable for estimating macropod activity patterns. Although significant, the relationship between pellet count indices and macropod grazing density based on camera-trapping indices was not strong; this was due to variability in macropod pellet deposition rates over different years.
Time-lapse camera imagery has potential for simultaneously assessing herbivore foraging activity budgets with grazing densities and vegetation change. Further work is required to refine the use of camera-trapping indices for estimation of absolute abundance. Implications Time-lapse camera trapping and site-stratified sampling allow concurrent assessment of grazing density and grazing behaviour at plot and landscape scale.

