Estimating macropod grazing density and defining activity patterns using camera-trap image analysis

2018 ◽  
Vol 45 (8) ◽  
pp. 706 ◽  
Author(s):  
Helen R. Morgan ◽  
Guy Ballard ◽  
Peter J. S. Fleming ◽  
Nick Reid ◽  
Remy Van der Ven ◽  
...  

Context When measuring grazing impacts of vertebrates, the density of animals and time spent foraging are important. Traditionally, dung pellet counts are used to index macropod grazing density, and a direct relationship between herbivore density and foraging impact is assumed. However, rarely are pellet deposition rates measured or compared with camera-trap indices. Aims The aims were to pilot an efficient and reliable camera-trapping method for monitoring macropod grazing density and activity patterns, and to contrast pellet counts with macropod counts from camera trapping, for estimating macropod grazing density. Methods Camera traps were deployed on stratified plots in a fenced enclosure containing a captive macropod population and the experiment was repeated in the same season in the following year after population reduction. Camera-based macropod counts were compared with pellet counts and pellet deposition rates were estimated using both datasets. Macropod frequency was estimated, activity patterns developed, and the variability between resting and grazing plots and the two estimates of macropod density was investigated. Key Results Camera-trap grazing density indices initially correlated well with pellet count indices (r² = 0.86), but were less reliable between years. Site stratification enabled a significant relationship to be identified between camera-trap counts and pellet counts in grazing plots. Camera-trap indices were consistent for estimating grazing density in both surveys but were not useful for estimating absolute abundance in this study. Conclusions Camera trapping was efficient and reliable for estimating macropod activity patterns. Although significant, the relationship between pellet count indices and macropod grazing density based on camera-trapping indices was not strong; this was due to variability in macropod pellet deposition rates over different years.
Time-lapse camera imagery has potential for simultaneously assessing herbivore foraging activity budgets with grazing densities and vegetation change. Further work is required to refine the use of camera-trapping indices for estimation of absolute abundance. Implications Time-lapse camera trapping and site-stratified sampling allow concurrent assessment of grazing density and grazing behaviour at plot and landscape scale.
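The index calibration reported above is, at its core, a regression of one count index on another. As a minimal sketch (the per-plot counts below are hypothetical, not the study's data), the r² between camera-trap and pellet-count indices could be computed as:

```python
import numpy as np

def index_r2(camera_counts, pellet_counts):
    """Coefficient of determination (r^2) for a simple linear
    relationship between camera-trap and pellet-count indices."""
    x = np.asarray(camera_counts, dtype=float)
    y = np.asarray(pellet_counts, dtype=float)
    r = np.corrcoef(x, y)[0, 1]  # Pearson correlation
    return r ** 2

# Hypothetical per-plot indices (illustrative only)
camera = [12, 30, 7, 22, 41, 16]
pellets = [45, 110, 30, 85, 150, 60]
print(round(index_r2(camera, pellets), 2))
```

Stratifying plots (e.g. grazing vs. resting) before computing such an index, as the study did, simply means running the regression within each stratum.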

Animals ◽  
2019 ◽  
Vol 9 (6) ◽  
pp. 388 ◽  
Author(s):  
D. J. Welbourne ◽  
A. W. Claridge ◽  
D. J. Paull ◽  
F. Ford

Camera-traps are used widely around the world to census a range of vertebrate fauna, particularly mammals but also other groups including birds, as well as snakes and lizards (squamates). In an attempt to improve the reliability of camera-traps for censusing squamates, we examined whether programming options involving time-lapse capture of images increased detections. This was compared to detections by camera-traps set to trigger by the standard passive infrared sensor setting (PIR), and camera-traps set to take images using time lapse in combination with PIR. We also examined the effect of camera-trap focal length on the ability to tell different species of small squamate apart. In a series of side-by-side field comparisons, camera-traps programmed to take images at standard intervals, as well as through routine triggering of the PIR, captured more images of squamates than camera-traps using the PIR sensor setting alone or time lapse alone. Similarly, camera traps with their lens focal length set at closer distances improved our ability to discriminate species of small squamates. With these minor alterations to camera-trap programming and hardware, the quantity and quality of squamate detections was markedly better. These gains provide a platform for exploring other aspects of camera-trapping for squamates that might lead to even greater survey advances, bridging the gap in knowledge of this otherwise poorly known faunal group.


2019 ◽  
Author(s):  
Sadoune Ait Kaci Azzou ◽  
Liam Singer ◽  
Thierry Aebischer ◽  
Madleina Caduff ◽  
Beat Wolf ◽  
...  

Summary: Camera traps and acoustic recording devices are essential tools to quantify the distribution, abundance and behavior of mobile species. Varying detection probabilities among device locations must be accounted for when analyzing such data, which is generally done using occupancy models. We introduce a Bayesian Time-dependent Observation Model for Camera Trap data (Tomcat), suited to estimating relative event densities in space and time. Tomcat allows one to learn about the environmental requirements and daily activity patterns of species while accounting for imperfect detection. It further implements a sparse model that deals well with a large number of potentially highly correlated environmental variables. By integrating both spatial and temporal information, we extend the notion of the overlap coefficient between species to time and space to study niche partitioning. We illustrate the power of Tomcat through an application to camera-trap data of eight sympatrically occurring duiker Cephalophinae species in the savanna-rainforest ecotone of the Central African Republic and show that most species pairs show little overlap. Exceptions are those for which one species is very rare, likely as a result of direct competition.


2020 ◽  
Vol 47 (4) ◽  
pp. 326 ◽  
Author(s):  
Harry A. Moore ◽  
Jacob L. Champney ◽  
Judy A. Dunlop ◽  
Leonie E. Valentine ◽  
Dale G. Nimmo

Abstract Context Estimating animal abundance often relies on being able to identify individuals; however, this can be challenging, especially when applied to large animals that are difficult to trap and handle. Camera traps have provided a non-invasive alternative by using natural markings to individually identify animals within image data. Although camera traps have been used to individually identify mammals, they are yet to be widely applied to other taxa, such as reptiles. Aims We assessed the capacity of camera traps to provide images that allow for individual identification of the world’s fourth-largest lizard species, the perentie (Varanus giganteus), and demonstrate other basic morphological and behavioural data that can be gleaned from camera-trap images. Methods Vertically orientated cameras were deployed at 115 sites across a 10,000-km2 area in north-western Australia for an average of 216 days. We used spot patterning located on the dorsal surface of perenties to identify individuals from camera-trap imagery, with the assistance of freely available spot ID software. We also measured snout-to-vent length (SVL) by using image-analysis software, and collected image time-stamp data to analyse temporal activity patterns. Results Ninety-two individuals were identified, and individuals were recorded moving distances of up to 1975 m. Confidence in identification accuracy was generally high (91%), and estimated SVL measurements varied by an average of 6.7% (min = 1.8%, max = 21.3%) of individual SVL averages. Larger perenties (SVL > 45 cm) were detected mostly between dawn and noon, and in the late afternoon and early evening, whereas small perenties (SVL < 30 cm) were rarely recorded in the evening. Conclusions Camera traps can be used to individually identify large reptiles with unique markings, and can also provide data on movement, morphology and temporal activity. Accounting for uneven substrates under cameras could improve the accuracy of morphological estimates. Given that camera traps struggle to detect small, nocturnal reptiles, further research is required to examine whether cameras miss smaller individuals in the late afternoon and evening. Implications Camera traps are increasingly being used to monitor reptile species. The ability to individually identify animals provides another tool for herpetological research worldwide.


Animals ◽  
2020 ◽  
Vol 10 (12) ◽  
pp. 2200
Author(s):  
Fructueux G. A. Houngbégnon ◽  
Daniel Cornelis ◽  
Cédric Vermeulen ◽  
Bonaventure Sonké ◽  
Stephan Ntie ◽  
...  

The duiker community in Central African rainforests includes a diversity of species that can coexist in the same area. The study of their activity patterns is needed to better understand habitat use or association between the species. Using camera traps, we studied the temporal activity patterns, and quantified for the first time the temporal overlap and spatial co-occurrence between species. Our results show that: (i) two species are strongly diurnal: Cephalophus leucogaster and Philantomba congica; (ii) two species are mostly diurnal: C. callipygus and C. nigrifrons; (iii) one species is strongly nocturnal: C. castaneus; and (iv) one species is mostly nocturnal: C. silvicultor. Analyses of temporal activities (for five species) identified four species pairs that overlapped highly (Δ̂ ≥ 0.80), and six pairs that overlapped weakly (Δ̂ between 0.06 and 0.35). Finally, co-occurrence tests reveal a truly random co-occurrence (p_lt > 0.05 and p_gt > 0.05) for six species pairs, and a positive co-occurrence (p_gt < 0.05) for four pairs. Positive co-occurrences are particularly noted for pairs formed by C. callipygus with the other species (except C. nigrifrons). These results are essential for a better understanding of the coexistence of duikers and the ecology of poorly known species (C. leucogaster and C. nigrifrons), and provide clarification on the activity patterns of C. silvicultor, which was subject to controversy. Camera traps thus proved to be a powerful tool for studying the activity patterns of free-ranging duiker populations.
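The overlap statistic Δ̂ used above is conventionally estimated as the area under the minimum of two kernel density estimates of diel activity. A minimal sketch, assuming detection times expressed in radians on [0, 2π) and a von Mises (circular) kernel; the simulated detection times are illustrative, not the duiker data:

```python
import numpy as np

def overlap_delta(times_a, times_b, bandwidth=0.5, grid_size=256):
    """Coefficient of overlap (Delta-hat) between two diel activity
    patterns: the area under the pointwise minimum of two circular
    kernel density estimates. Times are radians on [0, 2*pi)."""
    grid = np.linspace(0, 2 * np.pi, grid_size, endpoint=False)
    dx = 2 * np.pi / grid_size

    def vonmises_kde(times):
        # Circular KDE with a von Mises kernel, concentration 1/bw^2
        kappa = 1.0 / bandwidth ** 2
        k = np.exp(kappa * np.cos(grid[:, None] - np.asarray(times)[None, :]))
        k /= 2 * np.pi * np.i0(kappa)
        dens = k.mean(axis=1)
        return dens / (dens.sum() * dx)  # normalise to integrate to 1

    fa, fb = vonmises_kde(times_a), vonmises_kde(times_b)
    return np.minimum(fa, fb).sum() * dx

# Hypothetical detection times: one diurnal, one nocturnal species
rng = np.random.default_rng(0)
diurnal = rng.vonmises(mu=np.pi, kappa=4, size=100) % (2 * np.pi)
nocturnal = rng.vonmises(mu=0.0, kappa=4, size=100) % (2 * np.pi)
print(round(overlap_delta(diurnal, nocturnal), 2))  # low overlap expected
```

In practice the estimator variants (Δ̂1, Δ̂4, etc.) differ mainly in bandwidth selection; the sketch above fixes the bandwidth for simplicity.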


2016 ◽  
Vol 32 (2) ◽  
pp. 170-174 ◽  
Author(s):  
Marcelo Lopes Rheingantz ◽  
Caroline Leuchtenberger ◽  
Carlos André Zucco ◽  
Fernando A.S. Fernandez

Abstract: Circadian use of time is an important, but often neglected, part of an animal's niche. We compared the activity patterns of the Neotropical otter Lontra longicaudis in two different areas in Brazil using camera traps placed at the entrance of holts. We obtained 58 independent photos in the Atlantic Forest (273 camera-trap-days) and 46 photos in the Pantanal (300 camera-trap-days). We observed different kernel density probabilities in these two areas (45.6% and 14.1% overlap between the 95% and 50% density isopleths, respectively). We observed plasticity in Neotropical otter activity behaviour, with different activity patterns in the two areas. In the Pantanal, the Neotropical otter selected daylight (Ivlev = 0.23) and avoided night (Ivlev = −0.44), while in the Atlantic Forest it selected dawn (Ivlev = 0.24) and night (Ivlev = 0.14), avoiding daylight (Ivlev = −0.33). We believe that this pattern can be due to human activity or shifts in prey activity.
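The Ivlev electivity values reported above follow the standard form E = (r - p)/(r + p), where r is the proportion of detections falling in a diel period and p is the proportion of the day that period occupies. A minimal sketch; the example counts are illustrative, not the study's detection data:

```python
def ivlev(records_in_period, total_records, period_hours, total_hours=24.0):
    """Ivlev's electivity index E = (r - p) / (r + p).
    r: proportion of detections in the diel period.
    p: proportion of the day the period occupies.
    E > 0 indicates selection of the period, E < 0 avoidance."""
    r = records_in_period / total_records
    p = period_hours / total_hours
    return (r - p) / (r + p)

# Hypothetical example: 30 of 46 detections during 12 daylight hours
print(round(ivlev(30, 46, 12), 2))  # -> 0.13, mild selection of daylight
```

E ranges from −1 (complete avoidance) to +1 (exclusive use), with 0 meaning use in proportion to availability.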


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e10684
Author(s):  
Tamara I. Potter ◽  
Aaron C. Greenville ◽  
Christopher R. Dickman

Invertebrates dominate the animal world in terms of abundance, diversity and biomass, and play critical roles in maintaining ecosystem function. Despite their obvious importance, disproportionate research attention remains focused on vertebrates, with knowledge and understanding of invertebrate ecology still lacking. Owing to their inherent advantages, the use of camera traps in ecology has risen dramatically over the last three decades, especially for research on mammals. However, few studies have used cameras to reliably detect fauna such as invertebrates or used cameras to examine specific aspects of invertebrate ecology. Previous research investigating the interaction between wolf spiders (Lycosidae: Lycosa spp.) and the lesser hairy-footed dunnart (Sminthopsis youngsoni) found that camera traps provide a viable method for examining temporal activity patterns and interactions between these species. Here, we re-examine lycosid activity to determine whether these patterns vary with different environmental conditions, specifically between burned and unburned habitats and the crests and bases of sand dunes, and whether cameras are able to detect other invertebrate fauna. Twenty-four cameras were deployed over a 3-month period in an arid region in central Australia, capturing 2,356 confirmed images of seven invertebrate taxa, including 155 time-lapse images of lycosids. Overall, there was no clear difference in temporal activity with respect to dune position or fire history, but twice as many lycosids were detected in unburned compared to burned areas. Despite some limitations, camera traps appear to have considerable utility as a tool for determining the diel activity patterns and habitat use of larger arthropods such as wolf spiders, and we recommend their greater uptake in future.


2016 ◽  
Vol 38 (1) ◽  
pp. 44 ◽  
Author(s):  
Paul D. Meek ◽  
Karl Vernes

Camera trapping is increasingly recognised as a survey tool akin to conventional small mammal survey methods such as Elliott trapping. While there are many cost and resource advantages of using camera traps, their adoption should not compromise scientific rigour. Rodents are a common element of most small mammal surveys. In 2010 we deployed camera traps to measure whether the endangered Hastings River mouse (Pseudomys oralis) could be detected and identified with an acceptable level of precision by camera traps when similar-looking sympatric small mammals were present. A comparison of three camera trap models revealed that camera traps can detect a wide range of small mammals, although white flash colour photography was necessary to capture characteristic features of morphology. However, the accurate identification of some small mammals, including P. oralis, was problematic; we conclude therefore that camera traps alone are not appropriate for P. oralis surveys, even though they might at times successfully detect them. We discuss the need for refinement of the methodology, further testing of camera trap technology, and the development of computer-assisted techniques to overcome problems associated with accurate species identification.


2015 ◽  
Vol 42 (1) ◽  
pp. 1 ◽  
Author(s):  
J. L. Read ◽  
A. J. Bengsen ◽  
P. D. Meek ◽  
K. E. Moseby

Context Automatically activated cameras (camera traps) and automated poison-delivery devices are increasingly being used to monitor and manage predators such as felids and canids. Maximising visitation rates to sentry positions enhances the efficacy of feral-predator management, especially for feral cats, which are typically less attracted to food-based lures than canids. Aims The influence of camera-trap placement and lures was investigated to determine optimal monitoring and control strategies for feral cats and other predators in two regions of semi-arid South Australia. Methods We compared autumn and winter capture rates, activity patterns and behaviours of cats, foxes and dingoes at different landscape elements and with different lures in three independent 6 km × 3 km grids of 18 camera-trap sites. Key results No visual, olfactory or audio lure increased recorded visitation rates by any predator, although an audio lure and a scent-based lure both elicited behavioural responses in predators. Cameras set on roads yielded an eight times greater capture rate for dingoes than did off-road cameras. Roads and resource points also yielded the highest captures of cats and foxes. All predators were less nocturnal in winter than in autumn, and fox detections at the Immarna site peaked in months when dingo and cat activity were lowest. Conclusions Monitoring and management programs for cats and other predators in arid Australia should focus on roads and resource points where predator activity is highest. Olfactory and auditory lures can elicit behavioural responses that render cats more susceptible to passive monitoring and control techniques. Dingo activity appeared to be inversely related to fox but not cat activity during our monitoring period. Implications Optimised management of feral cats in the Australian arid zone would benefit from site- and season-specific lure trials.


2015 ◽  
Vol 37 (1) ◽  
pp. 1 ◽  
Author(s):  
Paul D. Meek ◽  
Guy-Anthony Ballard ◽  
Karl Vernes ◽  
Peter J. S. Fleming

This paper provides an historical review of the technological evolution of camera trapping as a zoological survey tool in Australia. Camera trapping in Australia began in the 1950s when purpose-built remotely placed cameras were used in attempts to rediscover the thylacine (Thylacinus cynocephalus). However, camera traps did not appear in Australian research papers and Australasian conference proceedings until 1989–91, and usage became common only after 2008, with an exponential increase in usage since 2010. Initially, Australian publications under-reported camera trapping methods, often failing to provide fundamental details about deployment and use. However, rigour in reporting of key methods has increased during the recent widespread adoption of camera trapping. Our analysis also reveals a change in camera trap use in Australia, from simple presence–absence studies, to more theoretical and experimental approaches related to population ecology, behavioural ecology, conservation biology and wildlife management. Practitioners require further research to refine and standardise camera trap methods to ensure that unbiased and scientifically rigorous data are obtained from quantitative research. The recent change in emphasis of camera trapping research use is reflected in the decreasing range of camera trap models being used in Australian research. Practitioners are moving away from less effective models that have slow reaction times between detection and image capture, and inherent bias in detectability of fauna, to more expensive brands that offer faster speeds, greater functionality and more reliability.


Author(s):  
Sara Beery ◽  
Dan Morris ◽  
Siyu Yang ◽  
Marcel Simon ◽  
Arash Norouzzadeh ◽  
...  

Camera traps are heat- or motion-activated cameras placed in the wild to monitor and investigate animal populations and behavior. They are used to locate threatened species, identify important habitats, monitor sites of interest, and analyze wildlife activity patterns. At present, the time required to manually review images severely limits productivity. Additionally, ~70% of camera trap images are empty, due to a high rate of false triggers. Previous work has shown good results on automated species classification in camera trap data (Norouzzadeh et al. 2018), but further analysis has shown that these results do not generalize to new cameras or new geographic regions (Beery et al. 2018). Additionally, these models will fail to recognize any species they were not trained on. In theory, it is possible to re-train an existing model in order to add missing species, but in practice, this is quite difficult and requires just as much machine learning expertise as training models from scratch. Consequently, very few organizations have successfully deployed machine learning tools for accelerating camera trap image annotation. We propose a different approach to applying machine learning to camera trap projects, combining a generalizable detector with project-specific classifiers. We have trained an animal detector that is able to find and localize (but not identify) animals, even species not seen during training, in diverse ecosystems worldwide. See Fig. 1 for examples of the detector run over camera trap data covering a diverse set of regions and species, unseen at training time. By first finding and localizing animals, we are able to: drastically reduce the time spent filtering empty images, and dramatically simplify the process of training species classifiers, because we can crop images to individual animals (and thus classifiers need only worry about animal pixels, not background pixels). 
With this detector model as a powerful new tool, we have established a modular pipeline for on-boarding new organizations and building project-specific image processing systems. We break our pipeline into four stages:

1. Data ingestion. First we transfer images to the cloud, either by uploading to a drop point or by mailing an external hard drive. Data comes in a variety of formats; we convert each data set to the COCO Camera Traps format, i.e. we create a JavaScript Object Notation (JSON) file that encodes the annotations and the image locations within the organization’s file structure.
2. Animal detection. We next run our (generic) animal detector on all the images to locate animals. We have developed an infrastructure for efficiently running this detector on millions of images, dividing the load over multiple nodes. We find that a single detector works for a broad range of regions and species. If the detection results (as validated by the organization) are not sufficiently accurate, it is possible to collect annotations for a small set of their images and fine-tune the detector. Typically these annotations would be fed back into a new version of the general detector, improving results for subsequent projects.
3. Species classification. Using species labels provided by the organization, we train a (project-specific) classifier on the cropped-out animals.
4. Applying the system to new data. We use the general detector and the project-specific classifier to power tools facilitating accelerated verification and image review, e.g. visualizing the detections and selecting images for review based on model confidence.
This presentation aims to introduce a new approach to structuring camera-trap projects, and to formalize discussion around the steps required to successfully apply machine learning to camera-trap images. The work we present is available at http://github.com/microsoft/cameratraps, and we welcome new collaborating organizations.
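The data-ingestion stage above converts heterogeneous exports into a single COCO Camera Traps JSON file. A minimal sketch of building such a file; the top-level fields (info, images, categories, annotations) follow the published COCO Camera Traps convention, but the file names, IDs and labels below are hypothetical examples:

```python
import json

# Minimal COCO Camera Traps-style structure; the image and label
# records are hypothetical examples, not real survey data.
dataset = {
    "info": {"version": "1.0", "description": "Example camera-trap export"},
    "images": [
        {"id": "site01_img0001",
         "file_name": "site01/IMG_0001.JPG",   # path within the org's file structure
         "location": "site01",
         "datetime": "2019-06-01 05:43:12"},
    ],
    "categories": [
        {"id": 0, "name": "empty"},
        {"id": 1, "name": "animal"},
    ],
    "annotations": [
        {"id": "ann0001",
         "image_id": "site01_img0001",  # links the label to its image
         "category_id": 1},
    ],
}

with open("example_cct.json", "w") as fh:
    json.dump(dataset, fh, indent=2)
```

Keeping annotations linked to images by ID, rather than embedding labels in file names, is what lets later pipeline stages (detection, classification) consume every organization's data uniformly.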

