Fire frequency analysis in Portugal (1975–2005), using Landsat-based burnt area maps

2012 ◽  
Vol 21 (1) ◽  
pp. 48 ◽  
Author(s):  
Sofia L. J. Oliveira ◽  
José M. C. Pereira ◽  
João M. B. Carreiras

Fire frequency in 21 forest planning regions of Portugal during the period 1975–2005 was estimated from historical burnt area maps generated with semi-automatic classification of Landsat Thematic Mapper (TM) satellite imagery. Fire return interval distributions were modelled with the Weibull function and the estimated parameters were used to calculate regional mean, median and modal fire return intervals, as well as regional hazard functions. Arrangement of the available data into three different time series allowed for assessment of the effects of minimum mapping unit, time series length and use of censored data on the Weibull function parameter estimates. Varying the minimum mapping unit between 5 and 35 ha had a negligible effect on parameter estimates, whereas changing the time series length from 22 to 31 years substantially affected the estimates. However, the strongest effect was caused by censored data. Their exclusion led to substantial overestimation of fire frequency and of the dependence of burning probability on fuel age. We estimated a country-wide mean fire interval of 36 years and an annual burnt area of 1.2%. Regional variations in fire frequency descriptors were interpreted in terms of land cover and land use practices that affect the contemporary fire regime in Portugal.
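The regional descriptors reported above all follow from the fitted Weibull shape and scale. As a minimal sketch (synthetic intervals and illustrative parameters, not the paper's data), the mean, median and modal fire return intervals and the hazard function can be computed like this:

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

rng = np.random.default_rng(42)

# Synthetic fire return intervals (years), standing in for intervals
# derived from burnt area maps; shape and scale are illustrative only.
shape_true, scale_true = 1.4, 40.0
intervals = rng.weibull(shape_true, 5000) * scale_true

# Fit a two-parameter Weibull (location fixed at 0) by maximum likelihood.
c, loc, scale = stats.weibull_min.fit(intervals, floc=0)

# Fire-frequency descriptors derived from the fitted parameters.
mean_fri = scale * gamma(1 + 1 / c)            # mean fire return interval
median_fri = scale * np.log(2) ** (1 / c)      # median
modal_fri = scale * ((c - 1) / c) ** (1 / c) if c > 1 else 0.0  # mode

def weibull_hazard(t, c, scale):
    """Instantaneous burning probability as a function of fuel age t;
    increasing in t whenever the shape parameter c > 1."""
    return (c / scale) * (t / scale) ** (c - 1)
```

A full reproduction would also feed the right-censored intervals (fuel patches that never reburned within the observation window) into the likelihood; the abstract shows that excluding them substantially overestimates fire frequency.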

2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Dalton J. Hance ◽  
Katie M. Moriarty ◽  
Bruce A. Hollen ◽  
Russell W. Perry

Background: Studies of animal movement using location data often face two challenges. First, time series of animal locations are likely to arise from multiple behavioral states (e.g., directed movement, resting) that cannot be observed directly. Second, location data can be affected by measurement error, including failed location fixes. Simultaneously addressing both problems in a single statistical model is analytically and computationally challenging. To both separate behavioral states and account for measurement error, we used a two-stage modeling approach to identify resting locations of fishers (Pekania pennanti) based on GPS and accelerometer data.
Methods: We developed a two-stage modeling approach to estimate when and where GPS-collared fishers were resting for 21 separate collar deployments on 9 individuals in southern Oregon. For each deployment, we first fit independent hidden Markov models (HMMs) to the time series of accelerometer-derived activity measurements and apparent step lengths to identify periods of movement and resting. Treating the state assignments as given, we next fit a set of linear Gaussian state space models (SSMs) to estimate the location of each resting event.
Results: Parameter estimates were similar across collar deployments. The HMMs successfully identified periods of resting and movement, with posterior state assignment probabilities greater than 0.95 for 97% of all observations. On average, fishers were in the resting state 63% of the time. Rest events averaged 5 h (4.3 SD) and occurred most often at night. The SSMs allowed us to estimate 95% credible ellipses with a median area of 0.12 ha for 3772 unique rest events. We identified 1176 geographically distinct rest locations; 13% of locations were used on > 1 occasion and 5% were used by > 1 fisher. Females and males traveled an average of 6.7 (3.5 SD) and 7.7 (6.8 SD) km/day, respectively.
Conclusions: We demonstrated that if auxiliary data are available (e.g., accelerometer data), a two-stage approach can successfully resolve both problems of latent behavioral states and GPS measurement error. Our relatively simple two-stage method is repeatable, computationally efficient, and yields directly interpretable estimates of resting site locations that can be used to guide conservation decisions.
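Stage one of the approach described above, decoding resting versus moving states from an activity series, can be sketched with a hand-rolled Viterbi decoder for a two-state Gaussian-emission HMM. All parameters and the simulated data below are illustrative, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a two-state activity series: state 0 = resting (low activity),
# state 1 = moving (high activity). Transition matrix, means, and SDs
# are invented for illustration.
T = 500
trans = np.array([[0.95, 0.05], [0.10, 0.90]])
means, sds = np.array([0.5, 3.0]), np.array([0.4, 0.8])

states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=trans[states[t - 1]])
obs = rng.normal(means[states], sds[states])

def viterbi_gaussian(obs, trans, means, sds, init=(0.5, 0.5)):
    """Most likely state path for a 2-state Gaussian-emission HMM."""
    T = len(obs)
    # Per-observation log emission densities (constants cancel in Viterbi).
    logp = -0.5 * ((obs[:, None] - means) / sds) ** 2 - np.log(sds)
    delta = np.log(init) + logp[0]
    back = np.zeros((T, 2), dtype=int)
    for t in range(1, T):
        cand = delta[:, None] + np.log(trans)   # cand[i, j]: from i to j
        back[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + logp[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):              # backtrace
        path[t] = back[t + 1, path[t + 1]]
    return path

decoded = viterbi_gaussian(obs, trans, means, sds)
accuracy = (decoded == states).mean()
```

In the paper's pipeline, the fitted state path from this stage is treated as given and passed to the linear Gaussian SSMs that estimate the location of each resting event.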


Mathematics ◽  
2021 ◽  
Vol 9 (15) ◽  
pp. 1832
Author(s):  
Mariano Méndez-Suárez

Partial least squares structural equations modeling (PLS-SEM) uses sampling bootstrapping to calculate the significance of the model parameter estimates (e.g., path coefficients and outer loadings). However, when data are time series, as in marketing mix modeling, sampling bootstrapping shows inconsistencies because the series has an autocorrelation structure and contains seasonal events, such as Christmas or Black Friday (especially in multichannel retailing), making the significance analysis of the PLS-SEM model unreliable. The alternative proposed in this research uses maximum entropy bootstrapping (meboot), a technique specifically designed for time series, which maintains the autocorrelation structure and preserves, in the bootstrapped series, the occurrence over time of seasonal events or structural changes present in the original series. The results showed that meboot outperformed sampling bootstrapping in terms of the coherence of the bootstrapped data and the quality of the significance analysis.
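A simplified sketch of one meboot replicate (loosely after Vinod's algorithm; the trimming constant and the input series are illustrative) shows why the technique suits time series: each replicate re-applies the original rank ordering, so trend, seasonality, and autocorrelation survive resampling:

```python
import numpy as np

def meboot_replicate(x, rng, trim=0.10):
    """One maximum-entropy bootstrap replicate (simplified sketch).

    Draws values from a piecewise-uniform density built on the sorted
    series, then re-applies the original rank ordering so the replicate
    keeps the shape (trend, seasonality, autocorrelation) of x.
    """
    x = np.asarray(x, dtype=float)
    T = x.size
    order = np.argsort(x)
    xs = x[order]
    # Interval boundaries: midpoints between sorted values, with tails
    # extended by a trimmed mean of the absolute successive differences.
    d = np.abs(np.diff(xs))
    m = np.mean(np.sort(d)[: max(1, int((1 - trim) * (T - 1)))])
    z = np.concatenate(([xs[0] - m], (xs[:-1] + xs[1:]) / 2, [xs[-1] + m]))
    # Sorted uniform draws mapped through the piecewise-linear quantile fn.
    u = np.sort(rng.uniform(size=T))
    grid = np.linspace(0, 1, T + 1)
    ys = np.interp(u, grid, z)
    # Put the sorted draws back in the original time order via the ranks.
    out = np.empty(T)
    out[order] = ys
    return out

rng = np.random.default_rng(1)
# Illustrative seasonal series with a mild trend.
x = np.sin(np.linspace(0, 6 * np.pi, 120)) + 0.1 * np.arange(120) / 120
rep = meboot_replicate(x, rng)
```

Because the replicate shares the original series' rank structure, seasonal spikes such as Christmas or Black Friday stay at their original time positions, unlike i.i.d. sampling bootstrapping, which scatters them.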


2021 ◽  
Vol 28 (1) ◽  
pp. 98-102
Author(s):  
A. B. AYANWALE ◽  
J. O. AJETOMOBI

This paper examined the role of household composition in egg consumption in the Obafemi Awolowo University community. An Ordinary Least Squares regression model was used to obtain at-home demand function parameter estimates for eggs. A positive and significant relationship was found between the quantity of eggs consumed and both household size and the age of children: a 1% increase in each of these variables would cause a 4.68% and a 5.71% increase in egg consumption, respectively. Based on these findings, the study suggested educating households on the importance of egg consumption and of keeping an optimum family size.
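Since the abstract reports percentage responses to 1% changes, the demand function was presumably estimated in double-log form, where OLS coefficients are elasticities by construction. A minimal sketch on simulated household data (all variable names and values are hypothetical, chosen only to mirror the reported elasticities):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
# Hypothetical household data: size, age of children, income.
hh_size = rng.integers(1, 10, n)
child_age = rng.uniform(1, 15, n)
income = rng.lognormal(10, 0.5, n)

# Double-log demand: slope coefficients are elasticities, so a 1%
# change in a regressor maps to a beta% change in quantity demanded.
log_q = (0.5 + 4.68 * np.log(hh_size) + 5.71 * np.log(child_age)
         + 0.2 * np.log(income) + rng.normal(0, 0.3, n))

X = np.column_stack([np.ones(n), np.log(hh_size), np.log(child_age),
                     np.log(income)])
beta, *_ = np.linalg.lstsq(X, log_q, rcond=None)
# beta[1] and beta[2] recover the household-size and child-age
# elasticities (4.68 and 5.71 in this simulated example).
```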


2017 ◽  
Author(s):  
Easton R White

Long-term time series are necessary to better understand population dynamics, assess species' conservation status, and make management decisions. However, population data are often expensive to collect, requiring substantial time and resources. When is a population time series long enough to address a question of interest? We determine the minimum time series length required to detect significant increases or decreases in population abundance. To address this question, we use simulation methods and examine 878 populations of vertebrate species. Here we show that 15–20 years of continuous monitoring are required in order to achieve a high level of statistical power. For both simulations and the time series data, the minimum time required depends on trend strength, population variability, and temporal autocorrelation. These results point to the importance of sampling populations over long periods of time. We argue that statistical power needs to be considered in monitoring program design and evaluation. Time series shorter than 15–20 years are likely underpowered and potentially misleading.
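The simulation logic can be sketched as follows: generate declining populations with lognormal process noise, regress log abundance on time, and count how often the decline is detected. The decline rate, noise level, and series lengths below are illustrative, not the paper's values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def trend_power(n_years, r=-0.03, sigma=0.1, n_sims=500, alpha=0.05):
    """Fraction of simulated declining populations (exponential trend r,
    lognormal observation noise sigma) in which a linear regression on
    log abundance detects a significant negative trend."""
    detected = 0
    for _ in range(n_sims):
        t = np.arange(n_years)
        log_n = np.log(100) + r * t + rng.normal(0, sigma, n_years)
        res = stats.linregress(t, log_n)
        if res.pvalue < alpha and res.slope < 0:
            detected += 1
    return detected / n_sims

power_5 = trend_power(5)    # short series: low power
power_20 = trend_power(20)  # longer series: high power
```

Even with a steady 3%-per-year decline, a 5-year series usually fails to reach significance, which is the underpowering the abstract warns about.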


2018 ◽  
Author(s):  
Easton R White

Long-term time series are necessary to better understand population dynamics, assess species' conservation status, and make management decisions. However, population data are often expensive, requiring a lot of time and resources. What is the minimum population time series length required to detect significant trends in abundance? I first present an overview of the theory and past work that has tried to address this question. As a test of these approaches, I then examine 822 populations of vertebrate species. I show that 72% of time series required at least 10 years of continuous monitoring in order to achieve a high level of statistical power. However, the large variability between populations casts doubt on commonly used simple rules of thumb, like those employed by the IUCN Red List. I argue that statistical power needs to be considered more often in monitoring programs. Short time series are likely under-powered and potentially misleading.


Entropy ◽  
2019 ◽  
Vol 21 (4) ◽  
pp. 385 ◽  
Author(s):  
David Cuesta-Frau ◽  
Juan Pablo Murillo-Escobar ◽  
Diana Alexandra Orrego ◽  
Edilson Delgado-Trejos

Permutation Entropy (PE) is a time series complexity measure commonly used in a variety of contexts, with medicine being the prime example. In its general form, it requires three input parameters for its calculation: time series length N, embedding dimension m, and embedding delay τ. Inappropriate choices of these parameters may potentially lead to incorrect interpretations. However, there are no specific guidelines for an optimal selection of N, m, or τ, only general recommendations such as N ≫ m!, τ = 1, or m = 3, …, 7. This paper deals specifically with the study of the practical implications of N ≫ m!, since long time series are often not available, or non-stationary, and other preliminary results suggest that low N values do not necessarily invalidate PE usefulness. Our study analyses the PE variation as a function of the series length N and embedding dimension m in the context of a diverse experimental set, both synthetic (random, spikes, or logistic model time series) and real-world (climatology, seismic, financial, or biomedical time series), and the classification performance achieved with varying N and m. The results seem to indicate that shorter lengths than those suggested by N ≫ m! are sufficient for a stable PE calculation, and even very short time series can be robustly classified based on PE measurements before the stability point is reached. This may be due to the fact that there are forbidden patterns in chaotic time series, not all the patterns are equally informative, and differences among classes are already apparent at very short lengths.
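A minimal plug-in implementation of PE (ordinal patterns counted via argsort; normalizing by log m! is one common convention) makes the roles of N, m, and τ concrete:

```python
import math
import numpy as np

def permutation_entropy(x, m=3, tau=1, normalize=True):
    """Permutation entropy of series x with embedding dimension m
    and embedding delay tau; N is simply len(x)."""
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau          # number of embedded windows
    patterns = {}
    for i in range(n):
        window = x[i : i + m * tau : tau]
        pattern = tuple(np.argsort(window))   # ordinal pattern of window
        patterns[pattern] = patterns.get(pattern, 0) + 1
    probs = np.array(list(patterns.values())) / n
    h = -np.sum(probs * np.log(probs))        # Shannon entropy of patterns
    return h / math.log(math.factorial(m)) if normalize else h

rng = np.random.default_rng(5)
pe_noise = permutation_entropy(rng.normal(size=2000))        # near 1
pe_trend = permutation_entropy(np.arange(2000, dtype=float)) # exactly 0
```

White noise visits all m! ordinal patterns roughly equally (PE near 1), while a monotone series produces a single pattern (PE of 0); the N ≫ m! recommendation exists so that the pattern frequencies are estimated reliably.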


2020 ◽  
Vol 22 (1) ◽  
pp. 18-30
Author(s):  
Mikael van Deurs ◽  
Mollie E. Brooks ◽  
Martin Lindegren ◽  
Ole Henriksen ◽  
Anna Rindorf
