Low-cost estimates of mortality rate from single tag recoveries: addressing short-term trap-happy and trap-shy bias

2012 ◽  
Vol 69 (3) ◽  
pp. 600-611
Author(s):  
Richard McGarvey ◽  
Janet M. Matthews

Conventional single tag-recovery data are widely available for stock assessments, notably of invertebrate fisheries, worldwide. Though not commonly used for this purpose, the times-at-large in single tag-recovery data provide (relatively) direct information about average mortality rate as a sample of survival times. Mortality rate is estimated using simple formulas given as functions of the mean time-at-large of tagged and recaptured animals. Here we extend an earlier time-at-large mortality estimator to address a potentially common source of bias: trap-happy or trap-shy behavior shortly following tag release. A maximum likelihood solution is derived, yielding an unbiased estimate of instantaneous mortality rate where the interval of usable times-at-large for observed recaptures may be truncated on both sides to any biologist-chosen experimental (recapture) time frame. In tests of the new doubly-truncated mortality estimator using simulated tag-recovery times-at-large, omitting the first 8 weeks of recaptures from the mortality estimate largely eliminated the bias introduced by simulated short-term trap-happy and trap-shy behavior. The resulting reduction in bias was an order of magnitude larger than the accompanying increase in standard error.
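
As a rough illustration of the truncation idea, the sketch below fits a doubly-truncated exponential survival model by maximum likelihood: times-at-large are assumed exponential with total instantaneous mortality Z, and only recaptures falling inside a chosen window [a, b] enter the estimate (the variable names, window choices and root-finding approach are illustrative, not the paper's exact formulation).

```python
# Sketch: doubly-truncated exponential MLE for instantaneous mortality Z.
# Assumes exponential times-at-large, usable only inside the window [a, b].
import numpy as np
from scipy.optimize import brentq

def z_mle(times, a, b):
    """Solve the score equation: sample mean time-at-large equals the
    theoretical mean of an exponential truncated to [a, b]."""
    tbar = np.mean(times)
    def score(z):
        ea, eb = np.exp(-z * a), np.exp(-z * b)
        return 1.0 / z + (a * ea - b * eb) / (ea - eb) - tbar
    return brentq(score, 1e-6, 50.0)

rng = np.random.default_rng(1)
z_true, a, b = 0.8, 8 / 52, 3.0            # Z per year; drop the first 8 weeks
t = rng.exponential(1 / z_true, 200_000)   # simulated times-at-large
t = t[(t >= a) & (t <= b)]                 # keep only recaptures inside the window
print(f"Z_hat = {z_mle(t, a, b):.3f} (true {z_true})")
```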

Author(s):  
J. H. Latter

This paper reviews the nature and history of activity and the extent of risk at 14 volcanoes and volcanic centres in New Zealand and the Kermadec Islands. Mean intervals between eruptions are calculated, or estimated by extrapolation, for eight classes of eruption, represented by order-of-magnitude volume increases from 10⁴ m³ to 10¹¹ m³ (100 km³). Expected property losses in eruptions, divided by the approximate mean intervals, allow risk to be apportioned on an annual basis. In real terms the rhyolite volcanoes, between Kawerau/Lake Rotorua and the southern end of Lake Taupo, are easily the most destructive. Annually apportioned, however, the risk is highest for an eruption of about 10⁷ m³ at Mt Egmont. Cumulative volumes erupted with time are estimated for most of the volcanoes and, where possible, average rates of magma accumulation and subsequent eruption have been estimated. This enables any shortfall between the actual and expected volumes erupted to be estimated, giving a measure of eruption potential at the present time; this varies between volcanoes, from about 0.04 km³ up to several hundred cubic kilometres. The time elapsed since the last eruption, divided by the mean interval for that class of eruption, gives an idea of the likelihood of further activity, although the usefulness of the results is limited by large standard deviations. In the short term, less than 100 years, an eruption of 10⁷ m³ at Mt Egmont again emerges as the most likely damaging event. In the medium term, of the order of a few hundred years, an eruption of c. 1 km³ in the Okataina-Rotorua area, or in the district between Lake Taupo and Rotorua, becomes probable. The data on which the conclusions are based, together with the mean intervals accepted and the times elapsed since the last eruptions, are given in Appendices, so that the nature of the facts, and hence a wide perspective on volcanic activity in New Zealand, can be better appreciated. The picture is one of volcanoes dormant for long periods of time, with great destructive potential, any of which could awaken at any time.
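
The annual risk apportionment and the elapsed-time likelihood measure described above amount to two simple ratios; the toy calculation below makes them explicit (all numbers are invented placeholders, not values from the paper).

```python
# Toy illustration of the two ratios used above (placeholder numbers only).
expected_loss = 5.0e8     # expected property loss for one eruption class
mean_interval = 300.0     # mean interval between eruptions of that class, years
annual_risk = expected_loss / mean_interval
print(f"annualized risk: {annual_risk:,.0f} per year")

time_since_last = 250.0   # years elapsed since the last eruption of this class
likelihood_index = time_since_last / mean_interval   # > 1 hints activity is "due"
print(f"likelihood index: {likelihood_index:.2f}")
```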


2020 ◽  
Vol 24 (05n07) ◽  
pp. 964-972
Author(s):  
Larisa Lvova ◽  
Giuseppe Pomarico ◽  
Federica Mandoj ◽  
Fabrizio Caroleo ◽  
Corrado Di Natale ◽  
...  

A low-cost on-paper sensor based on 5,10,15-tritolylcorrolatocobalt(III) triphenylphosphine, CoTTCorr(PPh3), was developed for cyanide detection in aqueous solutions. The sensor was coupled to a smartphone and used home-written color-intensity analysis software to record and interpret the colorimetric response. Cyanide could be detected down to 0.053 mg/L, an order of magnitude below the 0.5 mg/L limit set by the World Health Organization (WHO) for safe short-term exposure to cyanide in potable water. The colorimetric sensor was selective toward cyanide ions over the anions Cl⁻, Br⁻, F⁻, NO₂⁻, SCN⁻, OAc⁻, ClO₄⁻, H₂PO₄⁻ and HCO₃⁻, while the influence of NO₃⁻ ions on the sensor's optical response towards cyanide was overcome by optimizing the ionophore/anion-exchanger ratio inside the sensing material. The best performance was obtained for the optode with an ionophore-to-exchanger ratio of 1:3. The optimized optodes were employed for quantification of cyanide added to potable water and saliva.
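
A minimal sketch of the kind of smartphone color-intensity readout described above is given below; the region coordinates, chosen color channel and linear calibration coefficients are hypothetical stand-ins for the home-written software and device-specific calibration used in the paper.

```python
# Sketch: extract the sensing-spot color from a photo and map it to a
# concentration via a linear calibration (all constants are hypothetical).
import numpy as np
from PIL import Image

def mean_rgb(path, box=(100, 100, 200, 200)):
    """Average R, G, B over the sensing-spot region of the photograph."""
    region = Image.open(path).convert("RGB").crop(box)
    return np.asarray(region, dtype=float).reshape(-1, 3).mean(axis=0)

def cyanide_mg_per_l(rgb, slope=-0.012, intercept=2.1):
    """Hypothetical linear calibration from green-channel intensity to mg/L;
    a real optode is calibrated against standard cyanide solutions."""
    return slope * rgb[1] + intercept

rgb = mean_rgb("optode_photo.jpg")
print(f"estimated cyanide: {cyanide_mg_per_l(rgb):.3f} mg/L")
```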


Author(s):  
Qianqian Lu ◽  
Nannan Zhang ◽  
Chen Chen ◽  
Miao Zhang ◽  
Dehua Zhao ◽  
...  

Lab-scale simulated biofilm reactors, including aerated reactors disturbed by short-term aeration interruption (AE-D) and non-aerated reactors disturbed by short-term aeration (AN-D), were established to study stable-state (SS) formation and recovery after disturbance for nitrogen transformation, in terms of dissolved oxygen (DO), removal efficiency (RE) of NH₄⁺-N and NO₃⁻-N, and the activity of the key nitrogen-cycle functional genes amoA and nirS (RNA-level abundance, per ball). SS formation and recovery of DO were completed in 0.56–7.75 h after transitions between aeration (Ae) and aeration stop (As). In terms of pollutant REs, formation of a new temporary SS required 30.7–52.3 h after Ae and As interruptions, and seven-day Ae/As interruptions required 5.0% to 115.5% longer recovery times than one-day interruptions in the AE-D and AN-D systems. According to amoA activity, 60.8 h were required in AE-D systems to establish a new temporary SS after As interruptions, during which RNA amoA copies (copy number/microliter) decreased by 88.5%, whereas 287.2 h were required in AN-D systems, during which RNA amoA copies increased 36.4-fold. For nirS activity, 75.2–85.8 h were required to establish new SSs after Ae and As interruptions. The results suggested that new temporary SS formation and recovery in terms of DO, pollutant REs, and amoA and nirS gene activities could be modelled by logistic functions. It is concluded that temporary SS formation and recovery after Ae and As interruptions occurred at asynchronous rates in terms of DO, pollutant REs, and amoA and nirS gene activities. Because of DO fluctuations, establishing a quantitative relationship between gene activity and pollutant RE remains a challenge.
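
Since the abstract notes that SS formation and recovery could be modelled by logistic functions, a minimal curve-fitting sketch is shown below; the sample DO time series and initial guesses are invented for illustration.

```python
# Sketch: fit a logistic recovery curve to a (made-up) DO time series.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """K: plateau (new steady state); r: rate; t0: transition midpoint."""
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.array([0, 5, 10, 20, 30, 40, 50, 60], dtype=float)   # hours
do = np.array([0.3, 0.5, 1.2, 3.9, 6.1, 6.9, 7.1, 7.2])     # mg/L, illustrative
(K, r, t0), _ = curve_fit(logistic, t, do, p0=[7.0, 0.2, 20.0])
print(f"new steady state ~{K:.2f} mg/L, transition midpoint t0 = {t0:.1f} h")
```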


Author(s):  
Xinyi Li ◽  
Liqiong Chang ◽  
Fangfang Song ◽  
Ju Wang ◽  
Xiaojiang Chen ◽  
...  

This paper focuses on a fundamental question in Wi-Fi-based gesture recognition: "Can we use the knowledge learned from some users to perform gesture recognition for others?" This problem, known as cross-target recognition, arises in many practical deployments of Wi-Fi-based gesture recognition where it is prohibitively expensive to collect training data from every single user. We present CrossGR, a low-cost cross-target gesture recognition system. As a departure from existing approaches, CrossGR does not require prior knowledge (such as who is currently performing a gesture) of the target user. Instead, CrossGR employs a deep neural network to extract user-agnostic but gesture-related Wi-Fi signal characteristics to perform gesture recognition. To provide sufficient training data for building an effective deep learning model, CrossGR employs a generative adversarial network to automatically generate synthetic training data from a small set of real-world examples collected from a small number of users. This strategy allows CrossGR to minimize user involvement and the associated cost of collecting training examples for building an accurate gesture recognition system. We evaluate CrossGR by applying it to perform gesture recognition across 10 users and 15 gestures. Experimental results show that CrossGR achieves an accuracy of over 82.6% (up to 99.75%). We demonstrate that CrossGR delivers comparable recognition accuracy while using an order of magnitude fewer training samples collected from end-users than state-of-the-art recognition systems.
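
The augmentation idea can be sketched with a bare-bones GAN: a generator learns to produce signal-feature vectors that a discriminator cannot tell apart from real ones, and the trained generator is then sampled to enlarge the training set. The architecture below is a generic placeholder (layer sizes, feature length and training schedule are assumptions, not CrossGR's actual model).

```python
# Sketch: GAN-based augmentation of Wi-Fi signal features (generic placeholder).
import torch
import torch.nn as nn

SIG_LEN, NOISE_DIM, BATCH = 256, 64, 32   # assumed feature length and batch size

G = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(),
                  nn.Linear(128, SIG_LEN), nn.Tanh())          # generator
D = nn.Sequential(nn.Linear(SIG_LEN, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))                           # real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
real = torch.randn(BATCH, SIG_LEN).tanh()  # stand-in for real CSI feature batch

for step in range(200):
    # discriminator step: push real toward 1, generated toward 0
    fake = G(torch.randn(BATCH, NOISE_DIM)).detach()
    loss_d = bce(D(real), torch.ones(BATCH, 1)) + bce(D(fake), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator step: try to fool the discriminator
    loss_g = bce(D(G(torch.randn(BATCH, NOISE_DIM))), torch.ones(BATCH, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

with torch.no_grad():
    synthetic = G(torch.randn(1000, NOISE_DIM))  # synthetic examples for training
```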


1980 ◽  
Vol 70 (6) ◽  
pp. 2221-2228
Author(s):  
C. E. Mortensen ◽  
E. Y. Iwatsubo

A tilt anomaly preceded a pair of earthquakes (ML = 4.2, origin time 0014 UTC, and ML = 3.9, origin time 0018 UTC, both on 29 August 1978) on the Calaveras Fault near San Jose, California. These earthquakes occurred at hypocentral depths of 8.5 and 9.0 km, respectively, and were located 6.7 and 5.2 km northwest of the Mt. Hamilton tiltmeter site. The anomaly is similar in shape and time scale to signals observed on other tiltmeters at the times of recorded surface creep events. The anomaly began approximately 40 hr before the earthquake pair and consisted of gradual down-to-the-east tilting followed by rapid tilting down-to-the-north-northeast at a rate of 12 μrad/hr. This was followed by 1 hr of rapid down-to-the-east tilting amounting to 1.5 μrad. The maximum peak tilt of 10.6 μrad down-to-the-northeast was followed by gradual decelerating tilting down-to-the-southwest constituting partial recovery. An anomaly of nearly identical form, but smaller in amplitude and duration, preceded an ML = 2.2 aftershock on 5 September 1978. Other nearby earthquakes as large as ML = 4.7 have occurred without accompanying creep-like signals. A similar, but much smaller (0.74 μrad), creep-event-like signal preceded an ML = 3.5 earthquake with its epicenter 3 km east of the Black Mountain tiltmeter site. In general, however, short-term tilt anomalies such as these are not observed to precede local earthquakes within the central California tiltmeter network. The tilt signal preceding the 29 August earthquake pair may be interpreted in terms of a model of a propagating creep event, at depth, associated with seismic failure at a "stuck" patch on the fault. However, the data are not adequate to constrain the model sufficiently to constitute a test of the hypothesis.


Author(s):  
O. A. Alekseev ◽  
S. A. Pulinets ◽  
P. A. Budnikov ◽  
V. B. Serebriakov ◽  
...  

This article analyzes the development and operational control of a functional mock-up of an information service for automated monitoring and short-term forecasting of severe earthquakes in the Kamchatka-Sakhalin region. The tasks of the service mock-up are defined as the collection and processing of data on earthquake precursors and the forecasting of severe earthquakes (magnitude 6 or greater) in the form of estimates of their onset times, epicenter coordinates (latitude and longitude) and magnitudes. Taking the geoinformational character of the initial data on approaching earthquakes as the basis for constructing the mock-up, a geo-integration platform is proposed. This allows the information resources of the earthquake-precursor monitoring systems, the functions that process monitoring information into forecasts, and the generated forecasts and their presentation to consumers to be integrated into a single geoinformation environment. The composition of the service mock-up is considered, together with the functioning of its microservice elements: collection and processing of data from GPS/GLONASS radio-navigation signal receivers; collection and processing of data on the global distribution of total electron content (TEC) in the ionosphere; collection and processing of data on geomagnetic conditions, the solar radio-emission flux, thermal anomalies and atmospheric anomalies over the test-site area; and a unit for presenting and communicating the results of the mock-up's operation. The results of service operation are illustrated with examples of retrospective forecasting, from their precursors, of a number of severe earthquakes that occurred over the past 10 years in the Kamchatka-Sakhalin region.
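
One of the collection microservices listed above can be sketched as a simple polling loop that pulls a precursor feed and hands each snapshot onward; the feed URL, topic name and publish step below are hypothetical assumptions, since the actual service integrates with the geo-integration platform.

```python
# Sketch: a TEC-collection microservice skeleton (endpoint and topic hypothetical).
import json
import time
import urllib.request

TEC_FEED_URL = "https://example.org/tec/global/latest"   # hypothetical endpoint

def fetch_tec_snapshot(url=TEC_FEED_URL):
    """Pull one global total-electron-content snapshot from the upstream feed."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

def publish(topic, payload):
    """Stand-in for handing data to the geo-integration platform."""
    print(topic, json.dumps(payload)[:80])

def run(poll_seconds=900, cycles=4):
    """Poll the feed on a fixed schedule and forward each snapshot."""
    for _ in range(cycles):
        publish("precursors.tec.global", fetch_tec_snapshot())
        time.sleep(poll_seconds)
```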


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3405 ◽  
Author(s):  
Manuel Espinosa-Gavira ◽  
Agustín Agüera-Pérez ◽  
Juan González de la Rosa ◽  
José Palomares-Salas ◽  
José Sierra-Fernández

Very short-term solar forecasts are gaining interest for their application to real-time control of photovoltaic systems. These forecasts are intimately related to the cloud motion that produces variations of the irradiance field on scales of seconds and meters, which particularly impacts small photovoltaic systems. Very short-term forecast models must be supported by updated information on the local irradiance field, and solar sensor networks are emerging as the most direct way to obtain these data. The development of solar sensor networks adapted to small-scale systems such as microgrids is subject to specific requirements: high updating frequency, high density of measurement points and low investment. This paper proposes a wireless sensor network able to provide snapshots of the irradiance field with an updating frequency of 2 Hz. The network comprised 16 motes regularly distributed over an area of 15 m × 15 m (4 motes × 4 motes, minimum intersensor distance of 5 m). The irradiance values were estimated from illuminance measurements acquired by lux-meters in the network motes. The estimated irradiances were validated against measurements of a secondary standard pyranometer, obtaining a mean absolute error of 24.4 W/m² and a standard deviation of 36.1 W/m². The network was able to capture the cloud motion and the main features of the irradiance field even with the reduced dimensions of the monitoring area. These results and the low cost of the measurement devices indicate that this concept of solar sensor network would be appropriate not only for photovoltaic plants in the MW range, but also for smaller systems such as those installed in microgrids.
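
The lux-to-irradiance step can be approximated with a single luminous-efficacy constant, as sketched below; the constant and the sample readings are assumptions (the paper validated its own estimates against a secondary standard pyranometer).

```python
# Sketch: estimate irradiance from lux readings and score it against a
# reference pyranometer (the efficacy constant and the data are assumed).
import numpy as np

LUMINOUS_EFFICACY = 120.0   # lux per W/m^2, a common daylight approximation

def irradiance_from_lux(lux):
    """Estimate global irradiance (W/m^2) from an illuminance reading."""
    return np.asarray(lux) / LUMINOUS_EFFICACY

lux = np.array([54000.0, 18000.0, 96000.0])    # example mote readings
reference = np.array([455.0, 148.0, 802.0])    # example pyranometer values, W/m^2
err = irradiance_from_lux(lux) - reference
print(f"MAE = {np.mean(np.abs(err)):.1f} W/m^2, std = {np.std(err):.1f} W/m^2")
```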


2018 ◽  
Vol 221 ◽  
pp. 02005
Author(s):  
Swee Shu Luing Nikalus ◽  
Guan Toh Guat ◽  
Mum Wai Yip ◽  
See Chew Tai

This paper provides a detailed analysis of the systematic innovation process for improving quality control in latex glove production. TRIZ, a systematic innovation tool, is applied in this case study: function analysis, cause-and-effect chain analysis, physical contradiction, the By-separation model and the 40 Inventive Principles are used to derive feasible, low-cost solutions to the problem. Findings revealed that rejected (leaking) gloves on the production line are manually detected by a checker during the air-blowing test and discarded instantly by the same checker. The main root cause is that the quality-control worker cannot concentrate at all times on detecting torn gloves, mainly because of the fast production line and other distractions. The problem is solved by applying function analysis, physical contradiction, the By-separation tool and the Inventive Principles to generate low-cost but elegant solutions within the defined scope of several constraints and without making the production line more complex. It can therefore be concluded that TRIZ is a systematic and innovative problem-solving methodology.

