On the mechanism of automated fizzy extraction

2019 ◽  
Vol 1 ◽  
pp. e2 ◽  
Author(s):  
Chun-Ming Chang ◽  
Hao-Chun Yang ◽  
Pawel L. Urban

Fizzy extraction (FE) facilitates analysis of volatile solutes by promoting their transfer from the liquid to the gas phase. A carrier gas is dissolved in the sample under moderate pressure (Δp ≈ 150 kPa), followed by an abrupt decompression, which leads to effervescence. The released gaseous analytes are directed to an on-line detector by a small pressure difference. FE is advantageous in chemical analysis because the volatile species are released in a short time interval, allowing for pulsed injection and leading to high signal-to-noise ratios. To shed light on the mechanism of FE, we have investigated various factors that could potentially contribute to the extraction efficiency, including instrument-related, method-related, sample-related, and analyte-related factors. In particular, we have evaluated the properties of volatile solutes that make them amenable to FE. The results suggest that organic solutes may diffuse into the bubble lumen, especially in the presence of salt. The high signal intensities in FE coupled with mass spectrometry are partly due to the high sample introduction rate (upon decompression) to a mass-sensitive detector. However, analytes with different properties (molecular weight, polarity) reveal distinct temporal profiles, pointing to the effect of bubble exposure to the sample matrix. A sufficient extraction time (~12 s) is required to extract less volatile solutes. The results presented in this report can help analysts predict the occurrence of matrix effects when analyzing real samples. They also provide a basis for increasing extraction efficiency to detect low-abundance analytes.

Author(s):  
O. S. Galinina ◽  
S. D. Andreev ◽  
A. M. Tyurlikov

Introduction: Machine-to-machine communication involves data transmission from various wireless devices and is attracting the attention of cellular operators. In this regard, it is crucial to recognize and control overload situations in which a large number of such devices access the network over a short time interval. Purpose: Analysis of radio network overload at the initial network entry stage in a machine-to-machine communication system. Results: A system is considered that features multiple smart meters, which may report alarms and autonomously collect energy consumption information. An analytical approach is proposed to study the operation of a large number of devices in such a system, as well as to model the settings of the random-access protocol in a cellular network and the overload control mechanisms with respect to the access success probability, network access latency, and device power consumption. A comparison between the obtained analytical results and simulation data is also offered.
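The access success probability the authors analyze can be illustrated with a deliberately simplified single-shot collision model (not the paper's analytical framework): each of N devices independently picks one of M random-access preambles, and a device succeeds only if no other device picked the same preamble. The function names, and the value of 54 preambles used in the example (a common LTE configuration), are illustrative assumptions.

```python
import random
from collections import Counter

def analytic_success_prob(n_devices, n_preambles):
    # A tagged device succeeds iff none of the other n-1 devices
    # picks the same preamble (single-shot collision model).
    return (1 - 1 / n_preambles) ** (n_devices - 1)

def simulated_success_prob(n_devices, n_preambles, trials=20000, seed=1):
    # Monte Carlo check of the closed form above.
    rng = random.Random(seed)
    successes, total = 0, 0
    for _ in range(trials):
        picks = [rng.randrange(n_preambles) for _ in range(n_devices)]
        counts = Counter(picks)
        successes += sum(1 for p in picks if counts[p] == 1)
        total += n_devices
    return successes / total
```

As the number of simultaneously accessing devices grows, the success probability decays roughly as exp(-(N-1)/M), which is why overload control at network entry matters.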


2021 ◽  
Vol 13 (14) ◽  
pp. 2739
Author(s):  
Huizhong Zhu ◽  
Jun Li ◽  
Longjiang Tang ◽  
Maorong Ge ◽  
Aigong Xu

Although the ionosphere-free (IF) combination is usually employed in long-range precise positioning, using uncombined observations with proper ionospheric constraints is more beneficial, as it exploits the spatiotemporal variation of the ionospheric delays and avoids the difficulty of choosing IF combinations in triple-frequency data processing. Yet determining an appropriate power spectral density (PSD) for the ionospheric delays is one of the most important issues in uncombined processing, since empirical methods cannot account for actual ionospheric activity. The ionospheric delays derived from actual dual-frequency phase observations contain not only the real-time variations of the ionospheric delays but also observation noise, which can be much larger than the change in ionospheric delay over a very short time interval, so the statistics of the ionospheric delays cannot be retrieved properly. Fortunately, the ionospheric delay variations and the observation noise behave in different ways: they can be represented by a random-walk process and a white-noise process, respectively, and can therefore be separated statistically. In this paper, we propose an approach to determine the PSD of the ionospheric delays for each satellite in real time by denoising the ionospheric delay observations. Based on the relationship between the PSD, the observation noise, and the ionospheric observations, several aspects affecting the PSD calculation are investigated numerically and optimal values are suggested. The proposed approach with the suggested optimal parameters is applied to the processing of three long-range baselines of 103 km, 175 km and 200 km with triple-frequency BDS data in both static and kinematic modes. The improvement in the first ambiguity fixing time (FAFT), the positioning accuracy and the estimated ionospheric delays are analysed and compared with those obtained using an empirical PSD.
The results show that the FAFT can be shortened by at least 8% compared with using a single empirical PSD for all satellites, even one fine-tuned to the actual observations, and by 34% compared with using a PSD derived from ionospheric delay observations without denoising. Finally, the positioning performance of BDS three-frequency observations shows that the averaged FAFT is 226 s for static and 270 s for kinematic mode, and the positioning accuracies after ambiguity fixing are 1 cm, 1 cm and 3 cm in the East, North and Up directions for static mode and 3 cm, 3 cm and 6 cm for kinematic mode, respectively.
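The statistical separation of a random-walk signal from white observation noise can be sketched as follows. This is a minimal illustration of the principle, not the authors' denoising algorithm: for observations y(t) = random walk (PSD q) + white noise (variance σ²), the variance of the lag-k increment is q·k·Δt + 2σ², so two lag variances identify both terms. All function names and parameter values are assumptions.

```python
import random
import statistics

def simulate_observations(n, q, sigma, dt=1.0, seed=2):
    """Random-walk signal (PSD q) observed with additive white noise (std sigma)."""
    rng = random.Random(seed)
    obs, x = [], 0.0
    for _ in range(n):
        x += rng.gauss(0.0, (q * dt) ** 0.5)   # random-walk step
        obs.append(x + rng.gauss(0.0, sigma))  # noisy observation
    return obs

def lag_variance(xs, k):
    return statistics.pvariance([xs[i + k] - xs[i] for i in range(len(xs) - k)])

def separate_rw_and_noise(obs, k1=1, k2=20, dt=1.0):
    # Var(x[t+k] - x[t]) = q*k*dt + 2*sigma^2:
    # the slope in k gives q, the intercept gives sigma^2.
    v1, v2 = lag_variance(obs, k1), lag_variance(obs, k2)
    q = (v2 - v1) / ((k2 - k1) * dt)
    sigma2 = (v1 - q * k1 * dt) / 2.0
    return q, sigma2
```

The raw one-step increment variance is dominated by 2σ² when the noise is large, which is exactly why a naive PSD estimate from undenoised ionospheric observations is inflated.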


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Christiane Schön ◽  
Claudia Reule ◽  
Katharina Knaub ◽  
Antje Micka ◽  
Manfred Wilhelm ◽  
...  

Abstract Background The assessment of improvement or maintenance of joint health in healthy subjects is a great challenge. The aim of the study was the evaluation of a joint stress test to assess joint discomfort in subjects with activity-related knee joint discomfort (ArJD). Results Forty-five subjects were recruited to perform the single-leg-step-down (SLSD) test (15 subjects per group). Subjects with ArJD of the knee (age 22–62 years) were compared to healthy subjects (age 24–59 years) with no knee joint discomfort during daily-life sporting activity and to subjects with mild-to-moderate osteoarthritis of the knee joint (OA, Kellgren score 2–3, age 42–64 years). The subjects performed the SLSD test with two different protocols: (I) standardization for knee joint discomfort; (II) standardization for load on the knee joint. In addition, range of motion (ROM), a reach test, acute pain at rest and after a single-leg squat, and the Knee injury and Osteoarthritis Outcome Score (KOOS) were assessed. In OA and ArJD subjects, knee joint discomfort could be reproducibly induced in a short time interval of less than 10 min (200 steps). In healthy subjects, no pain was recorded. A clear differentiation between study groups was observed with the SLSD test (maximal step number) as well as with the KOOS questionnaire, ROM, and reach test. In addition, moderate to good intra-class correlation was shown for the investigated outcomes. Conclusions These results suggest that the SLSD test is a reliable tool for assessing knee joint function in ArJD and OA subjects and for studying improvements in their activities. Further, this model can be used as a stress model in intervention studies to examine the impact of stress on knee joint health function.
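The intra-class correlation used here to quantify test-retest reliability can be computed, in its simplest one-way random-effects form, as below. This is a generic ICC(1,1) sketch, not the exact statistical model used in the study.

```python
import statistics

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1).
    ratings: list of per-subject lists, each with the same number k
    of repeated measurements of that subject."""
    n = len(ratings)
    k = len(ratings[0])
    grand = statistics.fmean(x for row in ratings for x in row)
    subj_means = [statistics.fmean(row) for row in ratings]
    # Between-subject and within-subject mean squares.
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, subj_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfectly repeatable measurements give ICC = 1; values above roughly 0.75 are conventionally read as good reliability, and values near or below 0 as poor.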


1998 ◽  
Vol 1644 (1) ◽  
pp. 142-149 ◽  
Author(s):  
Gang-Len Chang ◽  
Xianding Tao

An effective method for estimating time-varying turning fractions at signalized intersections is described. By including approximate intersection delay, the proposed model can account for the impact of signal settings on the dynamic distribution of intersection flows. To improve estimation accuracy, turning fractions pre-estimated over a relatively long time interval are used as additional constraints on the same estimation over a short time interval. The results of extensive simulation experiments indicate that the proposed method yields sufficiently accurate and efficient estimates of dynamic turning fractions for signalized intersections.
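The core estimation step, recovering turning fractions from observed entry and exit counts over several intervals, can be sketched as an unconstrained least-squares problem for a toy two-entry, two-exit junction. This omits the delay terms and the long-interval constraints that distinguish the proposed method; all names and numbers are illustrative.

```python
def estimate_turning_fractions(entries, exits):
    """Least-squares turning fractions for a 2-entry/2-exit junction:
    exits[t][j] = entries[t][0] * B[j][0] + entries[t][1] * B[j][1]."""
    def solve2(a11, a12, a22, c1, c2):
        # Solve the 2x2 symmetric normal equations.
        det = a11 * a22 - a12 * a12
        return ((a22 * c1 - a12 * c2) / det, (a11 * c2 - a12 * c1) / det)

    a11 = sum(q[0] * q[0] for q in entries)
    a12 = sum(q[0] * q[1] for q in entries)
    a22 = sum(q[1] * q[1] for q in entries)
    fractions = []
    for j in (0, 1):
        c1 = sum(q[0] * y[j] for q, y in zip(entries, exits))
        c2 = sum(q[1] * y[j] for q, y in zip(entries, exits))
        fractions.append(solve2(a11, a12, a22, c1, c2))
    # Real formulations add non-negativity and row-sum-to-one constraints;
    # with exact, well-conditioned data, plain least squares recovers them.
    return fractions
```

The entry-flow pattern must vary across intervals for the normal equations to be well conditioned, which is one reason constraints from a longer interval help when short-interval counts are nearly collinear.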


2020 ◽  
pp. 5-13
Author(s):  
Vishal Dubey ◽  
Bhavya Takkar ◽  
...  

Micro-expressions are a form of nonverbal communication that appear for mere fractions of a second. They cannot be consciously controlled, as they reveal our actual emotional state even when we try to hide or conceal our genuine emotions. Because micro-expressions are so rapid, they are challenging for a human observer to detect with the naked eye. These subtle expressions are spontaneous, involuntary emotional responses: they occur when a person wants to conceal a specific emotion while the brain reacts to what the person is actually feeling, so the true feeling is displayed very briefly before a false emotional response is produced. Ordinary human expressions tend to last about 0.5-4.0 seconds, whereas a micro-expression can last less than half a second. Compared with regular facial expressions, the response revealed by a micro-expression is very difficult to suppress. Micro-expressions cannot be controlled because of their short duration, but with a high-speed camera one can capture a person's expressions and replay them at slow speed. Over the last ten years, researchers around the globe have studied automatic micro-expression recognition in computer science, security, psychology, and other fields. The objective of this paper is to provide insight into micro-expression analysis using a 3D CNN. Many micro-expression datasets have been released in the last decade; we performed our experiment on the SMIC micro-expression dataset and compared the results after applying two different activation functions.
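The key operation behind a 3D CNN, a kernel that slides over time as well as over the two spatial axes of a frame stack, can be illustrated with a naive pure-Python "valid" convolution (really cross-correlation, as is conventional in CNNs). A real system would use an optimized library; this sketch only shows how the temporal dimension enters, which is what lets the network pick up motion between frames.

```python
def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D cross-correlation.
    volume: T x H x W nested lists (frames over time);
    kernel: t x h x w nested lists."""
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    t, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for i in range(T - t + 1):          # slide over time
        plane = []
        for j in range(H - h + 1):      # slide over rows
            row = []
            for k in range(W - w + 1):  # slide over columns
                s = sum(volume[i + a][j + b][k + c] * kernel[a][b][c]
                        for a in range(t) for b in range(h) for c in range(w))
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out
```

A kernel that spans two frames with weights -1 and +1 acts as a temporal-difference detector: it responds only where pixel intensity changes between consecutive frames, the raw signal a micro-expression produces.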


2018 ◽  
Vol 21 (10) ◽  
pp. 979-984 ◽  
Author(s):  
Chiara Adami ◽  
Elena Lardone ◽  
Paolo Monticelli

Objectives The aim of this study was to compare the Electronic von Frey Anaesthesiometer (EVF) and the Small Animal ALGOmeter (SMALGO), used to measure sensory thresholds in 13 healthy cats at both the stifle and the lumbosacral joint, in terms of inter-rater and inter-device reliability. Methods Two independent observers carried out the sets of measurements in a randomised order, with a 45 min interval between them, in each cat. The inter-rater and inter-device reliability were evaluated by calculating the intra-class correlation coefficient (ICC) for each pair of measurements. The Bland–Altman method was used as an additional tool to assess the level of agreement between the two algometers. Results The mean ± SD sensory thresholds measured with the EVF were 311 ± 116 g and 378 ± 178 g for the stifle and for the lumbosacral junction, respectively, whereas those measured with the SMALGO were 391 ± 172 g and 476 ± 172 g. The inter-rater reliability was fair (ICC > 0.4) for each pair of measurements except those taken at the level of the stifle with the SMALGO, for which the level of agreement between observers A and B was poor (ICC = 0.01). The inter-device reliability was good (ICC = 0.73; P = 0.001). The repetition of the measurements affected reliability, as the thresholds obtained after the 45 min break were consistently lower than those measured during the first part of the trial (P = 0.02). Conclusions and relevance The EVF and the SMALGO may be used interchangeably in cats, especially when the area to be tested is the lumbosacral joint. However, when the thresholds are measured at the stifle, the inter-observer reliability is better with the EVF than with the SMALGO. The reliability decreases when the measurements are repeated within a short time interval, suggesting a limited clinical applicability of quantitative sensory testing with both algometers in cats.
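The Bland–Altman method used to assess agreement between the two algometers reduces to computing the mean of the paired differences (the bias) and the 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch, with made-up numbers in the example:

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman agreement between two paired measurement series.
    Returns (bias, lower 95% limit of agreement, upper 95% limit)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

In the usual Bland–Altman plot, each difference is plotted against the pair's mean; two devices "agree" if the differences scatter around a small bias and stay within limits that are clinically acceptable.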


2000 ◽  
Vol 90 (8) ◽  
pp. 788-800 ◽  
Author(s):  
L. V. Madden ◽  
G. Hughes ◽  
M. E. Irwin

A general approach was developed to predict the yield loss of crops in relation to infection by systemic diseases. The approach was based on two premises: (i) disease incidence in a population of plants over time can be described by a nonlinear disease progress model, such as the logistic or monomolecular; and (ii) yield of a plant is a function of time of infection (t) that can be represented by the (negative) exponential or similar model (ζ(t)). Yield loss of a population of plants on a proportional scale (L) can be written as the product of the proportion of the plant population newly infected during a very short time interval (X′(t)dt) and ζ(t), integrated over the time duration of the epidemic. L in the model can be expressed in relation to directly interpretable parameters: maximum per-plant yield loss (α, typically occurring at t = 0); the decline in per-plant loss as time of infection is delayed (γ; units of time^-1); and the parameters that characterize disease progress over time, namely, initial disease incidence (X0), rate of disease increase (r; units of time^-1), and maximum (or asymptotic) value of disease incidence (K). Based on the model formulation, L ranges from αX0 to αK and increases with increasing X0, r, K, α, and γ^-1. The exact effects of these parameters on L were determined with numerical solutions of the model. The model was expanded to predict L when there was spatial heterogeneity in disease incidence among sites within a field and when maximum per-plant yield loss occurred at a time other than the beginning of the epidemic (t > 0). However, the latter two situations had a major impact on L only at high values of r. The modeling approach was demonstrated by analyzing data on soybean yield loss in relation to infection by Soybean mosaic virus, a member of the genus Potyvirus. Based on model solutions, strategies to reduce or minimize yield losses from a given disease can be evaluated.
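The yield-loss integral described above can be evaluated numerically. The sketch below uses the logistic progress curve for X(t) and the negative-exponential per-plant loss ζ(t) = α·exp(-γt), and adds the loss α·X0 contributed by plants already infected at t = 0 (consistent with L ranging from αX0 to αK). Parameter values in the example are illustrative, not from the paper.

```python
import math

def incidence(t, x0, r, k):
    # Logistic disease progress: initial incidence x0, rate r, asymptote k.
    return k / (1.0 + ((k - x0) / x0) * math.exp(-r * t))

def yield_loss(x0, r, k, alpha, gamma, t_end=200.0, n=20000):
    # L = alpha*x0 + integral_0^T X'(t) * alpha*exp(-gamma*t) dt,
    # with X'(t) = r*X*(1 - X/k) for the logistic model (trapezoid rule).
    def f(t):
        x = incidence(t, x0, r, k)
        return alpha * math.exp(-gamma * t) * r * x * (1.0 - x / k)
    dt = t_end / n
    area = 0.5 * (f(0.0) + f(t_end)) * dt + sum(f(i * dt) for i in range(1, n)) * dt
    return alpha * x0 + area
```

With γ = 0 (no benefit from late infection) the loss approaches αK, the upper bound stated in the abstract; as γ grows, late infections contribute less and L falls toward αX0.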


1995 ◽  
Vol 34 (7) ◽  
pp. 1512-1524 ◽  
Author(s):  
Thomas J. Kleespies

Abstract Radiometric observations in the 3.9-µm region have been used by a number of investigators for the determination of cloud parameters or sea surface temperature at night. Only a few attempts have been made to perform quantitative assessments of cloud and surface properties during the daytime because of the inability to distinguish between the thermal and solar components of the satellite-sensed radiances. This paper presents a new method of separating the thermal and solar components of upwelling 3.9-µm radiances. Two collocated satellite observations are made under conditions where the solar illumination angle changes but the thermal structure of the cloud and atmosphere, as well as the cloud microphysics, change very little. These conditions can easily be met by observing the same cloud from geosynchronous orbit over a short time interval during the morning hours. When the radiances are differenced under these constraints, the thermal components cancel, and the difference in the radiances is simply the difference in the solar component. With a few simplifying assumptions, a cloud microphysical property, specifically effective radius, can be inferred. This parameter is of particular importance to both climate modeling and global change studies. The methodology developed in this paper is applied to data from the Visible-Infrared Spin Scan Radiometer Atmospheric Sounder onboard the GOES-7 spacecraft for a period in August 1992.
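The differencing idea can be illustrated with a toy linear radiance model R = R_thermal + μ·S, where μ is the cosine of the solar zenith angle and S the solar-reflected term. This is a gross simplification of the actual radiative transfer, used only to show why two looks at the same scene under different illumination cancel the thermal component.

```python
def split_thermal_solar(r1, mu1, r2, mu2):
    """Toy model: R = R_thermal + mu * S. If the thermal/cloud state is
    unchanged between the two looks, differencing cancels R_thermal."""
    s = (r1 - r2) / (mu1 - mu2)   # solar component per unit cos(zenith)
    r_thermal = r1 - mu1 * s      # what remains after removing the solar part
    return r_thermal, s
```

In practice the retrieval is only as good as the assumption that cloud structure and microphysics are frozen between the two observations, which is why a short time interval from geosynchronous orbit is essential.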


1989 ◽  
Vol 21 (1) ◽  
pp. 1-19 ◽  
Author(s):  
H. R. Lerche ◽  
D. Siegmund

Let T be the first exit time of Brownian motion W(t) from a region ℛ in d-dimensional Euclidean space having a smooth boundary. Given points ξ0 and ξ1 in ℛ, ordinary and large-deviation approximations are given for Pr{T < ε | W(0) = ξ0, W(ε) = ξ1} as ε → 0. Applications are given to hearing the shape of a drum and approximating the second virial coefficient.
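The probability being approximated can also be estimated by Monte Carlo: simulate a Brownian bridge pinned at ξ0 and ξ1 over [0, ε] and count the paths that leave the region (here a unit disk in 2D, an illustrative choice). This is a numerical check of the qualitative behavior, not the paper's analytical approximation; discrete monitoring slightly underestimates exits.

```python
import math
import random

def bridge_exits_disk(x0, x1, eps, radius=1.0, n_steps=200, rng=None):
    """One 2-D Brownian bridge from x0 at time 0 to x1 at time eps;
    True if the discretely monitored path leaves the disk."""
    rng = rng or random.Random()
    dt = eps / n_steps
    x, y = x0
    for k in range(n_steps - 1):
        rem = eps - k * dt                     # time remaining to the pin
        sd = math.sqrt(dt * (rem - dt) / rem)  # exact bridge increment std
        x += (x1[0] - x) * dt / rem + rng.gauss(0.0, sd)
        y += (x1[1] - y) * dt / rem + rng.gauss(0.0, sd)
        if x * x + y * y > radius * radius:
            return True
    return False

def exit_prob(x0, x1, eps, trials=2000, seed=7):
    rng = random.Random(seed)
    hits = sum(bridge_exits_disk(x0, x1, eps, rng=rng) for _ in range(trials))
    return hits / trials
```

Consistent with the large-deviation picture, the exit probability is non-negligible only when ξ0 and ξ1 are within roughly √ε of the boundary and decays extremely fast as the points move into the interior.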

