Improving GNSS Ambiguity Acceptance Test Performance with the Generalized Difference Test Approach

Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3018 ◽  
Author(s):  
Lei Wang ◽  
Ruizhi Chen ◽  
Lili Shen ◽  
Yanming Feng ◽  
Yuanjin Pan ◽  
...  

In Global Navigation Satellite System (GNSS) data processing, the integer ambiguity acceptance test is considered a challenging problem. A number of ambiguity acceptance tests have been proposed from different perspectives and then unified into the integer aperture (IA) estimation framework. Among all the IA estimators, the optimal integer aperture (OIA) achieves the highest success rate for a given failure rate tolerance. However, the OIA is of limited practical appeal due to its high computational complexity. On the other hand, the popular discrimination tests employ only two integer candidates, which is the essential reason for their sub-optimality. In this study, a generalized difference test (GDT) is proposed to exploit the benefit of including three or more integer candidates, improving performance from a theoretical perspective. The simulation results indicate that the third-best integer candidate contributes more than 70% of the success rate improvement when the integer bootstrapping success rate is higher than 0.8. Therefore, the GDT with three integer candidates (GDT3) achieves a good trade-off between performance and computational burden. A threshold function is also applied for rapid determination of the fixed failure rate (FF-) threshold for GDT3. The performance improvement of GDT3 is validated with a real GNSS data set. The numerical results indicate that GDT3 achieves a higher empirical success rate while the empirical failure rate remains comparable. In a 20 km baseline test, the success rate of GDT3 increases by 7% with almost the same empirical failure rate.
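The OIA weighs the best integer candidate's likelihood against that of competing integer vectors, and a difference-type test with k candidates can be read as truncating that comparison to the k best candidates. The following is a minimal sketch of this idea, assuming a diagonal ambiguity covariance and using a truncated posterior-weight statistic as one plausible realization; the function names, threshold, and numbers are illustrative assumptions, not the paper's exact GDT statistic.

```python
import math

def quad_form(a_float, z, q_inv):
    """Squared distance ||a - z||^2_Q for a diagonal Q (q_inv = 1/diag(Q))."""
    return sum(qi * (af - zi) ** 2 for af, zi, qi in zip(a_float, z, q_inv))

def gdt_accept(a_float, candidates, q_inv, k=3, threshold=0.9):
    """Accept the best integer candidate if its posterior weight, computed
    over only the k best candidates, exceeds the threshold."""
    dists = sorted(quad_form(a_float, z, q_inv) for z in candidates)[:k]
    weights = [math.exp(-0.5 * (d - dists[0])) for d in dists]
    return weights[0] / sum(weights) >= threshold

# Float ambiguity estimate very close to an integer vector: accepted.
a_hat = [3.02, -1.01]
cands = [[3, -1], [3, 0], [2, -1], [4, -1]]
print(gdt_accept(a_hat, cands, q_inv=[25.0, 25.0], k=3, threshold=0.9))  # -> True
```

With k = 2 this reduces to a conventional two-candidate discrimination test; adding the third candidate tightens the acceptance decision at little extra cost, which is the trade-off GDT3 exploits.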

2019 ◽  
Vol 11 (7) ◽  
pp. 804 ◽  
Author(s):  
Lei Wang ◽  
Ruizhi Chen ◽  
Lili Shen ◽  
Fu Zheng ◽  
Yanming Feng ◽  
...  

The GNSS integer ambiguity acceptance test is one of the open problems in GNSS data processing. A number of ambiguity acceptance tests have been proposed from different perspectives and then unified into the integer aperture estimation framework. Existing comparative studies indicate that the impact of the test statistic's form on test performance is less critical, while constructing an efficient, practical test threshold remains challenging. Based on likelihood ratio test theory, a new, computationally efficient ambiguity acceptance test with a controllable success fix rate, namely the fixed likelihood ratio (FL-) approach, is proposed; it does not require Monte Carlo simulation. The study indicates that the fixed failure rate (FF-) approach can only control the overall failure rate of the acceptance region, while the local failure rate is not controllable. The proposed FL-approach only accepts fixed solutions meeting the likelihood ratio requirement. With a properly chosen likelihood ratio threshold, the FL-approach achieves a success rate comparable to the FF-approach, and an even lower failure rate for strong underlying models. The fixed success fix rate of the FL-approach is verified with both simulated and real GNSS data. The numerical results indicate that the success fix rate of the FL-approach exceeds 98% while the failure rate remains below 1.5%. RTK positioning with ambiguities tested by the FL-approach achieved 1–2 cm horizontal precision and 2–4 cm vertical precision for all tested baselines, which confirms that the FL-approach can serve as a reliable and efficient threshold determination method for the GNSS ambiguity acceptance test problem.
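A likelihood ratio acceptance rule is cheap to evaluate because, under a Gaussian float solution, the likelihood ratio of the best to the second-best integer candidate depends only on their squared Q-norm distances, so no simulation is needed to apply a fixed threshold. The sketch below is illustrative only; the function name and threshold value are assumptions, not the paper's calibration.

```python
import math

def likelihood_ratio_accept(r_best, r_second, lr_threshold=1e4):
    """Accept the fixed solution when the Gaussian likelihood ratio of the
    best to the second-best integer candidate meets a fixed requirement.
    r_best, r_second: squared Q-norm distances ||a_hat - z||^2_Q."""
    lr = math.exp(0.5 * (r_second - r_best))
    return lr >= lr_threshold

# Well-separated candidates: LR = exp(10) ~ 2.2e4, above the threshold.
print(likelihood_ratio_accept(0.1, 20.1))  # -> True
```

In contrast to an FF-threshold, which must be tuned per model (typically by Monte Carlo simulation) to hit a target overall failure rate, the fixed likelihood ratio is applied directly to each fixed solution.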


2021 ◽  
Vol 14 (1) ◽  
pp. 60
Author(s):  
Farinaz Mirmohammadian ◽  
Jamal Asgari ◽  
Sandra Verhagen ◽  
Alireza Amiri-Simkooei

With the advancement of multi-constellation and multi-frequency global navigation satellite systems (GNSSs), more observations are available for high-precision positioning applications. Despite this progress, achieving a realistic precision description of the solution (neither too optimistic nor too pessimistic) is still an open problem. Weighting among different GNSS systems requires a realistic stochastic model for all observations to achieve the best linear unbiased estimation (BLUE) of the unknown parameters in multi-GNSS data processing. In addition, correct integer ambiguity resolution (IAR) becomes crucial in shortening the Time-To-Fix (TTF) in RTK, especially under challenging environmental conditions. In general, it is required to estimate individual variances for the observation types, consider the correlation between different observables, and compensate for the satellite-elevation dependence of the observable precision. Quality control of GNSS signals, such as GPS, GLONASS, Galileo, and BeiDou, can be performed by processing zero- or short-baseline double-difference pseudorange and carrier phase observations using least-squares variance component estimation (LS-VCE). The efficacy of this method is investigated using real multi-GNSS data sets collected by Trimble NETR9, SEPT POLARX5, and LEICA GR30 receivers. The results show that the standard deviations of the observations depend on the system and the observable type, so a particular receiver may perform best for some signals but not others. The estimated variances and correlations among different observations also depend on the receiver type, because the signal tracking and recovery techniques differ from one receiver type to another. The reliability of IAR improves if a realistic stochastic model is applied in single- or multi-GNSS data processing. According to the results, for the data sets considered, a realistic stochastic model can increase the computed empirical success rate to 100% in multi-GNSS as well as in a single system. Moreover, using the estimated stochastic model leads to better precision and accuracy of the estimated baseline components, with improvements of up to 39% in multi-GNSS.
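The elevation dependence of observation precision is commonly compensated with a variance model of the form σ²(E) = a² + b²/sin²(E), so that low-elevation satellites receive larger variances and hence smaller weights. The sketch below uses this standard model; the coefficient values are illustrative assumptions, not estimates from the paper.

```python
import math

def elevation_variance(elev_deg, a=0.003, b=0.003):
    """Elevation-dependent variance model sigma^2 = a^2 + b^2 / sin^2(E),
    a common weighting choice for GNSS observations (coefficients in
    metres, illustrative only)."""
    e = math.radians(elev_deg)
    return a ** 2 + b ** 2 / math.sin(e) ** 2

# Low-elevation satellites get a larger standard deviation (smaller weight).
for elev in (10, 30, 90):
    print(f"E={elev:2d} deg  sigma={math.sqrt(elevation_variance(elev)) * 1000:.2f} mm")
```

LS-VCE generalizes this by estimating the variance components (and cross-observable covariances) from the data themselves rather than fixing a and b a priori.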


2021 ◽  
Vol 13 (11) ◽  
pp. 2106
Author(s):  
Haiyang Li ◽  
Guigen Nie ◽  
Shuguang Wu ◽  
Yuefan He

Integer ambiguity resolution is required to obtain precise coordinates in global navigation satellite system (GNSS) positioning. Poorly observed data cause unfixed integer ambiguities and reduce coordinate accuracy. Previous studies mostly used denoising filters and partial ambiguity resolution algorithms to address this problem. This study proposes a sequential ambiguity resolution method that includes a float solution substitution process and a double-difference (DD) iterative correction equation process. The float solution substitution process updates the initial float solution, while the DD iterative correction equation process eliminates the residual biases. The satellite-selection experiment shows that the float solution substitution process is effective in obtaining a more accurate float solution. The iteration-correction experiment shows that the DD iterative correction equation process is feasible, improving the ambiguity success rate from 28.4% to 96.2%. The superiority experiment shows a significant improvement in the ambiguity success rate from 36.1% to 83.6% and a reduction in the baseline difference from about 0.1 m to 0.04 m. These results demonstrate that the proposed sequential ambiguity resolution method can significantly improve results for poorly observed GNSS data.
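The standard building block behind sequential schemes of this kind is sequential (bootstrapped) integer rounding: ambiguities are fixed one at a time, and after each fix the remaining float values are conditioned on the integers already chosen. The sketch below shows plain integer bootstrapping as background only; it is not the authors' float-solution-substitution or DD iterative correction procedure.

```python
def bootstrap_fix(a_float, L):
    """Sequential (bootstrapped) integer rounding. L is the unit lower-
    triangular factor of the ambiguity covariance Q = L D L^T; fixing
    starts from the last component, and each fix corrects the earlier
    float values through the corresponding column of L."""
    a = list(a_float)
    z = [0] * len(a)
    for i in range(len(a) - 1, -1, -1):
        z[i] = round(a[i])                    # fix the current (conditioned) value
        for j in range(i):                    # condition the remaining components
            a[j] -= L[i][j] * (a[i] - z[i])
    return z

# Conditioning changes the outcome relative to componentwise rounding:
print(bootstrap_fix([1.6, 2.4], [[1.0, 0.0], [0.6, 1.0]]))  # -> [1, 2] (plain rounding gives [2, 2])
```

The paper's contribution sits on top of such a fixing step: substituting an improved float solution and iterating a DD correction equation to remove residual biases before (re)fixing.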


Author(s):  
Georg Feigl ◽  
Andreas Sammer

Abstract Purpose Given the ongoing discussion of the usefulness of dissection of human bodies in medical curricula, we investigated the influence of the anatomical knowledge acquired in the dissection course on the knowledge required for modules of visceral surgery. Methods Students attending the dissection course of topographic anatomy had to answer a questionnaire of 22 questions focused on the anatomical knowledge required for visceral surgical modules. Failure was defined as 13 or fewer correct answers; success was categorized as high, good or moderate. The same questionnaire was handed out to 245 students prior to the module on visceral surgery. Students provided information on which regions they had dissected during the course or prior to the module. The results were compared with the results of a written multiple-choice question (MCQ) exam of the visceral surgery module (n = 160 students) with an unannounced primary focus on anatomy. Results Students who had dissected the truncal regions of the human body answered the questionnaire with high success. Students who had dissected the Head/Neck or Limb regions had a high failure rate, and none of them reached the “high” success level. In the MCQ exam, students who had dissected truncal regions had a high success rate, while those who had not dissected or who had dissected the Head/Neck or Limb regions had a high failure rate. Conclusion Dissections support and improve the knowledge required for surgical modules. For the visceral surgical module, students who dissected the region prior to the module benefited greatly. Therefore, dissection of the entire human body appears to be preferable.


2021 ◽  
pp. archdischild-2021-322184
Author(s):  
Susan Jones ◽  
Ross Hanwell ◽  
Tharima Chowdhury ◽  
Jane Orgill ◽  
Kirandeep van den Eshof ◽  
...  

Objective Rapid implementation of home sleep studies during the first UK COVID-19 ‘lockdown’: completion rates, family feedback and factors that predict success. Design We included all patients who had a sleep study conducted at home instead of as an inpatient from 30 March 2020 to 30 June 2020. Studies with less than 4 hours of data available for analysis were defined as ‘unsuccessful’. Results 137 patients were included. 96 underwent home respiratory polygraphy (HRP), median age 5.5 years; 41 had oxycapnography (O2/CO2), median age 5 years. 56% of HRP and 83% of O2/CO2 studies were successful. A diagnosis of autism predicted a lower success rate (29%), as did age under 5 years. Conclusion Switching studies rapidly from an inpatient to a home environment is possible, but there are several challenges, including a higher failure rate in younger children and in those with neurodevelopmental disorders.


Author(s):  
Hans-Jörg Schurr ◽  
Mathias Fleury ◽  
Martin Desharnais

Abstract We present a fast and reliable reconstruction in Isabelle of proofs generated by the SMT solver veriT. The fine-grained proof format makes the reconstruction simple and efficient. For typical proof steps, such as arithmetic reasoning and skolemization, our reconstruction can avoid expensive search. By skipping proof steps that are irrelevant for Isabelle, the performance of proof checking is improved. Our method increases the success rate of Sledgehammer by halving the failure rate, and reduces the checking time by 13%. We provide a detailed evaluation of the reconstruction time for each rule; the runtime is influenced both by simple rules that appear very often and by common complex rules.


2018 ◽  
Vol 19 (1) ◽  
pp. 264-273 ◽  
Author(s):  
M. Kutyłowska

Abstract This paper presents the results of failure rate prediction by means of support vector machines (SVM), a non-parametric regression method. A hyperplane is used to divide the whole area in such a way that objects of different affiliation are separated from one another. The number of support vectors determines the complexity of the relations between the dependent and independent variables. The calculations were performed using Statistica 12.0. Operational data for one selected zone of the water supply system for the period 2008–2014 were used for forecasting. The whole data set (in which data on distribution pipes were distinguished from those on house connections) for the years 2008–2014 was randomly divided into two subsets: a training subset of 75% (5 years) and a testing subset of 25% (2 years). The dependent variables (λr for the distribution pipes and λp for the house connections) were forecast using independent variables (the total length, Lr and Lp, and the number of failures, Nr and Np, of the distribution pipes and the house connections, respectively). Four kinds of kernel functions (linear, polynomial, sigmoidal and radial basis functions) were tested. The SVM model based on the linear kernel function was found to be optimal for predicting the failure rate of each kind of water conduit. This model's maximum relative error in predicting the failure rates λr and λp during the testing stage amounted to about 4% and 14%, respectively. The average experimental failure rates over the whole analysed period amounted to 0.18, 0.44, 0.17 and 0.24 fail./(km·year) for the distribution pipes, the house connections, and the distribution pipes made of PVC and of cast iron, respectively.
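The reported unit, fail./(km·year), is simply the number of failures normalized by conduit length and observation time: for example, 63 failures on 50 km over the 7-year period 2008–2014 give 0.18 fail./(km·year). A one-line sketch (the input numbers are illustrative, not the study's operational data):

```python
def failure_rate(n_failures, length_km, years):
    """Empirical failure rate in failures per kilometre per year,
    the unit used for the reported values, e.g. 0.18 fail./(km.year)."""
    return n_failures / (length_km * years)

# Illustrative numbers only: 63 failures on a 50 km network over 7 years.
print(failure_rate(63, 50.0, 7.0))  # -> 0.18
```

Rates of this form are the dependent variables (λr, λp) that the SVM regression model forecasts from the pipe lengths and failure counts.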


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0250369
Author(s):  
Andreas Moritz ◽  
Luise Holzhauser ◽  
Tobias Fuchte ◽  
Sven Kremer ◽  
Joachim Schmidt ◽  
...  

Background Video laryngoscopy is an effective tool in the management of the difficult pediatric airway. However, evidence to guide the choice of the most appropriate video laryngoscope (VL) for airway management in pediatric patients with Pierre Robin syndrome (PRS) is insufficient. Therefore, the aim of this study was to compare the efficacy of the Glidescope® Core™ with a hyperangulated blade, the C-MAC® with a nonangulated Miller blade (C-MAC® Miller) and a conventional Miller laryngoscope when used by anesthetists with limited and extensive experience in a simulated Pierre Robin sequence. Methods Forty-three anesthetists with limited experience and forty-three anesthetists with extensive experience participated in our randomized crossover manikin trial. Each performed endotracheal intubation with the Glidescope® Core™ with a hyperangulated blade, the C-MAC® with a Miller blade and the conventional Miller laryngoscope. “Time to intubate” was the primary endpoint. Secondary endpoints were “time to vocal cords”, “time to ventilate”, overall success rate, number of intubation attempts and optimization maneuvers, Cormack-Lehane score, severity of dental trauma and subjective impressions. Results Both the hyperangulated and the nonangulated VL provided superior intubation conditions. The Glidescope® Core™ enabled the best glottic view, caused the least dental trauma and significantly decreased the “time to vocal cords”. However, when used by anesthetists with extensive previous experience, the failure rate of intubation was 14% with the Glidescope® Core™, 4.7% with the Miller laryngoscope and only 2.3% with the C-MAC® Miller. In addition, the “time to intubate”, the “time to ventilate” and the number of optimization maneuvers were significantly increased using the Glidescope® Core™. In the hands of anesthetists with limited previous experience, the failure rate was 11.6% with the Glidescope® Core™ and 7% with the Miller laryngoscope. Using the C-MAC® Miller, the overall success rate increased to 100%. No differences in the “time to intubate” or “time to ventilate” were observed. Conclusions The nonangulated C-MAC® Miller facilitated correct placement of the endotracheal tube and showed the highest overall success rate. Our results therefore suggest that the C-MAC® Miller could be beneficial and may contribute to increased safety in the airway management of infants with PRS when used by anesthetists with both limited and extensive experience.


Author(s):  
Apoorv Durga ◽  
M. L. Singla

Usage of social media within organizations' value chains has been increasing rapidly, and substantial benefits and savings are projected from it. As a result, there is also a lot of hype, typical of any new web phenomenon. On the other hand, saner voices caution against excessive hype and point to the high failure rate of social media initiatives. A lack of best practices or frameworks and an incomplete understanding of how to make the best use of social media are some of the reasons cited for this high failure rate. In addition, several other aspects related to governance, people, and processes need to be addressed to improve the success rate of these initiatives. Effective implementation of a social media initiative therefore means addressing all of the aspects that relate to governance, people, and processes. The authors use a construct, “Social Media Readiness,” that encapsulates these aspects. This chapter summarizes research that shows how readiness can impact social media use.


2019 ◽  
Vol 4 (1) ◽  
pp. e001029 ◽  
Author(s):  
Daniel J Carter ◽  
Rhian Daniel ◽  
Ana W Torrens ◽  
Mauro N Sanchez ◽  
Ethel Leonor N Maciel ◽  
...  

Background Evidence suggests that social protection policies such as Brazil’s Bolsa Família Programme (BFP), a governmental conditional cash transfer, may play a role in tuberculosis (TB) elimination; however, study limitations hamper conclusions. This paper uses a quasi-experimental approach to more rigorously evaluate the effect of the BFP on the TB treatment success rate. Methods Propensity scores were estimated from a complete-case logistic regression using covariates from a linked data set, including Brazil’s TB notification system (SINAN), the national registry of those in poverty (CadUnico) and the BFP payroll. The average effect of treatment on the treated was estimated as the difference in TB treatment success rate between matched groups (ie, the control and exposed patients, n=2167). Results Patients with TB receiving the BFP showed a treatment success rate 10.58 percentage points higher (95% CI 4.39 to 16.77) than patients with TB not receiving the BFP. This association was robust to sensitivity analyses. Conclusions This study further confirms a positive relationship between the provision of conditional cash transfers and the TB treatment success rate. Further research is needed to understand how to enhance access to social protection so as to optimise its public health impact.
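The average effect of treatment on the treated (ATT) from propensity score matching is the mean outcome difference between each treated patient and their matched control. A minimal sketch using 1:1 nearest-neighbour matching with replacement on precomputed scores; the data and the matching variant are illustrative assumptions, not the study's exact estimator.

```python
def att_nearest_neighbour(treated, controls):
    """ATT via 1:1 nearest-neighbour matching on the propensity score
    (with replacement). treated/controls are lists of
    (propensity_score, outcome) pairs; scores are assumed precomputed,
    e.g. by logistic regression on the covariates."""
    diffs = []
    for score, outcome in treated:
        _, matched_outcome = min(controls, key=lambda c: abs(c[0] - score))
        diffs.append(outcome - matched_outcome)
    return sum(diffs) / len(diffs)

# Toy example: outcome is 1 for treatment success, 0 otherwise.
treated = [(0.30, 1), (0.52, 1), (0.70, 0)]
controls = [(0.28, 1), (0.50, 0), (0.72, 0), (0.90, 1)]
print(att_nearest_neighbour(treated, controls))
```

With a binary success outcome, the ATT is directly a difference in success rates, which is how a result such as "10.58 percentage points higher" is read.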

