Experimental Results for the Multipath Performance of Galileo Signals Transmitted by GIOVE-A Satellite

2008 ◽  
Vol 2008 ◽  
pp. 1-13 ◽  
Author(s):  
Andrew Simsky ◽  
David Mertens ◽  
Jean-Marie Sleewaegen ◽  
Martin Hollreiser ◽  
Massimo Crisci

Analysis of GIOVE-A signals is an important part of the in-orbit validation phase of the Galileo program. GIOVE-A transmits ranging signals using all the code modulations currently foreseen for the future Galileo constellation and provides a foretaste of their performance in real-life applications. Thanks to the use of advanced code modulations, the Galileo ranging signals offer a significant improvement in multipath performance compared with current GPS. In this paper, we summarize the results of about 1.5 years of observations using data from four antenna sites. The analysis of the elevation dependence of averaged multipath errors and of the multipath time series for static data indicates significant suppression of long-range multipath by the best Galileo codes. The E5AltBOC signal is confirmed as the best multipath suppressor across all the data sets. According to the observations, the Galileo signals can be classified into three groups: high-performance (E5AltBOC, L1A, E6A), medium-performance (E6BC, E5a, E5b), and L1BC, which has the lowest performance among the Galileo signals but is still better than GPS C/A. Car tests demonstrated that, for kinematic multipath, the inter-signal differences are much less pronounced. The phase multipath performance is also discussed.

1998 ◽  
Vol 6 (A) ◽  
pp. A13-A19 ◽  
Author(s):  
T.G. Axon ◽  
R. Brown ◽  
S.V. Hammond ◽  
S.J. Maris ◽  
F. Ting

The early use of near infrared (NIR) spectroscopy in the pharmaceutical industry was for raw material identification, later moving on to conventional “calibrations” for various ingredients in a variety of sample types. The approach throughout this development has always been “conventional”, with one NIR measurement directly replacing some other, slower method, be it mid-IR identification or determinations by Karl Fischer titration, high performance liquid chromatography (HPLC), etc. A significant change in approach was demonstrated by Plugge and Van der Vlies [1] in 1993, where a qualitative system was used to provide “quantitative-like” answers for the potency of a drug substance. Following on from that key paper, there has been a realisation that the qualitative analysis ability of NIR has the potential to be a powerful tool for process investigation, control and validation. The final step has been to develop “model-free” approaches that consider individual data sets as unique systems and present the opportunity for NIR to escape the shackles of “calibration” in one form or another. The use of qualitative, or model-free, approaches to NIR spectroscopy provides an effective tool for satisfying many of the demands of modern pharmaceutical production. “Straight through production”, “right first time”, “short cycle time” and “total quality management” philosophies can be realised. Eventually, the prospect of parametric release may be materialised with a strong contribution from NIR spectroscopy. This paper illustrates the above points with some real-life examples.


2011 ◽  
Vol 135-136 ◽  
pp. 522-527 ◽  
Author(s):  
Gang Zhang ◽  
Shan Hong Zhan ◽  
Chun Ru Wang ◽  
Liang Lun Cheng

Ensemble pruning searches for a selective subset of members that performs as well as, or better than, the ensemble of all members. However, the accuracy/diversity pruning framework does not consider the generalization ability of the target ensemble, and there is moreover no clear relationship between the two. In this paper, we prove that an ensemble formed from members with better generalization ability itself generalizes better. We adopt learning with both labeled and unlabeled data to improve the generalization ability of member learners. A data-dependent kernel determined by a set of unlabeled points is plugged into the individual kernel learners to improve generalization ability, and ensemble pruning is then carried out as in much previous work. The proposed method is suitable for both the single-instance and multi-instance learning frameworks. Experimental results on 10 UCI data sets for single-instance learning and 4 data sets for multi-instance learning show that the subensemble formed by the proposed method is effective.
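As a toy illustration of the pruning step (not the kernel-based method the paper develops), the sketch below ranks members by held-out accuracy, a crude stand-in for generalization ability, and keeps the top k; all names are hypothetical.

```python
# Illustrative sketch: prune an ensemble by keeping the k members with the
# best held-out accuracy, on the premise that members that generalize better
# yield a better-generalizing subensemble.

def accuracy(predict, X, y):
    """Fraction of points a single member classifies correctly."""
    return sum(1 for xi, yi in zip(X, y) if predict(xi) == yi) / len(y)

def prune_ensemble(members, X_val, y_val, k):
    """Return the k members ranked highest by validation accuracy."""
    ranked = sorted(members, key=lambda m: accuracy(m, X_val, y_val), reverse=True)
    return ranked[:k]

def majority_vote(members, x):
    """Classify x by simple majority vote over the retained members."""
    votes = [m(x) for m in members]
    return max(set(votes), key=votes.count)
```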


2020 ◽  
Vol 496 (1) ◽  
pp. 629-637
Author(s):  
Ce Yu ◽  
Kun Li ◽  
Shanjiang Tang ◽  
Chao Sun ◽  
Bin Ma ◽  
...  

Time series data of celestial objects are commonly used to study valuable and unexpected objects such as extrasolar planets and supernovae in time domain astronomy. Due to the rapid growth of data volume, traditional manual methods are becoming extremely hard and infeasible for continuously analysing accumulated observation data. To meet such demands, we designed and implemented a special tool named AstroCatR that can efficiently and flexibly reconstruct time series data from large-scale astronomical catalogues. AstroCatR can load original catalogue data from Flexible Image Transport System (FITS) files or databases, match each item to determine which object it belongs to, and finally produce time series data sets. To support high-performance parallel processing of large-scale data sets, AstroCatR uses an extract-transform-load (ETL) pre-processing module to create sky zone files and balance the workload. The matching module uses an overlapped indexing method and an in-memory reference table to improve accuracy and performance. The output of AstroCatR can be stored in CSV files or transformed into other formats as needed. At the same time, the module-based software architecture ensures the flexibility and scalability of AstroCatR. We evaluated AstroCatR with actual observation data from the three Antarctic Survey Telescopes (AST3). The experiments demonstrate that AstroCatR can efficiently and flexibly reconstruct all time series data by setting relevant parameters and configuration files. Furthermore, the tool is approximately 3× faster than methods using relational database management systems at matching massive catalogues.
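A minimal sketch of zone-based catalogue matching in the spirit of the pipeline described above (zone files plus an in-memory reference table): the sky is cut into declination zones, and each detection is matched against known objects in its own and neighbouring zones. The zone height, match radius, and all function names are illustrative assumptions, not AstroCatR's actual interface.

```python
# Zone-based cross-matching sketch: index known objects by declination zone,
# then match each new detection against its own zone and the two adjacent
# ("overlapped") zones so objects near a zone boundary are not missed.
import math
from collections import defaultdict

ZONE_HEIGHT_DEG = 0.5  # assumed zone height

def zone_of(dec):
    return int(math.floor(dec / ZONE_HEIGHT_DEG))

def build_reference_table(objects):
    """Index known objects (id, ra, dec) by declination zone."""
    table = defaultdict(list)
    for obj_id, ra, dec in objects:
        table[zone_of(dec)].append((obj_id, ra, dec))
    return table

def match(detection, table, radius_deg=0.001):
    """Return the id of the nearest known object within radius, else None."""
    ra, dec = detection
    best_id, best_d = None, radius_deg
    for z in (zone_of(dec) - 1, zone_of(dec), zone_of(dec) + 1):
        for obj_id, ora, odec in table.get(z, []):
            # small-angle separation; RA scaled by cos(dec)
            d = math.hypot((ra - ora) * math.cos(math.radians(dec)), dec - odec)
            if d <= best_d:
                best_id, best_d = obj_id, d
    return best_id
```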


2015 ◽  
Author(s):  
Andrew MacDonald

PhilDB is an open-source time series database. It supports the storage of time series datasets that are dynamic; that is, it records updates to existing values in a log as they occur. Recent open-source systems, such as InfluxDB and OpenTSDB, have been developed to indefinitely store long-period, high-resolution time series data. Unfortunately, they require a large initial installation investment because they are designed to operate over a cluster of servers to achieve high-performance writing of static data in real time. In essence, they take a ‘big data’ approach to storage and access. Other open-source projects for handling time series data that do not take the ‘big data’ approach are also relatively new and are complex or incomplete. None of these systems gracefully handles revision of existing data while tracking which values changed. Unlike ‘big data’ solutions, PhilDB has been designed for single-machine deployment on commodity hardware, reducing the barrier to deployment. PhilDB eases the loading of data by using an intelligent data write method: it preserves existing values during updates and abstracts away the complexity required to log changes to data values. PhilDB improves dataset access in two ways: fast reads make it practical to select data for analysis, and simple read methods minimise the effort required to extract data. PhilDB takes a unique approach to metadata tracking, namely optional attribute attachment, which makes it easier to scale to a wide variety of stored data: time series can be loaded as time series instances with minimal initial metadata, and additional attributes can later be created and attached to differentiate the instances as more varied data is added. PhilDB is written in Python, leveraging existing libraries. This paper describes the general approach, architecture, and philosophy of the PhilDB software.
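The "intelligent write" idea can be sketched as follows: an update overwrites the current value but appends the superseded value to a change log, preserving the revision history. The class and method names below are hypothetical, not PhilDB's actual API.

```python
# Sketch of log-preserving time series writes: updates replace the current
# value, while the superseded value is appended to a change log.
import datetime

class TimeSeriesStore:
    def __init__(self):
        self.current = {}   # timestamp -> value
        self.log = []       # (timestamp, old_value, new_value, logged_at)

    def write(self, series):
        """Write {timestamp: value}; log only entries whose value changed."""
        for ts, value in series.items():
            old = self.current.get(ts)
            if old is not None and old != value:
                self.log.append((ts, old, value, datetime.datetime.now()))
            self.current[ts] = value

    def read(self):
        """Return the current values sorted by timestamp."""
        return dict(sorted(self.current.items()))
```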


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 825 ◽  
Author(s):  
Fadi Al Machot ◽  
Mohammed R. Elkobaisi ◽  
Kyandoghere Kyamakya

Due to significant advances in sensor technology, studies of activity recognition have gained interest and maturity in the last few years. Existing machine learning algorithms have demonstrated promising results by classifying activities whose instances have already been seen during training. Activity recognition methods for real-life settings should cover a growing number of activities in various domains, a significant part of whose instances will not be present in the training data set. However, covering all possible activities in advance is a complex and expensive task. Concretely, we need a method that can extend the learning model to detect unseen activities without prior knowledge of sensor readings for those activities. In this paper, we introduce an approach that leverages sensor data to discover new, unseen activities that were not present in the training set. We show that sensor readings can lead to promising results for zero-shot learning, whereby the necessary knowledge is transferred from seen to unseen activities using semantic similarity. The evaluation conducted on two data sets extracted from the well-known CASAS datasets shows that the proposed zero-shot learning approach achieves high performance in recognizing new, unseen activities (i.e., activities not present in the training dataset).
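The semantic-similarity transfer step might look like the following sketch: once a sensor reading has been mapped to a semantic attribute vector, it is assigned to the unseen activity whose own attribute vector is most similar under cosine similarity. The attribute vectors and names below are invented for illustration and are not taken from the CASAS data.

```python
# Zero-shot assignment sketch: pick the unseen activity whose semantic
# attribute vector is closest (by cosine similarity) to the semantic vector
# inferred from the sensor reading.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_classify(semantic_vec, unseen_activities):
    """unseen_activities: {name: attribute_vector}; return best-matching name."""
    return max(unseen_activities,
               key=lambda name: cosine(semantic_vec, unseen_activities[name]))
```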


2014 ◽  
Vol 1044-1045 ◽  
pp. 1149-1152
Author(s):  
Dong Mei Wu ◽  
Xin Zhou

Shaking leaves are the biggest source of interference for early forest-smoke video detection. The moving-average method, the Gaussian mixture method, and their improved variants are often used to update the background, but their background-subtraction performance is poor. In this paper, the codebook algorithm is applied to extract the foreground for early forest-smoke detection: quantization techniques build a background model from pixel time series, and the foreground image is then obtained through background subtraction. Multiple video tests show that the filtering performance, noise resistance, and accuracy are better than those of the methods above.
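A toy, single-pixel version of the codebook idea: quantized intensity ranges observed during training form codewords, and a new value that no codeword accounts for (within a tolerance) is labelled foreground. The tolerance value and function names are illustrative assumptions, and real codebook models also track colour and codeword staleness.

```python
# Single-pixel codebook background model: training samples are quantized
# into (low, high) intensity codewords; a test value outside every
# codeword's tolerance band is foreground.

TOLERANCE = 10  # assumed brightness tolerance per codeword

def train_codebook(samples):
    """Quantize training samples into (low, high) codewords."""
    codebook = []
    for s in samples:
        for i, (low, high) in enumerate(codebook):
            if low - TOLERANCE <= s <= high + TOLERANCE:
                codebook[i] = (min(low, s), max(high, s))  # widen codeword
                break
        else:
            codebook.append((s, s))  # start a new codeword
    return codebook

def is_foreground(value, codebook):
    """A pixel is foreground if no codeword accounts for it."""
    return not any(low - TOLERANCE <= value <= high + TOLERANCE
                   for low, high in codebook)
```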


2014 ◽  
Vol 574 ◽  
pp. 728-733
Author(s):  
Shu Xia Lu ◽  
Cai Hong Jiao ◽  
Le Tong ◽  
Yang Fan Zhou

The Core Vector Machine (CVM) can handle large data sets by finding a minimum enclosing ball (MEB), but one drawback is that CVM is very sensitive to outliers. To tackle this problem, we propose a novel Position-Regularized Core Vector Machine (PCVM). In the proposed PCVM, the data points are regularized by assigning a position-based weighting. Experimental results on several benchmark data sets show that the performance of PCVM is much better than that of CVM.
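One plausible reading of position-based weighting (not necessarily the authors' exact formula) is to down-weight points far from the data centroid, so that outliers contribute less when the enclosing ball is fitted:

```python
# Illustrative position-based weighting: each point's weight decays with its
# distance from the data centroid; weights are normalized to sum to 1.
import math

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def position_weights(points):
    """Weight w_i = 1 / (1 + distance to centroid), normalized."""
    c = centroid(points)
    raw = [1.0 / (1.0 + math.dist(p, c)) for p in points]
    total = sum(raw)
    return [w / total for w in raw]
```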


2021 ◽  
pp. 1-20
Author(s):  
Fabian Kai-Dietrich Noering ◽  
Yannik Schroeder ◽  
Konstantin Jonas ◽  
Frank Klawonn

In technical systems, the analysis of similar situations is a promising technique for gaining information about the system’s state, health or wear. Very often, situations cannot be defined in advance but need to be discovered as recurrent patterns within time series data of the system under consideration. This paper addresses the assessment of different approaches to discovering frequent variable-length patterns in time series. Because of the success of artificial neural networks (NNs) in various research fields, a particular focus of this work is the applicability of NNs to the problem of pattern discovery in time series. We therefore applied and adapted a convolutional autoencoder and compared it to classical non-learning approaches based on Dynamic Time Warping, on time series discretization, and on the Matrix Profile. These non-learning approaches were also adapted to fulfil our requirements, such as the discovery of potentially time-scaled patterns in noisy time series. We evaluated the performance (quality, computing time, effort of parametrization) of these approaches in an extensive test with synthetic data sets. Additionally, the transferability to other data sets is tested using real-life vehicle data. We demonstrate the ability of convolutional autoencoders to discover patterns in an unsupervised way. Furthermore, the tests showed that the autoencoder discovers patterns with a quality similar to that of the classical non-learning approaches.
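The Matrix Profile mentioned above can be sketched naively: for each window, record the distance to its nearest non-overlapping neighbour, so that recurring patterns show up as low values. Efficient computation, z-normalization, and the time scaling the paper handles are all omitted here.

```python
# Naive O(n^2) matrix profile: profile[i] is the Euclidean distance from
# window i to its nearest neighbour outside an exclusion zone that blocks
# trivial self-matches.
import math

def matrix_profile(ts, m):
    """Compute the matrix profile of time series ts with window length m."""
    n = len(ts) - m + 1
    profile = []
    for i in range(n):
        best = math.inf
        for j in range(n):
            if abs(i - j) < m:  # exclusion zone: skip overlapping windows
                continue
            d = math.dist(ts[i:i + m], ts[j:j + m])
            best = min(best, d)
        profile.append(best)
    return profile
```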


Author(s):  
Umar Kabir ◽  
Terna Godfrey IEREN

This article proposes a new distribution, referred to as the transmuted Exponential Lomax distribution (TELD), as an extension of the popular Lomax distribution in its Exponential Lomax form, using the quadratic rank transmutation map proposed and studied in earlier research. Using the transmutation map, we define the probability density function (PDF) and cumulative distribution function (CDF) of the transmuted Exponential Lomax distribution. Some properties of the new distribution are derived and studied extensively. The distribution's parameters are estimated by the method of maximum likelihood. The performance of the proposed distribution is checked against some other generalizations of the Lomax distribution on three real-life data sets. The results indicate that the TELD performs better than the competing distributions, namely the power Lomax, Exponential Lomax, and Lomax distributions.
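The quadratic rank transmutation map referenced above has a standard form: F(x) = (1 + λ)G(x) − λG(x)², with |λ| ≤ 1, where G is the baseline CDF. The sketch below applies it to a plain exponential baseline purely for illustration; the paper's baseline is the Exponential Lomax CDF.

```python
# Quadratic rank transmutation map: build a transmuted CDF from a baseline
# CDF G and a transmutation parameter lam in [-1, 1].
import math

def transmuted_cdf(G, lam):
    """Return the transmuted CDF F(x) = (1 + lam)*G(x) - lam*G(x)**2."""
    assert -1.0 <= lam <= 1.0
    return lambda x: (1.0 + lam) * G(x) - lam * G(x) ** 2

def exponential_cdf(rate):
    """Illustrative baseline: exponential CDF with the given rate."""
    return lambda x: 1.0 - math.exp(-rate * x) if x > 0 else 0.0
```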


Author(s):  
Zhou Yu ◽  
Jun Yu ◽  
Chenchao Xiang ◽  
Zhou Zhao ◽  
Qi Tian ◽  
...  

Visual grounding aims to localize an object in an image referred to by a textual query phrase. Various visual grounding approaches have been proposed, and the problem can be modularized into a general framework: proposal generation, multi-modal feature representation, and proposal ranking. Of these three modules, most existing approaches focus on the latter two, while the importance of proposal generation is generally neglected. In this paper, we rethink what properties make a good proposal generator. We introduce diversity and discrimination simultaneously when generating proposals, and in doing so propose the Diversified and Discriminative Proposal Network (DDPN) model. Based on the proposals generated by DDPN, we propose a high-performance baseline model for visual grounding and evaluate it on four benchmark datasets. Experimental results demonstrate that our model delivers significant improvements on all the tested datasets (e.g., 18.8% improvement on ReferItGame and 8.2% improvement on Flickr30k Entities over the existing state of the art, respectively).

