Supervised Selection of Dynamic Features, with an Application to Telecommunication Data Preparation

Author(s):
Sylvain Ferrandiz
Marc Boullé

2021
Author(s):
Maximilian Peter Dammann
Wolfgang Steger
Ralph Stelzer

Abstract Product visualization in AR/VR applications requires a largely manual process of data preparation. Previous publications focus on error-free triangulation or transformation of product structure data and display attributes for AR/VR applications. This paper focuses on the preparation of the required geometry data. In this context, a significant reduction in effort can be achieved through automation. The steps of geometry preparation are identified and examined with respect to their automation potential. In addition, possible couplings of sub-steps are discussed. Based on these explanations, a structure for the geometry preparation process is proposed. With this structured preparation process, it becomes possible to consider the available computing power of the target platform during geometry preparation. The number of objects to be rendered, the tessellation quality, and the level of detail can be controlled by the automated choice of transformation parameters. Through this approach, tedious preparation tasks and iterative performance optimization can be avoided, which also simplifies the integration of AR/VR applications into product development and use. We present a software tool in which partial steps of the automatic preparation are already implemented. After an analysis of the product structure of a CAD file, the transformation is executed for each component. Functions implemented so far allow, for example, the selection of assemblies and parts based on filter options, the transformation of geometries in batch mode, the removal of certain details, and the creation of UV maps. Flexibility, transformation quality, and time savings are described and discussed.
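The filter-based part selection and budget-driven choice of tessellation parameters described in the abstract could be sketched roughly as below. This is an illustrative sketch only; the data model, function names, and triangle budgets are assumptions for the example, not the authors' tool or its API.

```python
# Sketch: select parts by a name filter, then pick a tessellation quality
# per component from a target platform's triangle budget (batch mode).
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    est_triangles: int  # estimated triangle count at full tessellation quality

def select_parts(parts, name_filter=""):
    """Filter step: keep parts whose name contains the filter string."""
    return [p for p in parts if name_filter in p.name]

def choose_quality(part, triangle_budget):
    """Scale tessellation quality (0..1) so the part fits the budget."""
    return min(1.0, triangle_budget / part.est_triangles)

assembly = [Part("housing", 80_000), Part("gear_main", 40_000),
            Part("screw_m4", 2_000)]

# Batch mode: transform every selected component for a low-power AR target.
for part in select_parts(assembly):
    q = choose_quality(part, triangle_budget=20_000)
    print(f"{part.name}: tessellation quality {q:.2f}")
```

A real pipeline would also cover the level-of-detail generation, detail removal, and UV-map creation steps the abstract lists, but the same budget-driven parameter choice applies to each.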



2018
Vol 11 (11)
pp. 6203-6230
Author(s):
Simon Ruske
David O. Topping
Virginia E. Foot
Andrew P. Morse
Martin W. Gallagher

Abstract. Primary biological aerosols, including bacteria, fungal spores and pollen, have important implications for public health and the environment. Such particles may have different concentrations of chemical fluorophores and will respond differently in the presence of ultraviolet light, potentially allowing different types of biological aerosol to be discriminated. Development of ultraviolet light-induced fluorescence (UV-LIF) instruments such as the Wideband Integrated Bioaerosol Sensor (WIBS) has allowed size, morphology and fluorescence measurements to be collected in real time. However, without studying instrument responses in the laboratory, it is unclear to what extent different types of particles can be discriminated. Collection of laboratory data is vital to validate any approach used to analyse the data and to ensure that the available data are utilized as effectively as possible. In this paper a variety of methodologies are tested on a range of particles collected in the laboratory. Hierarchical agglomerative clustering (HAC) has previously been applied to UV-LIF data in a number of studies and is tested alongside other algorithms that could be used to solve the classification problem: Density-Based Spatial Clustering of Applications with Noise (DBSCAN), k-means and gradient boosting. Whilst HAC was able to effectively discriminate between reference narrow-size-distribution PSL particles, yielding a classification error of only 1.8 %, similar results were not obtained when testing on laboratory-generated aerosol, where the classification error was found to be between 11.5 % and 24.2 %. Furthermore, there is a large uncertainty in this approach in terms of the data preparation and the cluster index used, and we were unable to attain consistent results across the different sets of laboratory-generated aerosol tested. The lowest classification errors were obtained using gradient boosting, where the misclassification rate was between 4.38 % and 5.42 %. The largest contribution to the error, in the case of the higher misclassification rate, came from the pollen samples, where 28.5 % of the samples were incorrectly classified as fungal spores. The technique was robust to changes in data preparation provided a fluorescence threshold was applied to the data. In the event that laboratory training data are unavailable, DBSCAN was found to be a potential alternative to HAC. For one of the data sets, where 22.9 % of the data were left unclassified, we were able to produce three distinct clusters, obtaining a classification error of only 1.42 % on the classified data. These results could not be replicated for the other data set, where 26.8 % of the data were not classified and a classification error of 13.8 % was obtained. This method, like HAC, also appeared to be heavily dependent on data preparation, requiring a different selection of parameters depending on the preparation used. Further analysis will also be required to confirm our selection of the parameters when using this method on ambient data. There is a clear need for the collection of additional laboratory-generated aerosol to improve interpretation of current databases and to aid in the analysis of data collected from an ambient environment. New instruments with greater resolution are likely to improve on current discrimination between pollen, bacteria and fungal spores, and even between different species; however, the need for extensive laboratory data sets will grow as a result.
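The comparison the abstract draws, unsupervised clustering (HAC, DBSCAN) versus supervised gradient boosting, can be illustrated with scikit-learn on synthetic stand-in data. This is a minimal sketch under assumed data, not the paper's WIBS measurements or its actual pipeline; the three blobs merely stand in for particle classes.

```python
# Sketch: unsupervised clustering vs. supervised gradient boosting on
# synthetic "fluorescence-like" features (three stand-in particle classes).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering, DBSCAN
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_blobs(n_samples=600, centers=3, cluster_std=1.2, random_state=0)

# Unsupervised: hierarchical agglomerative clustering and DBSCAN.
hac_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
db_labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(X)  # -1 = unclassified

# Supervised: gradient boosting, the best-performing approach in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = gb.score(X_te, y_te)
print(f"gradient boosting accuracy: {accuracy:.3f}")
print(f"DBSCAN left {np.mean(db_labels == -1):.1%} of points unclassified")
```

Note the asymmetry the abstract highlights: gradient boosting needs labelled (laboratory) training data, whereas DBSCAN needs none but may leave a fraction of points unclassified, mirroring the 22.9 % and 26.8 % figures reported.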


Author(s):
Soo-Young Lee
Chandra Shahard Dhir
Paresh Chandra Barman
Sangkyun Lee

2018
Vol 26 (2)
pp. 87-94
Author(s):
Zhonghai He
Zhenhe Ma
Mengchao Li
Yang Zhou

For spectroscopic measurements, representative samples are needed in the course of building a calibration model to guarantee accurate predictions. The most widely used selection method is the Kennard-Stone method, which can be used before a reference measurement is done. In this paper, a method termed semi-supervised selection is presented to determine whether a sample should be added to the calibration set. The selection procedure has two steps. First, part of the population of samples is selected using the Kennard-Stone method, and their concentrations are measured. Second, another part of the population of samples is selected based on the scalar value distribution of the net analyte signal. If the net analyte signal of a sample is distinctive compared to the existing net analyte signal values, then the sample is added to the calibration set. The analyte of interest in the sample is then measured so that the sample can be used as a calibration sample. By a validation test, it is shown that the presented method is more efficient than random selection and Kennard-Stone selection. As a result, both the time and the money spent on reference measurements are saved.
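The first step of the procedure, Kennard-Stone selection, works by seeding with the two most distant samples and then repeatedly adding the candidate whose nearest selected neighbour is farthest away. A minimal sketch of that algorithm follows; the function and the mock spectra are ours for illustration, not the paper's code or data.

```python
# Sketch of Kennard-Stone sample selection for a calibration set.
import numpy as np

def kennard_stone(X, n_select):
    """Return indices of n_select rows of X that span the feature space."""
    X = np.asarray(X, dtype=float)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Seed with the two mutually most distant samples.
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]
    while len(selected) < n_select:
        remaining = [k for k in range(len(X)) if k not in selected]
        # Distance from each candidate to its nearest selected sample...
        min_d = dist[np.ix_(remaining, selected)].min(axis=1)
        # ...then take the candidate farthest from the selected set.
        selected.append(remaining[int(np.argmax(min_d))])
    return selected

rng = np.random.default_rng(0)
spectra = rng.normal(size=(50, 10))          # 50 mock spectra, 10 wavelengths
calibration_idx = kennard_stone(spectra, 8)  # indices for the calibration set
```

The paper's second step would then extend `calibration_idx` with samples whose net analyte signal is distinctive relative to those already selected, so only informative samples incur a reference measurement.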


2009
Vol 38 (1)
pp. 118-137
Author(s):
A. F. Quiceno-Manrique
J. I. Godino-Llorente
M. Blanco-Velasco
G. Castellanos-Dominguez

2011
Author(s):
J. R. Orozco-Arroyave
S. Murillo-Rendón
A. M. Álvarez-Meza
J. D. Arias-Londoño
E. Delgado-Trejos
...

2015
Vol 54 (03)
pp. 215-220
Author(s):
M. Matteucci
L. Mainardi
A. Tahirovic

Summary Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on “Biosignal Interpretation: Advanced Methods for Neural Signals and Images”. Objectives: The main objectives of the paper are the analysis of the amplitude spatial distribution of the P300 evoked potential over the scalp of a particular subject and the derivation of an averaged spatial distribution template for that subject. This template, which may differ between subjects, can help achieve more accurate P300 detection in all BCIs that inherently use spatial filtering to detect the P300 signal. Finally, the proposed averaging technique obtains an averaged spatial distribution template for a particular subject from only a few epochs, which makes it fast and usable without any prior training data, as would be required by a data enhancement technique. Methods: The method used in the proposed framework for averaging the spatial distribution of P300 evoked potentials is based on the statistical properties of independent components (ICs). These components are obtained using independent component analysis (ICA) from different target epochs. Results: This paper presents a novel averaging technique for the spatial distribution of P300 evoked potentials, based on the P300 signals obtained from different target epochs using the ICA algorithm. Such a technique provides a more reliable P300 spatial distribution for a subject of interest, which can be used either for improved spatial selection of ICs or for more accurate P300 detection and extraction. In addition, the experiments demonstrate that the values of spatial intensity computed by the proposed technique for the P300 signal converge after only a few target epochs for each electrode allocation. Such a speed of convergence allows the proposed algorithm to adapt easily to a subject of interest without any additional artificial data preparation prior to algorithm execution, as in the case of a data enhancement technique. Conclusion: The proposed technique averages the P300 spatial distribution for a particular subject over all electrode allocations. First, the technique combines P300-like components obtained by the ICA run within a target epoch to obtain an averaged P300 spatial distribution. Second, it averages the spatial distributions of P300 signals obtained from different target epochs to produce the final averaged template. Such a template can be useful for any BCI technique where spatial selection is used to detect evoked potentials.
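The core idea, run ICA per target epoch, keep the spatial pattern (mixing column) of the P300-like component, and average those patterns across epochs, can be sketched on simulated data. Everything below is an assumed toy setup (synthetic epochs, a template-correlation rule for picking the P300-like component), not the authors' actual selection criterion or recordings.

```python
# Sketch: average the spatial pattern of the P300-like IC across epochs.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_channels, n_samples, n_epochs = 8, 200, 5
t = np.linspace(0, 1, n_samples)
p300 = np.exp(-((t - 0.3) ** 2) / 0.005)       # toy template: bump ~300 ms
true_pattern = rng.normal(size=n_channels)     # fixed scalp distribution

patterns = []
for _ in range(n_epochs):
    noise = rng.normal(size=(n_channels, n_samples))
    epoch = np.outer(true_pattern, p300) + 0.3 * noise
    ica = FastICA(n_components=n_channels, random_state=0)
    sources = ica.fit_transform(epoch.T).T     # (components, samples)
    # Pick the component whose time course best matches the template.
    corr = np.abs([np.corrcoef(s, p300)[0, 1] for s in sources])
    k = int(np.argmax(corr))
    pattern = ica.mixing_[:, k]                # spatial distribution of IC k
    pattern = pattern * np.sign(pattern @ true_pattern)  # fix ICA sign flip
    patterns.append(pattern / np.linalg.norm(pattern))

avg_pattern = np.mean(patterns, axis=0)        # averaged spatial template
```

The sign normalization matters because ICA recovers components only up to sign and scale; without it, patterns from different epochs could cancel in the average. In practice the sign would be fixed against the emerging template rather than against a known ground-truth pattern.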

