Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets

2014, Vol 44 (1), pp. 1-20
Author(s):  
Shitong Wang ◽  
Jun Wang ◽  
Fu-lai Chung
Author(s):  
Sahar Asadi ◽  
Matteo Reggente ◽  
Cyrill Stachniss ◽  
Christian Plagemann ◽  
Achim J. Lilienthal

Gas distribution models can provide comprehensive information about a large number of gas concentration measurements, highlighting, for example, areas of unusual gas accumulation. They can also help to locate gas sources and to plan where future measurements should be carried out. Current physical modeling methods, however, are computationally expensive and not applicable to real-world scenarios with real-time and high-resolution demands. This chapter reviews kernel methods that statistically model gas distribution. Gas measurements are treated as random variables, and the gas distribution is predicted at unseen locations using either a kernel density estimation or a kernel regression approach. The resulting statistical models do not make strong assumptions about the functional form of the gas distribution, such as the number or locations of gas sources. The major focus of this chapter is on two-dimensional models that provide estimates for the means and predictive variances of the distribution. Furthermore, three extensions to the presented kernel density estimation algorithm are described, which make it possible to include wind information, to extend the model to three dimensions, and to reflect time-dependent changes of the random process that generates the gas distribution measurements. All methods are discussed based on experimental validation using real sensor data.
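
As an illustration of the kernel regression variant, the following minimal sketch (not the authors' implementation; the Gaussian kernel, bandwidth value, and function name are assumptions) predicts a mean and a predictive variance of the gas concentration at unseen locations from a set of 2D measurements:

```python
import numpy as np

def gas_distribution_estimate(xy_meas, conc, xy_query, bandwidth=0.5):
    """Nadaraya-Watson kernel regression over 2D measurement locations.

    xy_meas : (N, 2) measurement positions
    conc    : (N,)   gas concentration readings
    xy_query: (M, 2) locations where the distribution is predicted
    Returns per-query mean and predictive-variance estimates.
    """
    # Squared distances between every query point and every measurement
    d2 = ((xy_query[:, None, :] - xy_meas[None, :, :]) ** 2).sum(axis=2)
    # Gaussian kernel weights; the bandwidth controls spatial smoothing
    w = np.exp(-0.5 * d2 / bandwidth**2)
    w_norm = w / np.maximum(w.sum(axis=1, keepdims=True), 1e-12)
    # Weighted mean of the readings at each query location
    mean = w_norm @ conc
    # Weighted spread of the readings around that local mean
    var = (w_norm * (conc[None, :] - mean[:, None]) ** 2).sum(axis=1)
    return mean, var
```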


2012, Vol 4 (2), pp. 119-137
Author(s):  
Shitong Wang ◽  
Zhaohong Deng ◽  
Fu-lai Chung ◽  
Wenjun Hu

Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10at%Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
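
A minimal sketch of the two per-pixel processing routes described above (NumPy-based and purely illustrative, not the paper's software; the function names and the 20-channel shift derived from the stated 1 eV offset at 20 channels/eV are assumptions):

```python
import numpy as np

CHANNELS_PER_EV = 20                       # dispersion stated in the text
OFFSET_CHANNELS = 1 * CHANNELS_PER_EV      # the 1 eV offset between the two spectra

def difference_spectrum(spec_a, spec_b):
    # Channel-to-channel gain artifacts, identical in both acquisitions,
    # cancel in the subtraction ("artifact-corrected difference spectrum").
    return spec_a - spec_b

def summed_spectrum(spec_a, spec_b, offset=OFFSET_CHANNELS):
    # Remove the energy offset numerically and add the spectra to recover a
    # normal spectrum; the shift direction depends on how the offset was applied.
    realigned = np.roll(spec_b, offset)
    realigned[:offset] = 0                 # channels with no overlapping data
    return spec_a + realigned
```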


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive X-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea-salt ageing, and halogen chemistry.

Aerosol particle data sets suffer from a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
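
As a hedged illustration of such a multivariate workflow (not the authors' software; the log transform, standardization, component count, and cluster count are assumptions chosen to cope with the skewed, zero-heavy EDS counts described above), a PCA-plus-clustering pass over a particle-by-element count matrix might look like:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_particles(eds_counts, n_components=3, n_clusters=8):
    """eds_counts: (n_particles, n_elements) EDS intensities, with many zeros
    from finite detection limits and widely differing ranges per element."""
    # Log-transform to soften the skewed, non-normal count distributions;
    # the +1 keeps zero-valued channels defined.
    x = np.log1p(eds_counts)
    # Standardize so high-abundance elements do not dominate the distances
    x = StandardScaler().fit_transform(x)
    # PCA reduces correlated elemental channels to a few factors
    scores = PCA(n_components=n_components).fit_transform(x)
    # k-means on the PCA scores; real particle clusters are unbalanced and
    # overlapping, so the partition usually needs inspection and refinement
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scores)
    return scores, labels
```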


Author(s):  
Mykhajlo Klymash ◽  
Olena Hordiichuk-Bublivska ◽  
Ihor Tchaikovskyi ◽  
Oksana Urikova

This article investigates the features of processing large arrays of information in distributed systems. Singular value decomposition (SVD) is used to reduce the amount of data processed by eliminating redundancy. The dependence of computational efficiency on the distributed setup was obtained using the MPI message-passing protocol and the MapReduce model of node interaction, and the efficiency of each technology was analyzed for different data sizes. Non-distributed systems are inefficient for large volumes of information because of their low computing performance, so it is proposed to use distributed systems together with singular value decomposition, which reduces the amount of information processed. The study of systems using the MPI protocol and the MapReduce model yielded the dependence of calculation time on the number of processes, which confirms the expediency of distributed computing when processing large data sets. It was also found that distributed systems using the MapReduce model work much more efficiently than MPI, especially with large amounts of data, while MPI performs calculations more efficiently for small amounts of information. As the data sets grow, it is advisable to use the MapReduce model.
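
A minimal sketch of the data-reduction step (illustrative only; the function name and the chosen rank are assumptions) using a rank-k truncated SVD, which keeps only the dominant singular values so the matrix can be stored and passed between nodes as two thin factors:

```python
import numpy as np

def truncated_svd_compress(data, rank):
    """Rank-k truncated SVD of a (m, n) data matrix.

    Returns two thin factors whose product approximates the original matrix,
    removing redundancy before the data is distributed to worker nodes.
    """
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    u_k = u[:, :rank] * s[:rank]        # (m, k) left factor, scaled by singular values
    vt_k = vt[:rank, :]                 # (k, n) right factor
    return u_k, vt_k                    # data is approximately u_k @ vt_k

# For example, a 10000 x 1000 matrix reduced to rank 50 shrinks from 10^7
# stored values to 10000*50 + 50*1000 = 550000 values before distribution.
```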

