Image Data Denoising using Center Pixel Weights in Non-Local Means and Smart Patch-based, Modern Machine Learning Method using Higher Order Singular Value Decomposition: A Review

2015 ◽  
Vol 115 (14) ◽  
pp. 22-25
Author(s):  
Jeetesh Kumar Rajak ◽  
Achint Chugh
2021 ◽  
Author(s):  
Bu-Yo Kim ◽  
Joo Wan Cha ◽  
Ki-Ho Chang

Abstract. In this study, image data features and machine learning methods were used to calculate 24-h continuous cloud cover from image data obtained by a camera-based imager on the ground. The image data features were the time (Julian day and hour), solar zenith angle, and statistical characteristics of the red–blue ratio, blue–red difference, and luminance. These features were determined from the red, green, and blue brightness of images subjected to a pre-processing step involving masking removal and distortion correction. The collected image data were divided into training, validation, and test sets, which were used to optimize each machine learning method and evaluate its accuracy. The cloud cover calculated by each machine learning method was verified against human-eye observation data from a manned observatory. Supervised machine learning models suitable for nowcasting, namely, support vector regression, random forest, gradient boosting machine, k-nearest neighbor, artificial neural network, and multiple linear regression methods, were employed and their results were compared. The best learning results were obtained by the support vector regression model, which had an accuracy, recall, and precision of 0.94, 0.70, and 0.76, respectively. Further, bias, root mean square error, and correlation coefficient values of 0.04 tenth, 1.45 tenths, and 0.93, respectively, were obtained for the cloud cover calculated using the test set. When the difference between the calculated and observed cloud cover was allowed to be within 0, 1, and 2 tenths, high agreement of approximately 42 %, 79 %, and 91 %, respectively, was obtained. The proposed system involving a ground-based imager and machine learning methods is expected to be suitable for application as an automated system to replace human-eye observations.
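The color-based features described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact statistics they compute are assumptions, and the ITU-R BT.601 luminance weights are used here only for concreteness.

```python
import numpy as np

def sky_image_features(rgb):
    """Statistical sky-image features from an H x W x 3 RGB array.

    Computes mean and standard deviation of the red-blue ratio,
    blue-red difference, and luminance (feature names from the abstract;
    the chosen statistics and luminance weights are assumptions).
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rb_ratio = r / np.maximum(b, 1e-6)          # red-blue ratio (guard against b = 0)
    br_diff = b - r                              # blue-red difference
    lum = 0.299 * r + 0.587 * g + 0.114 * b      # BT.601 luminance (assumed definition)
    feats = {}
    for name, arr in [("rb_ratio", rb_ratio), ("br_diff", br_diff), ("lum", lum)]:
        feats[f"{name}_mean"] = float(arr.mean())
        feats[f"{name}_std"] = float(arr.std())
    return feats
```

Such per-image summary statistics, together with time and solar zenith angle, would form the feature vector fed to the regression models.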


2012 ◽  
Vol 263-266 ◽  
pp. 223-226
Author(s):  
Musab Elkheir Salih ◽  
Xu Ming Zhang ◽  
Ming Yue Ding

The performance of the singular value decomposition (SVD)-based non-local means (NLM) denoising method degrades when the noise level is high. This paper describes an approach to improving NLM denoising under heavy noise: instead of SVD, we combine kernel principal component analysis (KPCA) with NLM. Using various test images corrupted by strong additive white Gaussian noise (AWGN), we demonstrate in terms of peak signal-to-noise ratio (PSNR), in decibels (dB), that the NLM denoising method is improved.
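For reference, a minimal plain non-local means filter, without the SVD or KPCA patch-similarity acceleration the paper studies, might look like the sketch below. The patch size, search-window size, and filtering parameter h are illustrative assumptions.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=5, h=10.0):
    """Minimal non-local means sketch for a 2-D grayscale image.

    Each pixel becomes a weighted average of pixels in a local search
    window, with weights decaying in the squared distance between the
    surrounding patches. Parameter values are illustrative.
    """
    pad = patch // 2
    padded = np.pad(img.astype(np.float64), pad, mode="reflect")
    H, W = img.shape
    out = np.zeros((H, W))
    s = search // 2
    for i in range(H):
        for j in range(W):
            p_ref = padded[i:i + patch, j:j + patch]   # patch around (i, j)
            weights, vals = [], []
            for di in range(max(0, i - s), min(H, i + s + 1)):
                for dj in range(max(0, j - s), min(W, j + s + 1)):
                    q = padded[di:di + patch, dj:dj + patch]
                    d2 = np.sum((p_ref - q) ** 2) / patch**2   # mean squared patch distance
                    weights.append(np.exp(-d2 / h**2))
                    vals.append(img[di, dj])
            weights = np.array(weights)
            out[i, j] = np.dot(weights, vals) / weights.sum()
    return out
```

The SVD- and KPCA-based variants discussed above replace the raw patch-distance computation with distances in a lower-dimensional (or kernel) feature space, which is where robustness to heavy noise comes from.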


2021 ◽  
Vol 6 (1) ◽  
pp. 18
Author(s):  
Diny Melsye Nurul Fajri

Kenaf fiber is mainly used in industrial products that substitute for forest wood, so it can be promoted as the main component of environmentally friendly goods. Unfortunately, several kenaf plantations have been stricken by diseases that reduce yield. Advances in technology can help kenaf farmers quickly and accurately detect which pests or diseases have attacked their crops. This paper discusses the application of a machine learning method, a Convolutional Neural Network (CNN), that produces a preliminary diagnosis from an input leaf image. The dataset consists of 838 images across 4 classes. On average, the CNN achieves an accuracy of 73 % for detecting diseases and plant pests in kenaf plants.
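A CNN forward pass for a 4-class leaf-image classifier can be illustrated in miniature with plain NumPy. The layer sizes, kernel count, and random weights below are purely illustrative assumptions, not the architecture the paper trained.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution of a single-channel image x with kernel k."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def softmax(z):
    """Numerically stable softmax over class logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def tiny_cnn_forward(img, kernels, w, b):
    """Conv -> ReLU -> global average pool -> dense -> softmax (4 classes)."""
    feats = np.array([np.maximum(conv2d(img, k), 0.0).mean() for k in kernels])
    return softmax(feats @ w + b)
```

A usage sketch: with five 3x3 kernels and a 5x4 dense layer, `tiny_cnn_forward` maps a grayscale image to a probability distribution over the four disease classes; training (backpropagation, data augmentation) is omitted here.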


Author(s):  
Andreas Falke ◽  
Harald Hruschka

Abstract. The increasing importance of online distribution channels is paralleled by a rising interest in gaining insights into the customer journey and browsing behavior. We evaluate several machine learning methods (latent Dirichlet allocation, correlated topic model, structural topic model, replicated softmax model) with respect to their ability to reproduce the browsing behavior of households across websites. In addition, we compare these machine learning methods to a related classical technique, singular value decomposition. In our study, the replicated softmax model outperforms latent Dirichlet allocation, but the correlated topic model attains the overall best performance. Compared to singular value decomposition both the correlated topic model and the replicated softmax model lead to a more efficient compression of web browsing data. On the other hand, singular value decomposition surpasses latent Dirichlet allocation. We interpret results of the correlated topic model and the replicated softmax model by determining combinations of topics or hidden variables that are heterogeneous with respect to visited websites. We show that decision makers should not rely on bivariate measures of site visits, as these do not agree with measures of interdependences between sites that can be inferred from the correlated topic model or the replicated softmax model. We investigate how well topics or hidden variables measured by these methods predict yearly household expenditures. The correlated topic model leads to the best predictive performance, followed by the replicated softmax model. We also discuss how the replicated softmax model can be used to support online marketing decisions of websites.
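As a concrete point of comparison, the classical baseline, truncated-SVD compression of a household-by-website visit matrix, can be sketched as follows; the matrix contents are illustrative.

```python
import numpy as np

def svd_compress(X, k):
    """Rank-k reconstruction of a household x website visit-count
    matrix via truncated singular value decomposition."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]
```

The Frobenius reconstruction error of the rank-k approximation measures how efficiently k latent dimensions compress browsing behavior, the kind of criterion on which the abstract reports the correlated topic model and replicated softmax model outperforming SVD.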

