On the Use of Deep Active Semi-Supervised Learning for Fast Rendering in Global Illumination

2020 ◽  
Vol 6 (9) ◽  
pp. 91
Author(s):  
Ibtissam Constantin ◽  
Joseph Constantin ◽  
André Bigand

Convolutional neural networks usually require large labeled data sets to construct accurate models. However, in many real-world scenarios, such as global illumination, labeling data is a time-consuming and costly task requiring human intelligence. Semi-supervised learning methods address this issue by making use of a small labeled data set and a larger set of unlabeled data. In this paper, our contributions focus on the development of a robust algorithm that combines active learning with a deep semi-supervised convolutional neural network to reduce the labeling workload and to accelerate convergence in the case of real-time global illumination. While the theoretical concepts of photo-realistic rendering are well understood, the need to deliver highly dynamic interactive content in vast virtual environments has grown recently. In particular, the quality measure of computer-generated images is of great importance. The experiments are conducted on global illumination scenes which contain diverse distortions. Compared with human psycho-visual thresholds, good consistency between these thresholds and the learning models' quality measures can be seen. A comparison has also been made with SVM and other state-of-the-art deep learning models. We perform transfer learning by running the convolutional base of these models over our image set, then use the output features of the convolutional base as input to retrain the parameters of the fully connected layer. The obtained results show that our proposed method provides promising efficiency in terms of precision, time complexity, and optimal architecture.
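The active querying loop that such methods combine with semi-supervised training can be sketched generically. Below is a minimal, assumed illustration of pool-based uncertainty sampling; the nearest-centroid "model" and the `oracle` callback are hypothetical stand-ins for the paper's CNN and the human annotator.

```python
import math

def train_centroids(labeled):
    """Fit a toy model: one mean feature vector (centroid) per class."""
    groups = {}
    for x, y in labeled:
        groups.setdefault(y, []).append(x)
    return {y: [sum(col) / len(xs) for col in zip(*xs)] for y, xs in groups.items()}

def confidence(centroids, x):
    """Negative distance to the nearest centroid: higher means more confident."""
    return max(-math.dist(x, c) for c in centroids.values())

def active_learning(labeled, pool, oracle, rounds=3):
    """Each round: retrain, query the least-confident pool point, add its label."""
    labeled, pool = list(labeled), list(pool)
    for _ in range(rounds):
        model = train_centroids(labeled)
        pool.sort(key=lambda x: confidence(model, x))  # least confident first
        query = pool.pop(0)
        labeled.append((query, oracle(query)))  # simulated human annotation
    return train_centroids(labeled)
```

In the paper's setting, the queried labels would feed back into semi-supervised CNN training rather than this toy centroid model.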

Author(s):  
Norsyela Muhammad Noor Mathivanan ◽  
Nor Azura Md.Ghani ◽  
Roziah Mohd Janor

Online business development through e-commerce platforms is a phenomenon which has changed the way products are promoted and sold in this 21st century. Product title classification is an important task in assisting retailers and sellers to list a product in a suitable category. Product title classification is part of the text classification problem, but the properties of product titles differ from those of general documents. This study aims to evaluate the performance of five different supervised learning models on data sets consisting of e-commerce product titles with very short descriptions that form incomplete sentences. The supervised learning models involved in the study are Naïve Bayes, K-Nearest Neighbor (KNN), Decision Tree, Support Vector Machine (SVM) and Random Forest. The results show the KNN model is the best model, with the highest accuracy and fastest computation time in classifying the data used in the study. Hence, the KNN model is a good approach for classifying e-commerce products.
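As an illustration of the winning approach, here is a minimal (assumed) KNN classifier for short product titles using token-overlap (Jaccard) similarity and majority vote; the study's actual features and distance metric are not reproduced here.

```python
from collections import Counter

def jaccard(a, b):
    """Token-overlap similarity between two short titles."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def knn_predict(train, title, k=3):
    """train: list of (title, category) pairs. Majority vote over the k nearest."""
    neighbours = sorted(train, key=lambda tc: jaccard(tc[0], title), reverse=True)[:k]
    votes = Counter(cat for _, cat in neighbours)
    return votes.most_common(1)[0][0]
```

Real product-title pipelines would typically add TF-IDF weighting and n-gram features, but the nearest-neighbour vote is the core of the method.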


2017 ◽  
Vol 10 (2) ◽  
pp. 695-708 ◽  
Author(s):  
Simon Ruske ◽  
David O. Topping ◽  
Virginia E. Foot ◽  
Paul H. Kaye ◽  
Warren R. Stanley ◽  
...  

Abstract. Characterisation of bioaerosols has important implications within environment and public health sectors. Recent developments in ultraviolet light-induced fluorescence (UV-LIF) detectors such as the Wideband Integrated Bioaerosol Spectrometer (WIBS) and the newly introduced Multiparameter Bioaerosol Spectrometer (MBS) have allowed for the real-time collection of fluorescence, size and morphology measurements for the purpose of discriminating between bacteria, fungal spores and pollen. This new generation of instruments has enabled ever larger data sets to be compiled with the aim of studying more complex environments. In real-world data sets, particularly those from an urban environment, the population may be dominated by non-biological fluorescent interferents, bringing into question the accuracy of measurements of quantities such as concentrations. It is therefore imperative that we validate the performance of different algorithms which can be used for the task of classification. For unsupervised learning we tested hierarchical agglomerative clustering with various different linkages. For supervised learning, 11 methods were tested: decision trees, ensemble methods (random forests, gradient boosting and AdaBoost), two implementations of support vector machines (libsvm and liblinear), Gaussian methods (Gaussian naïve Bayes, quadratic and linear discriminant analysis), the k-nearest neighbours algorithm and artificial neural networks. The methods were applied to two different data sets produced using the new MBS, which provides multichannel UV-LIF fluorescence signatures for single airborne biological particles. The first data set contained mixed PSLs and the second contained a variety of laboratory-generated aerosol. Clustering in general performs slightly worse than the supervised learning methods, correctly classifying, at best, only 67.6 and 91.1 % for the two data sets respectively. For supervised learning the gradient boosting algorithm was found to be the most effective, on average correctly classifying 82.8 and 98.27 % of the testing data, respectively, across the two data sets. A possible alternative to gradient boosting is neural networks. We do however note that this method requires much more user input than the other methods, and we suggest that further research should be conducted using this method, especially using parallelised hardware such as the GPU, which would allow for larger networks to be trained, which could possibly yield better results. We also saw that some methods, such as clustering, failed to utilise the additional shape information provided by the instrument, whilst for others, such as the decision trees, ensemble methods and neural networks, improved performance could be attained with the inclusion of such information.
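To make the gradient boosting idea concrete, here is a toy squared-loss booster over depth-1 stumps on a single feature. Real experiments like these would use a tuned library implementation with multi-feature trees, so treat this purely as a sketch of the algorithm: fit each new weak learner to the residuals of the running prediction.

```python
def fit_stump(x, residuals):
    """Best single split minimising squared error of the residuals."""
    best = None
    for split in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= split]
        right = [r for xi, r in zip(x, residuals) if xi > split]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, split, lm, rm)
    _, split, lm, rm = best
    return lambda xi: lm if xi <= split else rm

def predict(stumps, xi, base, lr=0.5):
    """Running prediction: base value plus shrunken stump outputs."""
    return base + lr * sum(s(xi) for s in stumps)

def gradient_boost(x, y, n_rounds=20, lr=0.5):
    """Sequentially fit stumps to the residuals of the current prediction."""
    base = sum(y) / len(y)
    stumps = []
    for _ in range(n_rounds):
        pred = [predict(stumps, xi, base, lr) for xi in x]
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stumps.append(fit_stump(x, residuals))
    return base, stumps
```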


Author(s):  
Timothy Olander ◽  
Anthony Wimmers ◽  
Christopher Velden ◽  
James P. Kossin

Abstract. Several simple and computationally inexpensive machine learning models are explored that can use Advanced Dvorak Technique (ADT)-retrieved features of tropical cyclones (TCs) from satellite imagery to provide improved maximum sustained surface wind speed (MSW) estimates. ADT (Version 9.0) TC analysis parameters and operational TC forecast center Best Track data sets from 2005-2016 are used to train and validate the various models over all TC basins globally and to select the best among them. Two independent test sets of TC cases from 2017 and 2018 are used to evaluate the intensity estimates produced by the final selected model, called the "artificial intelligence (AI)"-enhanced Advanced Dvorak Technique (AiDT). The 2017 and 2018 MSW results demonstrate a global RMSE of 7.7 and 8.2 kt, respectively. Basin-specific MSW RMSEs of 8.4, 6.8, 7.3, 8.0, and 7.5 kt were obtained with the 2017 data set in the North Atlantic, East/Central Pacific, Northwest Pacific, South Pacific/Indian, and North Indian Ocean basins, respectively, with MSW RMSE values of 8.9, 6.7, 7.1, 10.4, and 7.7 kt obtained with the 2018 data set. These represent a 30% and 23% improvement over the corresponding ADT RMSE for the 2017 and 2018 data sets, respectively, with the AiDT error reduction significant at the 99% level in both sets. The AiDT model represents a notable improvement over ADT performance and also compares favorably to more computationally expensive and complex machine learning models that interrogate satellite images directly, while still preserving the operational familiarity of the ADT.
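The reported skill metric is RMSE over regressed wind speeds. As a hedged illustration of the general setup, not the AiDT model itself, here is ordinary least squares on one scalar feature plus an RMSE helper; the real model uses many ADT parameters.

```python
def ols_fit(x, y):
    """Closed-form simple linear regression: y ≈ a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b  # intercept a, slope b

def rmse(pred, y):
    """Root-mean-square error between predictions and truth."""
    return (sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)) ** 0.5
```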


2017 ◽  
Vol 20 (3) ◽  
pp. 985-994 ◽  
Author(s):  
Leili Shahriyari

Abstract Motivation: One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using the z-score, and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on it. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Results: Regardless of the normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, reaching its minimum when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing accuracy as well as minimizing fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma, with an accuracy of 78%.


2019 ◽  
Vol 28 (06) ◽  
pp. 1960001
Author(s):  
Erdem Beğenilmiş ◽  
Susan Uskudarli

The successful use of social media to manipulate public opinion via bots and hired individuals who spread (mis)information to unsuspecting users reached alarming levels with the manipulations during the 2016 US elections and the Brexit deliberations in the UK. Fake interactions such as "liking" and "retweeting" are staged to foster trust in the posts of bots and individuals, which makes it difficult for users to detect the posts that are part of greater schemes. We propose an approach based on supervised learning to classify collections of tweets as "organized" when they exhibit premeditated intent and as "organic" otherwise. Features related to users and posting behavior are used to train the classifiers, using 851 data sets totaling over 270 million tweets. Further classifiers are trained to assess the effectiveness of the selected features. The random forest algorithm consistently yielded the best results, with scores greater than 95% for both accuracy and f-measure. For comparison purposes, unsupervised learning methods were used to cluster the same data sets. The Gaussian Mixture Model clustered the organized-vs-organic data sets with 99% agreement with the labels. The success of using only behavioral features to detect organized behavior is encouraging.
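Collection-level behavioral features of the kind described can be sketched as aggregates over a set of tweets. The field names and the three features below are illustrative assumptions, not the paper's exact feature list.

```python
def behavior_features(tweets):
    """tweets: list of dicts with 'user', 'is_retweet', 'timestamp' (seconds).
    Returns collection-level aggregates usable as classifier features."""
    users = {t["user"] for t in tweets}
    n = len(tweets)
    times = sorted(t["timestamp"] for t in tweets)
    return {
        "tweets_per_user": n / len(users),          # high for coordinated accounts
        "retweet_ratio": sum(t["is_retweet"] for t in tweets) / n,
        "burstiness": (times[-1] - times[0]) / n,   # mean seconds per tweet
    }
```

A vector of such aggregates per collection is what a random forest or a Gaussian Mixture Model would then consume.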


2020 ◽  
pp. 666-679 ◽  
Author(s):  
Xuhong Zhang ◽  
Toby C. Cornish ◽  
Lin Yang ◽  
Tellen D. Bennett ◽  
Debashis Ghosh ◽  
...  

PURPOSE We focus on the problem of scarcity of annotated training data for nucleus recognition in Ki-67 immunohistochemistry (IHC)–stained pancreatic neuroendocrine tumor (NET) images. We hypothesize that deep learning–based domain adaptation is helpful for nucleus recognition when image annotations are unavailable in target data sets. METHODS We considered 2 different institutional pancreatic NET data sets: one (ie, source) containing 38 cases with 114 annotated images and the other (ie, target) containing 72 cases with 20 annotated images. The gold standards were manually annotated by 1 pathologist. We developed a novel deep learning–based domain adaptation framework to count different types of nuclei (ie, immunopositive tumor, immunonegative tumor, and nontumor nuclei). We compared the proposed method with several recent fully supervised deep learning models, such as fully convolutional network-8s (FCN-8s), U-Net, fully convolutional regression networks A and B (FCRNA and FCRNB), and fully residual convolutional network (FRCN). We also evaluated the proposed method by learning with a mixture of converted source images and real target annotations. RESULTS Our method achieved F1 scores of 81.3% and 62.3% for nucleus detection and classification in the target data set, respectively. Our method outperformed FCN-8s (53.6% and 43.6% for nucleus detection and classification, respectively), U-Net (61.1% and 47.6%), FCRNA (63.4% and 55.8%), and FCRNB (68.2% and 60.6%) in terms of F1 score and was competitive with FRCN (81.7% and 70.7%). In addition, learning with a mixture of converted source images and only a small set of real target labels could further boost performance. CONCLUSION This study demonstrates that deep learning–based domain adaptation is helpful for nucleus recognition in Ki-67 IHC–stained images when target data annotations are not available. This would improve the applicability of deep learning models designed for downstream supervised learning tasks on different data sets.
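For reference, the scores reported above are F1 values, computed in the standard way from true positive, false positive and false negative counts:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall (assumes tp + fp > 0, tp + fn > 0)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```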


2021 ◽  
Vol 6 ◽  
pp. 248
Author(s):  
Paul Mwaniki ◽  
Timothy Kamanu ◽  
Samuel Akech ◽  
Dustin Dunsmuir ◽  
J. Mark Ansermino ◽  
...  

Background: The success of many machine learning applications depends on knowledge about the relationship between the input data and the task of interest (output), hindering the application of machine learning to novel tasks. End-to-end deep learning, which does not require intermediate feature engineering, has been recommended to overcome this challenge, but end-to-end deep learning models require large labelled training data sets often unavailable in many medical applications. In this study, we trained machine learning models to predict paediatric hospitalization given raw photoplethysmography (PPG) signals obtained from a pulse oximeter. We trained a self-supervised learning (SSL) model for automatic feature extraction from PPG signals and assessed the utility of SSL in initializing end-to-end deep learning models trained on a small labelled data set with the aim of predicting paediatric hospitalization. Methods: We compared logistic regression models fitted using features extracted using SSL with end-to-end deep learning models initialized either randomly or using weights from the SSL model. We also compared the performance of SSL models trained on labelled data alone (n=1,031) with SSL trained using both labelled and unlabelled signals (n=7,578). Results: The SSL model trained on both labelled and unlabelled PPG signals produced features that were more predictive of hospitalization than those from the SSL model trained on labelled PPG only (AUC of logistic regression model: 0.78 vs 0.74). The end-to-end deep learning model had an AUC of 0.80 when initialized using the SSL model trained on all PPG signals, 0.77 when initialized using SSL trained on labelled data only, and 0.73 when initialized randomly. Conclusions: This study shows that SSL can improve the classification of PPG signals by either extracting features required by logistic regression models or initializing end-to-end deep learning models. Furthermore, SSL can leverage larger unlabelled data sets to improve the performance of models fitted using small labelled data sets.
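The first comparison above (frozen SSL features feeding a logistic regression) can be sketched as follows. The hand-fixed `encoder` is a stand-in for a trained SSL feature extractor and is purely illustrative, as is the tiny SGD-trained logistic regression.

```python
import math

def encoder(signal):
    """Stand-in feature extractor: mean and peak-to-peak of a raw signal."""
    return [sum(signal) / len(signal), max(signal) - min(signal)]

def fit_logreg(X, y, lr=0.5, epochs=200):
    """Logistic regression via SGD on cross-entropy loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            g = p - yi  # gradient of the loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict_proba(w, b, signal):
    """Probability of the positive class for a raw signal."""
    z = sum(wj * xj for wj, xj in zip(w, encoder(signal))) + b
    return 1 / (1 + math.exp(-z))
```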


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5809
Author(s):  
Loris Nanni ◽  
Giovanni Minchio ◽  
Sheryl Brahnam ◽  
Davide Sarraggiotto ◽  
Alessandra Lumini

In this paper, we examine two strategies for boosting the performance of ensembles of Siamese networks (SNNs) for image classification using two loss functions (Triplet and Binary Cross Entropy) and two methods for building the dissimilarity spaces (FULLY and DEEPER). With FULLY, the distance between a pattern and a prototype is calculated by comparing two images using the fully connected layer of the Siamese network. With DEEPER, each pattern is described using a deeper layer combined with dimensionality reduction. The basic design of the SNNs takes advantage of supervised k-means clustering for building the dissimilarity spaces that train a set of support vector machines, which are then combined by sum rule for a final decision. The robustness and versatility of this approach are demonstrated on several cross-domain image data sets, including a portrait data set, two bioimage data sets and two animal vocalization data sets. Results show that the strategies employed in this work to increase the performance of dissimilarity image classification using SNNs are closing the gap with standalone CNNs. Moreover, when our best system is combined with an ensemble of CNNs, the resulting performance is superior to the ensemble of CNNs alone, demonstrating that our new strategy is extracting additional information.
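The dissimilarity-space construction can be illustrated minimally: each pattern is re-described by its vector of distances to a set of prototypes, and that vector is what the SVMs would consume. Here class centroids stand in, as an assumption, for the supervised k-means prototypes, and plain Euclidean distance stands in for the SNN-learned dissimilarity.

```python
import math

def prototypes(X, y):
    """One centroid per class label (simplified supervised clustering)."""
    cents = {}
    for label in set(y):
        members = [x for x, yi in zip(X, y) if yi == label]
        cents[label] = [sum(col) / len(members) for col in zip(*members)]
    return [cents[label] for label in sorted(cents)]

def dissimilarity_vector(x, protos):
    """The pattern's new representation: its distance to every prototype."""
    return [math.dist(x, p) for p in protos]
```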


2020 ◽  
Vol 10 (10) ◽  
pp. 3386 ◽  
Author(s):  
Krzysztof Fiok ◽  
Waldemar Karwowski ◽  
Edgar Gutierrez ◽  
Mohammad Reza-Davahli

After the advent of GloVe and Word2vec, the dynamic development of language models (LMs) used to generate word embeddings has enabled the creation of better text classifier frameworks. With the vector representations of words generated by newer LMs, embeddings are no longer static but are context-aware. However, the quality of results provided by state-of-the-art LMs comes at the price of speed. Our goal was to present a benchmark providing insight into the speed–quality trade-off of a sentence classifier framework based on word embeddings provided by selected LMs. We used a recurrent neural network with gated recurrent units to create sentence-level vector representations from word embeddings provided by an LM, and a single fully connected layer for classification. Benchmarking was performed on two sentence classification data sets: the Sixth Text REtrieval Conference (TREC6) set and a 1000-sentence data set of our own design. Our Monte Carlo cross-validated results based on these two data sources demonstrated that the newest deep learning LMs provided improvements over GloVe and FastText in terms of weighted Matthews correlation coefficient (MCC) scores. We postulate that progress in LMs is more apparent when more difficult classification tasks are addressed.
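To show how a gated recurrent unit folds a sequence of word embeddings into a single sentence-level representation, here is a single-unit GRU cell with hand-set scalar weights; the benchmarked network is of course multi-dimensional, trained, and fed by real LM embeddings, so this is only a structural sketch.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def gru_step(h, x, wz, wr, wh):
    """One GRU update with scalar state h and scalar input x."""
    z = sigmoid(wz[0] * x + wz[1] * h)            # update gate
    r = sigmoid(wr[0] * x + wr[1] * h)            # reset gate
    h_tilde = math.tanh(wh[0] * x + wh[1] * (r * h))
    return (1 - z) * h + z * h_tilde              # interpolate old and candidate state

def encode(embeddings, wz=(1.0, 0.5), wr=(1.0, 0.5), wh=(1.0, 0.5)):
    """Run the cell over the embedding sequence; the final h is the sentence vector."""
    h = 0.0
    for x in embeddings:
        h = gru_step(h, x, wz, wr, wh)
    return h
```

The final state would then pass through the fully connected classification layer.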


2008 ◽  
Vol 34 (4) ◽  
pp. 487-511 ◽  
Author(s):  
James Henderson ◽  
Oliver Lemon ◽  
Kallirroi Georgila

We propose a method for learning dialogue management policies from a fixed data set. The method addresses the challenges posed by Information State Update (ISU)-based dialogue systems, which represent the state of a dialogue as a large set of features, resulting in a very large state space and a huge policy space. To address the problem that any fixed data set will only provide information about small portions of these state and policy spaces, we propose a hybrid model that combines reinforcement learning with supervised learning. The reinforcement learning is used to optimize a measure of dialogue reward, while the supervised learning is used to restrict the learned policy to the portions of these spaces for which we have data. We also use linear function approximation to address the need to generalize from a fixed amount of data to large state spaces. To demonstrate the effectiveness of this method on this challenging task, we trained this model on the COMMUNICATOR corpus, to which we have added annotations for user actions and Information States. When tested with a user simulation trained on a different part of the same data set, our hybrid model outperforms a pure supervised learning model and a pure reinforcement learning model. It also outperforms the hand-crafted systems on the COMMUNICATOR data, according to automatic evaluation measures, improving over the average COMMUNICATOR system policy by 10%. The proposed method will improve techniques for bootstrapping and automatic optimization of dialogue management policies from limited initial data sets.
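The hybrid idea can be sketched as Q-learning with linear function approximation over state features, with the greedy maximization restricted to actions actually observed in the fixed data set (the supervised constraint). All names and the toy setup below are illustrative, not the COMMUNICATOR configuration.

```python
from collections import defaultdict

def q_value(theta, features, action):
    """Linear function approximation: Q(s, a) = theta_a · features(s)."""
    return sum(theta[(action, i)] * f for i, f in enumerate(features))

def td_update(theta, transition, actions_seen, alpha=0.1, gamma=0.9):
    """One Q-learning step on a logged (state, action, reward, next_state) tuple.
    The max is taken only over actions seen in the data for that next state."""
    s, a, r, s2 = transition
    best_next = max(q_value(theta, s2, a2) for a2 in actions_seen[tuple(s2)])
    delta = r + gamma * best_next - q_value(theta, s, a)
    for i, f in enumerate(s):
        theta[(a, i)] += alpha * delta * f
```

Restricting the max to observed actions keeps the learned policy inside the portion of the policy space the corpus supports, which is the role supervised learning plays in the hybrid model.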

