Sea Fog Identification from GOCI Images Using CNN Transfer Learning Models

Electronics, 2020, Vol 9 (2), pp. 311
Author(s):  
Ho-Kun Jeon ◽  
Seungryong Kim ◽  
Jonathan Edwin ◽  
Chan-Su Yang

This study proposes a method for identifying sea fog in Geostationary Ocean Color Imager (GOCI) data by applying a Convolutional Neural Network Transfer Learning (CNN-TL) model. VGG19 and ResNet50, two pre-trained CNN models, are used for their high identification performance. The training and testing datasets were extracted from GOCI images of the coastal regions of the Korean Peninsula over six days in March 2015. Identification experiments were conducted with varying band combinations and with Transfer Learning (TL) either applied or not. TL enhanced the performance of both models: the CNN-TL models reached up to 96.3% matching accuracy on the training data, identically for VGG19 and ResNet50. These results show that CNN-TL is effective for detecting sea fog in GOCI imagery.
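A minimal sketch of the transfer-learning idea described above, with a fixed random projection standing in for the frozen VGG19/ResNet50 backbone and a closed-form ridge-regression head in place of fine-tuning; all data, sizes, and names here are hypothetical, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone (e.g. VGG19 without its top):
# a fixed random projection from raw pixels to a feature vector.
W_backbone = rng.normal(size=(64, 16))

def extract_features(x):
    """Frozen feature extractor: these weights are never updated."""
    return np.maximum(x @ W_backbone, 0.0)  # ReLU features

# Toy "fog / no-fog" data: 64-dim flattened patches, binary labels.
X = rng.normal(size=(200, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Transfer-learning step: train ONLY a new linear head on the frozen
# features (closed-form ridge regression instead of SGD, for brevity).
F = extract_features(X)
lam = 1e-2
W_head = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)

preds = (extract_features(X) @ W_head > 0.5).astype(float)
accuracy = (preds == y).mean()
```

Only `W_head` is trained; reusing the frozen backbone is what lets a small labeled set suffice.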

2020, Vol 36 (3), pp. 1166-1187
Author(s):  
Shohei Naito ◽  
Hiromitsu Tomozawa ◽  
Yuji Mori ◽  
Takeshi Nagata ◽  
Naokazu Monma ◽  
...  

This article presents a method for detecting buildings damaged in an earthquake using machine learning models and aerial photographs. We initially created training data for the models from aerial photographs captured around the town of Mashiki immediately after the main shock of the 2016 Kumamoto earthquake, classifying every building into one of four damage levels by visual interpretation. Subsequently, two damage discrimination models were developed: a bag-of-visual-words model and a model based on a convolutional neural network. The results were compared and validated in terms of accuracy, revealing that the latter model is preferable. Moreover, when the target areas were expanded for the convolutional neural network model, the recalls of damage classification at the four levels ranged approximately from 66% to 81%.
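Per-level recall figures like those reported above can be computed as in this short sketch; the level names and labels below are invented for illustration:

```python
from collections import Counter

def per_class_recall(y_true, y_pred, classes):
    """Recall per damage level: correctly recovered / actually present."""
    hits = Counter(t for t, p in zip(y_true, y_pred) if t == p)
    totals = Counter(y_true)
    return {c: hits[c] / totals[c] for c in classes}

# Four damage levels, as in the visual-interpretation labeling above.
levels = ["none", "minor", "moderate", "collapse"]
y_true = ["none", "minor", "minor", "moderate", "collapse", "collapse"]
y_pred = ["none", "minor", "moderate", "moderate", "collapse", "minor"]
recalls = per_class_recall(y_true, y_pred, levels)
```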


Author(s):  
Yasir Hussain ◽  
Zhiqiu Huang ◽  
Yu Zhou ◽  
Senzhang Wang

In recent years, deep learning models have shown great potential in source code modeling and analysis. Generally, deep learning-based approaches are problem-specific and data-hungry; a challenging issue is that they require training from scratch for each related problem. In this work, we propose a transfer learning-based approach that significantly improves the performance of deep learning-based source code models. In contrast to traditional learning paradigms, transfer learning transfers the knowledge learned in solving one problem to another related problem. First, we present two recurrent neural network-based models, an RNN and a GRU, for transfer learning in the domain of source code modeling. Next, via transfer learning, these pre-trained models are used as feature extractors. The extracted features are then fed into an attention learner for different downstream tasks. The attention learner leverages the knowledge of the pre-trained models and fine-tunes it for a specific downstream task. We evaluate the proposed approach with extensive experiments on the source code suggestion task. The results indicate that it outperforms state-of-the-art models in terms of accuracy, precision, recall, and F-measure without training the models from scratch.
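The attention learner described above, which weights the feature vectors coming from the pre-trained extractors, might be sketched roughly as follows; the feature values and query vector are invented, and a real implementation would learn the query jointly with the downstream task rather than fix it:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

# Hypothetical fixed features from the two pre-trained extractors
# (stand-ins for the RNN and GRU encodings of a code token sequence).
feat_rnn = np.array([0.2, 1.0, -0.3, 0.5])
feat_gru = np.array([0.4, 0.8, 0.1, -0.2])
features = np.stack([feat_rnn, feat_gru])   # shape (2, d)

# Attention learner: score each extractor's features against a
# (here hard-coded) query vector, then take the softmax-weighted sum.
query = np.array([0.1, 0.9, 0.2, 0.3])
scores = features @ query                   # one score per extractor
weights = softmax(scores)
combined = weights @ features               # attended feature vector
```

The combined vector would then feed a task-specific output layer, e.g. for code suggestion.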


2018, Vol 8 (12), pp. 2663
Author(s):  
Davy Preuveneers ◽  
Vera Rimmer ◽  
Ilias Tsingenopoulos ◽  
Jan Spooren ◽  
Wouter Joosen ◽  
...  

The adoption of machine learning and deep learning is on the rise in the cybersecurity domain, where these AI methods help strengthen traditional system monitoring and threat detection solutions. However, adversaries too are becoming more effective in concealing malicious behavior amongst large amounts of benign behavior data. To address the increasing time-to-detection of these stealthy attacks, interconnected and federated learning systems can improve the detection of malicious behavior by joining forces and pooling together monitoring data. The major challenge that we address in this work is that in a federated learning setup, an adversary has many more opportunities to poison one of the local machine learning models with malicious training samples, thereby influencing the outcome of the federated learning and evading detection. We present a solution in which the contributing parties in federated learning can be held accountable and have their model updates audited. We describe a permissioned blockchain-based federated learning method in which incremental updates to an anomaly detection machine learning model are chained together on the distributed ledger. By integrating federated learning with blockchain technology, our solution supports auditing of machine learning models without the need to centralize the training data. Experiments with a realistic intrusion detection use case and an autoencoder for anomaly detection illustrate that the added complexity of the blockchain has a limited performance impact on federated learning, with overhead varying between 5% and 15%, while providing full transparency over the distributed training process of the neural network. Furthermore, our blockchain-based federated learning solution can be generalized and applied to more sophisticated neural network architectures and other use cases.
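A toy illustration of the core idea: each model update is recorded in a block that commits to the hash of its predecessor, so any later tampering with a party's contribution is detectable, and aggregation is plain federated averaging. This is a minimal stand-in sketch, not the paper's permissioned-blockchain implementation:

```python
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class UpdateLedger:
    """Append-only chain of model updates: a minimal stand-in for the
    permissioned blockchain that audits federated learning rounds."""
    def __init__(self):
        self.chain = [{"party": "genesis", "update": [], "prev": "0" * 64}]

    def append(self, party, update):
        block = {"party": party, "update": update,
                 "prev": block_hash(self.chain[-1])}
        self.chain.append(block)

    def verify(self):
        # Every block must reference the hash of its predecessor.
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

def federated_average(updates):
    """FedAvg over equally weighted parties (updates as plain lists)."""
    n = len(updates)
    return [sum(vals) / n for vals in zip(*updates)]

ledger = UpdateLedger()
ledger.append("party_A", [0.2, -0.1])
ledger.append("party_B", [0.4, 0.3])
global_update = federated_average([b["update"] for b in ledger.chain[1:]])
```

Because each block's hash covers the update itself, an auditor can replay the ledger and attribute any poisoned contribution to the party that submitted it.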


Processes, 2021, Vol 9 (11), pp. 2029
Author(s):  
Yan-Kai Chen ◽  
Steven Shave ◽  
Manfred Auer

Small molecule lipophilicity is often included in generalized rules for medicinal chemistry. These rules aim to reduce time, effort, costs, and attrition rates in drug discovery, allowing compounds to be rejected or prioritized without the need for synthesis and testing. The availability of high-quality, abundant training data can be a major limiting factor in building effective property predictors with machine learning. We utilize transfer learning to circumvent this problem, first learning on a large amount of low-accuracy predicted logP values before tuning the model on a small, accurate dataset of 244 druglike compounds. The result is MRlogP, a neural network-based logP predictor that outperforms state-of-the-art freely available logP prediction methods for druglike small molecules, achieving average root mean squared errors of 0.988 and 0.715 against druglike molecules from Reaxys and PHYSPROP, respectively. We have made the trained neural network predictor and all associated code for descriptor generation freely available; in addition, MRlogP may be used online via a web interface.
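The two-stage strategy (pre-train on abundant noisy logP labels, then fine-tune on a small accurate set) can be illustrated with a linear model on synthetic descriptors; the descriptor count, noise levels, and step counts below are arbitrary choices, not MRlogP's actual settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def rmse(w, X, y):
    return float(np.sqrt(np.mean((X @ w - y) ** 2)))

# Hypothetical descriptor -> logP mapping (8 molecular descriptors).
w_true = rng.normal(size=8)

# Stage 1: pre-train on a large set of LOW-accuracy predicted logP values.
X_big = rng.normal(size=(5000, 8))
y_big = X_big @ w_true + rng.normal(scale=1.0, size=5000)  # noisy labels
w = np.linalg.lstsq(X_big, y_big, rcond=None)[0]

# Stage 2: fine-tune on a small, accurate "experimental" set.
X_small = rng.normal(size=(40, 8))
y_small = X_small @ w_true + rng.normal(scale=0.05, size=40)

before = rmse(w, X_small, y_small)
lr = 0.01
for _ in range(200):  # a few gradient-descent steps on the small set
    grad = 2 * X_small.T @ (X_small @ w - y_small) / len(y_small)
    w -= lr * grad
after = rmse(w, X_small, y_small)
```

The pre-trained weights give fine-tuning a strong starting point that 40 accurate samples alone could not provide.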


Mekatronika, 2020, Vol 2 (2), pp. 23-27
Author(s):  
Amirul Asyraf Abdul Manan ◽  
Mohd Azraai Mohd Razman ◽  
Ismail Mohd Khairuddin ◽  
Muhammad Nur Aiman Shapiee

This study presents an application of a Convolutional Neural Network (CNN)-based detector for locating chilies and their leaves in images of chili plants. Detecting chili on the plant is essential for robotic vision and monitoring: it helps supervise plant growth and analyze productivity and quality. This paper aims to develop a system that can monitor and identify bird's eye chili plants by implementing machine learning. First, a methodology for efficient detection of bird's eye chili and its leaves was developed. A dataset totaling 1866 images of bird's eye chili and its leaves, after augmentation, was used in this experiment, and YOLO Darknet was trained on it. After a series of experiments, the model was compared with other transfer learning models, namely YOLO Tiny, Faster R-CNN, and EfficientDet, and the classification performance of these models was calculated and compared. The experimental results show that the YOLOv4 Darknet model achieves an mAP of 75.69%, followed by EfficientDet at 71.85%, on the augmented dataset.
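Detection metrics such as the mAP values quoted above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes; the boxes below are made-up examples:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as a true positive when IoU >= 0.5 (typical mAP@0.5).
pred = (10, 10, 50, 50)    # predicted chili bounding box
truth = (12, 12, 48, 52)   # annotated ground-truth box
hit = iou(pred, truth) >= 0.5
```

Precision/recall curves over such hits, averaged across classes (chili, leaf), give the mAP figure.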


2020, Vol 12 (10), pp. 1581
Author(s):  
Daniel Perez ◽  
Kazi Islam ◽  
Victoria Hill ◽  
Richard Zimmerman ◽  
Blake Schaeffer ◽  
...  

Coastal ecosystems are critically affected by seagrass, both economically and ecologically. However, reliable seagrass distribution information is lacking in nearly all parts of the world because of the excessive costs associated with its assessment. In this paper, we develop two deep learning models for automatic seagrass distribution quantification based on 8-band satellite imagery. Specifically, we implemented a deep capsule network (DCN) and a deep convolutional neural network (CNN) to assess seagrass distribution through regression. The DCN model first determines through classification whether seagrass is present in the image; if it is, the model quantifies the seagrass through regression. During training, the regression and classification modules are jointly optimized to achieve end-to-end learning. The CNN model is trained strictly for regression on seagrass and non-seagrass patches. In addition, we propose a transfer learning approach that transfers knowledge in deep models trained at one location to seagrass quantification at a different location. We evaluate the proposed methods on three WorldView-2 satellite images of the coastal area of Florida. Experimental results show that the proposed DCN and CNN models performed similarly and achieved much better results than a linear regression model and a support vector machine. We also demonstrate that using transfer learning for seagrass quantification significantly improved the results compared to directly applying the deep models to new locations.
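The DCN's jointly optimized objective, presence classification plus regression applied only where seagrass is present, might look schematically like this; the loss weighting and all values are illustrative, not the paper's:

```python
import numpy as np

def joint_loss(cls_prob, frac_pred, cls_label, frac_label, alpha=1.0):
    """Joint objective: binary cross-entropy on seagrass presence plus a
    regression term masked to patches where seagrass is truly present."""
    eps = 1e-12
    bce = -(cls_label * np.log(cls_prob + eps)
            + (1 - cls_label) * np.log(1 - cls_prob + eps))
    mse = cls_label * (frac_pred - frac_label) ** 2  # masked regression
    return float(np.mean(bce + alpha * mse))

# Two patches: one with 40% seagrass cover, one without seagrass.
cls_prob = np.array([0.9, 0.1])     # predicted presence probabilities
frac_pred = np.array([0.35, 0.00])  # predicted cover fractions
cls_label = np.array([1.0, 0.0])
frac_label = np.array([0.40, 0.00])

loss = joint_loss(cls_prob, frac_pred, cls_label, frac_label)
```

Optimizing both terms together is what makes the training end-to-end rather than two separate stages.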


Webology, 2021, Vol 18 (2), pp. 509-518
Author(s):  
Payman Hussein Hussan ◽  
Syefy Mohammed Mangj Al-Razoky ◽  
Hasanain Mohammed Manji Al-Rzoky

This paper presents an efficient method for finding fractures in bones. The pre-processing stage includes increasing image quality, removing extraneous objects, removing noise, and rotating images. The processed images then enter the machine learning phase for final fracture detection. At this stage, a Convolutional Neural Network is constructed by Genetic Programming (GP): learning models are implemented as GP programs that evolve over the course of the run, after which the best program for classifying input images is selected. The dataset in this work is divided into disjoint training and test sets at a ratio of 80 to 20. Finally, the experimental results demonstrate the effectiveness of the proposed method for detecting bone fractures.
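A toy version of the GP loop, evolving only a scalar intensity threshold rather than a full CNN, shows the evaluate-select-mutate cycle the paper describes; the samples and hyperparameters below are invented:

```python
import random

random.seed(42)

# Toy stand-in for the GP search: individuals are brightness thresholds,
# fitness is classification accuracy on labeled intensity values
# (label 1 = "fracture-like bright region" in this contrived setup).
samples = [(30, 0), (40, 0), (45, 0), (70, 1), (80, 1), (90, 1)]

def fitness(threshold):
    return sum((value > threshold) == bool(label)
               for value, label in samples) / len(samples)

population = [random.uniform(0, 100) for _ in range(20)]
for _ in range(30):                                    # generations
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:5]                               # selection
    population = parents + [p + random.gauss(0, 5)     # mutation
                            for p in parents for _ in range(3)]
best = max(population, key=fitness)
```

In the paper the individuals are whole programs encoding CNN classifiers, but the evolve-then-select structure is the same.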


This research aims to achieve high-precision accuracy for a face recognition system. The Convolutional Neural Network (CNN) is one of the deep learning approaches and has demonstrated excellent performance in many fields, including image recognition with large amounts of training data (such as ImageNet). In practice, hardware limitations and insufficient training datasets are the main obstacles to achieving high performance. Therefore, in this work a deep transfer learning method using the AlexNet pre-trained CNN is proposed to improve the performance of the face recognition system even with a small number of images. Transfer learning is used to fine-tune the last layer of the AlexNet CNN model for the new classification tasks. A data augmentation (DA) technique is also proposed to minimize over-fitting during deep transfer learning training and to improve accuracy. The results show improvements in both over-fitting and performance after applying data augmentation. All experiments were tested on the UTeMFD, GTFD, and CASIA-Face V5 small datasets. As a result, the proposed system achieved high accuracies of 100% on UTeMFD, 96.67% on GTFD, and 95.60% on CASIA-Face V5, with a recognition time of less than 0.05 seconds.
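The data augmentation step can be sketched independently of any deep learning framework; here corner crops plus horizontal flips turn one labeled image into eight training samples. The policy and sizes are illustrative, not the paper's exact DA settings:

```python
def hflip(img):
    """Horizontal flip of an image given as a list of pixel rows."""
    return [row[::-1] for row in img]

def crops(img, size):
    """All four corner crops of the given square size."""
    h, w = len(img), len(img[0])
    offsets = [(0, 0), (0, w - size), (h - size, 0), (h - size, w - size)]
    return [[row[x:x + size] for row in img[y:y + size]]
            for y, x in offsets]

def augment(dataset, crop_size):
    """Expand each (image, label) pair into 8 augmented views."""
    out = []
    for img, label in dataset:
        for view in crops(img, crop_size) + crops(hflip(img), crop_size):
            out.append((view, label))
    return out

# One 4x4 "face" image becomes 8 augmented 3x3 training samples.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
augmented = augment([(img, "person_01")], crop_size=3)
```

Multiplying the effective dataset size this way is what curbs over-fitting when fine-tuning on small face datasets.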


Polymers, 2021, Vol 13 (22), pp. 3874
Author(s):  
Yan-Mao Huang ◽  
Wen-Ren Jong ◽  
Shia-Chung Chen

This study addresses problems in applying CAE to the injection molding production process, where quite complex factors inhibit its effective utilization. An artificial neural network, namely a backpropagation neural network (BPNN), is utilized to predict results for the injection molding process. Taking five molding parameters as inputs (plastic temperature, mold temperature, injection speed, holding pressure, and holding time), the network predicts five results: EOF pressure, maximum cooling time, warpage along the Z-axis, shrinkage along the X-axis, and shrinkage along the Y-axis. This study first uses CAE analysis data as training data and reduces the error to less than 5% through the Taguchi method and the random shuffle method introduced herein, and then successfully transfers the network trained on CAE analysis data to the actual machine for verification using transfer learning. Training the dedicated prediction network on large amounts of data proves fast, and the optimized model predicts results accurately.
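A bare-bones BPNN with the five-in, five-out shape described above, showing one backpropagation step on synthetic normalized data; the hidden width, learning rate, and data are arbitrary, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Five molding parameters in, five predicted results out.
W1 = rng.normal(scale=0.1, size=(5, 16))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(16, 5))   # hidden -> output weights

def forward(X):
    H = np.tanh(X @ W1)
    return H, H @ W2

X = rng.normal(size=(32, 5))   # normalized process parameters
Y = rng.normal(size=(32, 5))   # normalized target results

H, pred = forward(X)
loss_before = np.mean((pred - Y) ** 2)

# One backpropagation step through both layers.
lr = 0.05
d_pred = 2 * (pred - Y) / Y.size
dW2 = H.T @ d_pred
dH = d_pred @ W2.T * (1 - H ** 2)   # tanh derivative
dW1 = X.T @ dH
W2 -= lr * dW2
W1 -= lr * dW1

loss_after = np.mean((forward(X)[1] - Y) ** 2)
```

Transfer learning then amounts to initializing such a network with the CAE-trained weights before updating it on machine data.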


2021, Vol 7, pp. e715
Author(s):  
Laith Alzubaidi ◽  
Ye Duan ◽  
Ayad Al-Dujaili ◽  
Ibraheem Kasim Ibraheem ◽  
Ahmed H. Alkenani ◽  
...  

Transfer learning (TL) has been widely utilized to address the lack of training data for deep learning models, most popularly through models pre-trained on the ImageNet dataset. Nevertheless, although these pre-trained models have performed effectively in several application domains, they may not offer significant benefits in all medical imaging scenarios. Such models were designed to classify a thousand classes of natural images, and there are fundamental differences in learned features between them and models for medical imaging tasks. Most medical imaging applications involve only two to ten classes, where we suspect it is not necessary to employ deeper learning models. This paper investigates this hypothesis through an experimental study. A lightweight convolutional neural network (CNN) model and the pre-trained models were evaluated on three different medical imaging datasets, each trained under two scenarios: once with a small number of images and once with a large number. Surprisingly, the lightweight model trained from scratch achieved more competitive performance than the pre-trained models. More importantly, the lightweight CNN model can be successfully trained and tested using basic computational tools while providing high-quality results, specifically on medical imaging datasets.
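A quick parameter count makes the lightweight-versus-pre-trained contrast concrete; the lightweight architecture below is hypothetical, and the comparison uses only VGG16's fully connected head for scale:

```python
def conv_params(in_ch, out_ch, k):
    """Weights + biases of a k x k convolution layer."""
    return in_ch * out_ch * k * k + out_ch

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# A hypothetical lightweight CNN for a 4-class medical imaging task:
# three small conv layers plus one dense classifier head
# (input assumed pooled down to 8x8x64 before the head).
lightweight = (conv_params(3, 16, 3) + conv_params(16, 32, 3)
               + conv_params(32, 64, 3) + dense_params(8 * 8 * 64, 4))

# VGG16's classifier head ALONE (fc layers 25088-4096-4096-1000),
# shown for scale; the full network is far larger still.
vgg_head = (dense_params(25088, 4096) + dense_params(4096, 4096)
            + dense_params(4096, 1000))
```

A model thousands of times smaller is trainable on basic hardware, which is the practical point the paper makes for few-class medical tasks.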

