Seafloor Pipeline Detection With Deep Learning

Author(s):  
Vemund Sigmundson Schøyen ◽  
Narada Dilip Warakagoda ◽ 
Øivind Midtgaard

This paper presents fast, accurate, and automatic methods for detecting seafloor pipelines in multibeam echo sounder data with deep learning. The proposed methods take inspiration from the highly successful ResNet and YOLO deep learning models and tailor them to the idiosyncrasies of the seafloor pipeline detection task. We use the area between lines and the Hausdorff line distance as evaluation functions to accurately measure how well methods localize (pipe)lines. The same functions also show promise as loss functions compared to the standard mean squared error, which does not account for the geometrical interpretation of the regression variables. The model outperforms the highest-likelihood baseline by more than 35% on a region-wise F1-score classification evaluation while being more than eight times more accurate than the baseline in locating pipelines. It is efficient, operating at over eighteen 32-ping image segments per second, far beyond real-time requirements.
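As a sketch of the kind of line-localization metric the abstract mentions, the symmetric Hausdorff distance between two sampled (pipe)lines can be computed as follows. This is a generic illustration assuming polyline inputs as point arrays, not the authors' implementation:

```python
import numpy as np

def hausdorff_line_distance(a, b):
    """Symmetric Hausdorff distance between two polylines, each
    given as an (N, 2) array of sampled points. A sketch of the
    metric named in the abstract; the paper's exact line
    parameterization may differ."""
    # Pairwise Euclidean distances between all sample points.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Directed distances: the worst-case nearest-neighbour gap each way.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

line_a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
line_b = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
print(hausdorff_line_distance(line_a, line_b))  # parallel lines 1 apart -> 1.0
```

Unlike a pointwise mean squared error on line parameters, this distance is zero exactly when the two lines coincide geometrically, which is what makes it attractive both for evaluation and as a loss.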

2021 ◽  
Vol 19 (1) ◽  
pp. 2-20
Author(s):  
Piyush Kant Rai ◽  
Alka Singh ◽  
Muhammad Qasim

This article introduces calibration estimators under different distance measures based on two auxiliary variables in stratified sampling. The theory of the calibration estimator is presented. The calibrated weights based on different distance functions are also derived. A simulation study has been carried out to judge the performance of the proposed estimators based on the minimum relative root mean squared error criterion. A real-life data set is also used to confirm the supremacy of the proposed method.
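The abstract does not spell out the estimators, but the classic chi-square distance calibration of Deville and Särndal illustrates how calibrated weights arise from a distance function and known auxiliary totals. The helper below is a hypothetical sketch of that standard construction with two auxiliary variables, not the paper's derivation:

```python
import numpy as np

def chi_square_calibration(d, x, X_totals):
    """Calibrated weights minimizing the chi-square distance
    sum_i (w_i - d_i)^2 / d_i subject to sum_i w_i x_i = X_totals.
    d: (n,) design weights, x: (n, 2) auxiliary variables,
    X_totals: (2,) known population totals of the auxiliaries.
    A sketch of the classic construction; the paper studies
    several further distance functions."""
    T = (x * d[:, None]).T @ x                  # sum_i d_i x_i x_i^T
    lam = np.linalg.solve(T, X_totals - d @ x)  # Lagrange multipliers
    return d * (1.0 + x @ lam)

d = np.array([2.0, 2.0, 2.0, 2.0])                       # design weights
x = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
w = chi_square_calibration(d, x, np.array([10.0, 26.0]))
print(w @ x)  # calibrated weights reproduce the known totals
```

Changing the distance function changes the functional form of the weights, which is exactly the degree of freedom the proposed estimators exploit.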


Author(s):  
Kalva Sindhu Priya

Abstract: In the present scenario, it is widely recognized that almost every field is moving toward machine-based automation, from fundamentals to master-level systems. Among the enabling technologies, Machine Learning (ML) is an important tool closely related to Artificial Intelligence (AI): it uses known data or past experience to improve automatically, or to estimate the behavior or status of given data, through various algorithms. Modeling a system or data through machine learning is important and advantageous, as it helps in the development of later and newer versions. Today, information technology giants such as Facebook, Uber, and Google Maps have made machine learning a critical part of their ongoing operations to serve users better. In this paper, the various available ML algorithms are described briefly, and out of all the existing algorithms, the linear regression algorithm is used to predict a new set of values by taking older data as a reference. A detailed predictive model is then discussed, built with the Machine Learning and Deep Learning tools in MATLAB/Simulink. Keywords: Machine Learning (ML), Linear Regression algorithm, Curve fitting, Root Mean Squared Error
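The workflow the abstract describes, fitting a line to past data, predicting new values, and scoring the fit with RMSE, can be sketched in a few lines. The paper works in MATLAB/Simulink, so the Python below with invented data is only an illustration:

```python
import numpy as np

# Made-up past observations (the paper's data are not given in the abstract).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

slope, intercept = np.polyfit(x, y, deg=1)     # least-squares line fit
y_hat = slope * x + intercept                  # fitted values on the old data
rmse = np.sqrt(np.mean((y - y_hat) ** 2))      # root mean squared error

print(round(float(slope), 2), round(float(rmse), 3))
print(slope * 6 + intercept)  # predict a new value from the fitted line
```

RMSE on held-back data, rather than on the training points as here, is the fairer score for the predictive use case the paper targets.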


2019 ◽  
Vol 11 (21) ◽  
pp. 2463
Author(s):  
Arthur Moraux ◽  
Steven Dewitte ◽  
Bruno Cornelis ◽  
Adrian Munteanu

This paper proposes a multimodal and multi-task deep-learning model for instantaneous precipitation rate estimation. Using both thermal infrared satellite radiometer and automatic rain gauge measurements as input, our encoder–decoder convolutional neural network performs a multiscale analysis of these two modalities to estimate simultaneously the rainfall probability and the precipitation rate value. Precipitating pixels are detected with a Probability Of Detection (POD) of 0.75 and a False Alarm Ratio (FAR) of 0.3. Instantaneous precipitation rate is estimated with a Root Mean Squared Error (RMSE) of 1.6 mm/h.


Genes ◽  
2019 ◽  
Vol 10 (11) ◽  
pp. 862
Author(s):  
Tong Liu ◽  
Zheng Wang

We present a deep-learning package named HiCNN2 to learn the mapping between low-resolution and high-resolution Hi-C (a technique for capturing genome-wide chromatin interactions) data, which can enhance the resolution of Hi-C interaction matrices. The HiCNN2 package includes three methods each with a different deep learning architecture: HiCNN2-1 is based on one single convolutional neural network (ConvNet); HiCNN2-2 consists of an ensemble of two different ConvNets; and HiCNN2-3 is an ensemble of three different ConvNets. Our evaluation results indicate that HiCNN2-enhanced high-resolution Hi-C data achieve smaller mean squared error and higher Pearson’s correlation coefficients with experimental high-resolution Hi-C data compared with existing methods HiCPlus and HiCNN. Moreover, all of the three HiCNN2 methods can recover more significant interactions detected by Fit-Hi-C compared to HiCPlus and HiCNN. Based on our evaluation results, we would recommend using HiCNN2-1 and HiCNN2-3 if recovering more significant interactions from Hi-C data is of interest, and HiCNN2-2 and HiCNN if the goal is to achieve higher reproducibility scores between the enhanced Hi-C matrix and the real high-resolution Hi-C matrix.
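The two evaluation scores mentioned, mean squared error and Pearson's correlation against the real high-resolution matrix, can be sketched generically (this is not the HiCNN2 evaluation script, and the matrices below are synthetic):

```python
import numpy as np

def evaluate_enhancement(enhanced, target):
    """Compare an enhanced Hi-C contact matrix against the real
    high-resolution matrix using the two scores named in the
    abstract: mean squared error and Pearson's correlation."""
    mse = np.mean((enhanced - target) ** 2)
    pearson = np.corrcoef(enhanced.ravel(), target.ravel())[0, 1]
    return mse, pearson

rng = np.random.default_rng(0)
target = rng.random((40, 40))                              # stand-in "real" matrix
enhanced = target + 0.05 * rng.standard_normal((40, 40))   # a close reconstruction
mse, r = evaluate_enhancement(enhanced, target)
print(mse < 0.01, r > 0.9)
```

Lower MSE and higher Pearson correlation both indicate an enhanced matrix closer to the experimental high-resolution one, which is the sense in which HiCNN2 is said to outperform HiCPlus and HiCNN.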


2019 ◽  
Vol 11 (12) ◽  
pp. 1459 ◽  
Author(s):  
Linjing Zhang ◽  
Zhenfeng Shao ◽  
Jianchen Liu ◽  
Qimin Cheng

Estimation of forest aboveground biomass (AGB) is crucial for various technical and scientific applications, ranging from regional carbon and bioenergy policies to sustainable forest management. However, passive optical remote sensing, the most widely used source of remote sensing data for retrieving vegetation parameters, is constrained by spectral saturation problems and cloud cover. On the other hand, LiDAR data, which have been extensively used to estimate forest structure attributes, cannot provide sufficient spectral information on vegetation canopies. Thus, this study aimed to develop a novel synergistic approach to estimating biomass by integrating LiDAR data with Landsat 8 imagery through a deep learning-based workflow. First, the relationships between biomass and spectral vegetation indices (SVIs) and LiDAR metrics were investigated separately. Next, two groups of combined optical and LiDAR indices (i.e., COLI1 and COLI2) were designed and explored to identify their performance in biomass estimation. Finally, five prediction models, including K-nearest Neighbor, Random Forest, Support Vector Regression, a deep learning model (the Stacked Sparse Autoencoder network, SSAE), and multiple stepwise linear regression, were individually used to estimate biomass with input variables from different scenarios: (i) all the COLI1 (ACOLI1), (ii) all the COLI2 (ACOLI2), (iii) ACOLI1 and all the optical (AO) and LiDAR variables (AL), and (iv) ACOLI2, AO, and AL. Results showed that univariate models with the combined optical and LiDAR indices as explanatory variables presented better modeling performance than those with either optical or LiDAR data alone, regardless of the combination mode. The SSAE model obtained the best performance of the tested prediction algorithms for forest biomass estimation.
The best predictive accuracy was achieved by the SSAE model with inputs of combined optical and LiDAR variables (i.e., ACOLI1, AO, and AL), which yielded an R2 of 0.935, a root mean squared error (RMSE) of 15.67 Mg/ha, and a relative root mean squared error (RMSEr) of 11.407%. It was concluded that the presented combined indices offer a simple and effective way to integrate LiDAR-derived structure information with Landsat 8 spectral data for estimating forest biomass. Overall, the SSAE model with inputs of integrated Landsat 8 and LiDAR information produced accurate estimates of forest biomass. The presented modeling workflow will greatly facilitate future forest biomass estimation and carbon stock assessments.
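The three accuracy measures reported (R2, RMSE, and relative RMSE as a percentage of the observed mean) can be computed as follows; the observations and estimates below are invented for illustration, not the study's data:

```python
import numpy as np

def regression_scores(y_true, y_pred):
    """R^2, RMSE (in the units of y, here Mg/ha), and relative RMSE
    as a percentage of the observed mean -- the three accuracy
    measures reported for the biomass models. Generic sketch."""
    resid = y_true - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    rmse_rel = 100.0 * rmse / y_true.mean()
    return r2, rmse, rmse_rel

y_true = np.array([100.0, 120.0, 140.0, 160.0, 180.0])  # observed AGB, Mg/ha (invented)
y_pred = np.array([105.0, 118.0, 138.0, 165.0, 174.0])  # model estimates (invented)
r2, rmse, rmse_rel = regression_scores(y_true, y_pred)
print(r2, rmse, rmse_rel)
```

Reporting relative RMSE alongside RMSE makes models comparable across sites whose mean biomass levels differ.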


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Zohaib Iqbal ◽  
Dan Nguyen ◽  
Michael Albert Thomas ◽  
Steve Jiang

Abstract: Nuclear magnetic resonance spectroscopy (MRS) allows for the determination of atomic structures and concentrations of different chemicals in a biochemical sample of interest. MRS is used in vivo clinically to aid in the diagnosis of several pathologies that affect metabolic pathways in the body. Typically, this experiment produces a one-dimensional (1D) 1H spectrum containing several peaks that are well associated with biochemicals, or metabolites. However, since many of these peaks overlap, distinguishing chemicals with similar atomic structures becomes much more challenging. One technique capable of overcoming this issue is the localized correlated spectroscopy (L-COSY) experiment, which acquires a second spectral dimension and spreads overlapping signal across this second dimension. Unfortunately, the acquisition of a two-dimensional (2D) spectroscopy experiment is extremely time-consuming. Furthermore, quantitation of a 2D spectrum is more complex. Recently, artificial intelligence has emerged in the field of medicine as a powerful force capable of diagnosing disease, aiding in treatment, and even predicting treatment outcome. In this study, we utilize deep learning to: (1) accelerate the L-COSY experiment and (2) quantify L-COSY spectra. All training and testing samples were produced using simulated metabolite spectra for chemicals found in the human body. We demonstrate that our deep learning model greatly outperforms compressed sensing based reconstruction of L-COSY spectra at higher acceleration factors. Specifically, at four-fold acceleration, our method has less than 5% normalized mean squared error, whereas compressed sensing yields 20% normalized mean squared error. We also show that at low SNR (25% noise compared to maximum signal), our deep learning model has less than 8% normalized mean squared error for quantitation of L-COSY spectra.
These pilot simulation results appear promising and may help improve the efficiency and accuracy of L-COSY experiments in the future.
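A common definition of the normalized mean squared error quoted in the abstract is the squared reconstruction error relative to the energy of the reference spectrum; the authors' exact normalization is not stated, so the following is an assumption:

```python
import numpy as np

def nmse(estimate, reference):
    """Normalized mean squared error: squared error relative to the
    energy of the reference spectrum. A sketch of the figure of merit
    quoted in the abstract; the authors' normalization may differ."""
    return np.sum(np.abs(estimate - reference) ** 2) / np.sum(np.abs(reference) ** 2)

reference = np.array([0.0, 1.0, 4.0, 1.0, 0.0])  # toy 1D spectrum (invented)
estimate = np.array([0.0, 1.1, 3.8, 0.9, 0.1])   # toy reconstruction (invented)
print(nmse(estimate, reference))
```

Under this convention, "less than 5% NMSE" means the reconstruction error carries less than 5% of the reference spectrum's energy.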


Author(s):  
Seifeldeen Eteifa ◽  
Hesham A. Rakha ◽  
Hoda Eldardiry

Vehicle acceleration and deceleration maneuvers at traffic signals result in significant fuel and energy consumption. Green light optimal speed advisory systems require reliable estimates of signal switching times to improve vehicle energy/fuel efficiency. Obtaining these estimates is difficult for actuated signals, where the length of each green indication changes to accommodate varying traffic conditions and pedestrian requests. This study details a four-step long short-term memory (LSTM) deep learning based methodology that can be used to provide reasonable switching time estimates from green to red and vice versa while being robust to missing data. The four steps are data gathering, data preparation, machine learning model tuning, and model testing and evaluation. The input to the models includes controller logic, signal timing parameters, time of day, traffic state from detectors, vehicle actuation data, and pedestrian actuation data. The methodology is applied and evaluated on data from an intersection in Northern Virginia. A comparative analysis is conducted between different loss functions used in the LSTM, including the mean squared error, mean absolute error, and mean relative error, and a new loss function proposed in this paper. The results show that while the proposed loss function outperforms conventional loss functions in overall absolute error values, the choice of loss function depends on the prediction horizon. Specifically, the proposed loss function is slightly outperformed by the mean relative error for very short prediction horizons (less than 20 s) and by the mean squared error for very long prediction horizons (greater than 120 s).
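The three conventional loss functions compared in the study can be written down directly; the paper's own proposed loss is not specified in the abstract and is therefore omitted here:

```python
import numpy as np

# The three conventional losses compared against the paper's proposed one,
# as plain functions of true and predicted switching times (seconds).
def mse(y, y_hat):
    return np.mean((y - y_hat) ** 2)

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))

def mre(y, y_hat, eps=1e-8):
    # Mean relative error scales each miss by the true time, so it
    # penalizes errors on short horizons relatively more.
    return np.mean(np.abs(y - y_hat) / (y + eps))

y = np.array([10.0, 60.0, 120.0])      # true times to switch (invented)
y_hat = np.array([12.0, 55.0, 130.0])  # predicted times (invented)
print(mse(y, y_hat), mae(y, y_hat), mre(y, y_hat))
```

This scaling behaviour is consistent with the reported horizon dependence: relative error favours short horizons, while squared error weights the large misses typical of long horizons.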


2020 ◽  
Author(s):  
Luciano Melodia

The distribution of energy dose from Lu-177 radiotherapy can be estimated by convolving an image of a time-integrated activity distribution with a dose voxel kernel (DVK) consisting of different types of tissues. This fast but inaccurate approximation is inappropriate for personalized dosimetry, as it neglects tissue heterogeneity. The latter can be calculated using different imaging techniques such as CT and SPECT combined with a time-consuming Monte Carlo simulation. The aim of this study is, for the first time, the estimation of DVKs from CT-derived density kernels (DKs) via deep learning in convolutional neural networks (CNNs). The proposed CNN achieved, on the test set, a mean intersection over union (IoU) of 0.86 after 308 epochs and a corresponding mean squared error (MSE) of 1.24·10⁻⁴. This generalization ability shows that the trained CNN can indeed learn the difficult transfer function from DK to DVK. Future work will evaluate DVKs estimated by CNNs with full Monte Carlo simulations of a whole-body CT to predict patient-specific voxel dose maps.
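The reported IoU metric for comparing an estimated kernel against a reference kernel can be sketched as follows; the thresholding step is an assumption, since the abstract does not give those details:

```python
import numpy as np

def iou(pred, target, threshold=0.5):
    """Intersection over union between a predicted and a reference
    kernel, after thresholding both to binary masks. A sketch of the
    reported metric; the study's thresholding details are assumed."""
    p = pred >= threshold
    t = target >= threshold
    union = np.sum(p | t)
    return np.sum(p & t) / union if union else 1.0

target = np.zeros((8, 8)); target[2:6, 2:6] = 1.0   # 16-pixel reference square
pred = np.zeros((8, 8));   pred[2:6, 3:7] = 1.0     # same square, shifted one pixel
print(iou(pred, target))  # 12 overlap / 20 union = 0.6
```

IoU captures spatial overlap of the high-dose region, while the accompanying MSE captures the voxelwise magnitude error, so the two scores are complementary.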


Author(s):  
Lei Feng ◽  
Senlin Shu ◽  
Zhuoyi Lin ◽  
Fengmao Lv ◽  
Li Li ◽  
...  

Trained with the standard cross entropy loss, deep neural networks can achieve great performance on correctly labeled data. However, if the training data are corrupted with label noise, deep models tend to overfit the noisy labels, thereby achieving poor generalization performance. To remedy this issue, several loss functions have been proposed and demonstrated to be robust to label noise. Although most of the robust loss functions stem from the Categorical Cross Entropy (CCE) loss, they fail to embody the intrinsic relationships between CCE and other loss functions. In this paper, we propose a general framework dubbed Taylor cross entropy loss to train deep models in the presence of label noise. Specifically, our framework makes it possible to weight the extent of fitting the training labels by controlling the order of the Taylor series for CCE, and hence it can be robust to label noise. In addition, our framework clearly reveals the intrinsic relationships between CCE and other loss functions, such as Mean Absolute Error (MAE) and Mean Squared Error (MSE). Moreover, we present a detailed theoretical analysis to certify the robustness of this framework. Extensive experimental results on benchmark datasets demonstrate that our proposed approach significantly outperforms the state-of-the-art counterparts.
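The core idea, truncating the Taylor series of -log(p) at a finite order, can be sketched directly. Since -log(p) = sum over k >= 1 of (1-p)^k / k, order 1 yields a bounded, MAE-like penalty, while high orders recover the full CCE; this is a sketch of the idea, not the authors' implementation:

```python
import numpy as np

def taylor_ce(p_true, order):
    """Taylor cross entropy: truncate -log(p) = sum_{k>=1} (1-p)^k / k
    at a finite order. Low orders fit hard (possibly noisy) labels
    less aggressively; large orders approach standard CCE. A sketch
    of the framework's core idea.
    p_true: predicted probability assigned to the given label."""
    ks = np.arange(1, order + 1)
    return np.sum((1.0 - p_true) ** ks / ks)

p = 0.01  # the model strongly disagrees with the given (perhaps noisy) label
print(taylor_ce(p, 1))    # bounded, MAE-like penalty: 0.99
print(taylor_ce(p, 500))  # approaches the full cross entropy
print(-np.log(p))         # full CCE for comparison
```

The boundedness at low orders is what confers noise robustness: a single mislabeled example cannot dominate the gradient the way the unbounded -log(p) can.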


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Shuping Li ◽  
Taotang Liu

Predicting students’ performance is very important in higher education, as well as for deep learning and its relationship to educational data. Prediction of students’ performance provides support in selecting courses and designing appropriate future study plans for students. In addition, it helps teachers and managers to monitor students in order to support them and to adjust training programs toward the best results. One benefit of such prediction is that it reduces official warnings and expulsions of students due to poor performance, and it supports the students themselves in choosing courses and study plans appropriate to their abilities. The proposed method uses a deep neural network for prediction, extracting informative data as features with corresponding weights. Multiple updated hidden layers are used to design the neural network automatically; the numbers of nodes and hidden layers are controlled by feed-forward and backpropagation data produced from previous cases. The training mode is used to train the system with labeled data from the dataset, and the testing mode is used for evaluating the system. Mean absolute error (MAE) and root mean squared error (RMSE), together with accuracy, are used for evaluation of the proposed method. The proposed system has proven its worth in terms of efficiency through the achieved results of MAE (0.593) and RMSE (0.785).

