Urban Flood Prediction Using Deep Neural Network with Data Augmentation

Water ◽  
2020 ◽  
Vol 12 (3) ◽  
pp. 899 ◽  
Author(s):  
Hyun Il Kim ◽  
Kun Yeun Han

Data-driven models using artificial neural networks (ANNs), deep learning (DL), and numerical models are applied in flood analysis of urban watersheds, which have complex drainage systems. In particular, data-driven models using neural networks can present results quickly and be used for flood forecasting. However, because few records of actual flood history and heavy rainfall are available, it is difficult to conduct a preliminary analysis of floods in urban areas. In this study, a deep neural network (DNN) was used to predict the total accumulative overflow, and because of the insufficiency of observed rainfall data, 6 h rainfall events were surveyed nationwide in Korea. Statistical characteristics of each rainfall event were used as input data for the DNN. The target value of the DNN was the total accumulative overflow calculated from Storm Water Management Model (SWMM) simulations; SWMM is a one-dimensional model for rainfall-runoff analysis. Data augmentation was applied to enrich the training data for the DNN, with the augmentation applied ten times for each input combination. The practicality of the data augmentation was determined by predicting the total accumulative overflow over the testing data and the observed rainfall. The DNN predictions were compared with the simulated results obtained from the SWMM model, confirming that predictive performance improved when data augmentation was applied.
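The abstract does not state the augmentation formula; since the inputs are per-event rainfall statistics rather than images, one minimal sketch is to append ten jittered copies of each feature row (Gaussian jitter scaled by each feature's standard deviation is an assumption here, not the paper's method):

```python
import numpy as np

def augment_features(X, y, n_copies=10, noise_scale=0.05, seed=0):
    """Expand (X, y) by appending jittered copies of each feature row."""
    rng = np.random.default_rng(seed)
    X_parts, y_parts = [X], [y]
    for _ in range(n_copies):
        X_parts.append(X + rng.normal(0.0, noise_scale * X.std(axis=0), X.shape))
        y_parts.append(y)
    return np.vstack(X_parts), np.concatenate(y_parts)

rng = np.random.default_rng(1)
X = rng.random((40, 6))   # 40 rainfall events x 6 statistics (e.g. depth, peak)
y = rng.random(40)        # stand-in for SWMM total accumulative overflow
X_big, y_big = augment_features(X, y)
print(X_big.shape, y_big.shape)   # (440, 6) (440,)
```

Each original event keeps its SWMM target, so the DNN sees eleven slightly different feature vectors per simulated overflow value.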

2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are widely utilized for various missions in both civilian and military sectors. Many of these missions require UAVs to perceive the environments they navigate, which can be realized by training a computing machine to classify objects in the environment. One well-known approach is supervised deep learning, which enables a machine to classify objects but comes at a high cost in time and computational resources. Collecting large input datasets, pre-training processes such as labeling training data, and the need for a high-performance computer for training are some of the challenges that supervised deep learning poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and the design of a lightweight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes of 10,000 images each were used as input data, with 80% used for training the network and the remaining 20% for validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented; this algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
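The abstract does not define "sequential gradient descent" precisely; read as per-sample (stochastic) updates, which is what makes redundant data cheap to exploit, a minimal sketch on a toy least-squares problem might look like:

```python
import numpy as np

def sequential_gd(X, y, lr=0.05, epochs=50, seed=0):
    """Per-sample (sequential) gradient descent on a least-squares objective."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):     # visit samples one at a time
            grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5*(x.w - y)^2
            w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w                 # noiseless targets for the sketch
w = sequential_gd(X, y)
print(np.round(w, 3))          # converges toward [2, -1, 0.5]
```

Because the weights move after every sample, near-duplicate samples contribute immediate, cheap updates instead of being averaged into one full-batch gradient.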


2021 ◽  
Author(s):  
Debmitra Ghosh

Abstract SARS-CoV-2, or severe acute respiratory syndrome coronavirus 2, is the cause of the viral disease COVID-19. The rapid spread of COVID-19 is having a detrimental effect on the global economy and health. A chest X-ray of infected patients can be a crucial step in the battle against COVID-19: retrospective studies have found abnormalities in the chest X-rays of patients suggestive of COVID-19. This sparked the introduction of a variety of deep learning systems and studies showing that the accuracy of COVID-19 patient detection from chest X-rays is strongly promising. There are certain shortcomings, however: deep learning networks such as convolutional neural networks (CNNs) need a substantial amount of training data, and because the outbreak is recent, large datasets of radiographic images of COVID-19-infected patients are not yet available. In this research, we present a method to generate synthetic chest X-ray (CXR) images by developing a Deep Convolutional Generative Adversarial Network (DCGAN)-based model. In addition, we demonstrate that the synthetic images produced by the DCGAN can be utilized to enhance the performance of a CNN for COVID-19 detection. Classification using a CNN alone yielded 85% accuracy. Although several models are available, we chose MobileNet because it is a lightweight deep neural network with fewer parameters and higher classification accuracy. We use a deep neural network-based model to diagnose COVID-19-infected patients through radiological imaging of 5,859 chest X-ray images, employing a deep convolutional neural network and the pre-trained model "DenseNet 121" for two label classes (COVID-19 and Normal). To improve the classification accuracy, we further reduced the number of network parameters by introducing dense blocks, as proposed in DenseNets, into MobileNet.
By adding synthetic images produced by the DCGAN, the accuracy increased to 97%. Our goal is to use this method to speed up COVID-19 detection and lead to more robust radiology systems.
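The abstract does not detail the DCGAN architecture; the standard DCGAN generator upsamples a latent vector with stacked transposed convolutions (kernel 4, stride 2, padding 1), and the size arithmetic behind that design can be checked directly:

```python
def convT_out(size, kernel=4, stride=2, pad=1):
    """Output size of a transposed convolution: (size - 1)*stride - 2*pad + kernel."""
    return (size - 1) * stride - 2 * pad + kernel

# Typical DCGAN generator: project the latent vector to a 4x4 feature map,
# then double the spatial size once per transposed-convolution layer.
size = 4
for _ in range(4):
    size = convT_out(size)
print(size)   # 64: 4 -> 8 -> 16 -> 32 -> 64
```

With these settings each layer exactly doubles the spatial resolution, so four layers turn a 4x4 projection into a 64x64 synthetic image; CXR-sized outputs just need more doubling layers.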


Healthcare ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 234 ◽  
Author(s):  
Hyun Yoo ◽  
Soyoung Han ◽  
Kyungyong Chung

Recently, massive amounts of biometric big data have been collected by sensor-based IoT devices, and the collected data are classified into different types of health big data by various techniques. A personalized analysis technique is the basis for judging the risk factors of personal cardiovascular disorders in real time. The objective of this paper is to provide a model for personalized heart-condition classification that combines a fast, effective preprocessing technique with a deep neural network in order to process the biosensor input data accumulated in real time. The model learns the input data, develops an approximation function, and can help users recognize risk situations. For the analysis of the pulse frequency, a fast Fourier transform is applied in the preprocessing work, and data reduction is performed using the frequency-by-frequency ratios of the extracted power spectrum. A neural network algorithm is then applied to analyze the meanings of the preprocessed data; in particular, a deep neural network, which stacks multiple layers of nodes and establishes an operation model trained by gradient descent, is used. The completed model was trained by classifying ECG signals collected in advance into normal, control, and noise groups. Thereafter, ECG signals input in real time were classified into normal, control, and noise by the trained deep neural network system. To evaluate the performance of the proposed model, this study utilized the data-operation cost-reduction ratio and the F-measure. With the use of the fast Fourier transform and the cumulative frequency percentage, the ECG data were reduced to 1/32 of their original size, and the F-measure analysis showed that the model achieved 83.83% accuracy.
Given these results, the modified deep neural network technique can reduce the size of big data in terms of computing work, and it is an effective system for reducing operation time.
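The preprocessing chain (FFT, power spectrum, then reduction to per-band frequency ratios) can be sketched as follows; the band count, window length, and sampling rate here are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def spectrum_ratio_features(signal, n_bins=8):
    """FFT power spectrum collapsed to per-band energy ratios (sums to 1)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(power, n_bins)        # coarse frequency bands
    energy = np.array([b.sum() for b in bands])
    return energy / energy.sum()

fs = 256                                  # assumed sampling rate (Hz)
t = np.arange(fs * 2) / fs                # 2 s window
ecg_like = np.sin(2 * np.pi * 1.2 * t)    # stand-in for an ECG segment
feats = spectrum_ratio_features(ecg_like)
print(len(feats))                         # 8
```

Collapsing a 512-sample window to a handful of band ratios is what makes the reported 1/32 size reduction plausible: the classifier sees a short, fixed-length ratio vector instead of the raw waveform.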


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Michał Klimont ◽  
Mateusz Flieger ◽  
Jacek Rzeszutek ◽  
Joanna Stachera ◽  
Aleksandra Zakrzewska ◽  
...  

Hydrocephalus is a common neurological condition that can have traumatic ramifications and can be lethal without treatment. Nowadays, during therapy, radiologists have to spend a vast amount of time assessing the volume of cerebrospinal fluid (CSF) by manual segmentation of Computed Tomography (CT) images. Further, some of the segmentations are prone to radiologist bias and high intraobserver variability. To improve this, researchers are exploring methods to automate the process, which would enable faster and more unbiased results. In this study, we propose the application of a U-Net convolutional neural network to automatically segment CT brain scans for the location of CSF. U-Net is a neural network that has proven successful in various interdisciplinary segmentation tasks. We optimised training using state-of-the-art methods, including the "1cycle" learning rate policy, transfer learning, the generalized dice loss function, mixed floating-point precision, self-attention, and data augmentation. Even though the study was performed with a limited amount of data (80 CT images), our experiment has shown near human-level performance. We achieved a 0.917 mean dice score with a 0.0352 standard deviation on cross-validation across the training data and a 0.9506 mean dice score on a separate test set. To our knowledge, these results are better than those of any known method for CSF segmentation in hydrocephalic patients, and thus the approach is promising for practical applications.
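The dice score reported above is straightforward to compute; a minimal sketch on toy binary masks (not the paper's data) shows the 2|A∩B| / (|A| + |B|) definition:

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:6] = True      # 16 pixels predicted as CSF
target = np.zeros((8, 8), dtype=bool)
target[3:7, 3:7] = True    # 16 ground-truth pixels, 9 of them overlapping
print(dice_score(pred, target))   # 2*9 / (16+16) = 0.5625
```

The generalized dice loss used for training extends this same ratio across classes with per-class weighting, which keeps the small CSF regions from being swamped by background.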


2020 ◽  
pp. 1-14
Author(s):  
Esraa Hassan ◽  
Noha A. Hikal ◽  
Samir Elmuogy

Nowadays, Coronavirus (COVID-19) is considered one of the most critical pandemics on Earth, owing to its ability to spread rapidly among humans as well as animals. COVID-19 is expected to continue spreading around the world; around 70% of the Earth's population might be infected with COVID-19 in the coming years. Therefore, an accurate and efficient diagnostic tool is highly required, which is the main objective of our study. Manual classification was mainly used to detect different diseases, but it takes too much time and carries the probability of human error. Automatic image classification reduces doctors' diagnostic time, which could save human lives. We propose an automatic classification architecture based on a deep neural network, called the Worried Deep Neural Network (WDNN) model, with transfer learning. Comparative analysis reveals that the proposed WDNN model outperforms three pre-trained models, InceptionV3, ResNet50, and VGG19, in terms of various performance metrics. Due to the shortage of COVID-19 data, data augmentation was used to increase the number of images in the positive class, and normalization was then used to make all images the same size. Experimentation was done on a COVID-19 dataset collected from different cases, with 2623 images in total (1573 training, 524 validation, 524 test). Our proposed model achieved 99.046%, 98.684%, 99.119%, and 98.90% in terms of accuracy, precision, recall, and F-score, respectively. The results are compared with both traditional machine learning methods and those using Convolutional Neural Networks (CNNs), and they demonstrate the ability of our classification model to serve as an alternative to the current diagnostic tools.
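The four reported metrics all derive from one confusion matrix; a minimal sketch with hypothetical counts on a 524-image test split (the paper's actual counts are not given) shows the relationships:

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F-score from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    return acc, prec, rec, f1

# Hypothetical counts for illustration, not the paper's confusion matrix
acc, prec, rec, f1 = metrics(tp=258, fp=3, fn=2, tn=261)
print(round(acc * 100, 3))   # 99.046
```

At this accuracy level, only about 5 of 524 test images are misclassified, which is why precision, recall, and F-score all sit within a fraction of a percent of each other.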


2020 ◽  
Vol 13 (1) ◽  
pp. 34
Author(s):  
Rong Yang ◽  
Robert Wang ◽  
Yunkai Deng ◽  
Xiaoxue Jia ◽  
Heng Zhang

The random cropping data augmentation method is widely used to train convolutional neural network (CNN)-based target detectors on optical images (e.g., the COCO dataset). It can expand the scale of the dataset dozens of times while adding only a small amount of computation when training the neural network detector. In addition, random cropping can greatly enhance the spatial robustness of the model, because it makes the same target appear at different positions in the sample image. Nowadays, random cropping and random flipping have become the standard configuration for tasks with limited training data, which makes it natural to introduce them into the training of CNN-based synthetic aperture radar (SAR) image ship detectors. However, in this paper, we show that directly introducing traditional random cropping into the training of a CNN-based SAR image ship detector may generate considerable noise in the gradient during backpropagation, which hurts detection performance. To eliminate this noise in the training gradient, a simple and effective training method based on a feature map mask is proposed. Experiments prove that the proposed method can effectively eliminate the gradient noise introduced by random cropping and significantly improve detection performance under a variety of evaluation indicators, without increasing inference cost.
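The abstract does not specify the masking scheme; one plausible sketch, entirely an assumption here, keeps the crop zero-padded in place and records on a coarse feature-map grid which cells carry real content, so gradients from the padded cells could be dropped during backpropagation (the stride and sizes are also assumptions):

```python
import numpy as np

def random_crop_with_mask(img, crop, stride=8, seed=0):
    """Random crop zero-padded back to full size, plus a feature-map mask.

    The mask lives at 1/stride resolution and flags cells backed by real
    pixels; a trainer could zero the loss gradient on unmasked cells.
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    y = rng.integers(0, h - crop + 1)
    x = rng.integers(0, w - crop + 1)
    out = np.zeros_like(img)
    out[y:y + crop, x:x + crop] = img[y:y + crop, x:x + crop]
    mask = np.zeros((h // stride, w // stride), dtype=bool)
    mask[y // stride:(y + crop) // stride, x // stride:(x + crop) // stride] = True
    return out, mask

img = np.ones((64, 64))
out, mask = random_crop_with_mask(img, crop=32)
print(int(out.sum()), mask.shape)   # 1024 (8, 8)
```

The point of the mask is that the zero-padded border, which never contained a ship, contributes no gradient, which is one way the cropping noise described above could be suppressed.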


2021 ◽  
Vol 10 (1) ◽  
pp. 21
Author(s):  
Omar Nassef ◽  
Toktam Mahmoodi ◽  
Foivos Michelinakis ◽  
Kashif Mahmood ◽  
Ahmed Elmokashfi

This paper presents a data-driven framework for performance optimisation of Narrow-Band IoT user equipment. The proposed framework is an edge micro-service that suggests one-time configurations to user equipment communicating with a base station. Suggested configurations are delivered by a Configuration Advocate to improve energy consumption, delay, throughput, or a combination of those metrics, depending on the user-end device and the application. Reinforcement learning utilising gradient descent and a genetic algorithm is adopted synchronously with machine and deep learning algorithms to predict the environmental states and suggest an optimal configuration. The results highlight the adaptability of the deep neural network in predicting intermediary environmental states; additionally, they show the superior performance of the genetic reinforcement learning algorithm with regard to performance optimisation.
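To illustrate the genetic-search component only: the sketch below evolves a toy (transmit power, repetition count) configuration against an invented fitness trading throughput against energy; all weights, ranges, and parameter names are assumptions, not the paper's model:

```python
import random

def fitness(cfg):
    """Toy objective (assumed weights): reward throughput, penalise energy."""
    tx_power, repetitions = cfg
    throughput = 1.5 * repetitions - 0.1 * tx_power
    energy = tx_power * repetitions
    return throughput - 0.05 * energy

def genetic_search(pop_size=20, generations=30, seed=0):
    """Evolve (tx_power dBm, repetition count) configurations."""
    random.seed(seed)
    pop = [(random.uniform(0, 23), random.randint(1, 8)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # keep the fitter half (elitism)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)   # crossover of two parents
            power = min(max(a[0] + random.gauss(0, 1), 0), 23)  # mutate power
            children.append((power, random.choice([a[1], b[1]])))
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search()
print(best)    # drifts toward low tx power and high repetition count
```

In the framework described above, the fitness function would instead be fed by the learned predictions of the environmental state rather than a closed-form expression.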


2021 ◽  
Vol 35 (12) ◽  
pp. 5371-5387
Author(s):  
Bin Xue ◽  
Zhong-bin Xu ◽  
Xing Huang ◽  
Peng-cheng Nie
