Image Segmentation Methods for Flood Monitoring System

Water ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 1825
Author(s):  
Nur Muhadi ◽  
Ahmad Abdullah ◽  
Siti Bejo ◽  
Muhammad Mahadi ◽  
Ana Mijic

Floods occur in Malaysia almost every year and are among the most dangerous disasters in the country. The lack of data during flood events is the main constraint on improving flood monitoring systems. With the rapid development of information technology, flood monitoring systems based on computer vision have gained attention over the last decade. Computer vision requires an image segmentation technique to interpret the content of an image and facilitate analysis, and various segmentation algorithms have been developed to improve results. This paper presents a comparative study of image segmentation techniques for extracting water information from digital images. The segmentation methods were evaluated both visually and statistically. For the statistical evaluation, the Dice similarity coefficient and the Jaccard index were calculated to measure the similarity between the segmentation results and the ground-truth images. In the experiments, the hybrid technique obtained the highest values among the three methods, yielding an average Dice score of 97.70% and an average Jaccard index of 95.51%. We therefore conclude that the hybrid technique is a promising method for extracting water features from digital images.
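Both overlap measures used in this evaluation, the Dice score and the Jaccard index, can be computed directly from the counts of true-positive, false-positive, and false-negative pixels. A minimal pure-Python sketch (the function name is illustrative, not from the paper):

```python
def dice_and_jaccard(pred, truth):
    """Compare a predicted binary mask against a ground-truth mask.

    pred, truth: 2D lists of 0/1 values with the same shape.
    Returns (dice, jaccard) as fractions in [0, 1].
    """
    tp = fp = fn = 0
    for row_p, row_t in zip(pred, truth):
        for p, t in zip(row_p, row_t):
            if p and t:
                tp += 1        # water in both masks
            elif p and not t:
                fp += 1        # predicted water, actually background
            elif t and not p:
                fn += 1        # missed water pixel
    total = tp + fp + fn
    if total == 0:             # both masks empty: perfect agreement
        return 1.0, 1.0
    dice = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / total
    return dice, jaccard
```

The two scores are related by J = D / (2 - D), so the reported 97.70% Dice implies a Jaccard of about 95.5%, consistent with the 95.51% reported above.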

2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Xiaodong Huang ◽  
Hui Zhang ◽  
Li Zhuo ◽  
Xiaoguang Li ◽  
Jing Zhang

Extracting the tongue body accurately from a digital tongue image is a challenge for automated tongue diagnosis, owing to the blurred edge of the tongue body, interference from pathological details, and large variations in tongue size and shape. In this study, an automated tongue image segmentation method using an enhanced fully convolutional network with an encoder-decoder structure is presented. In the proposed network, a deep residual network is adopted as the encoder to obtain dense feature maps, and a Receptive Field Block is assembled behind the encoder. The Receptive Field Block can capture an adequate global contextual prior because of its multibranch convolution layers with varying kernel sizes. Moreover, a Feature Pyramid Network is used as the decoder to fuse multiscale feature maps, gathering sufficient positional information to recover a clear contour of the tongue body. A quantitative evaluation of the segmentation results on 300 tongue images from the SIPL-tongue dataset showed that the average Hausdorff distance, average symmetric mean absolute surface distance, average Dice similarity coefficient, average precision, average sensitivity, and average specificity were 11.2963, 3.4737, 97.26%, 95.66%, 98.97%, and 98.68%, respectively. The proposed method achieved the best performance compared with four other deep-learning-based segmentation methods (SegNet, FCN, PSPNet, and DeepLab v3+), with similar results on the HIT-tongue dataset. The experimental results demonstrate that the proposed method achieves accurate tongue image segmentation and meets the practical requirements of automated tongue diagnosis.
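The Hausdorff distance used in this evaluation measures the worst-case disagreement between two contours: the largest distance from any point on one contour to its nearest point on the other, taken symmetrically. A minimal sketch on small point sets (the function name is illustrative):

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets a and b,
    e.g. pixel coordinates sampled from a predicted contour and a
    ground-truth contour. O(len(a) * len(b)); fine for small sets."""
    def directed(src, dst):
        # For each source point, find its nearest neighbour in dst,
        # then take the worst (largest) of those nearest distances.
        return max(min(dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))
```

Because it takes a maximum over all points, a single stray outlier on the predicted contour can dominate the score, which is why it is usually reported alongside averaged surface distances such as the symmetric mean absolute surface distance above.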


Author(s):  
Chao Zeng ◽  
Wenjing Jia ◽  
Xiangjian He ◽  
Min Xu

Image segmentation techniques using graph theory have become a thriving research area in the computer vision community in recent years. This chapter focuses on the most up-to-date research achievements in graph-based image segmentation published in top journals and conferences in the computer vision community. The representative graph-based image segmentation methods covered in this chapter are classified into six categories, including the minimum-cut/maximum-flow model (called graph cut in some of the literature), the random walk model, the minimum spanning tree model, the normalized cut model, and isoperimetric graph partitioning. The basic rationale of each model is presented, and the image segmentation methods built on these graph-based models are discussed as the main concern of the chapter. Several performance evaluation methods for image segmentation are given, some public databases for testing image segmentation algorithms are introduced, and future work on graph-based image segmentation is discussed at the end of the chapter.
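The minimum spanning tree model treats pixels as graph vertices joined by edges weighted by dissimilarity (e.g. intensity difference), then merges components along cheap edges. A deliberately simplified sketch with a fixed merge threshold; practical MST methods such as Felzenszwalb-Huttenlocher use an adaptive criterion instead, and all names here are illustrative:

```python
class DisjointSet:
    """Union-find over pixel indices, used to track merged components."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def mst_segment(n_pixels, edges, threshold):
    """Kruskal-style segmentation sketch: visit edges in order of
    increasing weight and merge the two touching components whenever
    the joining edge costs no more than `threshold`.

    edges: iterable of (weight, u, v) with u, v pixel indices.
    Returns one component label per pixel.
    """
    ds = DisjointSet(n_pixels)
    for w, u, v in sorted(edges):          # cheapest edges first
        if w <= threshold:
            ds.union(u, v)
    return [ds.find(i) for i in range(n_pixels)]
```

On a toy 1D "image" with intensities [10, 11, 50, 51], the cheap edges (weight 1) merge the two similar pairs while the expensive middle edge (weight 39) keeps them apart, yielding two segments.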


2021 ◽  
Vol 13 (3) ◽  
pp. 1224
Author(s):  
Xiangbin Liu ◽  
Liping Song ◽  
Shuai Liu ◽  
Yudong Zhang

As an emerging biomedical image processing technology, medical image segmentation has made great contributions to sustainable medical care and has become an important research direction in the field of computer vision. With the rapid development of deep learning, medical image processing based on deep convolutional neural networks has become a research hotspot. This paper focuses on medical image segmentation based on deep learning. First, the basic ideas and characteristics of deep-learning-based medical image segmentation are introduced; its research status is reviewed, and the three main families of methods, together with their respective limitations, are summarized to point out future development directions. Then, based on a discussion of different pathological tissues and organs, their specific characteristics and classic segmentation algorithms are summarized. Despite the great achievements of recent years, deep-learning-based medical image segmentation still faces difficulties: segmentation accuracy is not high enough, datasets contain few medical images, image resolution is low, and inaccurate segmentation results cannot meet actual clinical requirements. Aiming at these problems, a comprehensive review of current deep-learning-based medical image segmentation methods is provided to help researchers solve the existing problems.


2020 ◽  
Vol 6 (4) ◽  
pp. 355-384
Author(s):  
Hiba Ramadan ◽  
Chaymae Lachqar ◽  
Hamid Tairi

Abstract Image segmentation is one of the most basic tasks in computer vision and remains an initial step of many applications. In this paper, we focus on interactive image segmentation (IIS), often referred to as foreground-background separation or object extraction, guided by user interaction. We provide an overview of the IIS literature by covering more than 150 publications, especially recent works that have not been surveyed before. Moreover, we give a comprehensive classification of them according to different viewpoints and present a general and concise comparison of the most recently published works. Furthermore, we survey widely used datasets, evaluation metrics, and available resources in the field of IIS.




Author(s):  
E.L. Sherzhukov ◽  
M.S. Makeev

The effectiveness of monitoring systems for runoff-driven floods is determined by the lead time available to implement preventive measures, reduce the severity of potential losses, and adequately assess the degree of threat. The three-level model of hydrological threat adopted in the Roshydromet system, comprising the normal mode, the adverse event (AE) mode, and the hazardous event (HE) mode, does not allow preventive measures to be ensured: when a flood develops rapidly, the water level often rises from the AE mark to the HE mark in no more than 10-15 minutes, which is insufficient for preventive action. A transition to a five-level model of the degree of hydrological threat, based on GOST 22.3.13-2018 (ISO 22324:2015) and providing, in addition to the threat levels in the Roshydromet notation, two further modes, a mode of increased attention and a mode of potential danger, makes it possible, on the basis of an ultra-short-term forecast of runoff-driven floods, to provide a lead time of 2-6 hours for assessing the degree of hydrological threat. In the existing flood monitoring system, the degree of hydrological threat is determined at the locations of hydrological gauging posts, which are often situated outside settlements. It is therefore proposed to introduce the concept of a zone of potential hydrological threat (ZPHU), for which the degree of threat is determined by calculation. Various approaches to ultra-short-term flood forecasting that take into account a model of precipitation-field formation are considered.
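The five-level model can be pictured as a simple threshold ladder on the forecast water level. The thresholds below are purely hypothetical placeholders, as the real marks are calibrated individually for each gauging post; the level names follow the modes described above:

```python
# Hypothetical stage thresholds (metres) for one gauging post.
# Each entry is (upper bound, mode name): a forecast level below the
# bound falls into that mode; above all bounds is a hazardous event.
LEVELS = [
    (2.0, "normal"),
    (3.0, "increased attention"),
    (4.0, "potential danger"),
    (5.0, "adverse event"),
]

def threat_level(water_level_m):
    """Map an ultra-short-term forecast water level to one of the
    five threat modes of the five-level model."""
    for threshold, name in LEVELS:
        if water_level_m < threshold:
            return name
    return "hazardous event"
```

The point of the two extra modes is visible here: a forecast crossing into "increased attention" or "potential danger" triggers preparation hours before the adverse-event mark is reached.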


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Andreas M. Weng ◽  
Julius F. Heidenreich ◽  
Corona Metz ◽  
Simon Veldhoen ◽  
Thorsten A. Bley ◽  
...  

Abstract Background Functional lung MRI techniques are usually associated with time-consuming post-processing, of which manual lung segmentation is the most cumbersome part. The aim of this study was to investigate whether deep-learning-based segmentation of lung images scanned with a fast UTE sequence exploiting the stack-of-spirals trajectory can provide sufficiently good accuracy for the calculation of functional parameters. Methods In this study, lung images were acquired in 20 patients suffering from cystic fibrosis (CF) and 33 healthy volunteers, using a fast UTE sequence with a stack-of-spirals trajectory and a minimum echo time of 0.05 ms. A convolutional neural network was then trained for semantic lung segmentation using 17,713 2D coronal slices, each paired with a label obtained from manual segmentation. Subsequently, the network was applied to 4920 independent 2D test images and the results were compared to manual segmentations using the Sørensen–Dice similarity coefficient (DSC) and the Hausdorff distance (HD). Lung volumes and fractional ventilation values calculated from both segmentations were compared using Pearson's correlation coefficient and Bland–Altman analysis. To investigate generalizability to patients outside the CF collective, in particular those exhibiting larger consolidations inside the lung, the network was additionally applied to UTE images from four patients with pneumonia and one with lung cancer. Results The overall DSC for lung tissue was 0.967 ± 0.076 (mean ± standard deviation) and the HD was 4.1 ± 4.4 mm. Lung volumes derived from manual and deep-learning-based segmentations, as well as fractional ventilation values, exhibited a high overall correlation (Pearson's correlation coefficient = 0.99 and 1.00). For the additional cohort with unseen pathologies/consolidations, the mean DSC was 0.930 ± 0.083, the HD was 12.9 ± 16.2 mm, and the mean difference in lung volume was 0.032 ± 0.048 L.
Conclusions Deep learning-based image segmentation in stack-of-spirals based lung MRI allows for accurate estimation of lung volumes and fractional ventilation values and promises to replace the time-consuming step of manual image segmentation in the future.
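The agreement statistics used here, Pearson's correlation and Bland–Altman bias with 95% limits of agreement, are straightforward to compute from paired measurements. A minimal sketch (function names are illustrative, not from the study's software):

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson's correlation coefficient between paired samples x, y."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def bland_altman(x, y):
    """Bland-Altman agreement between two measurement methods:
    returns (bias, lower limit, upper limit), where the limits are
    the bias +/- 1.96 standard deviations of the pairwise differences."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

For lung volumes, `x` would hold the manual-segmentation volumes and `y` the network-derived ones; a bias near zero with narrow limits indicates the two segmentations are interchangeable for volumetry.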


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Changyong Li ◽  
Yongxian Fan ◽  
Xiaodong Cai

Abstract Background With the development of deep learning (DL), more and more deep-learning-based methods have been proposed and have achieved state-of-the-art performance in biomedical image segmentation. However, these methods are usually complex and require the support of powerful computing resources, which makes them impractical in clinical settings. Thus, it is important to develop accurate DL-based biomedical image segmentation methods that can run under resource-constrained computing. Results A lightweight and multiscale network called PyConvU-Net is proposed to work with low-resource computing. In strictly controlled experiments, PyConvU-Net achieved good performance on three biomedical image segmentation tasks with the fewest parameters. Conclusions Our experimental results preliminarily demonstrate the potential of the proposed PyConvU-Net for biomedical image segmentation under resource-constrained computing.
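Pyramidal-convolution layers of the kind PyConvU-Net builds on save parameters by splitting the output channels across several kernel sizes and applying grouped convolution at the larger kernels. A back-of-the-envelope parameter count; the kernel and group settings below are illustrative of the PyConv idea, not PyConvU-Net's actual configuration:

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def pyconv_params(c_in, c_out, kernels, groups):
    """Weight count of a pyramidal convolution: output channels are
    split evenly across levels, and each level uses grouped convolution
    (each filter sees only c_in // g input channels) to cap the cost.

    kernels, groups: per-level lists, e.g. [3, 5, 7, 9] and [1, 4, 8, 16].
    """
    per_level_out = c_out // len(kernels)
    return sum(k * k * (c_in // g) * per_level_out
               for k, g in zip(kernels, groups))
```

With 64 input and 64 output channels, the pyramidal layer above uses 27,072 weights versus 36,864 for a plain 3x3 convolution, while covering receptive fields up to 9x9, which is the kind of trade-off that lets a multiscale network stay lightweight.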

