Bayesian Segmentation
Recently Published Documents


TOTAL DOCUMENTS: 82 (five years: 4)

H-INDEX: 12 (five years: 0)

2021
Author(s): Jianjun Yu, Jie Jia, Guoyu Zuo, Xuchen Li, Yilin Cao, ...

Entropy, 2021, Vol. 23 (3), pp. 301
Author(s): Christina Petschnigg, Markus Spitzner, Lucas Weitzendorf, Jürgen Pilz

The 3D modelling of indoor environments and the generation of process simulations play an important role in factory and assembly planning. In brownfield planning cases, existing data are often outdated and incomplete, especially for older plants, which were mostly planned in 2D. Thus, current environment models cannot be generated directly from existing data, and a holistic approach to building such a factory model in a highly automated fashion is largely non-existent. Major steps in generating an environment model of a production plant include data collection, data pre-processing, object identification and pose estimation. In this work, we elaborate on a methodical modelling approach that starts with the digitalization of large-scale indoor environments and ends with the generation of a static environment or simulation model. The object identification step is realized using a Bayesian neural network capable of point cloud segmentation. We examine the impact of the uncertainty information estimated by the Bayesian segmentation framework on the accuracy of the generated environment model. The steps of data collection and point cloud segmentation, as well as the resulting model accuracy, are evaluated on a real-world data set collected at the assembly line of a large-scale automotive production plant. The Bayesian segmentation network clearly surpasses the performance of the frequentist baseline and allows us to considerably increase the accuracy of model placement in a simulation scene.
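The per-point uncertainty such a Bayesian segmentation network provides can be estimated in several ways; one common technique is Monte Carlo dropout. The sketch below is illustrative only and does not reproduce the paper's network: `stochastic_forward` is a hypothetical stand-in for a dropout-enabled forward pass, and the logits are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(logits, rng, drop=0.1):
    """Placeholder for a forward pass with dropout left active at inference."""
    mask = rng.random(logits.shape) > drop
    return logits * mask

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mc_dropout_predict(logits, n_samples=50, rng=rng):
    """Average the softmax over stochastic passes; return labels + entropy."""
    probs = np.stack([softmax(stochastic_forward(logits, rng))
                      for _ in range(n_samples)])
    mean_p = probs.mean(axis=0)  # predictive distribution per point
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)
    return mean_p.argmax(axis=-1), entropy

# Two toy points: one with confident logits, one ambiguous.
logits = np.array([[8.0, 0.0, 0.0],    # clearly class 0
                   [1.0, 0.9, 1.1]])   # ambiguous between classes
labels, unc = mc_dropout_predict(logits)
```

Points with high predictive entropy can then be down-weighted or excluded before model placement, which is one way the uncertainty information could improve the accuracy of the generated environment model.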


2021, pp. 1-25
Author(s): Shuang Jiang, Quan Zhou, Xiaowei Zhan, Qiwei Li

Coronavirus disease 2019 (COVID-19) is a pandemic. To characterize the disease's transmissibility, we propose a Bayesian change point detection model using daily counts of actively infectious cases. Our model is built upon a Bayesian Poisson segmented regression model that can 1) capture the epidemiological dynamics under changing conditions caused by external or internal factors; 2) provide uncertainty estimates of both the number and locations of change points; and 3) adjust for any explanatory time-varying covariates. Our model can be used to evaluate public health interventions, identify latent events associated with spreading rates, and yield better short-term forecasts.
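The core idea of Poisson segmented regression is that each segment has its own event rate. The toy sketch below scores a single candidate change point by the profile Poisson log-likelihood; the paper's model is fully Bayesian, handles an unknown number of change points and covariates, none of which is reproduced here.

```python
import numpy as np
from math import lgamma

def poisson_loglik(y, lam):
    """Poisson log-likelihood of counts y under a constant rate lam."""
    y = np.asarray(y, float)
    return float((y * np.log(lam) - lam).sum()
                 - sum(lgamma(v + 1) for v in y))

def best_changepoint(y):
    """Try every split; each side uses its MLE rate (the segment mean)."""
    y = np.asarray(y, float)
    scores = []
    for k in range(1, len(y)):
        left, right = y[:k], y[k:]
        scores.append(poisson_loglik(left, left.mean())
                      + poisson_loglik(right, right.mean()))
    return int(np.argmax(scores)) + 1  # index where the rate changes

# Synthetic daily counts whose rate jumps at index 5
counts = [4, 5, 3, 6, 4, 20, 22, 18, 25, 21]
cp = best_changepoint(counts)  # -> 5
```

A Bayesian treatment would place priors on the number and locations of change points and on the segment rates, yielding the posterior uncertainty estimates the abstract describes rather than a single point estimate.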


2020
Author(s): Viktor Petukhov, Ruslan A. Soldatov, Konstantin Khodosevich, Peter V. Kharchenko

Spatial transcriptomics is an emerging stack of technologies that adds a spatial dimension to conventional single-cell RNA-sequencing. New protocols, based on in situ sequencing or multiplexed RNA fluorescent in situ hybridization, register the positions of single molecules in fixed tissue slices. Analysis of such data at the level of individual cells, however, requires accurate identification of cell boundaries. While many existing methods can approximate cell center positions using nuclei stains, current protocols do not report a robust signal on the cell membranes, making accurate cell segmentation a key barrier to downstream analysis and interpretation of the data. To address this challenge, we developed a tool for Bayesian Segmentation of Spatial Transcriptomics Data (Baysor), which optimizes segmentation by considering the likelihood of the transcriptional composition, size and shape of the cell. The Bayesian approach can take nuclear or cytoplasmic staining into account, but can also perform segmentation based on the detected transcripts alone. We show that Baysor segmentation can in some cases nearly double the number of identified cells while reducing contamination. Importantly, we demonstrate that Baysor performs well on data acquired using five different spatially resolved protocols, making it a useful general tool for the analysis of high-resolution spatial data.
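Segmenting cells from detected transcripts alone can be framed as fitting a mixture model over molecule positions. The toy version below fits only 2D Gaussian components with EM; Baysor's actual model additionally uses transcriptional composition, size and shape priors, and optional staining, none of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def em_segment(xy, n_cells, n_iter=30):
    """Assign molecules (xy: N x 2) to n_cells isotropic Gaussian 'cells'."""
    # Crude deterministic init: pick molecules spread along the x axis
    order = np.argsort(xy[:, 0])
    mu = xy[order[np.linspace(0, len(xy) - 1, n_cells).astype(int)]]
    sigma2 = np.full(n_cells, 1.0)
    for _ in range(n_iter):
        # E-step: responsibility of each cell for each molecule
        d2 = ((xy[:, None, :] - mu[None]) ** 2).sum(-1)
        logp = -d2 / (2 * sigma2) - np.log(sigma2)
        r = np.exp(logp - logp.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: update each cell's centre and spread
        w = r.sum(0)
        mu = (r.T @ xy) / w[:, None]
        sigma2 = (r * d2).sum(0) / (2 * w) + 1e-6
    return r.argmax(1)

# Two synthetic "cells" worth of molecules
a = rng.normal([0, 0], 0.3, (40, 2))
b = rng.normal([5, 5], 0.3, (40, 2))
labels = em_segment(np.vstack([a, b]), n_cells=2)
```

Replacing the Gaussian position likelihood with a joint likelihood over position and transcript identity is what lets a Baysor-style model separate adjacent cells of different types even when their molecules overlap in space.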


Water, 2020, Vol. 12 (6), pp. 1725
Author(s): Yin Chao Wu, Seong Jin Noh, Suyun Ham

This study presents a comparative assessment of image enhancement and segmentation techniques for automatically identifying flash flooding in low-resolution images taken by traffic-monitoring cameras. Due to equipment inaccuracies in severe weather conditions (e.g., raindrops or light refraction on camera lenses), low-resolution images are subject to noise that degrades the quality of the information. De-noising procedures are carried out to enhance the images by removing different types of noise. For the comparative assessment of de-noising techniques, Bayes shrink and three conventional methods are compared. After de-noising, image segmentation is applied to detect the inundation in the images automatically. For the comparative assessment of image segmentation techniques, k-means segmentation, Otsu segmentation, and Bayesian segmentation are compared. In addition, the detection of the inundation using image segmentation with and without de-noising is compared. The results indicate that among the de-noising methods, Bayes shrink with the thresholding discrete wavelet transform gives the most reliable result, and for image segmentation, the Bayesian segmentation is superior to the others. The results demonstrate that the proposed image enhancement and segmentation methods can effectively identify inundation in low-resolution images taken in severe weather conditions. Using the image-processing principles presented in this paper, we can estimate the inundation from images and assess flooding risks in the vicinity of local flooding locations. Such information will allow traffic engineers to take preventive or proactive actions to improve the safety of drivers and to protect and preserve the transportation infrastructure. This new observation with improved accuracy will enhance our understanding of dynamic urban flooding by filling an information gap in locations where conventional observations have limitations.
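Of the three segmentation techniques compared, Otsu's method is the simplest to illustrate: it picks the grey-level threshold that maximises the between-class variance of the intensity histogram. The sketch below implements it on a toy one-dimensional "frame" (dark road pixels vs. a brighter flooded region); the paper's actual pipeline and data are not reproduced.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the intensity threshold maximising between-class variance."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    levels = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)              # weight of the "dark" class up to each bin
    m = np.cumsum(p * levels)      # cumulative mean
    mT = m[-1]                     # global mean
    # Between-class variance; guard against empty classes at the extremes
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mT * w0 - m) ** 2 / (w0 * (1 - w0))
    var_b = np.nan_to_num(var_b)
    return levels[np.argmax(var_b)]

# Toy low-resolution frame: dark pavement pixels vs. bright flooded pixels
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(0.2, 0.05, 500),
                      rng.normal(0.8, 0.05, 500)]).clip(0, 1)
t = otsu_threshold(img)
flooded = img > t
```

Bayesian segmentation, which the study found superior, replaces this single global threshold with per-pixel posterior class probabilities, which is why it copes better with the residual noise left after de-noising.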


2020, Vol. 12 (2), pp. 216
Author(s): Dehui Xiong, Chu He, Xinlong Liu, Mingsheng Liao

Due to the development of deep convolutional neural networks (CNNs), great progress has been made in semantic segmentation recently. In this paper, we present an end-to-end Bayesian segmentation network based on generative adversarial networks (GANs) for remote sensing images. First, fully convolutional networks (FCNs) and GANs are used to realize the derivation from the prior probability and the likelihood to the posterior probability in Bayesian theory. Second, the cross-entropy loss in the FCN serves as a prior to guide the training of the GAN, so as to avoid mode collapse during training. Third, the generator of the GAN is used as a trainable spatial filter to construct the spatial relationship between labels. Experiments were performed on two remote sensing datasets, and the results demonstrate that the training of the proposed method is more stable than that of other GAN-based models. The average accuracy and mean intersection over union (MIoU) on the two datasets were 0.0465 and 0.0821, and 0.0772 and 0.1708 higher than those of the FCN, respectively.
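The training signal described above combines a pixel-wise cross-entropy term (the FCN prior) with an adversarial term from the discriminator. The sketch below shows only how such a combined generator loss is composed; the weighting `lam`, the function name, and the toy values are assumptions, not the paper's actual formulation.

```python
import numpy as np

def segmentation_gan_loss(probs, labels, d_score, lam=0.1):
    """probs: (N, C) softmax output per pixel; labels: (N,) class indices;
    d_score: discriminator's probability that the map is a real label map."""
    # Pixel-wise cross-entropy: the FCN term that acts as the prior
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    # Adversarial term: the generator wants d_score -> 1
    adv = -np.log(d_score + 1e-12)
    return ce + lam * adv

# Two toy pixels, mostly-correct predictions, a fooled-halfway discriminator
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
labels = np.array([0, 1])
loss = segmentation_gan_loss(probs, labels, d_score=0.5)
```

Keeping the cross-entropy term dominant (small `lam`) is one plausible reading of how the FCN prior stabilises GAN training and discourages mode collapse: the generator is always anchored to the ground-truth labels, not only to fooling the discriminator.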

