Automated Mapping of Transportation Embankments in Fine-Resolution LiDAR DEMs

2021 ◽  
Vol 13 (7) ◽  
pp. 1308
Author(s):  
Nigel Van Nieuwenhuizen ◽  
John B. Lindsay ◽  
Ben DeVries

Fine-resolution LiDAR DEMs can represent surface features such as road and railway embankments with high fidelity. However, transportation embankments are problematic for several environmental modelling applications, and particularly hydrological modelling. Currently, there are no automated techniques for the identification and removal of embankments from LiDAR DEMs. This paper presents a novel algorithm for identifying embankments in LiDAR DEMs. The algorithm utilizes repositioned transportation network cells as seed points in a region-growing operation. The embankment region grows based on derived morphometric parameters, including road surface width, embankment width, embankment height, and absolute slope. The technique was tested on eight LiDAR DEMs representing subsections of four watersheds in southwestern Ontario, Canada, ranging in size from 16 million cells to 134 million cells. The algorithm achieved a recall greater than or equal to 90% for seven of the eight DEMs, while achieving a Pearson’s phi correlation coefficient greater than 80% for five of the eight DEMs. Therefore, the method has moderate to high accuracy for identifying embankments. The processing times associated with applying the technique to the eight study site DEMs ranged from 1.4 s to 20.3 s, which demonstrates the practicality of using the embankment mapping tool in applications with data set sizes commonly encountered in practice.
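
The growth step lends itself to a compact sketch. The following Python/NumPy fragment is a minimal illustration, assuming a boolean seed mask of rasterized road cells; the slope cutoff, crest tolerance, and height cap are placeholder values, not the authors' calibrated morphometric parameters (road surface width and embankment width are omitted entirely).

```python
import numpy as np
from collections import deque

def grow_embankment(dem, seeds, cell_size=1.0, min_slope_deg=10.0, max_height=5.0):
    """Grow an embankment mask from road seed cells, walking down the steep
    side slopes and stopping where the terrain flattens into natural ground."""
    rows, cols = dem.shape
    region = np.zeros(dem.shape, dtype=bool)
    drop = np.zeros(dem.shape)                 # descent accumulated from the seeds
    queue = deque()
    for r, c in zip(*np.nonzero(seeds)):
        region[r, c] = True
        queue.append((r, c))
    flat = np.tan(np.radians(min_slope_deg))   # below this slope: natural ground
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or region[nr, nc]:
                continue
            descent = dem[r, c] - dem[nr, nc]
            new_drop = drop[r, c] + max(descent, 0.0)
            # accept near-level cells on the crest (road surface) and steep
            # cells on the embankment face, up to a plausible face height
            on_crest = new_drop < 0.5 and abs(descent) / cell_size < flat
            on_face = abs(descent) / cell_size >= flat and new_drop <= max_height
            if on_crest or on_face:
                region[nr, nc] = True
                drop[nr, nc] = new_drop
                queue.append((nr, nc))
    return region
```

In practice the seed mask would come from rasterizing (and repositioning) road and railway centerlines onto the DEM grid.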

Author(s):  
Maggie Hess

Purpose: Intraventricular hemorrhage (IVH) affects nearly 15% of preterm infants and can lead to ventricular dilation and cognitive impairment. MR-guided focused ultrasound surgery (MRgFUS) is under investigation as a means of ablating IVH clots. The procedure requires accurate, fast, and consistent quantification of ventricle and clot volumes. Methods: We developed a semi-autonomous segmentation (SAS) algorithm for measuring changes in ventricle and clot volumes. Images are normalized, and ventricle and clot masks are registered to them. Voxels of the registered masks, together with voxels obtained by thresholding the normalized images, are used as seed points for competitive region growing, which provides the final segmentation; the user selects the areas of interest after thresholding, and these selections become the final seeds for region growing. SAS was evaluated on an IVH porcine model. Results: SAS was compared to ground-truth manual segmentation (MS) for accuracy, efficiency, and consistency. Accuracy was determined by comparing clot and ventricle volumes produced by SAS and MS; a two one-sided test found the two significantly equivalent (p < 0.01). On average, SAS was 15 times faster than MS (p < 0.01). Consistency was determined by repeated segmentation of the same image by both methods, with SAS significantly more consistent than MS (p < 0.05). Conclusion: SAS is a viable method for quantifying the IVH clot and the lateral brain ventricles, and it is now serving in a large-scale porcine study of MRgFUS treatment of IVH clot lysis.
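
As a rough illustration of the competitive region-growing step, the sketch below grows two labels (ventricle and clot) from their seed masks at once, assigning each voxel to whichever class reaches it first at the lowest intensity cost. The intensity-distance cost and the two-class setup are assumptions for illustration, not the authors' implementation.

```python
import heapq
import numpy as np

def competitive_growing(img, seeds_ventricle, seeds_clot):
    """Two-label competitive region growing: both classes expand from their
    seeds, and each voxel joins the class that reaches it at the lowest
    intensity cost (distance to that class's seed mean)."""
    labels = np.zeros(img.shape, dtype=np.uint8)      # 0 = unassigned
    means = {1: float(img[seeds_ventricle].mean()),   # class 1: ventricle
             2: float(img[seeds_clot].mean())}        # class 2: clot
    heap = []
    for lab, mask in ((1, seeds_ventricle), (2, seeds_clot)):
        labels[mask] = lab
        for idx in zip(*np.nonzero(mask)):
            heapq.heappush(heap, (0.0, idx, lab))
    while heap:
        _, idx, lab = heapq.heappop(heap)
        for axis in range(img.ndim):                  # 2N-connected neighbours
            for step in (-1, 1):
                n = list(idx); n[axis] += step; n = tuple(n)
                if all(0 <= v < s for v, s in zip(n, img.shape)) and labels[n] == 0:
                    labels[n] = lab
                    cost = abs(float(img[n]) - means[lab])
                    heapq.heappush(heap, (cost, n, lab))
    return labels
```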


2020 ◽  
Vol 20 (03) ◽  
pp. 2050018
Author(s):  
Neeraj Shrivastava ◽  
Jyoti Bharti

In the domain of computer technology, image processing strategies have become part of many applications. Widely used image segmentation methods include seeded region growing (SRG), edge-based segmentation, and fuzzy c-means segmentation. SRG is a fast, robust, and effective image segmentation algorithm. In this paper, we examine different applications of SRG and analyze them. SRG delivers good results in the analysis of magnetic resonance images, brain images, breast images, and more. On the other hand, it has limitations as well: seed points must be selected manually, and poor manual seed placement at segmentation time leads to incorrect region selection. We therefore present a review of automatic seed-selection methods, with their advantages, disadvantages, and applications in different fields.
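
As a concrete example of the kind of automatic seed selection these methods perform, the sketch below places seeds where local intensity variance is lowest, i.e., in homogeneous interiors. The variance criterion, window size, and percentile cutoff are illustrative assumptions; the surveyed methods differ in their specific criteria.

```python
import numpy as np
from scipy import ndimage

def auto_seeds(img, window=5, var_percentile=5.0):
    """Boolean seed mask: pixels whose neighbourhood variance is in the
    lowest few percent, i.e. the most homogeneous interiors."""
    img = img.astype(float)
    mean = ndimage.uniform_filter(img, size=window)
    mean_sq = ndimage.uniform_filter(img * img, size=window)
    local_var = mean_sq - mean * mean      # Var[x] = E[x^2] - E[x]^2
    return local_var <= np.percentile(local_var, var_percentile)
```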


2013 ◽  
Vol 760-762 ◽  
pp. 1552-1555 ◽  
Author(s):  
Jing Jing Wang ◽  
Xiao Wei Song ◽  
Mei Fang

Image segmentation is used extensively in medical image processing and has been applied across many fields of medicine to help doctors make correct judgments and grasp a patient's condition. However, no single threshold-based segmentation technique applies to all medical images, so segmentation remains a challenging problem. This paper applies edge detection to recognize the contours of tissues and organs, then selects a number of seed points along those contours and locates the cancerous area by region growing. The results demonstrate that this method can locate the cancerous area accurately in most cases.
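
A minimal sketch of this contour-seeded pipeline follows, assuming a Sobel gradient magnitude for the edge step and a fixed intensity tolerance for the growth criterion; both are illustrative stand-ins rather than the paper's exact operators.

```python
import numpy as np
from scipy import ndimage

def contour_seeded_growing(img, edge_percentile=95.0, tol=10.0):
    """Seed region growing from high-gradient (contour) pixels and keep the
    connected regions of similar intensity that touch those seeds."""
    img = img.astype(float)
    grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    seeds = grad >= np.percentile(grad, edge_percentile)  # contour pixels
    mean = img[seeds].mean()                              # reference intensity
    candidate = np.abs(img - mean) <= tol                 # growth criterion
    labeled, _ = ndimage.label(candidate | seeds)
    keep = np.unique(labeled[seeds])
    return np.isin(labeled, keep[keep != 0])              # seeded components only
```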


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Huiyan Jiang ◽  
Baochun He ◽  
Di Fang ◽  
Zhiyuan Ma ◽  
Benqiang Yang ◽  
...  

We propose a region-growing vessel segmentation algorithm based on spectrum information. First, the algorithm applies a Fourier transform to the region of interest containing vascular structures and extracts the primary feature direction from the resulting spectrum. Edge information is then combined with the primary feature direction to compute the vascular structure's center points, which serve as seed points for region-growing segmentation. Finally, an improved region-growing method with a branch-based growth strategy segments the vessels. To demonstrate the algorithm's effectiveness, we conducted experiments on retinal images and abdominal liver vascular CT images. The results show that the proposed algorithm not only extracts the target vessel region with high quality but also effectively reduces manual intervention.
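
The direction-from-spectrum step can be illustrated compactly: an elongated vessel concentrates spectral energy along the axis perpendicular to its own, so the principal axis of the spectrum's second moments, rotated by 90 degrees, estimates the vessel direction. The moment-based estimator below is an assumption standing in for the paper's extraction procedure.

```python
import numpy as np

def primary_direction(roi):
    """Dominant structure orientation (radians) from the ROI's Fourier spectrum."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(roi - roi.mean())))
    rows, cols = spec.shape
    v, u = np.mgrid[-(rows // 2):rows - rows // 2, -(cols // 2):cols - cols // 2]
    # energy-weighted second moments of the spectrum
    suu = (spec * u * u).sum()
    svv = (spec * v * v).sum()
    suv = (spec * u * v).sum()
    theta_spec = 0.5 * np.arctan2(2.0 * suv, suu - svv)  # spectrum's principal axis
    return theta_spec + np.pi / 2.0                      # vessel axis is perpendicular
```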


Geophysics ◽  
2012 ◽  
Vol 77 (4) ◽  
pp. E281-E299 ◽  
Author(s):  
David Myer ◽  
Steven Constable ◽  
Kerry Key ◽  
Michael E. Glinsky ◽  
Guimin Liu

We describe the planning, processing, and uncertainty analysis for a marine CSEM survey of the Scarborough gas field off the northwest coast of Australia, consisting of 20 transmitter tow lines and 144 deployments positioned along a dense 2D profile and a complex 3D grid. The purpose of this survey was to collect a high-quality data set over a known hydrocarbon prospect and use it to further the development of CSEM as a hydrocarbon mapping tool. Recent improvements in navigation and processing techniques yielded high-quality frequency-domain data. Data pseudosections exhibit a significant anomaly that is laterally confined within the known reservoir location. Perturbation analysis of the uncertainties in the transmitter parameters yielded predicted uncertainties in amplitude and phase of just a few percent at close ranges. These uncertainties may, however, be underestimated. We introduce a method for deriving uncertainties more accurately using a line of receivers towed twice in opposite directions. Comparing the residuals for each line yields a Gaussian distribution directly related to the aggregate uncertainty of the transmitter parameters. Constraints on systematic error in the transmitter antenna dip and inline range can be calculated by perturbation analysis. Uncertainties are not equal in amplitude and phase, suggesting that these data are better inverted in amplitude and phase than in real and imaginary components. One-dimensional inversion showed that the reservoir and a confounding resistive layer above it cannot be separately resolved, even when the roughness constraint is modified to allow for jumps in resistivity and prejudices are provided, indicating that this level of detail is beyond the single-site CSEM data. Further, when range-dependent error bars are used, the resolution decreases at a shallower depth than when a fixed error level is used.
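
The repeated-tow idea reduces to a short computation: difference the data from the two opposing passes over the same line and read the aggregate transmitter-parameter uncertainty off the residual spread. In the sketch below, the log-amplitude residuals, the robust (MAD-based) sigma, and the percentage convention are assumptions.

```python
import numpy as np

def tow_repeat_uncertainty(amp_tow1, amp_tow2):
    """Aggregate per-tow amplitude uncertainty (%) from two repeated tows."""
    resid = np.log(amp_tow1) - np.log(amp_tow2)    # relative residuals
    resid = resid - np.median(resid)               # remove a systematic offset
    sigma = 1.4826 * np.median(np.abs(resid))      # robust Gaussian sigma (MAD)
    return 100.0 * sigma / np.sqrt(2.0)            # difference of two equal-variance tows
```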


Author(s):  
M. Shahzad ◽  
X. X. Zhu

In this paper, we present an approach for automatic (parametric) reconstruction of building shapes in 2-D/3-D using TomoSAR point clouds. These point clouds are generated by processing radar image stacks via an advanced interferometric technique called SAR tomography. The proposed approach reconstructs the building outline by exploiting both the available roof and façade information. Roof points are extracted by a surface-normal-based region-growing procedure via selected seed points, while façade points are extracted by thresholding the point scatterer density SD estimated with a robust M-estimator. Spatial clustering is then applied to the extracted roof points such that each roof cluster represents an individual building. Extracted façade points are reconstructed and then incorporated into the segmented roof clusters to reconstruct the complete building shape. Initial building footprints are derived using the alpha-shapes method and later regularized. Finally, rectilinear constraints are added to yield more geometrically plausible building shapes. The proposed approach is illustrated and validated on TomoSAR point clouds generated from a stack of TerraSAR-X high-resolution spotlight images from an ascending orbit only, covering two test areas: one containing relatively small buildings in densely populated regions, and the other containing moderate-sized buildings in the city of Las Vegas.
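
A hedged sketch of surface-normal-based region growing for roof extraction: estimate a normal per point from a local plane fit, then grow from seed indices wherever neighbouring normals are nearly parallel. The neighbourhood sizes and angle threshold are placeholders, not the paper's tuned values.

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def normals(points, k=10):
    """Unit normal per point from PCA of its k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    n = np.empty_like(points)
    for i, nb in enumerate(idx):
        q = points[nb] - points[nb].mean(axis=0)
        # right singular vector of the smallest singular value = plane normal
        n[i] = np.linalg.svd(q, full_matrices=False)[2][-1]
    return n

def grow_roof(points, seeds, angle_deg=10.0, radius=1.0, k=10):
    """Grow roof membership from seed indices while normals stay aligned."""
    nrm = normals(points, k)
    tree = cKDTree(points)
    cos_t = np.cos(np.radians(angle_deg))
    member = np.zeros(len(points), dtype=bool)
    queue = deque(seeds)                 # seeds: iterable of point indices
    while queue:
        i = queue.popleft()
        if member[i]:
            continue
        member[i] = True
        for j in tree.query_ball_point(points[i], radius):
            if not member[j] and abs(nrm[i] @ nrm[j]) >= cos_t:
                queue.append(j)
    return member
```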


Author(s):  
Drew Levin ◽  
Patrick Finley

Objective: To develop a spatially accurate biosurveillance synthetic data generator for the testing, evaluation, and comparison of new outbreak detection techniques.

Introduction: Development of new methods for the rapid detection of emerging disease outbreaks is a research priority in the field of biosurveillance. Because real-world data are often proprietary in nature, scientists must utilize synthetic data generation methods to evaluate new detection methodologies. Colizza et al. have shown that epidemic spread is dependent on the airline transportation network [1], yet current data generators do not operate over network structures. Here we present a new spatial data generator that models the spread of contagion across a network of cities connected by airline routes. The generator is developed in the R programming language and produces data compatible with the popular 'surveillance' software package.

Methods: Colizza et al. demonstrate the power-law relationships between city population, air traffic, and degree distribution [1]. We generate a transportation network as a Chung-Lu random graph [2] that preserves these scale-free relationships (Figure 1). First, given a power-law exponent and a desired number of cities, a probability mass function (PMF) is generated that mirrors the expected degree distribution for the given power-law relationship. Values are then sampled from this PMF to generate an expected degree (number of connected cities) for each city in the network. Edges (airline connections) are added to the network probabilistically as described in [2]. Unconnected graph components are each joined to the largest component using linear preferential attachment. Finally, city sizes are calculated based on an observed three-quarter power-law scaling relationship with the sampled degree distribution. Each city is represented as a customizable stochastic compartmental SIR model. Transportation between cities is modeled similarly to [2]. An infection is initialized in a single random city, and infection counts are recorded in each city for a fixed period of time. A consistent fraction of the modeled infection cases are recorded as daily clinic visits. These counts are then added onto statically generated baseline data for each city to produce a full synthetic data set. Alternatively, data sets can be generated using real-world networks, such as the one maintained by the International Air Transport Association.

Results: Dynamics such as the number of cities, degree distribution power-law exponent, traffic flow, and disease kinetics can be customized. In the presented example (Figure 2), the outbreak spreads over a 20-city transportation network. Infection spreads rapidly once the more populated hub cities are infected. Cities that are multiple flights away from the initially infected city are infected late in the process. The generator is capable of creating data sets of arbitrary size, length, and connectivity to better mirror a diverse set of observed network types.

Conclusions: New computational methods for outbreak detection and surveillance must be compared to established approaches. Outbreak mitigation strategies require a realistic model of human transportation behavior to best evaluate impact. These actions require test data that accurately reflect the complexity of the real-world data they would be applied to. The outbreak data generated here represent the complexity of modern transportation networks and are made to be easily integrated with established software packages to allow for rapid testing and deployment.

[Figure 1: Randomly generated scale-free transportation network with a power-law degree exponent of λ = 1.8. City and link sizes are scaled to reflect their weight.]
[Figure 2: Observed daily outbreak-related clinic visits across a randomly generated network of 20 cities. Each city is colored by the number of flights required to reach it from the initial infection location. These generated counts are added onto baseline data to create a synthetic data set for experimentation.]

Keywords: Simulation; Network; Spatial; Synthetic; Data
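
The paper's generator is written in R; purely as an illustration, the Python sketch below reproduces the two core steps under stated assumptions: a Chung-Lu random graph with power-law expected degrees and three-quarter-power city sizes, and a stochastic SIR outbreak seeded in one random city, with a simple flight-edge infection-pressure term standing in for the paper's transportation model. Component joining and baseline data are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def chung_lu_powerlaw(n_cities=20, gamma=1.8, k_min=1, k_max=None):
    """Chung-Lu graph: edge (i, j) exists with probability ~ w_i * w_j / sum(w)."""
    k_max = k_max or n_cities - 1
    k = np.arange(k_min, k_max + 1, dtype=float)
    pmf = k ** (-gamma); pmf /= pmf.sum()            # power-law degree PMF
    w = rng.choice(k, size=n_cities, p=pmf)          # expected degrees
    p = np.minimum(1.0, np.outer(w, w) / w.sum())    # Chung-Lu edge probabilities
    adj = np.triu(rng.random((n_cities, n_cities)) < p, k=1)
    return adj | adj.T, w

def sir_outbreak(adj, pop, beta=0.3, gamma_rec=0.1, travel=1e-3, days=100):
    """Stochastic SIR per city, with infection pressure along flight edges."""
    n = len(pop)
    S, I, R = pop.astype(float), np.zeros(n), np.zeros(n)
    I[rng.integers(n)] = 1.0; S -= I                 # seed one random city
    counts = np.zeros((days, n))
    for t in range(days):
        imported = travel * (adj @ I)                # pressure from neighbours
        new_inf = rng.poisson(np.minimum(S, beta * S * (I + imported) / pop))
        new_rec = rng.poisson(gamma_rec * I)
        new_inf = np.minimum(new_inf, S); new_rec = np.minimum(new_rec, I)
        S -= new_inf; I += new_inf - new_rec; R += new_rec
        counts[t] = new_inf                          # daily new cases per city
    return counts

adj, w = chung_lu_powerlaw()
pop = (1000 * w ** 0.75).astype(int)                 # 3/4-power city-size scaling
cases = sir_outbreak(adj, pop)
```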


2011 ◽  
Vol 9 (2) ◽  
Author(s):  
Norzailawati Mohd Noor ◽  
Alias Abdullah ◽  
Mazlan Hashim

Land use mapping in a development plan provides a resource of information and an important tool for decision making. In relation to this, fine-resolution satellite remotely sensed data have found wide application in land use/land cover mapping. This study reports on work carried out to classify fused images for detailed-scale land use mapping for a Local Plan. LANDSAT TM, SPOT Pan, and IKONOS satellite images were fused and examined using three data fusion techniques, namely the Principal Component Transform (PCT), the Wavelet Transform, and the Multiplicative fusing approach. The best fusion technique for the three datasets was determined by assessing class separabilities and by visual evaluation of selected subsets of the fused datasets. The Principal Component Transform was found to be the best technique for fusing the three datasets. The best fused data set was then subjected to further classification to produce level I land use classes, with levels II and III passing on to nine detailed classes for the local plan. The overall classification accuracy of the best fused data set was 0.86 (kappa statistic). Final land use output from the classified data was successfully generated in accordance with local plan land use mapping for development plan purposes.
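
For readers unfamiliar with PCT fusion, the sketch below shows the standard principal-component substitution scheme: project the multispectral bands onto principal components, swap the first component for the statistics-matched panchromatic band, and invert the transform. It assumes the multispectral bands are already resampled to the panchromatic grid; it is a generic illustration, not the study's exact processing chain.

```python
import numpy as np

def pct_fuse(ms, pan):
    """ms: (rows, cols, bands) multispectral; pan: (rows, cols) panchromatic."""
    rows, cols, bands = ms.shape
    X = ms.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)  # PCA via SVD
    pcs = (X - mu) @ Vt.T                                  # principal components
    p = pan.reshape(-1).astype(float)
    # match pan to PC1's mean/std so the substitution preserves radiometry
    p = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p                                          # substitute PC1
    fused = pcs @ Vt + mu                                  # inverse transform
    return fused.reshape(rows, cols, bands)
```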


2021 ◽  
Vol 40 (8) ◽  
pp. 567-575
Author(s):  
Myrto Papadopoulou ◽  
Farbod Khosro Anjom ◽  
Mohammad Karim Karimpour ◽  
Valentina Laura Socco

Surface-wave (SW) tomography is a technique widely used in seismology. It can provide higher resolution than the classical multichannel SW processing and inversion schemes usually adopted for near-surface applications. Nevertheless, the method is rarely used in this context, mainly due to the long processing times needed to pick the dispersion curves and the inability of two-station processing to discriminate between higher SW modes. To make the method efficient and to retrieve pseudo-2D/3D S-wave velocity (VS) and P-wave velocity (VP) models in a fast and convenient way, we develop a fully data-driven two-station dispersion-curve estimation scheme that achieves dense spatial coverage without the involvement of an operator. To handle higher SW modes, we apply a dedicated time-windowing algorithm to isolate and pick the different modes. A multimodal tomographic inversion is applied to estimate a VS model, which is then converted to a VP model using the Poisson's ratio estimated through the wavelength-depth method. We apply the method to a 2D seismic exploration data set acquired at a mining site, where strong lateral heterogeneity is expected, and to a 3D pilot data set recorded with state-of-the-art acquisition technology. We compare the results with those retrieved from classical multichannel analysis.
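
The final conversion step rests on a standard elasticity relation, VP = VS * sqrt(2(1 − ν) / (1 − 2ν)). The small helper below applies it, treating the Poisson's ratio as a scalar or per-cell array (an assumption about how the estimate is stored).

```python
import numpy as np

def vs_to_vp(vs, nu):
    """VP from VS and Poisson's ratio: VP = VS * sqrt(2(1 - nu) / (1 - 2 nu))."""
    vs = np.asarray(vs, dtype=float)
    nu = np.asarray(nu, dtype=float)
    return vs * np.sqrt(2.0 * (1.0 - nu) / (1.0 - 2.0 * nu))
```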


Circulation ◽  
2014 ◽  
Vol 130 (suppl_2) ◽  
Author(s):  
Geetha Rayarao ◽  
Robert W Biederman ◽  
Diane V Thompson ◽  
Sahadev T Reddy ◽  
June Yamrozik ◽  
...  

Introduction: In cardiac MRI (CMR), heart volumes are traditionally measured by applying contouring methods to contiguous image data. Herein, we introduce a new approach, Automatic Threshold and Manual Trimming (ATMT), applied to the same contiguous data set. Potentially, the ATMT method can be driven by seed/region-growing algorithms with minimal user supervision. We sought to establish its clinical validity. Hypothesis: We hypothesize that the ATMT approach is more accurate than conventional 'gold standard' cardiac contouring. Methods: Hearts from two populations (N=74) were evaluated: explanted transplant (Tx) hearts and a clinical validation cohort (in vivo). The transplanted hearts were imaged ex vivo using CMR and then weighed on a high-fidelity scale. Cardiac volume/mass was assessed in the explanted hearts (N=54), while in the patient cohort (N=20) stroke volume was measured non-invasively and compared with stroke volume independently measured via the CMR phase-velocity technique. Bland-Altman analysis was applied in a three-way manner for each group. Results: Bland-Altman statistics for standard deviation (SD), bias, and correlation (R) are summarized in Table 1. When compared with the independent measurements (weight/flow), ATMT has lower bias (close to zero) and lower SD. Further, any comparison involving cardiac contours has a substantially larger bias term and a higher SD. ATMT also has consistently higher correlations with the independent measurement than does the contour method. Conclusions: Based on multiple comparison metrics against independent measures, the ATMT approach is more accurate and reproducible for quantification of cardiac volume (integral to EF determination) than standard contouring. Furthermore, ATMT accommodates trabeculae and papillary structures more intuitively than contouring. This intrinsic accuracy, coupled with the potential for more rapid analysis, gives a valid impetus to develop the ATMT approach further, increasing CMR accuracy.
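
As an illustration of how a seed/region-growing realization of ATMT might look, the sketch below combines an automatic (Otsu) threshold with selection of the connected component containing a user-placed seed. The Otsu choice is an assumption, and the manual trimming step is omitted.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, nbins=256):
    """Otsu's threshold: maximize between-class variance over the histogram."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist / hist.sum()
    w = np.cumsum(p)                      # class-0 weight up to each bin
    m = np.cumsum(p * edges[:-1])         # class-0 cumulative mean
    between = (m[-1] * w - m) ** 2 / (w * (1.0 - w) + 1e-12)
    return edges[np.argmax(between)]

def atmt_volume(img, seed, voxel_ml):
    """Volume (mL) of the thresholded component containing `seed`;
    `seed` is an index tuple that must lie inside the blood pool."""
    mask = img >= otsu_threshold(img)
    labels, _ = ndimage.label(mask)
    return np.count_nonzero(labels == labels[seed]) * voxel_ml
```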

