The impact of pulmonary nodule size estimation accuracy on the measured performance of automated nodule detection systems

2008 ◽  
Author(s):  
Sergei V. Fotin ◽  
Anthony P. Reeves ◽  
David F. Yankelevitz ◽  
Claudia I. Henschke
Thorax ◽  
2017 ◽  
Vol 73 (8) ◽  
pp. 779-781 ◽  
Author(s):  
Marjolein A Heuvelmans ◽  
Joan E Walter ◽  
Rozemarijn Vliegenthart ◽  
Peter M A van Ooijen ◽  
Geertruida H De Bock ◽  
...  

We studied 2240 indeterminate solid nodules (volume 50–500 mm³) to determine the correlation between diameter and semi-automated volume measurements for pulmonary nodule size estimation. Intra-nodular diameter variation, defined as the maximum minus the minimum diameter through the nodule's centre, was 2.8 mm (median; IQR 2.2–3.7 mm), exceeding the 1.5 mm cutoff for nodule growth used in the Lung CT Screening Reporting and Data System (Lung-RADS). Using the mean or maximum axial diameter to estimate nodule volume led to a substantial mean overestimation of 47.2% and 85.1%, respectively, compared with semi-automated volumetry. Thus, the size of indeterminate nodules is poorly represented by diameter. Trial registration number: ISRCTN63545820 (pre-results).
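The scale of this overestimation follows from the cubic dependence of volume on diameter. The sketch below assumes a simple spherical nodule model with invented example numbers; the study itself used semi-automated volumetry, not a sphere assumption.

```python
import math

def sphere_volume_from_diameter(d_mm: float) -> float:
    """Volume (mm^3) of a sphere with diameter d_mm: V = pi * d^3 / 6."""
    return math.pi * d_mm ** 3 / 6.0

# Hypothetical nodule: semi-automated volume 200 mm^3,
# mean axial diameter 7.5 mm, maximum axial diameter 8.5 mm.
measured = 200.0
for label, d in [("mean diameter", 7.5), ("max diameter", 8.5)]:
    v = sphere_volume_from_diameter(d)
    # A 1 mm difference in diameter changes the implied volume by tens of percent.
    print(f"{label}: {v:.0f} mm^3 ({(v / measured - 1) * 100:+.0f}% vs measured)")
```

Because volume scales with the cube of diameter, even a modest diameter error inflates the implied volume disproportionately, which is consistent with the larger bias seen for the maximum diameter.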


Diagnostics ◽  
2020 ◽  
Vol 10 (3) ◽  
pp. 131 ◽  
Author(s):  
Shimaa EL-Bana ◽  
Ahmad Al-Kabbany ◽  
Maha Sharkas

This research is concerned with malignant pulmonary nodule detection (PND) in low-dose CT scans. Due to its crucial role in the early diagnosis of lung cancer, PND has considerable potential to improve the survival rate of patients. We propose a two-stage framework that exploits the ever-growing advances in deep neural network models and that comprises a semantic segmentation stage followed by localization and classification. We employ the recently published DeepLab model for semantic segmentation, and we show that it significantly improves the accuracy of nodule detection compared to the classical U-Net model and its most recent variants. Using the widely adopted Lung Nodule Analysis dataset (LUNA16), we evaluate the performance of the semantic segmentation stage with two network backbones, namely, MobileNet-V2 and Xception. We present the impact of various model training parameters and the computational time on the detection accuracy, featuring a 79.1% mean intersection-over-union (mIoU) and an 88.34% Dice coefficient. This represents an mIoU increase of 60% and a Dice coefficient increase of 30% compared to U-Net. The second stage feeds the output of the DeepLab-based semantic segmentation into a localization-then-classification stage, realized using Faster R-CNN and SSD with Inception-V2 as the backbone. On LUNA16, the two-stage framework attained a sensitivity of 96.4%, outperforming other recent models in the literature, including deep models. Finally, we show that adopting a transfer learning approach, in particular reusing the DeepLab model weights from the first stage of the framework, to infer binary (malignant–benign) labels on the Kaggle dataset for pulmonary nodules achieves a classification accuracy of 95.66%, approximately a 4% improvement over the recent literature.
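The segmentation metrics quoted above (mIoU and Dice) are standard overlap measures on binary masks. The sketch below is a generic NumPy illustration with invented toy masks, not the authors' evaluation code.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union: |A ∩ B| / |A ∪ B| for binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total else 1.0

# Toy 4x4 masks: the prediction covers 2 of the 3 ground-truth pixels.
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1] = True                      # 2 predicted pixels
gt = np.zeros((4, 4), dtype=bool)
gt[1:4, 1] = True                        # 3 ground-truth pixels
print(iou(pred, gt), dice(pred, gt))     # 2/3 and 0.8
```

Note that Dice is always at least as large as IoU for the same masks, which is why the paper's Dice figure (88.34%) exceeds its mIoU figure (79.1%).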


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 46033-46044 ◽  
Author(s):  
Jun Wang ◽  
Jiawei Wang ◽  
Yaofeng Wen ◽  
Hongbing Lu ◽  
Tianye Niu ◽  
...  

2021 ◽  
pp. 096228022110432
Author(s):  
Mian Wang ◽  
Bryce B. Reeve

The use of patient-reported outcomes measures is gaining popularity in clinical trials for comparing patient groups. Such comparisons typically focus on the differences in group means and are carried out using either a traditional sum-score-based approach or item response theory (IRT)-based approaches. Several simulation studies have evaluated different group mean comparison approaches in the past, but the performance of these approaches remained unknown under certain uninvestigated conditions (e.g. under the impact of differential item functioning (DIF)). By incorporating some of the uninvestigated simulation features, the current study examines Type I error, statistical power, and effect size estimation accuracy associated with group mean comparisons using simple sum scores, IRT model likelihood ratio tests, and IRT expected-a-posteriori scores. Manipulated features include sample size per group, number of items, number of response categories, strength of discrimination parameters, location of thresholds, impact of DIF, and presence of missing data. Results are summarized and visualized using decision trees.
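The sum-score comparison evaluated above can be illustrated with a minimal Monte Carlo sketch that estimates Type I error from repeated two-group comparisons of sum scores under a true null. This greatly simplifies the study's design (no IRT data generation, no DIF, no missing data) and uses a normal approximation in place of the exact t critical value.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_per_group, n_reps = 10, 100, 2000
crit = 1.96  # normal approximation to the two-sided t critical value at alpha = .05

rejections = 0
for _ in range(n_reps):
    # Both groups answer the same binary items with identical probabilities,
    # so the null hypothesis of equal group means is true by construction.
    p = rng.uniform(0.3, 0.7, n_items)
    g1 = rng.binomial(1, p, (n_per_group, n_items)).sum(axis=1)  # sum scores
    g2 = rng.binomial(1, p, (n_per_group, n_items)).sum(axis=1)
    se = np.sqrt(g1.var(ddof=1) / n_per_group + g2.var(ddof=1) / n_per_group)
    t = (g1.mean() - g2.mean()) / se
    rejections += abs(t) > crit
print(f"empirical Type I error: {rejections / n_reps:.3f}")  # should be near .05
```

Under the conditions the paper manipulates (DIF, missing data, short scales), this empirical rejection rate can drift away from the nominal .05, which is exactly what the simulation study quantifies.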


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3768
Author(s):  
Yongshou Yang ◽  
Shiliang Fang

The broadband acoustic Doppler current profiler (ADCP) is widely used in agricultural water resource applications such as river discharge monitoring and flood warning. Improving the velocity estimation accuracy of a broadband ADCP by adjusting the waveform parameters of its phase-encoded signal reduces the velocity measurement range and the water stratification accuracy, while improving the stratification accuracy degrades the velocity estimation accuracy. To minimize the impact of these two trade-offs on the measurement results, we study the ADCP waveform optimization problem of satisfying environmental constraints while maintaining high velocity estimation accuracy or stratification accuracy. Firstly, the relationship between velocity or distance estimation accuracy and the signal waveform parameters is analysed using the ambiguity function. Secondly, the constraints that the current velocity range, velocity distribution, and other environmental characteristics place on the waveform parameters are studied. For two common measurement applications, two dynamic waveform parameter configuration methods with environmental adaptability and optimal velocity estimation accuracy or stratification accuracy are proposed based on the nonlinear programming principle. Experimental results show that, compared with existing methods, the velocity estimation accuracy of the proposed method is improved by more than 50% and the stratification accuracy by more than 22%.
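One tension behind this waveform trade-off is the standard pulse-pair Doppler ambiguity: a longer lag between phase-coded transmissions improves velocity precision but shrinks the unambiguous velocity range, since v_a = c / (4·f0·T). The sketch below uses illustrative numbers, not the paper's actual waveform design.

```python
C = 1500.0  # nominal speed of sound in water, m/s

def ambiguity_velocity(f0_hz: float, lag_s: float) -> float:
    """Maximum unambiguous radial velocity for pulse-pair Doppler
    estimation: v_a = c / (4 * f0 * T)."""
    return C / (4.0 * f0_hz * lag_s)

# Hypothetical 300 kHz ADCP: doubling the lag halves the unambiguous range,
# which is why better precision (longer lag) costs measurable velocity span.
for lag in (1e-3, 2e-3, 4e-3):
    print(f"lag {lag * 1e3:.0f} ms -> ±{ambiguity_velocity(300e3, lag):.3f} m/s")
```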


2019 ◽  
Vol 13 (11) ◽  
pp. 3045-3059 ◽  
Author(s):  
Nick Rutter ◽  
Melody J. Sandells ◽  
Chris Derksen ◽  
Joshua King ◽  
Peter Toose ◽  
...  

Abstract. Spatial variability in snowpack properties negatively impacts our capacity to make direct measurements of snow water equivalent (SWE) using satellites. A comprehensive data set of snow microstructure (94 profiles at 36 sites) and snow layer thickness (9000 vertical profiles across nine trenches) collected over two winters at Trail Valley Creek, NWT, Canada, was applied in synthetic radiative transfer experiments. This allowed for robust assessment of the impact of estimation accuracy of unknown snow microstructural characteristics on the viability of SWE retrievals. Depth hoar layer thickness varied over the shortest horizontal distances, controlled by subnivean vegetation and topography, while variability in total snowpack thickness approximated that of wind slab layers. Mean horizontal correlation lengths of layer thickness were less than a metre for all layers. Depth hoar was consistently ∼30 % of total depth, and with increasing total depth the proportion of wind slab increased at the expense of the decreasing surface snow layer. Distinct differences were evident between distributions of layer properties; a single median value represented density and specific surface area (SSA) of each layer well. Spatial variability in microstructure of depth hoar layers dominated SWE retrieval errors. A depth hoar SSA estimate of around 7 % under the median value was needed to accurately retrieve SWE. In shallow snowpacks <0.6 m, depth hoar SSA estimates of ±5 %–10 % around the optimal retrieval SSA allowed SWE retrievals within a tolerance of ±30 mm. Where snowpacks were deeper than ∼30 cm, accurate values of representative SSA for depth hoar became critical as retrieval errors were exceeded if the median depth hoar SSA was applied.
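The retrieval target itself follows a simple relation: SWE in mm water equivalent is the sum over layers of thickness times bulk density, since 1 kg m⁻² of water corresponds to 1 mm of SWE. The layer thicknesses and densities below are invented for illustration, not Trail Valley Creek measurements.

```python
def swe_mm(layers):
    """SWE in mm water equivalent from (thickness_m, density_kg_m3) layers;
    1 kg m^-2 of water equals 1 mm of SWE."""
    return sum(thickness * density for thickness, density in layers)

# Hypothetical three-layer tundra snowpack:
# surface snow, wind slab, and depth hoar (~30% of total depth, as in the study).
layers = [(0.10, 150.0),   # surface snow
          (0.25, 320.0),   # wind slab
          (0.15, 250.0)]   # depth hoar
print(f"SWE = {swe_mm(layers):.1f} mm")
```

Because each layer contributes thickness × density, an error in the assumed microstructure (and hence retrieved thickness) of the depth hoar layer propagates directly into the SWE estimate, consistent with depth hoar dominating the retrieval errors above.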


2018 ◽  
Vol 2018 ◽  
pp. 1-7
Author(s):  
A. B. Vallejo-Mora ◽  
M. Toril ◽  
S. Luna-Ramírez ◽  
M. Regueira ◽  
S. Pedraza

UpLink Power Control (ULPC) is a key radio resource management procedure in mobile networks. In this paper, an analytical model for estimating the impact of increasing the nominal power parameter in the ULPC algorithm for the Physical Uplink Shared CHannel (PUSCH) in Long Term Evolution (LTE) is presented. The aim of the model is to predict the effect of changing the nominal power parameter in a cell on the interference and Signal-to-Interference-plus-Noise Ratio (SINR) of that cell and its neighbors from network statistics. Model assessment is carried out by means of a field trial where the nominal power parameter is increased in some cells of a live LTE network. Results show that the proposed model achieves reasonable estimation accuracy, provided uplink traffic does not change significantly.
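The nominal power parameter enters through the open-loop part of the standard PUSCH power control formula of 3GPP TS 36.213, P = min(P_CMAX, 10·log₁₀(M) + P0 + α·PL). The sketch below omits the closed-loop correction terms and uses illustrative numbers; it is not the paper's analytical model.

```python
import math

def pusch_power_dbm(p0_dbm: float, alpha: float, path_loss_db: float,
                    n_prb: int, p_cmax_dbm: float = 23.0) -> float:
    """Open-loop LTE PUSCH transmit power (3GPP TS 36.213, closed-loop
    terms omitted): P = min(P_CMAX, 10*log10(M) + P0 + alpha * PL)."""
    return min(p_cmax_dbm, 10 * math.log10(n_prb) + p0_dbm + alpha * path_loss_db)

# Raising the nominal power P0 by 3 dB raises transmit power by 3 dB,
# until the device power cap P_CMAX clips it for cell-edge users.
for p0 in (-106.0, -103.0):
    p = pusch_power_dbm(p0, alpha=0.8, path_loss_db=130.0, n_prb=10)
    print(f"P0 = {p0:.0f} dBm -> PUSCH power {p:.1f} dBm")
```

This is why the paper's model must predict both effects of a P0 increase: the target cell's SINR improves, while the extra transmit power raises interference in neighboring cells.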

