feature variability
Recently Published Documents

TOTAL DOCUMENTS: 34 (five years: 5)
H-INDEX: 6 (five years: 0)

2021 ◽ Vol 161 ◽ pp. S1519-S1520
Author(s): M. Zaffaroni, G. Carloni, S. Volpe, C. Garibaldi, G. Marvaso, ...

2021 ◽ Vol 8 (1)
Author(s): Montserrat Carles, Tobias Fechter, Luis Martí-Bonmatí, Dimos Baltas, Michael Mix

Abstract

Background: Radiomics analysis usually involves, especially in multicenter and large hospital studies, different imaging protocols for the acquisition, reconstruction, and processing of data. Differences in protocols can lead to differences in the quantification of the biomarker distribution and hence to radiomic feature variability. The aim of our study was to identify those radiomic features that are robust to the different degrading factors in positron emission tomography (PET) studies. We proposed the use of the standardized measurements of the European Association Research Ltd. (EARL) accreditation to retrospectively identify the radiomic features with low variability across systems and reconstruction protocols. In addition, we presented a reproducible procedure to identify PET radiomic features robust to PET/CT imaging metal artifacts. In 27 heterogeneous homemade phantoms, for which ground truth was accurately defined by CT segmentation, we evaluated the segmentation accuracy and radiomic feature reliability given by the contrast-oriented algorithm (COA) and the 40% threshold PET segmentation. In the comparison of two data sets, robustness was defined by Wilcoxon rank tests, bias was quantified by Bland–Altman (BA) plot analysis, and strong correlations were identified by the Spearman correlation test (r > 0.8, with p satisfying the multiple-test Bonferroni correction).

Results: Forty-eight radiomic features were robust to system, 22 to resolution, 102 to metal artifacts, and 42 to different PET segmentation tools. Overall, only 4 radiomic features were simultaneously robust to all degrading factors. Although both segmentation approaches significantly underestimated the volume with respect to the ground truth, with relative deviations of −62 ± 36% for COA and −50 ± 44% for 40%, 98 radiomic features derived from COA and 102 derived from 40% were strongly correlated with and/or robust with respect to those derived from the ground truth.

Conclusion: In multicenter studies, we recommend the analysis of EARL accreditation measurements in order to retrospectively identify robust PET radiomic features. Furthermore, 4 radiomic features (area under the curve of the cumulative SUV volume histogram, skewness, kurtosis, and gray-level variance derived from GLRLM after application of an equal-probability quantization algorithm to the voxels within the lesion) were robust to all degrading factors. In addition, the feasibility of the 40% and COA segmentations for use in radiomics analysis has been demonstrated.
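The robustness criteria described in the abstract (paired Wilcoxon tests for robustness, Bland–Altman mean difference for bias, and Bonferroni-corrected Spearman correlation) can be sketched roughly as follows. This is a minimal illustration, not the authors' code; the feature-dictionary layout and the significance threshold are assumptions.

```python
import numpy as np
from scipy.stats import wilcoxon, spearmanr

def robust_features(feats_a, feats_b, alpha=0.05):
    """Compare radiomic features between two paired acquisition settings.

    feats_a, feats_b: dicts mapping feature name -> per-phantom values
    (paired measurements of the same phantoms under two protocols).
    A feature is flagged 'robust' if a paired Wilcoxon test finds no
    significant shift, and 'correlated' if Spearman r > 0.8 with a
    Bonferroni-corrected p-value, mirroring the criteria above.
    """
    n_tests = len(feats_a)                 # Bonferroni correction factor
    results = {}
    for name in feats_a:
        a = np.asarray(feats_a[name], dtype=float)
        b = np.asarray(feats_b[name], dtype=float)
        _, p_w = wilcoxon(a, b)            # paired, non-parametric shift test
        r, p_s = spearmanr(a, b)           # rank correlation
        bias = np.mean(a - b)              # Bland–Altman mean difference
        results[name] = {
            "robust": p_w > alpha,
            "correlated": (r > 0.8) and (p_s < alpha / n_tests),
            "bias": bias,
        }
    return results
```

A fixed offset between protocols, for instance, leaves a feature perfectly rank-correlated but not robust, which is why the study reports the two criteria separately.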


2021 ◽ Vol 11
Author(s): Binsheng Zhao

Radiomics is the method of choice for investigating the associations between cancer imaging phenotype, cancer genotype, and clinical outcome in the era of precision medicine. The rapid spread of this new methodology has benefited from existing advances in the core technologies of the radiomics workflow: image acquisition, tumor segmentation, feature extraction, and machine learning. However, despite the rapidly increasing body of publications, no developed radiomics signature has yet entered real clinical use. The reasons are multifaceted. One major challenge is the lack of reproducibility and generalizability of the reported radiomics signatures (features and models). Sources of variation exist in each step of the workflow; some are controllable or can be controlled to certain degrees, while others are uncontrollable or even unknown. Insufficient transparency in reporting radiomics studies further prevents translation of developed radiomics signatures from the bench to the bedside. This review article first addresses sources of variation, illustrated with demonstrative examples. It then reviews a number of published studies and the progress made to date in investigating and improving feature reproducibility and model performance. Lastly, it discusses potential strategies and practical considerations to reduce feature variability and improve the quality of radiomics studies. The review focuses on CT image acquisition, tumor segmentation, quantitative feature extraction, and the disease of lung cancer.
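As one concrete example of quantifying the feature reproducibility discussed in this review, Lin's concordance correlation coefficient (CCC) is a metric commonly applied to test-retest radiomic features. The sketch below is illustrative and not tied to any specific study in this list.

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between paired
    measurements x and y (e.g., the same feature extracted from
    test and retest scans). 1 means perfect agreement; values
    below a chosen cutoff mark a feature as non-reproducible."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()        # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike plain correlation, the CCC penalizes systematic shifts: two scans whose feature values are perfectly correlated but offset by a constant still score below 1.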


2021 ◽ Vol 12
Author(s): Leonora C. Coppens, Christine E. S. Postema, Anne Schüler, Katharina Scheiter, Tamara van Gog

Being able to categorize objects as similar or different is an essential skill. An important aspect of learning to categorize is learning to attend to relevant features (i.e., features that determine category membership) and to ignore irrelevant features of the to-be-categorized objects. Feature variability across objects of different categories is informative, because it allows inferring the rules underlying category membership. In this study, participants learned to categorize fictitious creatures (i.e., aliens). We measured attention to the aliens during learning using eye-tracking and calculated the attentional focus as the ratio of attention to relevant versus irrelevant features. As expected, participants' categorization accuracy improved with practice; however, in contrast to our expectations, their attentional focus did not improve with practice. When computing the attentional focus, attention to the aliens' eyes was disregarded: while eyes attract a lot of attention, they did not vary across aliens (a non-informative feature). Yet, an exploratory analysis of attention to the eyes suggested that participants' attentional focus did become somewhat more efficient, in that over time they learned to ignore the eyes. Results are discussed in the context of the need for instructional methods that improve attentional focus in learning to categorize.
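The attentional-focus measure described above (share of dwell time on relevant features, with the non-informative eyes excluded) can be sketched as below. The feature names are hypothetical, chosen only for illustration; the study does not publish its feature sets here.

```python
def attentional_focus(dwell_ms):
    """Attentional focus as the share of feature-dwell time spent on
    relevant features. The 'eyes' are deliberately excluded, as in the
    study: salient but non-informative for category membership.

    dwell_ms: dict mapping feature name -> total dwell time in ms.
    Feature sets below are hypothetical examples.
    """
    relevant = {"arms", "legs", "antennae"}    # assumed category-defining features
    irrelevant = {"body", "spots"}             # assumed non-defining features
    rel = sum(dwell_ms.get(f, 0) for f in relevant)
    irr = sum(dwell_ms.get(f, 0) for f in irrelevant)
    total = rel + irr
    return rel / total if total else float("nan")
```

Note that dwell time on the eyes simply never enters the ratio, so a learner who stares at the eyes but otherwise looks only at relevant features still scores a high focus.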


2021 ◽ Vol 190 ◽ pp. 312-316
Author(s): Larisa Ismailova, Viacheslav Wolfengagen, Sergey Kosikov

2020 ◽ Vol 12 (23) ◽ pp. 3879
Author(s): Guangxing Wang, Peng Ren

Deep learning classifiers exhibit remarkable performance for hyperspectral image classification given sufficient labeled samples, but show deficiencies when learning with limited labeled samples. Active learning endows deep learning classifiers with the ability to alleviate this deficiency. However, existing active deep learning methods tend to underestimate the feature variability of hyperspectral images when querying informative unlabeled samples subject to certain acquisition heuristics. A major reason for this bias is that the acquisition heuristics are normally derived from the output of a deep learning classifier whose representational power is bounded by the number of labeled training samples at hand. To address this limitation, we developed a feature-oriented adversarial active learning (FAAL) strategy, which exploits the high-level features from one intermediate layer of a deep learning classifier to establish an acquisition heuristic based on a generative adversarial network (GAN). Specifically, we developed a feature generator for generating fake high-level features and a feature discriminator for discriminating between the real high-level features and the fake ones. Trained with both the real and the fake high-level features, the feature discriminator comprehensively captures the feature variability of hyperspectral images and yields a powerful and generalized discriminative capability. We leverage the well-trained feature discriminator as the acquisition heuristic to measure the informativeness of unlabeled samples. Experimental results validate the effectiveness of both (i) the full FAAL framework and (ii) the adversarially learned acquisition heuristic for the task of classifying hyperspectral images with limited labeled samples.
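The acquisition step of a discriminator-based heuristic like the one described can be sketched as follows. This is a simplified illustration of the selection logic only, not the authors' FAAL implementation: it assumes the trained feature discriminator has already scored each unlabeled sample's high-level features, and treats low "realness" scores as high informativeness.

```python
import numpy as np

def select_informative(disc_scores, k):
    """Pick the k unlabeled samples to query for labels.

    disc_scores: per-sample 'realness' scores in [0, 1] assigned by a
    trained feature discriminator to the samples' high-level features.
    Samples whose features the discriminator finds least real are
    assumed to be least well covered by the labeled set, hence most
    informative to label next.
    """
    scores = np.asarray(disc_scores, dtype=float)
    return np.argsort(scores)[:k]   # indices of the k lowest scores
```

In a full active-learning loop this selection would alternate with retraining both the classifier and the GAN on the grown labeled set.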


2020 ◽ Vol 65 (20) ◽ pp. 205008
Author(s): Joseph J Foy, Hania A Al-Hallaq, Vincent Grekoski, Tri Tran, Kharina Guruvadoo, ...

2020 ◽ Vol 57 (4)
Author(s): Juanita Todd, Jade Frost, Kaitlin Fitzgerald, István Winkler

Author(s): Sofia Guzzetti, Luis Alonso Mansilla Alvarez, Pablo Javier Blanco, Kevin Thomas Carlberg, Alessandro Veneziani

Reduced 1D models of the cardiovascular system are widely employed to study the propagation of pressure waves induced by the mutual interaction between the fluid and the compliant vessel walls. In particular, the interplay between anomalous pressure waves and pathologies like amputations and stenoses, or devices like stents, is of great interest from a medical viewpoint. However, the parameters that characterize reduced 1D models are often unknown, and they vary not only from patient to patient but also within the same individual, depending on physiological conditions (e.g., rest vs. stress, young vs. old). This motivated the design of mathematical and numerical techniques to quantify the uncertainties in these models. Uncertainty Quantification (UQ) studies on the cardiovascular network entail two major challenges or limitations: (i) the employment of full 3D models for UQ analysis is extremely costly and requires computational resources that may not be easily accessible to users like hospitals, for financial, privacy, or time constraints; (ii) reduced 1D models may be inaccurate in capturing anomalies of the physiology in the presence of cardiovascular pathologies like stenoses or aneurysms. Following the DDUQ approach (Carlberg et al., 2018), we enhance the efficiency and parallelism of the solvers by performing UQ at the subsystem level at each time step and by propagating the information via Domain Decomposition techniques. We plan to enhance accuracy and reliability by replacing the 1D models with educated reduced models such as the Transversally Enriched Pipe Element Method (Mansilla et al., 2017), capable of retaining the local cross-sectional dynamics at approximately the same cost as 1D reduced models.
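As a toy illustration of forward uncertainty propagation in a 1D-model setting, one can sample an uncertain wall stiffness and push it through the Moens–Korteweg wave-speed relation, a classic closure in 1D vessel models. All numerical values below are hypothetical and unrelated to the DDUQ study itself, which propagates uncertainty per subsystem rather than by plain Monte Carlo.

```python
import numpy as np

def moens_korteweg_speed(E, h, rho, D):
    """Pulse wave speed c = sqrt(E*h / (rho*D)) from the Moens-Korteweg
    relation: E is the wall Young's modulus (Pa), h the wall thickness (m),
    rho the blood density (kg/m^3), D the vessel diameter (m)."""
    return np.sqrt(E * h / (rho * D))

# Toy Monte Carlo UQ: treat the wall stiffness E as uncertain
# (illustrative normal distribution) and propagate the samples.
rng = np.random.default_rng(42)
E_samples = rng.normal(4e5, 5e4, 10_000)               # Pa, hypothetical
c_samples = moens_korteweg_speed(E_samples, h=5e-4, rho=1060.0, D=5e-3)
mean_c, std_c = c_samples.mean(), c_samples.std()      # output statistics
```

Even this naive sampling shows the point of the paper's setup: each model evaluation is cheap for a 1D closure, but repeating it across a whole vascular network and many time steps is what motivates doing UQ at the subsystem level.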

