A method to adjust a prior distribution in Bayesian second-level fMRI analysis

PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e10861
Author(s):  
Hyemin Han

Previous research has shown the potential value of Bayesian methods in fMRI (functional magnetic resonance imaging) analysis. For instance, the results from Bayes factor-applied second-level fMRI analysis showed a higher hit rate compared with frequentist second-level fMRI analysis, suggesting greater sensitivity. Although the method reported more positives as a result of the higher sensitivity, it was able to maintain a reasonable level of selectivity in terms of the false positive rate. Moreover, employment of the multiple comparison correction method to update the default prior distribution significantly improved the performance of Bayesian second-level fMRI analysis. However, previous studies have utilized the default prior distribution and did not consider the nature of each individual study. Thus, in the present study, a method to adjust the Cauchy prior distribution based on a priori information, which can be acquired from the results of relevant previous studies, was proposed and tested. The Cauchy prior distribution was adjusted based on the contrast, noise strength, and proportion of true positives estimated from a meta-analysis of relevant previous studies. Both simulated images and real contrast images from two previous studies were used to evaluate the performance of the proposed method. The results showed that employing the prior adjustment method improved the performance of Bayesian second-level fMRI analysis.
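The core computation can be sketched as a one-sample (second-level) Bayes factor in which the scale of the Cauchy prior on the standardized effect size is an adjustable parameter. The sketch below is a minimal illustration, not the author's code; the default scale of 0.707 and the example values are assumptions, and the paper's contribution is precisely how to choose the scale from meta-analytic estimates of contrast, noise strength, and the proportion of true positives.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def bayes_factor_one_sample(t_stat, n, r=0.707):
    """JZS-style Bayes factor BF10 for a one-sample t-test.

    Integrates the noncentral-t likelihood over a Cauchy(0, r) prior on the
    standardized effect size delta, and divides by the point-null likelihood.
    The scale r is the quantity the prior adjustment method tunes.
    """
    df = n - 1

    def integrand(delta):
        return (stats.nct.pdf(t_stat, df, delta * np.sqrt(n))
                * stats.cauchy.pdf(delta, loc=0, scale=r))

    marginal_alt, _ = quad(integrand, -np.inf, np.inf)
    marginal_null = stats.t.pdf(t_stat, df)
    return marginal_alt / marginal_null

# The same voxel-level t-statistic evaluated under the default prior and
# under a prior widened to reflect a larger expected effect size.
t_voxel, n_subjects = 3.2, 25  # illustrative values
print(bayes_factor_one_sample(t_voxel, n_subjects, r=0.707))
print(bayes_factor_one_sample(t_voxel, n_subjects, r=1.0))
```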


2019 ◽  
Vol 485 (5) ◽  
pp. 558-563
Author(s):  
V. F. Kravchenko ◽  
V. I. Ponomaryov ◽  
V. I. Pustovoit ◽  
E. Rendon-Gonzalez

A new computer-aided detection (CAD) system for lung nodule detection and selection in computed tomography scans is substantiated and implemented. The method consists of the following stages: preprocessing based on thresholding and morphological filtration, the formation of suspicious regions of interest using a priori information, the detection of lung nodules by applying the fractal dimension transformation, the computation of informative texture features for identified lung nodules, and their classification by applying the SVM and AdaBoost algorithms. A physical interpretation of the proposed CAD system is given, and its block diagram is constructed. The simulation results demonstrate the advantages of the new approach in terms of standard criteria such as sensitivity and the false-positive rate.
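A schematic sketch of these stages (not the authors' implementation) is given below, using scikit-image and scikit-learn; the Otsu threshold, GLCM texture properties, and classifier settings are illustrative assumptions, and the fractal dimension stage is omitted for brevity.

```python
from skimage import filters, morphology, measure
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def candidate_regions(ct_slice):
    """Preprocessing: threshold + morphological filtering, then label
    suspicious regions of interest in a 2-D CT slice."""
    mask = ct_slice > filters.threshold_otsu(ct_slice)
    mask = morphology.binary_opening(mask, morphology.disk(2))
    return measure.regionprops(measure.label(mask), intensity_image=ct_slice)

def texture_features(patch):
    """Informative GLCM texture features for a candidate nodule.
    `patch` is assumed to be a 2-D uint8 array."""
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]

# Classification stage: an SVM over the texture features; AdaBoost
# (sklearn.ensemble.AdaBoostClassifier) could be substituted here.
clf = SVC(kernel="rbf")
# clf.fit(X_train, y_train)  # features and labels from annotated candidates
```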


2019 ◽  
Vol 05 (03) ◽  
pp. E98-E106
Author(s):  
Elisabeth Wrede ◽  
Alexander Johannes Knippel ◽  
Pablo Emilio Verde ◽  
Ruediger Hammer ◽  
Peter Kozlowski

Abstract Objective To investigate the clinical relevance of an isolated echogenic cardiac focus (iECF) as a marker for trisomy 21 using a large second-trimester collective including a low-risk subgroup. Materials and Methods We retrospectively evaluated 125,211 pregnancies from 2000–2016 and analyzed all iECF cases with regard to chromosomal anomalies. The collective consisted of an early second-trimester collective from 14+0–17+6 weeks (n=34,791) and a second-trimester anomaly scan collective from 18+0–21+6 weeks. Two a priori risk subgroups (high and low risk) of the latter were built based on maternal age and previous screening test results using a cut-off of 1:300. Likelihood ratios (LR) of iECF for the detection of trisomy 21, trisomy 13, trisomy 18 and structural chromosomal anomalies were estimated. Results In total, 104,001 patients were included. An iECF was found in 4416 of 102,847 euploid fetuses (4.29%) and in 64 of 557 cases with trisomy 21 (11.49%), giving a positive LR of 2.68 (CI: 2.12–3.2). The sensitivity was 11.5% at a false-positive rate of 4.29% (CI: 4.17–4.42) with p≤0.01%. In the high- and low-risk subgroups, the prevalence of iECF was comparable: 5.08% vs. 5.05%. The frequency of trisomy 21 was 0.39% (98/24,979) vs. 0.16% (69/44,103). LR+ was 3.86 (2.43–5.14) and 2.59 (1.05–4), respectively. For both subgroups the association of iECF with trisomy 21 was statistically significant. The prevalence of structural chromosomal anomalies in the second-trimester anomaly scan collective was 0.08% (52/68,967), of which 2 showed an iECF. Conclusion The detection of an iECF at 14+0–21+6 weeks significantly increases the risk for trisomy 21 in both the high-risk and the low-risk subgroups and does not statistically change the risks for trisomy 13/18 or structural abnormalities.
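As a worked check of the headline figure, the positive likelihood ratio follows directly from the counts in the abstract (LR+ = sensitivity / false-positive rate):

```python
iecf_t21, total_t21 = 64, 557                 # iECF among trisomy 21 fetuses
iecf_euploid, total_euploid = 4416, 102_847   # iECF among euploid fetuses

sensitivity = iecf_t21 / total_t21                    # ~0.1149 (11.49%)
false_positive_rate = iecf_euploid / total_euploid    # ~0.0429 (4.29%)
print(f"LR+ = {sensitivity / false_positive_rate:.2f}")  # ~2.68, as reported
```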


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the results of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.
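The comparison itself reduces to a simple ratio per effect, sketched below with hypothetical numbers (the Andrews-Kasy adjustment step itself is not reproduced here):

```python
import numpy as np

# Illustrative placeholders, not the paper's data: bias-adjusted meta-analytic
# estimates and the corresponding pre-registered replication estimates.
adjusted = np.array([0.42, 0.30, 0.25])
replication = np.array([0.20, 0.12, 0.15])

overestimation = adjusted / replication  # factor by which effects remain inflated
print(overestimation)                    # the paper reports factors of about two or more
```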


2018 ◽  
Author(s):  
Qianying Wang ◽  
Jing Liao ◽  
Kaitlyn Hair ◽  
Alexandra Bannach-Brown ◽  
Zsanett Bahor ◽  
...  

Abstract Background Meta-analysis is increasingly used to summarise the findings identified in systematic reviews of animal studies modelling human disease. Such reviews typically identify a large number of individually small studies, testing efficacy under a variety of conditions. This leads to substantial heterogeneity, and identifying potential sources of this heterogeneity is an important function of such analyses. However, the statistical performance of different approaches (normalised compared with standardised mean difference estimates of effect size; stratified meta-analysis compared with meta-regression) is not known. Methods We used data from 3116 experiments in focal cerebral ischaemia to construct a linear model predicting observed improvement in outcome contingent on 25 independent variables. We used stochastic simulation to attribute these variables to simulated studies according to their prevalence. To ascertain the ability to detect an effect of a given variable, we additionally introduced a “variable of interest” of given prevalence and effect. To establish any impact of a latent variable on the apparent influence of the variable of interest, we also introduced a “latent confounding variable” with given prevalence and effect, and allowed the prevalence of the variable of interest to differ in the presence and absence of the latent variable. Results Generally, the normalised mean difference (NMD) approach had higher statistical power than the standardised mean difference (SMD) approach. Even when the effect size and the number of studies contributing to the meta-analysis were small, there was good statistical power to detect the overall effect, with a low false positive rate. For detecting an effect of the variable of interest, stratified meta-analysis was associated with a substantial false positive rate with NMD estimates of effect size, while using an SMD estimate of effect size had very low statistical power. Univariate and multivariable meta-regression performed substantially better, with low false positive rates for both NMD and SMD approaches; power was higher for NMD than for SMD. The presence or absence of a latent confounding variable only introduced an apparent effect of the variable of interest when there was substantial asymmetry in the prevalence of the variable of interest in the presence or absence of the confounding variable. Conclusions In meta-analysis of data from animal studies, NMD estimates of effect size should be used in preference to SMD estimates, and meta-regression should, where possible, be chosen over stratified meta-analysis. The power to detect the influence of the variable of interest depends on its effect and prevalence, but unless effects are very large, adequate power is only achieved once at least 100 experiments are included in the meta-analysis.
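A toy version of the simulation-and-detection loop is sketched below (assumptions: normal study-level effects with known standard errors, a single binary variable of interest, and inverse-variance weighted meta-regression via statsmodels):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_studies = 100
moderator = rng.binomial(1, 0.3, n_studies)  # prevalence of the variable of interest
true_effect = 0.5 + 0.2 * moderator          # moderator shifts the true effect
se = rng.uniform(0.1, 0.3, n_studies)        # per-study standard errors
observed = rng.normal(true_effect, se)       # observed study effect sizes

# Weighted meta-regression: inverse-variance weights, moderator as covariate.
X = sm.add_constant(moderator)
fit = sm.WLS(observed, X, weights=1 / se**2).fit()
print(fit.params, fit.pvalues)  # the slope estimates the moderator's effect
```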


2021 ◽  
Vol 23 (Supplement_2) ◽  
pp. ii11-ii12
Author(s):  
T C Booth ◽  
A Chelliah ◽  
A Roman ◽  
A Al Busaidi ◽  
H Shuaib ◽  
...  

Abstract BACKGROUND The aim of the systematic review was to assess recently published studies on the diagnostic test accuracy of glioblastoma treatment response monitoring biomarkers in adults, developed through machine learning (ML). MATERIAL AND METHODS PRISMA methodology was followed. Articles published 09/2018–01/2021 (since previous reviews) were searched for in MEDLINE, EMBASE, and the Cochrane Register by two reviewers independently. Included study participants were adult patients with high grade glioma who had undergone standard treatment (maximal resection, radiotherapy with concomitant and adjuvant temozolomide) and subsequently underwent follow-up imaging to determine treatment response status (specifically, distinguishing progression/recurrence from progression/recurrence mimics - the target condition). Risk of bias and applicability were assessed with QUADAS-2. A third reviewer arbitrated any discrepancy. Contingency tables were created for hold-out test sets, and recall, specificity, precision, F1-score, and balanced accuracy were calculated. A meta-analysis was performed using a bivariate model for recall, false positive rate and area under the receiver operating characteristic curve (AUC). RESULTS Eighteen studies were included, with 1335 patients in training sets and 384 in test sets. To determine whether there was progression or a mimic, the reference standard combination of follow-up imaging and histopathology at re-operation was applied in 67% (13/18) of studies. The small number of patients included in the studies, the high risk of bias and concerns about applicability in the study designs (particularly in relation to the reference standard and patient selection due to confounding), and the low level of evidence suggest that limited conclusions can be drawn from the data. Ten studies (10/18, 56%) had internal or external hold-out test set data that could be included in a meta-analysis of monitoring biomarker studies. The pooled sensitivity was 0.77 (0.65–0.86). The pooled false positive rate (1-specificity) was 0.35 (0.25–0.47). The summary point estimate for the AUC was 0.77. CONCLUSION There is likely good diagnostic performance of machine learning models that use MRI features to distinguish between progression and mimics. The diagnostic performance of ML using implicit features did not appear to be superior to ML using explicit features. There is a range of ML-based solutions poised to become treatment response monitoring biomarkers for glioblastoma. To achieve this, the development and validation of ML models require large, well-annotated datasets where the potential for confounding in the study design has been carefully considered. Therefore, multidisciplinary efforts and multicentre collaborations are necessary.
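The per-study test-set metrics named above follow mechanically from each contingency table; a minimal sketch (with hypothetical counts) is:

```python
def monitoring_metrics(tp, fp, fn, tn):
    """Hold-out test-set metrics from a 2x2 contingency table."""
    recall = tp / (tp + fn)            # sensitivity / true positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    balanced_accuracy = (recall + specificity) / 2
    fpr = 1 - specificity              # the quantity pooled in the meta-analysis
    return dict(recall=recall, specificity=specificity, precision=precision,
                f1=f1, balanced_accuracy=balanced_accuracy, fpr=fpr)

# Illustrative counts for one hold-out test set (hypothetical numbers):
print(monitoring_metrics(tp=30, fp=10, fn=9, tn=20))
```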


2008 ◽  
Vol 15 (4) ◽  
pp. 204-206 ◽  
Author(s):  
Jonathan P Bestwick ◽  
Wayne J Huttly ◽  
Nicholas J Wald

Objectives To examine the effect of smoking on three first trimester screening markers for Down's syndrome that constitute the Combined test, namely nuchal translucency (NT), pregnancy-associated plasma protein-A (PAPP-A) and free β human chorionic gonadotrophin (free β-hCG), and to use the results to determine which of these markers need to be adjusted for smoking and by how much. Methods The difference in the median multiple of the median (MoM) values in smokers compared to non-smokers was determined for NT, PAPP-A and free β-hCG in 12,517 unaffected pregnancies that had routine first trimester Combined test screening. These results were then included in a meta-analysis of published studies, and the effect of adjusting for smoking on the screening performance of the Combined test was estimated. Results The results using the routine screening data were similar to the summary estimates from the meta-analysis of all studies. The median MoM in smokers compared to non-smokers from the meta-analysis was 1.06 for NT (95% confidence interval 1.03 to 1.10), 0.81 for PAPP-A (0.80 to 0.83) and 0.94 for free β-hCG (0.89 to 0.99). The effect of adjusting for smoking on the Combined test is small, with an estimated increase in the detection rate (the proportion of affected pregnancies with a positive result) of less than half a percentage point for a 3% false-positive rate (the proportion of unaffected pregnancies with a positive result), and a decrease in the false-positive rate of less than 0.2 percentage points for an 85% detection rate. Conclusion Adjusting first trimester screening markers for smoking has a minimal favourable effect on screening performance, but it is simple to implement, and this paper provides the adjustment factors needed if a decision is made to make such an adjustment.
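In practice the adjustment amounts to dividing a smoker's observed MoM by the marker-specific median ratio above; a minimal sketch using the meta-analysis factors:

```python
# Smoker-vs-non-smoker median MoM ratios from the meta-analysis above.
SMOKING_MEDIAN_MOM = {"NT": 1.06, "PAPP-A": 0.81, "free beta-hCG": 0.94}

def adjust_for_smoking(marker, mom, smoker):
    """Return the smoking-adjusted MoM for a first trimester marker."""
    return mom / SMOKING_MEDIAN_MOM[marker] if smoker else mom

# Example: a smoker's raw PAPP-A MoM of 0.70 is adjusted upwards.
print(adjust_for_smoking("PAPP-A", 0.70, smoker=True))  # ~0.86
```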


2019 ◽  
Vol 128 (4) ◽  
pp. 970-995
Author(s):  
Rémy Sun ◽  
Christoph H. Lampert

Abstract We study the problem of automatically detecting whether a given multi-class classifier operates outside of its specifications (out-of-specs), i.e. on input data from a different distribution than what it was trained for. This is an important problem to solve on the road towards creating reliable computer vision systems for real-world applications, because the quality of a classifier’s predictions cannot be guaranteed if it operates out-of-specs. Previously proposed methods for out-of-specs detection make decisions on the level of single inputs. This, however, is insufficient to achieve a low false positive rate and a high true positive rate at the same time. In this work, we describe a new procedure named KS(conf), based on statistical reasoning. Its main component is a classical Kolmogorov–Smirnov test that is applied to the set of predicted confidence values for batches of samples. Working with batches instead of single samples allows increasing the true positive rate without negatively affecting the false positive rate, thereby overcoming a crucial limitation of single-sample tests. We show by extensive experiments using a variety of convolutional network architectures and datasets that KS(conf) reliably detects out-of-specs situations even under conditions where other tests fail. It furthermore has a number of properties that make it an excellent candidate for practical deployment: it is easy to implement, adds almost no overhead to the system, works with any classifier that outputs confidence scores, and requires no a priori knowledge about how the data distribution could change.
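A minimal sketch of the core idea (assuming a held-out set of in-specs confidence values is available; the paper's exact calibration details differ) is a KS test comparing a deployment batch against the reference confidence distribution:

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_conf_alarm(reference_conf, batch_conf, alpha=0.01):
    """Flag a batch as out-of-specs if its confidence distribution differs
    significantly from the in-specs reference distribution."""
    statistic, p_value = ks_2samp(reference_conf, batch_conf)
    return p_value < alpha

rng = np.random.default_rng(0)
reference = rng.beta(8, 2, size=5000)     # confidences on in-specs validation data
batch_ok = rng.beta(8, 2, size=500)       # a batch from the same distribution
batch_shifted = rng.beta(4, 4, size=500)  # a batch from a shifted distribution
print(ks_conf_alarm(reference, batch_ok))       # False: no alarm
print(ks_conf_alarm(reference, batch_shifted))  # True: out-of-specs alarm
```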


2017 ◽  
Vol 52 (12) ◽  
pp. 1168-1170 ◽  
Author(s):  
Zachary K. Winkelmann ◽  
Ashley K. Crossway

Reference/Citation:  Harmon KG, Zigman M, Drezner JA. The effectiveness of screening history, physical exam, and ECG to detect potentially lethal cardiac disorders in athletes: a systematic review/meta-analysis. J Electrocardiol. 2015;48(3):329–338. Clinical Question:  Which screening method should be considered best practice to detect potentially lethal cardiac disorders during the preparticipation physical examination (PE) of athletes? Data Sources:  The authors completed a comprehensive literature search of MEDLINE, CINAHL, Cochrane Library, Embase, Physiotherapy Evidence Database (PEDro), and SPORTDiscus from January 1996 to November 2014. The following key words were used individually and in combination: ECG, athlete, screening, pre-participation, history, and physical. A manual review of reference lists and key journals was performed to identify additional studies. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed for this review. Study Selection:  Studies selected for this analysis involved (1) outcomes of cardiovascular screening in athletes using the history, PE, and electrocardiogram (ECG); (2) history questions and PE based on the American Heart Association recommendations and guidelines; and (3) ECGs interpreted following modern standards. The exclusion criteria were (1) articles not in English, (2) conference abstracts, and (3) clinical commentary articles. Study quality was assessed on a 7-point scale for risk of bias; a score of 7 indicated the highest quality. Articles with potential bias were excluded. Data Extraction:  Data included number and sex of participants, number of true- and false-positives and negatives, type of ECG criteria used, number of cardiac abnormalities, and specific cardiac conditions. The sensitivity, specificity, false-positive rate, and positive predictive value of each screening tool were calculated and summarized using a bivariate random-effects meta-analysis model. Main Results:  Fifteen articles reporting on 47 137 athletes were fully reviewed. The overall quality of the 15 articles ranged from 5 to 7 on the 7-item assessment scale (ie, participant selection criteria, representative sample, prospective data with at least 1 positive finding, modern ECG criteria used for screening, cardiovascular screening history and PE per American Heart Association guidelines, individual test outcomes reported, and abnormal screening findings evaluated by appropriate diagnostic testing). The athletes (66% males and 34% females) were ethnically and racially diverse, were from several countries, and ranged in age from 5 to 39 years. The sensitivity and specificity of the screening methods were, respectively, ECG, 94% and 93%; history, 20% and 94%; and PE, 9% and 97%. The overall false-positive rate for ECG (6%) was less than that for history (8%) or PE (10%). The positive likelihood ratios of each screening method were 14.8 for ECG, 3.22 for history, and 2.93 for PE. The negative likelihood ratios were 0.055 for ECG, 0.85 for history, and 0.93 for PE. A total of 160 potentially lethal cardiovascular conditions were detected, for a rate of 0.3%, or 1 in 294 patients. The most common conditions were Wolff-Parkinson-White syndrome (n = 67, 42%), long QT syndrome (n = 18, 11%), hypertrophic cardiomyopathy (n = 18, 11%), dilated cardiomyopathy (n = 11, 7%), coronary artery disease or myocardial ischemia (n = 9, 6%), and arrhythmogenic right ventricular cardiomyopathy (n = 4, 3%). 
Conclusions:  The most effective strategy to screen athletes for cardiovascular disease was ECG. This test was 5 times more sensitive than history and 10 times more sensitive than PE, and it had a higher positive likelihood ratio, lower negative likelihood ratio, and lower false-positive rate than history or PE. The 12-lead ECG interpreted using modern criteria should be considered the best practice in screening athletes for cardiovascular disease, and the use of history and PE alone as screening tools should be reevaluated.
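As a rough consistency check, the likelihood ratios follow from the reported sensitivities and specificities via LR+ = sens / (1 − spec) and LR− = (1 − sens) / spec; the small differences from the reported pooled values (e.g. 14.8 for ECG) arise because those come from the bivariate random-effects model:

```python
# (sensitivity, specificity) per screening tool, from the abstract.
tools = {"ECG": (0.94, 0.93), "history": (0.20, 0.94), "PE": (0.09, 0.97)}
for name, (sens, spec) in tools.items():
    lr_pos = sens / (1 - spec)     # positive likelihood ratio
    lr_neg = (1 - sens) / spec     # negative likelihood ratio
    print(f"{name}: LR+ = {lr_pos:.2f}, LR- = {lr_neg:.3f}")
# ECG: LR+ ~13.4, LR- ~0.065; history: ~3.3, ~0.85; PE: ~3.0, ~0.94
```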


2016 ◽  
Author(s):  
Maarten van Iterson ◽  
Erik van Zwet ◽  
P. Eline Slagboom ◽  
Bastiaan T. Heijmans

Abstract Association studies on omic-level data other than genotypes (GWAS) are becoming increasingly common, i.e., epigenome- and transcriptome-wide association studies (EWAS/TWAS). However, a tool box for the analysis of EWAS and TWAS studies is largely lacking, and approaches from GWAS are often applied despite the fact that epigenome and transcriptome data have very different characteristics than genotypes. Here, we show that EWASs and TWASs are prone not only to significant inflation but also to bias of the test statistics, and that these are not properly addressed by GWAS-based methodology (i.e. genomic control) or state-of-the-art approaches to control for unmeasured confounding (i.e. RUV, sva and cate). We developed a novel approach that is based on the estimation of the empirical null distribution using Bayesian statistics. Using simulation studies and empirical data, we demonstrate that our approach maximizes power while properly controlling the false positive rate. Finally, we illustrate the utility of our method in the application of meta-analysis by performing EWASs and TWASs on age and smoking, which highlighted an overlap in differential methylation and expression of associated genes. An implementation of our new method is available from http://bioconductor.org/packages/bacon/.
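A simplified sketch of the empirical-null idea follows (not the bacon implementation, which estimates the empirical null with a Bayesian mixture model; this robust-moment version assumes most features are null):

```python
import numpy as np
from scipy import stats

def empirical_null_adjust(z):
    """Estimate bias (mu) and inflation (sigma) of the empirical null from
    robust location/scale of the test statistics, then rescale."""
    mu = np.median(z)
    sigma = stats.median_abs_deviation(z, scale="normal")
    return (z - mu) / sigma

rng = np.random.default_rng(1)
z = rng.normal(0.3, 1.4, 10_000)           # null statistics with bias 0.3, inflation 1.4
z_adj = empirical_null_adjust(z)
p_adj = 2 * stats.norm.sf(np.abs(z_adj))   # corrected two-sided p-values
print(round(z_adj.mean(), 2), round(z_adj.std(), 2))  # ~0.0, ~1.0 after correction
```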

