A new 2D‐3D registration gold‐standard dataset for the hip joint based on uncertainty modeling

2021 ◽  
Author(s):  
Fabio D’Isidoro ◽  
Christophe Chênes ◽  
Stephen J. Ferguson ◽  
Jérôme Schmid


2015 ◽  
Vol 21 (5) ◽  
pp. 699-724 ◽  
Author(s):  
LILI KOTLERMAN ◽  
IDO DAGAN ◽  
BERNARDO MAGNINI ◽  
LUISA BENTIVOGLI

Abstract: In this work, we present a novel type of graph for natural language processing (NLP): textual entailment graphs (TEGs). We describe the complete methodology we developed for constructing such graphs and provide baselines for the task by evaluating relevant state-of-the-art technology. We situate our research in the context of text exploration, since it was motivated by joint work with industrial partners in the text analytics area. Accordingly, we present our motivating scenario and the first gold-standard dataset of TEGs. While our own motivation and the dataset focus on the text exploration setting, we argue that TEGs have broader uses and that their automatic creation is an interesting task for the community.
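The core structure described in this abstract can be illustrated with a minimal sketch. The `EntailmentGraph` class, the text fragments and the edges below are invented for illustration; the paper's graphs are built at a much larger scale with learned entailment decisions:

```python
# Minimal sketch of a textual entailment graph (TEG): nodes are text
# fragments, and a directed edge u -> v asserts "u entails v".
# All fragments and edges here are illustrative, not from the paper's dataset.

class EntailmentGraph:
    def __init__(self):
        self.edges = {}  # node -> set of nodes it directly entails

    def add_entailment(self, premise, hypothesis):
        self.edges.setdefault(premise, set()).add(hypothesis)
        self.edges.setdefault(hypothesis, set())

    def entails(self, premise, hypothesis):
        # Entailment is transitive, so follow edges with a DFS.
        stack, seen = [premise], set()
        while stack:
            node = stack.pop()
            if node == hypothesis:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(self.edges.get(node, ()))
        return False

teg = EntailmentGraph()
teg.add_entailment("the battery died after a week", "battery problem")
teg.add_entailment("battery problem", "hardware problem")
print(teg.entails("the battery died after a week", "hardware problem"))  # True
```

The transitive lookup is what makes such graphs useful for text exploration: a specific complaint can be rolled up into progressively more general statements.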


2011 ◽  
Vol 38 (3) ◽  
pp. 1491-1502 ◽  
Author(s):  
Christelle Gendrin ◽  
Primož Markelj ◽  
Supriyanto Ardjo Pawiro ◽  
Jakob Spoerk ◽  
Christoph Bloch ◽  
...  

Author(s):  
Varuni Sarwal ◽  
Sebastian Niehus ◽  
Ram Ayyala ◽  
Sei Chang ◽  
Angela Lu ◽  
...  

Abstract: Advances in whole genome sequencing promise to enable accurate and comprehensive structural variant (SV) discovery. Dissecting SVs from whole genome sequencing (WGS) data presents a substantial number of challenges, and a plethora of SV-detection methods have been developed. Currently, there is a paucity of evidence that investigators can use to select appropriate SV-detection tools. In this paper, we evaluated the performance of SV-detection tools using a comprehensive PCR-confirmed gold standard set of SVs. In contrast to previous benchmarking studies, our gold standard dataset included a complete set of SVs, allowing us to report both the precision and the sensitivity of SV-detection methods. Our study investigates the ability of the methods to detect deletions, thus providing an optimistic estimate of SV-detection performance, as methods that fail to detect deletions are likely to miss more complex SVs. We found that SV-detection tools varied widely in their performance, with several methods providing a good balance between sensitivity and precision. Additionally, we determined the SV callers best suited to low- and ultra-low-pass sequencing data.
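The precision/sensitivity evaluation this abstract describes can be sketched as follows. Matching a call to a gold-standard deletion by 50% reciprocal overlap is a common convention in SV benchmarking, not necessarily this paper's exact criterion, and the genomic intervals are invented:

```python
# Sketch: score deletion calls against a PCR-confirmed gold standard.
# A call matches a gold SV when the two intervals share >= 50% reciprocal
# overlap (a common convention; the paper's matching rule may differ).

def reciprocal_overlap(a, b, frac=0.5):
    start, end = max(a[0], b[0]), min(a[1], b[1])
    overlap = max(0, end - start)
    return overlap >= frac * (a[1] - a[0]) and overlap >= frac * (b[1] - b[0])

def score_calls(calls, gold):
    # Precision: fraction of calls that hit some gold SV.
    tp = sum(any(reciprocal_overlap(c, g) for g in gold) for c in calls)
    # Sensitivity: fraction of gold SVs recovered by some call.
    found = sum(any(reciprocal_overlap(c, g) for c in calls) for g in gold)
    precision = tp / len(calls) if calls else 0.0
    sensitivity = found / len(gold) if gold else 0.0
    return precision, sensitivity

gold = [(100, 600), (2000, 2400), (5000, 5800)]   # illustrative deletions
calls = [(120, 580), (2050, 2380), (9000, 9300)]  # one false positive
print(score_calls(calls, gold))
```

Because the gold set here is complete, both precision and sensitivity are meaningful, which is exactly the advantage the authors claim over partial benchmarks.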


2019 ◽  
pp. 112070001988354
Author(s):  
Thomas R Ward ◽  
Mafruha M Hussain ◽  
Mark Pickering ◽  
Diana Perriman ◽  
Al Burns ◽  
...  

Introduction: A kinematic measurement method combining dynamic motion and imaging, which captures the behaviour of the hip at terminal motion, may offer improved diagnostic accuracy and enhance our understanding of the mechanics of femoroacetabular impingement (FAI). Methods: Three embalmed cadaveric hip/pelvis specimens with implanted Roentgen Stereophotogrammetric Analysis (RSA) beads were mounted on a custom rig and imaged with a fluoroscope in four poses to simulate a clinical impingement examination: in hip extension, and in three positions during a simulated flexion/adduction/internal rotation manoeuvre (near impingement, early impingement and late impingement). Hip joint kinematics were measured and compared using two methods: RSA (the gold standard) and a custom 3-dimensional to 2-dimensional (3D–2D) image registration method that matches 3D models derived from CT to 2D fluoroscopic images. Results: Using RSA as the gold standard, the bias and precision of hip joint rotations measured with 3D–2D registration reached maximums of 1.64° and 3.96°, respectively; with a single outlier removed, they were 0.55° and 1.38°. The bias and precision of translations had maximums of 0.51 mm and 0.77 mm, respectively. Conclusions: This 3D–2D registration method may offer a clinically useful solution for dynamic assessment of hip impingement. If 5 mm of translation and 10° of rotation represent a clinically significant difference in hip kinematics, the method’s accuracy of approximately 1 mm in displacement and 1° in rotation should enable detection of clinically significant differences.
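The bias and precision figures quoted above follow the usual convention for method validation: bias is the mean signed difference between the test method and the reference, and precision is the standard deviation of those differences. A minimal sketch, with invented rotation values (the study's actual measurements are not reproduced here):

```python
# Sketch of the conventional bias/precision computation for validating a
# test method (here, 3D-2D registration) against a reference (RSA).
# The rotation values are made up for illustration.
import statistics

def bias_precision(test_vals, reference_vals):
    diffs = [t - r for t, r in zip(test_vals, reference_vals)]
    return statistics.mean(diffs), statistics.stdev(diffs)

rsa      = [10.2, 25.1, 40.3, 55.0]   # degrees, reference method
reg_3d2d = [10.8, 24.9, 41.0, 55.6]   # degrees, test method

bias, precision = bias_precision(reg_3d2d, rsa)
print(round(bias, 3), round(precision, 3))
```

Removing an outlier shrinks both statistics sharply, as in the paper's 1.64°/3.96° versus 0.55°/1.38° comparison, because a single large difference inflates both the mean and the standard deviation.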


2003 ◽  
Vol 44 (1) ◽  
pp. 72-78
Author(s):  
I. Soini ◽  
A. Kotaniemi ◽  
H. Kautiainen ◽  
M. Kauppi

Purpose: To assess the significance of ultrasonography (US) in detecting hip joint synovitis in patients with rheumatic diseases. Material and Methods: Forty patients with rheumatic disease and suspected hip joint synovitis underwent MRI and US of the hip joint. In addition to the thorough MRI evaluation, the anterior collum-capsule distance (CCD) was determined by both MRI and US. Thirteen healthy volunteers were examined with MRI to establish the criteria for normal MRI findings when classifying hip joints into those with synovitis and those without. MRI was used as the gold standard. Results: Synovitis was found using MRI in 31 hips of 22 patients (9 patients had bilateral synovitis). The intraclass correlation between MRI and US in measuring CCD was 0.61. In classifying hip joint synovitis with US, the sensitivity of the method was 87% and its specificity 42% when the CCD criterion for synovitis was set at ≥7 mm. If the cut-off point was raised to 9 mm, the sensitivity decreased to 61% while the specificity increased to 94%. A difference in CCD of ≥1 mm between the hips as an additional criterion for synovitis increased the number of false-positive findings. Conclusion: Measurement of CCD with US proved to be a rather inaccurate method for detecting synovitis in rheumatic patients when MRI was used as the reference. The main reason for this result was the thickened capsule, which US could not differentiate from a thickened synovium.
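The cut-off analysis described here is a standard sensitivity/specificity trade-off: raising the CCD threshold trades sensitivity for specificity. A minimal sketch with invented (ccd_mm, mri_synovitis) pairs, not the study's data:

```python
# Sketch of the cut-off analysis: classify each hip as synovitic when its
# US-measured collum-capsule distance (CCD) meets a threshold, then score
# against the MRI label (gold standard). Sample data is illustrative.

def sens_spec(samples, cutoff_mm):
    tp = sum(ccd >= cutoff_mm and truth for ccd, truth in samples)
    fn = sum(ccd < cutoff_mm and truth for ccd, truth in samples)
    tn = sum(ccd < cutoff_mm and not truth for ccd, truth in samples)
    fp = sum(ccd >= cutoff_mm and not truth for ccd, truth in samples)
    return tp / (tp + fn), tn / (tn + fp)

hips = [(9.5, True), (8.0, True), (6.5, True),
        (7.2, False), (6.0, False), (10.1, True)]
print(sens_spec(hips, 7))   # lower cut-off: higher sensitivity
print(sens_spec(hips, 9))   # higher cut-off: higher specificity
```

The same pattern appears in the study's numbers: moving the cut-off from 7 mm to 9 mm dropped sensitivity from 87% to 61% while specificity rose from 42% to 94%.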


2021 ◽  
pp. 103779
Author(s):  
Rezarta Islamaj ◽  
Chih-Hsuan Wei ◽  
David Cissel ◽  
Nicholas Miliaras ◽  
Olga Printseva ◽  
...  

Author(s):  
Sanja Stajner ◽  
Simone Paolo Ponzetto ◽  
Heiner Stuckenschmidt

Lexically and syntactically simpler sentences result in shorter reading times and better understanding for many people. However, no reliable systems for automatic assessment of absolute sentence complexity have been proposed so far; instead, the assessment is usually done manually by expert human annotators. To address this problem, we first define sentence complexity assessment as a five-level classification task and build a ‘gold standard’ dataset. Next, we propose robust systems for sentence complexity assessment, using a novel set of features that leverage lexical properties of freely available corpora, and investigate the impact of feature type and corpus size on classification performance.
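One kind of corpus-based lexical feature the abstract alludes to can be sketched as follows. The frequent-word list, the threshold mapping and the `complexity_level` function are invented for illustration; the paper's actual feature set and five-level classifier are far richer:

```python
# Sketch of a corpus-derived lexical feature: the share of a sentence's
# words not found in a frequent-word list built from a large corpus.
# Word list and level cut-offs are invented for illustration.

FREQUENT = {"the", "a", "is", "was", "cat", "sat", "on", "mat",
            "dog", "ran", "and", "it", "in", "to", "of"}

def rare_word_ratio(sentence):
    words = sentence.lower().split()
    return sum(w not in FREQUENT for w in words) / len(words)

def complexity_level(sentence):
    # Map the feature onto five levels with evenly spaced cut-offs.
    ratio = rare_word_ratio(sentence)
    return min(4, int(ratio * 5))  # levels 0 (simplest) .. 4 (most complex)

print(complexity_level("the cat sat on the mat"))             # 0
print(complexity_level("ontological disquisitions perplex"))  # 4
```

A real system would combine many such features (and train the thresholds) rather than hand-pick them, but the principle of scoring complexity from corpus word statistics is the same.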


Data ◽  
2021 ◽  
Vol 6 (8) ◽  
pp. 84
Author(s):  
Jenny Heddes ◽  
Pim Meerdink ◽  
Miguel Pieters ◽  
Maarten Marx

We study the task of recognizing named datasets in scientific articles as a Named Entity Recognition (NER) problem. Noticing that available annotated datasets were not adequate for our goals, we annotated 6000 sentences extracted from four major AI conferences, roughly half of which contain one or more named datasets. A distinguishing feature of this set is the many sentences using enumerations, conjunctions and ellipses, resulting in long BI+ tag sequences. On all measures, the SciBERT NER tagger performed best and most robustly. Our baseline rule-based tagger performed remarkably well, better than several state-of-the-art methods. The gold standard dataset, with links and offsets from each sentence to the (open-access) articles, together with the annotation guidelines and all code used in the experiments, is available on GitHub.
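The "long BI+ tag sequences" mentioned above come from the BIO tagging scheme: each dataset mention is a B (begin) tag followed by zero or more I (inside) tags, and enumerations pack several such spans into one sentence. A minimal decoder, with an illustrative sentence rather than one from the corpus:

```python
# Sketch of BIO decoding for dataset-mention NER: enumerations of dataset
# names yield several B/I spans per sentence. Example sentence is invented.

def extract_entities(tokens, tags):
    # Collect token spans labelled B (begin) followed by I (inside).
    entities, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                entities.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:
            current.append(token)
        else:
            if current:
                entities.append(" ".join(current))
            current = []
    if current:
        entities.append(" ".join(current))
    return entities

tokens = "We evaluate on MNIST , CIFAR - 10 and ImageNet .".split()
tags   = ["O", "O", "O", "B", "O", "B", "I", "I", "O", "B", "O"]
print(extract_entities(tokens, tags))  # ['MNIST', 'CIFAR - 10', 'ImageNet']
```

Taggers like SciBERT predict the tag sequence; the decoding step above is what turns long BI+ runs back into the individual dataset mentions.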


2021 ◽  
Author(s):  
Katherine James ◽  
Aoesha Alsobhe ◽  
Simon Joseph Cockell ◽  
Anil Wipat ◽  
Matthew Pocock

Background: Probabilistic functional integrated networks (PFINs) are designed to aid our understanding of cellular biology and can be used to generate testable hypotheses about protein function. PFINs are generally created by scoring the quality of interaction datasets against a Gold Standard dataset, usually chosen from a separate high-quality data source, prior to their integration. Use of an external Gold Standard has several drawbacks, including data redundancy, data loss and the need for identifier mapping, which can complicate the network build and impact PFIN performance. Results: We describe the development of an integration technique, ssNet, that scores and integrates both high-throughput and low-throughput data from a single source database in a consistent manner, without the need for an external Gold Standard dataset. Using data from Saccharomyces cerevisiae, we show that ssNet is easier and faster, overcoming the challenges of data redundancy, Gold Standard bias and ID mapping, while producing comparable performance. In addition, ssNet results in less loss of data and produces a more complete network. Conclusions: The ssNet method allows PFINs to be built successfully from a single database, while producing network performance comparable to networks scored using an external Gold Standard source. Keywords: Network integration; Bioinformatics; Gold Standards; Probabilistic functional integrated networks; Protein function prediction; Interactome.
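The conventional gold-standard scoring step that ssNet sidesteps can be sketched as follows. The simple overlap ratio below stands in for the log-likelihood scores typically used in PFIN pipelines, and the gene pairs are invented:

```python
# Sketch of conventional Gold Standard scoring of an interaction dataset:
# the dataset's confidence comes from how well its protein pairs agree with
# a gold-standard positive set. A plain overlap ratio is used here for
# illustration; real PFIN pipelines typically use log-likelihood scores.

def score_dataset(interactions, gold_positives):
    hits = sum(frozenset(pair) in gold_positives for pair in interactions)
    return hits / len(interactions)

# Invented yeast-style gene pairs (frozenset makes pairs order-independent).
gold = {frozenset(p) for p in [("YFG1", "YFG2"), ("YFG2", "YFG3")]}
y2h  = [("YFG1", "YFG2"), ("YFG1", "YFG4"), ("YFG2", "YFG3")]
print(score_dataset(y2h, gold))
```

When the gold standard comes from a different database, every pair must first be mapped between identifier schemes, which is exactly the redundancy and ID-mapping burden ssNet avoids by scoring within a single source.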

