A phonological interpretation of two acoustic confusion matrices

1975 ◽  
Vol 17 (6) ◽  
pp. 537-542 ◽  
Author(s):  
D. J. Shaw
2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
S Gao ◽  
D Stojanovski ◽  
A Parker ◽  
P Marques ◽  
S Heitner ◽  
...  

Abstract
Background: Correctly identifying the views acquired in a 2D echocardiographic examination is paramount to the post-processing and quantification steps performed as part of most clinical workflows. In many exams, particularly in stress echocardiography, microbubble contrast is used, which greatly affects the appearance of the cardiac views. Here we present a bespoke, fully automated convolutional neural network (CNN) which identifies apical 2-, 3-, and 4-chamber and short-axis (SAX) views acquired with and without contrast. The CNN was tested on a completely independent, external dataset acquired in a different country from that used to train the network.
Methods: Training data, comprising 2D echocardiograms, were taken from 1014 subjects in a prospective multisite, multi-vendor UK trial, with more than 17,500 frames per view. Prior to view-classification model training, images were processed using standard techniques to ensure homogeneous, normalised inputs to the training pipeline. A bespoke CNN was built using the minimum number of convolutional layers required, with batch normalisation and dropout to reduce overfitting. The data were split into 90% for model training (211,958 frames) and 10% for validation (23,946 frames); image frames from any given subject were kept entirely within one of the two sets. Further, a separate trial dataset of 240 studies acquired in the USA was used as an independent test dataset (39,401 frames).
Results: Figure 1 shows the confusion matrices for both the validation data (left) and the independent test data (right), with overall accuracies of 96% and 95% for the validation and test datasets respectively. The accuracy for the non-contrast cardiac views, >99%, exceeds that reported in other works. The combined datasets included images acquired across ultrasound manufacturers and models from 12 clinical sites.
Conclusion: We have developed a CNN capable of automatically and accurately identifying all relevant cardiac views used in "real world" echo exams, including views acquired with contrast. Use of the CNN in a routine clinical workflow could improve the efficiency of quantification steps performed after image acquisition. The model was tested on an independent dataset acquired in a different country from that used for training and was found to perform similarly, indicating its generalisability.
Figure 1. Confusion matrices.
Funding Acknowledgement: Type of funding source: Private company. Main funding source(s): Ultromics Ltd.
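A key methodological point in the abstract above is that frames from a given subject were kept entirely within either the training or the validation set. A minimal sketch of such a subject-level split (hypothetical helper names; this is not the authors' code, which is not published in the abstract):

```python
import random

def subject_level_split(frames_by_subject, val_fraction=0.1, seed=0):
    """Split frames into train/validation sets so that all frames from
    a given subject land in exactly one split (illustrative sketch)."""
    subjects = sorted(frames_by_subject)
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_val = max(1, round(len(subjects) * val_fraction))
    val_subjects = set(subjects[:n_val])
    train, val = [], []
    for subj, frames in frames_by_subject.items():
        (val if subj in val_subjects else train).extend(frames)
    return train, val
```

Splitting at the subject level rather than the frame level prevents near-duplicate frames from the same exam from leaking between splits and inflating the validation accuracy.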


1969 ◽  
Vol 71 (1) ◽  
pp. 111-125 ◽  
Author(s):  
Dennis F. Fisher ◽  
Richard A. Monty ◽  
Sam Glucksberg

1987 ◽  
Vol 30 (1) ◽  
pp. 50-59 ◽  
Author(s):  
Allen A. Montgomery ◽  
Brian E. Walden ◽  
Robert A. Prosek

The effects of consonantal context on vowel lipreading were assessed for 30 adults with mild-to-moderate sensorineural hearing loss who lipread videotape recordings of two female talkers. The stimuli were the vowels /i, ɪ, ʊ, u/ in symmetric CVC form with the consonants /p, b, f, v, t, d, ʃ, g/ and in the asymmetric consonantal contexts /h/-V-/g/, /w/-V-/g/, and /r/-V-/g/. Analyses of the confusion matrices from each talker indicated that vowel intelligibility was significantly poorer in most contexts involving highly visible consonants, although the utterances of one talker were highly intelligible in the bilabial context. Among the visible contexts, the fricative and labiodental contexts in particular produced the lowest vowel intelligibility regardless of talker. Lax vowels were consistently more difficult to perceive than tense vowels. Implications for talker selection and for refinement of the concept of the viseme are drawn.
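The per-vowel intelligibility scores analysed in this study come directly from stimulus-response confusion matrices. A minimal sketch with invented counts (the actual matrices are not reproduced in the abstract):

```python
import numpy as np

# Hypothetical 4x4 vowel confusion matrix for one consonantal context:
# rows = stimulus vowel, columns = response vowel, order /i, ɪ, ʊ, u/.
confusions = np.array([
    [18,  2,  0,  0],   # /i/
    [ 5, 12,  3,  0],   # /ɪ/
    [ 0,  4, 11,  5],   # /ʊ/
    [ 0,  0,  3, 17],   # /u/
])

# Per-vowel intelligibility: correct responses / total presentations.
per_vowel = np.diag(confusions) / confusions.sum(axis=1)
# Overall intelligibility for this context.
overall = np.trace(confusions) / confusions.sum()
```

With these invented counts, the lax vowels /ɪ/ and /ʊ/ score lower than the tense vowels /i/ and /u/, mirroring the pattern the abstract reports.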


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5813
Author(s):  
Muhammad Umair ◽  
Muhammad Shahbaz Khan ◽  
Fawad Ahmed ◽  
Fatmah Baothman ◽  
Fehaid Alqahtani ◽  
...  

The COVID-19 outbreak began in December 2019 and has dreadfully affected our lives since then. More than three million lives have been claimed by this newest member of the coronavirus family. With the emergence of continuously mutating variants of this virus, it remains indispensable to diagnose the virus successfully at an early stage. Although the primary diagnostic technique is the PCR test, non-contact methods utilizing chest radiographs and CT scans are often preferred. Artificial intelligence, in this regard, plays an essential role in the early and accurate detection of COVID-19 from pulmonary images. In this research, a transfer learning technique with fine-tuning was utilized for the detection and classification of COVID-19. Four pre-trained models, i.e., VGG16, DenseNet-121, ResNet-50, and MobileNet, were used. These deep neural networks were trained on a dataset (available on Kaggle) of 7232 chest X-ray images (COVID-19 and normal). An indigenous dataset of 450 chest X-ray images of Pakistani patients was collected and used for testing and prediction. Various evaluation metrics, e.g., recall, specificity, F1-score, and precision, along with loss graphs and confusion matrices, were calculated to validate the accuracy of the models. The achieved accuracies of VGG16, ResNet-50, DenseNet-121, and MobileNet are 83.27%, 92.48%, 96.49%, and 96.48%, respectively. To display feature maps showing how an input image is decomposed by successive filters, the intermediate activations were visualized. Finally, the Grad-CAM technique was applied to create class-specific heatmap images highlighting the features extracted from the X-ray images. Various optimizers were used for error minimization. DenseNet-121 outperformed the other three models in terms of both accuracy and prediction performance.
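The recall, specificity, precision, and F1-score reported above are all derived from the binary confusion matrix. A minimal sketch of those derivations (layout convention and numbers are illustrative, not the paper's results):

```python
import numpy as np

def binary_metrics(cm):
    """Recall, specificity, precision, and F1 from a 2x2 confusion
    matrix laid out as [[TN, FP], [FN, TP]] (illustrative convention)."""
    tn, fp, fn, tp = cm.ravel()
    recall = tp / (tp + fn)              # sensitivity: found positives
    specificity = tn / (tn + fp)         # correctly rejected negatives
    precision = tp / (tp + fp)           # positive predictive value
    f1 = 2 * precision * recall / (precision + recall)
    return recall, specificity, precision, f1

# Example with invented counts: 90 TN, 10 FP, 5 FN, 95 TP.
recall, specificity, precision, f1 = binary_metrics(np.array([[90, 10], [5, 95]]))
```

Reporting specificity alongside recall matters in a screening setting: a model can reach high recall simply by over-predicting the positive class, which the specificity column exposes.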


2019 ◽  
Vol 4 (1) ◽  
pp. 61-63
Author(s):  
Alhaji Mustapha Isa

Deforestation and climate change have become global environmental issues. Forest changes associated with climate change can be detected successfully using multi-temporal remote sensing and modelling. This study analysed the past and present condition of the forest from land-cover pattern changes in the Kota Tinggi district of Johor state, Malaysia, using Landsat images from three different periods: Thematic Mapper (TM) data from 1998, an Enhanced Thematic Mapper Plus (ETM+) image from 2008, and an Operational Land Imager (OLI) image from 2018. The images were geometrically and atmospherically pre-processed and then classified using the maximum likelihood (ML) algorithm to produce thematic land use/cover maps of the district. Classification accuracy was assessed through ground truthing and confusion matrices, which revealed an overall accuracy above 90% and a kappa coefficient of 0.9.
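The overall accuracy and kappa coefficient used in this accuracy assessment both come from the same confusion matrix: accuracy is the fraction of correctly classified reference pixels, while Cohen's kappa discounts the agreement expected by chance. A minimal sketch with invented counts:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a land-cover confusion
    matrix (rows = reference class, columns = classified class)."""
    n = cm.sum()
    p_observed = np.trace(cm) / n                              # overall accuracy
    p_chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return p_observed, kappa

# Hypothetical 3-class matrix (e.g., forest / agriculture / built-up).
cm = np.array([[45, 3, 2],
               [4, 40, 1],
               [1, 2, 52]])
accuracy, kappa = accuracy_and_kappa(cm)
```

Kappa is routinely reported alongside accuracy in remote-sensing work because a classifier that favours the dominant land-cover class can score a high raw accuracy while its kappa stays low.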


Author(s):  
Nadezhda Gribkova ◽  
Ričardas Zitikis

In statistical classification and machine learning, as well as in the social and other sciences, a number of measures of association have been proposed for assessing and comparing individual classifiers and raters, as well as groups of them. In this paper, we introduce, justify, and explore several new measures of association, which we call CO-, ANTI-, and COANTI-correlation coefficients, and demonstrate that they are powerful tools for classifying confusion matrices. We illustrate the performance of these new coefficients using a number of examples, from which we also conclude that the coefficients are new objects in the sense that they differ from those already in the literature.
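The CO-, ANTI-, and COANTI-correlation coefficients themselves are not defined in this abstract, so they are not reproduced here. As a point of reference, one classical association measure for a 2x2 confusion matrix, against which such new coefficients are typically compared, is the Matthews correlation coefficient:

```python
import math

def matthews_corr(tp, fp, fn, tn):
    """Matthews correlation coefficient for a 2x2 confusion matrix:
    +1 for perfect agreement, 0 for chance-level, -1 for total disagreement."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Unlike raw accuracy, this coefficient is symmetric in the two classes and stays near zero for a classifier that merely predicts the majority class.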

