Recognition of Thyroid Ultrasound Standard Plane Images Based on Residual Network

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Minghui Guo ◽  
Kangjian Wang ◽  
Shunlan Liu ◽  
Yongzhao Du ◽  
Peizhong Liu ◽  
...  

Ultrasound is one of the critical methods for diagnosis and treatment in thyroid examination. In clinical practice, factors such as heavy outpatient traffic, the time-consuming training of sonographers, and the uneven professional level of physicians often cause irregularities during ultrasonic examination, leading to misdiagnosis or missed diagnosis. To standardize the thyroid ultrasound examination process, this paper proposes a deep learning method based on a residual network to recognize Thyroid Ultrasound Standard Planes (TUSP). First, referring to multiple relevant guidelines, eight TUSP were determined with the advice of clinical ultrasound experts. A total of 5,500 TUSP images across the 8 categories were collected with the approval of the Ethics Committee and the patients' informed consent. Then, after desensitizing and padding the images, an 18-layer residual network model (ResNet-18) was trained for TUSP image recognition, with five-fold cross-validation. Finally, using indicators such as accuracy, we compared ResNet-18 with other mainstream deep convolutional neural network models. Experimental results showed that ResNet-18 achieved the best recognition performance on TUSP images, with an average accuracy of 91.07%; the average macro precision, macro recall, and macro F1-score were 91.39%, 91.34%, and 91.30%, respectively. This demonstrates that a residual-network-based deep learning method can effectively recognize TUSP images, which is expected to standardize clinical thyroid ultrasound examination and reduce misdiagnosis and missed diagnosis.
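The macro-averaged precision, recall, and F1 reported above are per-class metrics averaged uniformly over the 8 plane categories (then over the five folds). A minimal sketch of how such macro metrics are computed from a confusion matrix, shown with a hypothetical 3-class matrix for brevity:

```python
import numpy as np

def macro_metrics(conf):
    """Macro-averaged precision, recall and F1 from a confusion matrix.

    conf[i, j] = number of samples of true class i predicted as class j.
    """
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    precision = tp / np.maximum(conf.sum(axis=0), 1e-12)  # per predicted class
    recall = tp / np.maximum(conf.sum(axis=1), 1e-12)     # per true class
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    # "macro" = unweighted mean over classes, so rare planes count equally
    return precision.mean(), recall.mean(), f1.mean()

# Toy 3-class confusion matrix (the paper uses 8 TUSP classes).
conf = [[9, 1, 0],
        [0, 8, 2],
        [1, 0, 9]]
p, r, f = macro_metrics(conf)
```

In a five-fold setup, these three numbers would be computed once per fold and then averaged, matching the "average macro" figures quoted in the abstract.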

Author(s):  
L. Xin

Utilizing high-resolution remote sensing images for earth observation has become a common method of land use monitoring. Traditional image interpretation requires substantial human participation, which is inefficient and makes accuracy difficult to guarantee. Artificial intelligence methods such as deep learning, by contrast, have considerable advantages for image recognition. With a large number of remote sensing image samples and deep neural network models, objects of interest such as buildings can be extracted rapidly. In terms of both efficiency and accuracy, the deep learning method is superior. This paper studies a deep learning method using a large set of remote sensing image samples and verifies the feasibility of building extraction through experiments.


Author(s):  
Qicheng Lao ◽  
Thomas Fevens

Deep residual network (ResNet) is currently the basis of many popular state-of-the-art convolutional neural network models for image recognition, and its recent variants include wide residual network (WRN), aggregated deep residual network (ResNeXt) and deep pyramidal residual network (PyramidNet). Here, we demonstrate the potential application of deep residual networks and their variants in high-content screening (i.e., cell phenotype classification), which can overcome issues associated with analyzing high-content screening data, such as exhaustive preprocessing and inefficient learning. Cell phenotype classification is an image-based method that can be used for drug high-content screening, in which complex cell states associated with chemical compound treatment can be characterized. Previous work on cell phenotype classification typically requires a routine yet cumbersome step of single-cell segmentation before the classification task. In this paper, we present a segmentation-free method for image-based cell phenotype classification using deep ResNet and its variants. The cell images are samples treated with annotated compounds that can be grouped mainly into three clusters, giving three classes to be classified. Instead of single-cell phenotype classification, we use the raw images directly, without segmentation, for training and evaluation. Compared to previous reference work, we significantly simplify the data preprocessing step and accelerate the training while still achieving high accuracy. Our trained models achieve a 98.2% accuracy rate on the three-class classification problem (the three compound clusters only), and a 93.8% accuracy rate on the four-class classification problem (the three compound clusters plus the mock class), based on five-fold cross-validation.
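What ResNet, WRN, ResNeXt, and PyramidNet all share is the residual unit: the block learns a correction F(x) that is added to an identity shortcut, output = relu(F(x) + x). A minimal numpy sketch of that skip connection (dense layers stand in for the convolutions, purely for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Basic residual unit: output = relu(F(x) + x).

    The identity shortcut lets gradients bypass F, which is what lets
    residual architectures train at the depths used for image recognition.
    """
    out = relu(x @ w1)    # first transformation of the residual branch
    out = out @ w2        # second transformation (before the addition)
    return relu(out + x)  # skip connection, then non-linearity

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))           # batch of 4 feature vectors
w1 = rng.normal(size=(16, 16)) * 0.1
w2 = rng.normal(size=(16, 16)) * 0.1
y = residual_block(x, w1, w2)
```

Note that with the residual branch zeroed out the block reduces to (rectified) identity, which is why stacking many such blocks does not degrade the signal the way plain stacked layers can.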


2021 ◽  
pp. 1063293X2110031
Author(s):  
Maolin Yang ◽  
Auwal H Abubakar ◽  
Pingyu Jiang

Social manufacturing is characterized by its capability of utilizing socialized manufacturing resources to create added value. Recently, a new social manufacturing pattern has emerged that shows potential for core factories to augment their limited manufacturing capabilities with resources from outside socialized manufacturing resource communities. However, core factories need to analyze the resource characteristics of these communities before making operation plans, which is challenging because the resource providers in such communities are unaffiliated and self-driven. In this paper, an approach based on deep learning and complex networks is established to address this challenge, using a socialized designer community for demonstration. First, convolutional neural network models are trained to identify the design resource characteristics of each socialized designer in the community from the interaction texts that the designer posts on internet platforms. During this process, an iterative dataset labelling method is established to reduce the time cost of training set labelling. Second, complex networks are used to model the design resource characteristics of the community as a whole, based on the characteristics of all its socialized designers. Two real communities from the RepRap 3D printer project are used as case studies.
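The second step, aggregating per-designer characteristics into a community-level network, can be pictured as a bipartite designer-capability graph whose capability degrees show how concentrated each design resource is. A toy sketch under assumed names (the designers and capability tags here are hypothetical, standing in for the CNN's per-designer output):

```python
from collections import defaultdict

# Hypothetical CNN output: each unaffiliated designer mapped to the design
# resource characteristics identified from their posted interaction texts.
designers = {
    "designer_a": ["enclosure_design", "cad_modeling"],
    "designer_b": ["cad_modeling", "firmware"],
    "designer_c": ["cad_modeling"],
}

# Community-level view: degree of each capability node in the bipartite
# designer-capability network, i.e. how many providers offer that resource.
capability_degree = defaultdict(int)
for tags in designers.values():
    for tag in tags:
        capability_degree[tag] += 1
```

A core factory reading this network would see, for instance, that a capability offered by only one self-driven designer is a riskier dependency than one offered by many.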


2021 ◽  
pp. 188-198

Innovations in advanced information technologies have led to the rapid delivery and sharing of multimedia data such as images and videos. Digital steganography offers the ability to secure communication and is imperative for the internet. Image steganography is essential for preserving the confidential information of security applications: the secret message is embedded within image pixels, here using the S-UNIWARD and WOW steganographic algorithms. Hidden messages are revealed using steganalysis. Research interest spans both conventional approaches and recent technological advances in steganalysis. This paper devises convolutional neural network models for steganalysis. The convolutional neural network (CNN) is one of the most frequently used deep learning techniques; here it is used to extract spatial features and perform classification. We compare steganalysis outcomes with AlexNet and SRNet on the same dataset, and steganalytic error rates are compared across different payloads.
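Steganalysis CNNs commonly begin with a high-pass residual filter, since the embedding changes made by S-UNIWARD and WOW are faint noise riding on top of image content. A sketch of that preprocessing idea using the well-known 5x5 "KV" kernel (the plain-loop convolution here is for clarity, not speed):

```python
import numpy as np

# 5x5 "KV" high-pass kernel widely used to initialize the first layer of
# steganalysis CNNs; it suppresses image content so stego noise stands out.
KV = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]], dtype=float) / 12.0

def conv2d_valid(img, k):
    """Plain 2-D 'valid' correlation (KV is symmetric, so no kernel flip)."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

flat = np.full((32, 32), 128.0)    # constant cover region: no content
residual = conv2d_valid(flat, KV)  # kernel sums to zero -> zero response
```

Because the kernel's coefficients sum to zero, smooth image regions map to near-zero residuals, leaving the classifier to focus on the embedding artifacts that distinguish stego from cover images.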


10.29007/8mwc ◽  
2018 ◽  
Author(s):  
Sarah Loos ◽  
Geoffrey Irving ◽  
Christian Szegedy ◽  
Cezary Kaliszyk

Deep learning techniques lie at the heart of several significant AI advances in recent years including object recognition and detection, image captioning, machine translation, speech recognition and synthesis, and playing the game of Go. Automated first-order theorem provers can aid in the formalization and verification of mathematical theorems and play a crucial role in program analysis, theory reasoning, security, interpolation, and system verification. Here we suggest deep learning based guidance in the proof search of the theorem prover E. We train and compare several deep neural network models on the traces of existing ATP proofs of Mizar statements and use them to select processed clauses during proof search. We give experimental evidence that with a hybrid, two-phase approach, deep learning based guidance can significantly reduce the average number of proof search steps while increasing the number of theorems proved. Using a few proof guidance strategies that leverage deep neural networks, we have found first-order proofs of 7.36% of the first-order logic translations of the Mizar Mathematical Library theorems that did not previously have ATP generated proofs. This increases the ratio of statements in the corpus with ATP generated proofs from 56% to 59%.
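The hybrid, two-phase idea can be caricatured as a given-clause loop that interleaves the prover's standard heuristic with the learned scorer, so the (slower) network is consulted only on some selections. A toy sketch, with all clause names and the scoring function invented for illustration:

```python
import itertools

def given_clause_loop(clauses, nn_score, steps, nn_every=2):
    """Toy given-clause loop with two-phase selection.

    Clauses are (age, weight, text) tuples.  On every `nn_every`-th step
    the learned model picks the next clause; otherwise a smallest-clause
    heuristic does.  `nn_score` stands in for the trained network
    (lower score = more promising clause).
    """
    unprocessed = list(clauses)
    processed = []
    for step in itertools.count():
        if not unprocessed or step >= steps:
            break
        if step % nn_every == 0:
            pick = min(unprocessed, key=nn_score)         # learned guidance
        else:
            pick = min(unprocessed, key=lambda c: c[1])   # smallest first
        unprocessed.remove(pick)
        processed.append(pick)
    return processed

clauses = [(0, 5, "axiom_a"), (1, 2, "axiom_b"), (2, 9, "conjecture_c")]
prefer_c = lambda c: 0.0 if "conjecture" in c[2] else 1.0  # dummy "network"
order = given_clause_loop(clauses, prefer_c, steps=3)
```

The real system operates inside E's clause queues rather than a Python list, but the interleaving illustrates why the hybrid reduces proof search steps without paying the network's cost on every selection.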


2021 ◽  
Author(s):  
Pengfei Zuo ◽  
Yu Hua ◽  
Ling Liang ◽  
Xinfeng Xie ◽  
Xing Hu ◽  
...  

2019 ◽  
Vol 1 (1) ◽  
pp. 450-465 ◽  
Author(s):  
Abhishek Sehgal ◽  
Nasser Kehtarnavaz

Deep learning solutions are being increasingly used in mobile applications. Although there are many open-source software tools for the development of deep learning solutions, there are no guidelines in one place in a unified manner for using these tools toward real-time deployment of these solutions on smartphones. From the variety of available deep learning tools, the most suited ones are used in this paper to enable real-time deployment of deep learning inference networks on smartphones. A uniform flow of implementation is devised for both Android and iOS smartphones. The advantage of using multi-threading to achieve or improve real-time throughputs is also showcased. A benchmarking framework consisting of accuracy, CPU/GPU consumption, and real-time throughput is considered for validation purposes. The developed deployment approach allows deep learning models to be turned into real-time smartphone apps with ease based on publicly available deep learning and smartphone software tools. This approach is applied to six popular or representative convolutional neural network models, and the validation results based on the benchmarking metrics are reported.
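The multi-threading advantage mentioned above comes from decoupling frame acquisition from inference so a slow model call no longer stalls the capture loop. A minimal stdlib sketch of that producer-worker pattern, with a trivial lambda standing in for the on-device inference network:

```python
import queue
import threading

def run_inference(frames, model, num_workers=2):
    """Dispatch frames to worker threads that each run `model` on a frame.

    Mirrors the structure of a real-time smartphone pipeline: the capture
    side enqueues frames while several workers run inference in parallel.
    """
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            frame = q.get()
            if frame is None:          # sentinel: shut this worker down
                break
            out = model(frame)
            with lock:                 # results list is shared state
                results.append(out)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for f in frames:
        q.put(f)                       # "capture" side produces frames
    for _ in threads:
        q.put(None)                    # one sentinel per worker
    for t in threads:
        t.join()
    return results

frames = list(range(8))
outputs = run_inference(frames, model=lambda f: f * f)
```

Note that results arrive in completion order, not frame order; a real-time app that needs ordered output would tag each frame with its index before enqueueing.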


2020 ◽  
Vol 147 (3) ◽  
pp. 1834-1841 ◽  
Author(s):  
Ming Zhong ◽  
Manuel Castellote ◽  
Rahul Dodhia ◽  
Juan Lavista Ferres ◽  
Mandy Keogh ◽  
...  

Author(s):  
Osama A. Osman ◽  
Hesham Rakha

Distracted driving (i.e., engaging in secondary tasks) is an epidemic that threatens the lives of thousands every year. Data collected from vehicular sensor technologies and through connectivity provide comprehensive information that, if used to detect driver engagement in secondary tasks, could save thousands of lives and millions of dollars. This study investigates the possibility of achieving this goal using promising deep learning tools. Specifically, two deep neural network models (a multilayer perceptron neural network model and a long short-term memory network [LSTMN] model) were developed to identify three secondary tasks: cellphone calling, cellphone texting, and conversation with adjacent passengers. The Second Strategic Highway Research Program Naturalistic Driving Study (SHRP 2 NDS) time series data, collected using vehicle sensor technology, were used to train and test the models. The results show excellent performance for the developed models, with a slight improvement for the LSTMN model, with overall classification accuracies ranging between 95% and 96%. Specifically, the models are able to identify the different types of secondary tasks with high accuracies of 100% for calling, 96%–97% for texting, 90%–91% for conversation, and 95%–96% for normal driving. Based on this performance, the developed models improve on the results of a previous model developed by the author to classify the same three secondary tasks, which had an accuracy of 82%. The models are promising for use in in-vehicle driving assistance technology to report engagement in unlawful tasks or alert drivers to take over control in level 1 and 2 automated vehicles.
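The LSTMN model suits this task because its gated cell state carries context across a driving time series rather than classifying each sensor reading in isolation. A numpy sketch of a single LSTM time step, with the channel count and hidden size chosen arbitrarily for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step over a vehicle-sensor feature vector `x`.

    Input (i), forget (f) and output (o) gates plus candidate (g) are
    computed from the current input and previous hidden state; the cell
    state `c` accumulates long-range context across the time series.
    """
    z = x @ W + h @ U + b              # all four gate pre-activations at once
    H = h.shape[-1]
    i = sigmoid(z[0 * H:1 * H])
    f = sigmoid(z[1 * H:2 * H])
    o = sigmoid(z[2 * H:3 * H])
    g = np.tanh(z[3 * H:4 * H])
    c_new = f * c + i * g              # gated update of the cell state
    h_new = o * np.tanh(c_new)         # exposed hidden state
    return h_new, c_new

rng = np.random.default_rng(1)
D, H = 6, 4                            # e.g. 6 sensor channels, 4 hidden units
W = rng.normal(size=(D, 4 * H)) * 0.1
U = rng.normal(size=(H, 4 * H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(10):                    # unroll over 10 time steps
    x_t = rng.normal(size=D)
    h, c = lstm_step(x_t, h, c, W, U, b)
```

In the classification setting described above, the final hidden state `h` (or a pooled sequence of them) would feed a softmax layer over the secondary-task labels.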


2019 ◽  
Vol 2019 ◽  
pp. 1-11
Author(s):  
Yuntao Zhao ◽  
Chunyu Xu ◽  
Bo Bo ◽  
Yongxin Feng

The increasing sophistication of malware variants, with techniques such as encryption, polymorphism, and obfuscation, calls for new detection and classification technology. In this paper, MalDeep, a novel deep learning malware classification framework based on texture visualization, is proposed against malicious variants. Through code mapping, texture partitioning, and texture extracting, we can study malware classification in a new feature space of image texture representation, without decryption or disassembly. Furthermore, we build a malware classifier on a convolutional neural network with two convolutional layers, two downsampling layers, and multiple fully connected layers. We adopt the dataset from the Microsoft Malware Classification Challenge, which includes 9 categories of malware families and 10,868 variant samples, to train the model. The experimental results show that MalDeep achieves a high accuracy rate for malware classification. In particular, for some backdoor families, the classification accuracy of the model reaches over 99%. Moreover, MalDeep also outperforms other mainstream antivirus software in average accuracy on variants from different families.
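The "code mapping" step turns a binary into a texture by treating each byte as one grayscale pixel, so variants of a family yield visually similar images with no decryption or disassembly. A minimal sketch of that mapping (the square-width default is one common convention, assumed here rather than taken from the paper):

```python
import math
import numpy as np

def bytes_to_texture(data, width=None):
    """Map a raw byte sequence to a 2-D grayscale texture image.

    Each byte becomes one pixel intensity (0-255); the last row is
    zero-padded when the byte count is not a multiple of the width.
    """
    buf = np.frombuffer(bytes(data), dtype=np.uint8)
    if width is None:
        width = max(1, int(math.sqrt(buf.size)))  # roughly square image
    height = math.ceil(buf.size / width)
    img = np.zeros(height * width, dtype=np.uint8)
    img[:buf.size] = buf                          # copy bytes, pad the tail
    return img.reshape(height, width)

img = bytes_to_texture(bytes(range(256)))         # 256 bytes -> 16x16 image
```

The resulting arrays are what the texture partitioning and extraction stages would consume before the CNN classifier sees them.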

