Extracting region of interest for palmprint by convolutional neural networks

Author(s):  
Xianjie Bao ◽  
Zhenhua Guo


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Jeong-Hoon Lee ◽  
Hee-Jin Yu ◽  
Min-ji Kim ◽  
Jin-Woo Kim ◽  
Jongeun Choi

Abstract Background Despite the integral role of cephalometric analysis in orthodontics, cephalometric landmark tracing has faced limitations in reliability and accuracy. Attempts to develop automatic plotting systems have been made continuously, but they remain insufficient for clinical application due to the low reliability of specific landmarks. In this study, we aimed to develop a novel framework for locating cephalometric landmarks with confidence regions using Bayesian Convolutional Neural Networks (BCNN). Methods We trained our model on the dataset from the ISBI 2015 grand challenge in dental X-ray image analysis. The overall algorithm consists of region of interest (ROI) extraction for the landmarks followed by landmark estimation under uncertainty. Predictions produced by the Bayesian model were post-processed with respect to pixel probabilities and uncertainties. Results Our framework showed a mean landmark error (LE) of 1.53 ± 1.74 mm and achieved successful detection rates (SDR) of 82.11%, 92.28%, and 95.95% in the 2, 3, and 4 mm ranges, respectively. Notably, Gonion, the most error-prone landmark in preceding studies, showed an error reduction of nearly half compared with prior results. Additionally, our results demonstrated significantly higher performance in identifying anatomical abnormalities. By providing 95% confidence regions that account for uncertainty, our framework offers clinical convenience and contributes to better decision-making. Conclusion Our framework provides cephalometric landmarks and their confidence regions, which could be used as a computer-aided diagnosis and education tool.
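The abstract does not spell out how the BCNN's confidence regions are computed; a common recipe for Bayesian CNNs is Monte Carlo dropout, where dropout stays active at test time and repeated stochastic forward passes yield an empirical predictive distribution per landmark coordinate. A minimal sketch under that assumption (`mc_predict` and the `toy_forward` predictor are illustrative stand-ins, not the paper's code):

```python
import random
import statistics

def mc_predict(stochastic_forward, x, n_samples=200):
    """Summarize a landmark coordinate's predictive distribution from
    repeated stochastic forward passes (e.g. dropout left on at test time).
    Returns the mean prediction and an empirical 95% confidence interval."""
    samples = sorted(stochastic_forward(x) for _ in range(n_samples))
    mean = statistics.fmean(samples)
    lo = samples[int(0.025 * n_samples)]
    hi = samples[int(0.975 * n_samples) - 1]
    return mean, (lo, hi)

# Toy stand-in for a BCNN landmark head: true position plus dropout noise.
random.seed(0)
toy_forward = lambda x: 10.0 + random.gauss(0.0, 0.5)
mean, (lo, hi) = mc_predict(toy_forward, None)
```

A wide `(lo, hi)` interval flags a landmark the clinician should re-check, which is the clinical convenience the paper argues for.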


2019 ◽  
Vol 9 (19) ◽  
pp. 3971 ◽  
Author(s):  
Katarzyna ◽  
Paweł

This study proposes a double-track method for the classification of fruit varieties for application in retail sales. The method uses two nine-layer Convolutional Neural Networks (CNNs) with the same architecture but different weight matrices. The first network classifies fruits from images that include the background, and the second from images cropped to the ROI (Region Of Interest, a single fruit). The results are aggregated using the proposed weight (importance) values. Consequently, the method returns the predicted class membership together with a Certainty Factor (CF). The use of the certainty factor associated with predictions from both the original images and the cropped ROIs is the main contribution of this paper. It has been shown that CFs indicate the correctness of the classification result and represent a more reliable measure than the probabilities on the CNN outputs. The method is tested on a dataset containing images of six apple varieties. The overall image classification accuracy on this testing dataset is excellent (99.78%). In conclusion, the proposed method is highly successful at recognizing unambiguous, ambiguous, and uncertain classifications, and it can be used in vision-based sales systems under uncertain conditions and in unplanned situations.
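The abstract gives neither the aggregation rule nor the weight values; the sketch below assumes a simple weighted sum of the two networks' class probabilities, with purely illustrative weights (`w_full`, `w_roi` are not the paper's values):

```python
def aggregate(p_full, p_roi, w_full=0.4, w_roi=0.6):
    """Combine class probabilities from the full-image CNN and the ROI CNN
    into a per-class certainty factor; return the winning class and its CF.
    The weights express how much each track is trusted."""
    cf = [w_full * a + w_roi * b for a, b in zip(p_full, p_roi)]
    best = max(range(len(cf)), key=cf.__getitem__)
    return best, cf[best]

# The two networks disagree; the ROI track is weighted more heavily,
# so its preferred class wins, but with a visibly low certainty factor.
label, certainty = aggregate([0.7, 0.2, 0.1], [0.1, 0.8, 0.1])
```

A low winning CF, as in this example, is exactly the "uncertain classification" signal the paper exploits at the point of sale.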


Agriculture ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 827
Author(s):  
Kamal KC ◽  
Zhendong Yin ◽  
Dasen Li ◽  
Zhilu Wu

Convolutional neural networks have an immense impact on computer vision tasks. However, their accuracy on a dataset is severely affected when the images within it vary greatly. Test images of plant leaves are usually taken in situ. These images, apart from the region of interest, contain unwanted parts of plants, soil, rocks, and/or human body parts. Segmentation helps isolate the target region so that a deep convolutional neural network can classify images precisely. We therefore combined edge- and morphology-based segmentation, background subtraction, and a convolutional neural network to improve accuracy on image sets containing both clean and cluttered backgrounds. In the proposed system, segmentation was first applied to extract leaf images in the foreground. Several images contained the leaf of interest interposed between unfavorable foregrounds and backgrounds; for these, background subtraction was implemented to remove the foreground, followed by segmentation to obtain the region of interest. Finally, the images were classified by a pre-trained classification network. Experimental results on two-, four-, and eight-class datasets show that the proposed method achieves 98.7%, 96.7%, and 93.57% accuracy with fine-tuned DenseNet121, InceptionV3, and DenseNet121 models, respectively, on a clean dataset. For the two-class datasets, accuracy was about 12% higher on a dataset with images taken against a homogeneous background than on one whose test images had a cluttered background. The results also suggest that image sets with clean backgrounds tend to start training at higher accuracy and converge faster.
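The background-subtraction step can be illustrated on a toy grayscale grid; the full pipeline also uses edge- and morphology-based segmentation, which is omitted here, and the pixel values, threshold, and grid size are invented for the example:

```python
def background_subtract(img, bg, thresh=30):
    """Binary foreground mask: keep pixels that differ from the background
    estimate by more than `thresh` grey levels."""
    return [[1 if abs(p - b) > thresh else 0 for p, b in zip(row, brow)]
            for row, brow in zip(img, bg)]

def apply_mask(img, mask):
    """Zero out everything outside the region of interest."""
    return [[p * m for p, m in zip(row, mrow)] for row, mrow in zip(img, mask)]

bg  = [[50, 50], [50, 50]]            # estimated background level
img = [[200, 55], [52, 180]]          # two bright leaf pixels on a dull scene
roi = apply_mask(img, background_subtract(img, bg))
```

The classifier then sees only the isolated leaf pixels, which is what lets it cope with cluttered in-situ backgrounds.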


Author(s):  
M. Karthikeyan ◽  
T. S. Subashini

Mechanical fasteners are widely used in the manufacturing of hardware and mechanical components, such as automobiles, turbines, and power-generation equipment. Object detection plays a vital role in building smart systems for society. Internet of Things (IoT) automation based on sensors and actuators is not, on its own, enough to build such systems due to the limitations of sensors; computer vision, using deep learning techniques, is what makes IoT systems much smarter. Object detection is used to detect, recognize, and localize objects in an image or a real-time video. In industrial automation, a robot arm is used to fit fasteners to automobile components. The proposed system helps the robot detect fasteners such as screws and nails so that they can be fitted to vehicles moving along the assembly line. The Faster R-CNN deep learning algorithm is used to train a custom dataset, and object detection is used to detect the fasteners. Faster R-CNN, a region-based convolutional neural network, uses a region proposal network (RPN) to train the model efficiently and, with the help of region-of-interest pooling, localizes screw and nail objects with a mean average precision of 0.72, leading to an object detection accuracy of 95 percent.
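Both the RPN's proposal matching and the reported mean average precision rest on the intersection-over-union overlap criterion, which can be computed directly for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2). Used to match proposals to ground-truth fasteners
    and to decide whether a detection counts as correct when scoring mAP."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two partially overlapping boxes: intersection 25, union 175.
score = iou((0, 0, 10, 10), (5, 5, 15, 15))
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.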


2019 ◽  
Vol 2019 ◽  
pp. 1-13 ◽  
Author(s):  
Zuopeng Zhao ◽  
Chen Ye ◽  
Yanjun Hu ◽  
Ceng Li ◽  
Xiaofeng Li

With the development of computed tomography (CT), the contrast-enhanced CT scan is widely used in the diagnosis of thyroid nodules. However, due to the artifacts and high complexity of thyroid CT images, traditional machine learning has difficulty in detecting thyroid nodules in contrast-enhanced CT. A fully automated detection algorithm for thyroid nodules using contrast-enhanced CT images is developed. A modified U-Net architecture of fully convolutional networks is employed to segment the thyroid region of interest (ROI), and a fusion of convolutional neural networks (CNN-Fs) is proposed to detect benign and malignant thyroid nodules from the ROI images and original contrast-enhanced CT images. Experimental results demonstrate that the proposed cascade and fusion method of multitask convolutional neural networks (CNNs) is efficient in diagnosing thyroid diseases with contrast-enhanced CT images and has superior performance compared with other CNN methods.


2018 ◽  
Vol 8 (7) ◽  
pp. 1210 ◽  
Author(s):  
Mahdieh Izadpanahkakhk ◽  
Seyyed Razavi ◽  
Mehran Taghipour-Gorjikolaie ◽  
Seyyed Zahiri ◽  
Aurelio Uncini

Palmprint verification is one of the most significant and popular approaches to personal authentication due to its high accuracy and efficiency. A novel approach is proposed that uses deep region of interest (ROI) and feature extraction models for palmprint verification, exploiting convolutional neural networks (CNNs) along with transfer learning. The extracted palmprint ROIs are fed to the final verification system, which is composed of two modules: (i) a pre-trained CNN architecture as a feature extractor, and (ii) a machine learning classifier. To evaluate the proposed model, we computed the intersection over union (IoU) metric for ROI extraction, along with accuracy, receiver operating characteristic (ROC) curves, and the equal error rate (EER) for the verification task. The experiments demonstrated that the ROI extraction module reliably locates appropriate palmprint ROIs and that the verification results are highly precise. This was verified across the different databases and classification methods employed in our proposed model. In comparison with other existing approaches, our model was competitive with state-of-the-art approaches that rely on hand-crafted descriptors. We achieved an IoU score of 93% and an EER of 0.0125 using a support vector machine (SVM) classifier on the contact-based Hong Kong Polytechnic University Palmprint (HKPU) database. Notably, all code is open source and can be accessed online.
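The verification stage compares CNN feature vectors of a probe palmprint against an enrolled template. As a simple stand-in for the paper's SVM classifier (which is not reproduced here), the sketch below thresholds a cosine similarity; the feature vectors and threshold are invented, and in practice the threshold would be tuned to the equal-error-rate operating point:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors in [−1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def verify(probe, template, threshold=0.9):
    """Accept the claimed identity if the probe's CNN features are close
    enough to the enrolled template."""
    return cosine_similarity(probe, template) >= threshold

enrolled = [0.2, 0.9, 0.4]                    # template from enrollment
accepted = verify([0.21, 0.88, 0.41], enrolled)   # genuine attempt
rejected = verify([0.9, 0.1, 0.05], enrolled)     # impostor attempt
```

Sweeping the threshold traces out the ROC curve; the EER is the point where the false-accept and false-reject rates coincide.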


2019 ◽  
Vol 626 ◽  
pp. A102 ◽  
Author(s):  
A. Asensio Ramos ◽  
C. J. Díaz Baso

Context. Spectropolarimetric inversions are routinely used in solar physics to extract physical information from observations. Applying them to two-dimensional fields of view often requires supercomputers running parallelized inversion codes, and even then the computing time spent on the process is still very large. Aims. Our aim is to develop a new inversion code based on convolutional neural networks that can quickly provide a three-dimensional cube of thermodynamical and magnetic properties from the interpretation of two-dimensional maps of Stokes profiles. Methods. We trained two different architectures of fully convolutional neural networks. To this end, we used synthetic Stokes profiles obtained from two snapshots of three-dimensional magneto-hydrodynamic numerical simulations of different structures of the solar atmosphere. Results. We provide an extensive analysis of the new inversion technique, showing that it infers the thermodynamical and magnetic properties with a precision comparable to that of standard inversion techniques. It also provides several key improvements: our method is around one million times faster, it returns a three-dimensional view of the physical properties of the region of interest in geometrical height, it provides quantities that cannot be obtained otherwise (pressure and Wilson depression), and the inferred properties come decontaminated from the blurring effect of instrumental point spread functions for free. The code, models, and data are all open source and freely available, to allow both evaluation and training.


PeerJ ◽  
2018 ◽  
Vol 6 ◽  
pp. e4568 ◽  
Author(s):  
Sivaramakrishnan Rajaraman ◽  
Sameer K. Antani ◽  
Mahdieh Poostchi ◽  
Kamolrat Silamut ◽  
Md. A. Hossain ◽  
...  

Malaria is a blood disease caused by Plasmodium parasites transmitted through the bite of the female Anopheles mosquito. Microscopists commonly examine thick and thin blood smears to diagnose the disease and compute parasitemia. However, their accuracy depends on smear quality and on expertise in classifying and counting parasitized and uninfected cells. Such examination can be arduous for large-scale diagnosis, resulting in poor quality. State-of-the-art image-analysis-based computer-aided diagnosis (CADx) methods, which apply machine learning (ML) techniques with hand-engineered features to microscopic images of the smears, demand expertise in analyzing morphological, textural, and positional variations of the region of interest (ROI). In contrast, Convolutional Neural Networks (CNNs), a class of deep learning (DL) models, promise highly scalable and superior results with end-to-end feature extraction and classification. Automated malaria screening using DL techniques could therefore serve as an effective diagnostic aid. In this study, we evaluate the performance of pre-trained CNN-based DL models as feature extractors for classifying parasitized and uninfected cells to aid improved disease screening. We experimentally determine the optimal model layers for feature extraction from the underlying data. Statistical validation of the results demonstrates that pre-trained CNNs are a promising tool for feature extraction for this purpose.
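Using a pre-trained CNN as a feature extractor amounts to truncating the network at an intermediate layer and taking its activations as the feature vector; the "optimal layer" is the truncation depth chosen experimentally. The schematic below models the layer stack as a list of plain functions, purely for illustration; no real network or framework is involved:

```python
def extract_features(layers, x, depth):
    """Pass the input through the first `depth` layers of a pretrained
    stack and return the intermediate activations as the feature vector;
    `depth` is the experimentally chosen optimal layer."""
    for layer in layers[:depth]:
        x = layer(x)
    return x

# Toy "network": each layer is just an elementwise transform.
layers = [
    lambda v: [2 * a for a in v],       # conv-like doubling
    lambda v: [a + 1 for a in v],       # bias-like shift
    lambda v: [max(a, 0) for a in v],   # ReLU-like clamp
]
features = extract_features(layers, [1, -3], depth=2)
```

The extracted features would then feed a separate lightweight classifier, with `depth` swept over candidate layers and chosen by validation performance.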
