Deep Learning Face Similarity Verification with a Siamese Network Architecture (Verifikasi Kemiripan Wajah Dengan Arsitektur Jaringan Siamese)

2019 ◽  
Vol 1 (2) ◽  
pp. 116-125
Author(s):  
Kartarina ◽
Hairul Imam

Abstract: Face verification is a popular problem in computer vision. Many approaches have been applied to solve it, ranging from purely mathematical models that manually encode the geometric patterns of the face to automatic machine-learning methods. This research tackles the problem with a deep learning approach: the model is trained with the triplet loss defined in the FaceNet paper. The architecture is a Siamese network whose body is a modified ResNet-50, trained to reduce a high-dimensional image to a low-dimensional row vector called an embedding. Once the model learns to produce good-quality embeddings, face verification reduces to comparing the Euclidean distance between the embeddings of an image pair: a small distance indicates the same face (genuine) and a large distance a different face (impostor), with a threshold on the distance deciding between the two. The model was trained on the VGG Face v2 (Visual Geometry Group) dataset, reaching 92% accuracy on the LFW (Labeled Faces in the Wild) test set and an AUC (Area Under the Curve) of 97%. The high AUC indicates that the model reliably verifies images of the same person as genuine and images of different people as impostor.

Keywords: Siamese, Triplet Loss, Face Verification, Face Embedding, Dimensionality Reduction.
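The distance-and-threshold decision rule and the FaceNet-style triplet loss described in the abstract can be sketched in plain Python. This is an illustrative sketch, not the paper's code: the embeddings, the threshold of 0.8, the margin of 0.2, and all function names are invented for the example; real embeddings would come from the Siamese ResNet-50.

```python
import math

def euclidean_distance(a, b):
    """L2 distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(emb1, emb2, threshold=0.8):
    """Return 'genuine' if the two embeddings are closer than the
    threshold, 'impostor' otherwise."""
    return "genuine" if euclidean_distance(emb1, emb2) < threshold else "impostor"

def triplet_loss(anchor, positive, negative, margin=0.2):
    """FaceNet-style triplet loss: pull anchor and positive together,
    push anchor and negative apart by at least the margin."""
    return max(euclidean_distance(anchor, positive) ** 2
               - euclidean_distance(anchor, negative) ** 2 + margin, 0.0)

# Toy 3-dimensional embeddings standing in for the network's output.
print(verify([0.1, 0.9, 0.3], [0.12, 0.88, 0.31]))  # genuine (distance 0.03)
print(verify([0.1, 0.9, 0.3], [0.9, 0.1, 0.7]))     # impostor (distance 1.2)
```

During training, the loss drives same-identity embeddings closer than different-identity ones; the verification threshold is then chosen afterwards on a validation set.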

2019 ◽  
Vol 28 (3) ◽  
pp. 151-164
Author(s):  
Bilel Ameur ◽  
Mebarka Belahcene ◽  
Sabeur Masmoudi ◽  
Ahmed Ben Hamida

Author(s):  
Ahmad Heidary-Sharifabad ◽  
Mohsen Sardari Zarchi ◽  
Sima Emadi ◽  
Gholamreza Zarei

The Chenopodiaceae species are ecologically and economically important and play a significant role in biodiversity around the world. Biodiversity protection is critical for the survival and sustainability of each ecosystem, and since recognizing plant species in their natural habitats is the first step in plant diversity protection, automatic species classification in the wild would greatly help species analysis and, consequently, biodiversity protection on Earth. Computer vision approaches can be used for automatic species analysis. Modern computer vision approaches are based on deep learning techniques, and a standard dataset is essential for training a deep learning model. Hence, the main goal of this research is to provide a standard dataset of Chenopodiaceae images. This dataset, called ACHENY, contains 27030 images of 30 Chenopodiaceae species in their natural habitats. The other goal of this study is to investigate the applicability of the ACHENY dataset using deep learning models. Two novel deep learning models based on ACHENY are therefore introduced: first, a lightweight deep model trained from scratch and designed to be agile and fast; second, a model based on the EfficientNet-B1 architecture, pre-trained on ImageNet and fine-tuned on ACHENY. The experimental results show that the two proposed models can perform fine-grained Chenopodiaceae species recognition with promising accuracy. To evaluate our models, their performance was compared with the well-known VGG-16 model after fine-tuning it on ACHENY. Both VGG-16 and our first model achieved about 80% accuracy, while VGG-16 is about 16× larger than the first model. Our second model reaches about 90% accuracy and outperforms the other models; its number of parameters is about 5× that of the first model but still only about one-third of the VGG-16 parameters.
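The reported size trade-offs are internally consistent, which a few lines of arithmetic make explicit. The absolute parameter count below is a made-up placeholder; only the 16×, 5×, and roughly-one-third ratios come from the abstract.

```python
# Hypothetical parameter count for the lightweight first model.
first_model_params = 1_000_000

# Ratios reported in the abstract.
vgg16_params = 16 * first_model_params        # VGG-16 is ~16x the first model
second_model_params = 5 * first_model_params  # second model is ~5x the first

# The abstract's claim: the second model is still about one third of VGG-16.
ratio = second_model_params / vgg16_params
print(ratio)  # 5/16 = 0.3125, i.e. roughly one third
```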


2020 ◽  
Vol 9 (6) ◽  
pp. 1646 ◽  
Author(s):  
Giorgio Radetti ◽  
Antonio Fanolla ◽  
Fiorenzo Lupi ◽  
Alessandro Sartorio ◽  
Graziano Grugni

(1) Objective: To compare the accuracy of different indexes of adiposity and/or body composition in identifying metabolic syndrome (MetS) in adult patients suffering from Prader–Willi syndrome (PWS). (2) Study Design: One hundred and twenty PWS patients (69 females and 51 males), aged 29.1 ± 9.4 years, with a body mass index (BMI) of 36.7 ± 9.9, were evaluated. The following indexes were assessed in each subject: body mass index (BMI), fat-free mass index (FFMI), fat mass index (FMI), tri-ponderal mass index (TMI), waist-to-height ratio (WtHR) and the body mass fat index (BMFI), which adjusts the BMI for the percentage of body fat and waist circumference. Thereafter, a threshold value adjusted for age and sex, which could identify MetS, was calculated for each index. (3) Results: A significant correlation was found among all indexes (p < 0.0001 for all). However, when the area under the curve (AUC) was compared, BMFI performed better than FMI (p < 0.05) and BMI better than TMI (p < 0.05), but only in females. (4) Conclusions: Despite small differences, all the indexes taken into consideration seem to have the same ability to identify MetS in adults with PWS. Consequently, the most easily calculated index, i.e., BMI, should be considered as the best choice. The use of thresholds appropriate for sex and age can further improve its accuracy.
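The standard indexes compared in the study follow well-known formulas, sketched below. BMFI is defined by the paper itself and is therefore omitted; the sample subject's measurements are invented for illustration, chosen so that the BMI matches the cohort mean of 36.7.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight / height^2 (kg/m^2)."""
    return weight_kg / height_m ** 2

def tmi(weight_kg, height_m):
    """Tri-ponderal mass index: weight / height^3 (kg/m^3)."""
    return weight_kg / height_m ** 3

def fmi(fat_mass_kg, height_m):
    """Fat mass index: fat mass / height^2 (kg/m^2)."""
    return fat_mass_kg / height_m ** 2

def ffmi(fat_free_mass_kg, height_m):
    """Fat-free mass index: fat-free mass / height^2 (kg/m^2)."""
    return fat_free_mass_kg / height_m ** 2

def wthr(waist, height):
    """Waist-to-height ratio (both in the same unit)."""
    return waist / height

# Invented subject: 100 kg, 1.65 m tall, 45 kg fat mass, 110 cm waist.
print(round(bmi(100, 1.65), 1))  # 36.7
```

Note that FMI and FFMI partition BMI: since fat mass plus fat-free mass equals total weight, fmi + ffmi = bmi for any subject.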


2021 ◽  
Vol 109 (5) ◽  
pp. 863-890
Author(s):  
Yannis Panagakis ◽  
Jean Kossaifi ◽  
Grigorios G. Chrysos ◽  
James Oldfield ◽  
Mihalis A. Nicolaou ◽  
...  

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract Background Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The typically huge number of samples produces an enormous amount of high-resolution image data. As high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a real need for software tools that can automatically identify visual phenotypic features of maize plants and batch-process image datasets. Results On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the maize phenotyping analysis workflow. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotypes, embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter and (VI) Leaves Counting. Taking an RGB image of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. For the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625. Conclusion Maize-IAS is easy to use and demands professional knowledge of neither computer vision nor deep learning. All functions support batch processing, enabling automated, labor-reduced recording, measurement and quantitative analysis of maize growth traits on large datasets. We demonstrate the efficiency and potential of our techniques and software for image-based plant research, which also shows the feasibility of applying AI technology in agriculture and plant science.
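The Leaves Counting evaluation quoted in the abstract (mean and standard deviation of the prediction-versus-ground-truth difference) is a simple statistic to compute; a minimal sketch follows, with made-up counts for four plants rather than the paper's data.

```python
import math

def count_error_stats(predicted, ground_truth):
    """Mean and (population) standard deviation of the per-plant
    differences between predicted and ground-truth leaf counts."""
    diffs = [p - g for p, g in zip(predicted, ground_truth)]
    mean = sum(diffs) / len(diffs)
    std = math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))
    return mean, std

# Invented leaf counts for four plants.
mean, std = count_error_stats([12, 10, 14, 9], [11, 10, 12, 9])
print(mean, round(std, 3))  # 0.75 0.829
```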


Author(s):  
Yongfeng Gao ◽  
Jiaxing Tan ◽  
Zhengrong Liang ◽  
Lihong Li ◽  
Yumei Huo

Abstract Computer-aided detection (CADe) of pulmonary nodules plays an important role in assisting radiologists' diagnosis and alleviating the interpretation burden for lung cancer. Current CADe systems, aiming to simulate radiologists' examination procedure, are built upon computed tomography (CT) images, with feature extraction for detection and diagnosis. The CT image a human views is reconstructed from the sinogram, the original raw data acquired from the CT scanner. In this work, different from conventional image-based CADe systems, we propose a novel sinogram-based CADe system in which the full projection information is used to explore additional effective features of nodules in the sinogram domain. Facing the challenges of limited research in this direction and unknown effective features in the sinogram domain, we design a new CADe system that utilizes the self-learning power of a convolutional neural network to learn and extract effective features from the sinogram. The proposed system was validated on 208 patient cases from the publicly available online Lung Image Database Consortium database, each case having at least one juxtapleural nodule annotation. Experimental results demonstrate that our proposed method obtained an area under the receiver operating characteristic curve (AUC) of 0.91 based on the sinogram alone, compared to 0.89 based on the CT image alone. Moreover, a combination of sinogram and CT image further improved the AUC to 0.92. This study indicates that pulmonary nodule detection in the sinogram domain is feasible with deep learning.
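The AUC figures above measure how often a true nodule scores higher than a non-nodule. A minimal rank-based AUC (equivalent to the Wilcoxon statistic, and to the area under the ROC curve) can be sketched as follows; the detector scores and labels are invented for the example.

```python
def auc(scores, labels):
    """Probability that a randomly chosen positive outscores a randomly
    chosen negative, with ties counting half. Equivalent to ROC AUC."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented CADe scores: label 1 = nodule, 0 = non-nodule candidate.
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
print(auc(scores, labels))  # 1.0: every nodule outscores every non-nodule
```

An AUC of 0.91, as reported for the sinogram-only model, would mean a randomly chosen nodule outranks a randomly chosen non-nodule 91% of the time.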

