Feature extractor for the classification of approved Halal logo in Malaysia

Author(s):  
Khairul Muzzammil Saipullah ◽  
Nurul Atiqah Ismail ◽  
Yewguan Soo
2019 ◽  
Vol 2019 ◽  
pp. 1-9
Author(s):  
Yizhe Wang ◽  
Cunqian Feng ◽  
Yongshun Zhang ◽  
Sisan He

Precession is a common micromotion form of space targets, introducing additional micro-Doppler (m-D) modulation into the radar echo. Effective classification of space targets is of great significance for further micromotion parameter extraction and identification, and feature extraction is a key step in the classification process, largely determining the final classification performance. This paper presents two methods for classifying different types of space precession targets from high-resolution range profiles (HRRPs). We first establish the precession model of space targets, analyze their scattering characteristics, and then compute electromagnetic data for the cone, cone-cylinder, and cone-cylinder-flare targets. Experimental results demonstrate that a support vector machine (SVM) using histogram of oriented gradients (HOG) features achieves a good result, whereas a deep convolutional neural network (DCNN) obtains higher classification accuracy. The DCNN combines the feature extractor and the classifier, automatically mining the high-level signatures of HRRPs through training. In addition, the efficiencies of the two classification processes are compared on the same dataset.
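The HOG-plus-SVM branch of this pipeline can be illustrated with a minimal sketch. This is not the authors' implementation: the electromagnetic simulation, the real HRRPs, and the exact HOG configuration are omitted, and the `hog_1d` descriptor and `make_profile` generator below are simplified stand-ins invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def hog_1d(signal, n_bins=8):
    """Simplified HOG-style descriptor for a 1-D range profile:
    a normalized histogram of gradient pseudo-orientations,
    weighted by gradient magnitude."""
    grad = np.diff(signal)
    angles = np.arctan2(grad, 1.0)  # pseudo-orientation in (-pi/2, pi/2)
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi / 2, np.pi / 2),
                           weights=np.abs(grad))
    return hist / (np.linalg.norm(hist) + 1e-9)

rng = np.random.default_rng(0)

def make_profile(cls):
    """Synthetic stand-in for an HRRP: a few scattering-center peaks
    plus noise (real profiles would come from electromagnetic data)."""
    x = np.linspace(0, 1, 128)
    centers = [0.3, 0.7] if cls == 0 else [0.2, 0.5, 0.8]
    p = sum(np.exp(-((x - c) ** 2) / 0.002) for c in centers)
    return p + 0.05 * rng.standard_normal(128)

labels = [0, 1] * 100
X = np.array([hog_1d(make_profile(c)) for c in labels])
y = np.array(labels)

clf = SVC(kernel="rbf").fit(X[:150], y[:150])   # train on first 150 profiles
acc = clf.score(X[150:], y[150:])               # evaluate on the rest
```

The descriptor is deliberately crude; swapping in a proper 2-D HOG over time-frequency images would follow the same extract-then-classify structure.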


Author(s):  
Tahmina Zebin ◽  
Shahadate Rezvy ◽  
Wei Pang

Abstract Chest X-rays are playing an important role in the testing and diagnosis of COVID-19 disease in the recent pandemic. However, due to the limited amount of labelled medical images, automated classification of these images into positive and negative cases remains the biggest challenge for their reliable use in diagnosis and disease-progression monitoring. We implemented a transfer learning pipeline for classifying COVID-19 chest X-ray images from two publicly available chest X-ray datasets (https://github.com/ieee8023/covid-chestxray-dataset and https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia). The classifier effectively distinguishes inflammation in lungs due to COVID-19 and pneumonia (viral and bacterial) from cases with no infection (normal). We used multiple pre-trained convolutional backbones as the feature extractor and achieved overall detection accuracies of 91.2%, 95.3%, and 96.7% for the VGG16, ResNet50, and EfficientNetB0 backbones, respectively. Additionally, we trained a generative adversarial framework (a CycleGAN) to generate and augment the minority COVID-19 class. For visual explanation and interpretation, we visualized the regions of the input that are important for predictions, using gradient-weighted class activation mapping (Grad-CAM) to produce a coarse localization map of the highlighted regions in the image. This activation map can be used to monitor affected lung regions during disease progression and severity stages.
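The core transfer-learning pattern here, a frozen pre-trained backbone used only for feature extraction and a small trainable head on top, can be sketched without any deep-learning framework. Everything below is an assumption-laden stand-in: the fixed random projection plays the role of the frozen backbone (the real features would come from VGG16/ResNet50/EfficientNetB0), and the toy images are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Stand-in for a frozen pre-trained backbone: a fixed random projection
# followed by a ReLU. Assumption -- real backbone weights are not
# reproduced here; only the frozen-extractor pattern is.
W_backbone = rng.standard_normal((64 * 64, 256)) / 64.0

def extract_features(images):
    """Flatten images and pass them through the frozen 'backbone'."""
    flat = images.reshape(len(images), -1)
    return np.maximum(flat @ W_backbone, 0.0)

# Toy data: class-1 "images" are brighter on average than class 0.
X_img = rng.random((200, 64, 64))
y = np.repeat([0, 1], 100)
X_img[y == 1] += 0.5

feats = extract_features(X_img)          # backbone is never trained
head = LogisticRegression(max_iter=1000).fit(feats[::2], y[::2])
acc = head.score(feats[1::2], y[1::2])   # held-out accuracy
```

Fine-tuning (as in the paper) would additionally unfreeze some backbone layers; this sketch shows only the fixed-extractor variant.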


Author(s):  
Snehal R. Sambhe ◽  
Dr. Kamlesh A. Waghmare

As insufficient testing kits are available, the development of new testing kits for detecting COVID-19 remains an open area of research. It is impossible to test each and every patient suffering from coronavirus symptoms using the traditional method, RT-PCR, which requires more time to produce results and has lower sensitivity. Detecting possible coronavirus infection from chest X-rays may also help quarantine high-risk patients while test results are awaited. A learning model with higher accuracy can be built from CT scan images or chest X-rays of individuals. This paper presents a computer-aided diagnosis of COVID-19 infection based on a feature extractor using CNN models.
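The basic operation a CNN feature extractor stacks over an X-ray is the 2-D convolution, which turns an image into feature maps. A minimal sketch of that single building block (not the paper's model; the hand-picked edge kernel stands in for filters that would normally be learned):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution (no kernel flip, i.e. the
    cross-correlation convention used by deep-learning libraries)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A fixed Sobel edge kernel as a hypothetical filter; in a trained CNN
# the kernel weights would be learned from labelled X-rays.
sobel_x = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float)

img = np.zeros((8, 8))
img[:, 4:] = 1.0            # vertical step edge at column 4
fmap = conv2d(img, sobel_x)  # responds only where the edge lies
```

A real feature extractor interleaves many such learned filters with nonlinearities and pooling; the resulting maps are what the downstream classifier consumes.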


Author(s):  
Rodrigo Dalvit C. Silva ◽  
Thomas R. Jenkyn

In this paper, the issue of classifying mammogram abnormalities using images from the Mammographic Image Analysis Society (MIAS) database is discussed. We compare a feature extractor based on Legendre moments (LMs) with six other feature extractors. To determine the best feature extractor, the performance of each was compared in terms of classification accuracy and extraction time using a k-nearest neighbors (k-NN) classifier. This study shows that feature extraction using LMs performed best, with an accuracy rate over 84% while requiring relatively little time for feature extraction, on average only 1 s.
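A Legendre-moment feature vector can be sketched directly from the standard discrete formulation, with pixel coordinates mapped to [-1, 1]. The exact normalization and moment orders used in the paper are assumptions here, and the toy "mammogram" data below is synthetic.

```python
import numpy as np
from numpy.polynomial import legendre
from sklearn.neighbors import KNeighborsClassifier

def legendre_moments(image, max_order):
    """Legendre moments lambda_pq of a 2-D image, using the common
    discrete approximation lambda_pq ~= (2p+1)(2q+1)/(N*M) *
    sum_ij P_p(y_i) P_q(x_j) f(i, j). Normalization details are an
    assumption, not taken from the paper."""
    n, m = image.shape
    x = np.linspace(-1, 1, m)
    y = np.linspace(-1, 1, n)
    moments = np.empty((max_order + 1, max_order + 1))
    for p in range(max_order + 1):
        Pp = legendre.legval(y, [0] * p + [1])   # P_p sampled on rows
        for q in range(max_order + 1):
            Pq = legendre.legval(x, [0] * q + [1])  # P_q on columns
            norm = (2 * p + 1) * (2 * q + 1) / (n * m)
            moments[p, q] = norm * np.einsum("i,ij,j->", Pp, image, Pq)
    return moments.ravel()

# Toy classification: class 1 images carry an extra diagonal structure.
rng = np.random.default_rng(1)
imgs0 = [rng.random((16, 16)) for _ in range(20)]
imgs1 = [rng.random((16, 16)) + np.eye(16) for _ in range(20)]
X = np.array([legendre_moments(im, 4) for im in imgs0 + imgs1])
y_lab = np.array([0] * 20 + [1] * 20)

knn = KNeighborsClassifier(n_neighbors=3).fit(X[::2], y_lab[::2])
acc = knn.score(X[1::2], y_lab[1::2])
```

With `max_order=4` each image is reduced to a 25-dimensional moment vector, which is what makes the downstream k-NN comparison cheap.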


2020 ◽  
Vol 2 (2) ◽  
pp. 28-35 ◽  
Author(s):  
Hor Sui Lyn ◽  
Sarina Mansor ◽  
Nouar AlDahoul ◽  
Hezerul Abdul Karim

Content filtering is gaining popularity due to the easy exposure of explicit visual content to the public. Excessive exposure to inappropriate visual content can have devastating effects, such as fostering improper mindsets and contributing to societal problems such as casual sex, child abandonment, and rape. At present, most broadcasting media sites hire censorship editors to label graphic content manually. Nevertheless, efficiency is limited by factors such as the attention span of humans and the training required for the editors. This paper studies the effect of using a Convolutional Neural Network (CNN) as feature extractor coupled with a Support Vector Machine (SVM) as classifier in an automated pornographic-content detection system. Three CNN architectures, MobileNet, Visual Geometry Group-19 (VGG-19), and Residual Network-50 Version 2 (ResNet50_V2), and two classifiers, CNN and SVM, were evaluated to find the combination that produces the best result. Film frames fed as input into the CNN were classified into two classes: porn or non-porn. The best accuracy, 92.80%, was obtained using fine-tuned ResNet50_V2 as feature extractor and SVM as classifier. Transfer learning and SVM improved the CNN model's accuracy by approximately 10%.
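The paper's comparison, the same extracted features fed to different classifier heads, can be mimicked in a few lines. Everything here is a stand-in: the random feature matrix plays the role of pooled CNN features (the real ones would come from MobileNet, VGG-19, or ResNet50_V2), the labels are synthetic, and logistic regression stands in for the CNN's own softmax head.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Hypothetical pooled CNN features for 300 video frames (assumption:
# real features would be extracted by a pre-trained backbone).
n, d = 300, 128
X = rng.standard_normal((n, d))
# Synthetic porn/non-porn labels driven by a few feature dimensions.
y = (X[:, :4].sum(axis=1) > 0).astype(int)

X_train, X_test = X[:200], X[200:]
y_train, y_test = y[:200], y[200:]

# Same features, two heads: a linear SVM vs a softmax-style classifier.
heads = {
    "svm": LinearSVC(),
    "softmax-style": LogisticRegression(max_iter=500),
}
scores = {name: clf.fit(X_train, y_train).score(X_test, y_test)
          for name, clf in heads.items()}
```

Keeping the feature extractor fixed while swapping heads is what isolates the classifier's contribution, which is the comparison the paper reports.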

