A Flexible Multichannel EEG Artifact Identification Processor using Depthwise-Separable Convolutional Neural Networks

2021 ◽  
Vol 17 (2) ◽  
pp. 1-21
Author(s):  
Mohit Khatwani ◽  
Hasib-Al Rashid ◽  
Hirenkumar Paneliya ◽  
Mark Horton ◽  
Nicholas Waytowich ◽  
...  

This article presents an energy-efficient and flexible multichannel electroencephalogram (EEG) artifact identification network and its hardware implementation using depthwise and separable convolutional neural networks. EEG signals are recordings of brain activity; EEG recordings that do not originate from cerebral activity are termed artifacts. Our proposed model needs no expert knowledge for feature extraction or pre-processing of EEG data and has a very efficient architecture implementable on mobile devices. The proposed network can be reconfigured for any number of EEG channels and artifact classes. Experiments were performed with the goal of maximizing identification accuracy while minimizing the weight parameters and the required number of operations. Our proposed network achieves 93.14% classification accuracy on an EEG dataset collected with 64-channel BioSemi ActiveTwo headsets, averaged across 17 patients and 10 artifact classes. Our hardware architecture is fully parameterized in the number of input channels, filters, depth, and data bit-width. The number of processing engines (PEs) in the proposed hardware can vary between 1 and 16, yielding different latency, throughput, power, and energy-efficiency trade-offs. We implement our custom hardware architecture on a Xilinx Artix-7 FPGA, which on average consumes 1.4 to 4.7 mJ of dynamic energy across PE configurations. Energy consumption is reduced a further 16.7× by a post-layout application-specific integrated circuit (ASIC) implementation in 65-nm CMOS technology. Our FPGA implementation is 1.7× to 5.15× more energy efficient than some previous works, and our ASIC implementation is 8.47× to 25.79× more energy efficient than previous works.
We also demonstrated that the proposed network is reconfigurable to detect artifacts from another EEG dataset collected in our lab by a 14-channel Emotiv EPOC+ headset and achieved 93.5% accuracy for eye blink artifact detection.
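As a sketch of the parameter savings behind the depthwise-separable design this abstract describes, the arithmetic below compares a standard convolution layer with its depthwise + pointwise factorization. The layer sizes (64 input channels, 128 filters, 3×3 kernels) are illustrative assumptions, not figures from the paper.

```python
# Parameter counts for a standard vs. a depthwise-separable convolution layer.

def standard_conv_params(c_in, c_out, k):
    """k x k standard convolution: every output channel sees every input channel."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise step (one k x k filter per input channel) plus a 1x1 pointwise step."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# Hypothetical layer: 64 input channels (one per electrode), 128 filters, 3x3 kernels.
std = standard_conv_params(64, 128, 3)        # 73,728 parameters
sep = depthwise_separable_params(64, 128, 3)  # 576 + 8,192 = 8,768 parameters
print(f"standard: {std}, separable: {sep}, reduction: {std / sep:.1f}x")
```

The same factorization cuts multiply-accumulate operations by a similar ratio, which is why it suits the small-footprint hardware the article targets.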

2021 ◽  
Vol 5 (2) ◽  
Author(s):  
Alexander Knyshov ◽  
Samantha Hoang ◽  
Christiane Weirauch

Abstract Automated insect identification systems have been explored for more than two decades but have only recently started to take advantage of powerful and versatile convolutional neural networks (CNNs). While typical CNN applications still require large training image datasets with hundreds of images per taxon, pretrained CNNs recently have been shown to be highly accurate, while being trained on much smaller datasets. We here evaluate the performance of CNN-based machine learning approaches in identifying three curated species-level dorsal habitus datasets for Miridae, the plant bugs. Miridae are of economic importance, but species-level identifications are challenging and typically rely on information other than dorsal habitus (e.g., host plants, locality, genitalic structures). Each dataset contained 2–6 species and 126–246 images in total, with a mean of only 32 images per species for the most difficult dataset. We find that closely related species of plant bugs can be identified with 80–90% accuracy based on their dorsal habitus alone. The pretrained CNN performed 10–20% better than a taxon expert who had access to the same dorsal habitus images. We find that feature extraction protocols (selection and combination of blocks of CNN layers) impact identification accuracy much more than the classifying mechanism (support vector machine and deep neural network classifiers). While our network has much lower accuracy on photographs of live insects (62%), overall results confirm that a pretrained CNN can be straightforwardly adapted to collection-based images for a new taxonomic group and successfully extract relevant features to classify insect species.
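The abstract's finding that feature-extraction protocol matters more than the classifier can be illustrated with a toy sketch: pooled activations from two CNN "blocks" are concatenated into one descriptor and fed to a deliberately simple nearest-centroid classifier. The feature vectors here are hypothetical stand-ins, not real CNN activations.

```python
# Toy sketch: fuse features from two layer blocks, then classify by nearest class centroid.

def concat_features(block_a, block_b):
    """Combine pooled activations from two CNN blocks into one descriptor."""
    return block_a + block_b  # simple list concatenation

def nearest_centroid(train, labels, query):
    """Minimal classifier: assign the class whose mean descriptor is closest."""
    centroids = {}
    for c in set(labels):
        vecs = [v for v, lab in zip(train, labels) if lab == c]
        centroids[c] = [sum(xs) / len(vecs) for xs in zip(*vecs)]
    def sq_dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(centroids, key=lambda c: sq_dist(centroids[c], query))

# Hypothetical 2-D descriptors for two species.
train = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]]
labels = ["species_a", "species_a", "species_b", "species_b"]
print(nearest_centroid(train, labels, [5.0, 4.0]))
```

Swapping which blocks feed `concat_features` changes the descriptors themselves, which is why it affects accuracy more than the choice between this centroid rule, an SVM, or a deep classifier head.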


2020 ◽  
Vol 3 (1) ◽  
pp. 445-454
Author(s):  
Celal Buğra Kaya ◽  
Alperen Yılmaz ◽  
Gizem Nur Uzun ◽  
Zeynep Hilal Kilimci

Pattern classification is concerned with the automatic discovery of regularities in a dataset through various learning techniques, thereby assigning objects to a set of categories or classes. This study evaluates deep learning methodologies for the classification of stock patterns. To classify patterns obtained from stock charts, convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs) are employed. To demonstrate the efficiency of the proposed model in categorizing patterns, a hand-crafted image dataset is constructed from stock charts of the Istanbul Stock Exchange and the NASDAQ Stock Exchange. Experimental results show that convolutional neural networks exhibit superior classification success in recognizing patterns compared to the other deep learning methodologies.
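A hand-crafted chart-image dataset of the kind described above starts by rasterizing a price window into a 2-D grid that a CNN can consume. The sketch below is a minimal, hypothetical version of that step; the exact rendering the authors used is not specified in the abstract.

```python
def series_to_grid(prices, height):
    """Rasterize a price window into a height x len(prices) binary grid,
    the kind of chart-like input a CNN pattern classifier consumes."""
    lo, hi = min(prices), max(prices)
    span = (hi - lo) or 1.0
    grid = [[0] * len(prices) for _ in range(height)]
    for col, p in enumerate(prices):
        row = int((p - lo) / span * (height - 1))
        grid[height - 1 - row][col] = 1  # row 0 is the top of the chart
    return grid

# A rising 3-step price series lights up an ascending diagonal.
for row in series_to_grid([1.0, 2.0, 3.0], 3):
    print(row)
```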


Information ◽  
2021 ◽  
Vol 12 (9) ◽  
pp. 361
Author(s):  
Handan Hou ◽  
Wei Shi ◽  
Jinyan Guo ◽  
Zhe Zhang ◽  
Weizheng Shen ◽  
...  

Individual identification of dairy cows based on computer vision technology shows strong performance and practicality. Accurate identification of each dairy cow is a prerequisite for applying artificial intelligence technology in smart animal husbandry. Like the back and head, the rump of each dairy cow carries many distinctive features that are important for individual recognition. In this paper, we propose a non-contact cow rump identification method based on convolutional neural networks. First, rump image sequences of the cows were collected while they were feeding. Then, an object detection model was applied to detect the cow rump in each frame. Finally, a fine-tuned convolutional neural network model was trained to identify cow rumps. An image dataset containing 195 different cows was created to validate the proposed method. The method achieved an identification accuracy of 99.76%, outperforming other related methods and showing good potential in the actual production environment of cow husbandry; moreover, the model is light enough to be deployed on an edge-computing device.
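Closed-set identification over a fixed herd, as in this paper, can be sketched as nearest-neighbor matching of embeddings against a gallery of known cows. The cosine-similarity rule and the 2-D embeddings below are illustrative assumptions; the paper's classifier head is a fine-tuned CNN, not this lookup.

```python
def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def identify(gallery, probe):
    """gallery: {cow_id: embedding}. Return the id whose embedding is most similar."""
    return max(gallery, key=lambda cid: cosine(gallery[cid], probe))

# Hypothetical 2-D rump embeddings for two cows.
gallery = {"cow_017": [1.0, 0.0], "cow_112": [0.0, 1.0]}
print(identify(gallery, [0.9, 0.1]))
```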


Entropy ◽  
2018 ◽  
Vol 20 (12) ◽  
pp. 990 ◽  
Author(s):  
Sheng Shen ◽  
Honghui Yang ◽  
Junhao Li ◽  
Guanghui Xu ◽  
Meiping Sheng

Detecting and classifying ships based on their radiated noise provides practical guidelines for reducing the underwater noise footprint of shipping. In this paper, detection and classification are implemented by auditory-inspired convolutional neural networks trained on raw underwater acoustic signals. The proposed model includes three parts. The first part is a multi-scale 1D time convolutional layer initialized with auditory filter banks; signals are decomposed into frequency components by the convolution operation. In the second part, the decomposed signals are converted into the frequency domain by a permute layer and an energy pooling layer to form a frequency distribution analogous to that in the auditory cortex. Then, 2D frequency convolutional layers are applied to discover spectro-temporal patterns while preserving locality and reducing spectral variations in ship noise. In the third part, the whole model is optimized with a classification objective to obtain auditory filters and feature representations that correlate with ship categories; this optimization reflects the plasticity of the auditory system. Experiments on five ship types and background noise show that the proposed approach achieved an overall classification accuracy of 79.2%, an improvement of 6% over conventional approaches. The auditory filter banks adapted their shapes to improve classification accuracy.
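The first stage, a 1-D time convolution initialized from a filter bank, can be sketched in a few lines. The sinusoidal filters below are a toy stand-in for the gammatone-style auditory filters the paper implies; the point is only that a matched filter responds with far more energy than a mismatched one.

```python
import math

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation), as in a time-convolutional layer."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k)) for i in range(n - k + 1)]

def sine_filter(freq, length, rate):
    """Toy band filter: one sinusoid per center frequency, echoing an auditory filter bank."""
    return [math.sin(2 * math.pi * freq * t / rate) for t in range(length)]

# Decompose a 5 Hz test signal with a small two-filter bank.
rate = 100
signal = [math.sin(2 * math.pi * 5 * t / rate) for t in range(100)]
bank = [sine_filter(f, 20, rate) for f in (5, 20)]
responses = [conv1d(signal, k) for k in bank]
# Energy pooling (the paper's second stage, in miniature): the 5 Hz channel dominates.
energy = [sum(r * r for r in resp) for resp in responses]
print(energy)
```

In the real model these filter shapes are then refined by backpropagation, which is the "plasticity" the abstract refers to.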


2020 ◽  
Vol 2020 ◽  
pp. 1-16
Author(s):  
Zhuofu Deng ◽  
Binbin Wang ◽  
Zhiliang Zhu

Maxillary sinus segmentation plays an important role in the choice of therapeutic strategies for nasal disease and in treatment monitoring. Traditional approaches struggle with the extremely heterogeneous intensity caused by lesions, abnormal anatomical structures, and the blurred boundaries of the cavity. 2D and 3D deep convolutional neural networks have grown popular in medical image segmentation thanks to large labeled datasets from which discriminative features can be learned. However, for 3D segmentation of medical images, 2D networks cannot extract the most significant spatial features, while 3D networks suffer from an unbearable computational burden, posing great challenges for maxillary sinus segmentation. In this paper, we propose an end-to-end deep neural network for fully automatic 3D segmentation. First, our proposed model uses a symmetrical encoder-decoder architecture for the multitask of bounding-box estimation and in-region 3D segmentation, which not only reduces excessive computation requirements but also remarkably eliminates false positives, making 3D segmentation practical with 3D convolutional neural networks. In addition, an overestimation strategy is presented to avoid the overfitting phenomena of conventional multitask networks. Meanwhile, we introduce residual dense blocks to increase the depth of the proposed network and an attention excitation mechanism to improve the performance of bounding-box estimation, both of which add little computational cost. In particular, the multilevel feature-fusion structure of the pyramid network strengthens the identification of global and local discriminative features in foreground and background, achieving more advanced segmentation results. Finally, to address the problems of blurred boundaries and class imbalance in medical images, a hybrid loss function is designed for the multiple tasks.
To illustrate the strength of our proposed model, we evaluated it against state-of-the-art methods. Our model performed significantly better, with an average Dice of 0.947±0.031, VOE of 10.23±5.29, and ASD of 2.86±2.11, which denotes a promising technique with strong robustness in practice.
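Two of the reported metrics, Dice and the Volumetric Overlap Error (VOE), are standard overlap measures and can be computed directly from binary masks. The tiny masks below are hypothetical; only the formulas are taken from common usage.

```python
def dice(a, b):
    """Dice coefficient between two binary masks (flat 0/1 lists): 2|A∩B| / (|A|+|B|)."""
    inter = sum(x * y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def voe(a, b):
    """Volumetric Overlap Error in percent: 100 * (1 - |A∩B| / |A∪B|)."""
    inter = sum(x * y for x, y in zip(a, b))
    union = sum(1 for x, y in zip(a, b) if x or y)
    return 100.0 * (1.0 - inter / union)

pred = [1, 1, 1, 0]  # hypothetical predicted mask
true = [1, 1, 0, 0]  # hypothetical ground truth
print(dice(pred, true), voe(pred, true))  # 0.8 and ~33.33
```

(ASD, the third metric, additionally needs surface-distance computation and is omitted here.)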


Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3556 ◽  
Author(s):  
Husein Perez ◽  
Joseph H. M. Tah ◽  
Amir Mosavi

Clients are increasingly looking for fast and effective means to frequently survey and communicate the condition of their buildings so that essential repairs and maintenance can be done proactively, before problems become too dangerous and expensive. Traditional methods for this type of work commonly comprise engaging building surveyors to undertake a condition assessment, which involves a lengthy site inspection to produce a systematic record of the physical condition of the building elements, including estimates of immediate and projected long-term costs of renewal, repair, and maintenance. Current asset condition assessment procedures are extensively time-consuming, laborious, and expensive, and they pose health and safety threats to surveyors, particularly at height and roof levels that are difficult to access. This paper evaluates the application of convolutional neural networks (CNNs) to the automated detection and localisation of key building defects, e.g., mould, deterioration, and stains, from images. The proposed model is based on the pre-trained CNN classifier VGG-16 (later compared with ResNet-50 and Inception models), with class activation mapping (CAM) for object localisation. The challenges and limitations of the model in real-life applications have been identified. The proposed model has proven robust and able to accurately detect and localise building defects. The approach is being developed with the potential to scale up and further advance to support automated detection of defects and deterioration of buildings in real time using mobile devices and drones.


2020 ◽  
Author(s):  
Elena Codruta Constantinescu ◽  
Anca-Loredana Udriștoiu ◽  
Ștefan Cristinel Udriștoiu ◽  
Andreea Valentina Iacob ◽  
Lucian Gheorghe Gruionu ◽  
...  

Aim: In this paper we propose different architectures of convolutional neural networks (CNNs) to classify fatty liver disease in images using only pixels and diagnosis labels as input. We trained and validated our models using a dataset of 629 images comprising two types of liver images, normal liver and liver steatosis. Material and methods: We assessed two pre-trained convolutional neural network models, Inception-v3 and VGG-16, using fine-tuning. Both models were pre-trained on the ImageNet dataset and used to extract features from B-mode ultrasound liver images. The results obtained with these methods were compared to select the predictive model with the best performance metrics. We trained the two models on a dataset of 262 liver steatosis images and 234 normal liver images, and assessed them on a dataset of 70 liver steatosis images and 63 normal liver images. Results: The proposed model based on Inception-v3 obtained a test accuracy of 93.23% with a sensitivity of 89.9%, a precision of 96.6%, and an area under the receiver operating characteristic curve (ROC AUC) of 0.93. The other proposed model, based on VGG-16, obtained a test accuracy of 90.77% with a sensitivity of 88.9%, a precision of 92.85%, and a ROC AUC of 0.91. Conclusion: The deep learning algorithms that we propose to detect steatosis and classify the images as normal or fatty liver yield an excellent test performance of over 90%. However, larger future studies are required to establish how these algorithms can be implemented in a clinical setting.
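The metrics reported above derive from confusion-matrix counts in the standard way. The sketch below states those definitions; the counts are hypothetical, not recovered from the paper's test set.

```python
def sensitivity(tp, fn):
    """Recall on the positive (steatosis) class: TP / (TP + FN)."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Fraction of positive predictions that are correct: TP / (TP + FP)."""
    return tp / (tp + fp)

def accuracy(tp, tn, fp, fn):
    """Overall fraction of correct predictions."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion counts for a binary steatosis classifier.
tp, tn, fp, fn = 9, 9, 1, 1
print(sensitivity(tp, fn), precision(tp, fp), accuracy(tp, tn, fp, fn))
```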


Author(s):  
Pham Van Hai ◽  
Samson Eloanyi Amaechi

Conventional methods used in brain tumor detection, diagnosis, and classification, such as magnetic resonance imaging and computed tomography scanning, have limitations in their results. This paper presents a proposed model that combines convolutional neural networks with fuzzy rules for the detection and classification of medical images, such as healthy brain cells versus tumorous brain cells. The model contributes to the fully automatic classification and detection of medical images for conditions such as brain tumors, heart diseases, breast cancers, HIV, and flu. The experimental results of the proposed model show an overall accuracy of 97.6%, indicating that the proposed method achieves better performance than other current methods in the literature, such as classification of tumors in human brain MRI using wavelets and support vector machines (94.7%) and deep convolutional neural networks with transfer learning for automated brain image classification (95.0%), for detection, diagnosis, and classification in medical imaging decision support.
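One common way to combine a CNN with fuzzy rules, and a plausible reading of the hybrid above, is to map the CNN's softmax score through fuzzy membership functions and fire the strongest rule. The membership shapes and thresholds below are entirely hypothetical; the paper does not specify its rule base.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_decision(tumor_score):
    """Map a CNN softmax score for 'tumor' into fuzzy degrees and fire
    the rule with the strongest membership (max-membership defuzzification)."""
    degrees = {
        "healthy":   triangular(tumor_score, -0.01, 0.0, 0.4),
        "uncertain": triangular(tumor_score, 0.3, 0.5, 0.7),
        "tumor":     triangular(tumor_score, 0.6, 1.0, 1.01),
    }
    return max(degrees, key=degrees.get)

print(fuzzy_decision(0.9))   # confident tumor score
print(fuzzy_decision(0.1))   # confident healthy score
```

The "uncertain" band is the practical benefit of the fuzzy layer: borderline scores can be routed to a clinician instead of being forced into a hard label.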

