Intelligent Brushing Monitoring Using a Smart Toothbrush with Recurrent Probabilistic Neural Network

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1238
Author(s):  
Ching-Han Chen ◽  
Chien-Chun Wang ◽  
Yan-Zhen Chen

Smart toothbrushes equipped with inertial sensors are emerging as high-tech oral health products in personalized health care. Real-time processing of nine-axis inertial sensing signals and toothbrush posture recognition requires high computational resources. This paper proposes a recurrent probabilistic neural network (RPNN) for toothbrush posture recognition that requires low computational resources while achieving high recognition accuracy and efficiency. The RPNN model is trained to recognize toothbrush posture and brushing position and then monitors the correctness and completeness of the Bass brushing technique. In our experiments, the recognition accuracy of the RPNN is 99.08%, which is 16.2% higher than that of a convolutional neural network (CNN) and 21.21% higher than that of a long short-term memory (LSTM) model. The model greatly reduces the computing power required of hardware devices, so our system can run directly on smartphones.
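The abstract does not specify the RPNN internals. As a rough, hedged illustration only, the feed-forward core of a classical probabilistic neural network (PNN), on which RPNN variants build, is a Parzen-window classifier; the recurrent state handling of the paper's model is omitted here:

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Minimal probabilistic neural network (Parzen-window) classifier.

    The pattern layer places a Gaussian kernel on each training sample;
    the summation layer averages kernels per class into a likelihood;
    the output layer returns the class with the largest likelihood.
    """
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)        # squared distances to class-c samples
        scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    return classes[int(np.argmax(scores))]
```

Because the pattern layer stores the training samples directly, a PNN needs no iterative training, which is one reason such models suit low-resource devices.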

Author(s):  
Chih-Ta Yen ◽  
Jia-De Lin

This study employed wearable inertial sensors integrated with an activity-recognition algorithm to recognize six types of daily activities performed by humans, namely walking, ascending stairs, descending stairs, sitting, standing, and lying. The sensor system consisted of a microcontroller, a three-axis accelerometer, and a three-axis gyroscope; the algorithm collected and normalized the activity signals. To simplify the calculation process and maximize recognition accuracy, the data were preprocessed through linear discriminant analysis, which reduced their dimensionality and captured their features, thereby shrinking the feature space of the accelerometer and gyroscope signals; the features were then verified with six classification algorithms. The new contribution is that, after feature extraction, the classification results indicated that an artificial neural network was the most stable and effective of the six algorithms. In the experiment, 20 participants wore the sensors on their waists to record the six types of daily activities and verify the effectiveness of the sensors. According to the cross-validation results, the combination of linear discriminant analysis and an artificial neural network was the most stable classification algorithm for data generalization; its activity-recognition accuracy was 87.37% on the training data and 80.96% on the test data.
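The abstract's LDA preprocessing step can be sketched minimally. This is not the authors' pipeline; it is a two-class Fisher discriminant (the simplest LDA instance), computing the projection direction w = Sw⁻¹(μ₁ − μ₀) that maximizes between-class separation over within-class scatter:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher LDA projection direction.

    X0, X1: (n_samples, n_features) arrays for the two classes.
    Returns the unit vector w = Sw^{-1} (mu1 - mu0), where Sw is the
    pooled within-class scatter matrix.
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # scatter matrix = sample covariance * (n - 1)
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)
```

Projecting sensor feature vectors onto such directions reduces dimensionality while keeping class-discriminative structure, which is the role LDA plays before the classifiers in the study.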


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4091
Author(s):  
Musong Gu ◽  
Kuan-Ching Li ◽  
Zhongwen Li ◽  
Qiyi Han ◽  
Wenjie Fan

Conventional pattern recognition and classification of crop diseases requires collecting a large amount of data in the field and sending it over the network to a server for recognition and classification. This approach usually takes a long time, is expensive, and makes timely monitoring of crop diseases difficult, delaying diagnosis and treatment. With the emergence of edge computing, one can instead deploy the recognition algorithm in the farmland environment and monitor the growth of crops promptly. However, because the resources of edge devices are limited, the original deep recognition models are challenging to apply. Therefore, this article proposes a recognition model based on a depthwise separable convolutional neural network (DSCNN), whose operations significantly reduce the number of parameters and the amount of computation, making the proposed design well suited for the edge. To show its effectiveness, simulation results are compared with the mainstream convolutional neural network (CNN) models LeNet and Visual Geometry Group Network (VGGNet): with comparably high recognition accuracy, the recognition time of the proposed model is reduced by 80.9% and 94.4%, respectively. Given its fast recognition speed and high recognition accuracy, the model is suitable for real-time monitoring and recognition of crop diseases on remote embedded equipment deployed using edge computing.
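The parameter savings behind a depthwise separable convolution follow directly from counting weights: a standard convolution learns one k×k filter per (input, output) channel pair, while the separable form learns one k×k filter per input channel plus a 1×1 pointwise mix. A small arithmetic sketch (illustrative channel counts, not the paper's architecture):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard convolution: c_in * c_out filters of size k x k."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise step: one k x k filter per input channel.
    Pointwise step: a 1 x 1 convolution mixing c_in channels into c_out."""
    return c_in * k * k + c_in * c_out

# Example layer: 128 -> 256 channels with 3 x 3 kernels
std = conv_params(128, 256, 3)                  # 294,912 weights
dsc = depthwise_separable_params(128, 256, 3)   # 33,920 weights
```

For this layer the separable form uses under an eighth of the weights, which is the kind of reduction that makes such models deployable on resource-limited edge devices.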


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Linqin Cai ◽  
Yaxin Hu ◽  
Jiangong Dong ◽  
Sitong Zhou

With the rapid development of social media, single-modal emotion recognition can hardly satisfy the demands of current emotion recognition systems. To optimize system performance, this paper proposes a multimodal emotion recognition model based on speech and text. Considering the complementarity between the modalities, a CNN (convolutional neural network) and an LSTM (long short-term memory) network were combined as binary channels to learn acoustic emotion features, while an effective Bi-LSTM (bidirectional long short-term memory) network was used to capture the textual features. Furthermore, a deep neural network was applied to learn and classify the fused features. The final emotional state was determined by the outputs of both the speech and text emotion analyses. Finally, multimodal fusion experiments were carried out to validate the proposed model on the IEMOCAP database. Compared with the single-modal baselines, the overall recognition accuracy increased by 6.70% over text alone and by 13.85% over speech alone. Experimental results show that the recognition accuracy of our multimodal model is higher than that of the single-modal models and outperforms other published multimodal models on the test datasets.
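The abstract says the final emotional state is determined from the outputs of both branches but does not give the combination rule. One common way to do this (an assumption for illustration, not necessarily the authors' exact method) is weighted late fusion of the per-class posteriors:

```python
import numpy as np

def fuse_posteriors(p_speech, p_text, w=0.5):
    """Weighted late fusion of class-posterior vectors from two modalities.

    p_speech, p_text: probability vectors over the same emotion classes.
    w: weight given to the speech branch (1 - w goes to text).
    Returns the index of the winning emotion class.
    """
    p = w * np.asarray(p_speech) + (1 - w) * np.asarray(p_text)
    return int(np.argmax(p))
```

Late fusion keeps the two branch networks independent, which is convenient when, as here, the acoustic and textual encoders have quite different architectures.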


Processes ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1995
Author(s):  
Guangjun Liu ◽  
Xiaoping Xu ◽  
Xiangjia Yu ◽  
Feng Wang

In the development of high-tech industries, graphite has become increasingly important; the world is gradually moving from the silicon era into the graphite era. To make good use of high-quality graphite resources, a graphite classification and recognition algorithm based on an improved convolutional neural network is proposed in this paper. Starting from a self-built initial data set, offline expansion and online augmentation effectively enlarge the data set and reduce the risk of overfitting in deep convolutional neural networks. Based on Visual Geometry Group 16 (VGG16), residual network 34 (ResNet34), and MobileNet version 2 (MobileNet V2), a new output module is redesigned and loaded into the fully connected layer. The improved transfer network enhances the generalization ability and robustness of the model; moreover, combined with the focal loss function, the hyperparameters of the model are modified and trained on the graphite data set. The simulation results illustrate that the recognition accuracy of the proposed method is significantly improved, convergence is accelerated, and the model is more stable, which proves the feasibility and effectiveness of the proposed method.
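The focal loss mentioned above down-weights well-classified examples so training focuses on hard ones. For a binary label it is FL(pₜ) = −(1 − pₜ)^γ log(pₜ), which reduces to ordinary cross-entropy at γ = 0. A minimal sketch:

```python
import math

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss.

    p: predicted probability of the positive class.
    y: true label, 0 or 1.
    gamma: focusing parameter; gamma = 0 gives plain cross-entropy.
    """
    pt = p if y == 1 else 1.0 - p          # probability assigned to the true class
    return -((1.0 - pt) ** gamma) * math.log(pt)
```

With γ = 2, an easy example (pₜ = 0.9) contributes roughly a hundredth of its cross-entropy loss, which helps when many samples are already classified confidently.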


2020 ◽  
Author(s):  
Hao Gu ◽  
Guangwei Qing ◽  
Yu Wang ◽  
Sheng Hong ◽  
Guan Gui ◽  
...  

Drone-aided ubiquitous applications play increasingly important roles in our daily life. Accurate recognition of drones is required in aviation management because of their potential risks and even disasters. Radio frequency (RF) fingerprinting-based recognition technology built on deep learning is considered one of the effective approaches to extracting hidden abstract features from the RF data of drones. Existing deep learning-based methods either impose a high computational burden or achieve low accuracy. In this paper, we propose a deep complex-valued convolutional neural network (DC-CNN) method based on RF fingerprinting for recognizing different drones. Compared with existing recognition methods, the DC-CNN method has the advantages of high recognition accuracy, fast running time, and small network complexity. Nine algorithm models and two datasets are used to demonstrate the superior performance of our system. Experimental results show that the proposed DC-CNN achieves recognition accuracies of 99.5% and 74.1% on 4-class and 8-class RF drone datasets, respectively.
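The building block of a complex-valued CNN is a convolution that respects complex multiplication, (a + ib)∗(c + id) = (ac − bd) + i(ad + bc), implemented with four real convolutions. A 1-D sketch (illustrative only; the paper's layers are not specified here):

```python
import numpy as np

def complex_conv1d(x, w):
    """1-D complex convolution via four real convolutions.

    x: complex input signal, w: complex filter.
    Splits both into real/imaginary parts and recombines them
    following (a + ib) * (c + id) = (ac - bd) + i(ad + bc).
    """
    a, b = x.real, x.imag
    c, d = w.real, w.imag
    real = np.convolve(a, c) - np.convolve(b, d)
    imag = np.convolve(a, d) + np.convolve(b, c)
    return real + 1j * imag
```

This matches NumPy's native complex convolution, and the same four-real-convolution trick is how complex layers are built on top of real-valued deep learning frameworks.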


Author(s):  
LIK MUI ◽  
ARUN AGARWAL ◽  
AMAR GUPTA ◽  
PATRICK SHEN-PEI WANG

The topology and the capacity of a traditional multilayer neural network, as measured by the number of connections in the network, have surprisingly little impact on its generalization ability. This paper presents a new adaptive modular network that offers superior generalization capability. The new network provides significant fault tolerance, quick adaptation to novel inputs, and high recognition accuracy. We demonstrate this paradigm on the recognition of unconstrained handwritten characters.


2020 ◽  
Vol 2020 ◽  
pp. 1-16
Author(s):  
Huixiang Zhang ◽  
Wenteng Xu ◽  
Chunlei Chen ◽  
Liang Bai ◽  
Yonghui Zhang

Motion-based hand gestures are an important scheme for allowing users to invoke commands on their smartphones in an eyes-free manner. However, the existing scheme faces some problems. On the one hand, the expressive ability of a single gesture is limited, so a gesture set consisting of multiple gestures is typically adopted to represent different commands, and users must memorize all gestures in order to interact successfully. On the other hand, gestures must be complicated to express diverse intentions, yet complex gestures are difficult to learn and remember, and they pose a high recognition barrier to smart apps. This leads to an imbalance problem: different gestures have different recognition accuracy levels, which may cause instability of recognition precision in practical applications. To address these problems, this paper proposes a novel scheme using binary motion gestures. Only two simple gestures are required to express bits "0" and "1," and rich information can be expressed through the permutation and combination of the two binary gestures. Firstly, four kinds of candidate binary gestures are evaluated for eyes-free interaction. Then, an online signal cutting and merging algorithm is designed to split the accelerometer signal sequence into multiple separate gesture signal segments. Next, five algorithms, including Dynamic Time Warping (DTW), Naive Bayes, Decision Tree, Support Vector Machine (SVM), and Bidirectional Long Short-Term Memory (BLSTM) Network, are adopted to recognize these segments of knock gestures. The BLSTM achieves the top performance in terms of both recognition accuracy and recognition balance. Finally, an Android application is developed to illustrate the usability of the proposed binary gestures. As binary gestures are much simpler than traditional hand gestures, they are more efficient and user-friendly. Our scheme eliminates the imbalance problem and achieves high recognition accuracy.
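The signal cutting and merging step can be sketched with a simple threshold rule. This is an assumption-laden illustration, not the paper's algorithm: mark samples whose magnitude exceeds a threshold, cut the sequence into active runs, then merge runs separated by fewer than `min_gap` quiet samples so one knock gesture is not split in two:

```python
import numpy as np

def cut_and_merge(signal, threshold=1.0, min_gap=3):
    """Segment a 1-D accelerometer magnitude signal into gesture spans.

    Returns [start, end) index pairs; nearby active runs are merged.
    """
    active = np.abs(signal) > threshold
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                      # run begins
        elif not a and start is not None:
            segments.append([start, i])    # run ends
            start = None
    if start is not None:
        segments.append([start, len(signal)])
    merged = []
    for seg in segments:
        if merged and seg[0] - merged[-1][1] < min_gap:
            merged[-1][1] = seg[1]         # gap too small: extend previous span
        else:
            merged.append(seg)
    return merged
```

Each returned span would then be handed to the downstream classifier (DTW, SVM, BLSTM, etc.) for binary-gesture recognition.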


2021 ◽  
Author(s):  
Hayley Weir ◽  
Keiran Thompson ◽  
Ben Choi ◽  
Amelia Woodward ◽  
Augustin Braun ◽  
...  

Inputting molecules into chemistry software, such as quantum chemistry packages, currently requires domain expertise, expensive software and/or cumbersome procedures. Leveraging recent breakthroughs in machine learning, we develop ChemPix: an offline, hand-drawn hydrocarbon structure recognition tool designed to remove these barriers. A neural image captioning approach consisting of a convolutional neural network (CNN) encoder and a long short-term memory (LSTM) decoder learned a mapping from photographs of hand-drawn hydrocarbon structures to machine-readable SMILES representations. We generated a large auxiliary training dataset, based on RDKit molecular images, by combining image augmentation, image degradation and background addition. Additionally, a small dataset of ~600 hand-drawn hydrocarbon chemical structures was crowd-sourced using a phone web application. These datasets were used to train the image-to-SMILES neural network with the goal of maximizing the hand-drawn hydrocarbon recognition accuracy. By forming a committee of the trained neural networks, we achieved a nearly 10 percentage point improvement of the molecule recognition accuracy and were able to assign a confidence value for the prediction based on the number of agreeing votes. The top ensemble model achieved a hand-drawn hydrocarbon recognition accuracy of 77% for the first prediction and 86% if the top 3 predictions were considered; in over 50% of cases, the model was at least 97% confident in the prediction, making it a promising tool for real-world use cases.
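The committee mechanism described above (confidence from the number of agreeing votes) amounts to a majority vote over the ensemble's SMILES outputs. A minimal sketch, assuming each member emits one SMILES string:

```python
from collections import Counter

def committee_vote(predictions):
    """Majority vote over an ensemble's SMILES predictions.

    predictions: list of SMILES strings, one per committee member.
    Returns (winning SMILES, confidence), where confidence is the
    fraction of members agreeing with the winner.
    """
    winner, count = Counter(predictions).most_common(1)[0]
    return winner, count / len(predictions)
```

This gives a free calibration signal: a prediction backed by nearly all members can be trusted more than one that barely wins the vote, which is how the paper reports "at least 97% confident" cases.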


2021 ◽  
Vol 9 ◽  
Author(s):  
Bibo Dai ◽  
Yunmin Wang ◽  
Chunyang Ye ◽  
Qihang Li ◽  
Canming Yuan ◽  
...  

This paper proposes an improved U-Net fully convolutional neural network to automatically extract single-landslide deformation information from time series generated by physical model experiments. The method extracts time-series information for three different landslide deformation ranges. Compared with U-Net and a mainstream superpixel method, the evaluation indicators DSC, VOE, and RVD verify the high recognition accuracy and strong robustness of our method.
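The three evaluation indicators named above are standard overlap metrics for binary segmentation masks: Dice similarity coefficient (DSC), volumetric overlap error (VOE), and relative volume difference (RVD). A compact sketch of their definitions:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """DSC, VOE and RVD for binary segmentation masks.

    DSC = 2|P ∩ T| / (|P| + |T|)      (1 is a perfect match)
    VOE = 1 - |P ∩ T| / |P ∪ T|       (0 is a perfect match)
    RVD = (|P| - |T|) / |T|           (0 means equal volumes)
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dsc = 2 * inter / (pred.sum() + truth.sum())
    voe = 1 - inter / union
    rvd = (pred.sum() - truth.sum()) / truth.sum()
    return dsc, voe, rvd
```

DSC and VOE measure overlap quality while RVD is signed, so it also reveals whether a model systematically over- or under-segments the landslide region.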

