Edge Machine Learning: Enabling Smart Internet of Things Applications

2018 ◽  
Vol 2 (3) ◽  
pp. 26 ◽  
Author(s):  
Mahmut Yazici ◽  
Shadi Basurra ◽  
Mohamed Gaber

Machine learning has traditionally been performed solely on servers and high-performance machines. However, advances in chip technology have given us miniature computing devices that fit in our pockets, and mobile processors have vastly increased in capability, narrowing the gap between the simple processors embedded in such things and their more complex cousins in personal computers. Thus, with the current advancement of these devices in terms of processing power, energy storage and memory capacity, the opportunity has arisen to extract great value from on-device machine learning for Internet of Things (IoT) devices. Implementing machine learning inference on edge devices has huge potential and is still in its early stages; however, it is already more powerful than most realise. In this paper, a step forward has been taken to understand the feasibility of running machine learning algorithms, both training and inference, on a Raspberry Pi running an embedded version of the Android operating system designed for IoT device development. Three different algorithms, Random Forests, Support Vector Machine (SVM) and Multi-Layer Perceptron, have been tested using ten diverse data sets on the Raspberry Pi to profile their performance in terms of speed (training and inference), accuracy, and power consumption. As a result of the conducted tests, the SVM algorithm proved to be slightly faster in inference and more efficient in power consumption, but the Random Forest algorithm exhibited the highest accuracy. In addition to the performance results, we discuss their usability scenarios and the idea of implementing more complex and taxing algorithms, such as Deep Learning, on these small devices in more detail.
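The benchmarking setup described above can be sketched with scikit-learn. This is an illustrative stand-in, not the paper's code: the dataset, model settings, and timing approach are assumptions.

```python
# Sketch of an on-device benchmark: train Random Forest, SVM and MLP
# classifiers on a small dataset and record training time, inference
# time and accuracy for each.
import time
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
}

results = {}
for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_train, y_train)          # training phase
    train_time = time.perf_counter() - t0
    t0 = time.perf_counter()
    y_pred = model.predict(X_test)       # inference phase
    infer_time = time.perf_counter() - t0
    results[name] = (train_time, infer_time, accuracy_score(y_test, y_pred))
```

On a Raspberry Pi the same loop would be paired with an external power meter to capture the consumption figures the paper reports.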

Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6349
Author(s):  
Jawad Ahmad ◽  
Johan Sidén ◽  
Henrik Andersson

This paper presents a posture recognition system aimed at detecting the sitting postures of a wheelchair user. The main goal of the proposed system is to identify and report irregular and improper postures to prevent sitting-related health issues such as pressure ulcers, with the potential that it could also be used for individuals without mobility issues. In the proposed monitoring system, an array of 16 screen-printed pressure sensor units was employed to obtain pressure data, which are sampled and processed in real time using read-out electronics. Posture recognition was performed for four sitting positions (right-, left-, forward- and backward-leaning) based on k-nearest neighbors (k-NN), support vector machine (SVM), random forest (RF), decision tree (DT) and LightGBM machine learning algorithms. As a result, a posture classification accuracy of up to 99.03 percent can be achieved. Experimental studies illustrate that the system can provide real-time pressure distribution values in the form of a pressure map on a standard PC and also on a Raspberry Pi system equipped with a touchscreen monitor. The stored pressure distribution data can later be shared with healthcare professionals so that abnormalities in sitting patterns can be identified by employing a post-processing unit. The proposed system could be used for risk assessments related to pressure ulcers, may serve as a benchmark by recording and identifying individuals’ sitting patterns, and could be realized as a lightweight portable health monitoring device.
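A hypothetical sketch of the classification step, with synthetic 4x4 pressure maps standing in for the screen-printed sensor array. The data-generation rule below is invented for illustration and is not the study's data.

```python
# Each sample is a 16-value pressure map (4x4 sensor array, flattened);
# labels are the four leaning postures. A k-NN classifier, one of the
# algorithms the paper compares, is cross-validated on the samples.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
POSTURES = ["right", "left", "forward", "backward"]

def synth_sample(posture):
    grid = rng.uniform(0.1, 0.3, size=(4, 4))   # baseline seat pressure
    if posture == "right":
        grid[:, 2:] += 1.0                      # extra load on right columns
    elif posture == "left":
        grid[:, :2] += 1.0
    elif posture == "forward":
        grid[:2, :] += 1.0                      # extra load on front rows
    else:
        grid[2:, :] += 1.0
    return grid.ravel()

X = np.array([synth_sample(p) for p in POSTURES * 50])
y = np.array([p for p in POSTURES * 50])

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=5)   # per-fold accuracy
```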


2018 ◽  
Vol 7 (2.24) ◽  
pp. 42
Author(s):  
Amber Goel ◽  
Apaar Khurana ◽  
Pranav Sehgal ◽  
K Suganthi

The paper focuses on two areas: automation and security. A Raspberry Pi is the heart of the project, fuelled by machine learning algorithms using OpenCV and the Internet of Things. Face recognition uses Local Binary Patterns, and if an unknown person uses a workstation, a message with that person's photo is sent to the workstation's owner. Face recognition is also used for uploading attendance and for switching appliances ON and OFF automatically. During non-official hours, a human detection algorithm is used to detect human presence. If an unknown person enters the office, a photo of the person is taken and sent to the authorities. This technology, a combination of computer vision, machine learning and the Internet of Things, serves as an efficient tool for both automation and security.
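The Local Binary Patterns operator at the core of the face recognizer can be sketched as follows. This is a didactic NumPy version, not the OpenCV implementation such a project would actually use.

```python
# Local Binary Patterns: each pixel is encoded by comparing its 8
# neighbours against it, producing a texture code that face recognizers
# histogram over image regions (OpenCV's LBPH does this internally).
import numpy as np

def lbp_codes(img):
    """8-bit LBP code for each interior pixel of a 2-D grayscale array."""
    c = img[1:-1, 1:-1]
    # clockwise neighbour offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        # set this bit wherever the neighbour is at least as bright
        codes |= (neighbour >= c).astype(np.uint8) << bit
    return codes

patch = np.array([[5, 5, 5],
                  [5, 9, 5],
                  [5, 5, 5]])
center_code = lbp_codes(patch)[0, 0]   # all neighbours darker -> code 0
```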


2021 ◽  
Vol 30 (04) ◽  
pp. 2150020
Author(s):  
Luke Holbrook ◽  
Miltiadis Alamaniotis

With the increase in cyber-attacks on millions of Internet of Things (IoT) devices, poor network security measures on those devices are the main source of the problem. This article studies a number of machine learning algorithms for their effectiveness in detecting malware on consumer IoT devices. In particular, the Support Vector Machine (SVM), Random Forest, and Deep Neural Network (DNN) algorithms are benchmarked on a set of test data and compared as tools for safeguarding IoT deployments. Test results on a set of four IoT devices showed that all three tested algorithms detect network anomalies with high accuracy. However, the deep neural network provides the highest coefficient of determination R2 and is hence identified as the most precise of the tested algorithms for securing IoT devices on the data sets examined.
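The comparison methodology can be sketched as follows. The synthetic data, model settings, and use of `make_classification` are illustrative assumptions, not the article's setup.

```python
# Train SVM, Random Forest and a small neural network on labelled
# traffic data, then rank them by the coefficient of determination R^2
# of their predictions against the true anomaly labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import r2_score

# synthetic stand-in for per-device network traffic features
X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

models = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=1),
    "DNN": MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=600,
                         random_state=1),
}
r2 = {name: r2_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
      for name, m in models.items()}
best = max(r2, key=r2.get)   # highest R^2 = most precise detector
```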


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 600
Author(s):  
Gianluca Cornetta ◽  
Abdellah Touhafi

Low-cost, high-performance embedded devices are proliferating, and a plethora of new platforms are available on the market. Some of them either have embedded GPUs or can be connected to external Machine Learning (ML) hardware accelerators. These enhanced hardware features enable new applications in which AI-powered smart objects can effectively and pervasively run distributed ML algorithms in real time, shifting part of the raw data analysis and processing from the cloud or edge to the device itself. In this context, Artificial Intelligence (AI) can be considered the backbone of the next generation of Internet of Things (IoT) devices, which will no longer be mere data collectors and forwarders but truly “smart” devices with built-in data wrangling and data analysis features that leverage lightweight machine learning algorithms to make autonomous decisions in the field. This work thoroughly reviews and analyses the most popular ML algorithms, with particular emphasis on those more suitable for running on resource-constrained embedded devices. In addition, several machine learning algorithms have been built on top of a custom multi-dimensional array library. The designed framework has been evaluated and its performance stress-tested on Raspberry Pi 3 and 4 embedded computers.


2021 ◽  
Vol 12 (1) ◽  
pp. 89
Author(s):  
Ruiqi Chen ◽  
Tianyu Wu ◽  
Yuchen Zheng ◽  
Ming Ling

In Internet of Things (IoT) scenarios, it is challenging to deploy Machine Learning (ML) algorithms on low-cost Field Programmable Gate Arrays (FPGAs) in a real-time, cost-efficient, and high-performance way. This paper introduces Machine Learning on FPGA (MLoF), a series of ML IP cores implemented on low-cost FPGA platforms, aimed at helping more IoT developers achieve comprehensive performance across various tasks. Using Verilog, we deploy and accelerate Artificial Neural Networks (ANNs), Decision Trees (DTs), K-Nearest Neighbors (k-NNs), and Support Vector Machines (SVMs) on 10 different FPGA development boards from seven manufacturers. Additionally, we analyze and evaluate our design with six datasets and compare the best-performing FPGAs with traditional SoC-based systems including the NVIDIA Jetson Nano, Raspberry Pi 3B+, and STM32L476 Nucleo. The results show that Lattice’s iCE40UP5K achieves the best overall performance with low power consumption, on which MLoF reduces power by 891% on average and increases performance by 9 times. Moreover, its Cost-Power-Latency Product (CPLP) outperforms SoC-based systems by 25 times, which demonstrates the significance of MLoF for endpoint deployment of ML algorithms. Furthermore, we make all of the code open source to promote future research.
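The CPLP figure of merit can be illustrated with a small calculation. The formula (cost times power times latency, lower is better) is inferred from the metric's name, and the board numbers below are invented placeholders, not values from the paper.

```python
# Cost-Power-Latency Product: a single figure of merit that penalizes
# expensive, power-hungry, or slow platforms simultaneously.
def cplp(cost_usd, power_w, latency_s):
    """Cost-Power-Latency Product; lower is better."""
    return cost_usd * power_w * latency_s

boards = {
    # hypothetical low-cost FPGA: cheap, milliwatt-class, fast inference
    "low-cost FPGA (hypothetical)": cplp(cost_usd=5.0, power_w=0.01,
                                         latency_s=0.002),
    # hypothetical SoC board: pricier and watt-class
    "SoC board (hypothetical)": cplp(cost_usd=35.0, power_w=2.5,
                                     latency_s=0.005),
}
# how many times better the FPGA's CPLP is than the SoC's
ratio = boards["SoC board (hypothetical)"] / boards["low-cost FPGA (hypothetical)"]
```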


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0260315
Author(s):  
Kenichiro Nagata ◽  
Toshikazu Tsuji ◽  
Kimitaka Suetsugu ◽  
Kayoko Muraoka ◽  
Hiroyuki Watanabe ◽  
...  

Overdose prescription errors sometimes cause serious life-threatening adverse drug events, while underdose errors lead to diminished therapeutic effects. Therefore, it is important to detect and prevent these errors. In the present study, we used the one-class support vector machine (OCSVM), one of the most common unsupervised machine learning algorithms for anomaly detection, to identify overdose and underdose prescriptions. We extracted prescription data from electronic health records in Kyushu University Hospital between January 1, 2014 and December 31, 2019. We constructed an OCSVM model for each of the 21 candidate drugs using three features: age, weight, and dose. Clinical overdose and underdose prescriptions, which were identified and rectified by pharmacists before administration, were collected. Synthetic overdose and underdose prescriptions were created using the maximum and minimum doses, defined by drug labels or the UpToDate database. We applied these prescription data to the OCSVM model and evaluated its detection performance. We also performed comparative analysis with other unsupervised outlier detection algorithms (local outlier factor, isolation forest, and robust covariance). Twenty-seven out of 31 clinical overdose and underdose prescriptions (87.1%) were detected as abnormal by the model. The constructed OCSVM models showed high performance for detecting synthetic overdose prescriptions (precision 0.986, recall 0.964, and F-measure 0.973) and synthetic underdose prescriptions (precision 0.980, recall 0.794, and F-measure 0.839). In comparative analysis, OCSVM showed the best performance. Our models detected the majority of clinical overdose and underdose prescriptions and demonstrated high performance in synthetic data analysis. OCSVM models, constructed using features such as age, weight, and dose, are useful for detecting overdose and underdose prescriptions.
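A minimal sketch of the OCSVM approach described above, assuming synthetic (age, weight, dose) prescriptions in place of the hospital data; the dose-to-weight rule is invented for illustration.

```python
# Fit a one-class SVM on normal prescriptions described by the three
# features (age, weight, dose), then flag outlying prescriptions
# (predict() returns -1 for anomalies, +1 for inliers).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
# synthetic "normal" prescriptions: dose roughly proportional to weight
age = rng.uniform(20, 80, 500)
weight = rng.uniform(45, 90, 500)
dose = weight * 0.5 + rng.normal(0, 1, 500)
X_normal = np.column_stack([age, weight, dose])

scaler = StandardScaler().fit(X_normal)
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(scaler.transform(X_normal))

# a tenfold overdose for a 60 kg patient should be flagged as abnormal
overdose = scaler.transform([[50.0, 60.0, 300.0]])
flag = ocsvm.predict(overdose)[0]   # -1 = anomaly
```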


2020 ◽  
Vol 16 (6) ◽  
pp. 155014772091156 ◽  
Author(s):  
Asif Iqbal ◽  
Farman Ullah ◽  
Hafeez Anwar ◽  
Ata Ur Rehman ◽  
Kiran Shah ◽  
...  

We propose wearable-sensor-based human physical activity recognition. This is further extended to an Internet-of-Things (IoT) platform based on a web application that integrates wearable sensors, smartphones, and activity recognition. To this end, a smartphone collects the data from the wearable sensors and sends it to the server for processing and recognition of the physical activity. We collect a novel data set of 13 physical activities performed both indoors and outdoors. The participants are of both genders, and their number varies per activity. During these activities, the wearable sensors measure various body parameters via accelerometers, gyroscopes, magnetometers, and pressure and temperature sensors. These measurements and their statistical descriptors are then represented as feature vectors that are used to train and test supervised machine learning algorithms (classifiers) for activity recognition. On the given data set, we evaluate a number of widely known classifiers, such as random forests and support vector machines, using the WEKA machine learning suite. Using the default settings of these classifiers in WEKA, we attain a highest overall classification accuracy of 90%. Such a recognition rate is encouraging, reliable, and effective enough to be used in the proposed platform.
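Since WEKA is a Java suite, the pipeline of statistical features over raw sensor windows followed by a classifier can be sketched with a scikit-learn stand-in; the data and feature choices below are illustrative, not the study's.

```python
# Compute simple statistics over (samples, axes) sensor windows, then
# train a random forest on the resulting feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

def window_features(window):
    """Per-axis mean, std, min and max over a (samples, axes) window."""
    return np.concatenate([window.mean(0), window.std(0),
                           window.min(0), window.max(0)])

# two synthetic "activities" with different signal amplitudes,
# standing in for 3-axis accelerometer windows
windows = [rng.normal(0, amp, size=(50, 3))
           for amp in (0.5, 2.0) for _ in range(100)]
labels = ["walking"] * 100 + ["running"] * 100
X = np.array([window_features(w) for w in windows])

scores = cross_val_score(RandomForestClassifier(random_state=7),
                         X, labels, cv=5)   # per-fold accuracy
```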


Author(s):  
Samar Amassmir ◽  
Said Tkatek ◽  
Otman Abdoun ◽  
Jaafar Abouchabaka

This paper presents a comparison of three machine learning algorithms for a better intelligent irrigation system based on the Internet of Things (IoT) for different products. This work's major contribution is to identify the most accurate of three machine learning algorithms: k-nearest neighbors (KNN), support vector machine (SVM), and artificial neural network (ANN). This is achieved by collecting irrigation data for specific products, splitting it into training data and test data, and then comparing the accuracy of the three algorithms. To evaluate the performance of our algorithm, we built a system of IoT devices. Temperature and humidity sensors installed in the field interact with an Arduino microcontroller, which is connected to a Raspberry Pi 3 that hosts the machine learning algorithm. The ANN algorithm turned out to be the most accurate for such an irrigation system, making it the best choice for an intelligent system that minimizes water loss for some products.
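A hedged sketch of the three-way comparison, with an invented decision rule generating synthetic (temperature, humidity) readings and irrigate/don't-irrigate labels; none of this is the paper's data.

```python
# Train KNN, SVM and ANN on (temperature, humidity) samples and pick
# the most accurate on held-out test data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
temp = rng.uniform(10, 45, 600)     # air temperature, degrees C
hum = rng.uniform(10, 90, 600)      # soil humidity, percent
X = np.column_stack([temp, hum])
y = ((temp > 30) & (hum < 40)).astype(int)   # 1 = irrigate (toy rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                         random_state=3),
}
acc = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
       for name, m in models.items()}
best = max(acc, key=acc.get)   # most accurate algorithm for this data
```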


Author(s):  
Pratic Chakraborty

Abstract: Machine learning is the buzzword right now. With machine learning algorithms, one can make a computer differentiate between a human and a cow, detect objects, predict different parameters, and process our native languages. But all these algorithms require a fair amount of processing power in order to be trained and fitted as a model. Thankfully, with current improvements in technology, the processing power of computers has significantly increased. But server computers are limited in power consumption and deployability. This is where “tinyML” helps the industry out. Machine learning has never been so easy to access before!


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Clíssia Barboza da Silva ◽  
Nielsen Moreira Oliveira ◽  
Marcia Eugenia Amaral de Carvalho ◽  
André Dantas de Medeiros ◽  
Marina de Lima Nogueira ◽  
...  

In the agricultural industry, advances in optical imaging technologies based on rapid and non-destructive approaches have contributed to increasing food production for the growing population. The present study employed autofluorescence-spectral imaging and machine learning algorithms to develop distinct models for the classification of soybean seeds differing in physiological quality after artificial aging. Autofluorescence signals from the 365/400 nm excitation-emission combination (which exhibited a perfect correlation with the total phenols in the embryo) were able to efficiently segregate treatments. Furthermore, it was also possible to demonstrate a strong correlation between autofluorescence-spectral data and several quality indicators, such as early germination and seed tolerance to stressful conditions. The machine learning models developed based on artificial neural networks, support vector machines or linear discriminant analysis showed high performance (0.99 accuracy) for classifying seeds with different quality levels. Taken together, our study shows that the physiological potential of soybean seeds is reduced alongside changes in the concentration and, probably, the structure of autofluorescent compounds. In addition, altering the autofluorescent properties of seeds impacts the photosynthetic apparatus in seedlings. From a practical point of view, autofluorescence-based imaging can be used to check modifications in the optical properties of soybean seed tissues and to consistently discriminate high- and low-vigor seeds.
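The classification step can be sketched with linear discriminant analysis on synthetic spectra; the band count and class separation below are invented for illustration and do not reflect the study's measurements.

```python
# Cross-validate an LDA classifier on spectral feature vectors to
# separate two seed-quality classes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_bands = 20     # hypothetical number of excitation-emission bands

# toy assumption: high-vigor seeds fluoresce more in the lower bands
high = rng.normal(1.0, 0.2, size=(80, n_bands))
high[:, :5] += 1.0
low = rng.normal(1.0, 0.2, size=(80, n_bands))

X = np.vstack([high, low])
y = ["high-vigor"] * 80 + ["low-vigor"] * 80
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
```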

