The Lightweight Autonomous Vehicle Self-Diagnosis (LAVS) Using Machine Learning Based on Sensors and Multi-Protocol IoT Gateway

Sensors ◽  
2019 ◽  
Vol 19 (11) ◽  
pp. 2534 ◽  
Author(s):  
YiNa Jeong ◽  
SuRak Son ◽  
ByungKwan Lee

This paper proposes the lightweight autonomous vehicle self-diagnosis (LAVS) using machine learning based on sensors and an Internet of Things (IoT) gateway. It collects sensor data from in-vehicle sensors and converts the data into sensor messages as it passes through the protocol buses. The converted messages are divided into header information, sensor messages, and payloads, which are stored separately in an address table, a message queue, and a data collection table. In sequence, the sensor messages are converted to the message type of the other protocol, and the payloads are transferred to an in-vehicle diagnosis module (In-VDM). The LAVS reports the diagnosis result to the Cloud or a road side unit (RSU) over the Internet of Vehicles (IoV), and to drivers over Bluetooth. The LAVS consists of two modules. First, a multi-protocol integrated gateway module (MIGM) converts sensor messages for communication between two different protocols, transfers the extracted payloads to the In-VDM, and uses IoV to transfer the diagnosis result and payloads to the Cloud through wireless access in vehicular environments (WAVE). Second, the In-VDM uses a random forest to diagnose individual parts of the vehicle, and delivers the results of the random forest as input to a neural network that diagnoses the overall condition of the vehicle. Because the In-VDM combines these two models for self-diagnosis, it can diagnose a vehicle efficiently. In addition, because the LAVS converts payloads into WAVE messages and uses IoV to transfer them to an RSU or the Cloud, it helps prevent accidents by rapidly informing drivers of the vehicle's condition.
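The two-stage In-VDM design can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' implementation: the part names, feature counts, and labels are all made up, and scikit-learn models stand in for whatever the paper actually used.

```python
# Sketch of a two-stage self-diagnosis: one random forest per vehicle
# part, whose fault probabilities feed a neural network that judges
# the overall vehicle condition. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical sensor payloads: 8 readings per part, 3 parts.
parts = ["engine", "brake", "battery"]
X = {p: rng.normal(size=(200, 8)) for p in parts}
y = {p: (X[p][:, 0] > 0).astype(int) for p in parts}  # 1 = fault

# Stage 1: per-part random forests.
forests = {p: RandomForestClassifier(n_estimators=50, random_state=0).fit(X[p], y[p])
           for p in parts}

# Stage 2: the per-part fault probabilities become the NN's input.
stage2_in = np.column_stack([forests[p].predict_proba(X[p])[:, 1] for p in parts])
overall = (stage2_in.sum(axis=1) > 1).astype(int)  # synthetic "total condition" label
nn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
nn.fit(stage2_in, overall)
print(nn.score(stage2_in, overall))
```

The key design point is that the neural network never sees raw sensor payloads, only the compact per-part diagnoses, which keeps the second stage lightweight.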

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Chinmay P. Swami ◽  
Nicholas Lenhard ◽  
Jiyeon Kang

Abstract Prosthetic arms can significantly increase the upper limb function of individuals with upper limb loss; however, despite the development of various multi-DoF prosthetic arms, the rate of prosthesis abandonment is still high. One of the major challenges is to design a multi-DoF controller that has high precision, robustness, and intuitiveness for daily use. The present study demonstrates a novel framework for developing a controller that leverages machine learning algorithms and movement synergies to implement natural control of a 2-DoF prosthetic wrist for activities of daily living (ADL). The data were collected during ADL tasks performed by ten individuals wearing a wrist brace emulating the absence of wrist function. Using these data, a neural network classifies the movement, and random forest regression then computes the desired velocity of the prosthetic wrist. The models were trained and tested on ADL data, and their robustness was evaluated using cross-validation and holdout data sets. The proposed framework demonstrated high accuracy (an F1 score of 99% for the classifier and a Pearson's correlation of 0.98 for the regression). Additionally, the interpretable nature of random forest regression was used to verify the targeted movement synergies. The present work provides a novel and effective framework for developing intuitive control of multi-DoF prosthetic devices.
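The classify-then-regress control flow can be sketched in miniature. This is a hedged illustration on random data, not the study's pipeline: the feature dimensions, two movement classes, and velocity targets are all invented for the example.

```python
# Sketch: a neural network classifies the ADL movement, then a
# per-class random forest regressor maps the same features to a
# 2-DoF wrist velocity command. Synthetic data throughout.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))                        # made-up movement features
move = (X[:, 0] > 0).astype(int)                     # two ADL movement classes
vel = X[:, 1:3] + 0.1 * rng.normal(size=(300, 2))    # 2-DoF velocity targets

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800, random_state=1).fit(X, move)
regs = {c: RandomForestRegressor(n_estimators=50, random_state=1)
           .fit(X[move == c], vel[move == c]) for c in (0, 1)}

def wrist_command(x):
    """Classify the movement, then regress the 2-DoF velocity."""
    c = clf.predict(x.reshape(1, -1))[0]
    return regs[c].predict(x.reshape(1, -1))[0]

print(wrist_command(X[0]))
```

Splitting the regression by movement class lets each forest specialize on one synergy, which is one plausible reading of how the classifier and regressor cooperate.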


Agriculture is one of the cardinal sectors of the Indian economy. The proposed system offers a methodology to efficiently monitor and control various attributes that affect crop growth and production. The system also uses machine learning along with the Internet of Things (IoT) to predict the crop yield. Weather and soil conditions such as temperature, humidity, and soil moisture are monitored in real time using IoT sensors. IoT is also used to regulate the water level in the water tanks, which helps reduce the wastage of water resources. A machine learning model is developed to predict the crop yield based on parameters taken from these sensors. The model uses a Random Forest Regressor and achieves an accuracy of 87.5%. Such a system provides a simple and efficient way to maintain and monitor the health of the crop.
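A yield predictor of this shape can be sketched as below. The features match those named in the abstract (temperature, humidity, soil moisture), but the data, value ranges, and yield relationship are entirely synthetic assumptions for illustration.

```python
# Sketch: Random Forest regression of crop yield from three
# IoT sensor readings. Data is synthetic, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
temp = rng.uniform(15, 40, 500)        # °C
humidity = rng.uniform(30, 90, 500)    # %
moisture = rng.uniform(10, 60, 500)    # % soil moisture
X = np.column_stack([temp, humidity, moisture])
# Invented ground truth: yield rises with moisture and humidity.
yield_t = 2.0 * moisture + 0.5 * humidity - 0.3 * temp + rng.normal(0, 5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, yield_t, random_state=2)
model = RandomForestRegressor(n_estimators=100, random_state=2).fit(X_tr, y_tr)
print(model.score(X_te, y_te))  # R^2 on held-out data
```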


The aim of indoor localization is to locate objects inside a building wirelessly. This paper reports models that predict the location, floor, and coordinates of a user from the signal strengths of the WAPs (Wireless Access Points) recorded when the user connects to the internet, across a dataset covering three locations. The workflow starts with cleaning the data, then casting attributes to the proper data types, creating a subset of the dataset for each location, examining each column, and normalizing the WAP rows in order to build the models. Different algorithms were used to predict the location, floor, and coordinates of a logged-in user: k-Nearest Neighbors (k-NN) for location prediction, random forest for floor prediction, and k-NN regression for coordinate prediction.
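The three-model setup can be sketched with synthetic RSSI rows. The number of WAPs, floors, and the coordinate range are invented for the example; only the model-to-target assignment follows the abstract.

```python
# Sketch: k-NN for building/location, random forest for floor,
# k-NN regression for (x, y) coordinates, all trained on
# synthetic WAP signal strengths (RSSI, in dBm).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
rssi = rng.uniform(-100, -30, size=(400, 20))  # 20 hypothetical WAPs
building = rng.integers(0, 3, 400)             # three locations
floor = rng.integers(0, 4, 400)
coords = rng.uniform(0, 50, size=(400, 2))

loc_model = KNeighborsClassifier(n_neighbors=5).fit(rssi, building)
floor_model = RandomForestClassifier(n_estimators=50, random_state=3).fit(rssi, floor)
coord_model = KNeighborsRegressor(n_neighbors=5).fit(rssi, coords)

sample = rssi[:1]
print(loc_model.predict(sample), floor_model.predict(sample), coord_model.predict(sample))
```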


2021 ◽  
Vol 5 (2) ◽  
pp. 415
Author(s):  
Firdausi Nuzula Zamzami ◽  
Adiwijaya Adiwijaya ◽  
Mahendra Dwifebri P

Most information exchange now takes place on the internet. It can happen in many ways, such as expressing opinions on social media; one example is reviewing a film. When someone reviews a film, they express their feelings, which can be positive or negative. The rapid growth of the internet has made information more diverse, plentiful, and unstructured. Sentiment analysis can handle this, because it is a classification process for understanding the opinions, interactions, and emotions in a document or text, carried out automatically by a computer system. One suitable machine learning method is the Modified Balanced Random Forest. To deal with the varied data, Mutual Information is used for feature selection. With these two methods, the system achieves an accuracy of 79% and an F1 score of 75%.
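The feature-selection step can be sketched with scikit-learn. Note the hedge: scikit-learn has no Modified Balanced Random Forest, so a plain random forest with balanced class weights stands in for it here, and the review features are random numbers rather than real text vectors.

```python
# Sketch: Mutual Information feature selection followed by a
# class-balanced random forest (a stand-in for the paper's
# Modified Balanced Random Forest). Synthetic data.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 40))           # e.g. TF-IDF-like review features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # positive / negative sentiment

# Mutual Information keeps only the most informative features.
selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
X_sel = selector.transform(X)

clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=4).fit(X_sel, y)
print(clf.score(X_sel, y))
```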


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3079
Author(s):  
André Glória ◽  
João Cardoso ◽  
Pedro Sebastião

Presently, saving natural resources is an increasing concern, and water scarcity is a fact in ever more areas of the globe. One of the main strategies used to counter this trend is the use of new technologies, among which the Internet of Things stands out, with solutions characterized by robustness and simplicity at low cost. This paper presents the study and development of an automatic irrigation control system for agricultural fields. The developed solution comprises a wireless sensor and actuator network and a mobile application that lets the user consult not only the data collected in real time but also its history, while the system acts in accordance with the data it analyses. To adapt the water management, machine learning algorithms were studied to predict the best time of day for water administration. Of the studied algorithms (Decision Trees, Random Forest, Neural Networks, and Support Vector Machines), the one that obtained the best results was Random Forest, with an accuracy of 84.6%. Besides the ML solution, a method was also developed to calculate the amount of water needed to manage the fields under analysis. The implemented system proved effective and can achieve up to 60% water savings.
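The "best time to irrigate" prediction can be sketched as a classification task. The decision rule, sensor ranges, and features below are invented assumptions; only the choice of Random Forest follows the abstract.

```python
# Sketch: a random forest predicts whether a given hour is a good
# time to irrigate, from synthetic hour/temperature/moisture data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
hour = rng.integers(0, 24, 500)
temp = 18 + 10 * np.sin((hour - 6) / 24 * 2 * np.pi) + rng.normal(0, 2, 500)
moisture = rng.uniform(10, 60, 500)
X = np.column_stack([hour, temp, moisture])
# Invented rule: irrigate when it is cool and the soil is dry.
y = ((temp < 20) & (moisture < 35)).astype(int)

rf = RandomForestClassifier(n_estimators=100, random_state=8)
print(cross_val_score(rf, X, y, cv=5).mean())
```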


Author(s):  
Rowan Wilken

The Conclusion revisits the terrain the book has covered, recapitulating the arguments of each of the three parts of the book and the chapters contained within them. The argument of this Conclusion is that, while locative media have shifted significantly over the course of the past decade or so, location, locative media, and location data capture remain central concerns, both in the present and within and for new technological developments. Location is, for instance, central to visions of "smart" or "networked" cities and of depth-sensing vision capture technologies. It is also crucial to recent developments in mapping and indoor mapping, autonomous vehicle development, environmental sensing, the internet of things, machine learning, and distributed ledger technologies.


The Internet of Things (IoT) generates massive data flows in the real world. Virtually any device can now be linked to the internet and supply useful decision-making information, and sensors are deployed in almost every aspect of life, producing raw data from many different sources. Because of these varied sources, extracting information from the data stream is complicated: networks are often inadequate, and real-time processing imposes strict requirements. In addition, context-aware data processing and architecture remain open issues, despite being essential for a stronger IoT structure. To address this, we propose a Context-aware Internet of Things Middleware (CAIM) architecture. It enables the integration of highly diverse IoT application context information by using the lightweight MQTT (Message Queue Telemetry Transport) protocol to transmit basic data streams from sensors to the middleware and applications. In this paper, we propose a contextualization process that obtains data from sensors of different sources. First, a context profile is created using context types such as user, activity, physical, and environment context; the profile is then extended with attributes. Finally, raw data is changed into contextualized data through a CAPS (context-aware Publish-Subscribe) hybrid approach. This paper discusses current context analysis strategies that use either rational models or probabilistic methods exclusively. The evaluation of the identified contextualization methods shows the shortcomings of IoT sensor data processing and offers alternative ways of identifying context.
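The contextualization step (wrapping a raw reading with a context profile before it is published over MQTT) can be sketched in plain Python. All field names and values here are illustrative, not taken from the CAIM paper.

```python
# Toy sketch of contextualization: a raw sensor reading is wrapped
# with a context profile built from the four context types named in
# the abstract (user, activity, physical, environment).
def contextualize(raw_value, sensor_id, profile):
    """Turn a raw reading into a contextualized message dict."""
    return {
        "sensor_id": sensor_id,
        "value": raw_value,
        "context": {
            "user": profile.get("user"),
            "activity": profile.get("activity"),
            "physical": profile.get("physical"),
            "environment": profile.get("environment"),
        },
    }

profile = {"user": "patient-7", "activity": "sleeping",
           "physical": "bedroom", "environment": "indoor"}
msg = contextualize(36.8, "temp-01", profile)
print(msg["context"]["activity"])  # sleeping
```

A message like this is what a subscriber would receive on the MQTT side, with the context attached so downstream applications need not re-derive it.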


Author(s):  
Sheikh Shehzad Ahmed

The Internet is used practically everywhere in today's digital environment. With the increased use of the Internet comes an increase in the number of threats. DDoS attacks are one of the most common types of cyber-attack today, and with the fast advancement of technology the harm they cause has grown increasingly severe. Because DDoS attacks may readily change the ports and protocols they use or how they operate, the basic features of these attacks must be examined. Machine learning approaches have been used extensively in intrusion detection research, yet it remains unclear which features are applicable and which approach is better suited for detection. With this in mind, this research presents a machine learning-based DDoS attack detection approach. To train the attack detection model, four machine learning algorithms are employed: a Decision Tree classifier (ID3), k-Nearest Neighbors (k-NN), Logistic Regression, and a Random Forest classifier. The experimental results show that the Random Forest classifier is the most accurate at recognizing attacks.
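The four-model comparison can be sketched on synthetic flow features. The data and the nonlinear attack pattern below are invented; the model line-up follows the abstract, with an entropy-criterion decision tree standing in for ID3.

```python
# Sketch: comparing the four classifiers from the abstract on
# synthetic "network flow" features with a nonlinear attack label.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(600, 10))           # synthetic flow features
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # 1 = attack traffic (nonlinear rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=5)
models = {
    "ID3-style tree": DecisionTreeClassifier(criterion="entropy", random_state=5),
    "k-NN": KNeighborsClassifier(),
    "LogReg": LogisticRegression(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=5),
}
for name, m in models.items():
    print(name, m.fit(X_tr, y_tr).score(X_te, y_te))
```

On a nonlinear pattern like this, the linear model struggles while the tree ensembles cope, which mirrors why ensemble methods often win such comparisons.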


2021 ◽  
Vol 44 (4) ◽  
pp. 1-12
Author(s):  
Ratchainant Thammasudjarit ◽  
Punnathorn Ingsathit ◽  
Sigit Ari Saputro ◽  
Atiporn Ingsathit ◽  
Ammarin Thakkinstian

Background: Chronic kidney disease (CKD) consumes huge amounts of treatment resources. Early detection of patients by a risk prediction model should be useful for identifying at-risk patients and providing early treatment. Objective: To compare the performance of traditional logistic regression with machine learning (ML) in predicting the risk of CKD in the Thai population. Methods: This study used Thai Screening and Early Evaluation of Kidney Disease (SEEK) data. Seventeen features were initially considered in constructing prediction models using logistic regression and 4 ML methods (Random Forest, Naïve Bayes, Decision Tree, and Neural Network). Data were split into train and test sets with a ratio of 70:30. Model performance was assessed by estimating recall, the C statistic, accuracy, F1, and precision. Results: Seven of the 17 features were included in the prediction models. A logistic regression model discriminated CKD from non-CKD patients well, with C statistics of 0.79 and 0.78 in the train and test data. The Neural Network performed best among the ML methods, followed by Random Forest, Naïve Bayes, and Decision Tree, with corresponding C statistics of 0.82, 0.80, 0.78, and 0.77 in the training data set. In the test data, the performance of these models decreased by about 5%, 3%, 1%, and 2% respectively, whereas the logistic model decreased by 2%. Conclusions: A CKD risk prediction model constructed with the logit equation may yield better discrimination and a lower tendency to overfit than ML models, including the Neural Network and Random Forest.
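The train-versus-test C-statistic comparison that drives the paper's conclusion can be sketched as below; the C statistic is the area under the ROC curve. The data is synthetic with an assumed linear risk relationship, so the flexible model's tendency to overfit its training set shows up as a train/test gap.

```python
# Sketch: comparing logistic regression and a random forest by
# C statistic (ROC AUC) on train vs test data, as in the study's
# 70:30 split. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 7))  # 7 features, echoing the 7 selected in the study
y = (X @ rng.normal(size=7) + rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=6)
for name, m in [("logistic", LogisticRegression()),
                ("random forest", RandomForestClassifier(n_estimators=100,
                                                         random_state=6))]:
    m.fit(X_tr, y_tr)
    train_c = roc_auc_score(y_tr, m.predict_proba(X_tr)[:, 1])
    test_c = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(name, round(train_c, 2), round(test_c, 2))  # train vs test C statistic
```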


2021 ◽  
Author(s):  
Gabriel Ricardo Vásquez Morales ◽  
Sergio Mauricio Martínez Monterrubio ◽  
Juan Antonio Recio García ◽  
Pablo Moreno Ger

Abstract The COVID-19 pandemic, which began in late 2019, has become a global public health problem, resulting in large numbers of people infected and dead. One of the greatest challenges in dealing with the disease is identifying the people most at risk of becoming infected, seriously ill, and dying from the virus, so that they can be isolated in a targeted manner and mortality rates thereby reduced. This article proposes the use of machine learning, specifically neural networks and random forests, to build two complementary models that estimate the probability that a person will die of COVID-19. The models are trained on the demographic information and medical history of two population groups: on the one hand, 43,000 people who died from COVID-19 in Colombia during 2020, and on the other, a random sample of 43,000 people who became ill with COVID-19 during the same period but later recovered. After training the neural network classification model, evaluation metrics yielded an accuracy of 88%. However, transparency is a major requirement for the explainability of COVID-19 prognosis. Therefore, a complementary random forest model is trained that allows the identification of the most significant predictors of COVID-19 mortality.
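The explainability role of the complementary random forest can be sketched via feature importances. The predictors, their distributions, and the mortality rule below are all synthetic assumptions; the paper's actual features and findings are not reproduced here.

```python
# Sketch: using random forest feature importances to surface the
# strongest mortality predictors, on invented demographic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n = 1000
age = rng.integers(20, 90, n)
diabetes = rng.integers(0, 2, n)
hypertension = rng.integers(0, 2, n)
X = np.column_stack([age, diabetes, hypertension])
# Synthetic mortality label driven mostly by age.
y = (age + 15 * diabetes + 10 * hypertension + rng.normal(0, 10, n) > 75).astype(int)

rf = RandomForestClassifier(n_estimators=100, random_state=7).fit(X, y)
for name, imp in zip(["age", "diabetes", "hypertension"], rf.feature_importances_):
    print(name, round(imp, 3))
```

Because the importances are attached to named features, a ranking like this is directly readable by clinicians, which is the transparency argument the abstract makes for pairing the forest with the neural network.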

