Pedestrian and Multi-Class Vehicle Classification in Radar Systems Using Rulex Software on the Raspberry Pi

2020 ◽  
Vol 10 (24) ◽  
pp. 9113
Author(s):  
Ali Walid Daher ◽  
Ali Rizik ◽  
Andrea Randazzo ◽  
Emanuele Tavanti ◽  
Hussein Chible ◽  
...  

Nowadays, cities can be perceived as increasingly dangerous places. CCTV is usually one of the main technologies used in a modern security system; however, poor lighting or bad weather conditions (rain, fog, etc.) limit the detection capabilities of image-based systems. Microwave radar detection systems can overcome this limitation and take advantage of the results obtained by low-cost technologies developed for the automotive market. Transportation by car can be dangerous, and every year car accidents cause many fatalities. Drivers require automated assistance in detecting and correctly classifying approaching vehicles and, more importantly, pedestrians. In this paper, we present the application of machine learning to data collected by a 24 GHz short-range radar for urban target classification. Training and testing take place on a Raspberry Pi acting as an edge computing node in a client/server arrangement. The software of choice is Rulex, a high-performance machine learning package controlled through a remote interface. Forecasts with a varying number of classes were performed, with one, two, or three classes for vehicles and one for humans. Furthermore, we applied a single forecast for all four classes, as well as cascading forecasts in a tree-like structure, varying the algorithms, the cascading block order, the class weights, and the data splitting ratio of each forecast to improve prediction accuracy. In the experiments carried out to validate the presented approach, an accuracy of up to 100% was obtained for human classification and 96.67% for vehicles in general. Vehicle sub-classes were predicted with 90.63% accuracy for motorcycles and 77.34% accuracy for both cars and trucks.
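The cascaded, tree-like forecasting scheme described above can be sketched with simple nearest-centroid stages. This is a minimal illustration only: Rulex's actual algorithms, the radar features, and the centroid values below are all hypothetical, not taken from the paper.

```python
# Illustrative two-stage cascade: stage 1 separates humans from vehicles,
# stage 2 splits vehicles into motorcycle vs. car/truck.
# Centroids and the (RCS, Doppler-spread) feature space are made up.

def nearest_centroid(sample, centroids):
    """Return the label of the centroid closest to `sample` (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Hypothetical feature space: (radar cross-section, Doppler spread)
STAGE1 = {"human": (0.5, 0.2), "vehicle": (10.0, 1.5)}
STAGE2 = {"motorcycle": (4.0, 1.0), "car/truck": (15.0, 2.0)}

def classify(sample):
    """Cascade: first human vs. vehicle, then the vehicle sub-class."""
    label = nearest_centroid(sample, STAGE1)
    if label == "vehicle":
        label = nearest_centroid(sample, STAGE2)
    return label
```

A sample is only routed to the second stage if the first stage calls it a vehicle, which is what lets each stage's weights and data split be tuned independently, as the paper varies them.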

Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 600
Author(s):  
Gianluca Cornetta ◽  
Abdellah Touhafi

Low-cost, high-performance embedded devices are proliferating, and a plethora of new platforms are available on the market. Some of them either have embedded GPUs or can be connected to external Machine Learning (ML) hardware accelerators. These enhanced hardware features enable new applications in which AI-powered smart objects can effectively and pervasively run distributed ML algorithms in real time, shifting part of the raw data analysis and processing from the cloud or edge to the device itself. In this context, Artificial Intelligence (AI) can be considered the backbone of the next generation of Internet of Things (IoT) devices, which will no longer be mere data collectors and forwarders, but truly “smart” devices with built-in data wrangling and data analysis features that leverage lightweight machine learning algorithms to make autonomous decisions in the field. This work thoroughly reviews and analyses the most popular ML algorithms, with particular emphasis on those best suited to resource-constrained embedded devices. In addition, several machine learning algorithms have been built on top of a custom multi-dimensional array library. The designed framework has been evaluated and its performance stress-tested on Raspberry Pi 3 and 4 embedded computers.
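The idea of building ML algorithms on top of a custom multi-dimensional array library can be sketched as follows. The layer, weights, and activation here are a generic illustration of lightweight inference over plain nested lists, not the paper's actual library API.

```python
import math

# Minimal dense (logistic) layer built on plain nested lists, in the spirit
# of a custom multi-dimensional array library for constrained devices.
# The weight values are hypothetical.

def matvec(matrix, vector):
    """Matrix-vector product over nested lists (no external dependencies)."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, features):
    """Single-layer logistic classifier: sigmoid(W @ x + b), element-wise."""
    logits = matvec(weights, features)
    return [sigmoid(z + b) for z, b in zip(logits, bias)]
```

Keeping the array operations in pure, dependency-free code is one way such a framework stays portable across small boards; the trade-off is raw speed, which is where the external accelerators mentioned above come in.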


2021 ◽  
Vol 12 (1) ◽  
pp. 89
Author(s):  
Ruiqi Chen ◽  
Tianyu Wu ◽  
Yuchen Zheng ◽  
Ming Ling

In Internet of Things (IoT) scenarios, it is challenging to deploy Machine Learning (ML) algorithms on low-cost Field Programmable Gate Arrays (FPGAs) in a real-time, cost-efficient, and high-performance way. This paper introduces Machine Learning on FPGA (MLoF), a series of ML IP cores implemented on low-cost FPGA platforms, aiming to help more IoT developers achieve comprehensive performance in various tasks. Using Verilog, we deploy and accelerate Artificial Neural Networks (ANNs), Decision Trees (DTs), K-Nearest Neighbors (k-NNs), and Support Vector Machines (SVMs) on 10 different FPGA development boards from seven manufacturers. Additionally, we analyze and evaluate our design with six datasets, and compare the best-performing FPGAs with traditional SoC-based systems including the NVIDIA Jetson Nano, Raspberry Pi 3B+, and STM32L476 Nucleo. The results show that Lattice’s ICE40UP5 achieves the best overall performance with low power consumption, on which MLoF reduces power by 891% and increases performance by 9 times on average. Moreover, its Cost-Power-Latency Product (CPLP) outperforms SoC-based systems by 25 times, which demonstrates the significance of MLoF in endpoint deployment of ML algorithms. Furthermore, we make all of the code open-source in order to promote future research.
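The Cost-Power-Latency Product (CPLP) metric used above can be sketched as a simple figure of merit where lower is better. The board names and the cost/power/latency figures below are illustrative placeholders, not measurements from the paper.

```python
# Illustrative CPLP comparison between an FPGA board and an SoC board.
# All numbers are made up for demonstration.

def cplp(cost_usd, power_w, latency_s):
    """Cost-Power-Latency Product: unit cost x power draw x inference latency.
    Lower is better, since it jointly penalizes expense, energy, and delay."""
    return cost_usd * power_w * latency_s

boards = {
    "fpga_board": cplp(6.0, 0.01, 0.002),   # cheap, milliwatt-class, fast
    "soc_board":  cplp(35.0, 2.5, 0.004),   # pricier and watt-class
}

best = min(boards, key=boards.get)
```

A multiplicative metric like this rewards a platform that is merely adequate on every axis over one that excels on a single axis, which matches the paper's framing of "comprehensive performance" for endpoint deployment.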


Author(s):  
T. Tadono ◽  
M. Ohki ◽  
T. Abe

<p><strong>Abstract.</strong> The Advanced Land Observing Satellite-2 (ALOS-2) was launched on May 24, 2014, and has been operating well in space for more than 4.5 years. The designed mission life is five years for the nominal operational phase, with a target of over seven years since launch. The mission objectives of ALOS-2 are 1) disaster monitoring, 2) national land and infrastructure information, 3) cultivated area monitoring, and 4) global forest monitoring. To achieve these objectives, ALOS-2 carries the Phased Array type L-band Synthetic Aperture Radar-2 (PALSAR-2), an active microwave radar using the 1.2 GHz frequency band that can observe day and night, even in bad weather conditions, as the successor to the PALSAR instrument onboard the ALOS satellite operated from 2006 to 2011. The PALSAR-2 instrument has several features enhanced over PALSAR, e.g., finer spatial resolution, a spotlight observing mode, and dual-polarisation ScanSAR. This paper summarises typical data analysis results for monitoring natural disasters with ALOS-2 during the operational phase. In response to natural disasters, more than 400 emergency observations have been conducted to identify damage caused by volcanic activities, earthquakes, flooding, etc. occurring in Japan and around the world.</p>


2021 ◽  
Author(s):  
Nicholas Parkyn

Emerging heterogeneous computing, edge computing, and machine learning/AI-at-the-edge technologies drive approaches and techniques for processing and analysing onboard instrument data in near real time. The author has used edge computing and neural networks combined with high-performance heterogeneous computing platforms to accelerate AI workloads. The heterogeneous computing hardware used is readily available and low cost, delivers impressive AI performance, and can run multiple neural networks in parallel. Collecting, processing, and performing machine learning on onboard instrument data in near real time is not a trivial problem, due to data volumes and the complexities of data filtering, data storage, and continual learning. Little research has been done on continual machine learning, which aims at a higher level of machine intelligence by providing artificial agents with the ability to learn from a non-stationary and never-ending stream of data. The author has applied the concept of continual learning to build a system that continually learns from actual boat performance and refines predictions previously made using static VPP data. The neural networks used are initially trained on the output of traditional VPP software and continue to learn from actual data collected under real sailing conditions. The author will present the system design, the AI and edge computing techniques used, and the approaches he has researched for incremental training to realise continual learning.
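The pattern of pre-training on static VPP output and then refining from streaming measurements can be sketched with a toy online learner. A one-parameter linear model stands in for the neural networks here; the learning rate, initial fit, and data stream are all illustrative, not from the author's system.

```python
# Continual-learning sketch: start from a model fitted to static VPP
# predictions, then keep updating it incrementally as real sailing data
# streams in. All numbers are synthetic.

def sgd_step(w, b, x, y, lr=0.05):
    """One stochastic-gradient step on squared error for y ~ w*x + b."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

# Initial parameters, as if fitted to static VPP output (deliberately off).
w, b = 2.0, 0.0

# Streaming measurements whose true relationship is y = 2.5 * x.
stream = [(1.0, 2.5), (2.0, 5.0), (1.5, 3.75)] * 200
for x, y in stream:
    w, b = sgd_step(w, b, x, y)
```

Because each update touches only the newest sample, the model keeps adapting as conditions drift, which is the essence of learning from a non-stationary, never-ending stream; real continual-learning systems additionally guard against forgetting older regimes.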


2022 ◽  
Vol 201 ◽  
pp. 110881
Author(s):  
Xiaoxi Mi ◽  
Lianjuan Tian ◽  
Aitao Tang ◽  
Jing Kang ◽  
Peng Peng ◽  
...  

2021 ◽  
Author(s):  
Sérgio Baldo Junior ◽  
Thiago Faria dos Santos ◽  
Renato Tinós ◽  
Paulo Roberto Pereira Santiago

Abstract The analysis of running patterns, especially those associated with fatigue, can help specialists design more efficient workouts and prevent injuries in high-performance sports. However, classifying running patterns is not trivial for humans. An interesting alternative is to use Machine Learning methods, such as Artificial Neural Networks (ANNs), to classify running patterns. In this work, ground reaction forces are measured by sensors coupled to the base of a low-cost, open-source treadmill. ANNs are used to classify the force signals and to indicate the occurrence of fatigue. Different features, extracted from the force signals, are proposed and investigated, and a Genetic Algorithm (GA) is used to select the best ones. The experimental results indicate that the ANN is able to classify the running patterns with good accuracy. In addition, some features selected by the GA provide important information for identifying fatigue in treadmill running.
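GA-based feature selection of the kind described above can be sketched as evolving binary masks over the feature set. Everything below is synthetic: the per-feature relevance scores and the fitness function stand in for the ANN's validation accuracy, which is what the paper's GA actually optimizes.

```python
import random

# Toy genetic algorithm for feature selection. A candidate is a 0/1 mask
# over the features; fitness rewards useful features and penalizes size.

RELEVANCE = [0.9, 0.1, 0.8, 0.05, 0.7]  # hypothetical usefulness per feature

def fitness(mask):
    """Reward relevant features, penalize larger subsets (parsimony)."""
    gain = sum(r for r, bit in zip(RELEVANCE, mask) if bit)
    return gain - 0.2 * sum(mask)

def mutate(mask, rate=0.2):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ (random.random() < rate) for bit in mask]

def select_features(pop_size=20, generations=40, seed=1):
    random.seed(seed)
    # Seed the population with the all-features baseline; elitist selection
    # below guarantees the result is never worse than that baseline.
    pop = [[1] * len(RELEVANCE)]
    pop += [[random.randint(0, 1) for _ in RELEVANCE]
            for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitism: keep the best half
        pop = survivors + [mutate(m) for m in survivors]
    return max(pop, key=fitness)
```

The parsimony penalty is what makes the GA informative beyond accuracy: the surviving masks point to which force-signal features actually carry the fatigue signal.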


2021 ◽  
Vol 10 (3) ◽  
pp. 40
Author(s):  
Gilson Augusto Helfer ◽  
Jorge Luis Victória Barbosa ◽  
Douglas Alves ◽  
Adilson Ben da Costa ◽  
Marko Beko ◽  
...  

The present work proposes a low-cost portable device as an enabling technology for agriculture, using multispectral imaging and machine learning to analyse soil texture. Clay is an important factor for the verification and monitoring of soil use due to its fast reaction to chemical and surface changes. The system developed analyses reflectance in selected wavebands for clay prediction; each wavelength is selected through an LED lamp panel. A NoIR microcamera controlled by a Raspberry Pi device acquires the image and unfolds it into RGB histograms. Results showed good prediction performance, with an R2 of 0.96, an RMSEC of 3.66%, and an RMSECV of 16.87%. Its high portability allows the equipment to be used in the field, providing strategic information related to soil sciences.
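Unfolding an image into per-channel RGB histograms, as the device does before prediction, can be sketched as follows. The bin count and the toy image are illustrative; the paper does not specify this exact binning.

```python
# Sketch: turn an RGB image (nested list of (r, g, b) pixel tuples) into
# one histogram per channel, the feature representation fed to the model.

def rgb_histograms(image, bins=8):
    """Return one `bins`-bin histogram per channel for 8-bit RGB pixels."""
    width = 256 // bins
    hists = [[0] * bins for _ in range(3)]
    for row in image:
        for pixel in row:
            for channel, value in enumerate(pixel):
                hists[channel][min(value // width, bins - 1)] += 1
    return hists
```

Histograms discard pixel positions and keep only the colour distribution, which is appropriate here because clay reflectance under a given LED waveband is a bulk property of the sample, not a spatial pattern.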


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3127
Author(s):  
Giuseppe Loprencipe ◽  
Flavio Guilherme Vaz de Almeida Filho ◽  
Rafael Henrique de Oliveira ◽  
Salvatore Bruno

Road networks are monitored to evaluate their decay level and their performance regarding ride comfort, vehicle rolling noise, fuel consumption, etc. In this study, a novel inertial sensor-based system is proposed, using a low-cost inertial measurement unit (IMU) and a global positioning system (GPS) module connected to a Raspberry Pi Zero W board and embedded inside a vehicle to indirectly monitor the road condition. To assess the level of pavement decay, the comfort index awz defined by the ISO 2631 standard was used. Considering 21 km of roads with different levels of pavement decay, validation measurements were performed using the novel sensor, a high-performance inertial navigation sensor, and a road surface profiler. Comparisons were then made between the awz values determined from the accelerations measured by the two different inertial sensors; in addition, correlations between awz and typical pavement indicators, such as the international roughness index and the ride number, were computed. The results showed very good correlation between the awz values calculated with the two inertial devices (R2 = 0.98). The correlations between awz values and the typical pavement indices also showed promising results (R2 = 0.83–0.90). The proposed sensor may be considered a reliable and easy-to-install method of assessing pavement conditions in urban road networks, where the use of traditional systems is difficult and/or expensive.
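The comfort index awz boils down to a root-mean-square of frequency-weighted vertical accelerations. The sketch below is simplified: it assumes the samples are already frequency-weighted, whereas ISO 2631 additionally prescribes a specific weighting filter, and the comfort bands shown only loosely follow the standard's guidance.

```python
import math

# Simplified awz-style computation: RMS of (already frequency-weighted)
# vertical acceleration samples, in m/s^2. Sample values are illustrative.

def weighted_rms(accelerations):
    """Root-mean-square of weighted acceleration samples."""
    return math.sqrt(sum(a * a for a in accelerations) / len(accelerations))

def comfort_label(awz):
    """Coarse comfort bands loosely following ISO 2631 guidance (m/s^2)."""
    if awz < 0.315:
        return "not uncomfortable"
    if awz < 0.8:
        return "a little uncomfortable"
    return "uncomfortable"
```

Because the RMS grows with vibration severity, rougher pavement yields a larger awz, which is why the index correlates with roughness measures like the IRI in the study's results.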


2021 ◽  
Vol 4 (3) ◽  
pp. 40
Author(s):  
Abdul Majeed

During the ongoing pandemic of the novel coronavirus disease 2019 (COVID-19), the latest technologies, such as artificial intelligence (AI), blockchain, learning paradigms (machine, deep, smart, few-shot, extreme learning, etc.), high-performance computing (HPC), the Internet of Medical Things (IoMT), and Industry 4.0, have played a vital role. These technologies helped to contain the disease’s spread by predicting contaminated people/places, as well as forecasting future trends. In this article, we provide insights into the applications of machine learning (ML) and high-performance computing (HPC) in the era of COVID-19. We discuss the person-specific data being collected to limit the spread of COVID-19 and highlight the remarkable opportunities they provide for knowledge extraction leveraging low-cost ML and HPC techniques. We demonstrate the role of ML and HPC in the COVID-19 era through successful implementations or proposals in three contexts: (i) ML and HPC use in the data life cycle, (ii) ML and HPC use in analytics on COVID-19 data, and (iii) general-purpose applications of both techniques in COVID-19’s arena. In addition, we discuss privacy and security issues and the architecture of a prototype system to demonstrate the proposed research. Finally, we discuss the challenges of the available data and highlight the issues that hinder the applicability of ML and HPC solutions to it.


2021 ◽  
Vol 13 (17) ◽  
pp. 3479
Author(s):  
Maria Pia Del Rosso ◽  
Alessandro Sebastianelli ◽  
Dario Spiller ◽  
Pierre Philippe Mathieu ◽  
Silvia Liberata Ullo

In recent years, the growth of Machine Learning (ML) algorithms has increased the number of studies on their applicability in a variety of different scenarios. Among these, one of the hardest is aerospace, due to its peculiar physical requirements. In this context, this work presents a feasibility study with a prototype of an on-board Artificial Intelligence (AI) model, together with realistic testing equipment and a realistic scenario. As a case study, the detection of volcanic eruptions has been investigated, with the objective of swiftly producing alerts and allowing immediate interventions. Two Convolutional Neural Networks (CNNs) have been designed and realized from scratch, showing how to efficiently implement them to identify eruptions while adapting their complexity to fit on-board requirements. The CNNs are then tested with experimental hardware, by means of a drone with a payload composed of a generic processing unit (Raspberry Pi), an AI processing unit (Movidius stick), and a camera. The hardware employed to build the prototype is low cost and easy to find and use. Moreover, the dataset has been published on GitHub and made available to everyone. The results are promising and encourage the employment of the proposed system in future missions, given that ESA has already taken the first steps of on-board AI with the Phisat-1 satellite, launched in September 2020.
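The core building block of the CNNs described above is the 2-D convolution, which can be sketched in a dependency-free form. The input and kernel below are toy values; the paper's actual network architectures and weights are not reproduced here.

```python
# Minimal "valid"-padding 2-D convolution (cross-correlation) over nested
# lists: the fundamental operation a CNN layer applies to each image patch.

def conv2d(image, kernel):
    """Slide `kernel` over `image` and return the map of dot products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(
                image[i + u][j + v] * kernel[u][v]
                for u in range(kh)
                for v in range(kw)
            )
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]
```

A CNN stacks many such filtered maps with nonlinearities between them; the number and size of the filters are exactly the "complexity" knobs the authors adapt to fit on-board constraints like those of the Raspberry Pi and the Movidius stick.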

