People Detection and Localization in Real Time during Navigation of Autonomous Robots

Author(s): Percy W. Lovon-Ramos, Yessica Rosas-Cuevas, Claudia Cervantes-Jilaja, Maria Tejada-Begazo, Raquel E. Patino-Escarcina, ...

Author(s): M. G. Harinarayanan Nampoothiri, P. S. Godwin Anand, Rahul Antony

Agriculture, 2021, Vol 11 (10), pp. 954
Author(s): Abhijeet Ravankar, Ankit A. Ravankar, Arpit Rawankar, Yohei Hoshino

In recent years, autonomous robots have been used extensively to automate several vineyard tasks. Autonomous navigation is an indispensable component of such field robots. Autonomous, safe navigation has been well studied in indoor environments, and many algorithms have been proposed. However, unlike structured indoor environments, vineyards pose special challenges for robot navigation; in particular, safe navigation is crucial to avoid damaging the grapes. In this regard, we propose an algorithm that enables autonomous and safe robot navigation in vineyards. The proposed algorithm relies on data from a Lidar sensor and does not require GPS. In addition, it can avoid dynamic obstacles in the vineyard while smoothing the robot's trajectories. The curvature of the trajectories can be controlled, keeping a safe distance from both the crop and the dynamic obstacles. We have tested the algorithm both in simulation and with robots in an actual vineyard. The results show that the robot can safely navigate the lanes of the vineyard and smoothly avoid dynamic obstacles such as moving people without abruptly stopping or executing sharp turns. The algorithm runs in real time and can easily be integrated into robots deployed in vineyards.
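The abstract does not give the algorithm's details, but the general idea of GPS-free, lidar-based row following can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the authors' method: it steers a differential-drive robot so that the nearest clearance measured on the left of a 2-D lidar scan matches the one on the right, keeping the robot centred between the vine rows, and slows down as the commanded turn sharpens so curvature stays bounded. The function name, gains, and angle thresholds are all hypothetical.

```python
def row_following_command(ranges, angles, v_max=0.5, k_steer=1.2):
    """Toy proportional row-following step on a 2-D lidar scan.

    ranges: beam range readings (metres); angles: matching beam angles
    (radians, 0 = straight ahead, positive = left of the robot).
    Returns (v, omega): forward speed and turn rate. Illustrative only.
    """
    # Split the scan into beams looking at the left and right vine rows,
    # ignoring a narrow cone straight ahead.
    left = [r for r, a in zip(ranges, angles) if a > 0.2]
    right = [r for r, a in zip(ranges, angles) if a < -0.2]
    if not left or not right:
        return 0.0, 0.0  # cannot see both rows: stop rather than guess

    # Positive error -> more room on the left -> steer left, away from
    # the nearer (right) row.
    error = min(left) - min(right)
    omega = k_steer * error

    # Reduce forward speed on sharper turns so the trajectory stays smooth.
    v = v_max / (1.0 + abs(omega))
    return v, omega
```

In practice a scheme like this would be wrapped in obstacle checking and trajectory smoothing, which is where the paper's contribution lies.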


2017, Vol 70, pp. 422-435
Author(s): Ángel Manuel Guerrero-Higueras, Noemí DeCastro-García, Francisco Javier Rodríguez-Lera, Vicente Matellán

2021
Author(s): Daniele Berardini, Adriano Mancini, Primo Zingaretti, Sara Moccia

Abstract Nowadays, video surveillance plays a crucial role. Analyzing surveillance videos is, however, a time-consuming and tiresome procedure. In recent years, artificial intelligence has paved the way for automatic and accurate surveillance-video analysis. In parallel with the development of artificial-intelligence methodologies, edge computing has become an active field of research, with the final goal of providing cost-effective, real-time deployment of the developed methodologies. In this work, we present an edge artificial-intelligence application for video surveillance. Our approach relies on a set of four IP cameras, which acquire video frames that are processed on the edge using the NVIDIA® Jetson Nano. A state-of-the-art deep-learning model, the Single Shot multibox Detector (SSD) MobileNetV2 network, is used to perform object and people detection in real time. The proposed infrastructure achieved an inference speed of ∼10.0 Frames per Second (FPS) for each parallel video stream. These results suggest the possibility of translating our work into a real-world scenario. Integrating the presented application into a wider monitoring system with a central unit could benefit the overall infrastructure. Indeed, our application could send only video-related high-level information to the central unit, allowing it to combine that information with data coming from other sensing devices without needless data overload. This would ensure a fast response in case of emergency or detected anomalies. We hope this work will help stimulate research in the field of edge artificial intelligence for video surveillance.
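The key design point of the abstract, shipping only high-level events from the edge device instead of raw video, can be sketched as follows. This is an illustrative outline under stated assumptions, not the paper's implementation: `detect_stub` stands in for the SSD MobileNetV2 detector (the real model runs on the Jetson Nano), and the frame format, function names, and event schema are all hypothetical.

```python
import time

def detect_stub(frame):
    """Stand-in for the SSD MobileNetV2 detector (hypothetical interface):
    returns (label, confidence, box) tuples for one frame."""
    return frame.get("objects", [])

def summarize(camera_id, detections, min_conf=0.5):
    """Reduce raw detections to the compact, high-level event a central
    unit needs, instead of shipping the full video stream."""
    people = [d for d in detections if d[0] == "person" and d[1] >= min_conf]
    return {"camera": camera_id, "people": len(people), "ts": time.time()}

def process_streams(frames_by_camera):
    """Run detection on the latest frame from each IP camera and collect
    one small event per camera for transmission to the central unit."""
    return [summarize(cam, detect_stub(frame))
            for cam, frame in sorted(frames_by_camera.items())]
```

An event of a few dozen bytes per camera per frame replaces megabytes of raw video, which is what makes fusing the edge output with other sensing devices practical at the central unit.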

