Embedded Deep Learning for Ship Detection and Recognition

2019 ◽  
Vol 11 (2) ◽  
pp. 53 ◽  
Author(s):  
Hongwei Zhao ◽  
Weishan Zhang ◽  
Haoyun Sun ◽  
Bing Xue

Ship detection and recognition are important for the smart monitoring of ships, in order to manage port resources effectively. However, this is challenging due to complex ship profiles, ship backgrounds, object occlusion, variations in weather and lighting conditions, and other issues. It is also expensive to transmit monitoring video in full, especially when the port is in a remote area. In this paper, we propose an on-site processing approach called Embedded Ship Detection and Recognition using Deep Learning (ESDR-DL). In ESDR-DL, the video stream is processed on embedded devices, and we design a two-stage neural network named DCNet, composed of a DNet for ship detection and a CNet for ship recognition, that runs on embedded devices. We have extensively evaluated ESDR-DL, including its accuracy and efficiency. ESDR-DL has been deployed at the Dongying port in China, where it has been running for over a year, demonstrating that it works reliably in practical use.
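The two-stage structure of DCNet can be sketched as a detect-then-recognize pipeline. The sketch below is purely illustrative: the `detect_ships`/`recognize_ship` interfaces and their placeholder outputs are assumptions, not the paper's published API.

```python
# Hypothetical sketch of a two-stage detect-then-recognize pipeline,
# mirroring the DNet (detection) -> CNet (recognition) structure of DCNet.

def detect_ships(frame):
    """Stage 1 (DNet stand-in): return bounding boxes (x, y, w, h) of ships."""
    # Placeholder: a real DNet would run a detection CNN on the frame.
    return [(40, 60, 120, 80), (300, 50, 90, 70)]

def recognize_ship(frame, box):
    """Stage 2 (CNet stand-in): classify the cropped ship region."""
    # Placeholder: a real CNet would run a recognition CNN on the crop.
    x, y, w, h = box
    return "cargo" if w * h > 8000 else "fishing"

def process_frame(frame):
    """Run detection, then recognition on each detected region."""
    return [(box, recognize_ship(frame, box)) for box in detect_ships(frame)]

results = process_frame(frame=None)  # frame is unused in this stub
```

The key design point is that recognition only runs on the regions detection returns, which keeps per-frame compute bounded on an embedded device.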

Author(s):  
V. V. Kniaz ◽  
L. Grodzitskiy ◽  
V. A. Knyaz

Abstract. Coded targets are physical optical markers that can be easily identified in an image. Their detection is a critical step in the process of camera calibration. A wide range of coded targets has been developed to date; the targets differ in their decoding algorithms. The main limitation of existing methods is low robustness to new backgrounds and illumination conditions. Modern deep learning-based recognition algorithms demonstrate exciting progress in object detection performance in low-light conditions and new environments. This paper focuses on the development of a new deep convolutional network for automatic detection and recognition of coded targets and sub-pixel estimation of their centers.
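A classical baseline for sub-pixel center estimation is the intensity-weighted centroid of a target patch; a learned network would refine such an estimate. A minimal stdlib sketch (the 3×3 patch values are invented):

```python
# Sub-pixel center estimation via an intensity-weighted centroid, a common
# baseline for locating coded-target centers to sub-pixel precision.

def subpixel_center(patch):
    """patch: 2D list of pixel intensities.
    Returns the (row, col) intensity centroid, with sub-pixel precision."""
    total = sum(sum(row) for row in patch)
    r = sum(i * v for i, row in enumerate(patch) for v in row)
    c = sum(j * v for row in patch for j, v in enumerate(row))
    return r / total, c / total

# A bright spot slightly below and right of the 3x3 patch center:
patch = [[0, 1, 0],
         [1, 4, 2],
         [0, 2, 0]]
center = subpixel_center(patch)  # (1.1, 1.1): offset from the center pixel (1, 1)
```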


2019 ◽  
Vol 11 (24) ◽  
pp. 2997 ◽  
Author(s):  
Clément Dechesne ◽  
Sébastien Lefèvre ◽  
Rodolphe Vadaine ◽  
Guillaume Hajduch ◽  
Ronan Fablet

The monitoring and surveillance of maritime activities are critical issues in both military and civilian fields, including among others fisheries monitoring, maritime traffic surveillance, coastal and at-sea safety operations, and tactical situations. In operational contexts, ship detection and identification are traditionally performed by a human observer who identifies all kinds of ships from a visual analysis of remotely sensed images. Such a task is very time consuming and cannot be conducted at a very large scale, while Sentinel-1 SAR data now provide regular and worldwide coverage. Meanwhile, with the emergence of GPUs, deep learning methods are now established as state-of-the-art solutions for computer vision, replacing human intervention in many contexts. They have been shown to be well suited to ship detection, most often with very high resolution SAR or optical imagery. In this paper, we go one step further and investigate a deep neural network for the joint classification and characterization of ships from SAR Sentinel-1 data. We benefit from the synergies between AIS (Automatic Identification System) and Sentinel-1 data to build significant training datasets. We design a multi-task neural network architecture composed of one joint convolutional network connected to three task-specific networks, namely for ship detection, classification, and length estimation. The experimental assessment shows that our network provides promising results, with accurate classification and length estimation performance (classification overall accuracy: 97.25%, mean length error: 4.65 m ± 8.55 m).
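The two headline metrics quoted above are straightforward to compute from predictions. A small sketch with invented toy values (the paper's dataset is not reproduced here):

```python
# Sketch of the two evaluation metrics quoted above, computed on toy data:
# overall classification accuracy and mean absolute length error in metres.

def overall_accuracy(y_true, y_pred):
    """Fraction of class predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_length_error(l_true, l_pred):
    """Mean absolute error between true and predicted ship lengths."""
    errors = [abs(t - p) for t, p in zip(l_true, l_pred)]
    return sum(errors) / len(errors)

# Toy ground truth and predictions (illustrative values only):
classes_true = ["cargo", "tanker", "fishing", "cargo"]
classes_pred = ["cargo", "tanker", "cargo", "cargo"]
lengths_true = [180.0, 240.0, 25.0, 199.0]
lengths_pred = [176.0, 243.0, 30.0, 195.0]

acc = overall_accuracy(classes_true, classes_pred)   # 0.75
mle = mean_length_error(lengths_true, lengths_pred)  # 4.0 metres
```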


2007 ◽  
Vol 20 (1) ◽  
pp. 93-105
Author(s):  
Catalin-Daniel Caleanu ◽  
Corina Botoca

Key issues in using a new programming language, C#, to implement a face detection and recognition (FDR) system are presented. The following aspects are detailed in particular: how to acquire an image, broadcast a video stream, manipulate a database, and, finally, the detection/recognition phase, each in relation to its possible C#/.NET solutions. Emphasis is placed on artificial neural network (ANN) methods for face detection/recognition, along with a proposed object-oriented C# implementation.


Author(s):  
Tereza Paterova ◽  
Michal Prauzek

This article applies a deep learning approach to predict the next day's total solar energy with a neural network. Predicting future solar irradiance is an important topic in renewable energy generation, as it improves the performance and stability of a system. The forecast serves as a support parameter for controlling the operation duty-cycle, data collection, and communication activities of energy-autonomous, energy-harvesting embedded devices. The prediction is based on previously measured hourly atmospheric pressure values. For training the multilayer network, a back-propagation algorithm is used in combination with deep learning methods. The ability of the proposed system to estimate daily solar energy is compared to the support vector regression model and the evolutionary-fuzzy prediction scheme presented in previous research studies. The presented neural network approach gives satisfactory predictions in early spring, autumn, and winter. In a particular setting, the proposed solution provides better results than a model using support vector regression (e.g., the MAPE of the proposed algorithm is 0.032 lower than that of the support vector regression method). The time and computational complexity of neural network training are considerable, so the network is trained on an external computer or in the cloud, and only the resulting network parameters are transferred to the embedded devices.
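The MAPE figure used in that comparison can be computed as follows; the toy daily energy totals below are invented for illustration, not the study's data.

```python
# MAPE (mean absolute percentage error), the metric used above to compare
# the neural network against support vector regression (illustrative data).

def mape(actual, predicted):
    """Fractional MAPE: mean of |actual - predicted| / |actual|."""
    return sum(abs(a - p) / abs(a)
               for a, p in zip(actual, predicted)) / len(actual)

# Toy daily solar-energy totals (kWh/m^2); all values are made up.
actual   = [4.0, 5.0, 2.5, 4.0]
nn_pred  = [3.8, 5.2, 2.4, 4.1]  # hypothetical neural-network forecast
svr_pred = [3.5, 5.6, 2.2, 4.5]  # hypothetical SVR forecast

nn_err = mape(actual, nn_pred)    # ~0.039
svr_err = mape(actual, svr_pred)  # ~0.123, so the NN wins on this toy data
```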


2021 ◽  
Author(s):  
Hao Zheng ◽  
Jianfang Liu ◽  
Xiaogang Ren

Abstract Although current vehicle detection and recognition frameworks based on deep learning have their own characteristics and advantages, it is difficult to effectively combine multi-scale and multi-category vehicle features, and there is still room for improvement in vehicle detection and recognition performance. On this basis, an improved Faster R-CNN convolutional neural network is proposed to detect dim targets in complex traffic environments. The Faster R-CNN deep learning model is introduced into image recognition of complex traffic environments, and a structure optimization method is proposed that replaces VGG16 in Faster R-CNN with ResNet to make it suitable for small-target recognition against complex backgrounds. Max pooling is used as the down-sampling method, and a feature pyramid network (FPN) is then introduced into the region proposal network (RPN) to generate target candidate boxes, optimizing the structure of the convolutional neural network. After training with 1497 images, the complex traffic environment images are identified and tested.
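Candidate boxes from an RPN are typically scored against ground-truth boxes with intersection-over-union (IoU); this standard criterion (not code from the paper) can be sketched as:

```python
# Intersection-over-union (IoU), the standard criterion for matching the
# candidate boxes an RPN proposes against ground-truth vehicle boxes.

def iou(box_a, box_b):
    """Boxes are (x1, y1, x2, y2) corners with x1 < x2 and y1 < y2."""
    # Corners of the intersection rectangle:
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping in a 5x5 region: 25 / 175 = 1/7
score = iou((0, 0, 10, 10), (5, 5, 15, 15))
```

A candidate box is usually kept as a positive proposal when its IoU with some ground-truth box exceeds a threshold (often 0.5 or 0.7).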


2020 ◽  
Vol 39 (6) ◽  
pp. 8463-8475
Author(s):  
Palanivel Srinivasan ◽  
Manivannan Doraipandian

Rare event detection is performed using spatial-domain and frequency-domain procedures. The volume of footage from omnipresent surveillance cameras grows rapidly over time, and monitoring all events manually is an inefficient and time-consuming process; an automated rare event detection mechanism is therefore required to make it manageable. In this work, a Context-Free Grammar (CFG) is developed for detecting rare events in a video stream, and an Artificial Neural Network (ANN) is trained on the CFG representation. A set of dedicated algorithms performs frame splitting, edge detection, and background subtraction, and converts the processed data into a CFG. The developed CFG is converted into nodes and edges to form a graph, which is given to the input layer of the ANN to classify normal and rare event classes. The graph derived from the CFG of the input video stream is used to train the ANN. Further, the performance of the developed Artificial Neural Network-Based Context-Free Grammar Rare Event Detection (ACFG-RED) method is compared with existing techniques using performance metrics such as accuracy, precision, sensitivity, recall, average processing time, and average processing power. Better metric values are observed for the ANN-CFG model than for the other techniques. The developed model provides a better solution for detecting rare events in video streams.
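The CFG-to-graph step described above can be sketched in a minimal form: each production contributes edges from its left-hand symbol to the symbols it derives. The toy production set below is invented for illustration; the paper's actual grammar is not shown in this listing.

```python
# Sketch of turning grammar productions into a node/edge graph, as in the
# CFG-to-graph conversion step (production set is invented for illustration).

def cfg_to_graph(productions):
    """productions: dict mapping a symbol to the list of symbols it derives.
    Returns (nodes, edges), where edges are (lhs, rhs) pairs."""
    nodes = set(productions)
    edges = []
    for lhs, rhs_symbols in productions.items():
        for rhs in rhs_symbols:
            nodes.add(rhs)
            edges.append((lhs, rhs))
    return sorted(nodes), edges

# Toy productions for frame events: Scene -> Background Motion, Motion -> Edge
productions = {"Scene": ["Background", "Motion"], "Motion": ["Edge"]}
nodes, edges = cfg_to_graph(productions)
```

A graph in this node/edge form can then be flattened (e.g., as an adjacency vector) before being fed to the network's input layer.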


2020 ◽  
Vol 2 ◽  
pp. 58-61 ◽  
Author(s):  
Syed Junaid ◽  
Asad Saeed ◽  
Zeili Yang ◽  
Thomas Micic ◽  
Rajesh Botchu

The advances in deep learning algorithms, exponential growth in computing power, and unprecedented availability of digital patient data have led to a wave of interest and investment in artificial intelligence in health care. No radiology conference is complete without substantial content dedicated to AI. Many radiology departments are keen to get involved but are unsure of where and how to begin. This short article provides a simple road map to help departments get involved with the technology, demystify key concepts, and pique interest in the field. We have broken the journey down into seven steps: problem, team, data, kit, neural network, validation, and governance.

