Connected Smartphones and High-Performance Servers for Remote Object Detection

Author(s): Yuki Iida, Manato Hirabayashi, Takuya Azumi, Nobuhiko Nishio, Shinpei Kato
2020 ◽ Vol 219 (10)

Author(s): Dominic Waithe, Jill M. Brown, Katharina Reglinski, Isabel Diez-Sevilla, David Roberts, ...

Object detection networks are high-performance algorithms famously applied to identifying and localizing objects in photographic images. We demonstrate their application to the classification and localization of cells in fluorescence microscopy by benchmarking four leading object detection algorithms across multiple challenging 2D microscopy datasets. We also develop and demonstrate an algorithm that can localize and image cells in 3D, in close to real time, at the microscope using widely available and inexpensive hardware. Furthermore, we exploit the fast processing of these networks to build a simple and effective augmented reality (AR) system for fluorescence microscopes using a display screen and back-projection onto the eyepiece. We show that very high classification accuracy is achievable with datasets containing as few as 26 images. Using our approach, relatively unskilled users can automate the detection of cell classes with a variety of appearances, enabling new avenues for automating fluorescence microscopy acquisition pipelines.


2021 ◽ Vol 2021 ◽ pp. 1-15
Author(s): Zhichao Zhang, Hui Chen, Xiaoqing Yin, Jinsheng Deng

With the advance of high-performance image processing platforms and visual internet of things sensors, the VIOT is widely used in intelligent transportation, autonomous driving, military reconnaissance, public safety, and other fields. However, outdoor VIOT systems are very sensitive to weather and to the unbalanced scales of latent objects. The performance of supervised learning is often limited by disturbances from abnormal data, and it is difficult to cover all classes with a limited set of historical instances. Fast and accurate artificial intelligence-based object detection for such anomalous images has therefore become a research hot spot in the field of the intelligent vision internet of things. To this end, we propose an efficient and accurate deep learning framework for real-time, dense object detection in the VIOT, named the Edge Attention-wise Convolutional Neural Network (EAWNet), with three main features. First, it identifies remote aerial and everyday scenery objects quickly and accurately despite unbalanced categories. Second, it adopts an edge prior and rotated anchors to improve detection efficiency for edge computing. Third, EAWNet uses an edge-sensing object structure, makes full use of an attention mechanism to dynamically screen different kinds of objects, and performs target recognition at multiple scales. The edge recovery effect and detection performance for long-distance aerial objects are significantly improved. We explore the efficiency of various architectures and fine-tune the training process with various backbones and data augmentation strategies to increase the variety of the training data and overcome the size limitation of input images. Extensive experiments and comprehensive evaluation on the COCO and large-scale DOTA datasets prove the effectiveness of this framework, which achieves state-of-the-art performance in real-time VIOT object detection.
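The abstract does not detail EAWNet's attention mechanism, so as a generic illustration only, a squeeze-and-excitation-style channel gate is one common way a network can "dynamically screen" feature channels; the function and weight names below are hypothetical, not from the paper.

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style gate: pool each channel globally,
    pass the summary through a small bottleneck, and rescale channels
    by the resulting sigmoid gates."""
    squeezed = features.mean(axis=(1, 2))          # (C,) per-channel summary
    hidden = np.maximum(w1 @ squeezed, 0.0)        # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gates in (0, 1)
    return features * gates[:, None, None]         # reweight each channel

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3, 3))                 # toy (C, H, W) feature map
out = channel_attention(feats,
                        rng.normal(size=(2, 4)),   # squeeze weights (assumed shapes)
                        rng.normal(size=(4, 2)))   # excite weights
```

Because each gate lies in (0, 1), the output feature map has the same shape as the input with every channel attenuated according to its learned importance.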


2021 ◽ Vol 14 (1) ◽ pp. 45
Author(s): Subrahmanyam Vaddi, Dongyoun Kim, Chandan Kumar, Shafqat Shad, Ali Jannesari

Unmanned Aerial Vehicles (UAVs) equipped with vision capabilities have become popular in recent years. Many applications employ object detection techniques on the information captured by an onboard camera. However, object detection on UAVs demands high computational performance that onboard hardware struggles to provide, which negatively affects the results. In this article, we propose a deep feature pyramid architecture with a modified focal loss function, which reduces the effect of class imbalance. Moreover, the proposed method employs an end-to-end object detection model running on the UAV platform for real-time applications. To evaluate the proposed architecture, we combined our model with ResNet and MobileNet as backbone networks and compared it with RetinaNet and HAL-RetinaNet. Our model achieved 30.6 mAP at an inference speed of 14 fps. This result shows that our proposed model outperformed RetinaNet by 6.2 mAP.
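The abstract does not specify how the focal loss was modified; as a reference point, the standard focal loss of RetinaNet (Lin et al.) that such modifications build on can be sketched for a single binary prediction as follows (function and variable names are illustrative):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Standard focal loss for one binary prediction.

    p: predicted probability of the positive class; y: label in {0, 1}.
    The (1 - p_t)**gamma factor down-weights easy, well-classified
    examples so training focuses on hard (often rare-class) ones.
    """
    p_t = p if y == 1 else 1.0 - p              # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha  # class-balance weight
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

easy = focal_loss(0.95, 1)  # confident, correct prediction
hard = focal_loss(0.10, 1)  # badly misclassified positive
```

With gamma = 2, the confident prediction contributes a near-zero loss while the misclassified one retains a substantial loss, which is how the loss counteracts the dominance of easy background examples in dense detection.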


2019 ◽ Vol 7 (4) ◽ pp. 1152-1167
Author(s): Ashiq Anjum, Tariq Abdullah, M. Fahim Tariq, Yusuf Baltaci, Nick Antonopoulos
