Road Characteristics Detection Based on Joint Convolutional Neural Networks with Adaptive Squares

2021 ◽  
Vol 10 (6) ◽  
pp. 377
Author(s):  
Chiao-Ling Kuo ◽  
Ming-Hua Tsai

The importance of road characteristics has been highlighted, as they are fundamental structures established to support many transportation-relevant services. However, there is still considerable room for improvement in both the types and the performance of road characteristics detection. Exploiting the advantages of geographically tiled maps (high update rates, remarkable accessibility, and increasing availability), this paper proposes a novel, simple deep-learning-based approach, namely joint convolutional neural networks (CNNs) adopting adaptive squares with combination rules to detect road characteristics from roadmap tiles. The joint CNNs handle foreground/background image classification and the classification of road characteristic types from the resulting foreground images, raising detection accuracy. The adaptive squares with combination rules help focus efficiently on road characteristics, augmenting the ability to detect them and providing optimal detection results. Five types of road characteristics are exploited: crossroads, T-junctions, Y-junctions, corners, and curves; experimental results demonstrate outstanding performance on real data. The location and type of the exploited road characteristics are thus converted from human-readable to machine-readable form, and the results will benefit many applications such as feature point reminders, road condition reports, or alert detection for users, drivers, and even autonomous vehicles. We believe this approach will also open a new path for object detection and geospatial information extraction from valuable map tiles.
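The abstract does not spell out how the adaptive squares are formed; a common way to realise such a scheme is a quadtree-style recursive subdivision that keeps shrinking squares until each one tightly covers some foreground pixels. The sketch below is an illustrative assumption, not the paper's algorithm; the function name and parameters are hypothetical.

```python
def adaptive_squares(points, x, y, size, min_size=8):
    """Recursively subdivide a square until each sub-square either
    contains no foreground points or reaches min_size.
    Returns the list of (x, y, size) squares that contain points."""
    inside = [(px, py) for px, py in points
              if x <= px < x + size and y <= py < y + size]
    if not inside:
        return []
    if size <= min_size:
        return [(x, y, size)]
    half = size // 2
    squares = []
    for dx in (0, half):
        for dy in (0, half):
            squares.extend(adaptive_squares(inside, x + dx, y + dy,
                                            half, min_size))
    return squares

# One foreground point in a 32x32 tile collapses to a single 8x8 square.
print(adaptive_squares([(5, 5)], 0, 0, 32))  # -> [(0, 0, 8)]
```

Combination rules would then merge adjacent squares that cover the same characteristic before classification.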

2019 ◽  
Vol 11 (18) ◽  
pp. 2176 ◽  
Author(s):  
Chen ◽  
Zhong ◽  
Tan

Detecting objects in aerial images is a challenging task due to the multiple orientations and relatively small size of the objects. Although many traditional detection models achieve acceptable performance using an image pyramid and multiple templates in a sliding-window manner, such techniques are inefficient and costly. Recently, convolutional neural networks (CNNs) have been used successfully for object detection and have demonstrated performance considerably superior to that of traditional detection methods; however, this success has not yet been extended to aerial images. To overcome these problems, we propose a detection model based on two CNNs. One CNN is designed to propose many object-like regions, generated from feature maps at multiple scales and hierarchies together with orientation information. With this design, the positioning of small objects becomes more accurate, and the generated regions, carrying orientation information, are better suited to objects arranged at arbitrary orientations. The other CNN is designed for object recognition: it first extracts the features of each generated region and then makes the final decision. The results of extensive experiments on the vehicle detection in aerial imagery (VEDAI) and overhead imagery research data set (OIRDS) datasets indicate that the proposed model performs well in terms of both detection accuracy and detection speed.
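An orientation-aware region proposal is typically parameterised as a rotated box (centre, width, height, angle) rather than an axis-aligned one. The snippet below, a generic sketch rather than the paper's exact representation, converts such a box to its corner coordinates; the function name is an assumption.

```python
import math

def oriented_corners(cx, cy, w, h, angle_deg):
    """Corner points of an oriented region (cx, cy, w, h, angle),
    the rotation-aware box form suited to arbitrarily oriented vehicles."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = []
    for dx, dy in ((-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2)):
        # rotate the half-extent offsets, then translate to the centre
        corners.append((round(cx + dx * cos_a - dy * sin_a, 3),
                        round(cy + dx * sin_a + dy * cos_a, 3)))
    return corners

# Angle 0 recovers the ordinary axis-aligned bounding box.
print(oriented_corners(5, 5, 4, 2, 0))
# -> [(3.0, 4.0), (7.0, 4.0), (7.0, 6.0), (3.0, 6.0)]
```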


2020 ◽  
Vol 17 (9) ◽  
pp. 4364-4367
Author(s):  
Shreya Srinarasi ◽  
Seema Jahagirdar ◽  
Charan Renganathan ◽  
H. Mallika

The preliminary step in the navigation of unmanned vehicles is to detect and identify the horizon line. One way to locate the horizon and obstacles in an image is through a supervised semantic segmentation algorithm based on neural networks. Unmanned Aerial Vehicles (UAVs) are rapidly gaining prominence in military, commercial, and civilian applications. For the safe navigation of UAVs, accurate and efficient obstacle detection and avoidance are required. The positions of the horizon and obstacles can also be used for adjusting flight parameters and estimating altitude. They can likewise aid the navigation of Unmanned Ground Vehicles (UGVs): the part of the image above the horizon can be neglected to reduce processing time. Locating the horizon and identifying the various obstacles in an image can help minimize collisions and the high costs incurred when UAVs and UGVs fail. To achieve a robust and accurate system to aid the navigation of autonomous vehicles, the efficiency and accuracy of Convolutional Neural Networks (CNNs) and Recurrent CNNs (RCNNs) are analysed. Experiments show that the RCNN model classifies test images with higher accuracy.
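The UGV optimisation mentioned above, discarding everything above the horizon before further processing, amounts to a simple row crop once the horizon row index is known. A minimal sketch (function name and row-major image layout are assumptions, not from the paper):

```python
def crop_below_horizon(image, horizon_row):
    """Keep only rows at or below the horizon line, discarding sky
    pixels so downstream UGV processing touches fewer pixels."""
    return image[horizon_row:]

image = [[0] * 4 for _ in range(6)]    # dummy 6-row grayscale image
ground = crop_below_horizon(image, 2)  # horizon detected at row 2
print(len(ground))  # -> 4
```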


2021 ◽  
Author(s):  
Jason Munger ◽  
Carlos W. Morato

This project explores how raw image data obtained from AV cameras can provide a model with more spatial information than can be learned from simple RGB images alone. This paper leverages advances in deep neural networks to demonstrate steering angle prediction for autonomous vehicles through an end-to-end multi-channel CNN model using only the image data provided by an onboard camera. The image data are processed through existing neural networks to produce pixel segmentation and depth estimates, which are fed into a new neural network along with the raw input image to provide enhanced feature signals from the environment. Various input combinations of multi-channel CNNs are evaluated, and their effectiveness is compared to single-CNN networks using the individual data inputs. The model with the most accurate steering predictions is identified, and its performance is compared to previous neural networks.
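The multi-channel input described above can be pictured as stacking the per-pixel segmentation and depth maps onto the RGB channels before the network sees them. The sketch below illustrates that channel concatenation on plain nested lists; the function name and five-channel layout are assumptions for illustration, not the paper's exact pipeline.

```python
def stack_channels(rgb, depth, seg):
    """Stack RGB, depth, and segmentation maps into one H x W x 5
    input so the steering network sees all cues at once."""
    h, w = len(rgb), len(rgb[0])
    return [[rgb[i][j] + [depth[i][j], seg[i][j]] for j in range(w)]
            for i in range(h)]

rgb   = [[[10, 20, 30]]]   # 1x1 RGB image
depth = [[0.5]]            # per-pixel depth estimate
seg   = [[7]]              # per-pixel class label
x = stack_channels(rgb, depth, seg)
print(len(x[0][0]))  # -> 5 channels per pixel
```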


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Mundher Mohammed Taresh ◽  
Ningbo Zhu ◽  
Talal Ahmed Ali Ali ◽  
Asaad Shakir Hameed ◽  
Modhi Lafta Mutar

The novel coronavirus disease 2019 (COVID-19) is a contagious disease that has caused thousands of deaths and infected millions worldwide. Technologies that allow for the fast detection of COVID-19 infections with high accuracy can therefore offer healthcare professionals much-needed help. This study evaluates the effectiveness of state-of-the-art pretrained Convolutional Neural Networks (CNNs) for the automatic diagnosis of COVID-19 from chest X-rays (CXRs). The dataset used in the experiments consists of 1200 CXR images from individuals with COVID-19, 1345 CXR images from individuals with viral pneumonia, and 1341 CXR images from healthy individuals. In this paper, the effectiveness of artificial intelligence (AI) in the rapid and precise identification of COVID-19 from CXR images is explored using different pretrained deep learning models, each fine-tuned to maximize detection accuracy, in order to identify the best-performing algorithm. The results show that deep learning with X-ray imaging is useful for capturing critical biological markers associated with COVID-19 infections. VGG16 and MobileNet obtained the highest accuracy of 98.28%. However, VGG16 outperformed all other models in COVID-19 detection, with an accuracy, F1 score, precision, specificity, and sensitivity of 98.72%, 97.59%, 96.43%, 98.70%, and 98.78%, respectively. The outstanding performance of these pretrained models can significantly improve the speed and accuracy of COVID-19 diagnosis. However, a larger dataset of COVID-19 X-ray images is required for more accurate and reliable identification of COVID-19 infections with deep transfer learning. This would be extremely beneficial in this pandemic, when the disease burden and the need for preventive measures conflict with the currently available resources.
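The five metrics quoted for VGG16 all derive from a binary confusion matrix with COVID-19 as the positive class. The snippet below computes them on a toy confusion matrix (the counts are invented for illustration, not the paper's actual results).

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, precision, sensitivity, specificity, and F1 from a
    binary confusion matrix (COVID-19 = positive class)."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)     # a.k.a. recall
    specificity = tn / (tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, specificity, f1

# Toy counts, not the paper's data.
acc, prec, sens, spec, f1 = binary_metrics(tp=95, fp=5, tn=90, fn=10)
print(round(acc, 3), round(sens, 3))  # -> 0.925 0.905
```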


2021 ◽  
Vol 11 (12) ◽  
pp. 2907-2917
Author(s):  
P. V. Deepa ◽  
S. Joseph Jawhar ◽  
J. Merry Geisa

The field of nanotechnology has lately acquired prominence owing to the improved accuracy of diagnosis achieved with Computer-Aided Diagnosis (CAD). Nano-scale imaging enables a high level of precision and accuracy in determining whether a brain tumour is malignant or benign, which contributes to a better standard of living for people with brain tumours. In this study, we present a novel semantic nano-segmentation methodology for the nanoscale classification of brain tumours. The suggested Advanced Convolutional Neural Network (A-CNN)-based semantic nano-segmentation will aid radiologists in detecting brain tumours even when lesions are small. ResNet-50 is employed in the A-CNN approach. The input is a nano-image, and the tumour image is segmented using semantic nano-segmentation, which achieves average Dice and SSIM values of 0.9704 and 0.2133, respectively. The suggested semantic nano-segmentation achieves 93.2% and 92.7% accuracy for benign and malignant tumour images, respectively, while the A-CNN method attains correct-segmentation accuracies of 99.57% and 95.7% for malignant and benign images, respectively. This nano-method is designed to detect tumour areas in nanometres (nm) and hence assess the illness accurately. In terms of True Positive values, the ROC curve implies that the suggested technique outperforms earlier approaches. A comparative analysis of ResNet-50 is conducted with testing and training data split at rates of 90%–10%, 80%–20%, and 70%–30%, respectively, indicating the utility of the suggested work.
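The Dice value reported above is the standard overlap measure between a predicted mask and the ground truth. A minimal sketch over pixel-index sets (the set-based representation is an illustrative choice, not the paper's implementation):

```python
def dice(pred, truth):
    """Dice similarity between two binary masks given as sets of
    foreground pixel indices: 2|A∩B| / (|A|+|B|)."""
    pred, truth = set(pred), set(truth)
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Two overlapping toy masks sharing 3 of their 4 pixels each.
print(dice({1, 2, 3, 4}, {2, 3, 4, 5}))  # -> 0.75
```

A perfect segmentation gives 1.0, so the reported average of 0.9704 indicates very close agreement with the ground-truth masks.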


2018 ◽  
Vol 8 (4) ◽  
pp. 38 ◽  
Author(s):  
Arjun Pal Chowdhury ◽  
Pranav Kulkarni ◽  
Mahdi Nazm Bojnordi

Applications of neural networks have gained significant importance in embedded mobile devices and Internet of Things (IoT) nodes. In particular, convolutional neural networks have emerged as one of the most powerful techniques in computer vision, speech recognition, and AI applications that can improve the mobile user experience. However, satisfying all power and performance requirements of such low-power devices is a significant challenge. Recent work has shown that binarizing a neural network can significantly reduce the memory requirements of mobile devices at the cost of a minor loss in accuracy. This paper proposes MB-CNN, a memristive accelerator for binary convolutional neural networks that performs XNOR convolution in situ within novel 2R memristive data blocks to improve the power, performance, and memory requirements of embedded mobile devices. The proposed accelerator achieves at least 13.26×, 5.91×, and 3.18× improvements in system energy efficiency (computed as energy × delay) over state-of-the-art software, GPU, and PIM architectures, respectively. The proposed system architecture, which integrates the CPU, GPU, and MB-CNN, outperforms every other configuration in terms of system energy and execution time.
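The key arithmetic trick behind XNOR convolution is that a dot product of two {+1, -1} vectors reduces to XNOR followed by a popcount, which is what a memristive array can evaluate in place. The sketch below demonstrates the equivalence in plain Python; it illustrates the general binary-network identity, not MB-CNN's circuit.

```python
def binarize(v):
    """Map real weights/activations to {+1, -1} by sign."""
    return [1 if x >= 0 else -1 for x in v]

def xnor_dot(a, b):
    """Dot product of two {+1, -1} vectors via XNOR + popcount:
    matches = popcount(XNOR(a, b)); dot = 2 * matches - n."""
    matches = sum(1 for x, y in zip(a, b) if x == y)  # XNOR popcount
    return 2 * matches - len(a)

a = binarize([0.3, -1.2, 0.7, -0.1])
b = binarize([0.5, 0.4, -0.9, -0.3])
# The bitwise form agrees with the ordinary multiply-accumulate.
assert xnor_dot(a, b) == sum(x * y for x, y in zip(a, b))
print(xnor_dot(a, b))  # -> 0
```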


2020 ◽  
Vol 7 (1) ◽  
pp. 82-95 ◽  
Author(s):  
Parham M. Kebria ◽  
Abbas Khosravi ◽  
Syed Moshfeq Salaken ◽  
Saeid Nahavandi

2021 ◽  
Author(s):  
Gregory Rutkowski ◽  
Ilgar Azizov ◽  
Evan Unmann ◽  
Marcin Dudek ◽  
Brian Arthur Grimes

As the complexity of microfluidic experiments and the associated image data volumes scale, traditional feature extraction approaches begin to struggle with both detection and analysis pipeline throughput. Deep neural networks trained to detect certain objects are rapidly emerging as data-gathering tools that can match or outperform the analysis capabilities of the conventional methods used in microfluidic emulsion science. We demonstrate that various convolutional neural networks can be trained and used as droplet detectors in a wide variety of microfluidic systems. A generalized microfluidic droplet training and validation dataset was developed and used to tune two versions of the You Only Look Once model (YOLOv3/YOLOv5) as well as Faster R-CNN. Each model was used to detect droplets in mono- and polydisperse flow cell systems. The detection accuracy of each model shows excellent statistical agreement with an implementation of the Hough transform as well as relevant ImageJ plugins. The models were also used successfully as droplet detectors on non-microfluidic micrographs that were not included in the training set. The models outperformed the traditional methods in more complex, porous-media-simulating chip architectures, with a significant speedup in per-frame analysis times. Implementing these neural networks as the primary detectors in such microfluidic systems not only makes data pipelining more efficient but also opens the door to live detection and the development of autonomous microfluidic experimental platforms.
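Comparing detector output against Hough-transform or ImageJ results is usually done by matching boxes via intersection-over-union (IoU). The sketch below shows the standard IoU computation for axis-aligned boxes; it is the generic evaluation primitive, not code from this paper.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes, the
    usual criterion for matching detections against ground truth."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping in a 5x5 patch: 25 / 175 = 1/7.
print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 4))  # -> 0.1429
```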


2021 ◽  
Vol 15 ◽  
Author(s):  
Guoqiang Chen ◽  
Bingxin Bai ◽  
Hongpeng Zhou ◽  
Mengchao Liu ◽  
Huailong Yi

Background: The study of facemask detection is of great significance because facemask detection is difficult and the workload is heavy in crowded places during the COVID-19 outbreak. Objective: The study aims to explore new deep learning networks that can accurately detect facemasks and to improve the network's ability to extract multi-level features and contextual information. In addition, the proposed network effectively avoids interference from mask-like objects. The new network can ultimately detect mask wearers in a crowd. Method: A Multi-stage Feature Fusion Block (MFFB) and a Detector Cascade Block (DCB) are proposed and connected to the deep learning network for facemask detection, improving the network's ability to obtain information. The proposed network is a Double Convolutional Neural Network (CNN) called DCNN, which can fuse mask features and face position information. During facemask detection, the network extracts the feature information of the object and then inputs it into the data fusion layer. Results: The experimental results show that the proposed network can detect masks and faces in a complex environment and in dense crowds. The detection accuracy of the network improves effectively, and the real-time performance of the detection model is excellent. Conclusion: The two branch networks of the DCNN can effectively obtain the feature and position information of facemasks. The network overcomes the disadvantage that a single CNN is susceptible to interference from objects that resemble masks. Verification shows that the MFFB and the DCB improve the network's ability to obtain object information, and the proposed DCNN achieves excellent detection performance.
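One way to picture the fusion of mask features with face position information is a filtering step that keeps a mask detection only when it coincides with a detected face, which suppresses mask-like objects elsewhere in the scene. The sketch below is a rough stand-in for the paper's two-branch fusion; the centre-in-box rule and function name are assumptions.

```python
def fuse(face_boxes, mask_boxes):
    """Keep a mask detection only if its centre falls inside some
    face box, suppressing mask-like objects away from any face."""
    kept = []
    for mx1, my1, mx2, my2 in mask_boxes:
        cx, cy = (mx1 + mx2) / 2, (my1 + my2) / 2
        if any(fx1 <= cx <= fx2 and fy1 <= cy <= fy2
               for fx1, fy1, fx2, fy2 in face_boxes):
            kept.append((mx1, my1, mx2, my2))
    return kept

faces = [(0, 0, 10, 10)]
masks = [(2, 4, 8, 9), (50, 50, 60, 60)]  # second box: stray mask-like object
print(fuse(faces, masks))  # -> [(2, 4, 8, 9)]
```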

