Visibility Enhancement of Scene Images Degraded by Foggy Weather Conditions with Deep Neural Networks

2016, Vol. 2016, pp. 1-9
Author(s): Farhan Hussain, Jechang Jeong

Many camera-based advanced driver assistance systems (ADAS) have been introduced to assist drivers and ensure their safety under various driving conditions. One problem drivers face is faded scene visibility and reduced contrast when driving in fog. In this paper, we present a novel approach to this problem that employs deep neural networks. We assume that the fog in an image can be mathematically modeled by an unknown complex function, and we use a deep neural network to approximate the corresponding fog model. The advantages of our technique are (i) real-time operation and (ii) minimal input requirements: it operates on a single image and exhibits robustness/generalization to various unseen image data. Experiments carried out on various synthetic images indicate that the proposed technique can approximate the corresponding fog function reasonably well and remove it for better visibility and safety.
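As a rough illustration of this regression view of defogging, the following sketch (PyTorch) trains a small CNN on synthetic (foggy, clear) pairs so that the network learns to approximate and invert the fog function. The architecture, layer widths, and the atmospheric-scattering constants used to synthesize the fog are illustrative assumptions, not the paper's actual model.

```python
# A minimal sketch (not the authors' architecture): a small CNN learns to
# approximate the unknown fog function from synthetic (foggy, clear) pairs.
import torch
import torch.nn as nn

class FogRemovalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),  # restored RGB
        )

    def forward(self, foggy):
        return self.net(foggy)

model = FogRemovalNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic training pair: fog modelled as I = J*t + A*(1 - t)
# (atmospheric scattering); t and A values are illustrative.
clear = torch.rand(8, 3, 64, 64)          # ground-truth clear patches J
t, A = 0.6, 0.9                           # transmission and airlight
foggy = clear * t + A * (1.0 - t)         # degraded input I

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(foggy), clear)
loss.backward()
optimizer.step()
```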

Author(s):  
Haitham Baomar ◽  
Peter J. Bentley

Abstract We describe the Intelligent Autopilot System (IAS), a fully autonomous autopilot capable of piloting large jets such as airliners by learning from experienced human pilots using Artificial Neural Networks. The IAS autonomously executes the required piloting tasks and handles the different flight phases needed to fly an aircraft from one airport to another in simulation, including takeoff, climb, cruise, navigation, descent, approach, and landing. In addition, the IAS can autonomously land large jets in extreme weather conditions, including severe crosswind, gust, wind shear, and turbulence. The IAS is a potential solution to the limitations and robustness problems of modern autopilots: the inability to execute complete flights, the inability to handle extreme weather conditions (especially during approach and landing, where the aircraft's speed is relatively low and the uncertainty factor is high), and the shortage of pilots relative to increasing aircraft demand. In this paper, we present work done in collaboration with the aviation industry to provide training data for the IAS to learn from. The training data are used by Artificial Neural Networks to generate control models automatically. The control models imitate the skills of a human pilot when executing all the piloting tasks required to fly an aircraft between two airports. In addition, we introduce new ANNs trained to control the aircraft's elevators, elevator trim, throttle, and flaps, and new aileron and rudder ANNs to counter the effects of extreme weather conditions and land safely. Experiments show that small datasets containing single demonstrations are sufficient to train the IAS and achieve excellent performance, using clearly separable and traceable neural network modules that avoid the black-box problem of large Artificial Intelligence methods such as Deep Learning. Experiments also show that the IAS can handle landing in extreme weather conditions beyond the capabilities of modern autopilots and even experienced human pilots. The proposed IAS is a novel approach towards achieving full control autonomy of large jets using ANN models that match, and go beyond, the skills and abilities of experienced human pilots.
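A minimal behavioural-cloning sketch in the spirit of the IAS is given below: one small, separately traceable ANN per control surface, trained by supervised learning on a single logged demonstration. The state variables, network sizes, and demonstration values are illustrative assumptions rather than the IAS specification.

```python
# One small, traceable ANN per control surface, cloned from a logged
# human demonstration (dummy values; state/control names are assumptions).
import torch
import torch.nn as nn

class ElevatorModel(nn.Module):
    """Maps a flight-state vector (e.g. pitch, altitude error, airspeed)
    to one control output (elevator deflection in [-1, 1])."""
    def __init__(self, state_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 16), nn.Tanh(),
            nn.Linear(16, 1), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)

model = ElevatorModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

# A single short demonstration: rows of [pitch, altitude_error, airspeed]
# with the pilot's elevator command as the regression target.
states = torch.tensor([[0.02, -15.0, 140.0], [0.05, -10.0, 138.0]])
pilot_cmds = torch.tensor([[0.10], [0.15]])

for _ in range(200):                      # tiny dataset, many epochs
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(states), pilot_cmds)
    loss.backward()
    optimizer.step()
```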


2019, Vol. 20 (1)
Author(s): Fuyong Xing, Yuanpu Xie, Xiaoshuang Shi, Pingjun Chen, Zizhao Zhang, ...

Abstract
Background: Nucleus or cell detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation, and tracking. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored to specific datasets, and their applicability to other microscopy image data remains unknown. Some existing studies casually learn and evaluate deep neural networks on multiple microscopy datasets, but several critical, open questions remain to be addressed.
Results: We analyze the applicability of deep models to nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets covering 23 organs (or cancer diseases) and coming from multiple institutions. We demonstrate that, for a specific target dataset, training with images from the same types of organs is usually necessary for nucleus detection. Although images can be visually similar due to the same staining technique and imaging protocol, deep models learned from images of different organs may not deliver desirable results and may require fine-tuning to be on a par with models trained on target data. We also observe that training with a mixture of target and non-target data does not always yield higher nucleus-detection accuracy; proper data manipulation during model training may be required to achieve good performance.
Conclusions: We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report several significant findings, some of which have not been reported in previous studies. The model performance analysis and observations should be helpful for nucleus detection in microscopy images.
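The following sketch shows the general shape of such an end-to-end, pixel-to-pixel fully convolutional regression model: the network outputs a nucleus-proximity map whose thresholded local maxima are taken as detections. Depth, widths, and the detection threshold are illustrative assumptions, not the evaluated model.

```python
# A fully convolutional regression sketch: image in, proximity map out;
# nuclei are detected as thresholded local maxima of the map.
import torch
import torch.nn as nn

class FCNRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # proximity map in [0,1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FCNRegressor()
image = torch.rand(1, 3, 128, 128)      # a stained-tissue tile
proximity = model(image)                # (1, 1, 128, 128)

# Detection: non-maximum suppression via max-pooling, then thresholding.
pooled = nn.functional.max_pool2d(proximity, 5, stride=1, padding=2)
centres = (proximity == pooled) & (proximity > 0.5)
```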


Energies, 2018, Vol. 11 (8), pp. 2100
Author(s): Rosario Miceli, Giuseppe Schettino, Fabio Viola

In this paper, a novel approach to low-order harmonic mitigation under fundamental switching-frequency modulation is proposed for high-power photovoltaic (PV) applications, without solving the cumbersome non-linear transcendental equations. The proposed method mitigates the first five low-order odd harmonics (third, fifth, seventh, ninth, and eleventh), reduces the complexity of the required procedure, and allocates few computational resources on the Field Programmable Gate Array (FPGA)-based control board. The voltage waveform considered therefore differs from the traditional voltage waveform. The concept known as "voltage cancelation", used for single-phase cascaded H-bridge inverters, has been applied to a single-phase five-level cascaded H-bridge multilevel inverter (CHBMI). Through a very basic methodology, the polynomial equations that drive the control angles were derived for a single-phase five-level CHBMI. The derived polynomial equations were implemented in a digital system for real-time operation. The paper presents a preliminary analysis in a simulation environment and its experimental validation.
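The sketch below illustrates the offline-solve-then-fit workflow that underlies this kind of method, using the classic two-angle selective-harmonic-elimination equations for a five-level CHB rather than the authors' modified "voltage cancelation" waveform: the transcendental equations are solved once offline, and low-order polynomials alpha(m) are fitted so that the FPGA control board only evaluates polynomials at run time.

```python
# Offline step: solve the classic two-angle SHE equations over a sweep of
# modulation indices m (real solutions exist here for m <= sqrt(3)/2).
import numpy as np
from scipy.optimize import fsolve

def she_equations(angles, m):
    a1, a2 = angles
    return [np.cos(a1) + np.cos(a2) - 2.0 * m,   # set the fundamental amplitude
            np.cos(3 * a1) + np.cos(3 * a2)]     # cancel the 3rd harmonic

ms = np.linspace(0.5, 0.85, 15)
solutions = np.array([fsolve(she_equations, x0=[0.3, 1.0], args=(m,)) for m in ms])

# Fit one cubic polynomial per control angle: these few coefficients are all
# a resource-constrained FPGA control board needs to store and evaluate.
p1 = np.polyfit(ms, solutions[:, 0], 3)
p2 = np.polyfit(ms, solutions[:, 1], 3)

# Run time: two polynomial evaluations replace the transcendental solve.
m = 0.8
alpha1, alpha2 = np.polyval(p1, m), np.polyval(p2, m)
```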


2019, Vol. 9 (1)
Author(s): Shazia Akbar, Mohammad Peikari, Sherine Salama, Azadeh Yazdan Panah, Sharon Nofech-Mozes, ...

Abstract The residual cancer burden index is an important quantitative measure used for assessing treatment response following neoadjuvant therapy for breast cancer. It has been shown to be predictive of overall survival and is composed of two key metrics: a qualitative assessment of lymph nodes and the percentage of invasive or in situ tumour cellularity (TC) in the tumour bed (TB). Currently, TC is assessed by eyeballing routine histopathology slides to estimate the proportion of tumour cells within the TB. With advances in the production of digitized slides and the increasing availability of slide scanners in pathology laboratories, there is potential to measure TC using automated algorithms with greater precision and accuracy. We describe two methods for automated TC scoring: (1) a traditional approach to image-analysis development, whereby we mimic the pathologists' workflow, and (2) a recent development in artificial intelligence in which features are learned automatically by deep neural networks using image data alone. We show strong agreement between automated and manual analysis of digital slides. Agreement between our trained deep neural networks and experts in this study (0.82) approaches the inter-rater agreement between pathologists (0.89). We also reveal properties that are captured when we apply deep neural networks to whole-slide images, and discuss the potential of using such visualisations to improve TC assessment in the future.
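As a sketch of the learned (second) route, the code below regresses a cellularity fraction in [0, 1] directly from a tumour-bed patch with a small CNN; a slide-level TC score would then aggregate patch predictions over the tumour bed. Architecture and patch size are illustrative assumptions, not the networks trained in this study.

```python
# Patch-level tumour-cellularity regression: CNN features, sigmoid head.
import torch
import torch.nn as nn

class CellularityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, patch):
        return self.head(self.features(patch))   # predicted TC fraction

model = CellularityNet()
patch = torch.rand(1, 3, 256, 256)               # one tumour-bed patch
tc_score = model(patch).item()                   # in [0, 1]
```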


2018, Vol. 15 (9), pp. 1451-1455
Author(s): Grant J. Scott, Kyle C. Hagan, Richard A. Marcum, James Alex Hurt, Derek T. Anderson, ...

Sensors, 2020, Vol. 20 (3), pp. 612
Author(s): Eldar Šabanovič, Vidas Žuraulis, Olegas Prentkovskis, Viktor Skrickij

Modern vehicles have advanced driver-assistance systems that improve vehicle safety and save the lives of drivers, passengers, and pedestrians. Identifying the road-surface type and condition in real time using a video image sensor can increase the effectiveness of such systems significantly, especially when adapted for braking and stability-related solutions. This paper contributes to the development of a new, efficient engineering solution aimed at improving vehicle dynamics control via the anti-lock braking system (ABS) by estimating the friction coefficient from video data. Experimental research on three different road-surface types in dry and wet conditions was carried out, and braking performance was established with a mathematical model (MM) of the car. Testing of a deep neural network (DNN)-based road-surface and condition classification algorithm revealed that this is the most promising approach for the task. The research shows that the proposed solution increases the performance of an ABS with a rule-based control strategy.
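A sketch of such a video-based friction pipeline follows: a DNN classifies the road surface and condition from a camera frame, and a lookup table maps the predicted class to a friction-coefficient estimate handed to the rule-based ABS logic. The class set, network, and mu values are illustrative assumptions, not the paper's measured data.

```python
# Camera frame -> surface/condition class -> friction-coefficient estimate.
import torch
import torch.nn as nn

CLASSES = ["asphalt_dry", "asphalt_wet", "cobble_dry",
           "cobble_wet", "gravel_dry", "gravel_wet"]
MU_TABLE = {"asphalt_dry": 0.9, "asphalt_wet": 0.6, "cobble_dry": 0.7,
            "cobble_wet": 0.4, "gravel_dry": 0.6, "gravel_wet": 0.4}

classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, len(CLASSES)),
)

frame = torch.rand(1, 3, 128, 128)                  # one camera frame
label = CLASSES[classifier(frame).argmax(dim=1).item()]
mu_estimate = MU_TABLE[label]                       # handed to the ABS rules
```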


2019
Author(s): David Beniaguev, Idan Segev, Michael London

Abstract We introduce a novel approach to studying neurons as sophisticated I/O information-processing units by utilizing recent advances in machine learning. We trained deep neural networks (DNNs) to mimic the I/O behavior of a detailed nonlinear model of a layer 5 cortical pyramidal cell receiving rich spatio-temporal patterns of input synapse activations. A temporally convolutional DNN (TCN) with seven layers was required to capture the I/O of this neuron accurately, and very efficiently, at millisecond resolution. This complexity arises primarily from local NMDA-based nonlinear dendritic conductances. The weight matrices of the DNN provide new insights into the I/O function of cortical pyramidal neurons, and the approach presented here can provide a systematic characterization of the functional complexity of different neuron types. Our results demonstrate that cortical neurons can be conceptualized as multi-layered "deep" processing units, implying that the cortical networks they form have a non-classical architecture and are potentially more computationally powerful than previously assumed.
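The sketch below captures the shape of this approach: a stack of seven causal 1-D convolutions maps a raster of presynaptic activations (synapses x time, 1 ms bins) to a per-bin output spike probability. The layer count echoes the reported depth; channel widths, kernel length, and input statistics are illustrative assumptions.

```python
# Seven-layer causal TCN: synaptic raster in, spike probability per ms out.
import torch
import torch.nn as nn

n_synapses, hidden, kernel = 128, 64, 35
layers, channels = [], n_synapses
for i in range(7):
    out = 1 if i == 6 else hidden
    layers += [nn.ConstantPad1d((kernel - 1, 0), 0.0),   # causal (left) padding
               nn.Conv1d(channels, out, kernel)]
    layers += [nn.ReLU()] if i < 6 else [nn.Sigmoid()]
    channels = out
tcn = nn.Sequential(*layers)

raster = (torch.rand(1, n_synapses, 1000) < 0.01).float()  # 1 s of inputs, 1 ms bins
spike_prob = tcn(raster)                                   # (1, 1, 1000)
```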


2021
Author(s): Jason Munger, Carlos W. Morato

This project explores how raw image data obtained from AV cameras can provide a model with more spatial information than can be learned from simple RGB images alone. The paper leverages advances in deep neural networks to demonstrate steering-angle prediction for autonomous vehicles through an end-to-end multi-channel CNN model using only the image data provided by an onboard camera. The image data are processed through existing neural networks to produce pixel-segmentation and depth estimates, which are input to a new neural network along with the raw image to provide enhanced feature signals from the environment. Various input combinations for the multi-channel CNN are evaluated, and their effectiveness is compared to single-CNN networks using the individual data inputs. The model with the most accurate steering predictions is identified, and its performance is compared to previous neural networks.
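A sketch of the multi-channel idea follows: depth and segmentation maps predicted by existing networks are stacked with the raw RGB frame, and a single CNN regresses the steering angle from the combined tensor. The channel layout and layer sizes are illustrative assumptions, not the evaluated models.

```python
# Stack RGB + depth + segmentation into one tensor; regress steering angle.
import torch
import torch.nn as nn

rgb = torch.rand(1, 3, 66, 200)          # raw camera frame
depth = torch.rand(1, 1, 66, 200)        # from a pretrained depth estimator
seg = torch.rand(1, 1, 66, 200)          # from a pretrained segmentation net
x = torch.cat([rgb, depth, seg], dim=1)  # 5-channel input

steer_net = nn.Sequential(
    nn.Conv2d(5, 24, 5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
    nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(100), nn.ReLU(),
    nn.Linear(100, 1),                   # predicted steering angle
)
steering_angle = steer_net(x)
```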


2018
Author(s): Titus Josef Brinker, Achim Hekler, Christof von Kalle

BACKGROUND: In recent months, multiple publications have demonstrated the use of convolutional neural networks (CNNs) to classify images of skin cancer as precisely as dermatologists. These CNNs failed to outperform the winner of the International Symposium on Biomedical Imaging (ISBI) 2016 challenge in terms of average precision, however, so the technical progress represented by these studies is limited. In addition, the available reports are difficult to reproduce due to incomplete descriptions of training procedures and the use of proprietary image databases. These factors prevent a like-for-like comparison of the various CNN classifiers.
OBJECTIVE: To demonstrate the training of an image-classifier CNN that outperforms the winner of the ISBI 2016 challenge using open-source images exclusively.
METHODS: A detailed description of the training procedure is reported, and the images and test sets used are disclosed fully to ensure the reproducibility of our work.
RESULTS: Our CNN classifier outperforms all recent attempts to classify the original ISBI 2016 challenge test data (full set of 379 test images), with an average precision of 0.709 (vs. 0.637 for the ISBI winner) and an area under the receiver operating characteristic curve of 0.85.
CONCLUSIONS: This work illustrates the potential for improving skin-cancer classification with enhanced training procedures for CNNs, while avoiding the use of costly equipment or proprietary image data.
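The reported evaluation protocol can be reproduced with standard tooling; the sketch below scores dummy melanoma probabilities against dummy labels for a 379-image test set using the paper's two headline metrics, average precision and area under the ROC curve. The arrays are placeholders, not the study's data.

```python
# Score classifier outputs with average precision and ROC AUC (scikit-learn).
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=379)    # ground-truth labels, 379 test images
y_score = rng.random(379)                # CNN melanoma probabilities

ap = average_precision_score(y_true, y_score)
auc = roc_auc_score(y_true, y_score)
print(f"average precision={ap:.3f}, ROC AUC={auc:.3f}")
```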

