The development of synthetic thermal image generation tools and training data at FLIR

Author(s):  
Arthur Stout ◽  
Kedar Madineni ◽  
Louis Tremblay ◽  
Zachary Tane
2019 ◽  
Vol 12 (2) ◽  
pp. 120-127 ◽  
Author(s):  
Wael Farag

Background: In this paper, a Convolutional Neural Network (CNN) that learns safe driving behavior and smooth steering manoeuvring is proposed as an enabler of autonomous driving technologies. The training data are collected from a front-facing camera and the steering commands issued by an experienced driver driving in traffic as well as on urban roads. Methods: These data are then used to train the proposed CNN to perform what is called “Behavioral Cloning”. The proposed behavior-cloning CNN is named “BCNet”, and its deep seventeen-layer architecture was selected after extensive trials. BCNet was trained using the Adam optimization algorithm, a variant of the Stochastic Gradient Descent (SGD) technique. Results: The paper describes the development and training process in detail and shows the image-processing pipeline harnessed in the development. Conclusion: Extensive simulations showed that the proposed approach successfully clones the driving behavior embedded in the training data set.
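The Adam update mentioned in the abstract can be sketched in a few lines of NumPy. This is a generic illustration of the optimizer, not BCNet itself; the quadratic "steering-error" loss and all hyperparameter values below are illustrative stand-ins:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    # Adam keeps exponential moving averages of the gradient (m) and
    # squared gradient (v), with bias correction for early steps.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy loss L(w) = ||w - target||^2 standing in for the steering regression.
target = np.array([0.5, -0.25])
w = np.zeros(2)
m, v = np.zeros(2), np.zeros(2)
for t in range(1, 2001):
    grad = 2.0 * (w - target)
    w, m, v = adam_step(w, grad, m, v, t)
```

In a real training loop the gradient would come from backpropagation through the seventeen-layer network rather than from a closed-form loss.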


2021 ◽  
Vol 7 (3) ◽  
pp. 59
Author(s):  
Yohanna Rodriguez-Ortega ◽  
Dora M. Ballesteros ◽  
Diego Renza

With the exponential growth of high-quality fake images on social networks and media, recognition algorithms for this type of content are needed. One of the most common types of image and video editing consists of duplicating areas of the image, known as the copy-move technique. Traditional image-processing approaches manually look for patterns related to the duplicated content, limiting their use in mass data classification. In contrast, approaches based on deep learning have shown better performance and promising results, but they present generalization problems, with a high dependence on the training data and the need for appropriate selection of hyperparameters. To overcome this, we propose two deep learning approaches: a model with a custom architecture and a model based on transfer learning. In each case, the impact of network depth is analyzed in terms of precision (P), recall (R), and F1 score. Additionally, the problem of generalization is addressed with images from eight different open-access datasets. Finally, the models are compared in terms of evaluation metrics and of training and inference times. The transfer-learning model based on VGG-16 achieves metrics about 10% higher than the custom-architecture model; however, it requires approximately twice as much inference time.
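The precision, recall, and F1 metrics used to compare the two models can be computed as below. The toy labels are illustrative only, with the positive class standing for "copy-move forged":

```python
import numpy as np

def prf1(y_true, y_pred):
    # Binary detection metrics: positive class = forged image.
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Made-up predictions over eight images.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
p, r, f1 = prf1(y_true, y_pred)
```

F1 is the harmonic mean of P and R, so it penalizes a detector that trades one heavily for the other.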


2019 ◽  
Author(s):  
Gabriel Loewinger ◽  
Prasad Patil ◽  
Kenneth Kishida ◽  
Giovanni Parmigiani

Prediction settings with multiple studies have become increasingly common. Ensembling models trained on individual studies has been shown to improve replicability in new studies. Motivated by a groundbreaking new technology in human neuroscience, we introduce two generalizations of multi-study ensemble predictions. First, while existing methods weight ensemble elements by cross-study prediction performance, we extend weighting schemes to also incorporate covariate similarity between training data and target validation studies. Second, we introduce a hierarchical resampling scheme to generate pseudo-study replicates (“study straps”) and ensemble classifiers trained on these rather than on the original studies themselves. We demonstrate analytically that existing methods are special cases. Through a tuning parameter, our approach forms a continuum between merging all training data and training with existing multi-study ensembles. Leveraging this continuum helps accommodate different levels of between-study heterogeneity. Our methods are motivated by the application of voltammetry in humans. This technique records electrical brain measurements and converts signals into neurotransmitter concentration estimates using a prediction model. Using this model in practice presents a cross-study challenge, for which we show marked improvements after application of our methods. We verify our methods in simulations and provide the studyStrap R package.
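The hierarchical resampling behind a "study strap" can be sketched as follows. This is a schematic NumPy illustration of two-level resampling, not the studyStrap package's implementation: studies are drawn with replacement, and observations are then resampled within each drawn study:

```python
import numpy as np

rng = np.random.default_rng(0)

def study_straps(studies, n_straps):
    """Generate pseudo-study replicates by two-level resampling."""
    straps = []
    for _ in range(n_straps):
        # Level 1: draw studies with replacement ("which studies
        # make up this pseudo-study").
        bag = rng.choice(len(studies), size=len(studies), replace=True)
        # Level 2: bootstrap observations within each drawn study, then pool.
        pseudo = np.vstack([
            studies[i][rng.integers(0, len(studies[i]), size=len(studies[i]))]
            for i in bag
        ])
        straps.append(pseudo)
    return straps

# Three made-up studies with 10, 20, and 30 observations of 3 covariates.
studies = [rng.standard_normal((n, 3)) for n in (10, 20, 30)]
straps = study_straps(studies, n_straps=5)
```

An ensemble member would then be trained on each pseudo-study, with weights that could additionally reflect covariate similarity to the target study.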


1988 ◽  
Vol 32 (13) ◽  
pp. 760-764
Author(s):  
Robert F. Randolph

Leaders of task-oriented production groups play an important role in their group's functioning and performance. That role also evolves as groups mature and learn to work together more smoothly. The present study uses a functional analysis of the evolving role of supervisors of underground coal mining crews to evaluate the impact of supervisors' characteristics and behaviors on their crews' efficiency and safety, and makes recommendations for improving supervisory selection and training. Data were gathered from a sample of 138 supervisors at 13 underground coal mines. Detailed structured observations of the supervisors indicated that most of their time was spent attending to hardware and paperwork, while comparatively little time was spent on person-to-person “leadership”. The findings point out that while group needs changed over time, the supervisors' behaviors typically did not keep pace and probably restricted group performance.


2020 ◽  
pp. 105971231989648 ◽  
Author(s):  
David Windridge ◽  
Henrik Svensson ◽  
Serge Thill

We consider the benefits of dream mechanisms – that is, the ability to simulate new experiences based on past ones – in a machine learning context. Specifically, we are interested in learning for artificial agents that act in the world, and operationalize “dreaming” as a mechanism by which such an agent can use its own model of the learning environment to generate new hypotheses and training data. We first show that it is not necessarily a given that such a data-hallucination process is useful, since it can easily lead to a training set dominated by spurious imagined data until an ill-defined convergence point is reached. We then analyse a notably successful implementation of a machine learning-based dreaming mechanism by Ha and Schmidhuber (Ha, D., & Schmidhuber, J. (2018). World models. arXiv e-prints, arXiv:1803.10122). On that basis, we then develop a general framework by which an agent can generate simulated data to learn from in a manner that is beneficial to the agent. This, we argue, then forms a general method for an operationalized dream-like mechanism. We finish by demonstrating the general conditions under which such mechanisms can be useful in machine learning, wherein the implicit simulator inference and extrapolation involved in dreaming act without reinforcing inference error even when inference is incomplete.
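As an illustration of the idea (not of Ha and Schmidhuber's architecture), a "world model" can be as simple as a regression fit to observed transitions, which the agent then rolls forward to generate imagined trajectories. The 1-D linear dynamics below are a made-up toy:

```python
import numpy as np

rng = np.random.default_rng(1)

# True (unknown to the agent) 1-D dynamics: x' = 0.9 x + a
def env_step(x, a):
    return 0.9 * x + a

# Collect real experience: (state, action) pairs and resulting next states.
X = rng.uniform(-1, 1, size=(200, 2))
y = env_step(X[:, 0], X[:, 1])

# Learn a linear world model by least squares.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Dream": roll the learned model forward from a new start state,
# producing training data the agent never actually experienced.
def dream(x0, actions):
    xs = [x0]
    for a in actions:
        xs.append(float(theta @ np.array([xs[-1], a])))
    return np.array(xs)

traj = dream(0.5, [0.1, -0.2, 0.0])
```

Because the toy model is exact here, the dreamed trajectory matches reality; with model error, imagined rollouts drift, which is precisely the failure mode of spurious hallucinated data that the article analyses.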


2020 ◽  
Vol 9 (4) ◽  
pp. 59
Author(s):  
Fabrizio De Vita ◽  
Dario Bruneo

During the last decade, the Internet of Things has acted as a catalyst for the big data phenomenon. As a result, modern edge devices can access a huge amount of data that can be exploited to build useful services. In such a context, artificial intelligence plays a key role in developing intelligent systems (e.g., intelligent cyber-physical systems) that create a connecting bridge with the physical world. However, as time goes by, machine and deep learning applications are becoming more complex, requiring increasing amounts of data and training time, which makes centralized approaches unsuitable. Federated learning is an emerging paradigm that enables the cooperation of edge devices to learn a shared model (while keeping their training data private), thereby abating the training time. Although federated learning is a promising technique, its implementation is difficult and brings many challenges. In this paper, we present an extension of Stack4Things, a cloud platform developed in our department; leveraging its functionalities, we enable the deployment of federated learning on edge devices regardless of their heterogeneity. Experimental results show a comparison with a centralized approach and demonstrate the effectiveness of the proposed approach in terms of both training time and model accuracy.
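The server side of a federated scheme of this kind is often a weighted model average (the FedAvg rule). The sketch below assumes clients send flat weight vectors together with their local sample counts; this is a generic illustration, not necessarily how Stack4Things exchanges models:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    # Average client models, weighting each by its share of the total
    # training data. Raw training data never leaves the clients; only
    # model parameters are exchanged.
    sizes = np.asarray(client_sizes, dtype=float)
    shares = sizes / sizes.sum()
    return np.sum(np.stack(client_weights) * shares[:, None], axis=0)

# Two hypothetical clients: one trained on 1 sample, one on 3.
global_w = fedavg([np.array([1.0, 1.0]), np.array([3.0, 3.0])], [1, 3])
```

Weighting by sample count keeps the aggregate close to what centralized training on the pooled data would produce, while the data themselves stay on the devices.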


2019 ◽  
Vol 7 (3) ◽  
pp. SE269-SE280
Author(s):  
Xu Si ◽  
Yijun Yuan ◽  
Tinghua Si ◽  
Shiwen Gao

Random noise often contaminates seismic data and reduces its signal-to-noise ratio. Therefore, the removal of random noise has been an essential step in seismic data processing. The f-x predictive filtering method is one of the most widely used methods for suppressing random noise. However, when the subsurface structure becomes complex, this method suffers from higher prediction errors owing to the large number of different dip components that need to be predicted. Here, we used a denoising convolutional neural network (DnCNN) algorithm to attenuate random noise in seismic data. This method does not assume the linearity and stationarity of the signal, as the conventional f-x domain prediction technique does; it involves creating a set of training data by data processing, feeding the neural network with these training data, and carrying out deep network learning and training. During learning and training, the activation function and batch normalization are used to address the vanishing- and exploding-gradient problems, and the residual learning technique is used to improve the calculation precision. After training, the network is able to separate the residual (noise) image from noisy seismic data. Clean images can then be obtained by subtracting the residual image from the raw noisy data. Tests on synthetic and real data demonstrate that the DnCNN algorithm is very effective for random noise attenuation in seismic data.
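The residual-learning step can be sketched as follows. The "network" here is replaced by a hypothetical local-mean stand-in on a 1-D trace, since the point is only the formulation: the model predicts the noise (residual), and the clean estimate is the noisy input minus that residual:

```python
import numpy as np

def denoise_residual(noisy, predict_residual):
    # DnCNN-style residual learning: the model outputs the *noise*
    # image, and the clean estimate is noisy minus residual.
    return noisy - predict_residual(noisy)

# Stand-in "network": estimate the residual as the deviation from a
# local mean (a real DnCNN learns this mapping with conv layers,
# activations, and batch normalization).
def local_mean_residual(x, k=5):
    smooth = np.convolve(x, np.ones(k) / k, mode="same")
    return x - smooth

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 3 * t)            # smooth synthetic trace
noisy = clean + 0.3 * rng.standard_normal(500)
denoised = denoise_residual(noisy, local_mean_residual)
```

Even this crude residual estimator lowers the error against the clean trace; the learned network does far better by adapting the residual prediction to the data.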

