Appearance-Based Sequential Robot Localization Using a Patchwise Approximation of a Descriptor Manifold

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2483
Author(s):  
Alberto Jaenal ◽  
Francisco-Angel Moreno ◽  
Javier Gonzalez-Jimenez

This paper addresses appearance-based robot localization in 2D with a sparse, lightweight map of the environment composed of descriptor–pose image pairs. Based on previous research in the field, we assume that image descriptors are samples of a low-dimensional Descriptor Manifold that is locally articulated by the camera pose. We propose a piecewise approximation of the geometry of such a Descriptor Manifold through a tessellation of so-called Patches of Smooth Appearance Change (PSACs), which defines our appearance map. Upon this map, the presented robot localization method applies both a Gaussian Process Particle Filter (GPPF) to perform camera tracking and a Place Recognition (PR) technique for relocalization within the most likely PSACs according to the observed descriptor. A specific Gaussian Process (GP) is trained for each PSAC to regress a Gaussian distribution over the descriptor for any particle pose lying within that PSAC. Evaluating the observed descriptor against this distribution yields a likelihood, which is used as the weight of the particle. In addition, we model the impact of appearance variations on image descriptors as a white noise distribution within the GP formulation, ensuring adequate operation under lighting and scene appearance changes with respect to the conditions in which the map was constructed. A series of experiments with both real and synthetic images show that our method outperforms state-of-the-art appearance-based localization methods in terms of robustness and accuracy, with median errors below 0.3 m and 6°.
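The particle-weighting step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each PSAC's GP predicts an isotropic Gaussian (a mean descriptor and a scalar variance) for a particle pose, and the observed descriptor is evaluated under that Gaussian; all function names are hypothetical.

```python
import numpy as np

def gaussian_log_likelihood(observed, mean, var):
    # Log-density of the observed descriptor under an isotropic
    # Gaussian N(mean, var * I) predicted by a per-PSAC GP.
    d = observed - mean
    k = observed.size
    return -0.5 * (np.sum(d * d) / var + k * np.log(2 * np.pi * var))

def weight_particles(observed, gp_means, gp_vars):
    # Turn per-particle GP predictions into normalized particle weights.
    log_w = np.array([gaussian_log_likelihood(observed, m, v)
                      for m, v in zip(gp_means, gp_vars)])
    log_w -= log_w.max()              # stabilize before exponentiating
    w = np.exp(log_w)
    return w / w.sum()

# Toy example: 3 particles with 4-D descriptor predictions; the first
# particle's predicted mean matches the observation best.
obs = np.array([0.1, 0.2, 0.3, 0.4])
means = [obs, obs + 0.5, obs + 2.0]
weights = weight_particles(obs, means, [0.1, 0.1, 0.1])
```

Particles whose predicted descriptor disagrees with the observation receive exponentially smaller weights, which is what drives the resampling toward the true pose.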

Author(s):  
Wenyan Zhang ◽  
Ling Xu ◽  
Meng Yan ◽  
Ziliang Wang ◽  
Chunlei Fu

In recent years, the number of online services has grown rapidly, and invoking required services through cloud platforms has become the primary trend. How to help users choose and recommend high-quality services among the huge number of candidate services has become a hot research issue. Among existing QoS prediction methods, the collaborative filtering (CF) method can only learn low-dimensional linear characteristics, and its effectiveness is limited by sparse data. Although existing deep learning methods can better capture high-dimensional nonlinear features, most of them use only identity features, and the vanishing-gradient problem worsens as the network deepens, so their QoS prediction results are unsatisfactory. To address these problems, we propose a probability distribution and location-aware ResNet approach for QoS prediction (PLRes). This approach considers the probability distribution of historical invocations and the location characteristics of users and services, and is the first to use ResNet in QoS prediction to reuse features, which alleviates gradient vanishing and model degradation. A series of experiments are conducted on the real-world web service dataset WS-DREAM. At densities of 5%–30%, the experimental results on both QoS attributes, response time and throughput, indicate that PLRes outperforms five existing state-of-the-art QoS prediction approaches.
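The feature-reuse idea behind ResNet that PLRes relies on can be illustrated with a minimal fully connected residual block in NumPy. This is a generic sketch, not the PLRes architecture; the input vector is a stand-in for the fused identity, location, and distribution features.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, b1, W2, b2):
    # One fully connected residual block: out = ReLU(x + F(x)).
    # The identity skip connection lets gradients bypass F, which is
    # what eases the vanishing-gradient problem as the network deepens.
    h = relu(x @ W1 + b1)
    return relu(x + (h @ W2 + b2))

rng = np.random.default_rng(0)
dim = 8                                   # joint feature width (toy)
x = rng.normal(size=(1, dim))             # stand-in fused feature vector
W1 = rng.normal(size=(dim, dim)) * 0.1
W2 = rng.normal(size=(dim, dim)) * 0.1
b1 = np.zeros(dim)
b2 = np.zeros(dim)
y = residual_block(x, W1, b1, W2, b2)
```

Stacking such blocks deepens the network while the skip paths keep the input features flowing to later layers, the "feature reuse" the abstract refers to.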


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yuhao Liu

Most recent advances in image superresolution (SR) assume that the blur kernel used during downsampling is predefined (e.g., a bicubic or Gaussian kernel), but making such a kernel suitable for all realistic images is difficult. In this paper, we propose an Improved Superresolution Feedback Network (ISRFN), which avoids predefining the downsampling blur kernel by dealing with real-world HR–LR image pairs directly, without a downsampling process. We build ISRFN by modifying the layers and network structure of the well-known Superresolution Feedback Network (SRFBN). We trained ISRFN on the camera lens database City100, in which the HR and LR images were produced with the same lens rather than by downsampling, so our proposed ISRFN is free from estimating the blur kernel. Because City100 covers two camera lenses (smartphone and DSLR), we perform two series of experiments, one per lens-based database, to choose the optimum network structure; the experiments make it clear that databases based on different camera lenses have different optimum network structures. We also compare the performance of our two ISRFNs with state-of-the-art algorithms; experiments show that the proposed ISRFN outperforms them.
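The feedback mechanism that SRFBN-style networks are built on, where a block's output is carried over as a hidden state and fused with the low-level features to refine the next iteration, can be caricatured with a toy numerical loop. This is purely illustrative and is not ISRFN's learned layers.

```python
import numpy as np

# Toy refinement step standing in for a learned SR block: it nudges the
# current features toward a fixed target.
target = np.ones(4)

def refine(features):
    return features + 0.5 * (target - features)

def feedback_step(lr_feat, hidden):
    # The previous iteration's output (hidden state) is fused with the
    # low-resolution features before refinement -- the feedback idea.
    fused = 0.5 * (lr_feat + hidden)      # stand-in for a learned fusion
    return refine(fused)

lr_feat = np.zeros(4)
hidden = np.zeros(4)
for _ in range(8):                        # unrolled feedback iterations
    hidden = feedback_step(lr_feat, hidden)
# hidden converges to the fixed point (2/3) of this toy recurrence
```

The point of the structure is that high-level information from earlier iterations flows back to correct low-level features, instead of a single feed-forward pass.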


Metals ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 250
Author(s):  
Jiří Hájek ◽  
Zaneta Dlouha ◽  
Vojtěch Průcha

This article is a response to the state of the art in monitoring the cooling capacity of quenching oils in industrial practice. Very often, a hardening shop requires a report with data on the cooling process for a particular quenching oil, but interpreting these data can be rather difficult. The main goal of our work was to compare the various criteria used for evaluating quenching oils; those which prove essential for operation in tempering plants would then be introduced into practice. Furthermore, the article describes monitoring the changes in the properties of a quenching oil used in a hardening shop, the effect of quenching oil temperature on its cooling capacity, and the impact of water content on certain cooling parameters of selected oils. Cooling curves were measured (including cooling rates and the times to reach relevant temperatures) according to ISO 9950. The hardening power of the oil and the area below the cooling rate curve as a function of temperature (the amount of heat removed in the nose region of the continuous cooling transformation (CCT) curve) were calculated. V-values based on the work of Tamura, reflecting the steel type and its CCT curve, were calculated as well. All the data were compared against the hardness and microstructure on a section through a cylinder made of EN C35 steel cooled in the particular oil. Based on the results, criteria are recommended for assessing the suitability of a quenching oil for a specific steel grade and product size. The quenching oils used in the experiment were Houghto Quench C120, Paramo TK 22, Paramo TK 46, CS Noro MO 46 and Durixol W72.
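As a minimal illustration of one of the criteria above, the area below the cooling-rate curve over the nose region can be estimated with the trapezoidal rule. The curve values and the 650–450 °C interval here are invented for illustration, not measured data from the article.

```python
import numpy as np

def trapezoid_area(x, y):
    # Trapezoidal rule; abs() handles a descending x axis, since probe
    # temperature falls over the course of the quench.
    return abs(np.sum(np.diff(x) * (y[:-1] + y[1:]) / 2.0))

# Invented cooling-rate curve: rate (deg C/s) vs probe temperature (deg C)
temperature = np.array([850.0, 750.0, 650.0, 550.0, 450.0, 350.0])
cooling_rate = np.array([20.0, 55.0, 90.0, 70.0, 35.0, 15.0])

# Heat-removal figure over an assumed CCT "nose" interval, 650-450 deg C,
# taken as the area under the cooling-rate curve on that interval
mask = (temperature <= 650.0) & (temperature >= 450.0)
nose_area = trapezoid_area(temperature[mask], cooling_rate[mask])
```

A larger area over the nose interval means more heat removed where the steel is most prone to forming soft transformation products.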


Author(s):  
Florian Kuisat ◽  
Fernando Lasagni ◽  
Andrés Fabián Lasagni

It is well known that the surface topography of a part can affect its mechanical performance, which is typical in additive manufacturing. In this context, we report on the surface modification of additively manufactured components made of Titanium 64 (Ti64) and Scalmalloy®, using a pulsed laser, with the aim of reducing their surface roughness. In our experiments, a nanosecond-pulsed infrared laser source with variable pulse durations between 8 and 200 ns was applied. The impact of varying a large number of parameters on the surface quality of the smoothed areas was investigated. The results demonstrated a reduction in surface roughness Sa of more than 80% for Titanium 64 and 65% for Scalmalloy® samples. This allows the applicability of additively manufactured components to be extended beyond the current state of the art and breaks new ground for use in various industrial fields such as aerospace.
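The Sa figure quoted above is the arithmetical mean height of the surface. A minimal sketch of computing Sa from a height map and the percentage reduction between an as-built and a smoothed surface, using synthetic data rather than the paper's measurements:

```python
import numpy as np

def surface_sa(height_map):
    # Arithmetical mean height Sa: mean absolute deviation of the
    # surface heights from their mean plane (here, the mean height).
    return np.mean(np.abs(height_map - height_map.mean()))

rng = np.random.default_rng(1)
rough = rng.normal(scale=10.0, size=(64, 64))     # synthetic as-built surface
smoothed = rng.normal(scale=1.5, size=(64, 64))   # after laser smoothing
reduction_pct = 100.0 * (1.0 - surface_sa(smoothed) / surface_sa(rough))
```

With these synthetic height maps the reduction comes out in the ballpark of the paper's reported figures, around 85%.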


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1407
Author(s):  
Peng Wang ◽  
Jing Zhou ◽  
Yuzhang Liu ◽  
Xingchen Zhou

Knowledge graph embedding aims to embed entities and relations into low-dimensional vector spaces. Most existing methods focus only on the triple facts in knowledge graphs. In addition, models based on translation or distance measurement cannot fully represent complex relations. As well-constructed prior knowledge, entity types can be employed to learn the representations of entities and relations. In this paper, we propose a novel knowledge graph embedding model named TransET, which takes advantage of entity types to learn more semantic features. More specifically, circular convolution of the embeddings of entities and entity types is used to map the head and tail entities to type-specific representations, and then a translation-based score function is used to learn the representations of triples. We evaluated our model on real-world datasets with two benchmark tasks, link prediction and triple classification. Experimental results demonstrate that it outperforms state-of-the-art models in most cases.
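The type-specific mapping described above combines entity and type embeddings by circular convolution, which can be computed via the FFT; a translation-based distance then scores a triple. A minimal sketch with random embeddings, where the relation vector is constructed so the triple fits exactly and scores near zero; this illustrates the operations, not the trained TransET model:

```python
import numpy as np

def circular_convolution(a, b):
    # Circular convolution of two embeddings, computed via the FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def score(head, head_type, relation, tail, tail_type):
    # Map both entities to type-specific representations, then apply a
    # translation-based distance: lower means a more plausible triple.
    h = circular_convolution(head, head_type)
    t = circular_convolution(tail, tail_type)
    return np.linalg.norm(h + relation - t)

rng = np.random.default_rng(0)
d = 16
head, head_type, tail, tail_type = (rng.normal(size=d) for _ in range(4))
# A relation constructed so that this triple fits exactly
relation = (circular_convolution(tail, tail_type)
            - circular_convolution(head, head_type))
s = score(head, head_type, relation, tail, tail_type)
```

In training, such a score would be minimized for observed triples and kept large for corrupted ones via a margin-based loss.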


Logistics ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 8
Author(s):  
Hicham Lamzaouek ◽  
Hicham Drissi ◽  
Naima El Haoud

The bullwhip effect is a pervasive phenomenon in all supply chains, causing excessive inventory, delivery delays, deterioration of customer service, and high costs. Some researchers have studied this phenomenon from a financial perspective by shedding light on the phenomenon of the cash flow bullwhip (CFB). The objective of this article is to provide the state of the art of research work on the CFB. Our ambition is not to make an exhaustive list, but to synthesize the main contributions, enabling us to identify other interesting research perspectives. In this regard, certain lines of research remain insufficiently explored, such as the role that supply chain digitization could play in controlling the CFB, the impact of the CFB on the profitability of companies, or the impact of omnichannel commerce on the CFB.
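The bullwhip effect, and by extension the CFB, is commonly quantified as a variance-amplification ratio between an upstream and a downstream flow (orders versus end demand, or cash flows versus customer payments). A minimal sketch with invented numbers; this is the generic metric, not a measure proposed in the article:

```python
import numpy as np

def bullwhip_ratio(upstream, downstream):
    # Variance amplification of the upstream signal relative to the
    # downstream one; a ratio above 1 indicates a bullwhip effect.
    return np.var(upstream) / np.var(downstream)

# Invented example: retailer orders swing far more than end demand
demand = np.array([100.0, 102, 98, 101, 99, 103, 97, 100])
orders = np.array([100.0, 110, 90, 105, 95, 115, 85, 100])
ratio = bullwhip_ratio(orders, demand)
```

Here the orders amplify the demand fluctuations by a factor of 25 in variance, the kind of distortion that propagates into cash flows as the CFB.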


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4392
Author(s):  
Jia Zhou ◽  
Hany Abdel-Khalik ◽  
Paul Talbot ◽  
Cristian Rabiti

This manuscript develops a workflow, driven by data analytics algorithms, to support the optimization of the economic performance of an Integrated Energy System (IES). The goal is to determine the optimum mix of capacities from a set of different energy producers (e.g., nuclear, gas, wind and solar). A stochastic optimizer based on Gaussian Process modeling is employed, which requires numerous samples for its training. Each sample is a time series describing the demand, load, or other operational and economic profiles for the various types of energy producers. These samples are generated synthetically by a reduced order modeling algorithm that reads a limited set of historical data, such as demand and load data from past years. Numerous data analysis methods are employed to construct the reduced order models, including, for example, Auto Regressive Moving Average models, Fourier series decomposition, and a peak detection algorithm. All these algorithms are designed to detrend the data and extract features that can be employed to generate synthetic time histories preserving the statistical properties of the original, limited historical data. The optimization cost function is based on an economic model that assesses the effective cost of energy via two figures of merit: the specific cash flow stream for each energy producer and the total Net Present Value. An initial guess for the optimal capacities is obtained using the screening curve method. The results of the Gaussian Process model-based optimization are assessed against an exhaustive Monte Carlo search, which indicates that the optimization performs reasonably well. The workflow has been implemented inside the Idaho National Laboratory's Risk Analysis and Virtual Environment (RAVEN) framework.
The main contribution of this study is to address several challenges in current methods for optimizing energy portfolios in IES: first, the feasibility of generating synthetic time series of the periodic peak data; second, the computational burden of conventional stochastic optimization of the energy portfolio, associated with the need for repeated executions of system models; third, the inadequacies of previous studies in comparing the impact of economic parameters. The proposed workflow can provide a scientifically defensible strategy to support decision-making in the electricity market and to help energy distributors develop a better understanding of the performance of integrated energy systems.
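The detrending idea behind such reduced order models, fitting a deterministic periodic component and modeling the residual stochastically, can be sketched as follows. This is a simplified stand-in for the ARMA/Fourier pipeline described above: a few Fourier harmonics are fitted by least squares, and the residual is bootstrapped rather than fitted with an ARMA model.

```python
import numpy as np

def fit_fourier_trend(y, n_harmonics=2):
    # Least-squares fit of a mean level plus a few Fourier harmonics,
    # capturing the periodic (e.g., seasonal) component of the history.
    t = np.arange(len(y), dtype=float)
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t / len(y)))
        cols.append(np.cos(2 * np.pi * k * t / len(y)))
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef

rng = np.random.default_rng(0)
t = np.arange(365)
history = 50 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0.0, 1.0, 365)
trend = fit_fourier_trend(history)
residual = history - trend               # detrended part, left for ARMA
# Synthetic history: deterministic trend plus bootstrapped residuals,
# preserving the statistics of the limited original data
synthetic = trend + rng.choice(residual, size=365, replace=True)
```

Each bootstrap draw yields a new plausible year of demand, which is how a limited history can feed the many training samples the Gaussian Process optimizer requires.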


2021 ◽  
Vol 11 (15) ◽  
pp. 7046
Author(s):  
Jorge Francisco Ciprián-Sánchez ◽  
Gilberto Ochoa-Ruiz ◽  
Lucile Rossi ◽  
Frédéric Morandini

Wildfires stand as one of the most relevant natural disasters worldwide, all the more so due to the effects of climate change and their impact at various societal and environmental levels. In this regard, a significant amount of research has been done to address this issue, deploying a wide variety of technologies and following a multi-disciplinary approach. Notably, computer vision has played a fundamental role: it can be used to extract and combine information from several imaging modalities for fire detection, characterization, and wildfire spread forecasting. In recent years, work on Deep Learning (DL)-based fire segmentation has shown very promising results. However, it is currently unclear whether the architecture of a model, its loss function, or the image type employed (visible, infrared, or fused) has the most impact on the fire segmentation results. In the present work, we evaluate different combinations of state-of-the-art (SOTA) DL architectures, loss functions, and image types to identify the parameters most relevant to improving the segmentation results. We benchmark them to identify the top performers and compare them to traditional fire segmentation techniques. Finally, we evaluate whether adding attention modules to the best-performing architecture can further improve the segmentation results. To the best of our knowledge, this is the first work to evaluate the impact of the architecture, loss function, and image type on the performance of DL-based wildfire segmentation models.
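A common figure of merit in such segmentation benchmarks is the Dice coefficient, with 1 − Dice usable as a loss. A minimal sketch on toy binary masks; this shows the generic metric, not the specific loss functions compared in the paper:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice overlap between binary masks: 2|A intersect B| / (|A| + |B|).
    # 1 - Dice is a common segmentation loss.
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((8, 8), dtype=int)
target = np.zeros((8, 8), dtype=int)
pred[2:6, 2:6] = 1      # 16 predicted fire pixels
target[3:7, 3:7] = 1    # 16 ground-truth fire pixels, partial overlap
d = dice_coefficient(pred, target)
```

Overlap-based metrics like this one are preferred over pixel accuracy for fire masks, where the fire region usually occupies a small fraction of the image.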

