Gaining a Sense of Touch: Object Stiffness Estimation Using a Soft Gripper and Neural Networks

Electronics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 96
Author(s):  
Michal Bednarek ◽  
Piotr Kicki ◽  
Jakub Bednarek ◽  
Krzysztof Walas

Soft grippers are gaining significant attention for the manipulation of elastic objects, where soft, unstructured objects vulnerable to deformation must be handled. The crucial problem is to estimate the physical parameters of a squeezed object in order to adjust the manipulation procedure, which poses a significant challenge. Research on physical parameter estimation with deep learning algorithms applied to measurements from direct interaction with objects using robotic grippers is scarce. In our work, we propose a trainable system that regresses an object's stiffness coefficient from the signals registered during the interaction of the gripper with the object. First, using a physics simulation environment, we performed extensive experiments to validate our approach. Afterwards, we prepared a system that works in a real-world scenario with real data. Our learned system can reliably estimate the stiffness of an object with the Yale OpenHand soft gripper, based on readings from Inertial Measurement Units (IMUs) attached to the fingers of the gripper. Additionally, during the experiments, we prepared three datasets of IMU readings gathered while squeezing objects: two created in the simulation environment and one composed of real data. These datasets are our contribution to the community, providing a way to develop and validate new approaches in the growing field of soft manipulation.
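The core idea, stiffness regressed from interaction signals, can be illustrated without the authors' neural network. The sketch below replaces the learned model with a least-squares regression on a synthetic Hooke's-law signal feature (finger deflection under a fixed squeeze force, standing in for the IMU readings); the force value, noise level, and the 1/deflection feature are all illustrative assumptions, not the paper's method.

```python
import random

def simulate_squeeze(stiffness, force=5.0, noise=0.0002, rng=random):
    """Synthetic stand-in for an IMU reading: the finger deflection reached
    under a fixed squeeze force. A stiffer object deflects the finger less
    (Hooke's law), plus a little sensor noise."""
    return force / stiffness + rng.gauss(0.0, noise)

def fit_stiffness_model(deflections, stiffnesses):
    """Least-squares fit of stiffness against the feature x = 1/deflection.
    Under the toy model k = F / deflection, the fitted slope recovers F."""
    xs = [1.0 / d for d in deflections]
    slope = sum(x * k for x, k in zip(xs, stiffnesses)) / sum(x * x for x in xs)
    return lambda deflection: slope / deflection

rng = random.Random(0)
train_k = [20.0, 50.0, 100.0, 200.0, 400.0]            # known stiffness labels
train_d = [simulate_squeeze(k, rng=rng) for k in train_k]
predict = fit_stiffness_model(train_d, train_k)

# Estimate the stiffness of an unseen object from one noisy squeeze
true_k = 150.0
estimated_k = predict(simulate_squeeze(true_k, rng=rng))
```

A real pipeline would feed the full IMU time series of the squeeze into the trained regressor; the one-feature fit here only shows the supervised-regression framing.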

2021 ◽  
Vol 2 (1) ◽  
pp. 01-11
Author(s):  
Ahmed Nafidi ◽  
Oussama Rida ◽  
Boujemaa Achchab

A new stochastic diffusion process based on the generalized Brody curve is proposed. This process can be considered an extension of the nonhomogeneous lognormal diffusion process. From the corresponding Itô stochastic differential equation (SDE), we first establish the probabilistic characteristics of the process, such as the solution to the SDE, the transition probability density function and its distribution, and the moment functions, in particular the conditional and unconditional trend functions. Second, we treat the parameter estimation problem using the maximum likelihood method on the basis of discrete sampling, obtaining nonlinear equations that can be solved by metaheuristic optimization algorithms such as simulated annealing and variable neighborhood search. Finally, we perform a simulation study and apply the model to data on life expectancy at birth in Morocco.
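The general workflow, simulate a lognormal-type diffusion from its SDE, then estimate a parameter from discretely sampled data, can be sketched as follows. The drift function and parameter values below are placeholders (the paper's Brody-based drift is not reproduced here), and the closed-form MLE for the diffusion coefficient is the textbook lognormal-diffusion estimator, not the paper's full estimation system.

```python
import math
import random

def euler_maruyama(x0, drift, sigma, t_end, n_steps, rng):
    """Simulate dX_t = drift(t) * X_t dt + sigma * X_t dW_t (Euler-Maruyama)."""
    dt = t_end / n_steps
    x, t, path = x0, 0.0, [x0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))     # Brownian increment
        x += drift(t) * x * dt + sigma * x * dw
        t += dt
        path.append(x)
    return path

def mle_sigma(path, dt):
    """MLE of the diffusion coefficient from discretely sampled log-returns:
    for a lognormal diffusion, Var(log X_{t+dt} - log X_t) = sigma^2 * dt."""
    r = [math.log(b / a) for a, b in zip(path, path[1:])]
    mean = sum(r) / len(r)
    var = sum((x - mean) ** 2 for x in r) / len(r)
    return math.sqrt(var / dt)

rng = random.Random(42)
t_end, n = 10.0, 20000
path = euler_maruyama(1.0, drift=lambda t: 0.03 * math.exp(-0.1 * t),
                      sigma=0.2, t_end=t_end, n_steps=n, rng=rng)
sigma_hat = mle_sigma(path, t_end / n)
```

For the drift parameters, where no closed-form MLE exists, the resulting nonlinear likelihood equations are exactly where metaheuristics such as simulated annealing come in.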


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2841
Author(s):  
Mohammad Ali Zaiter ◽  
Régis Lherbier ◽  
Ghaleb Faour ◽  
Oussama Bazzi ◽  
Jean-Charles Noyer

This paper details a new extrinsic calibration method for a scanning laser rangefinder, focused on geometric ground-plane-based estimation. The method remains efficient in the challenging experimental configuration of a high LiDAR inclination angle. In this configuration, calibration of the LiDAR sensor is a key problem found in various domains, and one that must in particular be solved to guarantee efficient detection of objects on the ground surface. The proposed extrinsic calibration method can be summarized by the following steps: ground-plane fitting, extrinsic parameter estimation (3D orientation angles and altitude), and extrinsic parameter optimization. Finally, results are presented in terms of precision and robustness against variations in the LiDAR's orientation and range accuracy, respectively, showing the stability and accuracy of the proposed method, which was validated on both numerical simulations and real data.
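The first two steps, fitting the ground plane and reading the orientation angles and altitude off it, can be sketched as below. The least-squares plane model z = a·x + b·y + c and the particular pitch/roll conventions are assumptions for illustration; the paper's own parameterization and its optimization step are not reproduced.

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def fit_ground_plane(points):
    """Step 1: least-squares fit of z = a*x + b*y + c to ground returns,
    via the 3x3 normal equations."""
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0, 0.0, 0.0]
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            rhs[i] += row[i] * z
    return solve3(A, rhs)  # a, b, c

def extrinsics_from_plane(a, b, c):
    """Step 2: tilt angles and altitude from the fitted plane. The angle
    conventions here are one possible choice, not necessarily the paper's."""
    norm = math.sqrt(a * a + b * b + 1.0)
    pitch = math.asin(a / norm)      # tilt of the plane normal about the y-axis
    roll = math.atan2(b, 1.0)        # tilt of the plane normal about the x-axis
    altitude = abs(c) / norm         # perpendicular sensor-to-ground distance
    return pitch, roll, altitude

# Noise-free points on the plane z = 0.3*x - 0.1*y - 2.0 (sensor ~2 m up, tilted)
pts = [(x * 0.5, y * 0.5, 0.3 * (x * 0.5) - 0.1 * (y * 0.5) - 2.0)
       for x in range(1, 11) for y in range(-5, 6)]
a, b, c = fit_ground_plane(pts)
pitch, roll, altitude = extrinsics_from_plane(a, b, c)
```

With real scans, the ground returns would first have to be segmented from obstacles, and the final optimization step would refine these closed-form estimates against noisy data.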


2014 ◽  
Vol 11 (S308) ◽  
pp. 368-371
Author(s):  
Jukka Nevalainen ◽  
L. J. Liivamägi ◽  
E. Tempel ◽  
E. Branchini ◽  
M. Roncarelli ◽  
...  

Abstract: We have developed a new method to approach the missing baryons problem. We assume that the missing baryons reside in the form of the Warm-Hot Intergalactic Medium, i.e. the WHIM. Our method consists of (a) detecting the coherent large-scale structure in the spatial distribution of galaxies that traces the Cosmic Web and that, in hydrodynamical simulations, is associated with the WHIM; (b) mapping its luminosity into a galaxy luminosity density field; (c) using numerical simulations to relate the luminosity density to the density of the WHIM; and (d) applying this relation to real data to trace the WHIM using the observed galaxy luminosities in the Sloan Digital Sky Survey and 2dF redshift surveys. In our application we find evidence for the WHIM along the line of sight to the Sculptor Wall, at redshifts consistent with the recently reported X-ray absorption line detections. Our indirect WHIM detection technique complements the standard method based on the detection of characteristic X-ray absorption lines, showing that the galaxy luminosity density is a reliable signpost for the WHIM. For this reason, our method could be applied to current galaxy surveys to optimise observational strategies for detecting and studying the WHIM and its properties. Our estimates of the WHIM hydrogen column density NH in Sculptor agree with those obtained via the X-ray analysis. Owing to the additional NH estimate, our method has the potential to improve the constraints on the physical parameters of the WHIM derived from X-ray absorption, and thus to improve the understanding of the missing baryons problem.
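Steps (b) and (c) of the method can be caricatured in one dimension: smooth the galaxy luminosities into a density field, then map that field to a WHIM density through a simulation-calibrated relation. Everything below is a toy: the Gaussian kernel, the power-law form, and the amplitude/exponent values are placeholders for the actual kernels and simulation calibration used in such work.

```python
import math

def luminosity_density(positions, luminosities, grid, smooth=1.0):
    """Step (b): kernel-smoothed galaxy luminosity density on a 1-D grid.
    A Gaussian kernel stands in for the smoothing actually used."""
    field = []
    for g in grid:
        rho = sum(L * math.exp(-0.5 * ((g - p) / smooth) ** 2)
                  for p, L in zip(positions, luminosities))
        field.append(rho / (smooth * math.sqrt(2.0 * math.pi)))
    return field

def whim_density(lum_density, amp=1.0, gamma=0.8):
    """Step (c): map luminosity density to WHIM density via a power law;
    amp and gamma would be calibrated on hydrodynamical simulations."""
    return [amp * rho ** gamma for rho in lum_density]

# A toy 'wall' of galaxies around x = 5 should trace a WHIM overdensity there
gals = [4.5, 4.8, 5.0, 5.2, 5.5, 9.0]
lums = [1.0, 2.0, 3.0, 2.0, 1.0, 0.5]
grid = [i * 0.5 for i in range(21)]
rho_whim = whim_density(luminosity_density(gals, lums, grid))
peak = grid[max(range(len(grid)), key=lambda i: rho_whim[i])]
```

The point of the sketch is only that the WHIM tracer inherits the location of the galaxy structure, which is why the luminosity density field can serve as a signpost.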


2018 ◽  
Vol 22 (4) ◽  
pp. 1581-1588 ◽  
Author(s):  
Kai-Wen Wang ◽  
Xiao-Hua Yang ◽  
Yu-Qi Li ◽  
Chang-Ming Liu ◽  
Xing-Jian Guo

To improve the precision of parameter estimation in the Philip infiltration model, a chaos gray-coded genetic algorithm was introduced. The optimization algorithm made it possible to change from the discrete form of the time perturbation function to a more flexible continuous form. The software packages RETC and Hydrus-1D were applied to estimate the soil physical parameters and the reference cumulative infiltration for seven different soils in the USDA soil texture triangle. Comparisons among different numerical calculation methods for the Philip infiltration model showed that using the optimization technique can increase the Nash-Sutcliffe efficiency from 0.82 to 0.97 and decrease the percent bias from 14% to 2%. The results indicated that using the discrete relationship of the time perturbation function in the numerical calculation of the Philip infiltration model underestimates the model's parameters, but this problem can be largely corrected by using the optimization algorithm. We acknowledge that in this study the fit of the time perturbation function, a Chebyshev polynomial of order 20, did not perform perfectly near the saturated and residual water contents, so exploring a more appropriate representation of the time perturbation function remains valuable future work.
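For context, the basic Philip two-term model and the Nash-Sutcliffe efficiency used as the evaluation criterion can be sketched as below. The model I(t) = S·sqrt(t) + A·t is linear in its parameters, so a plain least-squares fit suffices here; the paper's contribution (the chaos gray-coded genetic algorithm and the continuous time perturbation function) is not reproduced, and the S, A, and time values are made up for illustration.

```python
import math

def fit_philip(times, infiltration):
    """Least-squares fit of Philip's model I(t) = S*sqrt(t) + A*t.
    Linear in S (sorptivity) and A, so the 2x2 normal equations solve
    in closed form."""
    u = [math.sqrt(t) for t in times]                 # basis 1: sqrt(t)
    v = list(times)                                   # basis 2: t
    suu = sum(a * a for a in u)
    svv = sum(b * b for b in v)
    suv = sum(a * b for a, b in zip(u, v))
    sui = sum(a, ) if False else sum(a * i for a, i in zip(u, infiltration))
    svi = sum(b * i for b, i in zip(v, infiltration))
    det = suu * svv - suv * suv
    S = (sui * svv - svi * suv) / det
    A = (svi * suu - sui * suv) / det
    return S, A

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean) ** 2 for o in obs)
    return 1.0 - sse / sst

# Synthetic cumulative infiltration from known S = 0.8 cm/h^0.5, A = 0.05 cm/h
times = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
obs = [0.8 * math.sqrt(t) + 0.05 * t for t in times]
S, A = fit_philip(times, obs)
fit_nse = nse(obs, [S * math.sqrt(t) + A * t for t in times])
```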


2021 ◽  
Author(s):  
Philipp Weiler ◽  
Koen Van den Berge ◽  
Kelly Street ◽  
Simone Tiberi

Technological developments have led to an explosion of high-throughput single cell data, which are revealing unprecedented perspectives on cell identity. Recently, significant attention has focused on investigating, from single-cell RNA-sequencing (scRNA-seq) data, cellular dynamic processes, such as cell differentiation, cell cycle and cell (de)activation. Trajectory inference methods estimate a trajectory, a collection of differentiation paths of a dynamic system, by ordering cells along the paths of such a dynamic process. While trajectory inference tools typically work with gene expression levels, common scRNA-seq protocols allow the identification and quantification of unspliced pre-mRNAs and mature spliced mRNAs, for each gene. By exploiting the abundance of unspliced and spliced mRNA, one can infer the RNA velocity of individual cells, i.e., the time derivative of the gene expression state of cells. Whereas traditional trajectory inference methods reconstruct cellular dynamics given a population of cells of varying maturity, RNA velocity relies on a dynamical model describing splicing dynamics. Here, we initially discuss conceptual and theoretical aspects of both approaches, then illustrate how they can be combined, and finally present an example use-case on real data.
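The simplest form of the RNA velocity idea, the steady-state model, can be sketched directly: estimate the splicing/degradation rate ratio gamma from cells assumed to be at equilibrium, then score each cell by v = u - gamma·s. The counts below are synthetic, and this is only the basic steady-state estimator, not the full dynamical model discussed in the text.

```python
def estimate_gamma(unspliced, spliced):
    """Steady-state estimate of the rate ratio gamma: regress unspliced
    on spliced counts through the origin (u ~ gamma * s at equilibrium)."""
    return (sum(u * s for u, s in zip(unspliced, spliced))
            / sum(s * s for s in spliced))

def rna_velocity(unspliced, spliced, gamma):
    """Per-cell velocity under the steady-state model: v = u - gamma * s.
    v > 0 suggests the gene is being induced, v < 0 that it is repressed."""
    return [u - gamma * s for u, s in zip(unspliced, spliced)]

# Cells assumed at steady state for this gene: u proportional to s
s_steady = [2.0, 4.0, 6.0, 8.0]
u_steady = [1.0, 2.0, 3.0, 4.0]
gamma = estimate_gamma(u_steady, s_steady)

# A cell with excess unspliced mRNA: the gene is being switched on
v = rna_velocity([3.0], [2.0], gamma)[0]
```

In a combined workflow, these per-cell, per-gene velocities supply the directionality that pure trajectory inference, which only orders cells, cannot determine on its own.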


1989 ◽  
Vol 257 (1) ◽  
pp. R237-R245 ◽  
Author(s):  
G. Van Waeg ◽  
T. Groth

A six-compartment model of allopurinol and oxipurinol kinetics, after intravenous allopurinol injection in the human, is studied further in order to improve the blood and urine specimen collection schedule for clinical use. The effects of various error sources are also investigated by simple techniques such as real data set truncation and the addition of normally distributed random errors to data obtained from simulation of allopurinol and oxipurinol plasma curves with preset parameters. All parameter estimation is performed with the NONLIN parameter estimation program. Main interest was focused on estimation of the fractional rate constant of transport from the central "extracellular" compartment to the metabolically active compartment. This parameter is regarded as a lumped measure of liver perfusion and liver cell membrane transport. The blood sampling schedule can be reduced to six specimens collected over 60 min without affecting the accuracy and precision of estimated clinical parameters. The maximum allowable coefficients of variation for preanalytical errors and for analytical within-run and between-run errors are around 5, 4, and 5%, respectively. Analytical between-run bias up to 20% does not affect the estimate of the principal parameter when both allopurinol and oxipurinol are biased in the same direction. Collection and analysis of urine samples was shown to be unnecessary.
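The error-simulation technique described, perturbing simulated plasma curves with normally distributed errors and refitting, can be sketched with a deliberately reduced model. A one-compartment exponential decay with a log-linear fit stands in for the six-compartment model and the NONLIN program; the rate constant, concentration scale, and 4% error level are illustrative (only the six-point/60-minute schedule is taken from the abstract).

```python
import math
import random

def simulate_plasma(c0, k, times, cv, rng):
    """One-compartment plasma curve C(t) = C0*exp(-k*t), perturbed with
    normally distributed errors of a given coefficient of variation."""
    return [c0 * math.exp(-k * t) * (1.0 + rng.gauss(0.0, cv)) for t in times]

def fit_rate_constant(times, conc):
    """Log-linear least squares: the slope of log C versus t estimates -k."""
    logs = [math.log(c) for c in conc]
    mt = sum(times) / len(times)
    ml = sum(logs) / len(logs)
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    return -slope

rng = random.Random(1)
schedule = [5, 10, 20, 30, 45, 60]    # six specimens over 60 min, as in the study
true_k = 0.05                          # preset per-minute rate constant (assumed)
estimates = [fit_rate_constant(schedule,
                               simulate_plasma(100.0, true_k, schedule, 0.04, rng))
             for _ in range(200)]
mean_k = sum(estimates) / len(estimates)
```

Repeating the experiment across error levels, as in the study, shows how large the analytical coefficient of variation can grow before the parameter estimate degrades.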


2021 ◽  
Vol 13 (12) ◽  
pp. 6963
Author(s):  
Abd-ElHady Ramadan ◽  
Salah Kamel ◽  
Tahir Khurshaid ◽  
Seung-Ryle Oh ◽  
Sang-Bong Rhee

The enhancement of photovoltaic (PV) energy systems relies on an accurate PV model. Researchers have made significant efforts to extract PV parameters because of the nonlinear characteristics of the PV system and the lack of relevant information in manufacturers' datasheets. PV parameter estimation using optimization algorithms is a challenging problem on which a wide range of research has been conducted. The core of this challenge is the selection of a proper PV model and of an algorithm to estimate that model's parameters accurately. In this paper, a new application of the improved gray wolf optimizer (I-GWO) is proposed to estimate parameter values that yield an accurate PV three-diode model (TDM) in a robust manner. The PV TDM is adopted to represent the effect of grain boundaries and the large leakage current in the PV system. I-GWO was developed to improve the population diversity, the exploration-exploitation balance, and the convergence of the original GWO. The performance of I-GWO is compared with that of other well-known optimization algorithms and is evaluated through two different applications: in the first, real data from the RTC France solar cell are used, and in the second, real data from the PTW polycrystalline PV panel. The results are compared using several evaluation criteria (root mean square error (RMSE), absolute current error, and statistical analysis over multiple independent runs). I-GWO achieved the lowest RMSE values in comparison with the other algorithms: 0.00098331 and 0.0024276 for the two applications, respectively. Based on this quantitative and qualitative performance evaluation, it can be concluded that the TDM parameters estimated by I-GWO are more accurate than those obtained by the other studied optimization algorithms.
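The objective being minimized in such work can be sketched with the simpler single-diode model (the three-diode model adds two more diode terms of the same shape). The diode equation is implicit in the current, so each evaluation needs a root solve; the parameter values below are plausible magnitudes for a small silicon cell, chosen for illustration, and the optimizer itself (I-GWO) is not reproduced, only the RMSE objective it would minimize.

```python
import math

def diode_current(v, iph, i0, n, rs, rsh, vt=0.0257, iters=60):
    """Solve the implicit single-diode equation for the cell current I at
    terminal voltage V by Newton's method:
        I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    """
    i = iph  # the short-circuit current is a good starting guess
    for _ in range(iters):
        e = math.exp((v + i * rs) / (n * vt))
        f = iph - i0 * (e - 1.0) - (v + i * rs) / rsh - i
        df = -i0 * e * rs / (n * vt) - rs / rsh - 1.0
        i -= f / df
    return i

def rmse(params, data):
    """Root mean square error between measured and modelled currents,
    the main evaluation criterion for such fits."""
    sq = [(i_meas - diode_current(v, *params)) ** 2 for v, i_meas in data]
    return math.sqrt(sum(sq) / len(sq))

# Illustrative parameters: Iph [A], I0 [A], ideality n, Rs [ohm], Rsh [ohm]
params = (0.76, 3.2e-7, 1.48, 0.036, 53.7)
data = [(v * 0.05, diode_current(v * 0.05, *params)) for v in range(12)]  # 0-0.55 V
fit_error = rmse(params, data)
```

A metaheuristic such as I-GWO would search the parameter space for the vector minimizing `rmse` against measured I-V pairs; with the synthetic data above, the true parameters recover a near-zero error by construction.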

