Hilbert stereo reconstruction algorithm based on depth feature and stereo matching

2020 ◽  
Vol 39 (5) ◽  
pp. 8027-8038
Author(s):  
Weiyi Kong ◽  
Menglong Yang ◽  
Qinzhen Huang

This paper proposes a Hilbert stereo reconstruction algorithm based on depth features and stereo matching, named the Hilbert stereo network, to address matching errors in occluded regions. Traditional stereo networks focus on the disparity itself, which leads to inaccurate disparity estimation. Our network instead learns effective disparity matching and refinement through a reconstruction representation based on Hilbert disparity coefficients. Since Hilbert coefficients are not affected by occlusion and texture in the image, stereo disparity matching can be conducted effectively. The network comprises three sub-modules: depth feature representation, Hilbert cost volume fusion, and Hilbert refinement reconstruction. First, texture features at different depth levels of the image are extracted through a Hilbert filtering operation. Next, stereoscopic disparity fusion is performed, and the Hilbert refinement module then regresses the refined disparities for stereo matching. With its end-to-end design, the structure is refined by combining the depth feature extraction module with the Hilbert coefficient disparities. Finally, the Hilbert stereo matching algorithm achieves excellent performance on standard large-scale data sets in comparison with other state-of-the-art stereo networks. Experiments show that our network attains both high accuracy and high performance.
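The abstract's cost volume fusion stage presupposes a per-disparity matching-cost volume. A minimal sketch of that generic substrate, using a plain sum-of-absolute-differences cost rather than the paper's Hilbert-coefficient features (which are not specified here):

```python
import numpy as np

def cost_volume(left, right, max_disp):
    """Build a simple SAD cost volume of shape (max_disp, H, W).

    A generic stand-in for the matching-cost substrate that a fusion
    module like the paper's would operate on; the Hilbert-coefficient
    representation itself is not reproduced.
    """
    h, w = left.shape
    vol = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # shift the right image by disparity d and compare overlapping columns
        vol[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return vol

def winner_take_all(vol):
    # per pixel, pick the disparity with the minimal matching cost
    return np.argmin(vol, axis=0)
```

On a synthetic pair where the right image is the left shifted by a constant disparity, winner-take-all over this volume recovers that disparity in the valid region.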

2020 ◽  
Vol 10 (10) ◽  
pp. 3382
Author(s):  
Rahmat Ullah ◽  
Tughrul Arslan

For processing large-scale medical imaging data, high-performance computing and cloud-based resources are rapidly gaining attention. Owing to its low cost and non-invasive nature, microwave technology is being investigated for breast and brain imaging. The microwave imaging via space-time algorithm and its extended versions are commonly used, as they provide high-quality images. However, due to intensive computation and sequential execution, these algorithms cannot produce images in an acceptable time. In this paper, a parallel microwave image reconstruction algorithm based on Apache Spark, running on high-performance computing and the Google Cloud Platform, is proposed. The input data are first converted to a resilient distributed dataset and then distributed to multiple nodes of a cluster. Subsets of pixel data are computed in parallel on these nodes, and the results are returned to a master node for image reconstruction. Using Apache Spark, the performance of the parallel microwave image reconstruction algorithm is evaluated on high-performance computing and the Google Cloud Platform, showing an average speed-up of 28.56 times on four homogeneous computing nodes. Experimental results reveal that the proposed algorithm fully exploits the available parallelism, resulting in fast reconstruction of images from radio-frequency sensor data. The paper also shows that the proposed algorithm is general and can be deployed on any master-slave architecture.
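The partition-compute-merge pattern described above can be sketched without a Spark cluster. The following stand-in uses Python's thread pool in place of Spark executors, and a placeholder per-pixel intensity in place of the actual beamforming sum, to illustrate only the distribution pattern:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def reconstruct_chunk(pixel_ids, width):
    """Worker task: compute intensities for one partition of pixels.

    The per-pixel space-time beamforming of the real algorithm is
    replaced by a placeholder; only the partition/merge pattern of
    the described Spark pipeline is illustrated.
    """
    return [(p, float(p % width)) for p in pixel_ids]  # placeholder intensity

def parallel_reconstruct(height, width, n_partitions=4):
    ids = np.arange(height * width)
    chunks = np.array_split(ids, n_partitions)  # analogous to RDD partitions
    image = np.empty(height * width)
    with ThreadPoolExecutor(max_workers=n_partitions) as pool:
        for part in pool.map(lambda c: reconstruct_chunk(c, width), chunks):
            for p, val in part:  # "master node" merges partial results
                image[p] = val
    return image.reshape(height, width)
```

In the paper's setting each chunk would be a Spark task on a cluster node; here the same decomposition runs locally.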


2012 ◽  
Vol 2012 ◽  
pp. 1-11 ◽  
Author(s):  
Sanja Damjanović ◽  
Ferdinand van der Heijden ◽  
Luuk J. Spreeuwers

We propose a new dense local stereo matching framework for gray-level images based on adaptive local segmentation with a dynamic threshold. We define a new validity domain for the frontoparallel assumption based on the local intensity variations in the four neighborhoods of the matching pixel. A preprocessing step smoothes low-textured areas and sharpens texture edges, whereas a postprocessing step detects and recovers occluded and unreliable disparities. The algorithm achieves high stereo reconstruction quality in regions with uniform intensities as well as in textured regions. It is robust against local radiometric differences and successfully recovers disparities around object edges, disparities of thin objects, and disparities in occluded regions. Moreover, it intrinsically prevents errors caused by occlusion from propagating into non-occluded regions, and it has only a small number of parameters. The performance of our algorithm is evaluated on the Middlebury stereo test bed. It ranks highly on the evaluation list, outperforming many local and global stereo algorithms that use color images; among the local algorithms relying on the frontoparallel assumption, ours is the best-ranked. We also demonstrate that the algorithm works well on practical examples, such as disparity estimation for a tomato seedling and 3D reconstruction of a face.
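One way to read "adaptive local segmentation using a dynamic threshold" is that the threshold deciding which neighbors belong to the same surface is derived from the local intensity spread rather than fixed globally. A minimal sketch under that assumption (the paper's exact rule may differ):

```python
import numpy as np

def local_segment(img, y, x, k=1.0):
    """Mask of pixels grouped with (y, x) under a dynamic threshold.

    Illustrative only: the threshold adapts to the intensity spread
    of the 3x3 neighbourhood around the matching pixel; this is an
    assumed rule, not the paper's exact formulation.
    """
    win = img[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
    t = k * win.std() + 1e-6            # dynamic, content-driven threshold
    return np.abs(img - img[y, x]) < t  # pixels deemed "same surface"
```

On an image with two flat regions, a segment seeded in one region stays out of the other, which is the behavior that keeps occlusion errors from propagating across intensity boundaries.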


Author(s):  
C. Sauer ◽  
F. Bagusat ◽  
M.-L. Ruiz-Ripoll ◽  
C. Roller ◽  
M. Sauer ◽  
...  

This work aims at the characterization of a modern concrete material. For this purpose, we perform two experimental series of inverse planar plate impact (PPI) tests on the ultra-high performance concrete B4Q, using two different witness plate materials. Hugoniot data in the range of particle velocities from 180 to 840 m/s and stresses from 1.1 to 7.5 GPa are derived from both series; within the experimental accuracy, they can be regarded as one consistent data set. Moreover, we conduct corresponding numerical simulations and find reasonably good agreement between simulated and experimentally obtained curves. From the simulated curves, we derive numerical Hugoniot results that serve as a homogenized, mean shock response of B4Q and add further consistency to the data set. The comparison of simulated and experimental results also allows us to identify experimental outliers. Furthermore, we perform a parameter study which shows that a significant influence of the applied pressure-dependent strength model on the derived equation of state (EOS) parameters is unlikely. To compare the current results with our own, partially reevaluated, previous work and with selected recent results from the literature, we use simulations to numerically extrapolate the Hugoniot results. Considering their inhomogeneous nature, a consistent picture emerges for the shock response of the discussed concrete and high-strength mortar materials. Hugoniot results from this and earlier work are presented for further comparison. In addition, a full parameter set for B4Q, including validated EOS parameters, is provided for use in simulations of impact and blast scenarios.
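Hugoniot data of the kind derived here are conventionally summarized by the linear shock velocity-particle velocity relation and the momentum jump condition. A worked sketch with illustrative, concrete-like placeholder parameters (not the calibrated B4Q parameter set from the paper):

```python
def hugoniot_stress(u_p, rho0, c0, s):
    """Stress on the principal Hugoniot from the linear U_s-u_p relation.

    U_s = c0 + s * u_p      (shock velocity, m/s)
    sigma = rho0 * U_s * u_p (momentum jump condition, Pa)

    rho0, c0, s below are hypothetical placeholder values, not the
    validated B4Q EOS parameters provided in the paper.
    """
    u_s = c0 + s * u_p
    return rho0 * u_s * u_p

# e.g. a particle velocity inside the tested 180-840 m/s range
sigma = hugoniot_stress(u_p=500.0, rho0=2400.0, c0=2800.0, s=1.5)
```

With these placeholder values the stress comes out at roughly 4.3 GPa, within the 1.1 to 7.5 GPa window the experiments cover.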


2021 ◽  
pp. 016555152110184
Author(s):  
Gunjan Chandwani ◽  
Anil Ahlawat ◽  
Gaurav Dubey

Document retrieval plays an important role in knowledge management, as it enables us to discover relevant information in existing data. This article proposes a cluster-based inverted indexing algorithm for document retrieval. First, pre-processing removes unnecessary and redundant words from the documents. Then, the documents are indexed by the cluster-based inverted indexing algorithm, which integrates the piecewise fuzzy C-means (piFCM) clustering algorithm with inverted indexing. After the documents are indexed, query matching is performed for user queries using the Bhattacharyya distance. Finally, the query is optimised using the Pearson correlation coefficient, and the relevant documents are retrieved. The performance of the proposed algorithm is analysed on the WebKB and Twenty Newsgroups data sets. The analysis shows that the proposed algorithm offers high performance, with a precision of 1, a recall of 0.70 and an F-measure of 0.8235. The proposed document retrieval system retrieves the most relevant documents and speeds up the storage and retrieval of information.
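The Bhattacharyya distance used for query matching is a standard measure between discrete distributions (here, presumably normalized term distributions of query and document; the toy vectors below are illustrative):

```python
import math

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete distributions.

    The coefficient sums sqrt(p_i * q_i) over the shared support;
    identical distributions give coefficient 1 and hence distance 0,
    and the distance grows as the distributions diverge.
    """
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return -math.log(bc)
```

A lower distance between the query's and a document's term distributions would rank that document higher.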


2018 ◽  
Vol 10 (8) ◽  
pp. 80
Author(s):  
Lei Zhang ◽  
Xiaoli Zhi

Convolutional neural networks (CNNs) have made great progress in face detection. They mostly take computation-intensive networks as the backbone in order to obtain high precision, and they cannot achieve a good detection speed without the support of high-performance GPUs (graphics processing units). This limits CNN-based face detection algorithms in real applications, especially speed-dependent ones. To alleviate this problem, we propose a lightweight face detector which takes a fast residual network as its backbone. Our method runs fast even on cheap, ordinary GPUs. To guarantee detection precision, multi-scale features and multi-level context are fully exploited in efficient ways. Specifically, feature fusion is first used to obtain semantically strong multi-scale features. Then context, both local and global, is added to these multi-scale features without extra computational burden: the local context through a depthwise-separable-convolution-based approach, and the global context by simple global average pooling. Experimental results show that our method runs at about 110 fps on VGA (video graphics array) resolution images, while maintaining competitive precision on the WIDER FACE and FDDB (Face Detection Data Set and Benchmark) datasets compared with its state-of-the-art counterparts.
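The "simple global average pooling way" of injecting global context can be sketched in a few lines: each channel's global mean is computed and broadcast back onto the feature map. A numpy stand-in for the tensor operation (the fusion details of the actual detector are assumptions not spelled out in the abstract):

```python
import numpy as np

def add_global_context(feat):
    """Add global context to a (C, H, W) feature map via global average pooling.

    The per-channel global mean acts as an image-level descriptor and is
    broadcast-added to every spatial position; cost is O(C*H*W), i.e.
    negligible next to the convolutions it accompanies.
    """
    gap = feat.mean(axis=(1, 2), keepdims=True)  # (C, 1, 1) global descriptor
    return feat + gap                            # broadcast over H and W
```

This is why the abstract can claim global context "without extra computational burden": pooling and a broadcast add are essentially free relative to the backbone.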


Author(s):  
Felix Grimm ◽  
Roland Ewert ◽  
Jürgen Dierke ◽  
Berthold Noll ◽  
Manfred Aigner

A new, highly efficient hybrid CFD/CAA approach for broadband combustion noise modeling is introduced. The inherent sound source generation mechanism is based on turbulent flow field statistics, which are determined from reacting RANS calculations. The generated sources form the right-hand side of the linearized Euler equations for the calculation of sound fields. The stochastic time-domain source reconstruction algorithm is briefly described, with emphasis on two different ways of spatial discretization: RPM (Random Particle Method) and the newly developed FRPM (Fast RPM). The application of mainly the latter technique to combustion noise (CN) prediction, together with several methodical advances, is presented in the paper. (F)RPM-CN is verified in terms of its ability to accurately reproduce prescribed turbulence-induced one- and two-point statistics for a generic test case and the DLR-A jet flame validation case. Former work on RPM-CN has been revised, and as a consequence methodical improvements are introduced along with the progression to FRPM-CN: a canonical CAA setup is used for the DLR-A, DLR-B and H3 flame applications. Furthermore, a second-order Langevin decorrelation model is introduced for FRPM-CN to avoid spurious high-frequency noise, and a new calibration parameter set for reacting jet noise prediction with (F)RPM-CN is proposed. The analysis shows the universality of this parameter set for 2D jet flame applications and, furthermore, the method's Reynolds scalability. In this context, a Mach number scaling law is used to conserve Strouhal similarity of the jet flame spectra. Finally, the numerical results are compared to suitable similarity spectra.
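Conserving Strouhal similarity means holding St = f L / U fixed when rescaling a spectrum between jets. A one-line sketch of the implied frequency scaling (the specific Mach number scaling law of the paper is not reproduced; at a fixed speed of sound, a Mach ratio reduces to the velocity ratio used here):

```python
def scale_frequency(f_ref, u_ref, l_ref, u_new, l_new):
    """Rescale a spectral frequency so the Strouhal number is conserved.

    St = f * L / U; holding St fixed across two jet configurations gives
    f_new = f_ref * (u_new / u_ref) * (l_ref / l_new).
    All numeric inputs in any usage are illustrative.
    """
    return f_ref * (u_new / u_ref) * (l_ref / l_new)
```

For example, doubling the jet exit velocity at a fixed nozzle diameter shifts a spectral peak to twice the frequency.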


2021 ◽  
Author(s):  
Oliver Stenzel ◽  
Robin Thor ◽  
Martin Hilchenbach

Orbital laser altimeters deliver a plethora of data that is used to map planetary surfaces [1] and to understand the interiors of solar system bodies [2]. The accuracy and precision of laser altimetry measurements depend on the knowledge of spacecraft position and pointing and on the instrument itself. Both are important for the retrieval of tidal parameters. In order to assess the quality of the altimeter retrievals, we are training and implementing an artificial neural network (ANN) to identify and exclude from analysis scans which yield erroneous data. The implementation is based on the PyTorch framework [3]. We present our results for the MESSENGER Mercury Laser Altimeter (MLA) data set [4], but also in view of the future analysis of data from the BepiColombo Laser Altimeter (BELA), which will arrive in orbit around Mercury in 2025 on board the Mercury Planetary Orbiter [5,6]. We further explore conventional methods of error identification and compare these with the machine learning results. Short periods of large residuals, or of large variation of residuals, are identified and used to detect erroneous measurements. Furthermore, long-period systematics, such as those caused by slow variations in instrument pointing, can be modelled by including additional parameters.

[1] Zuber, Maria T., David E. Smith, Roger J. Phillips, Sean C. Solomon, Gregory A. Neumann, Steven A. Hauck, Stanton J. Peale, et al. 'Topography of the Northern Hemisphere of Mercury from MESSENGER Laser Altimetry'. Science 336, no. 6078 (13 April 2012): 217–20. https://doi.org/10.1126/science.1218805.
[2] Thor, Robin N., Reinald Kallenbach, Ulrich R. Christensen, Philipp Gläser, Alexander Stark, Gregor Steinbrügge, and Jürgen Oberst. 'Determination of the Lunar Body Tide from Global Laser Altimetry Data'. Journal of Geodesy 95, no. 1 (23 December 2020): 4. https://doi.org/10.1007/s00190-020-01455-8.
[3] Paszke, Adam, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, et al. 'PyTorch: An Imperative Style, High-Performance Deep Learning Library'. Advances in Neural Information Processing Systems 32 (2019): 8026–37.
[4] Cavanaugh, John F., James C. Smith, Xiaoli Sun, Arlin E. Bartels, Luis Ramos-Izquierdo, Danny J. Krebs, Jan F. McGarry, et al. 'The Mercury Laser Altimeter Instrument for the MESSENGER Mission'. Space Science Reviews 131, no. 1 (1 August 2007): 451–79. https://doi.org/10.1007/s11214-007-9273-4.
[5] Thomas, N., T. Spohn, J.-P. Barriot, W. Benz, G. Beutler, U. Christensen, V. Dehant, et al. 'The BepiColombo Laser Altimeter (BELA): Concept and Baseline Design'. Planetary and Space Science 55, no. 10 (1 July 2007): 1398–1413. https://doi.org/10.1016/j.pss.2007.03.003.
[6] Benkhoff, Johannes, Jan van Casteren, Hajime Hayakawa, Masaki Fujimoto, Harri Laakso, Mauro Novara, Paolo Ferri, Helen R. Middleton, and Ruth Ziethe. 'BepiColombo—Comprehensive Exploration of Mercury: Mission Overview and Science Goals'. Planetary and Space Science 58, no. 1 (1 January 2010): 2–20. https://doi.org/10.1016/j.pss.2009.09.020.
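The conventional error-identification baseline described above, flagging short periods of large residuals or of large residual variation, can be sketched directly; the thresholding rule below (robust sigma plus a rolling standard deviation) is an assumed, illustrative choice, not the authors' exact criterion, and it stands in for the ANN rather than reproducing it:

```python
import numpy as np

def flag_bad_scans(residuals, window=5, k=3.0):
    """Flag altimeter scans whose residuals are anomalously large or noisy.

    A scan is rejected when its residual exceeds k robust standard
    deviations (MAD-based), or when the rolling spread of residuals
    around it blows up. window and k are illustrative defaults.
    """
    r = np.asarray(residuals, dtype=float)
    mad = np.median(np.abs(r - np.median(r)))    # robust spread estimate
    sigma = 1.4826 * mad + 1e-12
    large = np.abs(r - np.median(r)) > k * sigma  # large residuals
    # rolling standard deviation to catch "large variation" periods
    pad = window // 2
    rp = np.pad(r, pad, mode="edge")
    roll_std = np.array([rp[i:i + window].std() for i in range(r.size)])
    noisy = roll_std > k * sigma
    return large | noisy
```

Scans flagged this way would be excluded from the tidal-parameter fit; the trained ANN is meant to replace exactly this kind of hand-tuned rule.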


2020 ◽  
Vol 12 (24) ◽  
pp. 4025
Author(s):  
Rongshu Tao ◽  
Yuming Xiang ◽  
Hongjian You

As an essential step in 3D reconstruction, stereo matching still faces non-negligible problems owing to the high resolution and complex structures of remote sensing images. Especially in areas occluded by tall buildings and in textureless areas such as water and woods, precise disparity estimation is a difficult but important task. In this paper, we develop a novel edge-sense bidirectional pyramid stereo matching network to solve these problems. The cost volume is constructed from negative to positive disparities, since the disparity range in remote sensing images varies greatly and traditional deep learning networks work well only for positive disparities. Then, occlusion-aware maps based on the forward-backward consistency assumption are applied to reduce the influence of occluded areas. Moreover, we design an edge-sense smoothness loss to improve performance in textureless areas while maintaining the main structure. The proposed network is compared with two baselines. The experimental results show that our method outperforms DenseMapNet and PSMNet in terms of averaged endpoint error (EPE) and the fraction of erroneous pixels (D1), and the improvements in occluded and textureless areas are significant.
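The forward-backward (left-right) consistency assumption behind the occlusion-aware maps is the classic check: a left-view pixel maps via its disparity into the right view, and if the right view's disparity there disagrees beyond a tolerance, the pixel is marked occluded. A minimal sketch (sign convention and tolerance are illustrative):

```python
import numpy as np

def occlusion_mask(disp_left, disp_right, tau=1.0):
    """Occlusion map from the forward-backward consistency assumption.

    Pixel x in the left view maps to x - d_L(x) in the right view; if
    d_R at that location disagrees with d_L(x) by more than tau, the
    pixel is flagged occluded. Coordinates are clipped at the border.
    """
    h, w = disp_left.shape
    xs = np.arange(w)
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        xr = np.clip(np.round(xs - disp_left[y]).astype(int), 0, w - 1)
        mask[y] = np.abs(disp_left[y] - disp_right[y, xr]) > tau
    return mask
```

In the network described above, such a mask would down-weight the loss (or the cost aggregation) in flagged regions rather than discard pixels outright.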

