Mastcam Image Resolution Enhancement with Application to Disparity Map Generation for Stereo Images with Different Resolutions

Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3526 ◽  
Author(s):  
Ayhan ◽  
Kwan

In this paper, we introduce an in-depth application of high-resolution disparity map estimation using stereo images from the Mars Curiosity rover’s Mastcams, which have two imagers with different resolutions. The left Mastcam has three times lower resolution than the right. The left Mastcam image’s resolution is first enhanced with three methods: bicubic interpolation, a pansharpening-based method, and a deep learning super-resolution method. The enhanced left camera image and the right camera image are then used to estimate the disparity map, and the impact of the left camera image enhancement is examined. The comparative performance analyses showed that enhancing the left camera image yields more accurate disparity maps than using the original left Mastcam images for disparity map estimation. The deep learning-based method performed best among the three in both image enhancement and disparity map estimation accuracy. A high-resolution disparity map, obtained by enhancing the left camera image, is anticipated to improve science products derived from Mastcam imagery, such as 3D scene reconstructions, depth maps, and anaglyph images.
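The enhance-then-match pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `scipy.ndimage.zoom` with `order=3` stands in for bicubic interpolation, a naive SSD block matcher stands in for the actual disparity estimator, and the image data and the 3x resolution ratio are synthetic.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
right = rng.random((30, 30))       # toy full-resolution right image
left_lowres = right[::3, ::3]      # stand-in for the 3x-coarser left imager

# Step 1: enhance the left image to the right image's grid (bicubic, order=3).
left_up = zoom(left_lowres, 3, order=3)
assert left_up.shape == right.shape

# Step 2: estimate disparity with naive winner-take-all block matching.
def disparity_ssd(left, right, max_d=5, win=3):
    """Toy SSD matcher: for each left pixel, pick the best horizontal shift."""
    pad = win // 2
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(pad, h - pad):
        for x in range(pad + max_d, w - pad):
            patch = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
            costs = [np.sum((patch - right[y - pad:y + pad + 1,
                                           x - d - pad:x - d + pad + 1]) ** 2)
                     for d in range(max_d + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic check: a left image shifted by 2 px should yield disparity 2.
left_shifted = np.roll(right, 2, axis=1)
disp = disparity_ssd(left_shifted, right)
```

With identical, merely shifted content the toy matcher recovers the 2 px disparity throughout the interior; on real enhanced imagery a robust method such as SGM would take its place.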

Author(s):  
Omer C. Gurol ◽  
Secil Oztürk ◽  
Burak Acar ◽  
Bulent Sankur ◽  
Mehmet Guney

2021 ◽  
pp. 20200553
Author(s):  
Yuki Sakai ◽  
Erina Kitamoto ◽  
Kazutoshi Okamura ◽  
Masato Tatsumi ◽  
Takashi Shirasaka ◽  
...  

Objectives: This study aimed to improve the performance of the metal artefact reduction (MAR) algorithm for the oral cavity by assessing the effect of acquisition and reconstruction parameters on an ultra-high-resolution CT (UHRCT) scanner. Methods: A mandible tooth phantom, with and without a lesion, was scanned in super-high-resolution, high-resolution (HR), and normal-resolution (NR) modes. Images were reconstructed with deep learning-based reconstruction (DLR) and hybrid iterative reconstruction (HIR) using the MAR algorithm. Two dental radiologists independently graded the degree of metal artefact (1, very severe; 5, minimal) and lesion shape reproducibility (1, slight; 5, almost perfect). The signal-to-artefact ratio (SAR), the accuracy of the CT number of the lesion, and image noise were calculated quantitatively. The Tukey-Kramer method with a p-value of less than 0.05 was used to determine statistical significance. Results: The HRDLR visual score was better than the NRHIR score in terms of degree of metal artefact (4.6 ± 0.5 vs. 2.6 ± 0.5, p < 0.0001) and lesion shape reproducibility (4.5 ± 0.5 vs. 2.9 ± 1.1, p = 0.0005). The SAR of HRDLR was significantly better than that of NRHIR (4.9 ± 0.4 vs. 2.1 ± 0.2, p < 0.0001), and the absolute percentage error of the CT number was lower in HRDLR than in NRHIR (0.8% vs. 23.8%). The image noise of HRDLR was lower than that of NRHIR (15.7 ± 1.4 vs. 51.6 ± 15.3, p < 0.0001). Conclusions: Our study demonstrated that the combination of HR mode and DLR on a UHRCT scanner improved the performance of the MAR algorithm in the oral cavity.
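The quantitative metrics reported above can be illustrated with a short sketch. The ROI values and the SAR formula below are assumptions for illustration only (mean CT number of a signal ROI divided by the standard deviation of an artefact-affected ROI); the study's actual ROI placement and definition may differ.

```python
import numpy as np

# Hypothetical ROI samples in Hounsfield units (not the study's data).
signal_roi = np.array([1200.0, 1195.0, 1210.0, 1205.0])   # e.g. tooth region
artefact_roi = np.array([-80.0, 120.0, -50.0, 90.0])      # streak-affected region

# Assumed definition: SAR = |mean signal| / SD of the artefact ROI.
sar = abs(signal_roi.mean()) / artefact_roi.std(ddof=1)

# Absolute percentage error of a measured CT number vs. a reference value.
reference_hu, measured_hu = 50.0, 61.9                    # hypothetical numbers
ape = abs(measured_hu - reference_hu) / abs(reference_hu) * 100
```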


2018 ◽  
Vol 10 (10) ◽  
pp. 1542 ◽  
Author(s):  
Livia Piermattei ◽  
Mauro Marty ◽  
Wilfried Karel ◽  
Camillo Ressl ◽  
Markus Hollaus ◽  
...  

This work focuses on the accuracy estimation of canopy height models (CHMs) derived from image matching of Pléiades stereo imagery over forested mountain areas. To determine the height above ground, and hence canopy height in forest areas, we use normalised digital surface models (nDSMs), computed as the differences between external high-resolution digital terrain models (DTMs) and digital surface models (DSMs) from Pléiades image matching. With the overall goal of testing the operational feasibility of Pléiades images for forest monitoring over mountain areas, two questions guide this work, whose answers can help identify the optimal acquisition planning to derive CHMs. Specifically, we want to assess (1) the benefit of using tri-stereo images instead of stereo pairs, and (2) the impact of different viewing angles and topography. To answer the first question, we acquired new Pléiades data over a study site in Canton Ticino (Switzerland) and compare the accuracies of CHMs from Pléiades tri-stereo and from each stereo pair combination. To answer the second, we investigate different viewing angles over a study area near Ljubljana (Slovenia), where three stereo pairs were acquired at one-day offsets. We focus the analyses on open stable areas and on tree-covered areas. To evaluate the accuracy of Pléiades CHMs, we use CHMs from aerial image matching and airborne laser scanning as references for the Ticino and Ljubljana study areas, respectively. For the two study areas, the statistics of the nDSMs in stable areas show median values close to the expected value of zero. The smallest standard deviation based on the median of absolute differences (σMAD) was 0.80 m for the forward-backward image pair in Ticino and 0.29 m in Ljubljana for the stereo images with the smallest absolute across-track angle (−5.3°).
The differences between the highest-accuracy Pléiades CHMs and their reference CHMs show a median of 0.02 m in Ticino with a σMAD of 1.90 m, and in Ljubljana a median of 0.32 m with a σMAD of 3.79 m. The discrepancies between these results are most likely attributable to differences in forest structure, particularly tree height, density, and forest gaps. Furthermore, it should be taken into account that temporal vegetation changes between the Pléiades and reference data acquisitions introduce additional, spurious CHM differences. Overall, for a narrow forward–backward convergence angle (12°), and based on the software and workflow used to generate the nDSMs from the Pléiades images, the results show that the differences between tri-stereo and stereo matching are rather small in terms of accuracy and completeness of the CHMs/nDSMs. Therefore, a small convergence angle does not constitute a major limiting factor. More relevant is the impact of a large across-track angle (19°), which considerably reduces the quality of the Pléiades CHMs/nDSMs.
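The nDSM differencing and the robust σMAD statistic used above can be sketched in a few lines. The heights are toy values; 1.4826 is the standard scale factor that makes the median of absolute differences consistent with the standard deviation for normally distributed errors.

```python
import numpy as np

# Toy surface and terrain heights (m); nDSM = DSM - DTM = height above ground.
dsm = np.array([420.0, 431.5, 445.2])
dtm = np.array([419.8, 415.0, 430.0])
ndsm = dsm - dtm                       # ~[0.2, 16.5, 15.2]: ground, two trees

# Robust sigma from the median of absolute differences (sigma_MAD),
# computed here on toy height differences over stable areas (one outlier).
dh = np.array([0.1, -0.2, 0.0, 0.3, -0.1, 5.0])
sigma_mad = 1.4826 * np.median(np.abs(dh - np.median(dh)))
```

Unlike the plain standard deviation, σMAD is barely affected by the single 5 m outlier, which is why it is the preferred spread measure for matching-derived height differences.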


2021 ◽  
Author(s):  
Rilwan A. Adewoyin ◽  
Peter Dueben ◽  
Peter Watson ◽  
Yulan He ◽  
Ritabrata Dutta

Abstract. Climate models (CMs) are used to evaluate the impact of climate change on the risk of floods and heavy precipitation events. However, these numerical simulators produce outputs with low spatial resolution that have difficulty representing precipitation events accurately. This is mainly due to computational limitations on the spatial resolution used when simulating multi-scale weather dynamics in the atmosphere. To improve the prediction of high-resolution precipitation, we apply a deep learning (DL) approach using input data from a reanalysis product that is comparable to a climate model’s output but can be directly related to precipitation observations at a given time and location. Further, our input excludes local precipitation but includes model fields (weather variables) that are more predictable and generalizable than local precipitation. To this end, we present TRU-NET (Temporal Recurrent U-Net), an encoder-decoder model featuring a novel 2D cross-attention mechanism between contiguous convolutional-recurrent layers to effectively model multi-scale spatio-temporal weather processes. We also propose a non-stochastic variant of the conditional-continuous (CC) loss function to capture the zero-skewed patterns of rainfall. Experiments show that our models, trained with our CC loss, consistently attain lower RMSE and MAE scores than a DL model prevalent in precipitation downscaling and outperform a state-of-the-art dynamical weather model. Moreover, by evaluating the performance of our model under various data formulation strategies for the training and test sets, we show that there is enough data for our deep learning approach to produce robust, high-quality results across seasons and varying regions.
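A minimal sketch of what a conditional-continuous style loss might look like, assuming a common decomposition (not necessarily the exact TRU-NET formulation): a classification term for rain occurrence plus a regression term applied only where rain actually occurred, which avoids penalizing amount predictions on the many zero-rain samples.

```python
import numpy as np

def cc_loss(p_rain, pred_amount, y, eps=1e-7):
    """Assumed CC-style loss: binary cross-entropy on rain occurrence plus
    squared error on the amount, conditioned on rain actually occurring."""
    occurred = (y > 0).astype(float)
    bce = -(occurred * np.log(p_rain + eps)
            + (1.0 - occurred) * np.log(1.0 - p_rain + eps))
    mse = occurred * (pred_amount - y) ** 2
    return float(np.mean(bce + mse))

# A confident, correct forecast scores lower than a poor one.
y = np.array([0.0, 2.0, 0.0, 5.0])   # observed rainfall (mm), mostly zero
good = cc_loss(np.array([0.01, 0.99, 0.05, 0.95]),
               np.array([0.0, 2.0, 0.0, 5.0]), y)
bad = cc_loss(np.array([0.9, 0.5, 0.8, 0.1]),
              np.array([1.0, 0.0, 2.0, 0.0]), y)
```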


Author(s):  
N. Tatar ◽  
M. Saadatseresht ◽  
H. Arefi

The Semi-Global Matching (SGM) algorithm is known as a high-performance, reliable stereo matching algorithm in the photogrammetry community. However, there are some challenges in using this algorithm, especially for high-resolution satellite stereo images over urban areas and images with shadowed regions. The SGM algorithm computes highly noisy disparity values in shadow areas around tall buildings due to mismatching in these low-entropy areas. In this paper, a new method is developed to refine the disparity map in shadow areas. The method integrates panchromatic and multispectral image data to detect shadow areas at the object level. In addition, RANSAC plane fitting and morphological filtering are employed to refine the disparity map. The results on a GeoEye-1 stereo pair captured over the city of Qom, Iran, show a significant increase in the rate of matched pixels compared to the standard SGM algorithm.
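The shadow detection and morphological clean-up step can be sketched as follows. This is an illustrative stand-in, not the authors' method: a fixed intensity threshold on a toy panchromatic band (real detectors would combine panchromatic and multispectral evidence), followed by morphological opening and closing to drop isolated dark pixels and fill small holes. The resulting mask would then delimit where RANSAC plane fitting refines the disparities.

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

# Toy panchromatic band in [0, 1]; a dark block stands in for a building shadow.
pan = np.full((20, 20), 0.5)
pan[5:12, 5:12] = 0.05

shadow = pan < 0.15                       # crude intensity threshold (assumption)
struct = np.ones((3, 3), dtype=bool)
# Opening removes isolated dark pixels; closing fills small holes in the mask.
shadow = binary_closing(binary_opening(shadow, struct), struct)
```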


Author(s):  
N. Tatar ◽  
H. Arefi

Abstract. Dense stereo processing requires a critical step called cost aggregation or cost optimization. Most cost aggregation methods are evaluated on close-range stereo images from the Middlebury or KITTI datasets, whereas the effect of cost aggregation on high-resolution satellite stereo processing has not yet been sufficiently evaluated. In this paper, three typical cost aggregation methods, together with an approach that combines them, are evaluated on high-resolution satellite stereo images and compared against LiDAR ground truth. These methods are Semi-Global Matching (SGM), Guided Filtering (GF), iterative GF (IGF), and SGM followed by GF (SGM-GF), each with the Census and Zero-mean Normalized Cross-Correlation (ZNCC) cost functions. Although the Census cost function performs well on object borders and shows low blurring effects, the results of both cost functions, i.e. Census and ZNCC, behave similarly across all stereo methods. Also, to make an impartial assessment, we do not perform any disparity map refinement for any of the stereo methods. The bad-pixel rate, defined as an absolute height error greater than 2 meters, is 36.7%, 34.8%, 33.8%, and 28.6% for the SGM, GF, IGF, and SGM-GF methods, respectively. Also, the Normalized Median Absolute Deviation (NMAD) error for SGM, GF, IGF, and SGM-GF is 1.29, 1.15, 1.06, and 0.94 meters, respectively. Overall, the experimental results on WV III stereo images demonstrate that the SGM method has the lowest accuracy and the SGM-GF method is more accurate than the other methods.
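The two accuracy measures reported above can be computed in a few lines. The height errors below are toy values; 1.4826 is the usual factor that makes NMAD comparable to a Gaussian standard deviation.

```python
import numpy as np

# Toy height errors (m) between a matched DSM and LiDAR ground truth.
err = np.array([0.5, -1.0, 2.5, 0.2, -3.0, 1.1, -0.4, 0.9])

# Bad-pixel rate: share of pixels with |height error| > 2 m, in percent.
bad_pixel_rate = float(np.mean(np.abs(err) > 2.0) * 100)

# Normalized Median Absolute Deviation of the errors.
nmad = 1.4826 * np.median(np.abs(err - np.median(err)))
```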


Author(s):  
H.S. von Harrach ◽  
D.E. Jesson ◽  
S.J. Pennycook

Phase contrast TEM has been the leading technique for high-resolution imaging of materials for many years, whilst STEM has been the principal method for high-resolution microanalysis. However, it was demonstrated many years ago that low-angle dark-field STEM imaging is a priori capable of almost 50% higher point resolution than coherent bright-field imaging (i.e. phase contrast TEM or STEM). This advantage was not exploited until Pennycook developed the high-angle annular dark-field (ADF) technique, which can provide an incoherent image showing both high image resolution and atomic number contrast. This paper describes the design and first results of a 300 kV field-emission STEM (VG Microscopes HB603U) which has improved ADF STEM image resolution towards the 1 Å target. The instrument uses a cold field-emission gun, generating a 300 kV beam of up to 1 μA from an 11-stage accelerator. The beam is focussed onto the specimen by two condensers and a condenser-objective lens with a spherical aberration coefficient of 1.0 mm.


Author(s):  
N. D. Browning ◽  
M. M. McGibbon ◽  
M. F. Chisholm ◽  
S. J. Pennycook

The recent development of the Z-contrast imaging technique for the VG HB501 UX dedicated STEM has added a high-resolution imaging facility to a microscope used mainly for microanalysis. This imaging technique not only provides a high-resolution reference image but, as it can be performed simultaneously with electron energy loss spectroscopy (EELS), can be used to position the electron probe at the atomic scale. The spatial resolution of both the image and the energy loss spectrum can be identical and, in principle, limited only by the 2.2 Å probe size of the microscope. There now exists, therefore, the possibility of performing chemical analysis of materials on the scale of single atomic columns or planes. In order to achieve atomic-resolution energy loss spectroscopy, the range over which a fast electron can cause a particular excitation event must be less than the interatomic spacing. This range is described classically by the impact parameter, b, which ranges from ~10 Å for the low-loss region of the spectrum to <1 Å for the core losses.

