Improving Component Substitution Pan-Sharpening Through Refinement of the Injection Detail

2020 ◽  
Vol 86 (5) ◽  
pp. 317-325 ◽  
Author(s):  
Xiaohua Li ◽  
Hao Chen ◽  
Jiliu Zhou ◽  
Yuan Wang

This article presents a novel strategy for improving the well-established component substitution-based multispectral image fusion methods, whose fused results tend to exhibit significant spectral distortion. The main cause of this spectral distortion is analyzed and discussed based on the general model of component substitution methods. An improved scheme is derived from the sensitivity imaging model to refine the approximate spatial detail and obtain one that is almost ideal. Experimental results on two data sets show that, when integrated into the Gram–Schmidt method and the generalized intensity-hue-saturation method, the proposed scheme produces fused images with the same spatial sharpness as the standard implementations but with significantly higher spectral quality. Quantitative scores and visual inspection at both full and spatially reduced resolution confirm the superiority of the improved methods over the conventional algorithms.
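The generic component substitution model that such methods share can be sketched in a few lines: form an intensity component as a weighted sum of the MS bands, take the PAN-minus-intensity difference as the injected detail, and add it back to each band with a per-band gain. This is a minimal illustrative sketch, not the article's refined scheme; the function and parameter names are ours:

```python
import numpy as np

def cs_pansharpen(ms, pan, weights, gains):
    """Generic component-substitution fusion sketch.

    ms:      (H, W, B) multispectral bands, upsampled to the PAN grid
    pan:     (H, W) panchromatic image
    weights: (B,) coefficients forming the intensity component I
    gains:   (B,) per-band injection gains g_k
    """
    # Intensity component: weighted sum of the MS bands
    intensity = np.tensordot(ms, weights, axes=([2], [0]))
    # Approximate spatial detail: difference between PAN and intensity
    detail = pan - intensity
    # Fused band k = MS_k + g_k * detail
    return ms + gains[None, None, :] * detail[:, :, None]
```

The article's contribution is precisely in refining the `detail` term; the sketch uses the unrefined difference that causes the spectral distortion discussed above.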

Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4418 ◽  
Author(s):  
Aleksandra Sekrecka ◽  
Michal Kedzierski

Commonly used image fusion techniques generally produce good results for images obtained from the same sensor with a standard spatial resolution ratio (1:4). However, an atypically high resolution ratio reduces the effectiveness of fusion methods, decreasing the spectral or spatial quality of the sharpened image. An important issue is therefore the development of a method that maintains high spatial and spectral quality simultaneously. The authors propose to strengthen pan-sharpening methods through prior modification of the panchromatic image. Local statistics of the differences between the original panchromatic image and the intensity of the multispectral image are used to detect spatial details. The Euler number and the distance of each pixel from the nearest pixel classified as a spatial detail determine the weight of the information collected from each integrated image. The research was carried out for several pan-sharpening methods and for data sets with different levels of spectral matching. The proposed solution yields a greater improvement in spectral quality while identifying the same spatial details for most pan-sharpening methods, and is mainly dedicated to intensity-hue-saturation-based methods, for which the following improvements in spectral quality were achieved: about 30% for the urbanized area and about 15% for the non-urbanized area.
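The detail-detection step can be illustrated with a toy stand-in: mark as spatial details those pixels where the PAN-minus-intensity difference deviates strongly from its statistics. The sketch below uses global rather than the paper's local statistics, and the function name and threshold `k` are illustrative assumptions:

```python
import numpy as np

def detail_mask(pan, intensity, k=2.0):
    """Mark pixels as spatial details where the PAN-minus-intensity
    difference deviates from its mean by more than k standard
    deviations. A simplified, global-statistics stand-in for the
    paper's local-statistics detector."""
    diff = pan - intensity
    return np.abs(diff - diff.mean()) > k * diff.std()
```

In the actual method, the resulting mask would then feed the distance-based weighting of each pixel described above.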


2004 ◽  
Vol 37 (3) ◽  
pp. 399-409 ◽  
Author(s):  
Nicholas K. Sauter ◽  
Ralf W. Grosse-Kunstleve ◽  
Paul D. Adams

Improved methods for indexing diffraction patterns from macromolecular crystals are presented. The novel procedures include a more robust way to verify the position of the incident X-ray beam on the detector, an algorithm to verify that the deduced lattice basis is consistent with the observations, and an alternative approach to identify the metric symmetry of the lattice. These methods help to correct failures commonly experienced during indexing, and increase the overall success rate of the process. Rapid indexing, without the need for visual inspection, will play an important role as beamlines at synchrotron sources prepare for high-throughput automation.


2006 ◽  
Vol 23 (2) ◽  
pp. 64-68 ◽  
Author(s):  
Enno Middelberg

Abstract. Editing radio interferometer data, a process commonly known as 'flagging', can be laborious and time-consuming. One quickly tends to flag more data than actually required, sacrificing sensitivity and image fidelity in the process. I describe a program, PIEFLAG, which can analyze radio interferometer data to filter out measurements that are likely to be affected by interference. PIEFLAG uses two algorithms to allow for data sets that are dominated either by receiver noise or by source structure. Together, the algorithms detect essentially all affected data, while the amount of data that is not affected by interference but falsely marked as such is kept to a minimum. The sections marked by PIEFLAG are very similar to what an observer would deem affected in a visual inspection of the data. PIEFLAG displays its results concisely and allows the user to add and remove flags interactively. It is written in Python, is easy to install and use, and has a variety of options to adjust its algorithms to a particular observing situation. I describe how PIEFLAG works and illustrate its effect using data from typical observations.
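The noise-dominated case can be loosely illustrated with a robust outlier test on visibility amplitudes. The sketch below is a generic MAD-based flagger of our own devising, not PIEFLAG's actual algorithm or code:

```python
import numpy as np

def flag_rfi(amplitudes, threshold=5.0):
    """Flag samples whose amplitude deviates from the median by more
    than `threshold` robust standard deviations. The median absolute
    deviation (MAD) is used so that strong interference spikes do not
    inflate the noise estimate they are tested against."""
    med = np.median(amplitudes)
    mad = np.median(np.abs(amplitudes - med))
    robust_std = 1.4826 * mad  # MAD -> sigma for Gaussian noise
    return np.abs(amplitudes - med) > threshold * robust_std
```

A source-structure-dominated data set would first need the structure model subtracted, which is the motivation for PIEFLAG's second algorithm.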


Author(s):  
C. Liu ◽  
Y. Zhang ◽  
Y. Ou

Abstract. Pan-sharpening refers to the technology that fuses a low-resolution multispectral (MS) image and a high-resolution panchromatic (PAN) image into a high-resolution multispectral (HRMS) image. In this paper, we propose a Component Substitution Network (CSN) for pan-sharpening. By adding a feature exchange module (FEM) to the widely used encoder-decoder framework, we design a network that follows the general procedure of traditional component substitution (CS) approaches. The encoder of the network decomposes the input image into a spectral feature and a structure feature. The FEM regroups the extracted features and combines the spectral feature of the MS image with the structure feature of the PAN image. The decoder is the inverse of the encoder and reconstructs the image. The MS and PAN images share the same encoder and decoder, which makes the network robust to spectral and spatial variations. To reduce the burden of data preparation and improve performance on full-resolution data, the network is trained through semi-supervised learning with image patches at both reduced and full resolution. Experiments performed on GeoEye-1 data verify that the proposed network achieves state-of-the-art performance, and that the semi-supervised learning strategy further improves performance on full-resolution data.
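The component-substitution-in-feature-space idea can be sketched with a hand-crafted stand-in for the learned encoder: split each image into a smooth (spectral-like) part and a residual (structure-like) part, swap in the PAN structure, and decode by summation. Everything below is an illustrative analogy, not the network itself:

```python
import numpy as np

def encode(img, kernel=3):
    """Toy 'encoder': split an image into a smooth component and a
    residual component using a box-filter low-pass. A stand-in for
    the learned encoder, which extracts spectral and structure
    features instead."""
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    smooth = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            smooth[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return smooth, img - smooth

def decode(spectral, structure):
    # Toy 'decoder': exact inverse of the split above
    return spectral + structure

def cs_fuse(ms, pan):
    """Exchange features as the FEM does: keep the MS image's
    spectral part, substitute the PAN image's structure part."""
    ms_spec, _ = encode(ms)
    _, pan_struct = encode(pan)
    return decode(ms_spec, pan_struct)
```

In the actual CSN, the split is learned, the exchange happens on feature maps, and the decoder is trained rather than being an exact algebraic inverse.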


2020 ◽  
Author(s):  
RW Helms ◽  
W Ai ◽  
Jocelyn Cranefield

Online communities offer many potential sources of value to individuals and organisations. However, the effectiveness of online communities in delivering benefits such as knowledge sharing depends on the network of social relations within a community. Research in this area aims to understand and optimize such networks, yet researchers employ diverse network creation methods with little focus on the selection process, the fit of the selected method, or its relative accuracy. In this study we evaluate and compare the performance of four network creation methods. First, we review the literature to identify four network creation methods (algorithms) and their underlying assumptions. Using several data sets from an online community, we test and compare the accuracy of each method against a baseline ('actual') network determined by content analysis. We use visual inspection, network correlation analysis, and sensitivity analysis to highlight similarities and differences between the methods, and find some differences significant enough to affect study results. Based on our observations, we argue for more careful selection of network creation methods and propose two key guidelines for research into social networks that uses unstructured data from online communities. The study contributes to the rigour of the methodological decisions underpinning research in this area.


2020 ◽  
Vol 34 (07) ◽  
pp. 12460-12467
Author(s):  
Liang Xie ◽  
Chao Xiang ◽  
Zhengxu Yu ◽  
Guodong Xu ◽  
Zheng Yang ◽  
...  

LIDAR point clouds and RGB images are both essential for 3D object detection, and many state-of-the-art 3D detection algorithms are dedicated to fusing these two types of data effectively. However, fusion methods based on Bird's Eye View (BEV) or voxel formats are not accurate. In this paper, we propose a novel fusion approach named the Point-based Attentive Cont-conv Fusion (PACF) module, which fuses multi-sensor features directly on 3D points. In addition to continuous convolution, we add Point-Pooling and Attentive Aggregation operations to make the fused features more expressive. Moreover, based on the PACF module, we propose a 3D multi-sensor multi-task network called Pointcloud-Image RCNN (PI-RCNN for short), which handles both image segmentation and 3D object detection. PI-RCNN employs a segmentation sub-network to extract full-resolution semantic feature maps from images and then fuses the multi-sensor features via the PACF module. Benefiting from the effectiveness of the PACF module and the expressive semantic features from the segmentation module, PI-RCNN achieves considerable improvements in 3D object detection. We demonstrate the effectiveness of the PACF module and PI-RCNN on the KITTI 3D Detection benchmark, where our method achieves state-of-the-art results on the 3D AP metric.


Entropy ◽  
2019 ◽  
Vol 21 (9) ◽  
pp. 866 ◽  
Author(s):  
Sandra Rothe ◽  
Bastian Kudszus ◽  
Dirk Söffker

The reliability of complex or safety-critical systems is of increasing importance in several application fields. In many cases, decisions evaluating situations or conditions must be made. To ensure the high accuracy of these decisions, the assignments from different classifiers can be fused into one final decision to improve decision performance in terms of given measures such as accuracy or false alarm rate. Recent research results show that fusion methods do not always outperform individual classifiers trained and optimized for a specific situation. Nevertheless, fusion helps to ensure reliability and redundancy by combining the advantages of individual classifiers, even if some classifiers do not perform well in specific situations. Especially in unexpected (untrained) situations, fusing more than one classifier allows a suitable decision to be reached, because the classifiers behave differently in such cases. There are, however, several examples where fusion does not improve the overall accuracy of a decision. In this contribution, fusion options are discussed to overcome the aforementioned problem and to define the factors influencing overall fusion accuracy. As a result, requirements for good, guaranteed, or possibly increased fusion performance are given, along with suggestions identifying those options that do not lead to any kind of improvement. To illustrate the effects, a practical example based on three characteristics of fusion methods (type of classifier output, use of these outputs, and necessity of training) and four data properties (number of classes, number of samples, entropy of classes, and entropy of attributes) is considered and analyzed with 15 different benchmark data sets, which are classified with eight classification methods. The classification results are fused using seven fusion methods.
From the discussion of the results it can be concluded which fusion method performs best or worst for all data sets, as well as which fusion method characteristic or data property has a more or less positive or negative influence on fusion performance in comparison to the best base classifier. Using this information, suitable fusion methods can be selected, or data sets can be adapted, to improve the reliability of decisions made in complex or safety-critical systems.
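One of the simplest hard-label fusion methods of the kind compared in such studies is majority voting over the classifiers' outputs. A minimal sketch (illustrative, not the authors' implementation):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse hard labels by majority vote.

    predictions: list of per-classifier label lists,
                 shape (n_classifiers, n_samples)
    Returns one fused label per sample; ties resolve to the label
    first reaching the top count.
    """
    fused = []
    for sample_labels in zip(*predictions):
        fused.append(Counter(sample_labels).most_common(1)[0][0])
    return fused
```

Majority voting needs no training and only hard outputs, which places it at one corner of the three fusion-method characteristics (output type, output use, training necessity) analyzed above.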


2017 ◽  
Vol 21 (12) ◽  
pp. 6069-6089 ◽  
Author(s):  
Anne-Sophie Høyer ◽  
Giulio Vignoli ◽  
Thomas Mejer Hansen ◽  
Le Thanh Vu ◽  
Donald A. Keefer ◽  
...  

Abstract. Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical workflow to build the training image and effectively handle different types of input information to perform large-scale geostatistical modelling.


mSphere ◽  
2020 ◽  
Vol 5 (5) ◽  
Author(s):  
Artur Yakimovich ◽  
Moona Huttunen ◽  
Jerzy Samolej ◽  
Barbara Clough ◽  
Nagisa Yoshida ◽  
...  

ABSTRACT The use of deep neural networks (DNNs) for analysis of complex biomedical images shows great promise but is hampered by a lack of large verified data sets for rapid network evolution. Here, we present a novel strategy, termed “mimicry embedding,” for rapid application of neural network architecture-based analysis of pathogen imaging data sets. Embedding of a novel host-pathogen data set, such that it mimics a verified data set, enables efficient deep learning using high expressive capacity architectures and seamless architecture switching. We applied this strategy across various microbiological phenotypes, from superresolved viruses to in vitro and in vivo parasitic infections. We demonstrate that mimicry embedding enables efficient and accurate analysis of two- and three-dimensional microscopy data sets. The results suggest that transfer learning from pretrained network data may be a powerful general strategy for analysis of heterogeneous pathogen fluorescence imaging data sets. IMPORTANCE In biology, the use of deep neural networks (DNNs) for analysis of pathogen infection is hampered by a lack of large verified data sets needed for rapid network evolution. Artificial neural networks detect handwritten digits with high precision thanks to large data sets, such as MNIST, that allow nearly unlimited training. Here, we developed a novel strategy we call mimicry embedding, which allows artificial intelligence (AI)-based analysis of variable pathogen-host data sets. We show that deep learning can be used to detect and classify single pathogens based on small differences.

