Novel Image State Ensemble Decomposition Method for M87 Imaging

2020, Vol 10 (4), pp. 1535
Author(s): Timothy Ryan Taylor, Chun-Tang Chao, Juing-Shian Chiou

This paper proposes a new method of image decomposition with a filtering capability. The image state ensemble decomposition (ISED) method has generative capabilities: it removes a discrete ensemble of quanta from an image to produce a range of filters and images from a single red, green, and blue (RGB) input image. The method enhances images because ISED is a spatial-domain filter that transforms or eliminates image regions with detrimental effects, such as noise, glare, and image artifacts, while also improving the aesthetics of the image. ISED was used to generate 126 images from two tagged image file (TIF) images of M87 taken by the Spitzer Space Telescope. The images were analyzed with various full-reference and no-reference quality metrics, as well as histograms and color clouds. In most instances, the no-reference quality metrics of the generated images were superior to those of the two original images. Select ISED images revealed previously unknown galactic structures, reduced glare, and enhanced contrast, with good overall performance.
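The abstract does not spell out how the "ensemble of quanta" is removed, but the idea of splitting an image into discrete intensity states and zeroing out a chosen subset can be sketched as follows. This is a minimal numpy illustration assuming equal-width banding of the 8-bit range; the function names and the number of states are our own, not the paper's:

```python
import numpy as np

def state_ensemble_slices(img, n_states=8):
    """Split an 8-bit image into n_states disjoint intensity bands ("states")."""
    edges = np.linspace(0, 256, n_states + 1)
    slices = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (img >= lo) & (img < hi)
        # Keep pixels whose value falls in [lo, hi); zero everything else.
        slices.append(np.where(mask, img, 0).astype(img.dtype))
    return slices

def remove_states(img, drop, n_states=8):
    """Filter an image by discarding the intensity states listed in `drop`."""
    slices = state_ensemble_slices(img, n_states)
    kept = [s for i, s in enumerate(slices) if i not in drop]
    # The bands are disjoint, so summing the kept slices rebuilds the rest.
    return np.sum(kept, axis=0).astype(img.dtype)
```

Dropping different subsets of states from the same RGB input would yield the kind of image family the paper generates; removing a band that contains glare or noise acts as the spatial-domain filter the abstract describes.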

2020, Vol 10 (11), pp. 3952
Author(s): Timothy Ryan Taylor, Chun-Tang Chao, Juing-Shian Chiou

Standard spatial-domain filters fail to adequately denoise and enhance the contrast of an image; they suffer from drawbacks such as oversmoothing, diminished texture, and a lack of generative capabilities. This paper proposes a new method of image reconstruction, Image State Ensemble Enhancement (ISEE), based on our previous work, Image State Ensemble Decomposition (ISED). Deep-level ISEE and ISED have been developed to produce a class of filters that addresses these issues. Full-reference and no-reference quality metrics are used to assess the images: the full-reference metrics showed a marked improvement, while the no-reference metrics were often better than those of the test image. The test image was taken by the Spitzer Space Telescope (SST), and ISEE reconstruction yielded improved structural detail over both ISED and the original test image. Glare and noise were reduced in a narrow bandwidth, which led to the discovery of a vortex-shaped structure and an outburst in M87's dusty infrared core. The vortex is located over M87's visible core and black hole, as verified with an ISEE-processed overlay of SST and Hubble Space Telescope (HST) images. A counter-jet channel was also discovered; it appears to be the path of the unobservable superluminal counter-jet.


2021, Vol 11 (19), pp. 9197
Author(s): Muhammad Tahir, Saeed Anwar

Person re-identification is an essential task in computer vision, particularly in surveillance applications: the aim is to identify a person in surveillance photographs, given an input image, across various scenarios. Most person re-ID techniques utilize Convolutional Neural Networks (CNNs); however, Vision Transformers are replacing pure CNNs in various computer vision tasks such as object recognition and classification. Vision Transformers capture information about local regions of the image, and current techniques exploit this to improve accuracy on the task at hand. We propose using Vision Transformers in conjunction with vanilla CNN models to investigate the true strength of transformers in person re-identification. We employ three backbones with different combinations of Vision Transformers on two benchmark datasets. The overall performance of the backbones increased, showing the importance of Vision Transformers. We provide ablation studies and show the importance of various components of Vision Transformers in re-identification tasks.
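The abstract does not specify how the transformer and CNN branches are combined. A common fusion is to concatenate the two embeddings, L2-normalize, and rank gallery images by cosine similarity; the sketch below assumes exactly that (all names are hypothetical, and plain numpy vectors stand in for actual backbone outputs):

```python
import numpy as np

def fuse_features(cnn_feat, vit_feat):
    """Concatenate CNN and ViT embeddings and L2-normalize for re-ID matching."""
    f = np.concatenate([cnn_feat, vit_feat], axis=-1)
    return f / np.linalg.norm(f, axis=-1, keepdims=True)

def reid_rank(query, gallery):
    """Rank gallery embeddings by cosine similarity to the query (best first)."""
    sims = gallery @ query  # unit vectors, so dot product == cosine similarity
    return np.argsort(-sims)
```

With normalized fused features, retrieval reduces to a dot product, which is how re-ID benchmarks typically compute rank-1 accuracy and mAP.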


2019
Author(s): Anastasia Zvezdakova, Sergey Zvezdakov, Dmitriy Kulikov, ...

Video quality measurement plays an important role in many applications. Full-reference quality metrics, which are usually used in video codec comparisons, are expected to reflect any changes in videos. In this article, we consider color corrections of compressed videos that increase the values of the full-reference metric VMAF while barely decreasing another widely used metric, SSIM. The proposed video contrast enhancement approach demonstrates that the metric is inapplicable in some cases for video codec comparisons, as it can be used for cheating in comparisons by tuning encoders to inflate the metric's values.
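Computing VMAF requires libvmaf/FFmpeg, but the SSIM half of the claim can be sketched in plain numpy: a mild linear contrast boost visibly changes pixel values while leaving SSIM essentially unchanged. Note this is a simplified single-window SSIM (no Gaussian sliding window) for illustration only, and the gain value is an arbitrary assumption, not the tuning used in the paper:

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    """Simplified SSIM computed over the whole image as a single window."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # standard stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def mild_contrast_boost(img, gain=1.08):
    """Linear contrast stretch around the mean, the kind of tweak the paper exploits."""
    f = img.astype(np.float64)
    out = (f - f.mean()) * gain + f.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the boost barely perturbs the mean, variance ratio, and covariance structure, SSIM stays near 1.0 even though the pixels have changed, which is exactly the insensitivity the authors exploit against VMAF.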


2017, Vol 21 (4), pp. 997-1012
Author(s): Nazeer Muhammad, Nargis Bibi, Iqbal Qasim, Adnan Jahangir, Zahid Mahmood

Electronics, 2019, Vol 8 (8), pp. 850
Author(s): Caleb Vununu, Suk-Hwan Lee, Oh-Jun Kwon, Ki-Ryong Kwon

The complete analysis of images of type 2 human epithelial cells, commonly referred to as HEp-2 cells, is one of the most important tasks in the diagnosis of various autoimmune diseases. The problem of automatically classifying these images has been widely discussed since the advent of deep learning-based methods. Certain HEp-2 cell image datasets exhibit extreme complexity due to their significant heterogeneity. We propose in this work a method that specifically tackles this disparity. A dynamic learning process is conducted with different networks taking different input variations in parallel. In order to emphasize localized changes in intensity, the discrete wavelet transform is used to produce different versions of the input image. The approximation and detail coefficients are fed to four different deep networks in a parallel learning paradigm in order to efficiently homogenize the features extracted from images that have different intensity levels. The feature maps from these networks are then concatenated and passed to the classification layers to produce the final class of the cellular image. The proposed method was tested on a public dataset that comprises images from two intensity levels. The significant heterogeneity of this dataset limits the discrimination results of some state-of-the-art deep learning-based methods. We have conducted a comparative study with these methods in order to demonstrate how the dynamic learning proposed in this work manages to significantly minimize this heterogeneity-related problem, thus boosting the discrimination results.
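The discrete wavelet transform step can be sketched with a one-level 2-D Haar transform, which yields the approximation (LL) and three detail (LH, HL, HH) subbands that would each feed one of the four parallel networks. A minimal numpy version follows; the /4 averaging normalization and the LH/HL naming are conventions we assume here, and the paper may use a different wavelet:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    f = img.astype(np.float64)
    f = f[: f.shape[0] // 2 * 2, : f.shape[1] // 2 * 2]  # trim to even dims
    a = f[0::2, 0::2]  # top-left of each 2x2 block
    b = f[0::2, 1::2]  # top-right
    c = f[1::2, 0::2]  # bottom-left
    d = f[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 4.0  # approximation (local average)
    lh = (a - b + c - d) / 4.0  # horizontal detail
    hl = (a + b - c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh
```

Each subband is half the spatial size of the input; the approximation carries the overall intensity while the detail subbands isolate the localized intensity changes the paper wants to emphasize across the heterogeneous intensity levels.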

