Adaptive pseudo-color enhancement method of weld radiographic images based on HSI color space and self-transformation of pixels

2017 ◽  
Vol 88 (6) ◽  
pp. 065106 ◽  
Author(s):  
Hongquan Jiang ◽  
Yalin Zhao ◽  
Jianmin Gao ◽  
Zhiyong Gao
2021 ◽  
Author(s):  
HAIBIN SUN ◽  
HAIWEI LIU

Abstract To improve the visual effect and quality of haze images after fog removal, a model for color correction and repair of haze images in hue-saturation-intensity (HSI) color space, combined with machine learning, is proposed. First, a haze-image imaging model is constructed according to atmospheric scattering theory. Second, based on the HSI color space, a model for color enhancement and fog removal of haze images is proposed, and a haze image–transmittance gallery is constructed. Third, a visual dictionary of the transmittance map is obtained by training a k-means clustering algorithm with density-parameter optimization and a support vector machine optimized by a genetic algorithm. Fourth, based on the visual dictionary and the atmospheric scattering model, the haze image is repaired and defogged, and the subjective visual effects and objective evaluation indexes of color enhancement and fog removal are compared. It is concluded that the algorithm effectively preserves the detail and clarity of the image after defogging.
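The atmospheric scattering model underlying this abstract is standard and can be sketched as follows; the function name `dehaze`, its parameters, and the transmission floor `t_min` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def dehaze(hazy, transmission, airlight, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    hazy:         HxWx3 float image in [0, 1]
    transmission: HxW map t(x) in (0, 1], e.g. looked up from a
                  transmittance gallery via the visual dictionary
    airlight:     length-3 atmospheric light estimate A
    t_min:        floor on t to avoid amplifying noise in dense haze
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]  # broadcast over RGB
    restored = (hazy - airlight) / t + airlight
    return np.clip(restored, 0.0, 1.0)
```

With a synthetic scene, hazing a uniform image and inverting it recovers the original, which is a quick sanity check on the sign conventions.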


Author(s):  
ZHAO Baiting ◽  
WANG Feng ◽  
JIA Xiaofen ◽  
GUO Yongcun ◽  
WANG Chengjun

Background: Aiming at the color distortion, low clarity, and poor visibility of underwater images caused by the complex underwater environment, a wavelet-fusion method, UIPWF, for underwater image enhancement is proposed. Methods: First, an improved NCB color-balance method is designed to identify and cut abnormal pixels and to balance the R, G, and B channels by affine transformation. Then, the color-corrected map is converted to CIELab color space, and the L component is enhanced with contrast-limited adaptive histogram equalization to obtain a brightness-enhancement map. Finally, different fusion rules are designed for the low-frequency and high-frequency components, and pixel-level wavelet fusion of the color-balance image and the brightness-enhancement image is performed to improve edge-detail contrast while protecting the underwater image contour. Results: Experiments demonstrate that, compared with existing underwater image processing methods, UIPWF is highly effective in the underwater image enhancement task, improves objective indicators substantially, and produces visually pleasing enhanced images with clear edges and reasonable color information. Conclusion: The UIPWF method effectively mitigates color distortion and improves clarity and contrast, making it applicable to underwater image enhancement in different environments.
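The contrast-limiting idea behind the brightness-enhancement step can be sketched on a single channel as below. This is a minimal global variant, assuming the channel is the L component after Lab conversion; the tiled, interpolated form of full CLAHE and the clip limit value are omitted or assumed:

```python
import numpy as np

def clip_limited_equalize(channel, clip_limit=0.02, n_bins=256):
    """Global contrast-limited histogram equalization of one channel.

    channel:    2-D uint8 array (e.g. the L component in CIELab)
    clip_limit: max fraction of pixels any single bin may hold; the
                excess is redistributed, which limits contrast gain
    """
    hist, _ = np.histogram(channel, bins=n_bins, range=(0, 255))
    limit = max(1, int(clip_limit * channel.size))
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess // n_bins  # redistribute excess
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[channel]  # remap every pixel through the lookup table
```

Applied to a low-contrast ramp, the output spreads toward the full 0–255 range while the clipped histogram prevents the extreme gains that plain equalization would produce.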


Author(s):  
HUA YANG ◽  
MASAAKI KASHIMURA ◽  
NORIKADU ONDA ◽  
SHINJI OZAWA

This paper describes a new system for extracting and classifying bibliography regions from the color image of a book cover. The system consists of three major components: preprocessing, color space segmentation, and text region extraction and classification. Preprocessing extracts the edge lines of the book and geometrically corrects and segments the input image into the front cover, spine, and back cover. As in all color image processing research, segmentation of the color space is an essential and important step. Instead of RGB color space, HSI color space is used in this system. The color space is first segmented into achromatic and chromatic regions; both regions are then segmented further to complete the color space segmentation. Text region extraction and classification follow. After detecting fundamental features (stroke width and local label width), text regions are determined. By comparing the text regions on the front cover with those on the spine, all extracted text regions are classified into suitable bibliography categories: author, title, publisher, and other information, without applying OCR.
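The RGB-to-HSI conversion and the achromatic/chromatic split this system relies on can be sketched as follows; the threshold values and function names are illustrative assumptions rather than parameters from the paper:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an HxWx3 RGB image in [0, 1] to H (radians), S, I channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(rgb, axis=-1) / np.maximum(i, 1e-8)
    # Standard geometric hue formula; clip guards arccos against rounding
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = np.arccos(np.clip(num / np.maximum(den, 1e-8), -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)
    return h, s, i

def achromatic_mask(s, i, s_thresh=0.1, i_low=0.1, i_high=0.9):
    """Pixels with low saturation or extreme intensity are achromatic."""
    return (s < s_thresh) | (i < i_low) | (i > i_high)
```

A saturated pixel (pure red) lands in the chromatic region, while a mid-gray pixel is flagged achromatic, which is the first split described above; each region would then be subdivided further.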


2020 ◽  
Vol 49 (3) ◽  
pp. 335-345
Author(s):  
Yan Xu ◽  
Jiangtao Dong ◽  
Zishuo Han ◽  
Peiguang Wang

During target tracking, certain multi-modal background scenes are unsuitable for off-line trained models. To solve this problem, based on the Gaussian mixture model and considering the pixels' temporal correlation, a method combining a random sampling operator with neighborhood space propagation theory is proposed to simplify the model update process. To accelerate model convergence, the observation vector is constructed in the time dimension by optimizing the model parameters. Finally, a three-channel multimodal background model fusing HSI color space and gradient information is established, achieving detection of moving targets in complicated environments. Experiments indicate that the algorithm has good detection performance while suppressing ghosts, dynamic background, and shade, and its execution efficiency meets the needs of real-time computing.
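The random-sampling update mentioned above resembles sample-bank background subtraction; the grayscale sketch below shows only that mechanism (the paper's Gaussian mixture, HSI/gradient channels, and neighborhood propagation are omitted, and all names and constants are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def classify_and_update(samples, frame, radius=20, min_matches=2, subsample=16):
    """One step of a sample-based background model (random-sampling sketch).

    samples: N x H x W bank of per-pixel background samples
    frame:   H x W grayscale frame
    Returns a boolean foreground mask and mutates `samples` in place.
    """
    matches = (np.abs(samples - frame[None]) < radius).sum(axis=0)
    foreground = matches < min_matches
    # Random-sampling update: each background pixel overwrites one of its
    # stored samples with probability 1/subsample (time subsampling), so
    # the model adapts without a fixed-age replacement schedule.
    h, w = frame.shape
    update = (~foreground) & (rng.integers(0, subsample, (h, w)) == 0)
    idx = rng.integers(0, samples.shape[0], (h, w))
    ys, xs = np.nonzero(update)
    samples[idx[ys, xs], ys, xs] = frame[ys, xs]
    return foreground
```

On a static scene with one changed pixel, only that pixel is reported as foreground, while the rest of the bank is stochastically refreshed.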


2013 ◽  
Vol 18 (2) ◽  
pp. 140-148 ◽  
Author(s):  
Taeha Um ◽  
Wonha Kim

2016 ◽  
Vol 2016 ◽  
pp. 1-17 ◽  
Author(s):  
Li Zhou ◽  
Du Yan Bi ◽  
Lin Yuan He

Foggy images taken in bad weather inevitably suffer from contrast loss and color distortion. Existing defogging methods merely seek an accurate scene transmission, overlooking their own unpleasing distortion and high complexity. Different from previous works, we propose a simple but powerful method based on histogram equalization and the physical degradation model. By revising two constraints in a variational histogram equalization framework, the intensity component of a fog-free image can be estimated in HSI color space, once the airlight has been inferred in advance through a color attenuation prior. To reduce time consumption, a general variation filter is proposed to obtain a numerical solution to the revised framework. Once the intensity component is estimated, the saturation component is easily inferred from the physical degradation model in the saturation channel. Accordingly, the fog-free image can be restored from the estimated intensity and saturation components. Finally, the proposed method is tested on several foggy images and assessed by two no-reference indexes. Experimental results reveal that our method is superior to three groups of relevant, state-of-the-art defogging methods.
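The intensity-remapping building block of this approach can be illustrated with plain histogram equalization on the HSI intensity channel; this is only the non-variational baseline (the paper's revised constraints and variation filter add smoothness and fidelity terms on top), and the function name is an assumption:

```python
import numpy as np

def equalize_intensity(i, n_bins=256):
    """Histogram-equalize an intensity channel with values in [0, 1].

    Maps each pixel through the empirical CDF, stretching a
    compressed (foggy) intensity distribution toward full range.
    """
    hist, _ = np.histogram(i, bins=n_bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / i.size
    bins = np.minimum((i * n_bins).astype(int), n_bins - 1)
    return cdf[bins]
```

A two-level, low-contrast input is spread apart by the CDF mapping, which is the contrast-restoring effect the variational framework then regularizes.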


2014 ◽  
Vol 543-547 ◽  
pp. 2484-2487
Author(s):  
Jing Zhang ◽  
Wei Dong ◽  
Jian Xin Wang ◽  
Xu Ning Liu

Aiming at the problems of poor image contrast and low visibility, a single-image contrast enhancement method is put forward in this paper. The method is based on the dark-object subtraction technique: the fog-degraded image is translated from RGB color space to YIQ color space, and the Y component is taken out. Then, using the maximum entropy method to obtain the threshold for image segmentation, different portions of the image are restored according to different formulas. The processed image is then converted back from YIQ color space to RGB color space. Finally, a linear dynamic-range adjustment enhances contrast and brightness. Experiments show that the method can effectively remove the haze effect on the image. The dehazing effect is obvious: the image becomes clear and bright, and the details are prominent, which is convenient for observation and analysis.
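The RGB↔YIQ round trip used to isolate the Y component uses the standard NTSC transform; the helper names below are assumptions, but the matrix itself is the conventional one:

```python
import numpy as np

# NTSC RGB -> YIQ transform; the Y (luma) row carries the brightness
# information that the dehazing formulas operate on.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    """rgb: HxWx3 in [0, 1] -> HxWx3 YIQ."""
    return rgb @ RGB2YIQ.T

def yiq_to_rgb(yiq):
    """Inverse transform back to RGB after Y has been processed."""
    return np.clip(yiq @ np.linalg.inv(RGB2YIQ).T, 0.0, 1.0)
```

A gray pixel maps to zero chrominance (I = Q = 0), so processing only Y leaves colors untouched up to rescaling, which is why the restoration formulas can be applied in the Y channel alone.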

