Improving Discrimination in Color Vision Deficiency by Image Re-Coloring

Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2250 ◽  
Author(s):  
Huei-Yung Lin ◽  
Li-Qi Chen ◽  
Min-Liang Wang

People with color vision deficiency (CVD) cannot perceive the full range of colors due to damage to the color-receptive nerves. In this work, we present an image enhancement approach to help colorblind people identify colors they are unable to distinguish naturally. An image re-coloring algorithm based on eigenvector processing is proposed for robust color separation under the color deficiency transformation. It is shown that the eigenvector of color vision deficiency is distorted by an angle in the λ, Y-B, R-G color space. The experimental results show that our approach is useful for recognizing and separating CVD-confusing colors in natural scene images. Compared to existing techniques, our results on natural images with CVD simulation perform very well in terms of RMS, HDR-VDP-2, and an IRB-approved human test. Both the objective comparison with previous work and the subjective evaluation in human tests validate the effectiveness of the proposed method.
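The re-coloring idea described above, rotating the chromatic axes to counter the angular distortion of the CVD eigenvector, can be sketched in numpy. This is a minimal illustration, not the authors' algorithm: the opponent-space matrix, the rotation angle, and the function names are illustrative assumptions.

```python
import numpy as np

# Hypothetical linear transform to an opponent space (luminance Y,
# yellow-blue, red-green). The paper's exact matrices are not given here;
# these rows are a common approximation used only to illustrate the idea.
RGB_TO_OPP = np.array([
    [0.299,  0.587,  0.114],   # luminance Y
    [0.5,    0.5,   -1.0],     # yellow-blue opponent axis
    [1.0,   -1.0,    0.0],     # red-green opponent axis
])
OPP_TO_RGB = np.linalg.inv(RGB_TO_OPP)

def recolor(rgb, angle_deg=20.0):
    """Rotate the (Y-B, R-G) chromatic plane by `angle_deg` to compensate
    for a distorted CVD eigenvector direction, then map back to RGB.

    `rgb` is an (H, W, 3) float array with values in [0, 1].
    """
    h, w, _ = rgb.shape
    opp = rgb.reshape(-1, 3) @ RGB_TO_OPP.T
    a = np.deg2rad(angle_deg)
    # Rotation acts only on the two chromatic components; luminance is kept.
    rot = np.array([[1.0, 0.0,        0.0],
                    [0.0, np.cos(a), -np.sin(a)],
                    [0.0, np.sin(a),  np.cos(a)]])
    opp = opp @ rot.T
    out = opp @ OPP_TO_RGB.T
    return np.clip(out, 0.0, 1.0).reshape(h, w, 3)
```

With `angle_deg=0` the transform is the identity, which makes the roundtrip easy to sanity-check before tuning the angle for a given deficiency type.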

Author(s):  
Alex Chaparro ◽  
Maria Chaparro

Color vision deficiency is common, affecting one in every 12 men. Despite its prevalence, displays are seldom designed to accommodate color-vision-deficient (CVD) users, who confront daily challenges interpreting color in a broad range of applications, whether weather displays, informational graphics, road signs, or computer interfaces. In this article we discuss the prevalence of color deficiency, its effects, and the availability of tools that enable design teams to evaluate candidate solutions that meet the needs of CVD users, thereby ensuring universal accessibility.


Author(s):  
Muhammad Zunair Aziz ◽  
Muhammad Moeez Uddin ◽  
Umar Farooque ◽  
Rizwan Farooque ◽  
Sundas Karimi

Introduction: Color vision deficiency (CVD) is one of the most frequently observed eye disorders across human populations. Color is a prominent cue in the medical profession, used to study and identify histopathological specimens, read lab instruments, and examine patients. Color deficiency can impair the skills of medical students, leading to poor clinical examination and color appreciation, yet there is no effective screening for CVD at any level of the medical profession. Hence, this study aimed to determine the prevalence of CVD among medical students.

Materials and methods: This was a cross-sectional study conducted over six months, from September 2019 to February 2020, in Karachi, Pakistan. All medical students aged 18-21 years of either gender enrolled in the first and second years of medical college were included. The examination was performed in daylight. Ishihara plates were placed at a distance of 75 cm from the subject and tilted so that the plane of the paper lay perpendicular to the line of vision. Students were given five seconds to read each plate, and one examiner was instructed to mark the checklist. A score of less than 12 out of 14 red/green test plates (excluding the demonstration plate) was considered indicative of CVD. All statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) version 20.0 (IBM Corp., Armonk, NY).

Results: The mean age of the medical students was 19.61 ± 1.22 years. There were 123 (53.0%) females and 111 (47.0%) males. Most of the medical students (n=131, 56.0%) belonged to the upper-middle-class socioeconomic group. CVD was observed in 13 (6.0%) of the medical students. Age (p=0.001) and socioeconomic status (p=0.001) were the only demographic factors significantly associated with color deficiency.

Conclusions: Color deficiency, although an often unnoticed concern, is fairly common among medical students. Medical students should be screened for CVD, as this will make them aware of limitations in their future observational skills as doctors and help them devise ways of overcoming these limitations in clinical practice.
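The study's scoring criterion (fewer than 12 of 14 red/green test plates read correctly, demonstration plate excluded) reduces to a one-line check. The function name and interface below are illustrative, not part of the study:

```python
def is_cvd(correct_plates, threshold=12):
    """Apply the study's Ishihara criterion: a subject reading fewer
    than `threshold` of the 14 red/green test plates correctly
    (demonstration plate excluded) is classified as having CVD."""
    return correct_plates < threshold
```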


Author(s):  
Najla A. Alqahtani ◽  
Rafi A. Togoo ◽  
Mashael M. Alqahtani ◽  
Nouf S. Suliman ◽  
Foziah A. Alasmari ◽  
...  

Abstract

Objective: The current research was conducted to evaluate the frequency of color vision deficiency (CVD) among dental students of King Khalid University College of Dentistry, Saudi Arabia.

Materials and Methods: A cross-sectional study was performed among 203 dental students working as interns at the male and female dental clinics of King Khalid University College of Dentistry (KKUCOD), Saudi Arabia. The Ishihara CVD test with 24 plates was used for diagnosis. The data were analyzed with Chi-square tests using SPSS software version 20.

Results: The frequency of total CVD was found to be 3.9%. While the association of gender with total CVD was statistically nonsignificant, a statistically significant relation was found with red-green color deficiency. Of the 203 students, 44 males were identified with red-green color deficiency, whereas only three females were found to have this condition, indicating that CVD is more prevalent in males. Age was found to have a significant association with red-green color vision deficiency, protanopia, and total CVD.

Conclusion: A total of 20.19% of male dental students had red-green color vision deficiency, compared to 1.4% of female students. Dental students must be aware of their congenital color vision deficiency and its impact on their professional life. Screening such dental students and professionals is important so that they can manage color vision problems without detrimental effects on their future dental practice.


2019 ◽  
pp. 30-33
Author(s):  
U. R. Khamdamov ◽  
M. N. Mukhiddinov ◽  
A. O. Mukhamedaminov ◽  
O. N. Djuraev

Author(s):  
Pushpendra Singh ◽  
P.N. Hrisheekesha ◽  
Vinai Kumar Singh

Content-based image retrieval (CBIR) is a field of information retrieval in which images similar to a query are retrieved from a database based on various descriptive image parameters. The image descriptor vector is used by machine-learning-based systems for storage, learning, and template matching. These feature descriptor vectors describe the visual content of an image locally or globally using texture, color, shape, and other information. In the past, several algorithms have been proposed to extract various types of content from an image, on the basis of which the image is retrieved from the database. However, the literature suggests that the precision and recall obtained with a single content descriptor are not significant. The main aim of this paper is to categorize and evaluate the algorithms proposed over the last 10 years. In addition, an experiment is performed using a hybrid content-descriptor methodology that achieves significant gains over state-of-the-art algorithms. The hybrid methodology decreases the error rate and improves precision and recall on a large natural-scene image dataset with more than 20 classes.
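A hybrid content descriptor of the kind described above, concatenating a global color descriptor with a texture/shape descriptor before distance-based ranking, can be sketched as follows. This is a generic illustration, not the paper's exact descriptors: the histogram choices, bin counts, and Euclidean distance are assumptions.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Global color descriptor: joint RGB histogram, L1-normalized."""
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    h = hist.ravel()
    return h / (h.sum() + 1e-9)

def gradient_histogram(img, bins=16):
    """Simple texture/shape descriptor: magnitude-weighted histogram
    of gradient orientations on the grayscale image."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    hist, _ = np.histogram(np.arctan2(gy, gx), bins=bins,
                           range=(-np.pi, np.pi),
                           weights=np.hypot(gx, gy))
    return hist / (hist.sum() + 1e-9)

def hybrid_descriptor(img):
    """Concatenate color and texture descriptors into one vector."""
    return np.concatenate([color_histogram(img), gradient_histogram(img)])

def retrieve(query, database, k=5):
    """Rank database images by Euclidean distance in hybrid space
    and return the indices of the k nearest images."""
    q = hybrid_descriptor(query)
    dists = [np.linalg.norm(q - hybrid_descriptor(d)) for d in database]
    return np.argsort(dists)[:k]
```

Because the two descriptors capture complementary cues, images that collide in color space can still be separated by their gradient statistics, which is the intuition behind the hybrid methodology's improved precision and recall.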


Author(s):  
ZHAO Baiting ◽  
WANG Feng ◽  
JIA Xiaofen ◽  
GUO Yongcun ◽  
WANG Chengjun

Background: To address the color distortion, low clarity, and poor visibility of underwater images caused by complex underwater environments, a wavelet fusion method for underwater image enhancement, UIPWF, is proposed. Methods: First, an improved NCB color balance method is designed to identify and remove abnormal pixels and to balance the R, G, and B channels by affine transformation. Then, the color-corrected map is converted to the CIELab color space, and the L component is equalized with contrast-limited adaptive histogram equalization to obtain a brightness-enhanced map. Finally, different fusion rules are designed for the low-frequency and high-frequency components, and pixel-level wavelet fusion of the color-balanced image and the brightness-enhanced image is performed to improve edge-detail contrast while preserving the underwater image contours. Results: The experiments demonstrate that, compared with existing underwater image processing methods, UIPWF is highly effective for underwater image enhancement, improves the objective indicators considerably, and produces visually pleasing enhanced images with clear edges and reasonable color information. Conclusion: The UIPWF method can effectively mitigate color distortion and improve clarity and contrast, making it applicable to underwater image enhancement in a range of environments.
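The pixel-level wavelet fusion step, combining the color-balanced and brightness-enhanced images with separate rules for low- and high-frequency bands, can be sketched with a one-level Haar transform. The specific rules below (average the low-frequency band, keep the larger-magnitude high-frequency coefficient) are a common choice used here for illustration, not the published UIPWF rules:

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar decomposition of an even-sized array
    into (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h2, w2 = ll.shape
    a = np.empty((h2, w2 * 2)); d = np.empty((h2, w2 * 2))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    x = np.empty((h2 * 2, w2 * 2))
    x[0::2, :] = a + d; x[1::2, :] = a - d
    return x

def wavelet_fuse(img_a, img_b):
    """Fuse two single-channel images: average the low-frequency bands,
    take the larger-magnitude coefficient in each high-frequency band
    (preserving the stronger edge details from either input)."""
    lla, lha, hla, hha = haar2d(img_a)
    llb, lhb, hlb, hhb = haar2d(img_b)
    pick = lambda p, q: np.where(np.abs(p) >= np.abs(q), p, q)
    return ihaar2d((lla + llb) / 2, pick(lha, lhb),
                   pick(hla, hlb), pick(hha, hhb))
```

In a full pipeline this fusion would run per channel on the color-balanced and brightness-enhanced images; here a single channel keeps the sketch compact.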


2021 ◽  
Vol 40 (1) ◽  
pp. 551-563
Author(s):  
Liqiong Lu ◽  
Dong Wu ◽  
Ziwei Tang ◽  
Yaohua Yi ◽  
Faliang Huang

This paper focuses on script identification in natural scene images. Traditional CNNs (convolutional neural networks) cannot solve this problem well for two reasons: first, the arbitrary aspect ratios of scene images pose difficulty for traditional CNNs, which take a fixed-size image as input; second, scripts with minor differences are easily confused because they share a subset of characters with the same shapes. We propose a novel approach combining a Score CNN, an Attention CNN, and patches. The Attention CNN determines whether a patch is discriminative and calculates the contribution weight of each discriminative patch to script identification for the whole image. The Score CNN takes a discriminative patch as input and predicts the score of each script type. First, patches of the same size are extracted from the scene images. Second, these patches are used as inputs to the Score CNN and Attention CNN to train two patch-level classifiers. Finally, the results of multiple discriminative patches extracted from the same image via the two classifiers are fused to obtain the script type of that image. Using patches of the same size as CNN inputs avoids the problems caused by arbitrary aspect ratios of scene images, and the trained classifiers can mine discriminative patches to accurately identify confusing scripts. The experimental results show the good performance of our approach on four public datasets.
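The patch extraction and attention-weighted score fusion described above can be sketched as follows. Here `score_fn` and `attention_fn` stand in for the trained Score CNN and Attention CNN (arbitrary callables in this sketch), and the patch size, stride, and weight threshold are illustrative assumptions:

```python
import numpy as np

def extract_patches(img, size=32, stride=32):
    """Cut fixed-size patches from an image of arbitrary aspect ratio;
    fixed patch sizes sidestep the fixed-input-size limitation of CNNs."""
    h, w = img.shape[:2]
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]

def fuse_predictions(score_fn, attention_fn, patches, num_scripts,
                     min_weight=0.1):
    """Attention-weighted fusion of per-patch script scores.

    score_fn(patch)     -> length-num_scripts score vector (Score CNN)
    attention_fn(patch) -> discriminativeness weight in [0, 1] (Attention CNN)
    Patches below `min_weight` are treated as non-discriminative and dropped.
    Returns the index of the predicted script type.
    """
    total = np.zeros(num_scripts)
    weight_sum = 0.0
    for p in patches:
        w = attention_fn(p)
        if w > min_weight:
            total += w * score_fn(p)
            weight_sum += w
    return int(np.argmax(total / max(weight_sum, 1e-9)))
```

The fusion is deliberately simple: discriminative patches vote with weights, so a few highly distinctive patches can dominate the many generic ones that shared character shapes would otherwise confuse.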

