Simulation of the effect of a non-uniform input image on the characteristics of the output image of an optical novelty filter based on a bacteriorhodopsin film

2007 ◽  
Vol 56 (12) ◽  
pp. 6954
Acta Physica Sinica (物理学报)

The image-transformation problem is one in which an input image is transformed into an output image. Most recent methods define a feed-forward neural network trained with a per-pixel loss between the output image and the ground-truth image. In this paper we show that high-quality images can be generated by defining a feature-loss function based on high-level perceptual features extracted from pre-trained convolutional networks. We combine the two approaches mentioned above and propose a feature-loss function for training a feed-forward neural network capable of image-transformation tasks. We compared our method with an optimization-based approach, similar to the one used in Generative Adversarial Networks (GANs), and our method produced visually appealing results while fully capturing the intricate details of the objects in the image.
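The core idea can be sketched in a few lines. The paper extracts features from a pre-trained convolutional network (e.g. VGG); in this illustrative sketch a fixed random linear map stands in for that feature extractor, purely to show how the loss compares feature vectors rather than raw pixels.

```python
import numpy as np

def feature_loss(img_out, img_gt, extractor):
    # Compare images in feature space, not pixel space: map both images
    # through the (stand-in) feature extractor and take the MSE there.
    f_out = extractor @ img_out.ravel()
    f_gt = extractor @ img_gt.ravel()
    return float(np.mean((f_out - f_gt) ** 2))

rng = np.random.default_rng(0)
extractor = rng.standard_normal((64, 16 * 16))  # stand-in for a pre-trained net
gt = rng.random((16, 16))
```

In the actual method this loss replaces (or augments) the per-pixel loss when training the feed-forward transformation network.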


In semantic image-to-image translation, the goal is to learn a mapping between an input image and an output image. A model of the semantic image-to-image translation problem using the CycleGAN algorithm is proposed. Given a set of paired or unpaired images, a transformation is learned to translate the input image into the specified domain. The dataset considered is the Cityscapes dataset, in which semantic images are converted into photographic images. A Generative Adversarial Network algorithm, CycleGAN, with a cycle-consistency loss is used: CycleGAN transforms the semantic image into a photographic (real) image, and the cycle-consistency loss compares the real image with the output of the second generator. The experiments show that with more training time the results become more accurate and the image quality improves. The model can be used when images from one domain need to be converted into another domain in order to obtain high-quality images.
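The cycle-consistency loss mentioned above has a simple form: translating to the other domain and back should recover the original image. A minimal sketch, with toy invertible functions standing in for the two generator networks:

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    # L_cyc = E[|F(G(x)) - x|] + E[|G(F(y)) - y|]  (L1 norm, as in CycleGAN);
    # G maps domain X (semantic maps) to Y (photos), F maps Y back to X.
    return float(np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y)))

G = lambda x: 2.0 * x + 1.0    # toy generator X -> Y
F = lambda y: (y - 1.0) / 2.0  # toy generator Y -> X, the exact inverse of G
x = np.linspace(0.0, 1.0, 10)
y = G(x)
```

When F exactly inverts G the cycle loss vanishes; during training this term penalizes generator pairs that lose information in either direction.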


Axioms ◽  
2018 ◽  
Vol 7 (3) ◽  
pp. 53 ◽  
Author(s):  
Kelvin Chan ◽  
Raymond Chan ◽  
Mila Nikolova

The goal of edge-histogram specification is to find an image whose edge image has a histogram that matches a given edge-histogram as closely as possible. Mignotte proposed a non-convex model for this problem in 2012. In his work, edge magnitudes of an input image are first modified by histogram specification to match the given edge-histogram. Then, a non-convex model is minimized to find an output image whose edge-histogram matches the modified edge-histogram. The non-convexity of the model hinders the computations and the inclusion of useful constraints such as the dynamic-range constraint. In this paper, instead of considering edge magnitudes, we directly consider the image gradients and propose a convex model based on them. Furthermore, we include additional constraints in our model for different applications. The convexity of our model allows us to compute the output image efficiently using either the Alternating Direction Method of Multipliers or the Fast Iterative Shrinkage-Thresholding Algorithm. We consider several applications in edge-preserving smoothing, including image abstraction, edge extraction, detail exaggeration, and document scan-through removal. Numerical results are given to illustrate that our method efficiently produces decent results.
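To illustrate how convexity enables fast first-order solvers, the sketch below fits an image (1-D for brevity) to a target gradient field under a dynamic-range constraint using projected FISTA. The operators, step size, and objective are simplified stand-ins for those in the paper, not its actual model.

```python
import numpy as np

def grad_op(u):                        # forward differences (zero at the end)
    return np.append(np.diff(u), 0.0)

def grad_op_T(p):                      # adjoint of grad_op
    q = p.copy(); q[-1] = 0.0
    return np.concatenate(([0.0], q[:-1])) - q

def fista_fit(g, n_iter=500, lo=0.0, hi=255.0):
    # min_u 0.5 * ||grad_op(u) - g||^2  subject to  lo <= u <= hi
    L = 4.0                            # Lipschitz bound for grad_op_T(grad_op(.))
    # warm start: integrate g, then project onto the dynamic range
    u = np.clip(np.concatenate(([0.0], np.cumsum(g[:-1]))), lo, hi)
    z, t = u.copy(), 1.0
    for _ in range(n_iter):
        u_new = np.clip(z - grad_op_T(grad_op(z) - g) / L, lo, hi)  # proj. step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))            # momentum
        z = u_new + ((t - 1.0) / t_new) * (u_new - u)
        u, t = u_new, t_new
    return u
```

The projection step is exactly where the dynamic-range constraint enters; in a non-convex formulation such a constraint is much harder to accommodate.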


Author(s):  
GIULIANA RAMELLA ◽  
GABRIELLA SANNITI DI BAJA

A technique for color quantization is described, which consists of two processes. The first process is based on the analysis of the histograms of the three color components of the RGB input image. The second process performs clustering of the colors quantized by the first process, based on their Euclidean distance. At the end of the second process, the output image is obtained by replacing the color of each pixel of the input image with the closest representative color. The obtained results are satisfactory from both the qualitative and the quantitative points of view.
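A compressed sketch of the two-stage idea follows. The per-channel histogram analysis is simplified here to uniform binning, and the bin count and merge threshold are illustrative, not the paper's values:

```python
import numpy as np

def quantize_colors(img, n_bins=4, merge_dist=40.0):
    # Stage 1 (simplified): quantize each RGB channel by snapping every
    # value to the centre of its histogram bin.
    step = 256 // n_bins
    q = (img.astype(int) // step) * step + step // 2
    # Stage 2: merge quantized colours that lie close together in RGB
    # space (Euclidean distance), keeping one representative per cluster.
    palette = []
    for c in sorted({tuple(v) for v in q.reshape(-1, 3)}):
        c = np.array(c, float)
        if all(np.linalg.norm(c - p) >= merge_dist for p in palette):
            palette.append(c)
    palette = np.array(palette)
    # Output: replace each input pixel with its closest representative.
    flat = img.reshape(-1, 3).astype(float)
    idx = np.argmin(((flat[:, None] - palette[None]) ** 2).sum(-1), axis=1)
    return palette[idx].reshape(img.shape).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (12, 12, 3), dtype=np.uint8)
out = quantize_colors(img)
```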


Author(s):  
Ujjwal Chakraborty ◽  
Jayanta Kumar Paul ◽  
Priya Ranjan Sinha Mahapatra

In this paper, two methods for (2, 2) and (2, 3) visual cryptographic schemes (VCS) are proposed. The first scheme considers 4 pixels of the input image at a time and generates 4 output pixels in each share; because 4 output pixels are generated from 4 input pixels, the dimensions and aspect ratio of the decrypted image remain unchanged. The second scheme considers 2 pixels (1 block) of the input image at a time and generates 3 output pixels in each share, maintaining a probability of 1/3 for black pixels in each share; this scheme improves the contrast of the output image. The width of the revealed image increases by a factor of 1.5, while its height remains the same.
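The flavour of a (2, 2) scheme can be sketched as follows. Note that this classic variant expands every secret pixel into a 2×2 block (so each share is larger than the input), whereas the paper's first scheme maps 4 pixels to 4 precisely to avoid that expansion:

```python
import numpy as np

rng = np.random.default_rng(1)
# complementary 2x2 subpixel patterns; 1 = black subpixel
PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]

def make_shares(secret):
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(2)]
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            # white pixel: same pattern in both shares (overlay is half black)
            # black pixel: complementary pattern (overlay is fully black)
            s2[2*i:2*i+2, 2*j:2*j+2] = 1 - p if secret[i, j] else p
    return s1, s2

secret = rng.integers(0, 2, (6, 6))
s1, s2 = make_shares(secret)
stacked = s1 | s2          # physically overlaying the two transparencies
```

Each share alone is a uniformly random pattern and reveals nothing; only the stacked overlay shows the secret, with black pixels fully black and white pixels half black (which is what limits the contrast such schemes try to improve).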


Author(s):  
J. Magelin Mary ◽  
Chitra K. ◽  
Y. Arockia Suganthi

Image processing, in general, involves applying signal processing to the input image, for example to isolate the individual color planes of an image; it plays an important role in image analysis and computer vision. This paper compares the efficiency of two approaches for finding breast cancer in medical image processing. The fundamental goal is to apply image mining to medical image handling using grouping rules generated by a genetic algorithm. Using the extracted border, the border pixels are fed as population strings to a Genetic Algorithm (GA) and to Ant Colony Optimization (ACO) in order to find the optimum value among the border pixels. We also compare the cost of ACO and GA and attempt to discover which one gives the better solution for identifying an affected area in a medical image, based on computational time.
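A minimal GA of the kind described can be sketched as below. The population here is generic bit strings with a toy fitness function passed in as a parameter; in the paper the population strings are border pixels and the fitness is domain-specific, so every name and number here is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_search(fitness, n_bits=16, pop_size=30, n_gen=60, p_mut=0.02):
    # Minimal generational GA: tournament selection, one-point crossover,
    # and bit-flip mutation.
    pop = rng.integers(0, 2, (pop_size, n_bits))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        new = []
        for _ in range(pop_size):
            a, b = rng.integers(pop_size, size=2)      # tournament of 2
            p1 = pop[a] if scores[a] >= scores[b] else pop[b]
            c, d = rng.integers(pop_size, size=2)
            p2 = pop[c] if scores[c] >= scores[d] else pop[d]
            cut = rng.integers(1, n_bits)              # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            child ^= (rng.random(n_bits) < p_mut).astype(int)  # mutation
            new.append(child)
        pop = np.array(new)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

best = genetic_search(lambda ind: ind.sum())   # toy fitness: count of ones
```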


2020 ◽  
Vol 2020 (9) ◽  
pp. 323-1-323-8
Author(s):  
Litao Hu ◽  
Zhenhua Hu ◽  
Peter Bauer ◽  
Todd J. Harris ◽  
Jan P. Allebach

Image quality assessment has been a very active research area in the field of image processing, and numerous methods have been proposed. However, most of the existing methods focus on digital images that only or mainly contain pictures or photos taken by digital cameras. Traditional approaches evaluate an input image as a whole and try to estimate a quality score for the image, in order to give viewers an idea of how "good" the image looks. In this paper, we focus instead on the quality evaluation of content such as text, barcodes, QR codes, lines, and handwriting in target images. Estimating a quality score for this kind of information can be based on whether or not it is readable by a human, or recognizable by a decoder. Moreover, we mainly study the viewing quality of scanned documents of printed images. For this purpose, we propose a novel image quality assessment algorithm that is able to determine the readability of a scanned document or of regions within it. Experimental results on a set of test images demonstrate the effectiveness of our method.


Author(s):  
Ervina Varijki ◽  
Bambang Krismono Triwijoyo

One type of cancer that can be identified using MRI technology is breast cancer, which is still a leading cause of death worldwide; early detection of this disease is therefore needed. In identifying breast cancer, a doctor or radiologist analyzes magnetic resonance images stored in the Digital Imaging and Communications in Medicine (DICOM) format. Sufficient skill and experience are required for an appropriate and accurate diagnosis, so it is useful to create a digital image processing application that uses object segmentation and edge detection to assist the physician or radiologist in identifying breast cancer. The proposed MRI image segmentation pipeline first converts the image to grayscale, then thresholds it to a binary image, and finally applies edge detection with the Roberts operator. On the 20 tested input images, the method produced images in which the boundary line of each region or object is visible with no broken edges, with an average computation time of less than one minute.
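The grayscale → threshold → Roberts pipeline can be sketched directly in NumPy. The luminance weights and threshold below are common defaults, not values from the paper:

```python
import numpy as np

def roberts_edges(rgb, thresh=0.5):
    gray = rgb @ np.array([0.299, 0.587, 0.114])      # 1) grayscale conversion
    binary = (gray / 255.0 > thresh).astype(float)    # 2) binary thresholding
    gx = binary[:-1, :-1] - binary[1:, 1:]            # 3) Roberts cross:
    gy = binary[:-1, 1:] - binary[1:, :-1]            #    two diagonal kernels
    return np.hypot(gx, gy) > 0                       # edge where magnitude > 0

img = np.zeros((8, 8, 3))
img[:, 4:, :] = 255.0          # left half black, right half white
edges = roberts_edges(img)
```

The Roberts operator differences diagonally adjacent pixels, so applied to a clean binary image it marks exactly the region boundaries, which matches the unbroken contours reported above.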


Author(s):  
Manpreet Kaur ◽  
Jasdev Bhatti ◽  
Mohit Kumar Kakkar ◽  
Arun Upmanyu

Introduction: Face detection is used in many different streams such as video conferencing, human-computer interfaces, and image database management. The aim of our paper is therefore to apply the Red Green Blue (RGB) colour space to face detection. Methods: Morphological operations are performed in the face region, with the number of pixels as the proposed parameter, to check whether an input image contains a face region. Canny edge detection is used to show the boundaries of a candidate face region, and finally the detected face is marked with a bounding box. Results: A reliability model has also been proposed for detecting faces in single and multiple images. The experimental results show that the proposed algorithm performs very well in each model for detecting faces in single and multiple images, and the reliability model provides the best fit when analyzing precision and accuracy. Discussion: The calculated results show that the HSV model works best for single-face images, whereas the YCbCr and TSL models work best for multi-face images. The evaluated results also provide better testing strategies that help to develop new techniques, leading to an increase in research effectiveness. Conclusion: The calculated values of all parameters show that the proposed algorithm performs very well in each model, detecting the face with a bounding box in both single and multiple images. The precision and accuracy of all three models are analyzed through the reliability model, and the comparison confirms that the HSV model works best for single-face images, whereas the YCbCr and TSL models work best for multi-face images.
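The morphological pixel-count check and the bounding box can be sketched as follows. The erosion kernel size and pixel threshold are illustrative placeholders, and the binary skin mask is assumed to come from an earlier colour-space segmentation step (HSV, YCbCr, or TSL):

```python
import numpy as np

def erode(mask, k=3):
    # binary erosion with a k x k square structuring element (pure NumPy)
    h, w = mask.shape
    r = k // 2
    pad = np.pad(mask, r, constant_values=False)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= pad[dy:dy + h, dx:dx + w]
    return out

def detect_face(skin_mask, min_pixels=20):
    # Clean the candidate skin mask with a morphological operation, accept
    # it as a face if enough pixels survive, and return the bounding box.
    clean = erode(skin_mask)
    ys, xs = np.nonzero(clean)
    if ys.size < min_pixels:
        return None                                # no face-sized region
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True        # a 10x10 candidate skin region
box = detect_face(mask)
```

Erosion removes isolated noise pixels before counting, so small spurious skin-coloured specks do not trigger a detection.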


2020 ◽  
Vol 34 (03) ◽  
pp. 2594-2601
Author(s):  
Arjun Akula ◽  
Shuai Wang ◽  
Song-Chun Zhu

We present CoCoX (short for Conceptual and Counterfactual Explanations), a model for explaining decisions made by a deep convolutional neural network (CNN). In Cognitive Psychology, the factors (or semantic-level features) that humans zoom in on when they imagine an alternative to a model prediction are often referred to as fault-lines. Motivated by this, our CoCoX model explains decisions made by a CNN using fault-lines. Specifically, given an input image I for which a CNN classification model M predicts class c_pred, our fault-line based explanation identifies the minimal semantic-level features (e.g., stripes on a zebra, pointed ears of a dog), referred to as explainable concepts, that need to be added to or deleted from I in order to alter the classification category of I by M to another specified class c_alt. We argue that, due to the conceptual and counterfactual nature of fault-lines, our CoCoX explanations are practical and more natural for both expert and non-expert users to understand the internal workings of complex deep learning models. Extensive quantitative and qualitative experiments verify our hypotheses, showing that CoCoX significantly outperforms the state-of-the-art explainable AI models. Our implementation is available at https://github.com/arjunakula/CoCoX
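The minimal add/delete idea behind a fault-line explanation can be illustrated with a toy greedy search. Here a linear scorer sign(w·x + b) stands in for the CNN classifier M, and x is a binary vector of "explainable concepts"; the paper's actual optimization over CNN-derived concepts differs, so this is only a sketch of the notion:

```python
import numpy as np

def fault_line(x, w, b, target):
    # Flip the fewest concept bits (add: 0 -> 1, delete: 1 -> 0) needed to
    # move the toy classifier's prediction to `target` (+1 or -1).
    x = x.copy()
    flips = []
    while np.sign(x @ w + b) != target:
        delta = w * (1 - 2 * x) * target    # score gain toward the target class
        i = int(np.argmax(delta))
        if delta[i] <= 0:
            return None                     # target class unreachable this way
        x[i] ^= 1
        flips.append(i)
    return flips

w = np.array([3.0, -2.0, 1.0])              # toy concept weights
b = -0.5
x = np.array([0, 1, 0])                     # initial explainable concepts
```

For this toy instance, adding the single strongest positive concept is enough to flip the prediction, mirroring how a fault-line names the minimal concepts separating c_pred from c_alt.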

