Design of graphics processing unit for image processing

Author(s): J. George Cherian Panappally, M. S. Dhanesh
2016, Vol 15 (10), pp. 7160-7163

Author(s): Gurpreet Kaur, Sonika Jindal

Image segmentation plays a central role in areas such as computer vision and image processing because of its broad usage and many applications. Owing to this importance, a number of algorithms have been proposed and different approaches adopted. Segmentation divides an image into distinct regions, each containing pixels with similar attributes. The objective of partitioning is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. This paper surveys the various techniques implemented for image segmentation and discusses the computations that can be performed on the graphics processing unit (GPU) by means of the CUDA architecture in order to achieve fast performance and increase the utilization of available system resources.
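To illustrate why segmentation maps well onto the GPU, the sketch below implements the simplest technique such surveys cover, global thresholding, in plain Python. Every pixel is tested independently of its neighbours, which is exactly the per-pixel parallelism CUDA exploits by assigning one thread per pixel. The function name and sample image are hypothetical illustrations, not taken from the paper.

```python
def threshold_segment(image, threshold):
    """Label each pixel 1 (foreground) or 0 (background).

    `image` is a list of rows of grayscale intensities (0-255).
    Each pixel's label depends only on that pixel, so on a GPU this
    loop body would run as one CUDA thread per pixel.
    """
    return [[1 if px >= threshold else 0 for px in row] for row in image]

img = [
    [10, 200, 30],
    [250, 40, 180],
]
print(threshold_segment(img, 128))  # [[0, 1, 0], [1, 0, 1]]
```

More sophisticated techniques (region growing, clustering, edge-based methods) share this trait of mostly local, data-parallel work, which is what makes the GPU port worthwhile.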


Author(s): Prashanta Kumar Das, Ganesh Chandra Deka

The graphics processing unit (GPU) is a specialized and highly parallel microprocessor designed to offload 2D/3D image rendering from the central processing unit (CPU) and thereby expedite image processing. The modern GPU is not only a powerful graphics engine but also a parallel programmable processor with high precision and powerful features. It has been forecast that 48-core GPUs would be available by 2020, and GPUs with 3000 cores by 2030. This chapter describes the chronology of the evolution of GPU hardware architecture and the road ahead.


2020, Vol 32, pp. 03041

Author(s): Sayooj Ottapura, Rahul Mistry, Jatin Keni, Chaitanya Jage

Image processing is a method used to enhance an image or to extract useful information from it. It is a type of signal processing in which the input is an image and the output may be an image or characteristics/features associated with that image. In this paper we focus on a specific type of image processing, namely underwater image processing. Underwater images have always suffered from an imbalanced colour distribution, a problem that can be tackled by a simple colour-balancing algorithm. We proceed on the assumption that the highest R, G, B values observed in the image correspond to white and the lowest values correspond to darkness. Underwater images are dominated by blue because of its short wavelength, and in this paper we aim to enhance such images. We propose a colour-balancing algorithm for normalizing the image. The entire process is first carried out on a CPU and then on a GPU, and we compare the speedup obtained. Speedup is an important parameter in the field of image processing, since a better speedup can reduce computation time significantly while maintaining high efficiency.
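A minimal sketch of the channel-stretching idea stated above, in plain Python: each channel is linearly rescaled so that its observed maximum maps to white (255) and its observed minimum to darkness (0). The helper names and sample pixels are hypothetical; the paper's actual algorithm may differ in detail.

```python
def balance_channel(values, lo, hi):
    """Linearly stretch one colour channel so lo -> 0 and hi -> 255."""
    span = max(hi - lo, 1)  # avoid division by zero on a flat channel
    return [min(255, max(0, round((v - lo) * 255 / span))) for v in values]

def color_balance(pixels):
    """Per-channel min/max stretch over a flat list of (R, G, B) pixels.

    The channel maxima are assumed to correspond to white and the
    minima to darkness, matching the paper's stated assumption.
    """
    channels = list(zip(*pixels))          # [(R...), (G...), (B...)]
    stretched = [balance_channel(c, min(c), max(c)) for c in channels]
    return [tuple(px) for px in zip(*stretched)]
```

Because each channel (and each pixel within it) is processed independently, the same loop transfers directly to a GPU kernel, which is where the speedup comparison comes from.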


Image classification algorithms such as the convolutional neural network (CNN), used for classifying huge image datasets, take a long time to perform convolution operations, which increases the computational demand of image processing. Compared to the CPU, the graphics processing unit (GPU) is a good way to accelerate image processing. Parallelizing across multiple CPU cores is another way to process images faster, and increasing system memory (RAM) can also reduce computation time. Comparing the two architectures, the CPU consists of a few cores optimized for sequential processing, whereas the GPU has thousands of relatively simple cores clocked at approximately 1 GHz. The aim of this project is to compare the performance of parallelized CPUs and a GPU. Python's Ray library is used to parallelize across multiple CPU cores. The benchmark image classification algorithm used in this project is a convolutional neural network, and the dataset is the Plant Disease Image Dataset. Our results show that the GPU implementation achieves an 80% speedup over the CPU implementation.
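To see where the computational demand comes from, the sketch below implements one valid-mode 2D convolution in plain Python; this is an illustrative toy, not the project's code. Each output element is an independent sum of products, so the work parallelizes naturally across CPU cores (e.g. Ray workers) or GPU threads.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in
    most deep-learning frameworks).

    Every out[i][j] depends only on a small window of `image`, so the
    output elements can all be computed in parallel, which is what a
    GPU (or a pool of CPU workers) exploits.
    """
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out
```

For an H x W image and a k x k kernel, this costs on the order of H * W * k * k multiply-adds per filter per layer, which is why convolution dominates CNN training time and rewards parallel hardware.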


2007

Author(s): Fredrick H. Rothganger, Kurt W. Larson, Antonio Ignacio Gonzales, Daniel S. Myers

2021, Vol 22 (10), pp. 5212

Author(s): Andrzej Bak

A key question confronting computational chemists concerns the preferred ligand geometry that fits complementarily into the receptor pocket. Typically, the postulated 'bioactive' 3D ligand conformation is constructed as a 'sophisticated guess' (not necessarily geometry-optimized) mirroring the pharmacophore hypothesis, sometimes based on an erroneous prerequisite. Hence, the 4D-QSAR scheme and its 'dialects' have been implemented in practice as a higher level of model abstraction that allows the examination of multiple molecular conformations, orientations and protonation states. Nearly a quarter of a century has passed since the eminent work of Hopfinger appeared on the stage; the natural question therefore arises of whether the 4D-QSAR approach is still appealing to the scientific community. With no intention to be comprehensive, a review of the current state of the art in the field of receptor-independent (RI) and receptor-dependent (RD) 4D-QSAR methodology is provided, with a brief examination of the 'mainstream' algorithms. In fact, a myriad of 4D-QSAR methods have been implemented and applied in practice to a diverse range of molecules. It seems that the 4D-QSAR approach has been experiencing a promising renaissance of interest, which may be fuelled by the rising power of graphics processing unit (GPU) clusters applied to full-atom MD-based simulations of protein-ligand complexes.

