Medical Cloud Computing Data Processing to Optimize the Effect of Drugs

2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Fengxia Li ◽  
Zhi Qu ◽  
Ruiling Li

In recent years, cloud computing technology has steadily matured. Hadoop, which originated from Apache Nutch, is an open-source cloud computing platform characterized by large scale, virtualization, strong stability, broad versatility, and support for scalability. Given the unstructured nature of medical images, combining content-based medical image retrieval with the Hadoop cloud platform is both necessary and far-reaching. This study combines research on the impact mechanism of vascular endothelial cells in senile dementia with cloud computing to construct a corresponding image-set retrieval platform. The platform uses Hadoop's core distributed file system, HDFS, to upload and store the images, stores the image feature vectors in HBase, and performs parallel retrieval with the MapReduce programming model, with the nodes cooperating with one another. The results show that the proposed method is effective and can be applied to medical research.
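
As an illustration of the retrieval step described above, here is a minimal sketch in the style of Hadoop Streaming, assuming feature vectors have already been extracted and stored one per line as `image_id<TAB>v1,v2,...`; the query vector, top-k size, and distance metric are illustrative assumptions, not details from the study:

```python
# Hypothetical sketch of the parallel retrieval step in Hadoop
# Streaming style: the mapper scores each stored feature vector
# against the query; the reducer keeps the top-k nearest images.
import sys
import math

QUERY = [0.12, 0.55, 0.33, 0.80]  # query feature vector (illustrative)
TOP_K = 10

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mapper(lines):
    """Each input line: '<image_id>\t<v1,v2,...>' as stored in HBase."""
    for line in lines:
        image_id, raw = line.rstrip("\n").split("\t")
        vec = [float(v) for v in raw.split(",")]
        yield image_id, distance(QUERY, vec)

def reducer(scored):
    """Emit the k images closest to the query."""
    best = sorted(scored, key=lambda kv: kv[1])[:TOP_K]
    for image_id, dist in best:
        print(f"{image_id}\t{dist:.4f}")

if __name__ == "__main__":
    reducer(mapper(sys.stdin))
```

In a real deployment, Hadoop would run many mapper instances in parallel over HDFS blocks, which is where the cooperation between nodes described above comes from.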

2020 ◽  
Vol 29 (2) ◽  
pp. 1-24
Author(s):  
Yangguang Li ◽  
Zhen Ming (Jack) Jiang ◽  
Heng Li ◽  
Ahmed E. Hassan ◽  
Cheng He ◽  
...  

2014 ◽  
Vol 687-691 ◽  
pp. 3733-3737
Author(s):  
Dan Wu ◽  
Ming Quan Zhou ◽  
Rong Fang Bie

Massive image processing places high demands on the processor and memory, requiring a high-performance processor and large-capacity memory; single-processor or single-core processing with conventional memory cannot meet these needs. This paper introduces cloud computing into a massive image processing system. The cloud computing layer expands the system's virtual space, saves computing resources, and improves the efficiency of image processing. The system's processor is a multi-core DSP parallel processor, and a visualization window for parameter setting and result output was developed in VC. Simulation yields the image processing speed curve and the system's image-adaptation curve, providing a technical reference for the design of large-scale image processing systems.
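
The paper's implementation targets a multi-core DSP programmed in VC; as a language-neutral illustration of the same divide-and-process pattern, the sketch below splits an image into row bands and filters them on multiple cores (the moving-average filter and synthetic image are hypothetical stand-ins, not the paper's pipeline):

```python
# Illustrative sketch (not the paper's DSP code): split a large image
# into row bands and filter them in parallel, the same
# divide-and-process pattern a multi-core DSP pipeline relies on.
from multiprocessing import Pool

def smooth_band(band):
    """Apply a simple 1-D moving-average filter to one band of rows."""
    out = []
    for row in band:
        out.append([sum(row[max(0, i - 1):i + 2]) / len(row[max(0, i - 1):i + 2])
                    for i in range(len(row))])
    return out

def process_image(image, workers=4):
    """Split the image into `workers` row bands and filter them in parallel."""
    step = max(1, len(image) // workers)
    bands = [image[i:i + step] for i in range(0, len(image), step)]
    with Pool(workers) as pool:
        result = pool.map(smooth_band, bands)
    return [row for band in result for row in band]

if __name__ == "__main__":
    img = [[float((r * c) % 255) for c in range(512)] for r in range(512)]
    print(len(process_image(img)), "rows processed")
```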


Author(s):  
Shruthi P. ◽  
Nagaraj G. Cholli

Cloud computing is an environment in which several virtual machines (VMs) run concurrently on physical machines. The cloud infrastructure hosts multiple cloud service segments that communicate with each other through interfaces, creating a distributed computing environment. During operation, software systems accumulate errors or garbage that can lead to system failure and other hazardous consequences; this condition is called software aging. Software aging arises from memory fragmentation, large-scale resource consumption, and the accumulation of numerical errors. It degrades performance and may result in system failure through premature resource exhaustion. The issue cannot be detected during the software testing phase because of the dynamic nature of operation. The errors that cause software aging are of a special type: they do not disturb the software's functionality but affect its response time and environment, so they can only be addressed at run time. To mitigate the impact of software aging, software rejuvenation is used. Rejuvenation reboots the system or restarts the affected software, avoiding faults and failures: it removes accumulated error conditions, frees deadlocks, and defragments operating system resources such as memory, thereby averting future failures caused by aging. Because service availability is crucial, rejuvenation must be carried out on defined schedules without disrupting service. Software rejuvenation techniques can make software systems more trustworthy, and software designers use the concept to improve quality and reliability. Software aging and rejuvenation have attracted considerable research interest in recent years. This work reviews research related to the detection of software aging and identifies research gaps.
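
A minimal sketch of threshold-based rejuvenation follows, assuming a toy in-process service whose memory grows as errors accumulate; real rejuvenation would restart a process, container, or VM, often on a schedule chosen to avoid service disruption:

```python
# Toy sketch of threshold-based software rejuvenation (an assumption,
# not a surveyed system): restart the service proactively once its
# simulated memory footprint crosses a threshold, before premature
# resource exhaustion can cause a failure.
import random

class Service:
    """Toy service whose resident memory grows as garbage accumulates."""
    def __init__(self):
        self.memory_mb = 100.0

    def handle_request(self):
        self.memory_mb += random.uniform(0.5, 2.0)  # simulated leak

    def restart(self):
        self.memory_mb = 100.0  # rejuvenation clears accumulated state

REJUVENATION_THRESHOLD_MB = 400.0

def run(requests=1000):
    svc = Service()
    restarts = 0
    for _ in range(requests):
        svc.handle_request()
        # Rejuvenate proactively before exhaustion causes a failure.
        if svc.memory_mb > REJUVENATION_THRESHOLD_MB:
            svc.restart()
            restarts += 1
    print(f"served {requests} requests with {restarts} rejuvenations")

if __name__ == "__main__":
    run()
```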


Cloud computing technologies and service models are attractive to scientific computing users because they provide on-demand access to resources and control over the software environment. Scientific computing researchers and the resource providers serving these users are weighing the impact of these new models and technologies. SaaS solutions such as Globus Online and IaaS solutions such as Nimbus Infrastructure and OpenNebula accelerate scientific discovery by helping scientists conduct advanced, large-scale science. This chapter describes how the cloud helps researchers accelerate discovery by moving manual and difficult tasks into the cloud.


2014 ◽  
Vol 19 (4) ◽  
pp. 5-20 ◽  
Author(s):  
Rocío Pérez De Prado ◽  
Sebastián García-Galán ◽  
José Enrique Munoz Expósito ◽  
Luis Ramón López López ◽  
Rafael Rodríguez Reche

Abstract The Montage image engine is an astronomical tool created by NASA's Earth Science Technology Office to build mosaics of the sky by processing multiple images of diverse regions. The associated computational processes involve recalculating the images' geometry, re-projecting rotation and scale, homogenizing the background emission, and combining all images in a standardized format into a final mosaic. These processes are computationally demanding and are structured as workflows. A workflow is a set of individual jobs whose workload can be parallelized for execution on distributed systems, thereby reducing completion time. Cloud computing is a distributed computing platform based on the provision of computing resources as services, and it is increasingly required for large-scale simulations in many scientific applications. Nevertheless, a computational cloud is a dynamic environment where resource capabilities can change on the fly depending on network demands, so flexible strategies for distributing workload among the resources are necessary. In this work, fuzzy rule-based systems are proposed as local brokers in cloud computing to speed up the execution of Montage workflows. Simulations of the expert broker are conducted using synthetic workflows derived from real systems with diverse sets of jobs. Results show that the proposal significantly reduces makespan compared with well-known scheduling strategies for distributed systems, offering an efficient way to accelerate the processing of astronomical image-mosaic workflows.
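
The authors' broker is built for real Montage workflows; the toy sketch below only illustrates the general idea of fuzzy rule-based scheduling, with made-up membership functions, two rules, and a greedy job-by-job assignment, none of which is the authors' actual rule base:

```python
# Toy fuzzy rule-based broker (illustrative, not the paper's system):
# fuzzify each resource's relative load and speed, fire two rules,
# and send each workflow job to the highest-scoring resource.

def ramp(x, a, b):
    """Membership rising linearly from 0 at a to 1 at b."""
    if b == a:
        return 1.0 if x >= b else 0.0
    return min(1.0, max(0.0, (x - a) / (b - a)))

def suitability(load, speed):
    """Two illustrative rules:
    R1: IF load is low AND speed is high THEN suitability is high.
    R2: IF load is high THEN suitability is low."""
    low_load = 1.0 - ramp(load, 0.0, 1.0)
    high_load = ramp(load, 0.0, 1.0)
    high_speed = ramp(speed, 0.4, 1.4)
    r1 = min(low_load, high_speed)
    r2 = high_load
    # Combine the rule activations into a single score in [0, 1].
    return (r1 - r2 + 1.0) / 2.0

def schedule(jobs, resources):
    """Greedily assign each job (runtime at unit speed) to a resource."""
    finish = [0.0] * len(resources)
    for runtime in jobs:
        max_f = max(finish) + 1e-9  # normalize loads to [0, 1]
        scores = [suitability(f / max_f, r["speed"])
                  for f, r in zip(finish, resources)]
        k = scores.index(max(scores))
        finish[k] += runtime / resources[k]["speed"]
    return max(finish)  # makespan

if __name__ == "__main__":
    jobs = [5, 3, 8, 2, 7, 4, 6, 1] * 4
    resources = [{"speed": 1.0}, {"speed": 0.6}, {"speed": 1.4}]
    print(f"makespan: {schedule(jobs, resources):.2f}")
```

The appeal of the fuzzy formulation is that the rule base stays readable and can be retuned as resource capabilities change on the fly, which is exactly the dynamic-environment problem the abstract raises.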


Author(s):  
Wagner Al Alam ◽  
Francisco Carvalho Junior

The efforts to make cloud computing suitable for the requirements of HPC applications have motivated us to design HPC Shelf, a cloud computing platform of services for building and deploying parallel computing systems for large-scale parallel processing. We introduce Alite, the system of contextual contracts of HPC Shelf, aimed at selecting component implementations according to application requirements, the features of the target parallel computing platforms (e.g. clusters), QoS (Quality-of-Service) properties, and cost restrictions. It is evaluated through a small-scale case study employing a component-based framework for matrix multiplication based on the BLAS library.
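
Alite's contract language and resolution algorithm are specific to HPC Shelf; the sketch below is only a generic illustration of contextual-contract-style selection, with hypothetical candidate implementations filtered by hard requirements and ranked by a simple performance-per-cost score:

```python
# Generic sketch of contextual-contract-style selection (illustrative,
# not Alite's actual algorithm): candidate component implementations
# are filtered by hard platform/performance requirements and ranked
# under a cost cap. All candidates and fields are hypothetical.

CANDIDATES = [
    {"name": "blas-ref",    "platform": "cluster", "gflops": 40,  "cost": 1.0},
    {"name": "blas-openmp", "platform": "cluster", "gflops": 160, "cost": 2.5},
    {"name": "blas-gpu",    "platform": "gpu",     "gflops": 900, "cost": 6.0},
]

def select(platform, min_gflops, max_cost):
    """Return the best implementation satisfying the contextual contract."""
    feasible = [c for c in CANDIDATES
                if c["platform"] == platform
                and c["gflops"] >= min_gflops
                and c["cost"] <= max_cost]
    if not feasible:
        raise LookupError("no implementation satisfies the contract")
    # Rank the feasible set by performance per unit cost.
    return max(feasible, key=lambda c: c["gflops"] / c["cost"])

if __name__ == "__main__":
    print(select(platform="cluster", min_gflops=50, max_cost=3.0)["name"])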


2013 ◽  
Vol 756-759 ◽  
pp. 2386-2390
Author(s):  
Yuan Yuan Guo ◽  
Jing Li ◽  
Xin Chun Liu ◽  
Wei Wei Wang

With the rapid development of information science, it has become much harder to deal with data at large scale. Cloud computing has therefore become a hot topic as a new computing model because of its good scalability: it enables customers to acquire computing resources from, and release them back to, cloud service providers according to the current workload. Scaling is performed automatically by the system according to auto-scaling policies that customers register in advance, which greatly reduces users' operational burden. In this paper, we propose a new architecture for an auto-scaling system, apply auto-scaling to a batch-job-based system, and incorporate task deadlines and VM setup time, in addition to substrate resource utilization, as factors in the auto-scaling policy.
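
As a rough illustration of how task deadlines and VM setup time might enter an auto-scaling decision (an assumption-laden sketch, not the paper's policy), the following estimates the VM count needed to clear a batch queue before the tightest deadline, discounting new VMs by their setup time:

```python
# Deadline-aware scaling sketch (illustrative assumptions throughout):
# estimate how many VMs are needed so that the queued work, minus the
# time lost to VM setup, still finishes before the tightest deadline.

VM_SETUP_TIME = 120.0  # seconds before a new VM can take work (assumed)

def vms_needed(pending_runtimes, tightest_deadline, current_vms):
    """Return the VM count required to clear the queue in time,
    capped at current_vms + 63 for this sketch."""
    work = sum(pending_runtimes)
    # New VMs only contribute after setup, so discount their useful time.
    usable_new = max(1e-9, tightest_deadline - VM_SETUP_TIME)
    best = current_vms
    for n in range(current_vms, current_vms + 64):
        capacity = (current_vms * tightest_deadline
                    + (n - current_vms) * usable_new)
        if capacity >= work:
            best = n
            break
    return best

if __name__ == "__main__":
    queue = [300.0, 450.0, 120.0, 600.0]  # estimated job runtimes (s)
    print("target VMs:", vms_needed(queue, tightest_deadline=600.0,
                                    current_vms=1))
```

Scaling down would apply the same arithmetic in reverse, releasing VMs only while the remaining capacity still covers the queued work.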

