Service Virtualization Using a Non-von Neumann Parallel, Distributed, and Scalable Computing Model

2012 ◽ Vol 2012 ◽ pp. 1-10
Author(s): Rao Mikkilineni, Giovanni Morana, Daniele Zito, Marco Di Sano

This paper describes a prototype implementing a high degree of transaction resilience in distributed software systems using a non-von Neumann computing model that exploits parallelism in computing nodes. The prototype incorporates fault, configuration, accounting, performance, and security (FCAPS) management using a signaling network overlay and allows the dynamic control of a set of distributed computing elements in a network. Each node is a computing entity endowed with self-management and signaling capabilities to collaborate with similar nodes in a network. The separation of parallel computing and management channels allows the end-to-end transaction management of computing tasks (provided by the autonomous distributed computing elements) to be implemented as network-level FCAPS management. While the new computing model is operating-system agnostic, a Linux, Apache, MySQL, PHP/Perl/Python (LAMP)-based services architecture is implemented in the prototype to demonstrate end-to-end transaction management with auto-scaling, self-repair, dynamic performance management, and distributed transaction security assurance. The implementation is made possible by a non-von Neumann middleware library providing Linux process management through multi-threaded parallel execution of self-management and signaling abstractions. We did not use hypervisors, virtual machines, or layers of complex virtualization management systems in implementing this prototype.
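To make the separation of computing and management channels concrete, the following minimal Python sketch runs a worker's computation in parallel with a management thread that reacts to FCAPS-style signals arriving on a separate channel. The class name, queue-based signaling, and commands are illustrative assumptions, not the paper's middleware API.

```python
import queue
import threading
import time

# Hypothetical names (ManagedWorker, signal commands); not the paper's middleware API.
class ManagedWorker:
    def __init__(self):
        self.signals = queue.Queue()   # signaling channel, kept separate from the data path
        self.running = True

    def compute(self):
        # The "computing element": stands in for the real task (e.g., serving a request).
        while self.running:
            time.sleep(0.1)

    def manage(self):
        # The management channel: reacts to FCAPS-style signals in parallel with compute().
        while self.running:
            cmd = self.signals.get()
            if cmd == "report-performance":
                print("load: ok")          # performance-management hook
            elif cmd == "stop":
                self.running = False       # configuration / fault-recovery hook

    def start(self):
        threading.Thread(target=self.compute, daemon=True).start()
        threading.Thread(target=self.manage, daemon=True).start()

worker = ManagedWorker()
worker.start()
worker.signals.put("report-performance")   # signal sent over the management channel
worker.signals.put("stop")
time.sleep(0.5)                            # let both threads wind down
```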

2012 ◽ Vol 4 (4) ◽ pp. 37-51
Author(s): Rao Mikkilineni

Cellular organisms have evolved to manage themselves and their interactions with their surroundings with a high degree of resiliency, efficiency, and scalability. Signaling and collaboration among autonomous distributed computing elements accomplishing a common goal with optimal resource utilization are the differentiating characteristics that contribute to the computing model of cellular organisms. By introducing signaling and self-management abstractions in an autonomic computing element called the Distributed Intelligent Managed Element (DIME), the authors improve architectural resiliency, efficiency, and scaling in distributed computing systems. Two implementations of the DIME network architecture are described to demonstrate auto-scaling, self-repair, dynamic performance optimization, and end-to-end distributed transaction management: virtualizing a process in the Linux operating system by converting it into a DIME, and building a new native operating system called Parallax OS, optimized for Intel multi-core processors, which converts each core into a DIME. The authors then examine the implications of the DIME computing model for future cloud computing services and datacenter infrastructure management practices, and discuss its relationship to current discussions of Turing machines, Gödel's theorems, and a call by some computer scientists for no less than a Kuhnian paradigm shift.
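As an informal illustration of what virtualizing a Linux process along these lines might look like, the sketch below wraps ordinary processes in a supervisor loop that restarts failed replicas (self-repair) and adds replicas under load (auto-scaling). The command, thresholds, and load signal are placeholder assumptions, not the authors' Parallax OS or middleware.

```python
import subprocess
import time

# Placeholder command and thresholds; not the authors' Parallax OS or middleware.
COMMAND = ["sleep", "30"]      # stands in for the managed service process
MAX_REPLICAS = 4

def spawn():
    return subprocess.Popen(COMMAND)

replicas = [spawn()]
pending_load = 10              # hypothetical load figure reported over the signaling channel

for _ in range(5):             # a few management cycles, for illustration only
    # Self-repair: restart any replica that has exited.
    replicas = [p if p.poll() is None else spawn() for p in replicas]
    # Auto-scaling: grow the pool while the load per replica is too high.
    if pending_load / len(replicas) > 3 and len(replicas) < MAX_REPLICAS:
        replicas.append(spawn())
    time.sleep(0.2)

print(f"running replicas: {len(replicas)}")
for p in replicas:
    p.terminate()
```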


10.29007/44jw ◽ 2018
Author(s): Rao Mikkilineni, Albert Comparini, Giovanni Morana

Turing's o-machine, discussed in his PhD thesis, can perform all of the usual operations of a Turing machine and, in addition, when it is in a certain internal state, can query an oracle for an answer to a specific question that dictates its further evolution. In his thesis, Turing said, "We shall not go any further into the nature of this oracle apart from saying that it cannot be a machine." There is a host of literature discussing the role of the oracle in AI, brain modeling, computing, and hyper-computing machines. In this paper, we take a broader view of the oracle machine inspired by the genetic computing model of cellular organisms and the self-organizing fractal theory. We describe a specific software architecture implementation that circumvents the halting and undecidability problems in a process workflow computation, introducing the architectural resiliency found in cellular organisms into distributed computing machines. A DIME (Distributed Intelligent Managed Element), recently introduced as the building block of the DIME computing model, exploits the concepts from Turing's oracle machine and extends them to implement a recursive managed distributed computing network, which can be viewed as an interconnected group of such specialized oracle machines, referred to as a DIME network. The DIME network architecture provides architectural resiliency through auto-failover, auto-scaling, live migration, and end-to-end transaction security assurance in a distributed system. We demonstrate these characteristics using prototypes without the complexity introduced by hypervisors, virtual machines, and other layers of ad hoc management software in today's distributed computing environments.
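A rough way to picture the oracle-machine pattern described here: the computation never judges its own progress; an external manager observes it, bounds each attempt, and dictates whether to accept the result, retry, or abort. The Python sketch below (task, timeout, retry policy) is a hypothetical illustration under those assumptions, not the DIME middleware's actual interface.

```python
import multiprocessing as mp

def task(result_queue):
    # The managed computation; it never judges its own progress.
    result_queue.put(sum(range(10_000_000)))

def oracle_run(target, timeout, retries):
    # External manager ("oracle"): bounds each attempt and dictates continue, retry, or abort.
    for attempt in range(1, retries + 1):
        results = mp.Queue()
        worker = mp.Process(target=target, args=(results,))
        worker.start()
        worker.join(timeout)
        if worker.is_alive():                  # the attempt did not halt within its budget
            worker.terminate()                 # the oracle stops it from the outside
            worker.join()
            print(f"attempt {attempt} exceeded its budget; failing over")
            continue
        return results.get()                   # the oracle accepts the result
    raise RuntimeError("oracle aborted the workflow")

if __name__ == "__main__":
    print(oracle_run(task, timeout=5.0, retries=3))
```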


Author(s): Rao Mikkilineni, Giovanni Morana, Ian Seyler

This chapter introduces a new network-centric computing model using the Distributed Intelligent Managed Element (DIME) network architecture (DNA). A parallel signaling network overlay over a network of self-managed von Neumann computing nodes is utilized to implement dynamic fault, configuration, accounting, performance, and security management of both the nodes and the network based on business priorities, workload variations, and latency constraints. Two implementations are described that demonstrate the feasibility of the new computing model: one provides service virtualization at the Linux process level and the other provides virtualization of a core in a many-core processor. Both point to an alternative way to assure end-to-end transaction reliability, availability, performance, and security in distributed cloud computing, reducing the current complexity in configuring and managing virtual machines and making the implementation of a federation of clouds simpler.


2012 ◽ pp. 1929-1942
Author(s): Mehdi Sheikhalishahi, Manoj Devare, Lucio Grandinetti, Maria Carmen Incutti

Cloud computing is a new kind of computing model and technology introduced by industry leaders in recent years. It is now the center of attention because of its many attractive promises. However, it also raises challenges and arguments among computing leaders about the future of computing models and infrastructure; for example, whether it will replace other computing technologies such as the grid is an interesting question. In this chapter, we address this issue by considering the original grid architecture and show how cloud can be placed within the grid architecture to complement it. In doing so, we identify some remaining challenges to be addressed.


This chapter proposes an application of simulation modelling to frame the relationships between healthcare, patient organization management, and patient co-created healthcare. For this purpose, it presents a case study within the Italian context, adopting a methodological approach that combines performance management and system dynamics. After background information, the chapter introduces the methodology and explains the modelling steps, undertaken from the privileged perspective of a patient organization. The model building proceeds by progressive approximations. A tailored dynamic performance management framework identifies key variables and links within the system. Then a stock-and-flow structure deepens the analysis by depicting processes of accumulation of material, money, and information, and a comprehensive loop analysis describes the system's dynamics in terms of interacting feedback structures. Finally, quantitative simulations concerning the mutual development of patient organizations and healthcare allow behavior patterns to be graphed under alternative scenarios.
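For readers unfamiliar with stock-and-flow structures, the toy Python sketch below shows the basic mechanics: a single stock accumulates an inflow and drains an outflow, with a reinforcing feedback loop linking the stock back to the inflow. All variables, values, and equations are invented for illustration and are not the chapter's actual model.

```python
# All variables, values, and equations are invented for illustration.
dt = 0.25                      # time step (years)
members = 100.0                # stock: active members of a patient organization
service_quality = 0.5          # auxiliary: co-created service quality (0..1)

for step in range(int(10 / dt)):              # simulate ten years
    recruitment = 20 * service_quality        # inflow driven by quality (reinforcing loop)
    attrition = 0.10 * members                # outflow proportional to the stock
    members += (recruitment - attrition) * dt
    # Quality improves slowly as more members co-create care, saturating at 1.0.
    service_quality = min(1.0, service_quality + 0.002 * members * dt / 100)

print(f"members after 10 years: {members:.0f}, service quality: {service_quality:.2f}")
```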


Author(s): Pooja Arora, Anurag Dixit

Purpose: Advancements in cloud computing have gained the attention of several researchers seeking to provide on-demand network access to users with shared resources. Cloud computing is an important research direction that can provide platforms and software to clients over the internet. However, handling a huge number of tasks in a cloud infrastructure is complicated, so a load balancing (LB) method is needed for allocating tasks to virtual machines (VMs) without affecting system performance. This paper aims to develop a technique for LB in the cloud using optimization algorithms.

Design/methodology/approach: This paper proposes a hybrid optimization technique, named the elephant herding-based grey wolf optimizer (EHGWO), in the cloud computing model for LB by determining the optimal VMs for executing the reallocated tasks. The proposed EHGWO is derived by incorporating elephant herding optimization (EHO) into the grey wolf optimizer (GWO) such that tasks are removed from an overloaded VM and reallocated while maintaining system performance. Here, the load of the physical machine (PM) and the capacity and load of each VM are computed to decide whether LB has to be done. Moreover, two pick factors, namely the task pick factor (TPF) and the VM pick factor (VPF), are considered for choosing the tasks to reallocate from overloaded to underloaded VMs. The proposed EHGWO decides which VM a task is allocated to based on newly derived fitness functions.

Findings: The minimum load and makespan obtained by the existing methods, constraint measure-based LB (CMLB), the fractional dragonfly-based LB algorithm (FDLA), EHO, GWO, and the proposed EHGWO for the maximum number of VMs are illustrated. The proposed EHGWO attained the minimum makespan, 814,264 ns, and the minimum load, 0.0221. Meanwhile, the makespan values attained by the existing CMLB, FDLA, EHO, and GWO are 3,186,896 ns, 2,309,140 ns, 1,804,851 ns, and 1,073,863 ns, respectively. The minimum load values computed by the existing CMLB, FDLA, EHO, and GWO are 0.0587, 0.026, 0.0248, and 0.0234, whereas the proposed EHGWO attains a minimum load of 0.0221. Hence, the proposed EHGWO attains the best performance compared with the existing techniques.

Originality/value: This paper illustrates the proposed LB algorithm using EHGWO in a cloud computing model with two pick factors, TPF and VPF. To initiate LB, the tasks assigned to an overloaded VM are reallocated to underloaded VMs, with capacity and load taken into account for the reallocation. Based on TPF and VPF, tasks are reallocated from VMs using the proposed EHGWO. The proposed EHGWO is developed by integrating the EHO and GWO algorithms using a new fitness function formulated from the load of the VM, migration cost, capacity of the VM, and makespan. The proposed EHGWO is analyzed based on load and makespan.
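To make the reallocation step concrete, the following simplified Python sketch scores assignments with a fitness that mixes makespan and load imbalance, and greedily moves tasks from the overloaded VM to the least-loaded one. The greedy move is only a stand-in for the EHGWO metaheuristic, and the task sizes, capacities, weights, and pick rules are invented for illustration.

```python
# Simplified stand-in for the EHGWO-driven reallocation; all numbers are hypothetical.
vm_capacity = [100.0, 100.0, 60.0]                 # MIPS-like capacities (assumed)
vm_tasks = [[30.0, 40.0, 50.0], [10.0], [5.0]]     # task lengths currently on each VM

def load(vm):
    return sum(vm_tasks[vm]) / vm_capacity[vm]

def fitness():
    loads = [load(v) for v in range(len(vm_capacity))]
    makespan = max(loads)                          # finishing time of the busiest VM
    imbalance = max(loads) - min(loads)
    return 0.5 * makespan + 0.5 * imbalance        # lower is better (weights assumed)

overloaded = max(range(len(vm_capacity)), key=load)
while load(overloaded) > 1.0 and vm_tasks[overloaded]:
    task = max(vm_tasks[overloaded])                 # crude stand-in for the task pick factor
    target = min(range(len(vm_capacity)), key=load)  # crude stand-in for the VM pick factor
    if target == overloaded:
        break
    vm_tasks[overloaded].remove(task)
    vm_tasks[target].append(task)

print("final loads:", [round(load(v), 2) for v in range(len(vm_capacity))])
print("fitness:", round(fitness(), 3))
```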

