Jenova: New Approach on Concurrency Control in Web Service Transaction Management

Author(s):  
Heqing Guan ◽  
Shuchao Wan ◽  
Jun Wei


Author(s):  
Yanzhen Zou ◽  
Lu Zhang ◽  
Yan Li ◽  
Bing Xie ◽  
Hong Mei

Web services retrieval is a critical step for reusing existing services in the SOA paradigm. In the UDDI registry, traditional category-based approaches have been used to locate candidate services. However, these approaches usually achieve relatively low precision because some candidate Web Services in the result set cannot actually provide suitable operations for users. In this article, we present a new approach that improves this kind of category-based Web Services retrieval by refining the coarse matching results step by step. The refinement is based on the idea that the operation specification is central to service reuse. A Web Service is therefore examined from a multiple-instance view in our approach: a service is labeled as positive if and only if at least one operation it provides is usable to the user; otherwise, it is labeled as negative. Experimental results demonstrate that our approach can increase retrieval precision to a certain extent after one or two rounds of refinement.
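The multiple-instance labeling rule described above can be sketched in a few lines of Python. This is an illustration, not the authors' implementation; the operation names and the `is_usable` predicate are hypothetical stand-ins for whatever matching criterion the retrieval system applies.

```python
def label_service(service_ops, is_usable):
    """Multiple-instance view: a service is positive if and only if
    at least one of its operations is usable; otherwise negative."""
    return any(is_usable(op) for op in service_ops)

# Hypothetical usability predicate: a simple keyword match on the
# operation name (a real system would match operation specifications).
usable = lambda op: "getWeather" in op

print(label_service(["getWeather", "setConfig"], usable))  # True  -> positive
print(label_service(["setConfig", "deleteUser"], usable))  # False -> negative
```

The asymmetry is the point: one usable operation is enough to keep a service in the refined result set, while a service is discarded only when every operation fails the test.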


1992 ◽  
Vol 01 (03n04) ◽  
pp. 579-617 ◽  
Author(s):  
MAREK RUSINKIEWICZ ◽  
PIOTR KRYCHNIAK ◽  
ANDRZEJ CICHOCKI

In many application areas the information that may be of interest to a user is stored under the control of multiple, autonomous database systems. To support global transactions in a multidatabase environment, we must coordinate the activities of multiple Database Management Systems that were designed for independent, stand-alone operation. The autonomy and heterogeneity of these systems present a major impediment to the direct adaptation of transaction management mechanisms developed for distributed databases. In this paper we introduce a transaction model designed for a multidatabase environment. A multidatabase transaction is defined by providing a set of (local) sub-transactions, together with their precedence and dataflow requirements. Additionally, the transaction designer may specify failure atomicity and execution atomicity requirements of the multidatabase transaction. These high-level specifications are then used by the scheduler of a multidatabase transaction to assure that its execution satisfies the constraints imposed by the semantics of the application. Uncontrolled interleaving of multidatabase transactions may lead to the violation of interdatabase integrity constraints. We discuss the issues involved in a concurrent execution of multidatabase transactions and propose a new concurrency control correctness criterion that is less restrictive than global serializability. We also show how the multidatabase SQL can be extended to allow the user to specify multidatabase transactions in a nonprocedural way.
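A multidatabase transaction as defined above is essentially a set of sub-transactions plus ordering, dataflow, and atomicity metadata that a scheduler consumes. The following Python sketch (an assumption about the shape of such a specification, not the paper's actual interface) shows one way to represent it and derive an execution order consistent with the precedence requirements via a topological sort.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class SubTransaction:
    name: str
    site: str  # the local, autonomous DBMS on which this piece executes

@dataclass
class MultidatabaseTransaction:
    subs: list                # SubTransaction objects
    precedence: list          # (before, after) name pairs
    vital: set = field(default_factory=set)  # failure atomicity: these must commit

    def schedule(self):
        """Return one execution order consistent with the precedence
        requirements (Kahn's topological sort)."""
        names = [s.name for s in self.subs]
        indegree = {n: 0 for n in names}
        successors = {n: [] for n in names}
        for before, after in self.precedence:
            successors[before].append(after)
            indegree[after] += 1
        ready = deque(n for n in names if indegree[n] == 0)
        order = []
        while ready:
            n = ready.popleft()
            order.append(n)
            for m in successors[n]:
                indegree[m] -= 1
                if indegree[m] == 0:
                    ready.append(m)
        if len(order) != len(names):
            raise ValueError("cyclic precedence requirements")
        return order

# A two-bank funds transfer: the credit may not run before the debit,
# and the credit is vital (its failure aborts the global transaction).
t = MultidatabaseTransaction(
    subs=[SubTransaction("debit", "bankA"), SubTransaction("credit", "bankB")],
    precedence=[("debit", "credit")],
    vital={"credit"},
)
print(t.schedule())  # ['debit', 'credit']
```

Dataflow requirements (passing values between sub-transactions) and the relaxed correctness criterion would layer on top of this skeleton; they are omitted here for brevity.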


Author(s):  
Ryan Saptarshi Ray

Current parallel programming relies on low-level constructs such as threads and explicit synchronization (for example, locks, semaphores, and monitors) to coordinate thread execution, which makes these programs difficult to design, program, and debug. In this paper we present Software Transactional Memory (STM), a promising new approach to programming shared-memory parallel processors. It is a concurrency control mechanism that is widely considered easier for programmers to use than other mechanisms such as locking. It allows portions of a program to execute in isolation, without regard to other, concurrently executing tasks. A programmer can reason about the correctness of code within a transaction and need not worry about complex interactions with other, concurrently executing parts of the program.
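To make the isolation guarantee concrete, here is a deliberately minimal STM sketch in Python: transactions buffer writes, record the versions of the variables they read, and at commit time validate those versions under a lock, retrying on conflict. This is a toy model of the optimistic read/validate/commit pattern, not any particular STM implementation from the paper.

```python
import threading

_commit_lock = threading.Lock()

class TVar:
    """A transactional variable: a value plus a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0

def atomically(tx_fn):
    """Run tx_fn(read, write) in isolation; retry on conflict."""
    while True:
        reads = {}    # TVar -> version observed at first read
        writes = {}   # TVar -> buffered new value (not yet visible)
        def read(tv):
            if tv in writes:
                return writes[tv]          # read-your-own-writes
            reads.setdefault(tv, tv.version)
            return tv.value
        def write(tv, val):
            writes[tv] = val
        result = tx_fn(read, write)
        with _commit_lock:
            # Validate: nothing we read was modified concurrently.
            if all(tv.version == v for tv, v in reads.items()):
                for tv, val in writes.items():
                    tv.value = val
                    tv.version += 1
                return result
        # Conflict detected: discard the buffers and re-execute.

# Usage: an atomic transfer between two accounts. No lock ordering to
# reason about; the transaction either commits whole or retries.
a, b = TVar(100), TVar(0)
def transfer(read, write):
    amount = 30
    write(a, read(a) - amount)
    write(b, read(b) + amount)
atomically(transfer)
print(a.value, b.value)  # 70 30
```

The programmer reasons only about `transfer` in isolation; the runtime is responsible for detecting interference with concurrent transactions and re-executing.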


Author(s):  
Krzysztof Ostrowski ◽  
Ken Birman ◽  
Danny Dolev

Existing web service notification and eventing standards are useful in many applications, but they have serious limitations that make them ill-suited for large-scale deployments, or as a middleware or component-integration technology in today’s data centers. For example, it is not possible to use IP multicast or for recipients to forward messages to others; scalable notification trees must be set up manually; and no end-to-end security, reliability, or QoS guarantees can be provided. This chapter proposes an architecture based on object-oriented design principles that is free of such limitations, extremely modular and extensible, and that can serve as a basis for extending and complementing the existing standards. The new approach emerges from the authors’ work on Live Distributed Objects, a new programming model that brings object-orientation into the realm of distributed computing.
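The modularity the chapter argues for can be hinted at with a small compositional sketch: a base notification channel and an ordering layer stacked on top of it through a shared interface. The class names and interface are hypothetical illustrations of layered, object-oriented protocol composition, not the Live Distributed Objects API.

```python
class Channel:
    """A minimal notification channel: subscribers receive published events."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def publish(self, event):
        for cb in self.subscribers:
            cb(event)

class OrderedChannel:
    """Wraps any channel-shaped object and stamps each event with a
    sequence number -- one layer of the kind of modular protocol stack
    (ordering, reliability, security) the architecture composes."""
    def __init__(self, inner):
        self.inner = inner
        self.seq = 0
    def subscribe(self, callback):
        self.inner.subscribe(callback)
    def publish(self, event):
        self.seq += 1
        self.inner.publish((self.seq, event))

received = []
ch = OrderedChannel(Channel())   # stack the layers at composition time
ch.subscribe(received.append)
ch.publish("hello")
ch.publish("world")
print(received)  # [(1, 'hello'), (2, 'world')]
```

Because each layer exposes the same endpoint shape, guarantees can be added or swapped without touching the application code above or the transport below.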

