Server-side workflow execution using data grid technology for reproducible analyses of data-intensive hydrologic systems

2016 ◽  
Vol 3 (4) ◽  
pp. 163-175 ◽  
Author(s):  
Bakinam T. Essawy ◽  
Jonathan L. Goodall ◽  
Hao Xu ◽  
Arcot Rajasekar ◽  
James D. Myers ◽  
...  
Author(s):  
Gokop Goteng ◽  
Ashutosh Tiwari ◽  
Rajkumar Roy

Emerging grid technology provides a secure platform on which multidisciplinary experts in the security intelligence profession can collaborate to fight global terrorism. This chapter develops a grid architecture and an implementation strategy for connecting the dots between security agents such as the CIA, the FBI, the police, customs officers, and the transport industry, so that they can share data and information on terrorists and their movements. The major grid components featured in the architecture are the grid security portal, the data grid, the computational grid, the semantic grid, and the collaboratory. The challenges of implementing this architecture are conflicting laws, cooperation among governments, incomplete information on terrorist networks, and interoperability problems.


Author(s):  
Eleana Asimakopoulou ◽  
Chimay J. Anumba ◽  
Bouchlaghem

Much work is under way within the Grid technology community on services that foster collaboration by integrating and exploiting multiple autonomous, distributed data sources through a seamless and flexible virtualized interface. However, several obstacles arise in the design and implementation of such services. A notable obstacle is how clients within a data Grid environment can be kept automatically informed of the latest relevant changes to data entered or committed in one or more autonomous, distributed datasets. The view taken here is that keeping interested users informed of relevant changes across their domain of interest enlarges their decision-making space, which in turn increases the opportunity for a more informed decision to be made. With this in mind, the chapter describes in detail the model architecture and its implementation for automatically keeping interested users informed of relevant, up-to-date data.
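The chapter's model architecture is not reproduced here, but the core idea it rests on — clients registering interest in a dataset and being called back when new data is committed — is a publish/subscribe pattern that can be sketched in a few lines of Python. All class and dataset names below are illustrative, not taken from the chapter:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class DataGridNotifier:
    """Toy notification registry: clients subscribe to a dataset name and
    are called back whenever new data is committed to that dataset."""
    _subscribers: Dict[str, List[Callable[[str, dict], None]]] = field(
        default_factory=dict
    )

    def subscribe(self, dataset: str, callback: Callable[[str, dict], None]) -> None:
        # Register a client's interest in one dataset.
        self._subscribers.setdefault(dataset, []).append(callback)

    def commit(self, dataset: str, record: dict) -> None:
        # In a real data grid the commit would land in a remote replica;
        # here we simply fan the change out to every interested client.
        for callback in self._subscribers.get(dataset, []):
            callback(dataset, record)

# Example: one client watching a (hypothetical) dataset
received = []
notifier = DataGridNotifier()
notifier.subscribe("hydrology/flows", lambda ds, rec: received.append((ds, rec)))
notifier.commit("hydrology/flows", {"station": "A1", "value": 3.2})
```

A production service would replace the in-process callback list with durable subscriptions and asynchronous delivery, but the contract — subscribe once, get notified on every relevant commit — is the same one the chapter's architecture automates.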



Author(s):  
Anna Vadimovna Lapkina ◽  
Andrew Alexandrovitch Petukhov

The problem of automatically classifying requests, as well as determining server-side routing rules for them, is directly connected with analysis of the user interface of dynamic web pages. This problem can be solved at the browser level, since the browser holds complete information about the possible requests arising from interaction between the user and the web application. In this paper, we suggest using data from the request execution context in the web client to extract classification features. A request context, or request trace, is a collection of additional identification data that can be obtained by observing the execution of a web page's JavaScript code or the changes in user interface elements that result from activating them. Such data include, for example, the position and style of the element that caused the client request, the JavaScript function call stack, and the changes in the page's DOM tree after the request was initiated. In this study, an implementation based on the Chrome DevTools Protocol is used to solve the problem at the browser level and to automate request trace collection.
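The trace features the abstract enumerates — triggering element position and style, JavaScript call stack, and post-request DOM mutations — can be pictured as a single record per request. The sketch below is a hypothetical container and a trivial numeric encoding for a downstream classifier; the field names and encoding are assumptions for illustration, not the paper's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RequestTrace:
    """Illustrative container for the classification features described above.
    Field names are assumptions, not taken from the paper."""
    url: str
    element_position: Tuple[int, int]   # page coordinates of the triggering element
    element_style: dict                 # computed CSS properties of that element
    js_call_stack: List[str]            # JS frames active when the request fired
    dom_mutations: List[str] = field(default_factory=list)  # DOM changes afterwards

    def feature_vector(self) -> List[float]:
        # A minimal numeric encoding of the trace for a classifier.
        return [
            float(self.element_position[0]),
            float(self.element_position[1]),
            float(len(self.js_call_stack)),
            float(len(self.dom_mutations)),
        ]

# Example trace for a hypothetical search request
trace = RequestTrace(
    url="/api/search",
    element_position=(120, 480),
    element_style={"display": "inline-block"},
    js_call_stack=["onClick", "sendQuery", "XMLHttpRequest.send"],
    dom_mutations=["#results childList"],
)
```

In the study itself these fields would be populated by listening to Chrome DevTools Protocol events (network, runtime, and DOM domains) rather than constructed by hand as above.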

