Leveraging open-source development in large-scale science data management systems

2002 ◽  
Author(s):  
Mike Moore ◽  
Dawn Lowe

2003 ◽
Vol 33 (3) ◽  
pp. 422-429 ◽  
Author(s):  
John Nelson

Significant advances have been made that integrate landscape issues in forest-level models. These advanced models are designed to simulate and evaluate economic, ecological, and social goals that are included in the management of forests. The application of multiple-objective heuristics such as tabu search and simulated annealing, combined with remarkable advances in computing power, now allows us to explore highly complex management scenarios over long time horizons and over vast geographic scales. While the power of these decision support systems is highly appealing, and even intoxicating, we still face three sobering challenges on the path towards generating credible forecasts. First, advanced data acquisition and data management systems are needed to support these systems. Data management systems must have high storage capacity, be capable of rapid updates, and accommodate a seemingly endless demand for queries from customers, government agencies, and the public. Planning is an interdisciplinary, hierarchical process, and team members have different data demands, depending on where they fit in the hierarchy. Second, the models must be verified. Multiple-objective models have dozens of parameters, and when these are combined with random search techniques, they become difficult to understand and replicate. Thorough sensitivity analysis is needed to test model parameters, goal weights, and assumptions of uncertainty. Finally, our ability to formulate and run large-scale, long-term forecasting models often exceeds the scientific credibility of the data, especially for complex forest ecosystems. In the absence of critical thinking, such powerful models can become dangerous weapons.
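The multiple-objective heuristics mentioned above can be illustrated with a minimal sketch. The toy harvest-scheduling problem below, with its invented stand volumes, even-flow target, adjacency penalty, and goal weights, is purely hypothetical and not drawn from any model described here; it shows only the general shape of a weighted-sum simulated annealing search, including why dozens of interacting parameters and random moves make such models hard to replicate.

```python
import math
import random

# Hypothetical toy problem: assign each of N_STANDS forest stands a
# harvest period so that (1) volume flow is even across periods and
# (2) adjacent stands are not cut in the same period. All numbers here
# are invented for illustration.
random.seed(42)

N_STANDS, N_PERIODS = 20, 5
volume = [[random.uniform(50.0, 150.0) for _ in range(N_PERIODS)]
          for _ in range(N_STANDS)]
# Rough even-flow target: average available volume per period, halved.
TARGET = sum(sum(row) for row in volume) / N_PERIODS / 2


def objective(schedule, w_even=1.0, w_adj=0.5):
    """Weighted sum of two goals; lower is better. schedule[s] = period."""
    flow = [0.0] * N_PERIODS
    for s, p in enumerate(schedule):
        flow[p] += volume[s][p]
    # Goal 1: deviation from the even-flow target in each period.
    even_dev = sum(abs(f - TARGET) for f in flow)
    # Goal 2: count of "adjacent" stands (s, s+1) cut in the same period.
    adj_pen = sum(1 for s in range(N_STANDS - 1)
                  if schedule[s] == schedule[s + 1])
    return w_even * even_dev + w_adj * adj_pen


def anneal(iters=5000, t0=100.0, cooling=0.999):
    """Simulated annealing over harvest schedules."""
    sched = [random.randrange(N_PERIODS) for _ in range(N_STANDS)]
    cur = best = objective(sched)
    best_sched, t = list(sched), t0
    for _ in range(iters):
        # Propose a move: reassign one random stand to a random period.
        s, old = random.randrange(N_STANDS), None
        old, sched[s] = sched[s], random.randrange(N_PERIODS)
        cand = objective(sched)
        # Accept improvements always; accept worse moves with
        # Boltzmann probability exp(-(cand - cur) / t).
        if cand <= cur or random.random() < math.exp((cur - cand) / t):
            cur = cand
            if cur < best:
                best, best_sched = cur, list(sched)
        else:
            sched[s] = old  # reject the move
        t *= cooling  # geometric cooling schedule
    return best_sched, best
```

Note how the result depends on the goal weights (`w_even`, `w_adj`), the cooling schedule, and the random seed, which is exactly the replication and sensitivity-analysis problem the abstract raises.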


2008 ◽  
Vol 33 (7-8) ◽  
pp. 597-610 ◽  
Author(s):  
Katja Hose ◽  
Armin Roth ◽  
André Zeitz ◽  
Kai-Uwe Sattler ◽  
Felix Naumann

2021 ◽  
Vol 49 (4) ◽  
pp. 18-23 ◽
Author(s):  
Suman Karumuri ◽  
Franco Solleza ◽  
Stan Zdonik ◽  
Nesime Tatbul

Observability has been gaining importance as a key capability in today's large-scale software systems and services. Motivated by current experience in industry exemplified by Slack and as a call to arms for database research, this paper outlines the challenges and opportunities involved in designing and building Observability Data Management Systems (ODMSs) to handle this emerging workload at scale.


2020 ◽  
Author(s):  
Andrew Conway ◽  
Adam Leadbetter ◽  
Tara Keena

<p>Integration of data management systems is a persistent problem in European projects that span multiple agencies. Months, if not years, of project time are often expended on the integration of disparate database structures, data types, methodologies and outputs. Moreover, this work is usually confined to a single effort, meaning it is needlessly repeated on subsequent projects. The legacy effect of removing these barriers could therefore yield monetary and time savings for all involved, far beyond a single cross-jurisdictional project. </p><p>The European Union’s INTERREG VA Programme has funded the COMPASS project to better manage marine protected areas (MPA) in peripheral areas. Involving five organisations spread across two nations, the project has developed a cross-border network for marine monitoring. Three of those organisations are UK-based and bound for Brexit (the Agri-Food and Biosciences Institute, Marine Scotland Science and the Scottish Association of Marine Science). With that network under construction, significant effort has been devoted to harmonizing data management processes and procedures between the partners. </p><p>A data management quality management framework (DM-QMF) was introduced to guide this harmonization and ensure adequate quality controls would be enforced. As lead partner on data management, the Irish Marine Institute (MI) initially shared guidelines for infrastructure, architecture and metadata. The implementation of those requirements was then left to the other four partners, with the MI acting as facilitator. This led to the following being generated for each process in the project:</p><ul><li>Data management plan: information on how and what data were to be generated, as well as where they would be stored.</li><li>Flow diagrams: diagrammatic overview of the flow of data through the project.</li><li>Standard Operating Procedures: detailed explanatory documents on the precise workings of a process.</li></ul><p>Data management processes were allowed to evolve naturally out of a need to adhere to this set standard. Organisations were able to work within their operational limitations, without being required to alter their existing procedures, but were encouraged to learn from each other. Very quickly it was found that there were similarities in processes where previously it was thought there were significant differences. This process of sharing data management information has created mutually beneficial synergies and enabled the convergence of procedures within the separate organisations. </p><p>The downstream data management synergies that COMPASS has produced have already taken effect. Sister INTERREG VA projects, SeaMonitor and MarPAMM, have felt the benefits. The same data management systems cultivated as part of the COMPASS project are being reused, while the groundwork in creating strong cross-boundary channels of communication and cooperation is saving significant amounts of time in project coordination.</p><p>Through data management, personal and institutional relationships have been strengthened, both of which should persist beyond the project terminus in 2021, well into a post-Brexit Europe. The COMPASS project has been an exemplar of how close collaboration can persist and thrive in a changing political environment, in spite of the ongoing uncertainty surrounding Brexit.</p>

