MOLE: A data management application based on a protein production data model

2004 ◽  
Vol 58 (2) ◽  
pp. 285-289 ◽  
Author(s):  
Chris Morris ◽  
Peter Wood ◽  
Susanne L. Griffiths ◽  
Keith S. Wilson ◽  
Alun W. Ashton

Author(s):  
Arie Wisianto ◽  
Hidayatus Saniya ◽  
Oki Gumilar

Development of web-based GIS applications often incurs high costs for base map datasets and software licenses. A web-based GIS Pipeline Data Management Application can instead be developed using Google Maps datasets combined with available local spatial datasets, resulting in comprehensive spatial information. SharpMap is an easy-to-use mapping library for web and desktop applications. It provides access to, and enables spatial querying of, many types of GIS data. The engine is written in C# on the .NET 2.0 framework, which simplifies integration with pipeline data models such as PODS using .NET technology. SharpMap enables the development of WMS and web services for serving pipeline data management information in internet/intranet web-based applications. OpenLayers is used to integrate the pipeline data model and the Google Maps dataset on a single map display with a user-friendly, dynamic interface. Together, SharpMap and OpenLayers create a powerful web-based Pipeline Data Management GIS application by combining specific information from the pipeline data model with comprehensive Google Maps satellite imagery, without publishing private information from local datasets. The combination of SharpMap, OpenLayers, Google Maps datasets, and .NET technology results in a low-cost yet powerful web-based Pipeline Data Management GIS solution. Once the impact zone of an event is determined, its consequences can be calculated and its risk estimated.
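The WMS services mentioned in the abstract serve map imagery through standardized HTTP GetMap requests, which is what lets OpenLayers overlay the pipeline layer on Google Maps tiles. As a minimal sketch of such a request (the endpoint URL and layer name below are hypothetical, not taken from the paper):

```python
from urllib.parse import urlencode

def build_getmap_url(base_url, layer, bbox, width, height):
    """Assemble a WMS 1.1.1 GetMap request for a single layer.

    bbox is (min_x, min_y, max_x, max_y) in the requested SRS.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
        "TRANSPARENT": "TRUE",  # so the pipeline layer overlays base imagery
    }
    return base_url + "?" + urlencode(params)

# Hypothetical WMS endpoint and layer name, for illustration only.
url = build_getmap_url(
    "http://example.com/wms", "pipelines",
    (106.0, -7.0, 107.0, -6.0), 512, 512,
)
```

A client such as OpenLayers issues requests of this shape for each visible map extent, so only rendered images — never the raw private datasets — leave the server.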


2015 ◽  
Vol 713-715 ◽  
pp. 2418-2422
Author(s):  
Lei Rao ◽  
Fan De Yang ◽  
Xin Ming Li ◽  
Dong Liu

Data management has passed through three stages: manual management, file systems, and database systems. In this paper, equipment data is managed using a combination of the HDFS file system and the HBase database: the principles of HBase data management are studied, the read and write processes for equipment data are established, and the data model of the equipment database is designed on top of HBase.
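HBase stores rows sorted lexicographically by row key, so a data model for equipment data typically encodes the equipment identifier and a timestamp into the key to keep related readings contiguous. A minimal sketch of such a key scheme (the field names and padding width are illustrative assumptions, not taken from the paper):

```python
def make_row_key(equipment_id: str, timestamp: int) -> str:
    """Compose an HBase-style row key for equipment time-series data.

    Subtracting the timestamp from a fixed upper bound and zero-padding
    it makes rows for one piece of equipment sort newest-first, since
    HBase orders rows lexicographically by key.
    """
    max_ts = 10**13  # upper bound on epoch-millisecond timestamps
    reversed_ts = max_ts - timestamp
    return f"{equipment_id}#{reversed_ts:013d}"

# A later reading sorts before an earlier one for the same equipment,
# so a prefix scan returns the most recent data first.
k_new = make_row_key("pump-01", 1_600_000_000_000)
k_old = make_row_key("pump-01", 1_500_000_000_000)
```

Designing the key this way lets a scan over the `equipment_id` prefix serve "latest readings" queries without a secondary index.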


Author(s):  
Catherine Jayapandian ◽  
Chien-Hung Chen ◽  
Aman Dabir ◽  
Samden Lhatoo ◽  
Guo-Qiang Zhang ◽  
...  

Data ◽  
2019 ◽  
Vol 4 (2) ◽  
pp. 83 ◽  
Author(s):  
Timm Fitschen ◽  
Alexander Schlemmer ◽  
Daniel Hornung ◽  
Henrik tom Wörden ◽  
Ulrich Parlitz ◽  
...  

We present CaosDB, a Research Data Management System (RDMS) designed to ensure seamless integration of inhomogeneous data sources and repositories of legacy data in a FAIR way. Its primary purpose is the management of data from the biomedical sciences, from both simulations and experiments, across the complete research data lifecycle. An RDMS for this domain faces particular challenges: research data arise in huge amounts, from a wide variety of sources, and traverse a highly branched path of further processing. To be accepted by its users, an RDMS must be built around the workflows and practices of scientists and thus support changes in workflow and data structure. Nevertheless, it should encourage and support the development and observance of standards and furthermore facilitate the automation of data acquisition and processing with specialized software. The storage data model of an RDMS must reflect these complexities with appropriate semantics and ontologies while offering simple methods for finding, retrieving, and understanding relevant data. We show how CaosDB responds to these challenges and give an overview of its data model, the CaosDB Server, and its easy-to-learn CaosDB Query Language. We briefly discuss the status of the implementation, how we currently use CaosDB, and how we plan to use and extend it.
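The data model sketched above centers on typed records carrying named properties that can be found by simple queries. The idea can be illustrated in plain Python (this is a conceptual sketch only, not CaosDB's actual client API or query language; the record types and property names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    """A typed record with a free-form set of named properties."""
    record_type: str
    properties: dict = field(default_factory=dict)

def find(records, record_type, **constraints):
    """Return records of the given type whose properties match all constraints,
    mimicking the shape of a simple 'find by type and property' query."""
    return [
        r for r in records
        if r.record_type == record_type
        and all(r.properties.get(k) == v for k, v in constraints.items())
    ]

records = [
    Record("Experiment", {"species": "zebrafish", "year": 2019}),
    Record("Experiment", {"species": "mouse", "year": 2018}),
    Record("Simulation", {"model": "FitzHugh-Nagumo"}),
]
hits = find(records, "Experiment", species="zebrafish")
```

Because properties are attached per record rather than fixed in a rigid schema, the structure can evolve with the scientists' workflows, which is the flexibility the abstract emphasizes.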


Author(s):  
Sumit Singh ◽  
Essam Shehab ◽  
Nigel Higgins ◽  
Kevin Fowler ◽  
Dylan Reynolds ◽  
...  

A Digital Twin (DT) is a digital replica of a real-world product, process, or system, and an ideal vehicle for data-driven optimisation across the phases of the product lifecycle. With the rapid growth in DT research, data management for digital twins is a challenging field for both industry and academia. The DT data management challenges analysed in this article are data variety, big data and data mining, and DT dynamics. The current research proposes a novel DT ontology model and an accompanying methodology to address these challenges. The DT ontology model captures and models the conceptual knowledge of the DT domain. Using the proposed methodology, this domain knowledge is transformed into a minimum data model structure used to map, query, and manage databases for DT applications. The approach is validated with a case study of a Condition-Based Monitoring (CBM) DT application. Query formulation around the minimum data model structure further demonstrates the effectiveness of the approach: queries return accurate results while maintaining semantics and conceptual relationships along the DT lifecycle. The method not only retains knowledge along the DT lifecycle but also helps users and developers design, maintain, and query databases effectively for DT applications and systems of different scales and complexities.
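The step of deriving a minimum data model from an ontology can be sketched as a mechanical mapping: each concept becomes a table whose columns are the concept's data attributes, and each relation to another concept becomes a foreign-key column. The concept and attribute names below are hypothetical, not taken from the paper's CBM case study:

```python
# A toy condition-monitoring ontology: concepts with data attributes
# and object-type relations to other concepts.
ontology = {
    "Asset":   {"attributes": ["asset_id", "name"], "relations": {}},
    "Sensor":  {"attributes": ["sensor_id", "type"],
                "relations": {"mounted_on": "Asset"}},
    "Reading": {"attributes": ["reading_id", "value", "timestamp"],
                "relations": {"measured_by": "Sensor"}},
}

def minimum_data_model(ontology):
    """Derive a minimal table schema from the ontology: one table per
    concept, one column per attribute, one foreign key per relation."""
    schema = {}
    for concept, spec in ontology.items():
        columns = list(spec["attributes"])
        columns += [f"{rel}_fk" for rel in spec["relations"]]
        schema[concept] = columns
    return schema

schema = minimum_data_model(ontology)
```

Keeping the mapping this systematic is what preserves the ontology's conceptual relationships when the resulting tables are queried: every foreign key traces back to a named relation in the domain model.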


Author(s):  
Alhad A. Joshi

Over the past decade, Computer Aided Engineering (simulation) has experienced explosive growth, becoming a significant enabler for: 1. validating product design; 2. providing low-cost methods for exploring a variety of product design alternatives; 3. optimizing parts for better service performance; 4. reducing dependence on physical testing; 5. reducing warranty costs; 6. achieving faster time to market. This rapid growth in the number of simulations performed and the amount of data generated, in the absence of any significant data and process management initiatives, has led to considerable inefficiencies in the CAE domain. Many companies now recognize the need to manage their CAE process and data, and wish to leverage their existing PDM systems as the primary repositories of CAE data. Some major issues are: 1. a PDM data model is needed to support CAE; 2. the CAE data model can be very complex; 3. there is an immense variety of CAE applications and data types; 4. many CAE simulations require access to physical test data for input and correlation; 5. data management discipline is not typically part of today's CAE culture. Despite the unique challenges posed by bringing PDM into the CAE world, the transition could occur faster than it has in the CAD world. This presentation showcases an approach for managing CAE data in traditional PDM systems. Two working examples of CAE process automation software solutions integrated with CAD and PDM will be discussed. In particular, these applications show how CAE users can leverage established PDM infrastructure and interact with EDS' Teamcenter/Enterprise, Teamcenter/Engineering, and Dassault Systèmes' SmarTeam through seamless integrations with their CAE systems.


2016 ◽  
Vol 2016 (0) ◽  
pp. S1440103
Author(s):  
Hikaru ISHIGURI ◽  
Chinghui WU ◽  
Kouhei OGAWA ◽  
Nagomu MORITA ◽  
Yasuyuki NISHIOKA
