Applying OGC Sensor Web Enablement Standards to Develop a TDR Multi-Functional Measurement Model

Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4070
Author(s):  
Chung ◽  
Huang ◽  
Guan ◽  
Jian

Time-domain reflectometry (TDR) is a passive monitoring technique that can serve multiple functions, such as measuring water level, bridge scour, landslide displacement, and suspended sediment concentration (SSC), from a single TDR device via multiplexing and related algorithms. The current platform for publishing TDR analyses and interpreted observations, however, is difficult to access, so a coherent data model and format for exchanging heterogeneous TDR data is both useful and necessary. To enhance the interoperability of TDR information, this research standardizes TDR data based on the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) standards. Specifically, this study proposes a TDR sensor description model and an observation model based on the Sensor Model Language (SensorML) and Observations and Measurements (O&M) standards. In addition, a middleware was developed to translate existing TDR information into a Sensor Observation Service (SOS) web service. Overall, by standardizing TDR data with the OGC SWE open standards, relevant information for disaster management can be integrated effectively, efficiently, and interoperably.
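A minimal sketch (not the authors' middleware) of how a single TDR reading could be shaped as an O&M-style observation and pushed to an SOS endpoint; the service URL, procedure identifier, observed-property URI, and the JSON binding are all illustrative assumptions:

```python
# Hypothetical example: one TDR water-level reading as an O&M-style observation
# posted to an assumed SOS endpoint. URLs and URIs below are placeholders.
from datetime import datetime, timezone

import requests

SOS_ENDPOINT = "https://example.org/sos"           # assumed service URL
PROCEDURE = "urn:example:sensor:tdr-01"            # assumed SensorML procedure id
OBSERVED_PROPERTY = "urn:example:property:waterLevel"

def build_observation(value_m: float, foi: str) -> dict:
    """Return a dictionary mirroring the core O&M observation fields."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "procedure": PROCEDURE,
        "observedProperty": OBSERVED_PROPERTY,
        "featureOfInterest": foi,
        "phenomenonTime": now,
        "resultTime": now,
        "result": {"value": value_m, "uom": "m"},
    }

def push_to_sos(obs: dict) -> int:
    """Send the observation to the SOS (a JSON binding is assumed for brevity)."""
    body = {"request": "InsertObservation", "service": "SOS",
            "version": "2.0.0", "observation": obs}
    resp = requests.post(SOS_ENDPOINT, json=body, timeout=10)
    return resp.status_code

if __name__ == "__main__":
    print(push_to_sos(build_observation(3.42, "urn:example:foi:bridge-pier-7")))
```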

2016 ◽  
Vol 49 (1) ◽  
pp. 302-310 ◽  
Author(s):  
Michael Kachala ◽  
John Westbrook ◽  
Dmitri Svergun

Recent advances in small-angle scattering (SAS) experimental facilities and data analysis methods have prompted a dramatic increase in the number of users and of projects conducted, causing an upsurge in the number of objects studied, experimental data available and structural models generated. To organize the data and models and make them accessible to the community, the Task Forces on SAS and hybrid methods for the International Union of Crystallography and the Worldwide Protein Data Bank envisage developing a federated approach to SAS data and model archiving. Within the framework of this approach, the existing databases may exchange information and provide independent but synchronized entries to users. At present, ways of exchanging information between the various SAS databases are not established, leading to possible duplication and incompatibility of entries, and limiting the opportunities for data-driven research for SAS users. In this work, a solution is developed to resolve these issues and provide a universal exchange format for the community, based on the widely adopted Crystallographic Information Framework (CIF). The previous version of the sasCIF format, implemented as an extension of the core CIF dictionary, has been available since 2000 to facilitate SAS data exchange between laboratories. The sasCIF format has now been extended to describe comprehensively the necessary experimental information, results and models, including relevant metadata for SAS data analysis and for deposition into a database. Processing tools for these files (sasCIFtools) have been developed, and these are available both as standalone open-source programs and integrated into the SAS Biological Data Bank, allowing the export and import of data entries as sasCIF files. Software modules to save the relevant information directly from beamline data-processing pipelines in sasCIF format have also been developed. This update of sasCIF and the relevant tools is an important step in the standardization of the way SAS data are presented and exchanged, to make the results easily accessible to users and to promote further the application of SAS in the structural biology community.
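To give a flavour of the format, here is a toy sketch that serializes a few SAS results into a CIF-style text block; the data names are placeholders chosen for illustration and are not taken from the official sasCIF dictionary:

```python
# Illustrative only: write a tiny CIF-style block with a few SAS result values.
# The data-item names below are assumed placeholders, not official sasCIF names.
def write_sascif_stub(path: str, sample: str, i0: float, rg_nm: float) -> None:
    lines = [
        "data_example_sas_entry",
        f"_sample.name              '{sample}'",
        f"_sas_result.I0            {i0:.4g}",
        f"_sas_result.Rg            {rg_nm:.4g}",
    ]
    with open(path, "w", encoding="utf-8") as fh:
        fh.write("\n".join(lines) + "\n")

write_sascif_stub("example.sascif", "lysozyme", 42.7, 1.43)
```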


Author(s):  
Bamshad Mobasher

In the span of a decade, the World Wide Web has been transformed from a tool for information sharing among researchers into an indispensable part of everyday activities. This transformation has been characterized by an explosion of heterogeneous data and information available electronically, as well as increasingly complex applications driving a variety of systems for content management, e-commerce, e-learning, collaboration, and other Web services. This tremendous growth, in turn, has necessitated the development of more intelligent tools for end users as well as information providers in order to more effectively extract relevant information or to discover actionable knowledge. From its very beginning, the potential of extracting valuable knowledge from the Web has been quite evident. Web mining, i.e. the application of data mining techniques to extract knowledge from Web content, structure, and usage, is the collection of technologies to fulfill this potential. In this article, we briefly summarize each of the three primary areas of Web mining (Web usage mining, Web content mining, and Web structure mining) and discuss some of the primary applications in each area.
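As a toy illustration of the Web usage mining area, the sketch below groups server-log hits into per-visitor click paths; the Common Log Format-like layout and field positions are assumptions for illustration, not taken from the article:

```python
# Toy Web usage mining example: map each client IP to its ordered click path.
from collections import defaultdict

def click_paths(log_lines):
    """Group GET requests by client IP, preserving request order."""
    paths = defaultdict(list)
    for line in log_lines:
        fields = line.split()
        if len(fields) > 5 and fields[4].strip('"') == "GET":
            paths[fields[0]].append(fields[5])
    return paths

sample = [
    '10.0.0.1 - - [01/Jan/2024:00:00:01] "GET /index.html HTTP/1.1" 200 512',
    '10.0.0.1 - - [01/Jan/2024:00:00:09] "GET /cart HTTP/1.1" 200 128',
]
print(click_paths(sample))  # {'10.0.0.1': ['/index.html', '/cart']}
```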


2020 ◽  
Vol 9 (4) ◽  
pp. 394-409
Author(s):  
Saikiran Gopalakrishnan ◽  
Nathan W. Hartman ◽  
Michael D. Sangid

The digital transformation of manufacturing requires digitalization, including automatic and efficient data exchange. Model-based definitions (MBDs) capture digital product definitions in order to eliminate the error-prone information exchange associated with traditional paper-based drawings and to provide contextual information through additional metadata. The flow of MBDs extends throughout the product lifecycle (including the design, analysis, manufacturing, in-service, and retirement stages) and can be extended beyond the typical geometry and tolerance information within a computer-aided design model. In this paper, MBDs are extended to include materials information via dynamic linkages. To this end, a model-based feature information network (MFIN) is created to provide a comprehensive framework that facilitates storing, updating, searching, and retrieving relevant information across a product's lifecycle. The framework is demonstrated in a damage-tolerance analysis use case for a compressor bladed disk (blisk), in which Ti-6Al-4V blades are linear friction welded to a Ti-6Al-4V disk, creating well-defined regions exhibiting grain refinement and high residual stresses. By capturing the location-specific microstructure and residual stress values at the weld regions, this information is accessed within the MFIN and used for downstream damage-tolerance analysis. The introduction of the MFIN framework facilitates access to dynamically evolving data for use within physics-based models (creating the opportunity to reduce uncertainty in subsequent prognosis analyses), thereby enabling a digital twin description of the component or system.
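A minimal sketch, not the authors' MFIN implementation, of how location-specific material state could be linked to a named feature so that a downstream damage-tolerance analysis can query it; all class names, fields, units, and values are illustrative assumptions:

```python
# Hypothetical data structure linking CAD features to location-specific
# material state (grain size, residual stress) across lifecycle stages.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MaterialState:
    grain_size_um: float           # refined grain size at this location
    residual_stress_mpa: float     # residual stress component of interest

@dataclass
class FeatureRecord:
    feature_id: str                # e.g. one weld region on a blade
    lifecycle_stage: str           # "design", "manufacturing", "in-service", ...
    states: Dict[str, MaterialState] = field(default_factory=dict)

@dataclass
class Component:
    name: str
    features: List[FeatureRecord] = field(default_factory=list)

    def lookup(self, feature_id: str) -> FeatureRecord:
        """Retrieve a feature record for downstream analysis."""
        return next(f for f in self.features if f.feature_id == feature_id)

blisk = Component("blisk", [
    FeatureRecord("weld_blade_03", "manufacturing",
                  {"weld_line": MaterialState(2.5, 450.0)}),
])
print(blisk.lookup("weld_blade_03").states["weld_line"].residual_stress_mpa)
```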


2019 ◽  
Vol 8 (8) ◽  
pp. 340
Author(s):  
Tagliolato Paolo ◽  
Fugazza Cristiano ◽  
Oggioni Alessandro ◽  
Carrara Paola

The adoption of Sensor Web Enablement (SWE) practices by sensor maintainers is hampered by the inherent complexity of the Sensor Model Language (SensorML), its high expressiveness, and the scarce availability of editing tools. To overcome these issues, the Earth Observation (EO) community often resorts to SensorML profiles that narrow the range of admitted metadata structures and value ranges. Unfortunately, profiles frequently fall short of providing usable editing tools and comprehensive validation criteria, particularly because of the difficulty of checking value ranges in the multi-tenanted domain of the Web of Data. In this paper, we provide an updated review of current practices, techniques, and tools for editing SensorML from the perspective of profile support and introduce our solution for effective profile definition. Besides allowing for the formalization of a broad range of constraints that concur in defining a metadata profile, our proposal closes the gap between profile definition and the actual editing of the corresponding metadata by allowing for ex-ante validation of the metadata that is produced. On this basis, we propose the notion of Semantic Web SensorML profiles, characterized by a new family of constraints involving Semantic Web sources. We also discuss the implementation of SensorML profiles with our tool and pinpoint the benefits with respect to the existing ex-post validation facilities provided by schema definition languages.
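A hedged sketch of the ex-ante validation idea (not the authors' tool): each profile constraint names a metadata field and its admissible values, here assumed to have been harvested from a Semantic Web vocabulary beforehand, and a candidate record is checked before it is ever serialized:

```python
# Illustrative profile validation: constraint table and admissible URIs are
# assumptions, not a real SensorML profile or vocabulary.
from typing import Dict, List

PROFILE: Dict[str, set] = {
    # field path -> admissible values (placeholder URIs)
    "observedProperty": {
        "http://vocab.example.org/prop/waterLevel",
        "http://vocab.example.org/prop/soilMoisture",
    },
    "uom": {"m", "cm"},
}

def validate(record: Dict[str, str]) -> List[str]:
    """Return human-readable violations before the metadata is produced."""
    errors = []
    for field_path, allowed in PROFILE.items():
        value = record.get(field_path)
        if value is None:
            errors.append(f"missing mandatory field: {field_path}")
        elif value not in allowed:
            errors.append(f"{field_path}: '{value}' not in the profile vocabulary")
    return errors

print(validate({"observedProperty": "http://vocab.example.org/prop/waterLevel"}))
```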


2018 ◽  
Vol 3 (2) ◽  
pp. 162
Author(s):  
Slamet Sudaryanto Nurhendratno ◽  
Sudaryanto Sudaryanto

Data integration is an important step in combining information from multiple sources. The problem is how to find and combine data from scattered, heterogeneous data sources with semantic interconnections in an optimal way. The heterogeneity of data sources results from a number of factors, including databases stored in different formats, different software and hardware used for the database storage systems, and different semantic data models used in their design (Katsis & Papakonstantinou, 2009; Ziegler & Dittrich, 2004). There are currently two approaches to data integration, Global as View (GAV) and Local as View (LAV), but each has its own advantages and limitations, so proper analysis is needed before applying either one. Among the major factors to be considered in making the integration of heterogeneous data sources efficient and effective is an understanding of the type and structure of the source data (the source schema). Another factor to consider is the view type of the integration result (the target schema). The result of the integration can be presented as a single global view or as a variety of other views, so integrating structured data sources calls for a different approach than integrating unstructured or semi-structured sources. A schema mapping is a specific declaration that describes the relationship between the source schema and the target schema; it is expressed as a set of logical formulas that support data interoperability, data exchange, and data integration. In this paper, for the case of establishing a patient referral data center, data originating from a number of different health facilities must be integrated, so a schema-mapping system must be designed (to support optimization). The data center, as the target schema, draws on the various referral service units as source schemas whose data are structured and independent, so the structured data sources can be integrated into a unified view (the data center) through equivalent query rewriting. The data center, as the global schema, serves as the target schema and requires a "mediator" that maintains the global schema and the mappings between the global and local schemas. Because the data center, following the Global as View (GAV) approach, tends to be a single, unified view, its integration with the various source schemas requires an integration facility; see the sketch below. This mediator facility is a declarative mapping language that links each of the various source schemas to the data center, so equivalent query rewriting is well suited in this context to query optimization and the maintenance of physical data independence.

Keywords: Global as View (GAV), Local as View (LAV), source schema, schema mapping
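A simplified sketch of the GAV idea described above (illustrative only): each global relation of the data center is defined as a view over the local source schemas, and answering a global query reduces to unfolding those view definitions; the source layouts and field names are invented for the example:

```python
# Toy GAV mediator: the global 'patient' relation is defined as views over
# two hypothetical referral facilities with different local schemas.
from typing import Callable, Dict, List

SOURCE_A = [{"nik": "001", "nama": "Ani", "dx": "J45"}]
SOURCE_B = [{"patient_id": "002", "full_name": "Budi", "icd10": "E11"}]

GAV_MAPPING: Dict[str, List[Callable[[], List[dict]]]] = {
    "patient": [
        lambda: [{"id": r["nik"], "name": r["nama"], "diagnosis": r["dx"]}
                 for r in SOURCE_A],
        lambda: [{"id": r["patient_id"], "name": r["full_name"], "diagnosis": r["icd10"]}
                 for r in SOURCE_B],
    ]
}

def query_global(relation: str) -> List[dict]:
    """Unfold the GAV views: union the rewritten answers from every source."""
    rows: List[dict] = []
    for view in GAV_MAPPING.get(relation, []):
        rows.extend(view())
    return rows

print(query_global("patient"))
```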


Author(s):  
Jin Minli ◽  
Feng Yuqiang ◽  
Peng Wuliang ◽  
Zhang Chen
