Toward a Lifecycle Information Framework and Technology in Manufacturing

Author(s):  
Thomas Hedberg ◽  
Allison Barnard Feeney ◽  
Moneer Helu ◽  
Jaime A. Camelio

Industry has chased the dream of integrating and linking data across the product lifecycle and enterprises for decades. However, industry has been challenged by the fact that the context in which data are used varies based on the function/role in the product lifecycle interacting with the data. Holistically, the data across the product lifecycle must be considered an unstructured data set because multiple data repositories and domain-specific schemas exist in each phase of the lifecycle. This paper explores a concept called the Lifecycle Information Framework and Technology (LIFT). LIFT is a conceptual framework for lifecycle information management and the integration of emerging and existing technologies, which together form the basis of a research agenda for dynamic information modeling in support of digital-data curation and reuse in manufacturing. This paper discusses the existing technologies and activities that the LIFT concept leverages and describes the motivation for applying such work to the domain of manufacturing. The LIFT concept is then discussed in detail, the underlying technologies are further examined, and a use case is detailed. Lastly, potential impacts are explored.

2020 ◽  
Author(s):  
Nicholas Jarboe ◽  
Rupert Minnett ◽  
Catherine Constable ◽  
Anthony Koppers ◽  
Lisa Tauxe

MagIC (earthref.org/MagIC) is an organization dedicated to improving research capacity in the Earth and Ocean sciences by maintaining an open community digital data archive for rock and paleomagnetic data, with portals that allow users to archive, search, visualize, download, and combine these versioned datasets. We are a signatory of the Coalition for Publishing Data in the Earth and Space Sciences (COPDESS) Enabling FAIR Data Commitment Statement and an approved repository for the Nature set of journals. We have been collaborating with EarthCube's GeoCodes data search portal, adding schema.org/JSON-LD headers to our data set landing pages and suggesting extensions to schema.org when needed. Collaboration with the European Plate Observing System (EPOS) Thematic Core Service Multi-scale laboratories (TCS MSL) is ongoing, with MagIC sending its contributions' metadata to TCS MSL via DataCite records.

Improving and updating our data repository to meet the demands of the quickly changing landscape of data archival, retrieval, and interoperability is a challenging proposition. Most journals now require data to be archived in a "FAIR" repository, but the exact specifications of FAIR are still solidifying. Some journals vet and maintain their own lists of accepted repositories, while others rely on other organizations to investigate and certify repositories. As part of the COPDESS group at Earth Science Information Partners (ESIP), we have been and will continue to be part of the discussion on the needed and desired features for acceptable data repositories.

We are actively developing our software and systems to meet the needs of our scientific community. Some current issues we are confronting are: developing workflows with journals so that a journal article and its data in MagIC can be published simultaneously; sustaining data repository funding, especially in light of the greater demands placed on repositories by data policy changes at journals; and how best to share and expose metadata about our data holdings to organizations such as EPOS, EarthCube, and Google.
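The abstract mentions adding schema.org/JSON-LD headers to dataset landing pages so that crawlers such as EarthCube's GeoCodes can index them. A minimal sketch of what such a header can look like follows; the dataset name, description, and license are illustrative placeholders, not MagIC's actual markup.

```python
import json

# Hypothetical schema.org Dataset record of the kind a repository might embed
# as a JSON-LD <script> header on a dataset landing page for crawlers to index.
dataset = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example paleomagnetic contribution",
    "description": "Rock and paleomagnetic measurements (illustrative placeholder).",
    "url": "https://earthref.org/MagIC/",
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

# Wrap the record in the <script> element a landing page would carry.
jsonld_header = (
    '<script type="application/ld+json">\n'
    + json.dumps(dataset, indent=2)
    + "\n</script>"
)
print(jsonld_header)
```

Search engines and portals read this block from the page head without rendering the page, which is why a static JSON-LD header is sufficient for discovery.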


Author(s):  
Rahman Sanya ◽  
Gilbert Maiga ◽  
Ernest Mwebaze

Rapid increases in digital data, coupled with advances in deep learning algorithms, are opening unprecedented opportunities for incorporating multiple data sources when modeling the spatial dynamics of human infectious diseases. We used Convolutional Neural Networks (CNNs) in conjunction with satellite-imagery-based urban housing and socio-economic data to predict disease density in a developing-country setting. We explored both single-input (unimodal) and multiple-input (multimodal) network architectures for this purpose. We achieved a maximum test set accuracy of 81.6 percent using a single-input CNN model built with one convolutional layer and trained on housing image data. However, this fairly good performance was biased in favor of specific disease-density classes due to an unbalanced data set, despite our use of methods to address the problem. These results suggest CNNs are promising for modeling the spatial dynamics of human infectious diseases, especially in a developing-country setting. Urban housing signals extracted from satellite imagery also seem suitable for this purpose in the same context.
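The abstract notes that an unbalanced data set biased performance toward particular disease-density classes. One common mitigation, shown here as a hedged sketch rather than the method the authors necessarily used, is to weight each class inversely to its frequency during training; the class labels below are hypothetical.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency (weights average to 1)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# Toy label set with a dominant class (hypothetical disease-density classes).
labels = ["low"] * 6 + ["medium"] * 3 + ["high"] * 1
weights = inverse_frequency_weights(labels)
# The rare "high" class receives the largest weight, so its training errors
# contribute more to the loss than errors on the dominant "low" class.
```

These weights would typically be passed to the loss function during training so that the model is not rewarded for simply predicting the majority class.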


Author(s):  
Kristin Vanderbilt ◽  
David Blankman

Science has become a data-intensive enterprise. Data sets are commonly being stored in public data repositories and are thus available for others to use in new, often unexpected ways. Such re-use of data sets can take the form of reproducing the original analysis, analyzing the data in new ways, or combining multiple data sets into new data sets that are analyzed still further. A scientist who re-uses a data set collected by another must be able to assess its trustworthiness. This chapter reviews the types of errors that are found in metadata referring to data collected manually, data collected by instruments (sensors), and data recovered from specimens in museum collections. It also summarizes methods used to screen these types of data for errors. It stresses the importance of ensuring that metadata associated with a data set thoroughly document the error prevention, detection, and correction methods applied to the data set prior to publication.
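The chapter above surveys methods for screening data for errors before publication. As a minimal illustration of one such method, the sketch below applies a plausible-range check to sensor records; the field name and temperature bounds are assumptions for the example, not from the chapter.

```python
def screen_range(records, field, lo, hi):
    """Flag records whose value for `field` is missing or outside [lo, hi]."""
    flagged = []
    for i, rec in enumerate(records):
        value = rec.get(field)
        if value is None or not (lo <= value <= hi):
            flagged.append((i, value))
    return flagged

# Toy sensor readings: air temperature in deg C, plausible range -40..50.
readings = [{"temp_c": 12.5}, {"temp_c": 999.0}, {"temp_c": None}, {"temp_c": -5.0}]
bad = screen_range(readings, "temp_c", -40.0, 50.0)
# bad -> [(1, 999.0), (2, None)]
```

Documenting checks like this in the metadata, as the chapter recommends, lets a re-user see exactly which error-detection steps the published data set has already passed.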


Author(s):  
Seunghwa Park ◽  
Inhan Kim

Today’s buildings are becoming larger and more complex. As a result, the traditional method of manually checking a building’s design is no longer efficient, since such a process is time-consuming and laborious. It is becoming increasingly important to establish and automate processes for checking the quality of buildings. By automatically checking whether buildings satisfy requirements, Building Information Modeling (BIM) allows for rapid decision-making and evaluation. In this context, the work presented here focuses on resolving building safety issues via a proposed BIM-based quality checking process. Through use case studies, the efficiency and usability of the devised strategy are evaluated. This research can be beneficial in promoting the efficient use of BIM-based communication and collaboration among the project parties concerned for improving safety management. In addition, the work presented here has the potential to expand research efforts in BIM-based quality checking processes.
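To make the idea of automated BIM rule checking concrete, here is a minimal sketch of one such check over building-element data. The element schema and the 900 mm minimum egress-width rule are illustrative assumptions, not the checks or data model used in the paper.

```python
def check_min_width(elements, element_type, min_width_mm):
    """Return ids of elements of the given type narrower than the rule allows."""
    violations = []
    for el in elements:
        if el["type"] == element_type and el["width_mm"] < min_width_mm:
            violations.append(el["id"])
    return violations

# Hypothetical elements extracted from a building model.
elements = [
    {"id": "D1", "type": "door", "width_mm": 950},
    {"id": "W1", "type": "window", "width_mm": 600},
    {"id": "D2", "type": "door", "width_mm": 760},
]
failed = check_min_width(elements, "door", 900)
# failed -> ["D2"]
```

A real BIM checking pipeline would extract such elements from an IFC model and evaluate many rules of this shape, but the pattern, iterate over typed elements and report violations against a requirement, is the same.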


1982 ◽  
Vol 61 (s109) ◽  
pp. 34-34
Author(s):  
Samuel J. Agronow ◽  
Federico C. Mariona ◽  
Frederick C. Koppitch ◽  
Kazutoshi Mayeda

Author(s):  
Adrienne M Stilp ◽  
Leslie S Emery ◽  
Jai G Broome ◽  
Erin J Buth ◽  
Alyna T Khan ◽  
...  

Abstract Genotype-phenotype association studies often combine phenotype data from multiple studies to increase power. Harmonization of the data usually requires substantial effort due to heterogeneity in phenotype definitions, study design, data collection procedures, and data set organization. Here we describe a centralized system for phenotype harmonization that includes input from phenotype domain and study experts, quality control, documentation, reproducible results, and data sharing mechanisms. This system was developed for the National Heart, Lung, and Blood Institute’s Trans-Omics for Precision Medicine program, which is generating genomic and other omics data for >80 studies with extensive phenotype data. To date, 63 phenotypes have been harmonized across thousands of participants from up to 17 studies per phenotype (participants recruited 1948-2012). We discuss challenges in this undertaking and how they were addressed. The harmonized phenotype data and associated documentation have been submitted to National Institutes of Health data repositories for controlled access by the scientific community. We also provide materials to facilitate future harmonization efforts by the community, which include (1) the code used to generate the 63 harmonized phenotypes, enabling others to reproduce, modify, or extend these harmonizations to additional studies; and (2) results of labeling thousands of phenotype variables with controlled vocabulary terms.
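The abstract describes harmonizing phenotypes measured differently across studies. One elementary step in such work is converting a measurement recorded in per-study units onto a common scale; the sketch below shows that step, with hypothetical study names and units rather than TOPMed's actual harmonization code.

```python
# Conversion factors to the common unit (centimeters); illustrative only.
TO_CM = {"cm": 1.0, "in": 2.54}

def harmonize_height(records):
    """Return (study, height_cm) pairs from per-study unit conventions."""
    return [(r["study"], round(r["height"] * TO_CM[r["unit"]], 1)) for r in records]

# Hypothetical per-study records using different units for the same phenotype.
records = [
    {"study": "A", "height": 170.0, "unit": "cm"},
    {"study": "B", "height": 66.0, "unit": "in"},
]
harmonized = harmonize_height(records)
# harmonized -> [("A", 170.0), ("B", 167.6)]
```

Real harmonization also reconciles variable definitions, coding schemes, and collection procedures, which is why the abstract emphasizes domain-expert input and documentation alongside code like this.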

