3D Geometry-Based Indoor Network Extraction for Navigation Applications Using SFCGAL

2020 ◽  
Vol 9 (7) ◽  
pp. 417 ◽  
Author(s):  
Jernej Tekavec ◽  
Anka Lisec

This study focuses on indoor navigation network extraction based on available 3D building data, using SFCGAL, i.e. the Simple Features Computational Geometry Algorithms Library. Special attention is given to 3D cadastre and BIM (building information modelling) datasets, which are used as data sources for 3D geometric indoor modelling. SFCGAL 3D functions are used to extract an indoor network, modelled as an indoor connectivity graph based on the 3D geometries of indoor features. The extraction is performed by integrating extract, transform, load (ETL) software with a spatial database to support multiple data sources and provide access to SFCGAL functions. This integrated approach addresses the current lack of straightforward software support for complex 3D spatial analyses. Based on the developed methodology, we perform and discuss the extraction of an indoor navigation network from 3D cadastral and BIM data. The efficiency and performance of the network analyses were evaluated using processing and query execution times. The results show that the proposed methodology for geometry-based navigation network extraction is efficient and can be used with various types of 3D geometric indoor data.
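As a toy illustration of the geometry-based idea only, not the authors' SFCGAL/ETL pipeline: if rooms are approximated by axis-aligned 3D boxes, a connectivity graph can be derived by testing which boxes share a face, which is the role that 3D predicates such as PostGIS/SFCGAL `ST_3DIntersects` play for arbitrary solids. All room names and geometries below are hypothetical.

```python
# Sketch: rooms as axis-aligned 3D boxes; two rooms are connected in the
# navigation graph when their boxes share a face (zero-width overlap on
# exactly one axis, positive overlap on the other two).
from itertools import combinations

def touches_3d(a, b):
    """True if boxes a, b ((xmin, ymin, zmin, xmax, ymax, zmax)) share a face."""
    overlaps = []
    for i in range(3):
        lo = max(a[i], b[i])
        hi = min(a[i + 3], b[i + 3])
        if hi < lo:
            return False          # disjoint along this axis
        overlaps.append(hi - lo)
    # exactly one zero-width overlap -> face contact (not edge/corner contact)
    return overlaps.count(0) == 1

def connectivity_graph(rooms):
    """Build an adjacency list keyed by room name."""
    graph = {name: set() for name in rooms}
    for (na, a), (nb, b) in combinations(rooms.items(), 2):
        if touches_3d(a, b):
            graph[na].add(nb)
            graph[nb].add(na)
    return graph

rooms = {
    "hall":    (0, 0, 0, 4, 1, 3),
    "office":  (0, 1, 0, 2, 3, 3),   # shares a wall with the hall
    "storage": (2, 1, 0, 4, 3, 3),   # shares a wall with hall and office
}
print(connectivity_graph(rooms))
```

A real pipeline would replace the box test with a 3D intersection query on solids stored in a spatial database, as the paper does via SFCGAL.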

2019 ◽  
Vol 19 (S6) ◽  
Author(s):  
Lei Deng ◽  
Danyi Ye ◽  
Junmin Zhao ◽  
Jingpu Zhang

Abstract
Background: A collection of disease-associated data makes it possible to study the associations between diseases. Discovering closely related diseases plays a crucial role in revealing their common pathogenic mechanisms, and may further suggest treatments that can be transferred from one disease to another. Over the past decades, a number of approaches for calculating disease similarity have been developed. However, most of them are designed around single or few data sources, which limits their accuracy.
Methods: In this paper, we propose a novel method, called MultiSourcDSim, to calculate disease similarity by integrating multiple data sources, namely gene-disease associations, GO biological process-disease associations and symptom-disease associations. Firstly, we establish three disease similarity networks according to the three disease-related data sources, respectively. Secondly, the representation of each node is obtained by integrating the three small disease similarity networks. Finally, the learned representations are applied to calculate the similarity between diseases.
Results: Our approach shows the best performance compared to the other three popular methods. Besides, the similarity network built by MultiSourcDSim suggests that our method can also uncover latent relationships between diseases.
Conclusions: MultiSourcDSim is an efficient approach for predicting similarity between diseases.
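A deliberately simplified stand-in for the integration step (the paper learns node representations from the three networks; here the three source-specific similarity matrices are simply averaged into one integrated disease-disease similarity matrix):

```python
# Element-wise mean of three source-specific similarity matrices.
# This is NOT MultiSourcDSim's representation learning, just the
# simplest possible multi-source integration for illustration.
def integrate(matrices):
    """Element-wise mean of equally sized square similarity matrices."""
    n = len(matrices[0])
    return [[sum(m[i][j] for m in matrices) / len(matrices)
             for j in range(n)] for i in range(n)]

# Hypothetical 2-disease similarity matrices from the three sources.
gene_sim    = [[1.0, 0.8], [0.8, 1.0]]   # gene-disease associations
go_sim      = [[1.0, 0.6], [0.6, 1.0]]   # GO biological process associations
symptom_sim = [[1.0, 0.4], [0.4, 1.0]]   # symptom-disease associations

print(integrate([gene_sim, go_sim, symptom_sim]))
# off-diagonal similarity: (0.8 + 0.6 + 0.4) / 3 = 0.6
```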


Author(s):  
Wei Liang ◽  
Laibin Zhang

This paper describes a new approach of data-source fusion based on process fusion entropy for leak detection in product pipelines. Data sources are either single-channelled or multi-channelled: single-channelled sources yield structured or semi-structured process steady entropy, whereas multi-channelled sources yield singular spectrum entropy and power spectrum entropy. To develop data-source fusion systems for pipeline leak detection in real-time contexts, all issues raised by the matching paradigms need to be studied; this challenge becomes crucial given the dominant role of the internet. Classical approaches to data integration, based on schema mediation, are not suitable for the pipeline SCADA (Supervisory Control and Data Acquisition) environment, where data are frequently modified or updated. We therefore develop a loosely integrated approach in which steady and transient states, which must be separated correctly to integrate new data sources, are both taken into account. Moreover, we introduce a process fusion entropy-based Multi-data source Fusion Method (MFM) that aims to define and retrieve conflicting data from multiple data sources.
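One of the entropy features named above can be sketched concretely. The following is a generic power spectrum entropy (Shannon entropy of the normalised DFT power spectrum), not necessarily the authors' exact formulation; the signals are synthetic:

```python
import cmath
import math

def power_spectrum_entropy(signal):
    """Shannon entropy of the normalised DFT power spectrum.
    A concentrated spectrum (steady flow tone) gives low entropy; a leak
    transient spreads energy across frequencies and raises it."""
    n = len(signal)
    power = []
    for k in range(n):                       # naive O(n^2) DFT, stdlib only
        x = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        power.append(abs(x) ** 2)
    total = sum(power)
    probs = [p / total for p in power if p > 0]
    return -sum(p * math.log(p) for p in probs)

steady = [math.sin(2 * math.pi * t / 8) for t in range(64)]       # one tone
noisy = [((t * 2654435761) % 97) / 97 - 0.5 for t in range(64)]   # broadband
print(power_spectrum_entropy(steady) < power_spectrum_entropy(noisy))  # True
```

A pure tone concentrates its power in two DFT bins, so its entropy is close to ln 2; a broadband signal approaches the maximum ln n.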


2002 ◽  
Vol 11 (01n02) ◽  
pp. 119-144 ◽  
Author(s):  
NAVEEN ASHISH ◽  
CRAIG KNOBLOCK ◽  
CYRUS SHAHABI

There is currently great interest in building information mediators that can integrate information from multiple data sources such as databases or Web sources. The query response time for such mediators is typically quite high, mainly due to the time spent in retrieving data from remote sources. We present an approach for optimizing the performance of information mediators by selectively materializing data. We first present our overall framework for materialization in a mediator environment, in which data is materialized selectively, and outline the factors that are considered in selecting data to materialize. We present an algorithm for identifying classes of data to materialize by analyzing one of these factors, the distribution of user queries. We present results with an implemented version of our optimization system for the Ariadne information mediator, which show the effectiveness of our algorithm in extracting patterns of frequently accessed classes from user queries. We also demonstrate the effectiveness of the approach in optimizing mediator performance by materializing such classes.
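The core idea of analyzing the query distribution can be sketched as follows. This is a frequency-count stand-in, not the Ariadne algorithm itself (which also weighs factors beyond raw counts); the class names are hypothetical:

```python
# Pick the most frequently accessed data classes from a query log,
# up to a materialization budget. Each query is modelled as the set
# of source classes it touches.
from collections import Counter

def classes_to_materialize(query_log, budget):
    """Return up to `budget` classes, most frequently queried first."""
    freq = Counter(cls for query in query_log for cls in query)
    return [cls for cls, _ in freq.most_common(budget)]

log = [
    {"restaurants", "reviews"},
    {"restaurants", "maps"},
    {"restaurants", "reviews"},
    {"hotels"},
]
print(classes_to_materialize(log, 2))   # ['restaurants', 'reviews']
```

Materializing the returned classes locally would then spare the mediator the remote retrievals that dominate its response time.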


Buildings ◽  
2019 ◽  
Vol 9 (5) ◽  
pp. 114 ◽  
Author(s):  
Mariangela Zanni ◽  
Tim Sharpe ◽  
Philipp Lammers ◽  
Leo Arnold ◽  
James Pickard

A common barrier to achieving design intent is the absence of comprehensive information about operational performance during design development. This results in uninformed decision-making, which impacts actual building performance, in particular Whole Life Costs (WLC). It is proposed that Building Information Modelling (BIM) has the potential to facilitate a more comprehensive and accurate design approach from the initial stages, if the model can draw on reliable and robust cost and performance data from buildings in use. This paper describes the initial findings of a research project that has investigated the integration of WLC estimation into BIM processes. The study focusses specifically on the rapidly emerging Private Rental Sector (PRS), as the build-to-rent market has repeatable tasks and similar workflow patterns, roles and responsibilities, yet WLC impacts can significantly influence the business model. The study adopted a mixed-method approach for the development and validation of a structured, standardized process for timely WLC estimation through BIM. The research identified a number of barriers: varying definitions of WLC calculation methodologies; the availability and standards of data sources, in particular the misalignment of coding systems for identification and classification of components at various levels of development; proprietary ownership of data; lack of knowledge and skills in team members to produce and/or utilize data sources; and limitations of software. However, the research proposes that these may be addressed by a reverse-engineered systematic process using the Integrated DEFinition (IDEF) 3 structured diagramming modelling technique, which can be incorporated into a software model. On this basis, a model has been developed for a systematic approach to BIM-enabled WLC assessment based on CE principles, which would include access to live data streams from completed buildings.
The paper describes this model development, which has the potential to enhance BIM lifecycle management through an augmented decision-making approach that is integral to the natural design development process.


2020 ◽  
Vol 10 (7) ◽  
pp. 177
Author(s):  
Priyashri Kamlesh Sridhar ◽  
Suranga Nanayakkara

It has been shown that combining data from multiple sources, such as observations, self-reports, and performance with physiological markers, offers better insights into cognitive-affective states during the learning process. Through a study with 12 kindergarteners, we explore the role of utilizing insights from multiple data sources, as a potential arsenal to supplement and complement existing assessment methods in understanding cognitive-affective states across two main pedagogical approaches, constructionist and instructionist, as children explored learning a chosen Science, Technology, Engineering and Mathematics (STEM) concept. We present the trends that emerged across pedagogies from different data sources and illustrate the potential value of additional data channels through case illustrations. We also offer several recommendations for such studies, particularly when collecting physiological data, and summarize key challenges that provide potential avenues for future work.


Author(s):  
Katie Wilson ◽  
Lucy Montgomery ◽  
Cameron Neylon ◽  
Rebecca N. Handcock ◽  
Richard Hosking ◽  
...  

Abstract
The Curtin Open Knowledge Initiative (COKI) is an innovative research project that collects and analyses publicly available research output data to assist and encourage researchers, academics, administrators and executives to understand the actual and potential reach of openness in research, and to assess their progress on the path towards open knowledge institutions. By taking a broad global approach and using multiple data sources, the project diverges from existing approaches, methods and bibliometric measures in the scholarly research environment. It combines analysis of research output, citations, publication sources and publishers, funders, social media events, open and not open access to provide overviews of research output and performance at institutional, funder, consortial and country levels. The project collects and analyses personnel diversity data such as gender, focusing on widening the reach of data analysis to emphasise the importance and value of diversity in research and knowledge production. Interactive visual tools present research output and performance to encourage understanding and dialogue among researchers and management. The path towards becoming open knowledge institutions involves a process of cultural change, moving beyond dominant publishing and evaluation practices. This paper discusses how through divergence, diversity and dialogue the COKI project can contribute to this change, with examples of applications in understanding and embracing openness.


2021 ◽  
pp. 1-22
Author(s):  
Emily Berg ◽  
Johgho Im ◽  
Zhengyuan Zhu ◽  
Colin Lewis-Beck ◽  
Jie Li

Statistical and administrative agencies often collect information on related parameters. Discrepancies between estimates from distinct data sources can arise due to differences in definitions, reference periods, and data collection protocols. Integrating statistical data with administrative data is appealing for saving data collection costs, reducing respondent burden, and improving the coherence of estimates produced by statistical and administrative agencies. Model-based techniques for combining multiple data sources, such as small area estimation and measurement error models, have the benefits of transparency, reproducibility, and the ability to provide uncertainty estimates. Issues associated with integrating statistical data with administrative data are discussed in the context of data from Namibia. The national statistical agency in Namibia produces estimates of crop area using data from probability samples. Simultaneously, the Namibia Ministry of Agriculture, Water, and Forestry obtains crop area estimates through extension programs. We illustrate the use of a structural measurement error model for the purpose of synthesizing the administrative and survey data to form a unified estimate of crop area. Limitations on the available data preclude us from conducting a genuine, thorough application. Nonetheless, our illustration of methodology holds potential use for a general practitioner.
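A much simpler cousin of the structural measurement error model used above is the precision-weighted composite estimator, sketched here with entirely hypothetical crop-area figures; the real model additionally accounts for source-specific biases and measurement error structure:

```python
# Inverse-variance weighted combination of two estimates of one parameter:
# each source is weighted by its precision (1 / variance), so the more
# reliable survey estimate dominates the noisier administrative figure.
def composite_estimate(estimates, variances):
    """Return (combined estimate, variance of the combined estimate)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    est = sum(w * e for w, e in zip(weights, estimates)) / total
    return est, 1.0 / total

# hypothetical crop-area figures (thousand hectares): survey vs. administrative
survey_est, survey_var = 120.0, 25.0
admin_est, admin_var = 135.0, 100.0
print(composite_estimate([survey_est, admin_est], [survey_var, admin_var]))
```

The combined variance (the reciprocal of the summed precisions) is always smaller than either source's variance, which is the formal sense in which integrating the two sources improves on using either alone.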

