View-Dependent Progressive Transmission Method for 3D Building Models

2021 ◽  
Vol 10 (4) ◽  
pp. 228
Author(s):  
Yuchang Sun ◽  
Jingsong Ma ◽  
Jiangfeng She ◽  
Qiang Zhao ◽  
Lixia He

Complex 3D building models, because of their huge data volume, almost always result in transmission congestion, which leads to poor user experience. To reduce the real-time transmission pressure, a novel view-dependent progressive transmission method was developed. With this method, only a small amount of transmitted data is necessary to achieve an acceptable rendering effect when the viewpoint changes. The method involves two stages. A preprocessing stage simplifies the building model using a multi-level vertex clustering algorithm. The local mesh in each clustering unit is organized into a node tree where each node includes a vertex and its related triangles. The building model is finally reorganized into a node forest. In the reconstruction stage, all root nodes are transmitted first to build a basic model. Their descendant nodes are then requested and transmitted according to viewpoint information to refine the building model during user interaction. The experimental results show that this method can effectively improve the transmission and reconstruction efficiency of 3D building models.
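As a rough sketch of the node-forest idea described above (the class layout and the distance-based request order are assumptions for illustration, not the authors' implementation): each node carries one vertex plus its attached triangles, all root nodes are sent first to build the basic model, and descendants nearest the viewpoint are requested next.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    vertex: tuple            # (x, y, z) position of the clustered vertex
    triangles: list          # indices of triangles attached to this vertex
    children: list = field(default_factory=list)  # refinement nodes

def transmit_progressively(forest, viewpoint, budget):
    """Root nodes first (coarse model), then descendants nearest the viewpoint."""
    sent = list(forest)                      # all roots build the basic model
    frontier = [c for root in forest for c in root.children]

    def dist(n):  # squared distance from node vertex to the viewpoint
        return sum((a - b) ** 2 for a, b in zip(n.vertex, viewpoint))

    while frontier and len(sent) < budget:
        frontier.sort(key=dist)              # closest nodes refine first
        node = frontier.pop(0)
        sent.append(node)
        frontier.extend(node.children)
    return sent
```

With a budget of two nodes and the viewpoint at the origin, the root is sent first and the nearer of its two children is requested next, which is the view-dependent behaviour the abstract describes.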

Author(s):  
Zhenni Wu ◽  
Hengxin Chen ◽  
Bin Fang ◽  
Zihao Li ◽  
Xinrun Chen

With the rapid development of computer technology, building pose estimation combined with Augmented Reality (AR) can play a crucial role in urban planning and architectural design. For example, a virtual building model can be placed into a realistic scene acquired by an Unmanned Aerial Vehicle (UAV) to visually assess whether the building integrates well with its surroundings, thus optimizing the building's design. In this work, we contribute a building dataset for pose estimation named BD3D. To obtain accurate building poses, we use a physical camera in Unity3D, which can simulate realistic cameras, to reproduce the UAV's perspective, and use virtual building models as objects. We propose a novel neural network that combines the MultiBin module with the PoseNet architecture to estimate building pose. When a building is symmetric, this ambiguity causes its different surfaces to have similar features, making it difficult for CNNs to learn discriminative features between the surfaces. We propose a generalized world coordinate system repositioning strategy to address this. We evaluate our network with this strategy on BD3D, and the angle error is reduced to [Formula: see text] from [Formula: see text]. Code and dataset are available at: https://github.com/JellyFive/Building-pose-estimation-from-the-perspective-of-UAVs-based-on-CNNs .
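The paper's generalized repositioning strategy is not detailed in the abstract, but the underlying symmetry ambiguity can be illustrated: for a building that looks identical every 360/k degrees, a yaw error is only meaningful modulo that period. A hedged sketch (not the authors' method):

```python
def symmetry_aware_yaw_error(pred_deg, gt_deg, fold=4):
    """Smallest angular difference when the object looks identical
    every 360/fold degrees (e.g. fold=4 for a square-plan building)."""
    period = 360.0 / fold
    diff = (pred_deg - gt_deg) % period
    return min(diff, period - diff)
```

For a building with 4-fold symmetry, predicting 95 degrees against a ground truth of 0 degrees is only a 5-degree error under this metric, since the 90-degree rotation is visually indistinguishable; a naive angle difference would report 95 degrees.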


Author(s):  
K. Chaidas ◽  
G. Tataris ◽  
N. Soulakellis

Abstract. In recent years, 3D building modelling techniques have been commonly used in various domains such as navigation, urban planning and disaster management, though mostly confined to visualization purposes. The 3D building models are produced at various Levels of Detail (LOD) in the CityGML standard, which not only visualizes complex urban environments but also allows queries and analysis. The aim of this paper is to present the methodology and results of a comparison between two scenarios of LOD2 building models, generated from UAS data acquired in two flight campaigns at different altitudes. The study was applied to the Vrisa traditional settlement, Lesvos island, Greece, which was affected by a devastating earthquake of Mw = 6.3 on 12th June 2017. Specifically, the two scenarios were created from the results of two different flight campaigns: i) on 12th January 2020 with a flying altitude of 100 m and ii) on 4th February 2020 with a flying altitude of 40 m, both with a nadir camera position. The LOD2 buildings were generated for a part of the Vrisa settlement consisting of 80 buildings, using the building footprints, Digital Surface Models (DSMs), a Digital Elevation Model (DEM) and orthophoto maps of the area. Afterwards, a comparison was made between the LOD2 buildings of the two scenarios in terms of their volumes and heights. Subsequently, the heights of the LOD2 buildings were compared with the heights of the respective terrestrial laser scanner (TLS) models. Additionally, the roofs of the LOD2 buildings were evaluated through visual inspection. The results showed that 65 of the 80 LOD2 buildings were generated accurately in terms of their heights and roof types for the first scenario, and 64 for the second. Finally, the comparison of the results proved that the generation of post-earthquake LOD2 buildings can be achieved with appropriate UAS data acquired at a flying altitude of 100 m, and that the results are not significantly affected by a lower altitude.
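The height comparison against the TLS reference can be sketched as a simple per-building tolerance check (the 0.3 m tolerance and the building identifiers below are assumed values for illustration, not figures stated in the paper):

```python
def evaluate_heights(lod2_heights, tls_heights, tolerance=0.3):
    """Count buildings whose LOD2 height is within `tolerance` metres
    of the terrestrial-laser-scanner reference height."""
    diffs = {bid: abs(h - tls_heights[bid])
             for bid, h in lod2_heights.items() if bid in tls_heights}
    accurate = [bid for bid, d in diffs.items() if d <= tolerance]
    return len(accurate), diffs
```

Running the same check on both flight-campaign scenarios gives the per-scenario counts of accurately reconstructed buildings that the abstract reports (65 versus 64 out of 80).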


2020 ◽  
Vol 25 ◽  
pp. 469-481
Author(s):  
Kay Rogage ◽  
David Greenwood

The operation and maintenance of built assets is crucial for optimising their whole-life cost and efficiency. Historically, however, there has been a general failure to transfer information between the design-and-construct (D&C) and operate-and-maintain (O&M) phases of the asset lifecycle. The recent steady uptake of digital technologies such as Building Information Modelling (BIM) in the D&C phase has been accompanied by an expectation that this would enable better transfer of information to those responsible for O&M. Progress has been slow, with practitioners unsure how to incorporate BIM into their working practices. Three types of challenge are identified, related to communication, experience and technology. In examining the last aspect, it appears that a major problem has been interoperability between building information models and the many computer-aided facilities management (CAFM) systems in use. The successful and automatic transfer of information from a building model to an FM tool is, in theory, achievable through the medium of the Industry Foundation Classes (IFC) schema. However, this relies upon the authoring of the model, in terms of how well its structure permits the identification of relevant objects, their relationships and attributes. The testing of over 100 anonymised building models revealed that very few did, preventing their straightforward mapping to the maintenance database we had selected for the test. An alternative, hybrid approach was developed using an open-source software toolkit to identify objects by their geometry as well as their classification, thus enabling their automatic transfer. In some cases, manual transfer proved necessary. The implications are that while these problems can be overcome on a case-by-case basis, interoperability between D&C and O&M systems will not become standard until it is accommodated by appropriate and informed authoring of building models.
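The hybrid matching idea, classification first with a geometry fallback, might be sketched as follows (the object fields, the bounding-box heuristic, and the rule shapes are hypothetical illustrations, not the open-source toolkit or schema the authors used):

```python
def identify_maintainable_objects(objects, geometry_rules):
    """Match model objects to FM asset types: use the IFC-style class when the
    model was authored with one, otherwise fall back on a geometry heuristic.
    Objects matching neither are flagged for manual transfer."""
    matched, unmatched = {}, []
    for obj in objects:
        cls = obj.get("ifc_class")
        if cls:                                  # well-authored model: class known
            matched[obj["id"]] = cls
            continue
        for asset_type, rule in geometry_rules.items():
            if rule(obj["bbox"]):                # fallback: infer type from shape
                matched[obj["id"]] = asset_type
                break
        else:
            unmatched.append(obj["id"])          # would need manual transfer
    return matched, unmatched
```

The `unmatched` list corresponds to the cases the abstract mentions where manual transfer proved necessary.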


Author(s):  
Z. Li ◽  
W. Zhang ◽  
J. Shan

Abstract. Building models are conventionally reconstructed by segmenting building roof points into planes and then using a topology graph to group the planes together. Roof edges and vertices are then represented mathematically by intersecting the segmented planes. Technically, such a solution is based on sequential local fitting, i.e., the entire data of one building do not simultaneously participate in determining the building model. As a consequence, the solution lacks topological integrity and geometric rigor. Fundamentally different from this traditional approach, we propose a holistic parametric reconstruction method, which takes the entire point cloud of one building into consideration simultaneously. In our work, building models are reconstructed from predefined parametric (roof) primitives. We first use a well-designed deep neural network to segment and identify primitives in the given building point clouds. A holistic optimization strategy is then introduced to simultaneously determine the parameters of each segmented primitive. In the last step, the optimal parameters are used to generate a watertight building model in CityGML format. The airborne LiDAR dataset RoofN3D, with predefined roof types, is used for our test. It is shown that PointNet++ applied to the entire dataset can achieve an accuracy of 83% for primitive classification. For a subset of 910 buildings in RoofN3D, the holistic approach is then used to determine the parameters of the primitives and reconstruct the buildings. The achieved overall quality of reconstruction is 0.08 meters in point-to-surface distance, or 0.7 times the RMSE of the input LiDAR points. This study demonstrates the efficiency and capability of the proposed approach and its potential to handle large-scale urban point clouds.
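The holistic idea, fitting one primitive's parameters against all of its points at once, can be illustrated with the simplest possible primitive, a single plane z = ax + by + c. This sketch only shows the simultaneous least-squares fit and the point-to-surface RMSE used as a quality measure; the paper's primitives are full parametric roof shapes, not bare planes:

```python
def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c using ALL points simultaneously
    (the 'holistic' idea in miniature), via the 3x3 normal equations."""
    # Accumulate A^T A and A^T z for design rows [x, y, 1]
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            v[i] += row[i] * z
    # Solve M * p = v by Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    p = [0.0] * 3
    for r in (2, 1, 0):   # back-substitution
        p[r] = (v[r] - sum(M[r][c] * p[c] for c in range(r + 1, 3))) / M[r][r]
    return p  # (a, b, c)

def point_surface_rmse(points, plane):
    """Vertical point-to-surface RMSE, the quality measure used in the paper."""
    a, b, c = plane
    return (sum((z - (a * x + b * y + c)) ** 2
                for x, y, z in points) / len(points)) ** 0.5
```

Because every point contributes to the same normal-equation system, no point is fitted "locally" and then stitched to its neighbours afterwards, which is the contrast with sequential local fitting the abstract draws.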


Author(s):  
Yehorova O.I. ◽  
Kozlova Yu.V.

The article analyzes topical English pandemic (coronavirus) vocabulary from the perspective of a system-functional approach. This involves the following tasks: 1) identifying the pandemic (coronavirus) lexical cluster; 2) describing the word-building peculiarities of English coronavirus vocabulary; and 3) interpreting the functioning of this vocabulary within political, everyday, and Internet discourses.

Methods. The methodological framework of the study features: 1) generalization for establishing the basic theoretical principles of the research; 2) structural-semantic analysis for studying the word-building specifics of the pandemic vocabulary; 3) a statistical method for calculating the frequency and productivity of particular word-building models within the pandemic lexical cluster; and 4) elements of discourse analysis to highlight the functional peculiarities of coronavirus vocabulary.

Results. The coronacrisis, still ongoing at the time of writing, has become a crucial factor catalyzing the nomination of novel concepts, thus influencing the lexical system of the English language. We consider the pandemic lexicon (coronavirus vocabulary) the newest group of neologisms in the English language, since it comprises innovative words and phrases coined since the start of the COVID-19 pandemic that relate to its impact on modern life. The most common word-building models for coronavirus vocabulary are derivation, compounding, shortening, borrowing, and substitution; statistical analysis showed blending to be the most productive model. The study of the functional peculiarities of the pandemic lexicon within various types of discourse shows that most of it has entered common usage, and reveals close inter-discursive links, particularly between political and everyday discourse. The use of pandemic vocabulary within Internet discourse is marked by the development of a number of thematic groups of language units referring to: 1) routine activities and events; 2) changes in study and work modes; 3) excess weight; 4) alcohol; and 5) verbal aggression.

Conclusions. The study enabled categorizing the units of English pandemic (coronavirus) vocabulary as a separate lexical cluster, which has developed predominantly from already existing language resources. The units of this innovative cluster perform a nominative function by naming new concepts and realia of life, reflect social moods such as worry, fear, anguish, and hopelessness, or facilitate a humorous effect in communication. Prospects for future research lie in expanding the discourse analysis of pandemic innovations to reveal the functioning of particular neological units at different stages of the COVID-19 pandemic, as well as in a comparative study of pandemic innovations in distantly related languages.

Key words: word-building, lexical innovation, pandemic vocabulary, discourse.


2020 ◽  
Vol 4 (4) ◽  
pp. 191
Author(s):  
Mohammad Aljanabi ◽  
Hind Ra'ad Ebraheem ◽  
Zahraa Faiz Hussain ◽  
Mohd Farhan Md Fudzee ◽  
Shahreen Kasim ◽  
...  

Much attention has been paid to big data technologies in the past few years, mainly due to their capability to impact business analytics and data mining practices, as well as the possibility of enabling a range of highly effective decision-making tools. With the current increase in the number of modern applications (including social media and other web-based and healthcare applications) that generate large volumes of data in different forms, processing such huge data volumes is becoming a challenge for conventional data processing tools. This has resulted in the emergence of big data analytics, which also comes with many challenges. This paper introduces the use of principal component analysis (PCA) for data size reduction, followed by SVM parallelization. The proposed scheme was executed on the Spark platform, and the experimental findings revealed its capability to reduce classification time without much influence on the classifier's accuracy.
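The PCA-based size-reduction step can be illustrated with a dependency-free sketch: power iteration on the covariance matrix extracts the first principal component and projects every sample onto it. This is only the reduction idea in miniature; the paper's actual pipeline ran PCA and a parallelized SVM on Spark:

```python
def pca_reduce(X, n_iter=200):
    """Project samples onto their first principal component
    (power iteration on the covariance matrix)."""
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - mean[j] for j in range(d)] for row in X]  # center the data
    # Sample covariance matrix
    C = [[sum(Xc[k][i] * Xc[k][j] for k in range(n)) / (n - 1)
          for j in range(d)] for i in range(d)]
    # Power iteration converges to the dominant eigenvector of C
    v = [1.0] * d
    for _ in range(n_iter):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # 1-D projection of every sample: the reduced dataset fed to the classifier
    return [sum(Xc[k][j] * v[j] for j in range(d)) for k in range(n)]
```

After reduction, each sample is a single score along the direction of maximum variance, which is what shortens the downstream classifier's training and classification time.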


2018 ◽  
Vol 11 (9) ◽  
pp. 3781-3794 ◽  
Author(s):  
Joy Merwin Monteiro ◽  
Jeremy McGibbon ◽  
Rodrigo Caballero

Abstract. sympl (System for Modelling Planets) and climt (Climate Modelling and Diagnostics Toolkit) are an attempt to rethink climate modelling frameworks from the ground up. The aim is to use expressive data structures available in the scientific Python ecosystem along with best practices in software design to allow scientists to easily and reliably combine model components to represent the climate system at a desired level of complexity, and to enable users to fully understand what the model is doing. sympl is a framework which formulates the model in terms of a state that gets evolved forward in time or modified within a specific time by well-defined components. sympl's design facilitates building models that are self-documenting, are highly interoperable, and provide fine-grained control over model components and behaviour. sympl components contain all relevant information about the input they expect and the output they provide. Components are designed to be easily interchanged, even when they rely on different units or array configurations. sympl provides basic functions and objects which could be used in any type of Earth system model. climt is an Earth system modelling toolkit that contains scientific components built using sympl base objects. These include both pure Python components and wrapped Fortran libraries. climt provides functionality requiring model-specific assumptions, such as state initialization and grid configuration. climt's programming interface is designed to be easy to use and thus appealing to a wide audience. Model building, configuration and execution are performed through a Python script (or Jupyter Notebook), enabling researchers to build an end-to-end Python-based pipeline along with popular Python data analysis and visualization tools.
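The state-plus-components design described above can be illustrated with a toy, hypothetical mini-framework (the class and function names below are invented for illustration and are not sympl's actual API): components declare their inputs and outputs, return tendencies for state quantities, and a stepper evolves the state dictionary forward in time.

```python
class Relaxation:
    """Toy component: relaxes `temperature` toward a target value.
    Declared inputs/outputs make the component self-documenting."""
    inputs = ("temperature",)
    outputs = ("temperature",)

    def __init__(self, target, rate):
        self.target, self.rate = target, rate

    def tendency(self, state):
        return {"temperature": self.rate * (self.target - state["temperature"])}

def step(state, components, dt):
    """Explicit Euler update of the model state from summed component tendencies."""
    tendencies = {}
    for comp in components:
        for name, value in comp.tendency(state).items():
            tendencies[name] = tendencies.get(name, 0.0) + value
    return {k: state.get(k, 0.0) + dt * tendencies.get(k, 0.0) for k in state}
```

Because the state is just a dictionary and components only exchange tendencies, components can be swapped or combined freely, which mirrors the interoperability goal the abstract describes.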


Author(s):  
Sanghoon Jun ◽  
Seungmin Rho ◽  
Eenjun Hwang

A typical music clip consists of one or more segments with different moods, and such mood information can be a crucial clue for determining the similarity between music clips. Traditionally, a single representative mood has been selected per music clip for retrieval, recommendation or classification purposes, which often gives unsatisfactory results. In this paper, the authors propose a new music retrieval and recommendation scheme based on the mood sequences of music clips. The authors first divide each music clip into segments through beat structure analysis, and then apply the k-medoids clustering algorithm to group all the segments into clusters with similar features. By assigning a unique mood symbol to each cluster, each music clip can be transformed into a musical mood sequence. For music retrieval, the authors use the Smith-Waterman (SW) algorithm to measure the similarity between mood sequences. For music recommendation, user preferences are retrieved from a recent music playlist or from user interaction through the interface, and a recommendation list is generated based on mood sequence similarity. The authors demonstrate that the proposed scheme achieves excellent performance in terms of retrieval accuracy and user satisfaction.
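The retrieval step relies on Smith-Waterman local alignment over mood-symbol sequences. A minimal pure-Python sketch (the scoring parameters here are illustrative assumptions, not the authors' values):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two mood-symbol sequences.
    Higher scores mean a longer/cleaner shared mood subsequence."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores never drop below zero
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Because the alignment is local, two clips sharing only a mid-clip mood run (say a sad passage inside otherwise different clips) still score highly, which is exactly why SW suits mood-sequence matching better than whole-sequence distance.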


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Atif Khan ◽  
Qaiser Shah ◽  
M. Irfan Uddin ◽  
Fasee Ullah ◽  
Abdullah Alharbi ◽  
...  

A huge amount of data on the web comes from discussion forums, which contain millions of threads. Discussion threads are a valuable source of knowledge for Internet users, as they contain information about numerous topics. A discussion thread on a single topic can comprise a huge number of reply posts, which makes it hard for forum users to scan all the replies and determine the most relevant ones. At the same time, it is also hard for forum users to manually summarize the bulk of reply posts in order to get the gist of the discussion thread. Thus, automatically extracting the most relevant replies from a discussion thread and combining them to form a summary is a challenging task. Motivated by this, this study proposes a sentence-embedding-based clustering approach for discussion thread summarization. The proposed approach works as follows. First, a word2vec model is employed to represent the reply sentences in the discussion thread as sentence embeddings/sentence vectors. Next, the k-medoids clustering algorithm is applied to group semantically similar reply sentences in order to reduce overlap among them. Finally, various text quality features are used to rank the reply sentences in each cluster, and the highest-ranked reply sentences are picked from all clusters to form the thread summary. Two standard forum datasets are used to assess the effectiveness of the suggested approach. Empirical results confirm that the proposed sentence-based clustering approach performs better than other summarization methods in terms of mean precision, recall, and F-measure.
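The clustering step can be sketched in miniature. A minimal k-medoids implementation follows (deterministic initialization, toy 2-D vectors in place of word2vec sentence embeddings, and a simplified update rule rather than full PAM; not the authors' exact procedure):

```python
import math

def k_medoids(points, k, n_iter=20):
    """Minimal k-medoids: alternate assignment and medoid update."""
    dist = lambda p, q: math.dist(p, q)

    def assign(medoids):
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            clusters[min(medoids, key=lambda m: dist(p, points[m]))].append(i)
        return clusters

    medoids = list(range(k))          # deterministic init: first k points
    for _ in range(n_iter):
        clusters = assign(medoids)
        # New medoid of each cluster: the member minimizing total
        # distance to the other members
        new_medoids = [
            min(members,
                key=lambda i: sum(dist(points[i], points[j]) for j in members))
            for members in clusters.values()
        ]
        if sorted(new_medoids) == sorted(medoids):
            break                     # converged
        medoids = new_medoids
    return medoids, assign(medoids)
```

In the full approach the medoid sentences (or the highest-ranked sentences per cluster, by the quality features) would then be concatenated to form the thread summary; using medoids rather than centroids keeps each cluster representative an actual sentence.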

