spatial data quality
Recently Published Documents

TOTAL DOCUMENTS: 90 (five years: 2)
H-INDEX: 12 (five years: 0)

2021 ◽  
Vol 10 (6) ◽  
pp. 374
Author(s):  
Francisco Javier Ariza-López ◽  
Antonio Rodríguez-Pascual ◽  
Francisco J. Lopez-Pellicer ◽  
Luis M. Vilches-Blázquez ◽  
Agustín Villar-Iglesias ◽  
...  

The production of official statistical and geospatial data is often in the hands of highly specialized public agencies that have traditionally followed their own paths and established their own production frameworks. In this article, we present the main frameworks of these two areas and focus on the possibility of, and need for, better integration between them through the interoperability of systems, processes, and data. The statistical area is well led and has well-defined frameworks. The geospatial area lacks clear leadership, and its large number of standards establishes a framework that is not always obvious. The lack of a general and common legal framework is also highlighted. Additionally, three examples are offered: the first is the application of the spatial data quality model to statistical data, the second is the application of the statistical process model to the geospatial case, and the third is the use of linked geospatial and statistical data. These examples demonstrate that experiences and advances can be transferred from one area to the other. In this way, we emphasize the conceptual proximity of these two areas, highlighting synergies, gaps, and the potential for integration.


2021 ◽  
Vol 10 (4) ◽  
pp. 265
Author(s):  
Godwin Yeboah ◽  
João Porto de Albuquerque ◽  
Rafael Troilo ◽  
Grant Tregonning ◽  
Shanaka Perera ◽  
...  

This paper examines OpenStreetMap data quality at different stages of a participatory mapping process in seven slums in Africa and Asia. Data were drawn from an OpenStreetMap-based participatory mapping process developed as part of a research project on inequalities in healthcare access among slum residents in the Global South. Descriptive statistics and qualitative analysis were employed to examine the following research question: what is the spatial data quality of collaborative remote mapping achieved by volunteer mappers in morphologically complex urban areas? Findings show that the completeness achieved by remote mapping largely depends on the morphology and characteristics of slums, such as building density and rooftop architecture, varying from 84% in the best case to zero at the most difficult site. The major scientific contribution of this study is evidence on the spatial data quality of data mapped remotely by volunteers in morphologically complex urban areas such as slums; the results offer insights into how much fieldwork is needed at each level of complexity and to what extent local volunteers must be involved in these efforts.
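As a rough illustration of the completeness figures reported above, here is a minimal Python sketch of a per-site completeness ratio; the site names and building counts are hypothetical, chosen only to reproduce the reported range (84% down to zero).

```python
# Minimal sketch of a building-completeness check in the spirit of the
# paper's descriptive statistics; counts and site names are hypothetical.

def completeness(mapped: int, reference: int) -> float:
    """Share of reference buildings captured by remote mapping (%)."""
    if reference == 0:
        return 0.0
    return 100.0 * min(mapped, reference) / reference

# Hypothetical sites illustrating the reported range (84% down to zero).
sites = {"site_a": (840, 1000), "site_b": (0, 750)}
for name, (mapped, ref) in sites.items():
    print(f"{name}: {completeness(mapped, ref):.1f}% complete")
```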


Author(s):  
M. Salhab ◽  
A. Basiri

Abstract. Data gaps and poor data quality may lead to flawed conclusions and misguided data-driven policies and decisions, for example in measuring progress towards the Sustainable Development Goals. This is particularly important for land cover data, an essential source for a wide range of applications and real-world challenges, including climate change mitigation, food security planning, and resource allocation and mobilization. While global land cover datasets are available, their usability is limited by their coarse spatial and temporal resolutions, so a good understanding of their fitness for purpose is imperative. This paper compares two datasets from a spatial data quality (SDQ) perspective: (1) a global land cover map, and (2) a fit-for-purpose training dataset generated by visual inspection of very high-resolution satellite imagery. The latter dataset is created using Google Earth Engine (GEE), a cloud-based computing platform and data repository. We systematically evaluate the two datasets against SDQ criteria, using the Analytic Hierarchy Process (AHP) to prioritise those criteria. To validate the results, land cover classifications are conducted with both datasets, also within GEE. Based on the SDQ evaluation and the land cover classifications, we find that the fit-for-purpose training dataset significantly outperforms the global land cover map. Our study also shows that cloud-based computing platforms and publicly available data repositories can provide an effective approach to filling land cover data gaps in data-scarce regions.
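The abstract names the Analytic Hierarchy Process as the weighting step for the SDQ criteria. Below is a hedged Python sketch of that step under stated assumptions: the criteria and the Saaty-scale pairwise judgments are illustrative, not the paper's, and the paper's actual comparison matrix is not reproduced.

```python
import numpy as np

# AHP sketch: derive priority weights for spatial data quality (SDQ)
# criteria from a pairwise comparison matrix. Criteria and judgments
# below are assumptions for illustration only.

criteria = ["positional accuracy", "thematic accuracy", "completeness"]

# Saaty-scale judgments (row i vs column j); reciprocal by construction.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priority vector = principal eigenvector of A, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
idx = np.argmax(np.real(eigvals))
principal = np.real(eigvecs[:, idx])
weights = principal / principal.sum()

# Consistency ratio (CR) checks that judgments are acceptably coherent;
# 0.58 is Saaty's random index for a 3x3 matrix.
lam_max = np.real(eigvals[idx])
ci = (lam_max - len(A)) / (len(A) - 1)
cr = ci / 0.58

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f} (acceptable if < 0.1)")
```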


2019 ◽  
Vol 1 ◽  
pp. 1-2
Author(s):  
Nils Mesterton ◽  
Mari Isomäki ◽  
Antti Jakobsson ◽  
Joonas Jokela

Abstract. The Finnish National Topographic Database (NTDB) is currently being developed by the National Land Survey of Finland (NLS) together with municipalities and other governmental agencies. It will be a harmonized database for topographic data in Finland, provided by municipalities, the NLS, and other agencies. The NTDB has been divided into several themes, of which the buildings theme was the focus of the first stage of development. Data collection for the NTDB is performed by different municipalities and governmental organizations. Having many supplying organizations can lead to inconsistencies in spatial data; without a robust quality process, this could lead to chaos. Fortunately, data quality can be controlled with an automated data quality evaluation process. Besides reducing the amount of overlapping work and making national topographic data more accessible to all potential users, reaching a better degree of harmonization across the database is one of the main goals of the NTDB.

The NTDB spatial data management system is designed to have a modular architecture. The Data Quality Module, named QualityGuard, can therefore also be used in the National Geospatial Platform, which will be a key component of the future Spatial Data Infrastructure of Finland. The National Geospatial Platform will include the NTDB data themes as well as addresses, detailed plans, and other land use information. FME was chosen as the implementation platform for the QualityGuard because it is robust and highly adaptable, allowing the development of even the most complicated ETL workflows and spatial applications. This approach allows effortless communication with different applications via various types of interfaces, efficiently meeting the modularity requirement at all stages of development and integration.

The QualityGuard works in two modes: (a) as part of the import process to the NTDB, and (b) independently. Users can validate their data with the independent QualityGuard to find possible errors and fix them. Once the data have been validated and fixed, data producers can import them using the import option. Users receive a data quality report containing statistics and a quality error dataset for their imported data, which can be inspected in any GIS software, e.g. overlaid on the original features. The geographical locations of quality errors are displayed as points. Each error finding produces a row in the error dataset, containing short descriptions of the type and cause of the error.

Data quality evaluation is based on validating conformance against data product specifications expressed as quality rules. Three ISO 19157 quality elements are used: format consistency, domain consistency, and topological consistency. The quality rules have been defined in co-operation between specialists in the field and the technical development team. The definition work is based on the concept developed in the ESDIN project, the quality specifications of INSPIRE, national topographic database quality specifications, national and international quality recommendations and standards, the quality rules developed in the European Location Framework (ELF) project, and interviews with experts from the National Land Survey of Finland and the municipalities. In fact, the NLS was one of the first agencies in the world to publish a quality model for digital topographic data, in 1995.

Quality rules are currently documented in spreadsheets, one per theme. Each quality rule has been defined using RuleSpeak, a structured notation for expressing business rules, which provides a consistent structure for each definition. The rules are divided into general rules and feature-specific rules. General rules apply to all feature types of a given theme, although exceptions can be defined.

A nation-wide, centralized, automated spatial data quality process is one of the key elements of the effort towards better harmonization of the NTDB. In principle, the greater aim is compliance with the auditing process described in ISO 19158, which is meant to ensure that supplying organizations are capable of delivering data of the expected quality. However, implementing a nation-wide process is challenging because municipalities and other organizations may not have the capability or resources to repair the quality issues identified by the QualityGuard. Inconsistent data quality is not desirable, so data quality requirements will be less strict in the first phases of implementation. Some issues will be repaired automatically by the software once the process has been established, but the organizations will still be notified about data quality issues in any conflicting features.

The Finnish NTDB is under continuous development, and current efforts are directed towards automation, improved data quality, and less overlapping work, in co-operation with municipalities and other data producers. The QualityGuard has enabled an automated spatial data quality validation process for incoming data and is currently being evaluated in practice. The results have already been well received by users. Automated data quality validation is no longer fiction; we believe it will become common practice for all SDI datasets in Finland.
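The following Python sketch illustrates the kind of rule-based check described above. It is not the QualityGuard itself (which is implemented in FME); the feature schema and the two rules, one for domain consistency and one standing in for topological consistency, are assumptions for illustration. Each finding becomes a point-located row, mirroring the error dataset the abstract describes.

```python
# Illustrative rule-based validator (not the NLS QualityGuard): each
# finding is emitted as a point-located error record, as in the process
# described above. Feature fields and rule thresholds are assumed.

from dataclasses import dataclass

@dataclass
class Building:
    fid: int
    x: float          # representative point of the footprint
    y: float
    storeys: int
    area_m2: float

def validate(buildings):
    """Yield (x, y, rule, description) rows for the error dataset."""
    for b in buildings:
        # Domain consistency: attribute values must lie in the legal range.
        if not (1 <= b.storeys <= 100):
            yield (b.x, b.y, "domain", f"feature {b.fid}: storeys out of range")
        # Topological consistency stand-in: degenerate footprints.
        if b.area_m2 <= 0:
            yield (b.x, b.y, "topology", f"feature {b.fid}: non-positive area")

data = [Building(1, 385000, 6672000, 3, 120.0),
        Building(2, 385050, 6672040, 0, -5.0)]
for row in validate(data):
    print(row)
```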


2019 ◽  
Vol 23 (6) ◽  
pp. 1184-1203 ◽  
Author(s):  
Greg Brown ◽  
Jonathan Rhodes ◽  
Daniel Lunney ◽  
Ross Goldingay ◽  
Kelly Fielding ◽  
...  

2019 ◽  
Vol 1 ◽  
pp. 1-8
Author(s):  
Vaclav Talhofer ◽  
Šárka Hošková-Mayerová

Abstract. Multi-criteria analysis is becoming one of the main methods for evaluating the influence of the geographic environment on human activity, and vice versa. The results of such analyses are often used in command and control systems, especially in the armed forces and rescue services. The analyses use digital geographic data, whose quality significantly influences the results. In command and control systems, analysis results are usually visualized as thematic layers over raster images of topographic maps, so this visualization must follow the cartographic principles used in creating thematic maps. The article presents the problems an analyst encounters in evaluating the quality of the data used, in performing the analysis itself, and in preparing data files for transfer and publication in command and control systems.
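To make the kind of multi-criteria analysis described above concrete, here is a minimal Python sketch of a weighted-overlay evaluation; the factor layers, weights, and the assumption that lower slope and sparser vegetation are preferable are all illustrative, not taken from the article.

```python
import numpy as np

# Weighted-overlay sketch: combine normalised geographic factor layers
# into a single suitability surface. Layers and weights are illustrative.

rng = np.random.default_rng(0)
slope = rng.random((4, 4))         # normalised factor layers in [0, 1]
soil_bearing = rng.random((4, 4))
vegetation = rng.random((4, 4))

weights = {"slope": 0.5, "soil": 0.3, "vegetation": 0.2}

suitability = (weights["slope"] * (1 - slope)       # flatter is better
               + weights["soil"] * soil_bearing     # firmer is better
               + weights["vegetation"] * (1 - vegetation))

print(suitability.round(2))
```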


Urbani izziv ◽  
2019 ◽  
Vol 1 (30) ◽  
pp. 87-99 ◽  
Author(s):  
Simon STARČEK ◽  
Maruška ŠUBIC KOVAČ

Spatial data are directly linked to spatial planning and to spatial management in general, including the property tax system. Spatial data quality affects the efficiency of the property tax system, as well as its equity and reasonableness. This article presents a methodological approach to analysing the quality of the spatial databases that municipalities manage for assessing construction land fees. Adjusted Jaccard and Czekanowski indices were defined and applied for the data quality analysis because they remain applicable when the differences between the compared data amount to less than 5%. The indices were used to establish how well the areas of buildings and of unbuilt construction land match between the municipal construction land fee assessment databases and the real estate register. Based on an analysis of the completeness, logical consistency, and thematic accuracy of the municipal construction land fee assessment databases, the municipal databases were updated. The modifications were then analysed in terms of the number of persons liable for the construction land fee and the fee amount payable. The results of this first study of its type were obtained on a small sample, but the methodology is also applicable to a large sample or to all Slovenian municipalities. As such, the analysis may help experts at municipal offices, spatial planners, and decision-makers in taxation policy.
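A small sketch of the matching step under stated assumptions: the paper's adjusted indices are not reproduced here, so the standard quantitative forms of the Jaccard (Ruzicka) and Czekanowski (Sørensen-Dice) indices are shown instead, applied to hypothetical paired building-area records.

```python
# Quantitative Jaccard (Ruzicka) and Czekanowski indices for paired
# area records; the paper's adjusted variants are not reproduced, and
# all values below are hypothetical.

def jaccard_q(a, b):
    """Quantitative Jaccard (Ruzicka): sum(min) / sum(max)."""
    return (sum(min(x, y) for x, y in zip(a, b))
            / sum(max(x, y) for x, y in zip(a, b)))

def czekanowski_q(a, b):
    """Quantitative Czekanowski: 2 * sum(min) / (sum(a) + sum(b))."""
    return 2 * sum(min(x, y) for x, y in zip(a, b)) / (sum(a) + sum(b))

# Building areas (m2) for the same parcels in the municipal fee
# assessment database and in the real estate register (hypothetical).
municipal = [120.0, 86.5, 240.0, 55.0]
register  = [118.0, 90.0, 240.0, 52.0]

print(f"Jaccard:     {jaccard_q(municipal, register):.4f}")
print(f"Czekanowski: {czekanowski_q(municipal, register):.4f}")
```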


2019 ◽  
Vol 11 (1) ◽  
pp. 219-235 ◽  
Author(s):  
Elżbieta Bielecka ◽  
Elżbieta Burek

Abstract. Using a literature review and quantitative analysis, research on the quality and uncertainty of spatial data was compared and analysed by year of publication, author, document type, WoS category, and country. The paper portrays developments in the field and examines the state and evolution of the most productive and influential journals, conferences, and research institutions. The results show that remote sensing, computer science, and geography are the disciplines most concerned with data imperfection and the assessment of its uncertainty, a pattern clearly reflected in the most productive journals and conference proceedings. The top-ranked countries in this field are the United States, China, and the United Kingdom.

