Testing Metadata Existence of Web Map Services

2010 ◽  
Vol 5 ◽  
pp. 49-56
Author(s):  
Jan Růžička

For a general user, it is quite common to use data sources available on the WWW. Almost all GIS software allows the use of data sources available via the Web Map Service (ISO/OGC standard) interface. The opportunity to use different sources and combine them brings many problems that have been discussed many times at conferences and in journal papers. One of these problems is the non-existence of metadata for published sources. The question was: were those discussions effective? The article is partly based on a comparison of the metadata situation between the years 2007 and 2010. The second part of the article focuses only on the situation in 2010. The paper was created in the context of research on intelligent map systems, which can be used for automatic or semi-automatic map creation or map evaluation.
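A metadata-existence test of the kind the article describes can be sketched by inspecting a service's GetCapabilities response for MetadataURL elements. This is a minimal illustration, assuming a standard WMS capabilities document; the sample layer names are hypothetical, not the article's test set.

```python
# Sketch: flag WMS layers that advertise no MetadataURL in their
# GetCapabilities response. Assumes a standard WMS capabilities document;
# layer names in the usage example are illustrative only.
import xml.etree.ElementTree as ET

def layers_missing_metadata(capabilities_xml: str) -> list:
    """Return titles of layers that advertise no MetadataURL element."""
    root = ET.fromstring(capabilities_xml)
    # Strip namespaces so the check works for WMS 1.1.1 and 1.3.0 alike.
    for el in root.iter():
        el.tag = el.tag.split('}')[-1]
    missing = []
    for layer in root.iter('Layer'):
        title = layer.findtext('Title', default='(untitled)')
        if layer.find('MetadataURL') is None:  # direct children only
            missing.append(title)
    return missing
```

Run against each service in a catalogue, this yields the share of published layers lacking metadata, which is the quantity compared between 2007 and 2010.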

2016 ◽  
Vol 15 (1) ◽  
pp. 3-9 ◽  
Author(s):  
Jan Růžička

There are several options for configuring a Web Map Service using different map servers. GeoServer is one of the most popular map servers nowadays. GeoServer is able to read data from several sources. A very popular data source is the ESRI Shapefile. It is well documented, and most software for geodata processing is able to read and write data in this format. Another very popular data store is the PostgreSQL/PostGIS object-relational database. Both data sources have advantages and disadvantages, and a user of GeoServer has to decide which one to use. The paper compares the performance of the GeoServer Web Map Service when reading data from an ESRI Shapefile and from a PostgreSQL/PostGIS database.
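A comparison of this kind boils down to issuing identical GetMap requests against two layers that differ only in their backing store and timing the responses. The sketch below assumes a local GeoServer with hypothetical workspace and layer names; it is not the paper's actual benchmark harness.

```python
# Sketch: time identical WMS GetMap requests against two GeoServer layers,
# one backed by a Shapefile and one by PostGIS. Host, workspace and layer
# names are assumptions for illustration.
import time
import urllib.parse
import urllib.request

def getmap_url(base, layer, bbox, size=(800, 600)):
    """Build a WMS 1.1.1 GetMap URL for one layer and bounding box."""
    params = {
        'SERVICE': 'WMS', 'VERSION': '1.1.1', 'REQUEST': 'GetMap',
        'LAYERS': layer, 'SRS': 'EPSG:4326',
        'BBOX': ','.join(str(v) for v in bbox),
        'WIDTH': str(size[0]), 'HEIGHT': str(size[1]),
        'FORMAT': 'image/png',
    }
    return base + '?' + urllib.parse.urlencode(params)

def mean_response_time(url, n=10):
    """Average wall-clock time of n sequential GetMap requests, in seconds."""
    start = time.perf_counter()
    for _ in range(n):
        urllib.request.urlopen(url).read()
    return (time.perf_counter() - start) / n
```

Comparing `mean_response_time` for, say, `demo:roads_shp` versus `demo:roads_postgis` over the same bounding boxes gives the store-level difference the paper measures.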


2017 ◽  
pp. 2485-2488
Author(s):  
Christopher D. Michaelis ◽  
Daniel P Ames
Keyword(s):  
Web Map ◽  

2016 ◽  
Vol 854 ◽  
pp. 163-166 ◽  
Author(s):  
Uwe Diekmann ◽  
Alex Miron ◽  
Andreea Trasca

The new MatPlus software supports multi-dimensional modelling of material properties using different data sources. Extensive mathematical functions allow fitting curves from different sources to any constitutive model and selectively combining models and data points along different dimensions. Physically consistent extrapolation of measured data within the complete multi-dimensional parametric space can be achieved. An integrated library of models can be extended by the user and already contains many popular equations, such as Hensel-Spittel and Zerilli-Armstrong, for flow curves.
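The core operation, fitting measured flow-curve points to a constitutive model, can be illustrated with a deliberately simplified power-law (Hollomon) form sigma = K * eps**n, fitted by least squares in log-log space. MatPlus internals are not public, and the full Hensel-Spittel model has additional temperature and strain-rate terms; this sketch only shows the fitting idea.

```python
# Sketch: least-squares fit of flow-curve data to the simplified power-law
# model sigma = K * eps**n (Hollomon), as a stand-in for the richer
# Hensel-Spittel form mentioned in the abstract.
import math

def fit_power_law(strains, stresses):
    """Fit sigma = K * eps**n by linear regression in log-log space."""
    xs = [math.log(e) for e in strains]
    ys = [math.log(s) for s in stresses]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    # Slope of the log-log regression line is the hardening exponent n.
    n = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    K = math.exp(mean_y - n * mean_x)
    return K, n
```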


Author(s):  
Óscar Pérez-Gil ◽  
Rafael Barea ◽  
Elena López-Guillén ◽  
Luis M. Bergasa ◽  
Carlos Gómez-Huélamo ◽  
...  

Abstract Nowadays, Artificial Intelligence (AI) is growing by leaps and bounds in almost all fields of technology, and Autonomous Vehicle (AV) research is one of them. This paper proposes the use of algorithms based on Deep Learning (DL) in the control layer of an autonomous vehicle. More specifically, Deep Reinforcement Learning (DRL) algorithms such as Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG) are implemented in order to compare their results. The aim of this work is to obtain, by applying a DRL algorithm, a trained model able to send control commands to the vehicle so that it navigates properly and efficiently along a determined route. In addition, for each of the algorithms, several agents are presented as solutions, with each agent using different data sources to produce the vehicle control commands. For this purpose, an open-source simulator, CARLA, is used, providing the system with the ability to perform a multitude of tests without any risk in a hyper-realistic urban simulation environment, something that is unthinkable in the real world. The results obtained show that both DQN and DDPG reach the goal, but DDPG obtains better performance. DDPG performs trajectories very similar to those of a classic controller such as LQR. In both cases the RMSE is lower than 0.1 m when following trajectories ranging from 180 to 700 m. To conclude, some conclusions and future work are discussed.
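The key algorithmic difference between the two methods lies in the one-step bootstrap target each critic regresses toward: DQN maximises over a discrete action set, while DDPG lets a deterministic actor pick the continuous next action. The sketch below shows only these targets with plain Python callables; the article's actual networks and CARLA interface are not reproduced.

```python
# Sketch: the one-step bootstrap targets used by DQN and DDPG critics.
# Q-values/critic/actor are plain lists and callables for illustration.
def dqn_target(reward, gamma, next_q_values, done=False):
    """DQN: discrete actions, so the target maximises over next Q-values."""
    if done:
        return reward
    return reward + gamma * max(next_q_values)

def ddpg_target(reward, gamma, critic, actor, next_state, done=False):
    """DDPG: continuous actions, so the actor supplies the next action."""
    if done:
        return reward
    return reward + gamma * critic(next_state, actor(next_state))
```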


The recent progress in the spatial resolution of remote sensing imagery has led to many types of Very High Resolution (VHR) satellite images; consequently, generally speaking, it is possible to prepare accurate base maps at scales larger than 1:10,000. One of these VHR satellite sensors is WorldView-3, launched in August 2014. Its resolution of 0.31 m makes WorldView-3 the highest-resolution commercial satellite in the world. In the current research, a pan-sharpened image of that type, covering an area in Giza Governorate in Egypt, is used to determine the largest-scale map that could be produced from such an image. To reach this objective, two different sources for acquiring Ground Control Points (GCPs) are used: firstly, very accurate field measurements using GPS, and secondly, a Web Map Service (WMS) server (in the current research, Google Earth), which is considered a good alternative when GCPs are not available. Accordingly, three scenarios are tested, using the same set of 16 Ground Control Points (GCPs) and 14 Check Points (CHKs), to evaluate the accuracy of the geometric correction of this type of image. The first approach uses GCP and CHK coordinates acquired by GPS. The second approach uses GCP coordinates acquired from Google Earth and CHK coordinates acquired by GPS. The third approach uses GCP and CHK coordinates from Google Earth. Results show that the first approach gives a Root Mean Square Error (RMSE) planimetric discrepancy of 0.45 m for GCPs and 0.69 m for CHKs. The second approach gives an RMSE of 1.10 m for GCPs and 1.75 m for CHKs. The third approach gives an RMSE of 1.10 m for GCPs and 1.40 m for CHKs. Taking a map accuracy specification of 0.5 mm at map scale, the worst CHK values (1.75 m and 1.40 m), which result from using Google Earth as a source, allow the production of a 1:5,000 large-scale map, compared with a 1:2,500 map scale for the best value (0.69 m).
This means that, for the given parameters of the current research, large-scale maps could be produced using Google Earth when GCPs are not accurately available from field surveying, which is very useful for many users.
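The two computations in the abstract, the planimetric RMSE over check points and the map scale it supports under the 0.5 mm-of-map-scale rule, can be sketched directly. The coordinate pairs in the usage example are illustrative, not the article's survey data.

```python
# Sketch: planimetric RMSE of check points, and the smallest map-scale
# denominator that an RMSE supports under the 0.5 mm-at-map-scale rule
# used in the article. Example coordinates are illustrative only.
import math

def planimetric_rmse(reference_pts, measured_pts):
    """RMSE over combined easting/northing discrepancies, in metres."""
    sq = [(xr - xm) ** 2 + (yr - ym) ** 2
          for (xr, yr), (xm, ym) in zip(reference_pts, measured_pts)]
    return math.sqrt(sum(sq) / len(sq))

def min_scale_denominator(rmse_m, spec_m=0.0005):
    """Smallest scale denominator with rmse <= 0.5 mm at map scale."""
    return rmse_m / spec_m
```

For the article's best CHK value, `min_scale_denominator(0.69)` gives 1380, so the next standard map scale is 1:2,500, matching the reported result; the worst value, 1.75 m, gives 3500, hence 1:5,000.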


2013 ◽  
Vol 21 (1) ◽  
pp. 31-36 ◽  
Author(s):  
Dušan Cibulka

Abstract The paper deals with the performance testing of web mapping services. It describes map service tests in which it is possible to determine the performance characteristics of a map service depending on the location and scale of the map. The implementation of the tests is tailored to the Web Map Service specification published by the Open Geospatial Consortium. The practical experiment consists of testing a map composition built from OpenStreetMap data for the area of southwestern Slovakia. These tests permit checking the performance of services at different positions and verifying the configuration of the services, the composition of the map, and the visualization of the geodata. The paper also highlights the fact that it is not sufficient to interpret map service performance using only conventional indicators; a map service's performance should be linked to information about the map's scale and location.


The IoT is a new concept that provides a world in which smart, connected, embedded systems operate, giving rise to large amounts of data from different sources that are considered to hold highly useful and valuable information. Data mining plays a critical role in creating a smarter IoT. Traditional care of an elderly person is a difficult and complex task, and the need to have a caregiver with the elderly person almost all the time drains the human and financial resources of the health care system. The emergence of Artificial Intelligence has allowed the conception of technical assistance that helps and reduces the time spent by the caregiver with the elderly person. This work focuses on analyzing techniques used for predicting falls in the elderly. We examine the applicability of three classification algorithms to IoT data. These algorithms are analyzed, and a comparative study is undertaken to find the classifier that performs best on the dataset, using a set of predefined performance metrics to compare the results of each classifier.
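A comparison of fall-detection classifiers typically reduces each model to a handful of metrics computed from its confusion counts. The sketch below shows those metrics for a binary fall/no-fall task; the confusion counts in the usage example are illustrative, not results from the article's dataset or algorithms.

```python
# Sketch: standard comparison metrics from binary confusion counts
# (fall = positive class). Counts in the usage example are illustrative.
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {'accuracy': accuracy, 'precision': precision,
            'recall': recall, 'f1': f1}
```

For imbalanced fall data, recall matters most: a missed fall (false negative) is far costlier than a false alarm, which is why accuracy alone is a poor basis for choosing the classifier.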


2010 ◽  
Vol 2 (3) ◽  
pp. 30-47 ◽  
Author(s):  
Michael G. Leahy ◽  
G. Brent Hall

This paper discusses the research-based origins and modular architecture of an open source geospatial tool that facilitates synchronous individual and group discussions using the medium of a Web map service. The software draws on existing open source geospatial projects and associated libraries and techniques that have evolved as part of the new generation of Web applications. The purpose of the software is discussed, highlighting the fusion of existing open source projects to produce new tools. Two case studies are briefly discussed to illustrate the value an open source approach brings to communities who would remain otherwise outside the reach of proprietary software tools. The paper concludes with comments on the project’s future evolution as an open source participatory mapping platform.

