Diverse Visualization Techniques and Methods of Moving-Object-Trajectory Data: A Review

2019 ◽  
Vol 8 (2) ◽  
pp. 63 ◽  
Author(s):  
Jing He ◽  
Haonan Chen ◽  
Yijin Chen ◽  
Xinming Tang ◽  
Yebin Zou

Trajectory big data have significant applications in many areas, such as traffic management, urban planning and military reconnaissance. Traditional visualization methods, represented by contour maps, shading maps and hypsometric maps, are mainly based on the spatiotemporal information of trajectories; they support macroscopic study of the spatiotemporal conditions of an entire trajectory set as well as microscopic analysis of the movement of each individual trajectory, and are widely used in screen display and flat mapping. With the improvement of trajectory data quality, these data now describe not only the spatial and temporal dimensions but also many other attributes (e.g., speed, orientation, and elevation), with large data volumes and high dimensionality. Moreover, these data have relatively complicated internal relationships and regularities that complicate analysis; the traditional approaches can no longer fully meet the requirements of visualizing trajectory data and mining hidden information. Therefore, diverse visualization methods that present the value of massive trajectory information are currently a hot research topic. This paper summarizes the research status of trajectory data-visualization techniques in recent years and distills common contemporary trajectory data-visualization methods to provide a comprehensive understanding of the fundamental characteristics and diverse achievements of trajectory-data visualization.

2019 ◽  
Vol 8 (8) ◽  
pp. 348 ◽  
Author(s):  
Netek ◽  
Brus ◽  
Tomecka

We are now generating exponentially more data, from more sources, than a few years ago. Big data, an already familiar term, is generally defined as a massive volume of structured, semi-structured, and/or unstructured data that cannot be effectively managed and processed using traditional databases and software techniques. Visualizing such a large amount of data easily and quickly via an Internet platform can be problematic. From this perspective, the main aim of the paper is to test the point data visualization capabilities of selected JavaScript mapping libraries, measuring their performance and ability to cope with large amounts of data. Nine datasets containing 10,000 to 3,000,000 points were generated from the Nature Conservation Database. Five libraries for marker clustering and two libraries for heatmap visualization were analyzed. Loading time and the ability to visualize large datasets were compared for each dataset and each library. The best-evaluated library was Mapbox GL JS (Graphics Library JavaScript), which showed the highest overall performance. Some of the tested libraries could not handle the desired amount of data; in general, fewer than 100,000 points was indicated as the threshold for implementation without a noticeable slowdown in performance. Such limits can constrain point data visualization in today's dynamic web environment.
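
The grid-bucketing idea behind marker clustering can be sketched in a few lines. This is an illustrative sketch only: the benchmarked libraries implement far more elaborate, zoom-dependent clustering, and all names and parameters here are chosen for the example.

```python
import random

def cluster_points(points, cell_size):
    """Grid-based marker clustering: bucket points into square cells and
    return one aggregate marker (centroid x, centroid y, count) per cell."""
    cells = {}
    for x, y in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells.setdefault(key, []).append((x, y))
    clusters = []
    for members in cells.values():
        n = len(members)
        cx = sum(p[0] for p in members) / n
        cy = sum(p[1] for p in members) / n
        clusters.append((cx, cy, n))
    return clusters

# 100,000 random points collapse to at most a 10 x 10 grid of markers.
random.seed(0)
points = [(random.uniform(0, 100), random.uniform(0, 100))
          for _ in range(100_000)]
clusters = cluster_points(points, cell_size=10.0)
```

With `cell_size` tied to the current zoom level, this is essentially the aggregation step client-side clustering plugins perform before rendering, which is why marker count, not raw point count, drives display performance.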


Author(s):  
Annie T. Chen ◽  
Shu-Hong Zhu ◽  
Mike Conway

Our aim in this work is to apply text mining and novel visualization techniques to textual data derived from online health discussion forums in order to better understand consumers' experiences and perceptions of electronic cigarettes and hookah.


2017 ◽  
Author(s):  
Sarvesh Nikumbh ◽  
Peter Ebert ◽  
Nico Pfeifer

Most string kernels for the comparison of genomic sequences are tied to using (absolute) positional information of the features in the individual sequences. This poses limitations when comparing variable-length sequences with such string kernels. For example, profiling chromatin interactions by 3C-based experiments yields variable-length genomic sequences (restriction fragments). Here, the exact position-wise occurrence of signals in sequences may not be as important as in the analysis of promoter sequences, which typically have a transcription start site as a reference point. Existing position-aware string kernels have been shown to be useful for the latter scenario.

In this work, we propose a novel approach for sequence comparison that allows larger positional freedom than most existing approaches, can identify a possibly dispersed set of features when comparing variable-length sequences, and can handle both of the aforementioned scenarios. Our approach, CoMIK, identifies not just the features useful for classification but also their locations in the variable-length sequences, as evidenced by the results of three binary classification experiments, aided by recently introduced visualization techniques. Furthermore, we show that we can efficiently retrieve and interpret the weight vector for the complex setting of multiple multi-instance kernels.
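
The abstract does not reproduce CoMIK's multiple multi-instance kernel itself. As a hedged illustration of the underlying idea of position-independent sequence comparison, here is a simple cosine-normalized spectrum kernel (a standard, well-known string kernel, not the paper's method):

```python
from collections import Counter
import math

def kmer_counts(seq, k):
    """Count all k-mers in a sequence, ignoring their positions."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s1, s2, k=3):
    """Cosine-normalized spectrum kernel: similarity of two variable-length
    sequences based on shared k-mer content, with full positional freedom."""
    c1, c2 = kmer_counts(s1, k), kmer_counts(s2, k)
    dot = sum(c1[w] * c2[w] for w in c1 if w in c2)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

Because only k-mer counts enter the kernel, two sequences of different lengths that share the same motifs at different offsets still score highly; position-aware kernels would penalize exactly that freedom.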


2021 ◽  
Vol 10 (11) ◽  
pp. 757
Author(s):  
Pin Nie ◽  
Zhenjie Chen ◽  
Nan Xia ◽  
Qiuhao Huang ◽  
Feixue Li

Automatic Identification System (AIS) data have been widely used in many fields, such as collision detection, navigation, and maritime traffic management. Similarity analysis is an important step in most AIS trajectory analysis tasks. However, most traditional AIS trajectory similarity methods calculate distances between trajectory points, which requires complex and time-consuming computation and often leads to large errors when processing AIS trajectories with substantial differences in length or unevenly distributed points. Therefore, we propose a cell-based similarity analysis method that combines the weight of the direction and the k-neighborhood (WDN-SIM). This method quantifies the similarity between trajectories based on their degree of proximity and differences in motion direction. In terms of effectiveness and efficiency, WDN-SIM outperformed seven traditional methods for trajectory similarity analysis. In particular, WDN-SIM is highly robust to noise and can distinguish similarities between trajectories in complex situations, such as opposing directions of motion, large differences in length, and uneven point distributions.
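
WDN-SIM's exact formulation is not given in the abstract. The sketch below illustrates only the generic cell-based component of such methods (rasterize each trajectory to grid cells, then compare the cell sets), without the paper's direction weighting and k-neighborhood terms; the function names and the Jaccard measure are choices made for this example.

```python
def traj_cells(traj, cell_size):
    """Rasterize a trajectory (list of (x, y) points) into the set of
    grid cells its points fall into."""
    return {(int(x // cell_size), int(y // cell_size)) for x, y in traj}

def cell_jaccard(traj_a, traj_b, cell_size=1.0):
    """Cell-based similarity: Jaccard overlap of the two cell sets.
    Because each cell counts once regardless of how many points land in
    it, the score is insensitive to sampling density and length."""
    a = traj_cells(traj_a, cell_size)
    b = traj_cells(traj_b, cell_size)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# A densely and a sparsely sampled version of the same path match exactly.
dense = [(i / 10, 0.0) for i in range(100)]   # 100 points along y = 0
sparse = [(float(i), 0.0) for i in range(10)]  # 10 points on the same path
```

This also hints at why a direction term matters: two ships passing through the same lane in opposite directions occupy the same cells, so a purely cell-based score would call them identical.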


Author(s):  
Margarita Martínez-Díaz ◽  
Ignacio Pérez Pérez

Most algorithms that analyze or forecast road traffic rely on many inputs, but in practice, calculations are usually limited by the available data and measurement equipment. Generally, some of these inputs are substituted by raw or even inappropriate estimations, which in some cases conflict with the fundamentals of traffic flow theory. This paper addresses one common example of these bad practices. Many traffic management centres depend on the data provided by double loop detectors, which supply, among other quantities, vehicle speeds. The common treatment is to compute the arithmetic mean of these speeds over different aggregation periods (i.e., the time mean speeds). Time mean speed is not consistent with Edie's generalized definitions of traffic variables, and therefore it is not the average speed that relates flow to density. This means that current practice begins with an error that can have negative effects on later studies and applications. The algorithm introduced in this paper enables easy estimation of space mean speeds from the data provided by the loops. It is based on two key hypotheses: stationarity of traffic and a log-normal distribution of the individual speeds in each aggregation interval. It could also be used in the case of transient traffic as part of a data fusion methodology.

DOI: http://dx.doi.org/10.4995/CIT2016.2016.3208
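
The paper's full algorithm is not reproduced in the abstract, but the standard relationship implied by its two hypotheses can be sketched: if spot speeds within a stationary aggregation interval are log-normal with log-scale parameters mu and var, the time (arithmetic) mean speed is exp(mu + var/2) while the space (harmonic) mean speed is exp(mu - var/2). A minimal sketch, assuming individual speed records from the loop are available:

```python
import math

def space_mean_speed(spot_speeds):
    """Estimate the space mean speed from individual spot speeds observed
    at a double loop detector over one aggregation interval, assuming
    stationary traffic and log-normally distributed speeds."""
    logs = [math.log(v) for v in spot_speeds]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    # Log-normal: time mean = exp(mu + var/2), space mean = exp(mu - var/2),
    # so the space mean is never above the time mean (equal only when
    # every vehicle travels at the same speed).
    return math.exp(mu - var / 2)
```

For example, for spot speeds of 60, 80 and 100 km/h the arithmetic (time) mean is 80 km/h, while the log-normal space mean estimate is lower, which is the bias the paper argues current practice ignores.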


2003 ◽  
Vol 1858 (1) ◽  
pp. 148-157 ◽  
Author(s):  
Sherif Ishak

Little information has been successfully extracted from the wealth of data collected by intelligent transportation systems. Such information is needed for the efficiency of operations and management functions of traffic-management centers. A new set of second-order statistical measures derived from texture characterization techniques in the field of digital image analysis is presented. The main objective is to improve the data-analysis tools used in performance-monitoring systems and assessment of level of service. The new measures can extract properties such as smoothness, homogeneity, regularity, and randomness in traffic operations directly from constructed spatiotemporal traffic contour maps. To avoid information redundancy, a correlation matrix was examined for nearly 14,000 15-min speed contour maps generated for a 3.4-mi freeway section over a period of 5 weekdays. The result was a set of three second-order measures: angular second moment, contrast, and entropy. Each measure was analyzed to examine its sensitivity to various traffic conditions, expressed by the overall speed mean of each contour map. The study also presented a tentative approach, similar to the conventional one used in the Highway Capacity Manual, to evaluate the level of service for each contour map. The new set of level-of-service criteria can be applied in real time by using a stand-alone module that was developed in the study. The module can be readily implemented online and allows traffic-management center operators to tune a large set of related parameters.
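
The three retained measures follow the standard Haralick definitions of second-order texture statistics. A minimal sketch, assuming a quantized speed contour map stored as a 2D list (all names here are chosen for this example):

```python
import math

def glcm(grid, levels, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix of a quantized 2D map
    (e.g., a speed contour map) for the displacement (dx, dy)."""
    rows, cols = len(grid), len(grid[0])
    p = [[0.0] * levels for _ in range(levels)]
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                p[grid[r][c]][grid[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in p]

def second_order_measures(p):
    """Angular second moment, contrast, and entropy of a GLCM."""
    asm = sum(v * v for row in p for v in row)
    contrast = sum((i - j) ** 2 * p[i][j]
                   for i in range(len(p)) for j in range(len(p)))
    entropy = -sum(v * math.log(v) for row in p for v in row if v > 0)
    return asm, contrast, entropy

# A perfectly smooth map gives ASM = 1, contrast = 0, entropy = 0;
# alternating speed levels drive contrast and entropy up.
smooth = [[2, 2, 2], [2, 2, 2]]
asm, contrast, entropy = second_order_measures(glcm(smooth, levels=4))
```

Applied to 15-min speed contour maps, high ASM indicates homogeneous traffic, high contrast indicates abrupt speed transitions, and high entropy indicates irregular, disordered operations.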


Author(s):  
Catarina Sampaio ◽  
Luísa Ribas

The representation of identity in digital media does not necessarily have to be conceived on the basis of criteria that mimic physical reality. This article presents a model for representing individual identity, based on the recording of human experience in the form of personal data, as an alternative to the common forms of mimetic portraiture. As such, the authors developed the project Data Self-Portrait that aims to explore the creative possibilities associated with the concept of data portrait. It can be described as a means of representing and expressing identity through the application of data visualization techniques to the domain of portraiture, according to an exploratory design approach, based on visualizing the digital footprint. It thus seeks to develop design proposals for representing identity that respond to the growing dematerialization of human activities and explores the representational and expressive role of data visualization, according to a creative use of computational technologies.


Author(s):  
Anna Ursyn ◽  
Edoardo L'Astorina

This chapter discusses some possible ways in which professionals, researchers and users representing various knowledge domains collect and visualize big data sets. First it describes communication through the senses as a basis for visualization techniques, computational solutions for enhancing the senses, and ways of enhancing the senses through technology. The next part discusses the ideas behind the visualization of data sets and considers what visualization is and what it is not. Further discussion relates to data visualization through art: as visual solutions to science- and mathematics-related problems, as documentation of objects and events, and as a testimony to thoughts, knowledge and meaning. Learning and teaching through data visualization is the concluding theme of the chapter. Edoardo L'Astorina provides a visual analysis of best practices in visualization: an overlay of Google Maps that showed all the arrival times - in real time - of all the buses in your area based on your location, and a visual representation of all the Tweets in the world about TfL (Transport for London) tube lines to predict disruptions.


Author(s):  
Clarissa Rodrigues ◽  
Elizabeth Carvalho

This paper describes an interactive data visualization application that aims to show how the Portuguese people spent their leisure time on cultural activities between 1994 and 2009. The leisure trend is displayed to the end user through different visualization techniques and visual cues. The authors developed the visual representations based on simple, regular visual shapes that can easily be combined, interpreted, memorized and used. To evaluate their results, the authors tested their prototype with a preselected group of subjects.

