Graph Convolutional Network-based Model for Incident-related Congestion Prediction: A Case Study of Shanghai Expressways

2021 ◽  
Vol 12 (3) ◽  
pp. 1-22
Author(s):  
Xi Wang ◽  
Yibo Chai ◽  
Hui Li ◽  
Wenbin Wang ◽  
Weishan Sun

Traffic congestion has become a significant obstacle to the development of megacities in China. Although local governments have invested substantial resources in road infrastructure, it remains insufficient for growing traffic demands. As a first step toward optimizing real-time traffic control, this study uses Shanghai Expressways as a case study to predict incident-related congestion. Our study proposes a graph convolutional network-based model that identifies correlations in multi-dimensional sensor-detected data while simultaneously taking environmental, spatiotemporal, and network features into account when predicting traffic conditions immediately after a traffic incident. The average accuracy, average AUC, and average F1 score of the predictive model are 92.78%, 95.98%, and 88.78%, respectively, on small-scale ground-truth data. Furthermore, we improve the predictive model's performance using semi-supervised learning by including additional unlabeled data instances; as a result, the accuracy, AUC, and F1 score of the model increase by 2.69%, 1.25%, and 4.72%, respectively. The findings of this article have important implications for improving the management and development of expressways in Shanghai, as well as in other metropolitan areas in China.
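The abstract does not reproduce the model itself. As an illustration of the propagation rule at the heart of most graph convolutional networks, here is a minimal NumPy sketch of a single GCN layer over a toy road-sensor graph; the adjacency matrix, feature values, and the `gcn_layer` helper are hypothetical, not the authors' implementation:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: H = ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ X @ W, 0.0)  # ReLU activation

# Toy graph: 3 road sensors in a line, 2 features each (e.g. speed, occupancy)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[60.0, 0.2],
              [30.0, 0.7],
              [55.0, 0.3]])
W = np.ones((2, 1))                         # single output channel
H = gcn_layer(A, X, W)
print(H.shape)  # (3, 1)
```

Each output row mixes a sensor's own features with those of its graph neighbours, which is what lets such models capture the spatial correlations described above.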

2020 ◽  
Vol 18 (2) ◽  
pp. 84-94
Author(s):  
Muhammad Henfi Abdul Khoir ◽  
Ahmad Rimba Dirgantara

Tourism village destinations are built and opened not only to increase the income of local communities and local governments but also to provide a new experience for local and foreign tourists. The local government's program to develop tourism villages in Bandung Regency continues to be pursued, but current managers often lack the managerial ability to run tourism villages properly according to tourism village management standards. This raises the research question of how to develop a management and development module for village tourism that helps tourism village managers preserve the environment and elevate local potential based on local wisdom. This research uses the case study method, and the analysis is descriptive qualitative. The results recommend 12 learning-module topics for the management and development of tourism villages, which are expected to help tourism village managers run their villages well and sustainably: tourism village management, event management, customer satisfaction management, accommodation management, food and beverage management, handicraft management, marketing management, customer behavior, contemporary marketing, human resource management, conflict management, and tourism policy.


Stroke ◽  
2020 ◽  
Vol 51 (Suppl_1) ◽  
Author(s):  
Benjamin Zahneisen ◽  
Matus Straka ◽  
Shalini Bammer ◽  
Greg Albers ◽  
Roland Bammer

Introduction: Ruling out hemorrhage (stroke or traumatic) prior to administration of thrombolytics is critical for Code Strokes. A triage software that identifies hemorrhages on head CTs and alerts radiologists would help streamline patient care and increase diagnostic confidence and patient safety. ML approach: We trained a deep convolutional network with a hybrid 3D/2D architecture on unenhanced head CTs of 805 patients. Our training dataset comprised 348 positive hemorrhage cases (IPH=245, SAH=67, Sub/Epi-dural=70, IVH=83) (128 female) and 457 normal controls (217 female). Lesion outlines were drawn by experts and stored as binary masks that served as ground truth data during the training phase (random 80/20 train/test split). Diagnostic sensitivity and specificity were defined on a per-patient study level, i.e., a single binary decision for the presence/absence of a hemorrhage on a patient's CT scan. Final validation was performed in 380 patients (167 positive). Tool: The hemorrhage detection module was prototyped in Python/Keras. It runs on a local Linux server (4 CPUs, no GPUs) and is embedded in a larger image processing platform dedicated to stroke. Results: Processing time for a standard whole-brain CT study (3-5 mm slices) was around 2 min. Upon completion, an instant notification (by email and/or mobile app) was sent to users to alert them to the suspected presence of a hemorrhage. Relative to neuroradiologist gold standard reads, the algorithm's sensitivity and specificity were 90.4% and 92.5%, respectively (95% CI: 85%-94% for both). Detection of acute intracranial hemorrhage can thus be automated by deploying deep learning, yielding very high sensitivity and specificity compared to gold standard reads by a neuroradiologist. Volumes as small as 0.5 mL could be detected reliably in the test dataset. The software can be deployed in busy practices to prioritize worklists and alert health care professionals, speeding up therapeutic decision processes and interventions.
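For readers wanting to see how the per-patient metrics follow from confusion counts, here is a small sketch; the specific counts below are hypothetical, chosen only to be consistent with the reported 380-patient validation set and the published 90.4%/92.5% figures:

```python
def sens_spec(tp, fn, tn, fp):
    """Per-patient sensitivity and specificity from confusion counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for a 380-patient validation set
# (167 positive), chosen to roughly reproduce the reported values
tp, fn = 151, 16   # true positives, false negatives (151 + 16 = 167)
tn, fp = 197, 16   # true negatives, false positives (197 + 16 = 213)
sens, spec = sens_spec(tp, fn, tn, fp)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # 90.4%, 92.5%
```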


1987 ◽  
Vol 9 ◽  
pp. 253
Author(s):  
N. Young ◽  
I. Goodwin

Ground surveys of the ice sheet in Wilkes Land, Antarctica, have been made on oversnow traverses operating out of Casey. Data collected include surface elevation, accumulation rate, snow temperature, and physical characteristics of the snow cover. By the nature of the surveys, the data are mostly restricted to line profiles. In some regions, aerial surveys of surface topography have been made over a grid network. Satellite imagery and remote sensing are two means of extrapolating the results from measurements along lines to an areal presentation. They are also the only source of data over large areas of the continent. Landsat images in the visible and near-infrared wavelengths clearly depict many of the large- and small-scale features of the surface. The intensity of the reflected radiation varies with the aspect and magnitude of the surface slope to reveal the surface topography. The multi-channel nature of the Landsat data is exploited to distinguish between different surface types through their different spectral signatures, e.g. bare ice, glaze, snow, etc. Additional information on surface type can be gained at a coarser scale from other satellite-borne sensors such as ESMR, SMMR, etc. Textural enhancement of the Landsat images reveals the surface micro-relief. Features in the enhanced images are compared to ground-truth data from the traverse surveys to produce a classification of surface types across the images and to determine the magnitude of the surface topography and micro-relief observed. The images can then be used to monitor changes over time.


Land ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 319 ◽  
Author(s):  
Mohamed Ali Mohamed

In this study, a knowledge-based fuzzy classification method was used to classify possible soil-landforms in urban areas based on an analysis of morphometric parameters (terrain attributes) derived from digital elevation models (DEMs). A case study in the city area of Berlin was used to compare two DEMs of different resolution in terms of their potential to find a specific relationship between landforms and soil types, and their suitability for soil mapping. Almost all the topographic parameters were obtained from a high-resolution light detection and ranging (LiDAR) DEM (1 m) and an Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) DEM (30 m), and were used as thresholds for the classification of landforms in the selected study area, which covers about 39.40 km². The accuracy of both classifications was evaluated by comparing ground point samples, serving as ground truth data, with the classification results. The LiDAR-DEM-based classification showed promising results for classifying landforms into geomorphological (sub)categories in urban areas, indicated by an acceptable overall accuracy of 93%, while the ASTER-DEM-based classification showed an accuracy of only 70%. The coarser ASTER-DEM-based classification requires additional and more detailed information directly related to soil-forming factors to extract geomorphological parameters. The value of the LiDAR-DEM classification was particularly evident when classifying landforms with a narrow spatial extent, such as embankments and channel banks, and when determining the general accuracy of landform boundaries such as crests and flat lands. However, the LiDAR-DEM classification also showed that some landform categories, such as terraced land and steep embankments in other parts of the study area, received a large proportion of the misclassifications, owing to the increased distance from the major rivers and the complex nature of these landforms. In contrast, the results of the ASTER-DEM-based classification showed that the ASTER-DEM cannot deal with small-scale spatial variation of soils and landforms, given the increasing human impact on landscapes in urban areas. The approach of extracting terrain parameters from the LiDAR-DEM and using them to classify landforms has shown that it can support soil surveys, which otherwise require a lot of time and resources for traditional soil mapping.
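As a rough illustration of how a terrain attribute can be derived from a DEM grid and thresholded into landform classes, consider the NumPy sketch below; the slope cut-offs, class names, and toy 3×3 grid are invented for the example and are not the paper's fuzzy classification rules:

```python
import numpy as np

def slope_degrees(dem, cellsize=1.0):
    """Slope in degrees from a DEM grid, via central finite differences."""
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def classify(slope):
    """Threshold slope into coarse landform classes (illustrative cut-offs)."""
    classes = np.full(slope.shape, "flat", dtype=object)
    classes[slope >= 2] = "gentle slope"
    classes[slope >= 15] = "embankment"
    return classes

# Toy 3x3 DEM (elevations in metres, 1 m cells)
dem = np.array([[10.0, 10.0, 10.0],
                [10.0, 10.5, 12.0],
                [10.0, 11.0, 14.0]])
s = slope_degrees(dem, cellsize=1.0)
print(classify(s))
```

In a real workflow the thresholds would come from expert knowledge (or fuzzy membership functions) and the grid from the LiDAR or ASTER DEM.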


Author(s):  
Zhongxiang Wang ◽  
Masoud Hamedi ◽  
Stanley Young

Crowdsourced GPS probe data have been gaining popularity in recent years as a source of real-time traffic information for driver-facing applications, such as travel times on changeable-message signs and incident detection, and for transportation systems management and operations. Efforts have been made to evaluate the quality of such data from different perspectives. Although crowdsourced data are already in widespread use in many states, particularly in the high-traffic areas of the Eastern Seaboard, concerns about latency (the time between traffic being perturbed by an incident and the disturbance being reflected in the outsourced data feed) have escalated in importance. Latency is critical for the accuracy of real-time operations, emergency response, and traveler information systems. This paper offers a methodology for measuring probe data latency relative to a selected reference source. Although Bluetooth reidentification data are used as the reference source, the methodology can be applied to any other ground truth data source of choice. The core of the methodology is a maximum pattern matching algorithm that works with three fitness objectives. To test the methodology, sample field reference data were collected on multiple freeway segments over a 2-week period using portable Bluetooth sensors as ground truth. Equivalent GPS probe data were obtained from a private vendor, and their latency was evaluated. Latency at different times of day, the impact of the road segmentation scheme on latency, and the sensitivity of the latency to both speed-slowdown and recovery-from-slowdown episodes are also discussed.
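The paper's maximum pattern matching algorithm with three fitness objectives is not reproduced here. A simplified sketch of the underlying idea, estimating latency as the time shift that best aligns a probe speed series with a Bluetooth reference series, might look like this (synthetic data, a single least-squares fitness only):

```python
import numpy as np

def estimate_latency(reference, probe, max_shift=20):
    """Estimate probe latency (in samples) as the shift of the probe series
    that best matches the reference series in the least-squares sense."""
    best_shift, best_err = 0, np.inf
    n = len(reference)
    for s in range(max_shift + 1):
        err = np.mean((reference[:n - s] - probe[s:]) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# Synthetic example: the probe feed reports the same slowdown 3 samples late
t = np.arange(100)
reference = 60 - 30 * np.exp(-0.5 * ((t - 40) / 5.0) ** 2)  # dip at t = 40
probe = np.roll(reference, 3)                                # delayed copy
print(estimate_latency(reference, probe))  # 3
```

Multiplying the estimated shift by the sampling interval gives the latency in minutes.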


2012 ◽  
Vol 18 (1) ◽  
pp. 77-85
Author(s):  
Shinya Tanaka ◽  
Tomoaki Takahashi ◽  
Hideki Saito ◽  
Yoshio Awaya ◽  
Toshiro Iehara ◽  
...  

2018 ◽  
Vol 30 (3) ◽  
pp. 281-291 ◽  
Author(s):  
Roozbeh Mohammadi ◽  
Amir Golroo ◽  
Mahdieh Hasani

In populated cities with heavy traffic congestion, traffic information can play a key role in choosing the fastest route between origin and destination, thus saving travel time. Several research studies have investigated the effect of traffic information on travel time; however, little attention has been given to how this effect varies with trip distance. This paper aims to investigate the relation between real-time traffic information dissemination and travel time reduction for medium-distance trips. To examine this relation, a methodology is applied that compares the travel times of probe vehicles travelling between an origin and a destination with and without access to traffic information. The methodology is tested on a real case study in the metropolitan city of Tehran, the capital of Iran. In this case study, there is no significant statistical evidence that traffic information reduces travel time on medium-distance trips.
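A comparison of this kind typically reduces to a two-sample test on travel times. As an illustrative sketch (the travel-time values below are fabricated, and the paper's actual statistical test is not specified here), Welch's t statistic can be computed with the standard library alone:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = variance(a), variance(b)          # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))  # standard error of the difference
    return (mean(a) - mean(b)) / se

# Hypothetical travel times (minutes) for probe vehicles on a medium-distance
# trip, with and without real-time traffic information
with_info    = [32, 35, 31, 34, 33, 36, 30, 34]
without_info = [33, 36, 32, 35, 34, 35, 31, 33]
t = welch_t(with_info, without_info)
```

Here |t| ≈ 0.54, well below the magnitude of roughly 2 usually needed for significance at the 5% level, mirroring the study's null finding.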


Author(s):  
Yu-Che Chen ◽  
Kurt Thurmaier

This chapter provides a case study of building a knowledge management system for collaboration between local governments. It describes the management and development of such a system including Web sites and online search and submission of collaborative agreements. It also stresses the importance of coordination and management support for a multi-party development team. Data quality assurance should also be an integral part of the data collection and migration from a paper-based to an electronic system. The authors hope to shed light on the interrelated components of building a knowledge management system on collaboration. Moreover, the findings of the case study inform the practice of managing a multi-party development team.


2020 ◽  
Author(s):  
Lennart Schmidt ◽  
Hannes Mollenhauer ◽  
Corinna Rebmann ◽  
David Schäfer ◽  
Antje Claussnitzer ◽  
...  

With more and more data being gathered from environmental sensor networks, the importance of automated quality-control (QC) routines that provide usable data in near-real time is becoming increasingly apparent. Machine-learning (ML) algorithms exhibit high potential in this respect, as they can exploit the spatio-temporal relations among multiple sensors to identify anomalies while allowing for non-linear functional relations in the data. In this study, we evaluate the potential of ML for automated QC on two spatio-temporal datasets at different spatial scales: the first is a dataset of atmospheric variables at 53 stations across Northern Germany; the second contains time series of soil moisture and temperature from 40 sensors at a small-scale measurement plot.

Furthermore, we investigate strategies to tackle three challenges that commonly arise when applying ML for QC: 1) as sensors might drop out, the ML models have to be robust against missing values in the input data, which we address by comparing different data imputation methods coupled with a binary representation of whether a value is missing; 2) quality flags that mark erroneous data points, needed as ground truth for model training, might not be available; and 3) there is no guarantee that the system under study is stationary, which might render the outputs of a trained model useless in the future. To address 2) and 3), we frame the problem both as a supervised and an unsupervised learning problem. Unsupervised ML models can be beneficial here, as they do not require ground truth data and can thus be retrained more easily should the system undergo significant changes. In this presentation, we discuss the performance, advantages, and drawbacks of the proposed strategies to tackle the aforementioned challenges, providing a starting point for researchers in the largely untouched field of ML application for automated quality control of environmental sensor data.
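The first strategy, imputation coupled with a binary missing-value indicator, can be sketched as follows; the `impute_with_mask` helper and the choice of mean imputation are illustrative, not the authors' implementation:

```python
import numpy as np

def impute_with_mask(series):
    """Mean-impute missing values (NaN) and append a binary missingness
    indicator, so a downstream model can learn that a value was filled in."""
    mask = np.isnan(series).astype(float)              # 1.0 where sensor dropped out
    filled = np.where(mask == 1, np.nanmean(series), series)
    return np.column_stack([filled, mask])             # shape (n, 2)

# Toy soil-moisture series with two sensor dropouts
x = np.array([12.1, np.nan, 11.8, 12.4, np.nan])
features = impute_with_mask(x)
print(features.shape)  # (5, 2)
```

The second column lets a supervised or unsupervised model distinguish genuine readings from imputed ones rather than treating them alike.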


2021 ◽  
pp. 000276422110216
Author(s):  
Scott Althaus ◽  
Buddy Peyton ◽  
Dan Shalmon

Understanding how useful any particular set of event data might be for conflict research requires appropriate methods for assessing validity when ground truth data about the population of interest do not exist. We argue that a total error framework can provide better leverage on these critical questions than previous methods have been able to deliver. We first define a total event data error approach for identifying 19 types of error that can affect the validity of event data. We then address the challenge of applying a total error framework when authoritative ground truth about the actual distribution of relevant events is lacking. We argue that carefully constructed gold standard datasets can effectively benchmark validity problems even in the absence of ground truth data about event populations. To illustrate the limitations of conventional strategies for validating event data, we present a case study of Boko Haram activity in Nigeria over a 3-month offensive in 2015 that compares events generated by six prominent event extraction pipelines—ACLED, SCAD, ICEWS, GDELT, PETRARCH, and the Cline Center’s SPEED project. We conclude that conventional ways of assessing validity in event data using only published datasets offer little insight into potential sources of error or bias. Finally, we illustrate the benefits of validating event data using a total error approach by showing how the gold standard approach used to validate SPEED data offers a clear and robust method for detecting and evaluating the severity of temporal errors in event data.

