Maine’s Approach to Data Warehousing for State Departments of Transportation

2000 ◽  
Vol 1719 (1) ◽  
pp. 227-232
Author(s):  
Paul O’Packi ◽  
Rick Dubois ◽  
Nancy Armentrout ◽  
Steve Bower

Most transportation agencies are faced with changing needs, challenges, and limited resources. State departments of transportation need tools to address these issues. One such solution combines data warehouse and geographic information systems (GIS) technology to allow easy access to reliable information for systemwide query, analysis, and reporting. To meet these challenges, to be more responsive, and to provide staff and managers with a better platform with which to deliver integrated transportation information to both internal and external customers, the Maine Department of Transportation (MeDOT) has turned to integrating data warehousing and GIS technologies. A detailed overview of MeDOT’s Transportation Information for Decision Enhancement (TIDE), a robust GIS-linked data warehouse, is given. A range of inherent technical issues involved in a project of this nature is discussed. The role that TIDE has played in breaking down the functional boundaries that have existed on both informational and technical fronts and how this robust tool facilitates the growth of agency integration also are discussed.

Author(s):  
Bekir Bartin ◽  
Kaan Ozbay ◽  
Matthew D. Maggio ◽  
Hao Wang

Faced with a growing number of work zones, transportation agencies are being challenged to effectively manage the impacts of these zones, alleviate congestion, and maintain the safety of motorists and workers without disrupting project schedules. Coordinating work zones has already been practiced by various state departments of transportation and transportation agencies, yet there are no universal department of transportation policies that address how agencies should coordinate or consolidate projects. In addition, only a few states utilize computer tools specific to regional or corridor-based work zone coordination. State departments of transportation mostly coordinate significant and long-term projects. However, the majority of roadway projects include minor repair, roadway maintenance, bridge maintenance, surveying, and landscape and utility work that require relatively short-term work zones. The Work Zone Coordination Software tool was developed to provide the New Jersey Department of Transportation with an easy-to-use tool to evaluate the feasibility and effectiveness of coordinating short- and long-term work zones and to measure the benefits. This online tool is implemented with a web-based user interface. It integrates all scheduled and active construction projects, identifies conflicts between work zone projects, and estimates the benefits of conflict mitigation. The Work Zone Coordination Software tool works with the New Jersey work zone database by automatically importing data to provide up-to-date information to its users. However, the tool is built on a flexible framework that allows the integration of any work zone database provided that it includes all the required information.
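The core of such a coordination check is interval overlap in space and time. Below is a minimal sketch of how conflicts between scheduled work zones might be detected; the field names and the overlap rule are illustrative assumptions, not the actual schema or logic of the Work Zone Coordination Software:

```python
from dataclasses import dataclass

@dataclass
class WorkZone:
    project_id: str
    route: str
    mile_start: float
    mile_end: float
    day_start: int   # day offset in a shared construction calendar
    day_end: int

def _overlaps(a_lo, a_hi, b_lo, b_hi):
    """True when two closed intervals intersect."""
    return a_lo <= b_hi and b_lo <= a_hi

def find_conflicts(zones):
    """Return pairs of work zones on the same route that overlap
    in both milepost range and schedule."""
    conflicts = []
    for i, a in enumerate(zones):
        for b in zones[i + 1:]:
            if (a.route == b.route
                    and _overlaps(a.mile_start, a.mile_end,
                                  b.mile_start, b.mile_end)
                    and _overlaps(a.day_start, a.day_end,
                                  b.day_start, b.day_end)):
                conflicts.append((a.project_id, b.project_id))
    return conflicts
```

A real tool would pull these records from the work zone database and also estimate mitigation benefits; this sketch covers only the conflict-identification step.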


Author(s):  
Reinhard Jung ◽  
Robert Winter

Project justification is regarded as one of the major methodological deficits in Data Warehousing practice. The special nature of Data Warehousing benefits and the large portion of infrastructure-related activities are cited as reasons for applying inappropriate methods, performing incomplete evaluations, or even omitting justifications entirely. In this chapter, the economic justification of Data Warehousing projects is analyzed, and first results from a large academia-industry collaboration project in the field of non-technical issues of Data Warehousing are presented. As conceptual foundations, the role of the Data Warehouse system in corporate application architectures is analyzed, and the specific properties of Data Warehousing projects are discussed. Based on an applicability analysis of traditional approaches to economic IT project justification, basic steps and responsibilities for the justification of Data Warehousing projects are derived.


Author(s):  
Karla Diaz Corro ◽  
Taslima Akter ◽  
Sarah Hernandez

Increased demand for truck parking resulting from hours-of-service regulations and growing truck volumes, coupled with limited supply of parking facilities, is concerning for transportation agencies and industry stakeholders. To monitor truck parking congestion, the Arkansas Department of Transportation (ARDOT) conducts an annual observational survey of truck parking facilities. Because of its methodology, this survey cannot capture patterns of diurnal and seasonal use, arrival times, and duration. Truck Global Positioning System (GPS) data provide an apt alternative for monitoring parking facility utilization. The issue is that most truck GPS datasets represent a sample of the truck population, and the representativeness of that sample may differ by application. Currently no method exists to accurately expand a GPS sample to reflect population-level truck parking facility utilization. This paper leverages the ARDOT study to estimate GPS “expansion factors” by parking facility type and defines two expansion factors: (1) the ratio of trucks parked derived from the GPS sample to those observed during the Overnight Study, and (2) the ratio of truck volume derived from the GPS sample to total truck volume measured on the nearest roadway. Varied expansion factors are found for public, private commercial (e.g., restaurant, retail store, etc.), and private truck stop facilities. Comparatively, the expansion factor based on roadway truck volumes was at least twice as high as that derived from the Overnight Study. Considering this, the method used to determine expansion factors has significant implications for the estimated magnitudes of parking facility congestion, and thus will have consequences for investment prioritization.
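Both expansion factors are simple ratios of a GPS-derived count to a reference count. A small sketch with made-up numbers (the function names and figures are illustrative assumptions, not ARDOT's):

```python
def expansion_factor(gps_count, reference_count):
    """Ratio of a GPS-derived count to a reference count, following the
    paper's two definitions: the reference is either trucks observed in
    the Overnight Study (factor 1) or total roadway truck volume
    (factor 2)."""
    return gps_count / reference_count

def expand(gps_count, factor):
    """Scale a GPS-derived count to a population-level estimate by
    dividing by the sample's expansion factor."""
    return gps_count / factor

# Hypothetical facility: 20 trucks in the GPS sample vs. 100 observed.
factor = expansion_factor(20, 100)      # 0.2 -> GPS sees 1 in 5 trucks
estimate = expand(12, factor)           # 12 GPS-detected trucks -> ~60
```

Because the roadway-volume-based factor was found to be at least twice the Overnight-Study-based factor, the same GPS count yields population estimates differing by a factor of two or more depending on which reference is used.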


2003 ◽  
Vol 12 (03) ◽  
pp. 325-363 ◽  
Author(s):  
Joseph Fong ◽  
Qing Li ◽  
Shi-Ming Huang

A data warehouse contains a vast amount of data to support complex queries of various decision support systems (DSSs). It needs to store materialized views of data, which must be available consistently and instantaneously. Using a frame metadata model, this paper presents an architecture for universal data warehousing across different data models. The frame metadata model represents the metadata of a data warehouse; it structures an application domain into classes and integrates the schemas of heterogeneous databases by capturing their semantics. A star schema is derived from user requirements based on the integrated schema and catalogued in the metadata, which stores the schemas of the relational database (RDB) and object-oriented database (OODB). Data materialization between RDB and OODB is achieved by unloading the source database into a sequential file and reloading it into the target database; through this process, an object-relational view can be defined so that users can obtain the same warehouse view in different data models simultaneously. We describe our procedures for building the relational view of the star schema by multidimensional SQL query, and the object-oriented view of the data warehouse by online analytical processing (OLAP) through method calls derived from the integrated schema. To validate our work, an application prototype system has been developed in a product sales data warehousing domain based on this approach.
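To make the star schema idea concrete, here is a minimal sketch of the relational side of a product sales warehouse, using SQLite and illustrative table and column names (not the paper's actual schema or its RDB/OODB materialization machinery):

```python
import sqlite3

# One fact table keyed to two dimension tables: the classic star layout.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY,
                          name TEXT, category TEXT);
CREATE TABLE dim_store   (store_id INTEGER PRIMARY KEY, city TEXT);
CREATE TABLE fact_sales  (product_id INTEGER, store_id INTEGER,
                          amount REAL);
INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'),
                               (2, 'Gadget', 'Hardware');
INSERT INTO dim_store   VALUES (1, 'Augusta'), (2, 'Portland');
INSERT INTO fact_sales  VALUES (1, 1, 10.0), (2, 1, 5.0), (1, 2, 7.5);
""")

def sales_by(dimension_column):
    """Roll the fact table up along one store-dimension attribute,
    i.e., a one-dimensional aggregate over the star schema."""
    query = f"""
        SELECT d.{dimension_column}, SUM(f.amount)
        FROM fact_sales f
        JOIN dim_store d ON f.store_id = d.store_id
        GROUP BY d.{dimension_column}
        ORDER BY d.{dimension_column}
    """
    return conn.execute(query).fetchall()
```

The paper's object-oriented view would expose the same aggregate through a method call on warehouse classes rather than through SQL.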


2016 ◽  
Vol 12 (3) ◽  
pp. 32-50
Author(s):  
Xiufeng Liu ◽  
Nadeem Iftikhar ◽  
Huan Huo ◽  
Per Sieverts Nielsen

In data warehousing, data from source systems are populated into a central data warehouse (DW) through extraction, transformation, and loading (ETL). The standard ETL approach usually uses sequential jobs to process data with dependencies, such as dimension and fact data. It is a non-trivial task to process so-called early-/late-arriving data, which arrive out of order. This paper proposes a two-level data staging area method to optimize ETL. The proposed method is an all-in-one solution that supports processing different types of data from operational systems, including early-/late-arriving data and fast-/slowly-changing data. The additional staging area decouples the loading process from data extraction and transformation, which improves ETL flexibility and minimizes intervention in the data warehouse. This paper evaluates the proposed method empirically and shows that it is more efficient and less intrusive than the standard ETL method.
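The two-level staging idea can be illustrated with a toy loader that parks early-arriving facts until their dimension rows appear, instead of failing the load or touching the warehouse; all names and structure here are assumptions, not the authors' implementation:

```python
class TwoLevelStaging:
    def __init__(self):
        self.dimensions = {}     # business key -> surrogate key
        self.pending_facts = []  # second staging level: early-arriving facts
        self.fact_table = []     # warehouse fact table

    def load_dimension(self, business_key):
        """Load a dimension row, then retry any facts waiting on it."""
        self.dimensions.setdefault(business_key, len(self.dimensions) + 1)
        self._retry_pending()

    def load_fact(self, business_key, measure):
        """Load a fact if its dimension key is known; otherwise park it
        in the staging area rather than aborting the job."""
        if business_key in self.dimensions:
            self.fact_table.append((self.dimensions[business_key], measure))
        else:
            self.pending_facts.append((business_key, measure))

    def _retry_pending(self):
        still_pending = []
        for key, measure in self.pending_facts:
            if key in self.dimensions:
                self.fact_table.append((self.dimensions[key], measure))
            else:
                still_pending.append((key, measure))
        self.pending_facts = still_pending
```

The key property is decoupling: a fact arriving before its dimension never blocks the load and never reaches the warehouse half-resolved.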


2001 ◽  
Vol 10 (03) ◽  
pp. 377-397 ◽  
Author(s):  
LUCA CABIBBO ◽  
RICCARDO TORLONE

We report on the design of a novel architecture for data warehousing based on the introduction of an explicit "logical" layer in the traditional data warehousing framework. This layer serves to guarantee complete independence of OLAP applications from the physical storage structure of the data warehouse and thus allows users and applications to manipulate multidimensional data while ignoring implementation details. For example, it makes possible the modification of the data warehouse organization (e.g., MOLAP or ROLAP implementation, star scheme or snowflake scheme structure) without influencing the high-level description of multidimensional data and the programs that use the data. It also supports the integration of multidimensional data stored in heterogeneous OLAP servers. We propose a simple data model for multidimensional databases as the reference for the logical layer. The model provides an abstract formalism to describe the basic concepts that can be found in any OLAP system (fact, dimension, level of aggregation, and measure). We show that databases in this model can be implemented in both relational and multidimensional storage systems, and that the model can be profitably used in OLAP applications as a front end. We finally describe the design of a practical system that supports the above logical architecture; this system is used to show in practice how the architecture we propose can hide implementation details and provide support for interoperability between different and possibly heterogeneous data warehouse applications.
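The independence the logical layer provides can be sketched as an abstract cube interface with interchangeable "ROLAP"-style and "MOLAP"-style backends; the class names and structure are illustrative assumptions, not the system described in the paper:

```python
from abc import ABC, abstractmethod

class Cube(ABC):
    """Logical layer: facts addressed by dimension coordinates,
    regardless of how cells are physically stored."""
    @abstractmethod
    def total(self, **coords):
        """Sum the measure over all cells matching the coordinates."""

class RolapCube(Cube):
    """Relational-style storage: facts kept as (coords, measure) rows."""
    def __init__(self, rows):
        self.rows = rows
    def total(self, **coords):
        return sum(m for c, m in self.rows
                   if all(c.get(k) == v for k, v in coords.items()))

class MolapCube(Cube):
    """Array-style storage: facts keyed by coordinate tuples."""
    def __init__(self, dims, cells):
        self.dims = dims      # ordered dimension names
        self.cells = cells    # coordinate tuple -> measure
    def total(self, **coords):
        return sum(m for key, m in self.cells.items()
                   if all(key[self.dims.index(k)] == v
                          for k, v in coords.items()))

# The same logical query runs unchanged against either backend.
rolap = RolapCube([({"city": "Rome", "year": 2000}, 5),
                   ({"city": "Rome", "year": 2001}, 7),
                   ({"city": "Oslo", "year": 2000}, 3)])
molap = MolapCube(["city", "year"],
                  {("Rome", 2000): 5, ("Rome", 2001): 7, ("Oslo", 2000): 3})
```

Swapping one backend for the other changes no application code, which is exactly the independence the logical layer is meant to guarantee.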


Organization ◽  
2018 ◽  
Vol 26 (4) ◽  
pp. 537-552 ◽  
Author(s):  
Helene Ratner ◽  
Christopher Gad

Organization is increasingly entwined with databased governance infrastructures. Developing the idea of ‘infrastructure as partial connection’ with inspiration from Marilyn Strathern and Science and Technology Studies, this article proposes that database infrastructures are intrinsic to processes of organizing intra- and inter-organizational relations. Seeing infrastructure as partial connection brings our attention to the ontological experimentation with knowing organizations through work of establishing and cutting relations. We illustrate this claim through a multi-sited ethnographic study of ‘The Data Warehouse’. ‘The Data Warehouse’ is an important infrastructural component in the current reorganization of Danish educational governance which makes schools’ performance public and comparable. We suggest that ‘The Data Warehouse’ materializes different, but overlapping, infrastructural experiments with governing education at different organizational sites enacting a governmental hierarchy. Each site can be seen as belonging to the same governance infrastructure but also as constituting ‘centres’ in its own right. ‘The Data Warehouse’ participates in the always-unfinished business of organizational world making and is made to (partially) relate to different organizational concerns and practices. This argument has implications for how we analyze the organizational effects of pervasive databased governance infrastructures and invites exploring their multiple organizing effects.


In standard ETL (Extract, Transform, Load), data warehouse refreshment must be performed outside of peak hours, which implies that operational processing and analysis are halted during the refresh window. As a consequence, the data warehouse does not reflect the latest operational transactions. This issue is known as data latency. Near real-time data warehousing is employed as a remedy for this issue: it updates the data warehouse in near real time, shortly after data appear at the data source, so data latency can be reduced. However, near real-time data warehousing introduces issues that were not present in traditional ETL. This paper aims to communicate the issues and available alternatives at every stage of near real-time data warehousing, based on a literature review and additional study focused on near real-time data warehousing.
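One common approach to near-real-time refresh is micro-batch loading: small change batches are pushed to the warehouse as soon as they accumulate, rather than waiting for a nightly window. A minimal sketch, with all names being illustrative assumptions:

```python
class NearRealTimeLoader:
    """Toy micro-batch loader: source changes are staged in a small
    buffer and flushed to the warehouse as soon as the batch fills,
    shrinking data latency from hours to roughly one batch interval."""

    def __init__(self, batch_size=3):
        self.batch_size = batch_size
        self.buffer = []       # staged changes not yet loaded
        self.warehouse = []    # stand-in for the target table

    def on_source_change(self, row):
        """Called whenever the operational source emits a change."""
        self.buffer.append(row)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Apply the staged micro-batch to the warehouse."""
        self.warehouse.extend(self.buffer)
        self.buffer.clear()
```

The batch size is the latency/overhead knob: a batch of one approaches true real time at the cost of many small loads, while larger batches amortize load overhead but let data age in the buffer.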

