The Development of a Heterogeneous MP Data Model Based on the Ontological Approach

Symmetry ◽ 2021 ◽ Vol 13 (5) ◽ pp. 813
Author(s): Sergey Porshnev, Andrey Borodin, Olga Ponomareva, Sergey Mirvoda, Olga Chernova

The article discusses approaches that provide symmetric access for all industrial production services to the enterprise's business-process data by building a single warehouse of heterogeneous metallurgical production data. The warehouse is part of an automated statistical quality control system for the products of a metallurgical enterprise. The article describes an ontological storage model for data coming from various sources of information in the production process. The concept of a "unit of metallurgical production" is introduced as the connecting component of the entire production life cycle. The authors propose an ontological model of the production process in terms of the information flows formed in the enterprise at each stage of production. Based on the constructed ontological model, the structure for recording information in the heterogeneous data warehouse is justified and defined. The heterogeneous data warehouse forms a single information space for the enterprise, which serves as the basis for analysis throughout the production and decision-making process, for example, a timely response to the causes of deviations from the specified physical and chemical properties of the finished product.
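To make the record structure concrete, the following is a minimal Python sketch, with entirely hypothetical class and field names (they are not taken from the article), of how stage-level measurements from heterogeneous sources could be attached to a single production-unit key spanning the life cycle, including a simple check against a physical and chemical specification:

```python
# Hypothetical sketch of a "unit of production" record; names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class StageRecord:
    stage: str                      # e.g. "melting", "casting", "rolling"
    source_system: str              # the information system that produced the data
    timestamp: datetime
    measurements: dict[str, float]  # observed physical/chemical properties


@dataclass
class ProductionUnit:
    unit_id: str                    # the connecting key across the whole life cycle
    grade: str
    stages: list[StageRecord] = field(default_factory=list)

    def deviations(self, spec: dict[str, tuple[float, float]]) -> dict[str, float]:
        """Return measured values that fall outside the given (min, max) specification."""
        out: dict[str, float] = {}
        for rec in self.stages:
            for name, value in rec.measurements.items():
                if name in spec and not (spec[name][0] <= value <= spec[name][1]):
                    out[name] = value
        return out
```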

2001 ◽ Vol 10 (03) ◽ pp. 377-397
Author(s): Luca Cabibbo, Riccardo Torlone

We report on the design of a novel architecture for data warehousing based on the introduction of an explicit "logical" layer into the traditional data warehousing framework. This layer guarantees complete independence of OLAP applications from the physical storage structure of the data warehouse and thus allows users and applications to manipulate multidimensional data while ignoring implementation details. For example, it makes it possible to modify the data warehouse organization (e.g., a MOLAP or ROLAP implementation, a star or snowflake schema structure) without affecting the high-level description of multidimensional data or the programs that use the data. It also supports the integration of multidimensional data stored in heterogeneous OLAP servers. We propose MD, a simple data model for multidimensional databases, as the reference for the logical layer. MD provides an abstract formalism for describing the basic concepts that can be found in any OLAP system (fact, dimension, level of aggregation, and measure). We show that MD databases can be implemented in both relational and multidimensional storage systems. We also show that MD can be profitably used as a front end in OLAP applications. Finally, we describe the design of a practical system that supports the above logical architecture; this system is used to show in practice how the proposed architecture can hide implementation details and provide support for interoperability between different and possibly heterogeneous data warehouse applications.
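As an informal illustration of the four concepts the logical layer abstracts over (fact, dimension, level of aggregation, and measure), here is a small Python sketch of a toy logical schema; it is only a rough analogue under assumed names, not the formalism defined in the paper:

```python
# Toy logical multidimensional schema; the same description could be mapped to a
# ROLAP star/snowflake schema or a MOLAP cube without changing client queries.
from dataclasses import dataclass


@dataclass(frozen=True)
class Level:
    name: str                       # e.g. "day" rolls up to "month" and "year"


@dataclass(frozen=True)
class Dimension:
    name: str
    levels: tuple[Level, ...]       # ordered from finest to coarsest granularity


@dataclass(frozen=True)
class Fact:
    name: str
    dimensions: tuple[Dimension, ...]
    measures: tuple[str, ...]


time = Dimension("time", (Level("day"), Level("month"), Level("year")))
store = Dimension("store", (Level("store"), Level("city"), Level("region")))
sales = Fact("sales", (time, store), ("quantity", "revenue"))
```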


TEM Journal ◽ 2021 ◽ pp. 1336-1347
Author(s): Peter Malega, Naqib Daneshjo, Vladimír Rudy, Peter Drábik

The goal of this paper is to find suitable solutions for process optimization using the PDCA methodology and quality management tools. The study was carried out in a company that assembles key sets, locks, and handles. It analyzes selected assembly processes and their critical points and identifies the root causes of problems that might occur during assembly. For this purpose, different quality methods and tools are used. The paper also defines corrective actions to prevent the recurrence of the identified problems, describes their implementation in the production process, and addresses its standardization.


Author(s): Alireza Pourshahid, Liam Peyton, Sepideh Ghanavati, Daniel Amyot, Pengfei Chen, ...

Validation should be done in the context of understanding how a business process is intended to contribute to the business strategies of an organization. Validation can take place along a variety of dimensions including legal compliance, financial cost, customer value, and service quality. A business process modeling tool cannot anticipate all the ways in which a business process might need to be validated. However, it can provide a framework for extending model elements to represent context for a business process. It can also support information exchange to facilitate validation with other tools and systems. This chapter demonstrates a model-based approach to validation using a hospital approval process for accessing patient data in a data warehouse. An extensible meta-model, a flexible data exchange layer, and linkage between business processes and enterprise context are shown to be the critical elements in model-based business process validation.
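A minimal sketch of the extensibility idea, using hypothetical class and attribute names: each process element carries an open-ended context dictionary (legal references, cost figures, quality targets) that can be serialized so external tools can validate the model along whichever dimension they cover:

```python
# Hypothetical extensible meta-model element plus a simple exchange format.
import json
from dataclasses import dataclass, field


@dataclass
class ProcessElement:
    element_id: str
    kind: str                                   # e.g. "task", "decision", "data-access"
    context: dict[str, object] = field(default_factory=dict)  # extension point

    def extend(self, dimension: str, value: object) -> None:
        """Attach validation context, e.g. a legal reference or a cost figure."""
        self.context[dimension] = value


def export_for_validation(elements: list[ProcessElement]) -> str:
    """Serialize the model so other tools can validate it along any dimension."""
    return json.dumps([e.__dict__ for e in elements], indent=2, default=str)


approve = ProcessElement("A1", "data-access")
approve.extend("legal_compliance", "privacy regulation, research-approval clause")
approve.extend("service_quality", {"max_turnaround_days": 30})
print(export_for_validation([approve]))
```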


Author(s): Valéria Rocha Da Costa, José Márcio Diniz Filho

Process management, innovation, technology, and knowledge management are tools for achieving better results and creating value for an organization, and specifically for a law firm. This is why organizational processes, or business processes, have become fundamental structures for managing modern organizations and maintaining their competitiveness. As a result, it was possible to establish that the use of process management techniques and tools is decisive for the rational use of processes, increased productivity, and better customer service, and an ideal conceptual model is presented.


2008 ◽ pp. 3116-3141
Author(s): Shi-Ming Huang, David C. Yen, Hsiang-Yuan Hsueh

The materialized view approach is widely adopted in implementations of data warehouse systems for efficiency purposes. When constructing a materialized data warehouse system, most developers and users still face managerial problems, particularly in the area of view resource maintenance. Resource redundancy and data inconsistency among materialized views in a data warehouse system are problems that many developers and users struggle with. In this article, a space-efficient protocol for materialized view maintenance with a global data view is proposed for data warehouses with embedded proxies. In the protocol set, multilevel proxy-based protocols with a data compensating mechanism are provided to ensure the consistency and uniqueness of materialized data among data sources and materialized views. The authors also provide a set of evaluation experiments and derivations to verify the feasibility of the proposed protocols and mechanisms. With such protocols as proxy services, the performance and space utilization of the materialized view approach are improved. Furthermore, consistency between materialized data warehouses and heterogeneous data sources can be properly maintained by applying a dynamic compensating and synchronization mechanism. The trade-off between efficiency, storage consumption, and data validity for view maintenance tasks can be properly balanced.
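The following toy Python sketch, which is not the authors' protocol, illustrates the general idea behind proxy-based view maintenance: the proxy folds each source delta into a materialized aggregate exactly once and applies a compensation when a source transaction is later rolled back:

```python
# Toy proxy for maintaining a materialized SUM aggregate from source deltas.
from collections import defaultdict


class ViewProxy:
    def __init__(self) -> None:
        self.view: dict[str, float] = defaultdict(float)  # materialized SUM(amount) per key
        self.applied: set[str] = set()                    # delta ids already folded in

    def apply_delta(self, delta_id: str, key: str, amount: float) -> None:
        """Fold a source change into the view; duplicate deliveries are ignored."""
        if delta_id in self.applied:
            return
        self.view[key] += amount
        self.applied.add(delta_id)

    def compensate(self, delta_id: str, key: str, amount: float) -> None:
        """Undo a previously applied delta whose source transaction was rolled back."""
        if delta_id in self.applied:
            self.view[key] -= amount
            self.applied.discard(delta_id)
```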


Author(s): Marko Banek, Boris Vrdoljak, A Min Tjoa, Zoran Skocir

A federated data warehouse is a logical integration of data warehouses that is applicable when physical integration is impossible due to privacy policies or legal restrictions. In healthcare systems, federated data warehouses are the most feasible source of data for deriving evidence-based medicine guidelines from the data of different participating institutions. In order to enable the translation of queries in a federated approach, the schemas of the federated warehouse and the local warehouses must be matched. In this paper we present a procedure that enables the matching process for schema structures specific to the multidimensional model of data warehouses: facts, measures, dimensions, aggregation levels, and dimensional attributes. Similarities between warehouse-specific structures are computed using linguistic and structural comparison, and the calculated values are used to create the necessary mappings.
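A simplified sketch of the kind of combined score such a matching step could compute; the string metric, the Jaccard overlap of level names, and the weighting are illustrative choices, not the procedure from the paper:

```python
# Combine a linguistic and a structural similarity for two dimension schemas.
from difflib import SequenceMatcher


def linguistic_sim(a: str, b: str) -> float:
    """String similarity between element names (a stand-in for lexical matching)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def structural_sim(levels_a: list[str], levels_b: list[str]) -> float:
    """Jaccard overlap of aggregation-level names as a crude structural score."""
    sa, sb = set(levels_a), set(levels_b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0


def combined_sim(name_a: str, levels_a: list[str],
                 name_b: str, levels_b: list[str], w: float = 0.5) -> float:
    return w * linguistic_sim(name_a, name_b) + (1 - w) * structural_sim(levels_a, levels_b)


# e.g. matching a time dimension across two hospital warehouses
print(combined_sim("Time", ["day", "month", "year"],
                   "AdmissionDate", ["day", "week", "year"]))
```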


2011 ◽ pp. 1625-1632
Author(s): Volker Derballa, Key Pousttchi

IT support for knowledge management (KM) is a widely discussed issue. Whereas an overemphasis on technology is often criticized, the general consensus is that a well-balanced combination of technical and social approaches can be a rewarding departure (Alavi & Leidner, 1999). The usage of knowledge management systems (KMSs), i.e., information systems including, for example, data warehouse techniques and artificial intelligence tools, is seen as a factor that can beneficially support different KM processes (Frank, 2001; Wiig, 1995). Because an increasingly large proportion of work is no longer conducted at stationary workplaces, it becomes necessary to make KMSs available to these mobile workers (Rao, 2002; Sherman, 1999). Considering the different technological infrastructures of the stationary and the mobile context, a KMS that is so far only available at a stationary workplace cannot simply be made mobile without changes. Furthermore, mobility implies specific design requirements for KMSs. Taken together, the rapid developments in technology, which allow more and more mobile processes to be supported through mobile KMSs, and the current social and occupational developments, which result in more mobile workplaces and business processes (Gruhn & Book, 2003), suggest that the relevance of mobile KM can be expected to increase in the future.


2021 ◽ pp. 147-151
Author(s): Roberta Varriale, Fabiana Rocci, Orietta Luzi

In recent years, the Italian National Institute of Statistics (Istat), together with most national statistical institutes, has been progressively moving from traditional production models based on a primary source of information (direct surveys) to new production strategies based on the combined use of different primary and secondary sources of information. As a result, new multisource statistical processes have been built that guarantee a major improvement in both the amount and the quality of information about several phenomena of public interest. In this context, the Total Process Error (TPE) framework has recently been proposed in the literature for assessing the quality of multisource processes. The TPE framework is an evolution of Zhang's two-phase life-cycle approach and additionally includes an operational tool to connect the steps of the multisource production process to the phases of the quality evaluation framework. The TPE framework can be used both to support the design of a multisource process and to monitor an entire production process, providing key elements for assessing the quality of both the processes and their statistical outputs. In the present work, we describe, as a case study in the new context of Istat's production of official statistics, the use of the TPE framework to support the process design of the Register for Public Administrations.


Author(s): James Yao, John Wang, Qiyang Chen, Ruben Xing

A data warehouse is a system that integrates heterogeneous data sources to support the decision-making process. Data warehouse design is a lengthy, time-consuming, and costly process, and there has been a high failure rate in data warehouse development projects. Thus, how to design and develop a data warehouse has become an important issue for information systems designers and developers. This paper reviews and discusses some of the core data warehouse design and development methodologies in information system development. In particular, the paper presents the most recent and much-debated hybrid approach, which combines the data-driven and requirement-driven approaches.
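For orientation, below is an illustrative star schema (all table and column names are invented) of the kind either methodology would eventually produce; a data-driven approach would derive it from the source schemas, while a requirement-driven approach would start from the analytical questions, and the hybrid approach reconciles the two:

```python
# Illustrative star schema: one fact table referencing surrounding dimension tables.
import sqlite3

STAR_SCHEMA_DDL = """
CREATE TABLE dim_date    (date_key    INTEGER PRIMARY KEY, day DATE, month INTEGER, year INTEGER);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_store   (store_key   INTEGER PRIMARY KEY, city TEXT, region TEXT);

CREATE TABLE fact_sales (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    store_key   INTEGER REFERENCES dim_store(store_key),
    quantity    INTEGER,
    revenue     NUMERIC(12, 2)
);
"""

# The DDL uses a portable subset, so it can be tried locally with SQLite.
con = sqlite3.connect(":memory:")
con.executescript(STAR_SCHEMA_DDL)
print("star schema created")
```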

