An Intelligent Approach for Handling Complexity by Migrating from Conventional Databases to Big Data

Symmetry ◽  
2018 ◽  
Vol 10 (12) ◽  
pp. 698 ◽  
Author(s):  
Shabana Ramzan ◽  
Imran Bajwa ◽  
Rafaqut Kazmi

Handling complexity in the data of information systems has emerged as a serious challenge in recent times. Typical relational databases have limited ability to manage the discrete and heterogeneous nature of modern data. Additionally, the complexity of data in relational databases is so high that efficient retrieval of information has become a bottleneck in traditional information systems. On the other hand, Big Data has emerged as a sound solution for heterogeneous and complex data (structured, semi-structured, and unstructured) by providing architectural support for handling complex data and a tool-kit for its efficient analysis. Organizations that are sticking to relational databases and facing the challenge of handling complex data need to migrate to a Big Data solution to gain benefits such as horizontal scalability, real-time interaction, and the ability to handle high-volume data. However, such migration from relational databases to Big Data is itself a challenge due to the complexity of the data. In this paper, we introduce a novel approach that handles the complexity of automatically transforming an existing relational database (MySQL) into a Big Data solution (Oracle NoSQL). The approach supports a bi-fold transformation (schema-to-schema and data-to-data) to minimize data complexity and to allow improved analysis of the data. A software prototype for this transformation has also been developed as a proof of concept. Experimental results show the correctness of our transformations, which outperform other similar approaches.
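
The bi-fold (schema-to-schema, data-to-data) idea can be illustrated with a minimal sketch. This is not the authors' tool and does not use the Oracle NoSQL driver; the `table/pk` major-key convention and the `customer` table are illustrative assumptions only, chosen because key-value stores such as Oracle NoSQL Database commonly address records by hierarchical keys.

```python
import json

def transform_schema(table, pk):
    """Schema-to-schema step (sketch): map a relational table definition
    to a key-value layout where each row lives under 'table/<pk>'."""
    return {"major_key": f"{table}/{{{pk}}}", "value": "JSON document"}

def transform_rows(table, pk, rows):
    """Data-to-data step (sketch): each relational row becomes one JSON
    value stored under a key derived from its primary key."""
    return {f"{table}/{row[pk]}": json.dumps(row) for row in rows}

# Hypothetical source table contents.
customers = [
    {"id": 1, "name": "Alice", "city": "Lahore"},
    {"id": 2, "name": "Bob", "city": "Multan"},
]
schema = transform_schema("customer", "id")
kv = transform_rows("customer", "id", customers)
print(schema["major_key"])  # customer/{id}
print(kv["customer/1"])
```

A real migration would additionally nest or reference child tables reached via foreign keys, which is where most of the complexity the paper addresses lives.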

2014 ◽  
Vol 14 (2) ◽  
pp. 38-49 ◽  
Author(s):  
Hristo Lesev ◽  
Alexander Penev

Abstract A novel approach is presented for recording high-volume data about a ray tracing rendering system's runtime state and for its subsequent dynamic analysis and interactive visualization in the algorithm's computational domain. Our framework extracts light paths traced by the system and leverages a powerful filtering subsystem that enables interactive visualization and exploration of the desired subset of recorded data. We introduce a versatile data-logging format and acceleration structures for easy access and filtering. We have implemented a plugin-based framework and a tool set that realize all the ideas presented in this paper. The framework provides a data-logging API for instrumenting production-ready, multithreaded, distributed renderers. The framework's visualization tool enables deeper understanding of ray tracing algorithms for novices as well as experts.
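
To make the light-path logging and filtering idea concrete, here is a minimal sketch under assumed record shapes; the `PathVertex`/`LightPath` fields and event names are hypothetical stand-ins for the paper's logging format, not its actual schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PathVertex:
    position: Tuple[float, float, float]
    event: str       # e.g. "camera", "diffuse", "specular", "light"
    object_id: int   # scene object the ray interacted with (-1 = none)

@dataclass
class LightPath:
    pixel: Tuple[int, int]       # image-plane origin of the path
    vertices: List[PathVertex]   # ordered bounce points

def filter_paths(paths, predicate):
    """Filtering-subsystem sketch: keep only paths matching a predicate,
    so a viewer can display just the subset of interest."""
    return [p for p in paths if predicate(p)]

# Example log: select paths that interacted with object 7.
paths = [
    LightPath((0, 0), [PathVertex((0, 0, 0), "camera", -1),
                       PathVertex((1, 2, 0), "diffuse", 7),
                       PathVertex((3, 5, 1), "light", 2)]),
    LightPath((0, 1), [PathVertex((0, 0, 0), "camera", -1),
                       PathVertex((2, 2, 0), "specular", 4)]),
]
hits_obj7 = filter_paths(paths, lambda p: any(v.object_id == 7 for v in p.vertices))
print(len(hits_obj7))  # 1
```

The acceleration structures mentioned in the abstract would index such records (e.g. by pixel or object) so predicates like this run interactively on much larger logs.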


2017 ◽  
Vol 31 (3) ◽  
pp. 45-61 ◽  
Author(s):  
Uday S. Murthy ◽  
Guido L. Geerts

ABSTRACT The term “Big Data” refers to massive volumes of data that grow at an increasing rate and encompass complex data types such as audio and video. While the applications of Big Data and analytic techniques for business purposes have received considerable attention, it is less clear how external sources of Big Data relate to the transaction processing-oriented world of accounting information systems. This paper uses the Resource-Event-Agent Enterprise Ontology (REA) (McCarthy 1982; International Standards Organization [ISO] 2007) to model the implications of external Big Data sources on business transactions. The five-phase REA-based specification of a business transaction as defined in ISO (2007) is used to formally define associations between specific Big Data elements and business transactions. Using Big Data technologies such as Apache Hadoop and MapReduce, a number of information extraction patterns are specified for extracting business transaction-related information from Big Data. We also present a number of analytics patterns to demonstrate how decision making in accounting can benefit from integrating specific external Big Data sources and conventional transactional data. The model and techniques presented in this paper can be used by organizations to formalize the associations between external Big Data elements in their environment and their accounting information artifacts, to build architectures that extract information from external Big Data sources for use in an accounting context, and to leverage the power of analytics for more effective decision making.
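
As a toy illustration of the extraction-pattern idea, the following sketch counts product mentions in external records in MapReduce style. The record fields (`source`, `mentioned_skus`) are invented for illustration, and the shuffle step is simulated in-process rather than run on Hadoop.

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (sku, 1) for every external record that mentions a
    product tied to the firm's transactions."""
    for rec in records:
        for sku in rec.get("mentioned_skus", []):
            yield sku, 1

def reduce_phase(pairs):
    """Reduce: aggregate mention counts per product, as the framework
    would after shuffling each key to its reducer."""
    counts = defaultdict(int)
    for sku, n in pairs:
        counts[sku] += n
    return dict(counts)

# Hypothetical external Big Data records (e.g. social-media posts).
reviews = [
    {"source": "social", "mentioned_skus": ["A100", "B200"]},
    {"source": "social", "mentioned_skus": ["A100"]},
]
print(reduce_phase(map_phase(reviews)))  # {'A100': 2, 'B200': 1}
```

The resulting per-product counts could then be joined against conventional sales transactions, in the spirit of the analytics patterns the paper describes.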


Author(s):  
Giancarlo Rodrigues ◽  
Alaine Margarete Guimarães

An FMIS (farm management information system) is the computational tool responsible for processing data into information that improves farmers' decision support. The data manipulated in an FMIS originates from diverse sources and is stored and read whenever necessary without subsequent modification, thus dispensing with the need for complex data storage systems such as those offered by the relational model. Due to its ability to handle large amounts of unstructured data with high performance and to reduce application complexity, the NoSQL data storage model, a convenient alternative to the relational model, has recently gained a lot of attention in the information systems market. Accordingly, this chapter discusses how NoSQL models could improve FMIS architecture and performance when used as data storage. Works that have already benefited from adopting the NoSQL model are reviewed, and convenient use cases where both data storage models could be put to good use in an FMIS architecture are recommended and discussed.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Lei Wu ◽  
Ran Ding ◽  
Zhaohong Jia ◽  
Xuejun Li

In the era of big data, mining and analysis of enormous amounts of data are widely used to support decision-making. This complex process, including high-volume data collection, storage, transmission, and analysis, can be modeled as a workflow. Meanwhile, the cloud environment provides sufficient computing and storage resources for big data management and analytics. Because clouds use a pay-as-you-go pricing scheme, executing a workflow in the cloud incurs charges for the provisioned resources. Thus, cost-effective resource provisioning for workflows in clouds remains a critical challenge. Moreover, the responses of a complex data management process are usually required in real time, so the deadline is the most crucial constraint on workflow execution. To address the challenge of cost-effective resource provisioning while meeting the real-time requirements of workflow execution, a resource provisioning strategy based on dynamic programming is proposed to achieve cost-effective workflow execution in clouds, and a critical-path-based workflow partition algorithm is presented to guarantee that the workflow completes before its deadline. Our approach is evaluated by simulation experiments with real-time workflows of different sizes and structures. The results demonstrate that our algorithm outperforms existing classical algorithms.
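
The critical-path computation underlying such partition algorithms is itself a small dynamic program over a topological order of the workflow DAG. The sketch below is a generic version of that step, not the paper's algorithm; the four-task workflow (`collect`/`store`/`analyze`/`report`) and its runtimes are invented for illustration.

```python
from collections import defaultdict, deque

def critical_path(runtime, preds):
    """runtime: {task: execution time}; preds: {task: [predecessors]}.
    A DP pass over a topological order computes each task's earliest
    finish time; the maximum is the makespan, and backtracking through
    the latest-finishing predecessors recovers the critical path."""
    succs = defaultdict(list)
    indeg = {t: 0 for t in runtime}
    for t, ps in preds.items():
        for p in ps:
            succs[p].append(t)
            indeg[t] += 1
    queue = deque(t for t in runtime if indeg[t] == 0)
    finish, via = {}, {}
    while queue:
        t = queue.popleft()
        start, via[t] = 0, None
        for p in preds.get(t, []):        # DP: start after slowest predecessor
            if finish[p] > start:
                start, via[t] = finish[p], p
        finish[t] = start + runtime[t]
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    end = max(finish, key=finish.get)     # task that finishes last
    path = []
    while end is not None:
        path.append(end)
        end = via[end]
    return finish[path[0]], path[::-1]

rt = {"collect": 2, "store": 1, "analyze": 4, "report": 1}
pre = {"store": ["collect"], "analyze": ["collect"], "report": ["store", "analyze"]}
print(critical_path(rt, pre))  # (7, ['collect', 'analyze', 'report'])
```

A deadline-aware provisioner would compare this makespan against the deadline and assign faster (costlier) cloud instances to tasks on the critical path first.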


Author(s):  
Wen-Chen Hu ◽  
Naima Kaabouch ◽  
Hongyu Guo ◽  
Hung-Jen Yang

Relational databases have dominated the database market for decades because they perform extremely well for traditional applications like electronic commerce and inventory systems. However, relational databases do not suit some contemporary applications, such as big data and cloud computing, for various reasons, including their low scalability and inability to handle high volumes of data. NoSQL (not only SQL) databases are part of the solution for developing these newer applications. The approach they use is different from that of relational databases. This chapter discusses NoSQL databases using an empirical rather than a theoretical approach. Besides introducing the types and features of generic NoSQL databases, practical NoSQL database programming and usage are shown using MongoDB, a NoSQL database. A summary of this research is given at the end of the chapter.
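
To give a flavor of the document model without requiring a running server, here is a tiny in-memory imitation of MongoDB-style `find()` filters. It is a sketch of the query style only (exact match plus the `$gt` operator), not the pymongo driver API, and the `orders` collection is hypothetical.

```python
def find(collection, query):
    """Minimal imitation of MongoDB-style find(): supports exact matches
    and the $gt comparison operator on top-level fields."""
    def matches(doc):
        for field, cond in query.items():
            if isinstance(cond, dict) and "$gt" in cond:
                if not (field in doc and doc[field] > cond["$gt"]):
                    return False
            elif doc.get(field) != cond:
                return False
        return True
    return [d for d in collection if matches(d)]

# Documents in one collection need not share a fixed schema.
orders = [
    {"_id": 1, "customer": "Alice", "total": 120.0},
    {"_id": 2, "customer": "Bob", "total": 40.0},
    {"_id": 3, "customer": "Alice", "total": 15.5, "note": "gift"},
]
print(find(orders, {"customer": "Alice", "total": {"$gt": 100}}))
# [{'_id': 1, 'customer': 'Alice', 'total': 120.0}]
```

Against a real deployment, the equivalent call would go through a driver (e.g. `db.orders.find(...)` in the MongoDB shell), with the same filter-document syntax.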


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Lingling Gu

Big data refers to a collection of data that cannot be captured, managed, and processed with conventional software tools within a reasonable time frame. It is massive, high-volume, fast-growing, and diversified information that requires new processing models to deliver stronger decision-making power, insight and discovery, and process-optimization capabilities. This article aims to study the integration and optimization of ancient literature information resources with big data technology, that is, to integrate and optimize ancient literature information resources through big data technology and make the literature more systematic and complete, allowing readers to find and browse it more conveniently. This paper focuses on literary works and the related collation, annotation, and textual research results, and divides the scope of each subtopic according to genre. The biggest difference between the information platform built in this paper and existing ancient-books databases is that it offers semantic analysis, subject retrieval, data generation, and other functions. After text learning, the computer can automatically classify related vocabulary. Based on the effective integration of big data and cultural resources, the experimental results of this article show that, so far, through technical optimization and resource integration, more than 12,000 ancient documents have been reincorporated and more than 10,000 publications restored. Therefore, big data technology is essential for the integration and optimization of cultural resources.
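
The automatic subject classification step can be sketched, at its simplest, as lexicon-based scoring. The subject lexicon and sample text below are invented for illustration; the paper's platform presumably learns such associations from text rather than hand-coding them.

```python
def classify(text, lexicon):
    """Assign a subject label by counting lexicon-term occurrences;
    a minimal stand-in for the platform's automatic vocabulary
    classification after text learning."""
    scores = {subject: sum(text.count(term) for term in terms)
              for subject, terms in lexicon.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

# Hypothetical subject lexicon for a small catalog.
lexicon = {
    "poetry": ["verse", "ode", "stanza"],
    "history": ["chronicle", "annals", "dynasty"],
}
print(classify("annals of the previous dynasty, a court chronicle", lexicon))
# history
```

Subject retrieval then reduces to indexing documents by the labels such a classifier assigns.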


Cancers ◽  
2020 ◽  
Vol 13 (1) ◽  
pp. 86
Author(s):  
Mohit Kumar ◽  
Chellappagounder Thangavel ◽  
Richard C. Becker ◽  
Sakthivel Sadayappan

Immunotherapy is one of the most effective therapeutic options for cancer patients. Five specific classes of immunotherapy are in use: cell-based chimeric antigen receptor T-cells, checkpoint inhibitors, cancer vaccines, antibody-based targeted therapies, and oncolytic viruses. Immunotherapies can improve survival rates among cancer patients. At the same time, however, they can cause inflammation and promote adverse cardiac immune modulation and cardiac failure in some cancer patients as late as five to ten years after immunotherapy. In this review, we discuss cardiotoxicity associated with immunotherapy. We also propose using human-induced pluripotent stem cell-derived cardiomyocytes/cardiac stromal progenitor cells and cardiac organoid cultures as innovative experimental model systems to (1) mimic clinical treatment, yielding reproducible data, and (2) promote the identification of immunotherapy-induced biomarkers of both early and late cardiotoxicity. Finally, we introduce the integration of omics-derived high-volume data and cardiac biology as a pathway toward the discovery of new and efficient non-toxic immunotherapies.


2021 ◽  
pp. 1-30
Author(s):  
Lisa Grace S. Bersales ◽  
Josefina V. Almeda ◽  
Sabrina O. Romasoc ◽  
Marie Nadeen R. Martinez ◽  
Dannela Jann B. Galias

With the advancement of technology, digitalization, and the Internet of Things, large amounts of complex data are being produced daily. This vast quantity of varied data produced at high speed is referred to as Big Data. Big Data is being utilized with success in the private sector, yet the public sector seems to be falling behind despite the many potentials Big Data has already demonstrated. In this regard, this paper explores ways in which the government can adopt Big Data for official statistics. It begins by gathering and presenting Big Data-related initiatives and projects across the globe for various types and sources of Big Data. Further, the paper discusses the opportunities, challenges, and risks associated with using Big Data, particularly in official statistics. It also assesses the current utilization of Big Data in the country through focus group discussions and key informant interviews. Based on desk review, discussions, and interviews, the paper concludes with a proposed framework for how the government may utilize Big Data to augment official statistics.


Omega ◽  
2021 ◽  
pp. 102479
Author(s):  
Zhongbao Zhou ◽  
Meng Gao ◽  
Helu Xiao ◽  
Rui Wang ◽  
Wenbin Liu
