A decentralised framework for efficient storage and processing of big data using HDFS and IPFS

2020 ◽  
Vol 1 (2) ◽  
pp. 131
Author(s):  
Franklin John ◽  
Suji Gopinath ◽  
Elizabeth Sherly
Keyword(s):  
Big Data ◽  

2019 ◽  
Vol 4 (2) ◽  
pp. 207-220
Author(s):  
김기수 ◽  
Yukun Hahm ◽  
장유림 ◽  
Jaejin Yi ◽  
Honghoi Kim

2019 ◽  
Vol 35 (4) ◽  
pp. 893-903 ◽  
Author(s):  
Seemu Sharma ◽  
Seema Bawa

Abstract Cultural data and information on the web are continuously increasing, evolving, and reshaping into big data due to globalization, digitization, and vast exploration, as people increasingly recognize the importance of ancient values. Therefore, before these data become unwieldy and too complex to manage, their integration into big data repositories is essential. This article analyzes the complexity of the growing cultural data and presents a Cultural Big Data Repository as an efficient way to store and retrieve cultural big data. The repository is highly scalable and provides integrated high-performance methods for big data analytics in cultural heritage. Experimental results demonstrate that the proposed repository outperforms existing approaches in terms of space as well as the storage and retrieval time of Cultural Big Data.


Big Data refers to datasets so large that they cannot be stored, managed, or analyzed using commonly used software systems. The emergence of smartphones, social networks, and online applications has led to the generation of massive amounts of structured, unstructured, and semi-structured data. Big data analytics has received sizeable attention because it offers a great opportunity to uncover hidden patterns in vast amounts of data. Data preprocessing techniques, when applied prior to analytics, can substantially improve the quality of the patterns mined and/or reduce the time required for the actual mining. This paper therefore presents an efficient method for preprocessing data and partitioning a big dataset based on sensitivity parameters. Each partition can then be uploaded to a public or private cloud according to the importance of its data, so this approach supports hybrid cloud storage and processing of big data. Experimental results show that the proposed method preprocesses and partitions data with high accuracy and reduced processing time.
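The partitioning idea in this abstract can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the field names, the sensitivity score (fraction of sensitive fields per record), and the 0.3 threshold are all assumptions introduced for the example.

```python
# Hypothetical sketch of sensitivity-based partitioning for hybrid cloud
# storage. SENSITIVE_FIELDS, the scoring rule, and the threshold are
# illustrative assumptions, not the paper's actual parameters.

SENSITIVE_FIELDS = {"name", "ssn", "diagnosis"}  # assumed sensitivity parameters

def preprocess(record):
    """Basic cleaning: drop empty values, normalize strings."""
    return {k: v.strip().lower() for k, v in record.items() if v and v.strip()}

def sensitivity_score(record):
    """Fraction of a record's fields that are marked sensitive."""
    if not record:
        return 0.0
    return sum(1 for k in record if k in SENSITIVE_FIELDS) / len(record)

def partition(records, threshold=0.3):
    """Route each cleaned record to a private or public partition."""
    private, public = [], []
    for r in map(preprocess, records):
        (private if sensitivity_score(r) >= threshold else public).append(r)
    return private, public

records = [
    {"name": "Alice", "ssn": "123", "city": "Delhi"},
    {"city": "Pune", "temp": "31"},
]
priv, pub = partition(records)  # priv → private cloud, pub → public cloud
```

Records above the threshold would be uploaded to the private cloud and the rest to the public cloud, giving the hybrid storage split the abstract describes.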


Author(s):  
Ming Yang ◽  
Wenchun He ◽  
Zhiqiang Zhang ◽  
Yongjun Xu ◽  
Heping Yang ◽  
...  

Abstract With the development of the meteorological IoT (Internet of Things) and meteorological sensing networks, the collected multi-source meteorological data are characterized by large information volume, multiple dimensions, and high accuracy. Cloud computing technology has been applied to the storage and service of meteorological big data. Although the constant evolution of big data storage technology is improving the storage and access of meteorological data, storage and service efficiency is still far from meeting multi-source big data requirements. Traditional methods used for the storage and service of meteorological data suffer from a number of persistent problems, such as the lack of a unified storage structure, poor scalability, and poor service performance. In this study, an efficient storage and service method for multidimensional meteorological data is designed based on NoSQL big data storage technology and the multidimensional characteristics of meteorological data. For data storage, multidimensional block compression technology and matching data structures are applied to store and transmit meteorological data. For service, heterogeneous NoSQL common components are designed to bridge the heterogeneity of NoSQL databases. The results show that the proposed method has good storage and transmission efficiency and versatility, and can effectively improve the efficiency of meteorological data storage and service in meteorological applications.
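The core storage idea, splitting a gridded field into fixed-size blocks, compressing each block, and storing it under a composite key in a key-value (NoSQL-style) store, can be sketched as below. The block size, key layout, and dict-backed store are assumptions for illustration, not the paper's design; a point read then decompresses only the one tile that contains the requested cell.

```python
# Minimal sketch of multidimensional block compression for gridded
# meteorological data. The NoSQL store is modeled as a plain dict;
# BLOCK and the "var:row:col" key scheme are illustrative assumptions.
import struct
import zlib

BLOCK = 4  # block edge length along each dimension (assumed)

def store_grid(db, var, grid):
    """Split a 2-D grid (list of rows of floats) into BLOCK x BLOCK tiles,
    compress each tile, and store it under a composite key."""
    rows, cols = len(grid), len(grid[0])
    for bi in range(0, rows, BLOCK):
        for bj in range(0, cols, BLOCK):
            tile = [grid[i][bj:bj + BLOCK]
                    for i in range(bi, min(bi + BLOCK, rows))]
            flat = [v for row in tile for v in row]
            raw = struct.pack(f"{len(flat)}d", *flat)
            db[f"{var}:{bi}:{bj}"] = zlib.compress(raw)

def read_cell(db, var, i, j, cols):
    """Fetch one value by decompressing only the tile that contains it."""
    bi, bj = i - i % BLOCK, j - j % BLOCK
    flat = struct.unpack_from(
        f"{len(zlib.decompress(db[f'{var}:{bi}:{bj}'])) // 8}d",
        zlib.decompress(db[f"{var}:{bi}:{bj}"]))
    width = min(BLOCK, cols - bj)
    return flat[(i - bi) * width + (j - bj)]

db = {}
grid = [[float(i * 8 + j) for j in range(8)] for i in range(8)]
store_grid(db, "temp", grid)          # 8x8 grid → four compressed tiles
value = read_cell(db, "temp", 5, 6, 8)
```

Blocking keeps point queries cheap (one small decompression) while compression reduces both storage footprint and transmission volume, which is the trade-off the abstract targets.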


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Chong Feng ◽  
Muhammad Adnan ◽  
Arshad Ahmad ◽  
Ayaz Ullah ◽  
Habib Ullah Khan

The aim of the Internet of Things (IoT) is to bring every object (wearable sensors, healthcare sensors, cameras, home appliances, smartphones, etc.) online. These objects generate huge volumes of data, which in turn create a need for efficient storage and processing. Cloud computing is an emerging technology for addressing this need. However, some applications (such as healthcare) must process data in real time and therefore require low latency and delay. Fog computing is a promising solution that supports the healthcare domain by reducing the delay of multihop data communication, distributing resource demands, and promoting service flexibility. In this study, a fog-based IoT healthcare framework is proposed to minimize the energy consumption of the fog nodes. Experimental results reveal that the proposed framework performs efficiently in terms of network delay and energy usage. Furthermore, the authors discuss and suggest important big data infrastructure services that fog devices need for the analytics of healthcare big data.
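The fog-versus-cloud trade-off the abstract describes can be illustrated with a toy placement rule: latency-tolerant tasks are offloaded to the cloud so fog nodes can idle (saving energy), while latency-critical healthcare tasks go to the least-loaded fog node. The node names, delays, and load figures are invented for the sketch and are not from the paper.

```python
# Hypothetical sketch of a fog/cloud task-placement decision aimed at
# reducing fog-node energy use. All node names, delay constants, and
# load values are illustrative assumptions, not the paper's framework.

FOG_NODES = {"fog-1": 0.2, "fog-2": 0.5}   # current load (0..1), assumed
CLOUD_DELAY_MS = 120                        # assumed cloud round-trip delay
FOG_DELAY_MS = 15                           # assumed fog round-trip delay

def place_task(deadline_ms):
    """Pick an execution site for a task with the given latency deadline."""
    if deadline_ms >= CLOUD_DELAY_MS:
        # Latency-tolerant task: offload to the cloud and let fog nodes idle,
        # which is how the sketch models fog energy savings.
        return "cloud"
    # Latency-critical task: run on the least-loaded fog node.
    return min(FOG_NODES, key=FOG_NODES.get)

# Example: a vital-signs alert (10 ms), a monitoring update (50 ms),
# and a batch analytics job (200 ms deadline).
assignments = {place_task(d) for d in (10, 50, 200)}
```

Only the batch job leaves the fog layer in this sketch; real frameworks would also weigh per-node energy models and network topology.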


ASHA Leader ◽  
2013 ◽  
Vol 18 (2) ◽  
pp. 59-59
Find Out About 'Big Data' to Track Outcomes

