A Survey on Implementation of Word-Count with Map Reduce Programming Oriented Model using Hadoop Framework

2019
Author(s): Santosh Yadav, Jay Prakash

2017, Vol 49 (3), pp. 179-182
Author(s): Keerthi Bangari, Sujitha Meduri, CY Rao

Nowadays, processing remote-sensing satellite data is a very challenging task. Recent developments in satellite technology have led to explosive growth in both the quantity and the quality of High-Resolution Remote Sensing (HRRS) images. These images can run to gigabytes or terabytes, which makes them too large to load into memory and slow to process. To address the challenges of processing HRRS images, a distributed MapReduce framework is proposed in this paper. The paper treats MapReduce as a distributed model on the Hadoop framework for processing large collections of images, and introduces block-based and size-based methods for effective processing. Experiments show the proposed framework to be effective in performance and speed.
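The block-based and size-based splitting the abstract mentions can be illustrated with a minimal, self-contained Python sketch. This is an assumption-laden illustration, not the paper's actual method: it cuts a byte payload into fixed-size blocks, applies a per-block map function, and combines the partial results, mimicking how a MapReduce job would fan out over image blocks.

```python
BLOCK_SIZE = 4  # bytes per block for illustration; real HDFS blocks are 64/128 MB

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Size-based splitting: cut the payload into fixed-size chunks."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def map_block(block: bytes) -> int:
    """Per-block map step: here, count non-zero 'pixel' bytes in the block."""
    return sum(1 for b in block if b != 0)

def reduce_counts(counts) -> int:
    """Reduce step: combine the per-block partial results."""
    return sum(counts)

# A tiny stand-in for image data; each byte plays the role of a pixel.
image = bytes([0, 5, 0, 9, 1, 0, 0, 2, 3])
blocks = split_into_blocks(image)
total = reduce_counts(map(map_block, blocks))  # each block can be mapped on a different node
```

In a real deployment the call to `map` would be distributed across cluster nodes by the framework; the point of the sketch is only that per-block work is independent, which is what makes the block-based approach parallelizable.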


IJARCCE, 2017, Vol 6 (3), pp. 745-747
Author(s): Poonam Mahajan, Manish Patel, Amol Agarwal, Nikhil Raut, Devendra Gadekar

2020, Vol 9 (08), pp. 25125-25131
Author(s): Kapil Sahu, Kaveri Bhatt, Prof. Amit Saxena, Kaptan Singh

As a result of the rapid development of cloud computing, it is fundamental to investigate the performance of different Hadoop MapReduce applications and to identify the performance bottlenecks in a cloud cluster that contribute to higher or lower performance. It is also important to analyze the underlying hardware in cloud cluster servers to permit the optimization of software and hardware to achieve the highest performance feasible. Hadoop is founded on MapReduce, which is among the most popular programming models for big-data analysis in a parallel computing environment. In this paper, we present a detailed performance analysis, characterization, and evaluation of the Hadoop MapReduce WordCount application. The main aim of this paper is to illustrate Hadoop MapReduce programming by giving hands-on experience in developing Hadoop-based WordCount and Apriori applications: the word-count problem is solved using the Hadoop MapReduce framework, and the Apriori algorithm is used to find frequent itemsets with the same framework.
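The WordCount application discussed above follows the classic map, shuffle, and reduce pattern. A minimal, self-contained Python sketch of that pattern (simulating Hadoop's phases in-process, not using the Hadoop API) might look like:

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the input split.
    for word in line.lower().split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle/sort phase: group intermediate pairs by key, as Hadoop does
    # between the map and reduce phases.
    pairs = sorted(pairs, key=itemgetter(0))
    for key, group in groupby(pairs, key=itemgetter(0)):
        yield key, [v for _, v in group]

def reducer(word, counts):
    # Reduce phase: sum the 1s emitted for each word.
    return word, sum(counts)

lines = ["hadoop map reduce", "map reduce word count", "hadoop word count"]
intermediate = [pair for line in lines for pair in mapper(line)]
result = dict(reducer(w, c) for w, c in shuffle(intermediate))
```

In Hadoop proper, `mapper` and `reducer` would be `Mapper`/`Reducer` classes and the shuffle would happen across the network between nodes; the in-memory version above only demonstrates the data flow.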


2013, Vol 427-429, pp. 2126-2129
Author(s): An Sheng Lu, Zi Hui Li, Hui Xiu Jin, Jia Yi Zhang, Hang Wei

The study of a distributed search engine based on Hadoop (referred to as DSEH) puts forward a system architecture for a distributed web-service search engine based on Map/Reduce and introduces its related modules. The whole system is built on the Hadoop framework using Map/Reduce, and the key technologies of the distributed search engine are analyzed.
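A core building block of a MapReduce-based search engine is the inverted index, which maps each term to the documents containing it. The abstract does not give DSEH's implementation, so the following is only a hedged Python sketch of how an inverted index is built with the same map/shuffle pattern:

```python
from itertools import groupby
from operator import itemgetter

def mapper(doc_id, text):
    # Map phase: emit a (term, doc_id) pair for every term in the document.
    for term in text.lower().split():
        yield (term, doc_id)

def build_index(pairs):
    # Shuffle + reduce: group by term, collect the sorted set of doc ids.
    pairs = sorted(set(pairs))
    return {term: sorted({d for _, d in group})
            for term, group in groupby(pairs, key=itemgetter(0))}

docs = {1: "hadoop distributed search",
        2: "distributed search engine",
        3: "hadoop engine"}
pairs = [p for doc_id, text in docs.items() for p in mapper(doc_id, text)]
index = build_index(pairs)  # e.g. index["distributed"] lists docs 1 and 2
```

Queries then become lookups (and set intersections, for multi-term queries) against the index, which is why the index build is the natural MapReduce job in a search engine.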


Most current applications are data- and compute-intensive, which has led to the development of technologies like Hadoop. Hadoop uses the MapReduce framework for parallel processing of big-data applications using the computing resources of multiple nodes. Hadoop is designed for cluster environments and has some limitations when executed in cloud environments. Hadoop on the cloud has nevertheless become a common choice due to its easy establishment of infrastructure and pay-as-you-use model. Hadoop performance on cloud infrastructures is affected by the virtualization overhead of the cloud environment. The execution times of Hadoop on the cloud can be improved if the virtual resources are used effectively to schedule tasks, by studying the resource-usage characteristics of the tasks and the resource availability of the nodes. The proposed work builds a dynamic scheduler for the Hadoop framework which makes scheduling decisions dynamically based on job resource usage and node load. The results of the proposed work indicate an improvement of up to 23% in the execution time of Hadoop MapReduce applications.
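The abstract does not specify the scheduler's algorithm, so the following is a hypothetical greedy sketch, not the paper's actual method: each task is placed on a node that has enough free resources, preferring the least-loaded (most free CPU) node, which is the general shape of resource-aware scheduling decisions it describes.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_free: float  # fraction of CPU currently unused
    mem_free: int    # MB of memory currently unused

@dataclass
class Task:
    name: str
    cpu_need: float
    mem_need: int

def schedule(task, nodes):
    """Place the task on the feasible node with the most free CPU.

    Returns the chosen node's name, or None if no node can fit the task
    (a real scheduler would queue the task instead).
    """
    feasible = [n for n in nodes
                if n.cpu_free >= task.cpu_need and n.mem_free >= task.mem_need]
    if not feasible:
        return None
    best = max(feasible, key=lambda n: n.cpu_free)
    best.cpu_free -= task.cpu_need   # reserve the resources on that node
    best.mem_free -= task.mem_need
    return best.name

nodes = [Node("n1", 0.5, 2048), Node("n2", 0.8, 1024)]
tasks = [Task("map1", 0.3, 512), Task("map2", 0.3, 512), Task("reduce1", 0.4, 1024)]
placements = [schedule(t, nodes) for t in tasks]
```

The node names, resource fields, and the tie-breaking rule are all illustrative assumptions; the point is only that placement consults both task demand and current node load, as the proposed dynamic scheduler does.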


The Hadoop Distributed File System (HDFS) and MapReduce (MR) are the key components of the Hadoop framework. Big-data scenarios such as Facebook (FB) data processing, or Twitter analytics in which tweets are stored and processed, depend on the Hadoop framework for storage and processing, on top of which further analytics can be performed. Processing such huge amounts of data inevitably consumes large amounts of space and time in the Hadoop framework. The problem is that both the space used and the processing time are high, and both need to be reduced to obtain the fastest response from the framework. The attempt is important because all the other ecosystem tools also depend on HDFS and MR for data storage and processing, so an alternative architecture that improves space usage and resource utilization can reduce the time requirements of the framework. The outcome of the work is faster data processing and lower space utilization when running MR along with other ecosystem tools such as Hive, Flume, Sqoop, and Pig Latin. The work proposes an alternative framework to HDFS and MR, named Unified Space Allocation and Data Processing with Metadata based Distributed File System (USAMDFS).

