Big Data Analytics for Satellite Image Processing and Remote Sensing - Advances in Computer and Electrical Engineering
Latest Publications


TOTAL DOCUMENTS: 10 (five years: 0)

H-INDEX: 1 (five years: 0)

Published By IGI Global

ISBN: 9781522536437, 9781522536444

Author(s):  
Shreya Tuli ◽  
Gaurav Sharma ◽  
Nayan Mishr

Big data is a term for data sets so large or complex that traditional data-processing application software is inadequate to deal with them. Its challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, and information privacy. Lately, the term tends to refer to the use of predictive analytics, user-behavior analytics, or other advanced analytics methods that extract value from data, and seldom to a particular size of data set. In this chapter, the authors distinguish between a fake note and a real note and aim to develop the approach to a level where it can be used everywhere. Once a note has been classified as real or fake, the detection result can be stored in a database. Because the volume of such data will be huge, big data techniques are used to store large amounts of data quickly. The difference between a real note and a fake note is that the thin security strip of a real note is more or less continuous, whereas the strip of a fake note is fragmented into multiple thin lines. In other words, a fake note shows more than one line in the thin strip while a real note shows only one: if just one line is visible, the note is real; if more than one line is visible, it is fake. The authors work with foreign currency.
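The strip test described in this abstract reduces to counting contiguous ink runs along the security strip. A minimal sketch follows (hypothetical code; the chapter does not publish an implementation, and a real detector would first need to locate and binarize the strip in the scanned image):

```python
import numpy as np

def count_strip_segments(strip_column: np.ndarray, threshold: int = 128) -> int:
    """Count contiguous dark runs along a 1-D intensity profile of the
    security strip (hypothetical simplification of the chapter's idea)."""
    dark = strip_column < threshold  # True wherever strip ink is present
    # A new segment starts wherever a dark pixel follows a light one,
    # plus one if the profile begins dark.
    return int(np.flatnonzero(dark[1:] & ~dark[:-1]).size + int(dark[0]))

def classify_note(strip_column: np.ndarray) -> str:
    """One continuous line -> real; fragmented strip (multiple runs) -> fake."""
    return "real" if count_strip_segments(strip_column) == 1 else "fake"

# Example profiles: 255 = paper background, 0 = strip ink
real_profile = np.array([255, 0, 0, 0, 0, 255])        # one continuous run
fake_profile = np.array([255, 0, 255, 0, 255, 0, 255])  # fragmented runs
```

The run-counting trick (`dark[1:] & ~dark[:-1]`) detects rising edges in the binary profile, which is exactly the "how many lines do we see" question the abstract poses.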


Author(s):  
Abhishek Mukherjee ◽  
Chetan Kumar ◽  
Leonid Datta

This chapter describes MapReduce, a programming model for distributed, parallel computing over huge chunks of data. MapReduce jobs can execute on commodity servers, which reduces server-maintenance costs and removes the need for dedicated servers to run these processes. The chapter surveys approaches to the MapReduce programming model and how to use it efficiently for scalable text-based analysis in domains such as machine learning, data analytics, and data science. It discusses how to apply MapReduce techniques in these fields effectively and how to fit the MapReduce programming model to any text-mining application.
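As a concrete illustration of the model described above, the canonical word-count example can be sketched in plain Python with the map, shuffle, and reduce phases written out explicitly (an in-process simulation of the programming model, not the Hadoop API):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document: str):
    """Map step: emit a (word, 1) pair for every word in one input split."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(mapped_pairs):
    """Group intermediate pairs by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce step: sum the counts collected for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

# Each string stands in for one input split processed by one mapper.
splits = ["big data big analytics", "data analytics at scale"]
counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, splits))))
```

Because each mapper sees only its own split and each reducer sees only one key's values, both phases parallelize trivially across commodity machines, which is the cost argument the chapter makes.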


Author(s):  
Venkatesan M. ◽  
Prabhavathy P.

Effective and efficient strategies to acquire, manage, and analyze data lead to better decision making and competitive advantage. The development of cloud computing and the big data era brings challenges to traditional data-mining algorithms: the processing capacity, architecture, and algorithms of traditional database systems cannot cope with big data analysis. Big data are now rapidly growing in all science and engineering domains, including the biological and biomedical sciences and disaster management, and this complexity poses an extreme challenge for discovering useful knowledge from the data. Spatial data is one form of complex big data. The aim of this chapter is to propose a multi-ranking decision tree big data approach to handle complex spatial landslide data. The proposed classifier's performance is validated on a massive real-time dataset, and the results indicate that it is both time efficient and scalable.


Author(s):  
Remya S. ◽  
Ramasubbareddy Somula ◽  
Sravani Nalluri ◽  
Vaishali R. ◽  
Sasikala R.

This chapter presents an introduction to the basics of big data, including architecture, modeling, and the tools used. Big data refers to technologies for handling high volumes of data that can serve as an alternative to RDBMSs and to analytical technologies such as OLAP. Every application has databases that contain its essential information, but database sizes vary across applications, and these databases must be stored, extracted, and modified efficiently to be useful. This is where big data plays an important role: big data exceeds the processing and overall capacity of traditional databases. The basic architecture, tools, modeling, and challenges are each presented in their own section.


Author(s):  
Utkarsh Srivastava ◽  
Ramanathan L.

Diabetes mellitus has become a major public health issue in India. Recent statistics on diabetes reveal that 63 million people in India suffer from the disease, and this figure is likely to rise to 80 million by 2025. Given the rise of big data as a socio-technical phenomenon, there are various complications in analyzing big data and handling the related data-management issues. This chapter examines Hadoop, an open-source framework that permits distributed processing of huge datasets on clusters of computers, and shows how it produces better results when deployed with iterative MapReduce. The goal of the chapter is to analyze and extract improved data-analysis performance in a distributed environment; iterative MapReduce (i-MapReduce) plays a major role in optimizing analytics performance. The implementation runs on Cloudera Hadoop deployed on top of the Hortonworks Data Platform (HDP) Sandbox.
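The pattern that i-MapReduce optimizes can be sketched as a driver loop that reruns a job until its output converges (the job here is a toy stand-in; a real i-MapReduce deployment would run full map/shuffle/reduce rounds on the cluster and cache loop-invariant data between iterations instead of rereading it from disk):

```python
def run_job(state):
    """One MapReduce pass (stub): each value moves halfway toward a fixed
    point. In practice this would be a full map/reduce cycle, e.g. one
    round of k-means centroid updates over patient records."""
    return {key: (value + 10.0) / 2.0 for key, value in state.items()}

def iterative_mapreduce(state, tolerance=1e-6, max_iterations=100):
    """Driver loop: rerun the job until outputs stop changing. This
    repeated-job structure is exactly what i-MapReduce accelerates by
    keeping iteration state resident rather than relaunching jobs."""
    for iteration in range(max_iterations):
        new_state = run_job(state)
        if all(abs(new_state[k] - state[k]) < tolerance for k in state):
            return new_state, iteration + 1
        state = new_state
    return state, max_iterations

final, rounds = iterative_mapreduce({"x": 0.0})  # converges toward 10.0
```

Without the iterative optimization, each loop round pays full job-startup and disk-I/O cost, which is the performance gap the chapter measures.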


Author(s):  
Aditya Ashvin Doshi ◽  
Prabu Sevugan ◽  
P. Swarnalatha

A number of methodologies are available in data mining, machine learning, and pattern recognition for solving classification problems, and in the past few years the retrieval and extraction of information from large amounts of data has grown rapidly. Classification is a stepwise process of predicting responses from existing data. Established prediction algorithms include the support vector machine and k-nearest neighbors, but every algorithm has drawbacks that depend on the type of data. To reduce misclassification, a modified support vector machine methodology is introduced: instead of placing the hyperplane exactly in the middle of the margin, its position is shifted according to the number of data points of each class near the hyperplane. To reduce the time consumed by the classification computation, a multi-core architecture is used to execute independent modules simultaneously. Together, these changes reduce misclassification and speed up the computation of a data point's class.
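The chapter does not publish its modified formulation, but the core idea, shifting the boundary away from the exact middle of the margin according to the local density of each class, can be sketched in one dimension (a hypothetical simplification; the function name and weighting rule are illustrative, not the chapter's method):

```python
import numpy as np

def shifted_boundary(class_a: np.ndarray, class_b: np.ndarray) -> float:
    """Place a 1-D decision boundary between the two nearest opposing
    points, shifted toward the class with fewer points near the margin
    (illustrative simplification of the modified-SVM idea)."""
    edge_a = class_a.max()   # class A lies below the boundary
    edge_b = class_b.min()   # class B lies above the boundary
    margin = edge_b - edge_a
    # Count points of each class within one margin-width of its own edge.
    near_a = np.sum(class_a >= edge_a - margin)
    near_b = np.sum(class_b <= edge_b + margin)
    # A plain SVM would use the midpoint; weighting by local counts pushes
    # the boundary farther from the denser class.
    return edge_a + margin * near_a / (near_a + near_b)

a = np.array([0.0, 0.8, 0.9, 1.0])  # dense near the margin
b = np.array([2.0, 3.5])            # sparse near the margin
boundary = shifted_boundary(a, b)   # lies above the 1.5 midpoint
```

The two density counts (`near_a`, `near_b`) are independent and could be computed on separate cores, matching the multi-core argument in the abstract.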


Author(s):  
Pronay Peddiraju ◽  
P. Swarnalatha

The purpose of this chapter is to examine the 3D asset development and product development process for creating real-world solutions using augmented- and virtual-reality technologies. To do this, the authors create simulative software solutions that can assist corporations with training activities. The method uses augmented reality (AR) and virtual reality (VR) training tools to cut costs: applying AR and VR technologies for training purposes yields an observable cost reduction. These technologies make use of smartphones, high-performance computers, head-mounted displays (HMDs), and similar hardware to provide solutions via simulative environments. With a good UX (user experience), the solutions improve training, reduce on-site training risks, and cut costs rapidly. By creating 3D simulations driven by engine mechanics, the applications of AR and VR technologies become vast, ranging from purely computer-science applications such as data and process simulations to mechanical-equipment and environmental simulations, helping users familiarize themselves with potential scenarios.


Author(s):  
Shweta Annasaheb Shinde ◽  
Prabu Sevugan

This chapter improves a searchable encryption (SE) scheme to address these challenging problems. In the proposed model, a hierarchical clustering technique is designed to support richer search semantics and to meet the demand for fast ciphertext search in large-scale environments where the amount of data is huge. A minimum relevance threshold is used to cluster the cloud documents hierarchically, dividing clusters into sub-clusters until the final cluster is reached; this keeps computational complexity linear even as the document collection grows exponentially. To verify the validity of search results, a minimum hash sub-tree is also implemented. The chapter focuses on fetching outsourced, encrypted cloud data without loss of meaning, security, or privacy by transmitting an attribute key for the information. The model is then enhanced with a multilevel trust privacy-preserving scheme.
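The "minimum hash sub-tree" verification builds on the MinHash primitive: similar token sets yield similar signatures, so a client can cheaply check that returned documents plausibly match a query. A minimal signature sketch follows (illustrative only; the chapter's tree construction and parameters are not reproduced here):

```python
import hashlib

def minhash_signature(tokens, num_hashes: int = 4):
    """Compute a MinHash signature: for each seeded hash function, keep the
    minimum hash value over the token set. Identical token sets always
    produce identical signatures, regardless of token order."""
    signature = []
    for seed in range(num_hashes):
        signature.append(min(
            int(hashlib.md5(f"{seed}:{token}".encode()).hexdigest(), 16)
            for token in tokens
        ))
    return signature

doc_sig = minhash_signature(["cloud", "search", "encrypted"])
```

In the verification setting, signatures over document keyword sets can be stored at tree nodes, letting the verifier check a result set without decrypting full documents.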


Author(s):  
Sameera K. ◽  
P. Swarnalatha

With the prevalence of service computing and cloud computing, more and more services are deployed on the internet, generating tremendous volumes of data. This service-generated data has become too large and complex to be processed effectively by traditional approaches. How to store, manage, and create value from service-oriented big data has become an important research issue. Given the increasingly large amount of data, a single framework that provides common functionality for managing and analyzing different types of service-generated big data is urgently required. To address this challenge, this chapter gives an overview of service-generated big data and big data-as-a-service. First, three types of service-generated big data are exploited to enhance system performance. Then big data-as-a-service, comprising big data infrastructure-as-a-service, big data platform-as-a-service, and big data analytics software-as-a-service, is employed to provide common big-data-related services (e.g., accessing service-generated big data and data-analysis results) to users, improving efficiency and reducing cost.


Author(s):  
Dhanasekaran K. Pillai

This chapter focuses on the development of new computational models for remote sensing applications that handle big image data. It presents an overview of the process of developing systems for remote sensing and monitoring, and it discusses the issues and challenges of handling image big data in wireless sensor networks, which have many real-world applications. Possible solutions and recommendations for addressing these challenges are presented, along with a discussion of emerging trends and a conclusion.

