Big Data Analytics For Organizations: Challenges and Opportunities and Its Effect on International Business Education

2019
Vol 4 (2)
pp. 137-150
Author(s):
Twana Saeed Ali
Tugberk Kaya

Big Data refers to large volumes of information, varying from pictures, videos, texts, audio, and other heterogeneous data. In recent years, the amount of such big data has exceeded the capacity of online and cloud storage systems. The volume of data collected has doubled repeatedly in recent years and now reaches the exabyte range annually. This paper focuses on the major issues and opportunities, as well as big data storage, drawing on academic tools and earlier scholarly research on big data analysis. The modern learning environment (MLE) must be understood in order to know how it supports learning in areas of big data such as university education systems. The use of online resources and web pages via laptops and mobile phones needs to be understood as part of the attempt to integrate the modern learning environment and improve teaching in international business. Big data can be fine-tuned and used to create new online learning programmes. Data collected by government departments, universities, and institutions could feed a new, innovative learning system such as the MLE, which has both a passive and an active character, i.e., it can be accessed anywhere at any time. This would also help to minimize extended classroom activities, because students would have controlled access to online knowledge from their homes.

Author(s):  
M. Sandeep Kumar
Prabhu J.

This chapter describes how big data consists of an extreme volume of data, high velocity, and more complex, variable data that demands changes in current technology for capturing, storing, distributing, managing, and analyzing data. Businesses face growing struggles in identifying a pragmatic approach to capturing data about customers, products, and services. Big data is used mainly with analytical methods, and unstructured data contributes around 95% of big data. The analytical approach depends on heterogeneous and unstructured data such as text, audio, and video formats, and it demands new, effective tools for predictive analysis of big data in unstructured formats. This chapter explains big data and its characteristics, comprising Volume, Velocity, Variety, Variability, and Value, and surveys recent trends in big data development applied in real-time application domains such as health care, agriculture, and education.


2016
Vol 2016
pp. 1-8
Author(s):
Fanyu Bu
Zhikui Chen
Peng Li
Tong Tang
Ying Zhang

With the development of the Internet of Everything, including the Internet of Things, the Internet of People, and the Industrial Internet, big data is being generated. Clustering is a widely used technique for big data analytics and mining. However, most current algorithms are not effective at clustering the heterogeneous data that is prevalent in big data. In this paper, we propose a high-order CFS algorithm (HOCFS) to cluster heterogeneous data by combining the CFS clustering algorithm with the dropout deep learning model, whose functionality rests on three pillars: (i) an adaptive dropout deep learning model to learn features from each type of data, (ii) a feature tensor model to capture the correlations of heterogeneous data, and (iii) a tensor distance-based high-order CFS algorithm to cluster heterogeneous data. Furthermore, we verify the proposed algorithm on different datasets by comparison with two other clustering schemes, HOPCM and CFS. Results confirm the effectiveness of the proposed algorithm in clustering heterogeneous data.
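To make the pipeline concrete, here is a minimal sketch of the clustering stage, assuming feature tensors have already been learned by the deep model; the Gaussian metric matrix, the toy data, and all parameter values are illustrative assumptions rather than the authors' implementation:

```python
# Sketch: density-peak (CFS) clustering with a tensor distance, in the
# spirit of HOCFS. The adaptive-dropout feature-learning stage is omitted.
import numpy as np

def tensor_distance_matrix(features, shape, sigma=1.0):
    """Pairwise tensor distances d(x,y) = sqrt((x-y)^T G (x-y)).

    G couples tensor elements by their coordinate distance, so neighbouring
    cells of the feature tensor contribute jointly (an assumed Gaussian form).
    """
    coords = np.indices(shape).reshape(len(shape), -1).T   # element positions
    diff = coords[:, None, :] - coords[None, :, :]
    G = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))
    X = features.reshape(len(features), -1)                # flatten tensors
    D = np.zeros((len(X), len(X)))
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            v = X[i] - X[j]
            D[i, j] = D[j, i] = np.sqrt(v @ G @ v)
    return D

def cfs_cluster(D, dc, n_clusters):
    """Clustering by fast search and find of density peaks (CFS)."""
    rho = np.sum(np.exp(-(D / dc) ** 2), axis=1) - 1       # local density
    order = np.argsort(-rho)                               # densest first
    delta = np.zeros(len(D))
    nearest = np.zeros(len(D), dtype=int)
    delta[order[0]] = D[order[0]].max()
    for k, i in enumerate(order[1:], start=1):
        higher = order[:k]                                 # denser points
        j = higher[np.argmin(D[i, higher])]
        delta[i], nearest[i] = D[i, j], j
    # Centers = points with both high density and high separation; assumes
    # the densest point is selected (true for well-separated data).
    centers = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(len(D), -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                                        # descend by density
        if labels[i] < 0:
            labels[i] = labels[nearest[i]]
    return labels

# Toy usage: 60 samples, each a 2x3 feature tensor, two separated groups.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(m, 0.3, (30, 2, 3)) for m in (0.0, 3.0)])
D = tensor_distance_matrix(feats, (2, 3))
print(cfs_cluster(D, dc=np.percentile(D[D > 0], 2), n_clusters=2))
```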


2021
Vol 12 (1)
Author(s):
Marco Aiello
Giuseppina Esposito
Giulio Pagliari
Pasquale Borrelli
Valentina Brancato
...  

The diagnostic imaging field is experiencing considerable growth, accompanied by the production of massive amounts of data. The lack of standardization and privacy concerns are considered the main barriers to big data capitalization. This work aims to verify whether the advanced features of the DICOM standard, beyond imaging data storage, are effectively used in research practice. This issue is analyzed by investigating publicly shared medical imaging databases and assessing how fully the most common medical imaging software tools support DICOM. To this end, 100 public databases and ten medical imaging software tools were selected and examined using a systematic approach. In particular, the DICOM fields related to privacy, segmentation, and reporting were assessed in the selected databases; the software tools were evaluated for reading and writing the same DICOM fields. From our analysis, less than a third of the databases examined use the DICOM format to record meaningful information for managing the images. Regarding software, the vast majority of tools do not allow the management, reading, and writing of some or all of these DICOM fields. Surprisingly, of the chest computed tomography datasets shared to address the COVID-19 emergency, only two out of 12 were released in DICOM format. Our work shows how DICOM can potentially fully support big data management; however, further efforts are still needed from the scientific and technological community to promote the use of the existing standard, encouraging data sharing and interoperability for a concrete development of big data analytics.
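As an illustration of the per-file checks such an audit involves, the following sketch reads a DICOM header with pydicom and inspects a few attributes from the privacy, segmentation, and reporting groups; the specific fields chosen are common examples of those groups, not the paper's exact assessment list:

```python
# Sketch: auditing DICOM headers for de-identification, segmentation and
# reporting information, using pydicom.
import pydicom

def audit_dicom(path):
    # Header only: skip pixel data for speed.
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    report = {
        # Privacy-related attributes (DICOM de-identification module).
        "identity_removed": ds.get("PatientIdentityRemoved", None),
        "deid_method": ds.get("DeidentificationMethod", None),
        # Segmentations and structured reports are separate DICOM objects;
        # the Modality tag tells them apart ("SEG" / "SR").
        "modality": ds.get("Modality", None),
        "sop_class": ds.get("SOPClassUID", None),
    }
    report["is_segmentation"] = report["modality"] == "SEG"
    report["is_structured_report"] = report["modality"] == "SR"
    return report

# Usage (hypothetical file path):
# print(audit_dicom("case001/slice_0001.dcm"))
```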


Author(s):  
Guowei Cai
Sankaran Mahadevan

This manuscript explores the application of big data analytics in online structural health monitoring. As smart sensor technology progresses and low-cost online monitoring becomes increasingly feasible, large quantities of highly heterogeneous data can be acquired during monitoring, exceeding the capacity of traditional data analytics techniques. This paper investigates big data techniques to handle the high-volume data obtained in structural health monitoring. In particular, we investigate the analysis of infrared thermal images for structural damage diagnosis. We explore the MapReduce technique to parallelize the data analytics and efficiently handle the high volume, high velocity, and high variety of information. In our study, MapReduce is implemented on the Spark platform, and image processing functions such as the uniform filter and Sobel filter are wrapped in the mappers. The methodology is illustrated with concrete slabs, using actual experimental data with induced damage.
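A minimal sketch of this mapper pattern is shown below, with the uniform and Sobel filters wrapped in a Spark map over simulated thermal frames; the frame source, image size, and the per-frame feature are illustrative assumptions, not the authors' code:

```python
# Sketch: image filters as Spark mappers over a batch of thermal frames.
import numpy as np
from pyspark import SparkContext
from scipy.ndimage import uniform_filter, sobel

def process_frame(frame):
    """Mapper: denoise a thermal image, then extract edge magnitude."""
    frame_id, img = frame
    smoothed = uniform_filter(img, size=3)          # suppress sensor noise
    gx, gy = sobel(smoothed, axis=0), sobel(smoothed, axis=1)
    edges = np.hypot(gx, gy)                        # gradient magnitude
    # One scalar feature per frame; real damage diagnosis would use more.
    return frame_id, float(edges.mean())

if __name__ == "__main__":
    sc = SparkContext(appName="thermal-image-analytics")
    rng = np.random.default_rng(1)
    # Stand-in for a stream of infrared frames: (id, 64x64 array).
    frames = [(i, rng.random((64, 64))) for i in range(100)]
    results = sc.parallelize(frames).map(process_frame).collect()
    print(results[:5])
    sc.stop()
```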


2019
Vol 8 (3)
pp. 4384-4392

Big data is being generated in a wide variety of formats at an exponential rate. Big data analytics deals with processing and analyzing voluminous data to provide useful insight for guided decision making. Traditional data storage and management tools are not well equipped to handle big data and its applications. Apache Hadoop is a popular open-source platform that supports the storage and processing of extremely large datasets. For the purposes of big data analytics, the Hadoop ecosystem provides a variety of tools. However, there is a need to select the tool that is best suited to a specific requirement of big data analytics. Each tool has its own advantages and drawbacks relative to the others; some have overlapping business use cases but differ in critical functional areas. So, there is a need to consider the trade-offs between usability and suitability when selecting a tool from the Hadoop ecosystem. This paper identifies the requirements for Big Data Analytics (BDA) and maps the tools of the Hadoop framework that are best suited to them. For this, we have categorized Hadoop tools according to their functionality and usage. Different Hadoop tools are discussed from the users' perspective, along with their pros and cons, if any. Also, for each identified category, a comparison of Hadoop tools based on important parameters is presented. The tools have been thoroughly studied and analyzed based on their suitability for the different requirements of big data analytics. A mapping of big data analytics requirements to the Hadoop tools has been established for use by data analysts and predictive modelers.
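For flavour, a toy version of such a mapping is sketched below; the pairings reflect widely known roles of Hadoop-ecosystem tools, deliberately simplified to one tool per requirement, and are not the paper's actual table:

```python
# Sketch: requirement-to-tool lookup of the kind the paper formalizes.
BDA_REQUIREMENT_TO_TOOL = {
    "distributed storage":     "HDFS",
    "batch processing":        "MapReduce",
    "in-memory processing":    "Spark",
    "SQL-like querying":       "Hive",
    "dataflow scripting":      "Pig",
    "random real-time reads":  "HBase",
    "streaming ingestion":     "Kafka",
    "log/data ingestion":      "Flume",
    "RDBMS import/export":     "Sqoop",
    "workflow scheduling":     "Oozie",
    "cluster coordination":    "ZooKeeper",
}

def suggest_tool(requirement):
    """Return a commonly used tool for a given analytics requirement."""
    return BDA_REQUIREMENT_TO_TOOL.get(requirement, "no single default")

print(suggest_tool("SQL-like querying"))  # -> Hive
```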


2021
Vol 15 (2)
pp. 19-28
Author(s):
Alexander I. Kovalenko

The article is devoted to the application of the essential facilities doctrine to big data for the purpose of antitrust regulation of digital markets. The concept of big data is examined; it currently has no normative legal definition in the legislation of the Russian Federation. The article identifies the differences between big data and regular data, and between the organizational, managerial, economic, and legal content of these phenomena of the digital age. Based on an analysis of scientific publications and the enforcement and judicial practice of antimonopoly agencies in Russia and abroad, the author systematizes the ways big data is used in competition in digital markets. The article also sets out the content of the essential facilities doctrine in competition law and describes the key ideas in the scholarly debate on applying it to big data. The article contains the author's suggestions and recommendations for correctly dividing the big data technology complex into competitive (AI) and infrastructure (data sets) components. If necessary, the essential facilities doctrine can be extended to the computing power, server data storage, and (unstructured) data sets of the digital giants. The author believes that applying the essential facilities doctrine to big data can eliminate the abuse of dominance by tech giants, but that its application alone cannot systematically change the state of affairs in digital markets. To remove the incentives of Internet giants to abuse dominance with the help of big data, it is necessary to separate out business units that would exclusively provide non-discriminatory access to big data management services, while the competencies associated with developing and applying big data analytics (AI) technologies should be assigned to business units that would compete independently in derivative digital markets.


2019
Vol 59 (6)
pp. 415-429
Author(s):
JUAN-PEDRO CABRERA-SÁNCHEZ
ÁNGEL F VILLAREJO-RAMOS

With the total quantity of data doubling every two years, the low price of computing and data storage makes Big Data analytics (BDA) adoption desirable for companies as a tool to gain competitive advantage. Given the availability of free software, why have some companies failed to adopt these techniques? To answer this question, we extend the unified theory of acceptance and use of technology (UTAUT) model, adapted to the BDA context, by adding two variables: resistance to use and perceived risk. We used the level of implementation of these techniques to divide companies into users and non-users of BDA. The structural models were evaluated by partial least squares (PLS). The results show that the importance of good infrastructure exceeds the difficulties companies face in implementing it. While companies planning to use Big Data expect strong results, current users are more skeptical about its performance.
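As a rough illustration of the extended model structure, the sketch below specifies the UTAUT paths plus the two added variables on simulated data. Note the paper estimates its models with PLS; semopy fits covariance-based SEM instead and is used here only as an accessible stand-in, with arbitrary variable names and coefficients:

```python
# Sketch: UTAUT-style path model extended with resistance and perceived risk.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "perf_expectancy":   rng.normal(size=n),
    "effort_expectancy": rng.normal(size=n),
    "social_influence":  rng.normal(size=n),
    "facilitating_cond": rng.normal(size=n),
    "resistance":        rng.normal(size=n),
    "perceived_risk":    rng.normal(size=n),
})
# Simulated outcomes with arbitrary true effects (for illustration only).
df["intention"] = (0.5 * df.perf_expectancy + 0.3 * df.effort_expectancy
                   + 0.2 * df.social_influence - 0.3 * df.resistance
                   - 0.2 * df.perceived_risk + 0.5 * rng.normal(size=n))
df["use"] = 0.6 * df.intention + 0.2 * df.facilitating_cond + 0.5 * rng.normal(size=n)

# Structural part: UTAUT relationships plus the two added antecedents.
desc = """
intention ~ perf_expectancy + effort_expectancy + social_influence + resistance + perceived_risk
use ~ intention + facilitating_cond
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```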


Author(s):  
Balasree K
Dharmarajan K

Against the backdrop of the rapid development of Big Data technology in recent years, this paper discusses the role that Machine Learning (ML) methods and algorithms play in Big Data processing and Big Data analytics, two evolving fields of computing that complement each other. The rapid growth of such data demands solutions that can handle datasets of very high velocity and variety, gain knowledge from them, and extract value. Big data analytics involves identifying the appropriate data storage and computational outline, enhanced by scalable machine learning algorithms; the analytics then reveal hidden patterns and correlations in massive amounts of data. This analytic information helps organizations and companies gain deeper knowledge, develop, and get ahead of the competition, and enables accurate predictions over the data. This paper presents a detailed review of state-of-the-art developments and an overview of the advantages and challenges of machine learning algorithms for big data analytics.
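One concrete example of the scalable-ML pattern this literature reviews is out-of-core (incremental) learning, sketched below with scikit-learn's SGDClassifier on a simulated data stream; chunk sizes, features, and labels are assumptions for illustration, and SGD is one common choice rather than the only one:

```python
# Sketch: incremental learning over data too large to load at once.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(7)
d = 20
w_true = rng.normal(size=d)                 # hidden ground-truth weights
clf = SGDClassifier(loss="log_loss")        # logistic regression via SGD
classes = np.array([0, 1])                  # must be declared up front

def stream_chunks(n_chunks=50, chunk=1_000):
    """Stand-in for reading successive chunks of a huge dataset."""
    for _ in range(n_chunks):
        X = rng.normal(size=(chunk, d))
        y = (X @ w_true + rng.normal(size=chunk) > 0).astype(int)
        yield X, y

for X, y in stream_chunks():
    clf.partial_fit(X, y, classes=classes)  # one incremental update per chunk

X_test, y_test = next(stream_chunks(n_chunks=1))
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```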


Author(s):  
Janet Chan

Internet and telecommunications, ubiquitous sensing devices, and advances in data storage and analytic capacities have heralded the age of Big Data, where the volume, velocity, and variety of data not only promise new opportunities for the harvesting of information, but also threaten to overload existing resources for making sense of this information. The use of Big Data technology for criminal justice and crime control is a relatively new development. Big Data technology has overlapped with criminology in two main areas: (a) Big Data is used as a type of data in criminological research, and (b) Big Data analytics is employed as a predictive tool to guide criminal justice decisions and strategies. Much of the debate about Big Data in criminology is concerned with legitimacy, including privacy, accountability, transparency, and fairness. Big Data is often made accessible through data visualization. Big Data visualization is a performance that simultaneously masks the power of commercial and governmental surveillance and renders information political. The production of visuality operates in an economy of attention. In crime control enterprises, future uncertainties can be masked by affective triggers that create an atmosphere of risk and suspicion. There have also been efforts to mobilize data to expose harms and injustices and garner support for resistance. While Big Data and visuality can perform affective modulation in the race for attention, the impact of data visualization is not always predictable. By removing the visibility of real people or events and by aestheticizing representations of tragedies, data visualization may achieve further distancing and deadening of conscience in situations where graphic photographic images might at least garner initial emotional impact.

