THE CHALLENGE OF MANAGING AND ANALYZING BIG DATA

2014 ◽  
pp. 204-209
Author(s):  
Hermann Heßling

The amounts of data produced in science are growing exponentially. Traditional methods for storing and maintaining this enormous flood of data no longer seem sufficient. The complexity of data that is increasingly distributed worldwide poses a considerable challenge for its analysis. According to Alex Szalay, so much data will soon be produced that it cannot even be stored and maintained anymore; instead, the data have to be analyzed in real time in order to extract the relevant information. An outline of the project Large-Scale Data Management and Analysis (LSDMA) is given. The status of our research group on distributed real-time computing is reviewed. Finally, a novel approach to time-dependent image processing based on local thermodynamical methods is presented.

Author(s):  
Takeshi Shimmura ◽  
Takeshi Takenaka ◽  
Motoyuki Akamatsu

In full-service restaurants, it is important to share customer information among staff members in real time in order to perform complicated operations. Conventional point-of-sale (POS) systems in restaurants were developed to improve the verification and transmission of order information passed from the dining hall to the kitchen. However, POS systems have remained insufficient for sharing customers' order information among many staff members in different positions. This paper introduces an information-sharing system for full-service restaurants based on an advanced POS system with which staff members can share order information in real time. Using this system, kitchen staff members can grasp the total number of orders and the elapsed time for the preparation of each order, and service staff members can quickly grasp the status of each customer. Deployed in a large-scale restaurant, the system makes preparation processes more efficient and reduces customer complaints.


Author(s):  
Jyotsna Talreja Wassan

The digitization of various domains, including health care, has brought remarkable changes. Electronic Health Records (EHRs) have emerged for maintaining and analyzing real health care data online, unlike traditional paper-based systems, accelerating the clinical environment toward better health care. These digitized health care records are a form of Big Data, not only because they are voluminous but also because they are real-time, dynamic, sporadic, and heterogeneous in nature. It is desirable to extract relevant information from EHRs to support the various stakeholders of the clinical environment. This chapter discusses the role, scope, and impact of the Big Data paradigm on health care.


Author(s):  
Demetrio P. Zourarakis

Future humans interacting with water in Kentucky will bring to their experience not only the panoply of expectations, assumptions, background knowledge, and past experiences but also ultra-smart gadgetry which will shape the outcome of the event. The technoscapes inhabited by human communities and individuals are superimposed on the natural rhythms which hydrology obeys, providing opportunities for sensorial fusion. An ongoing evolutionary explosion in the diversity, mobility, and interconnectedness of sensors is manifesting itself as the Internet of Things, all denizens of the "Cloud", allowing the citizen scientist to easily generate georeferenced sensor information. This augmented, hybrid sensorial ecosystem challenges us to rethink how we tap into big data, mostly unstructured, representing the status of water systems, and how we extract relevant information.


2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Qi Zhao ◽  
Shuchang Lyu ◽  
Boxue Zhang ◽  
Wenquan Feng

Convolutional neural networks (CNNs) are becoming increasingly popular and now serve as a standard feature extractor in image processing, big data processing, fog computing, and related fields. CNNs usually consist of several basic units, such as convolutional, pooling, and activation units. Conventional pooling in CNNs refers to 2×2 max-pooling and average-pooling, applied after the convolutional or ReLU layers. In this paper, we propose a Multiactivation Pooling (MAP) method to make CNNs more accurate on classification tasks without increasing depth or the number of trainable parameters. We add more convolutional layers before each pooling layer and expand the pooling region to 4×4, 8×8, 16×16, and even larger. When performing large-scale subsampling, we pick the top-k activations, sum them up, and constrain them by a hyperparameter σ. We pick VGG, ALL-CNN, and DenseNets as our baseline models and evaluate the proposed MAP method on the benchmark datasets CIFAR-10, CIFAR-100, SVHN, and ImageNet. The classification results are competitive.
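The pooling step described in the abstract can be sketched as follows: within each enlarged pooling window, the k largest activations are summed and the sum is constrained by σ. This is a minimal illustrative sketch, not the authors' implementation; in particular, treating σ as a plain scaling factor on the top-k sum is an assumption, and the paper's exact constraint may differ.

```python
def map_pool(x, region=4, k=4, sigma=1.0):
    """Sketch of Multiactivation Pooling (MAP) on a 2-D feature map.

    For each non-overlapping region x region window, pick the top-k
    activations, sum them, and scale the sum by sigma (assumed form
    of the constraint). Compare: 2x2 max-pooling is region=2, k=1.
    """
    h, w = len(x), len(x[0])
    out = []
    for i in range(0, h - region + 1, region):
        row = []
        for j in range(0, w - region + 1, region):
            # Flatten the pooling window and keep its k largest values.
            window = [x[i + di][j + dj]
                      for di in range(region) for dj in range(region)]
            topk = sorted(window)[-k:]
            row.append(sigma * sum(topk))  # constrained top-k sum
        out.append(row)
    return out
```

With a 4×4 feature map holding the values 0–15, `map_pool(fmap, region=4, k=2, sigma=1.0)` subsamples the whole map to a single value, the sum of the two largest activations (14 + 15 = 29).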


2021 ◽  
Author(s):  
Florian Krause ◽  
Nikolaos Kogias ◽  
Martin Krentz ◽  
Michael Luehrs ◽  
Rainer Goebel ◽  
...  

It has recently been shown that acute stress affects the allocation of neural resources between large-scale brain networks, and in particular the balance between the executive control network (ECN) and the salience network (SN). Maladaptation of this dynamic resource-reallocation process is thought to play a major role in stress-related psychopathology, suggesting that stress resilience may be determined by the retained ability to adaptively reallocate neural resources between these two networks. Actively training this ability could hence be a promising way to increase resilience in individuals at risk of developing stress-related symptomatology. Using real-time functional Magnetic Resonance Imaging (fMRI), the current study investigated whether individuals can learn to self-regulate stress-related large-scale network balance. Participants were engaged in a bidirectional and implicit real-time fMRI neurofeedback paradigm in which they were intermittently provided with a visual representation of the difference signal between the average activation of the SN and the ECN, and tasked with attempting to self-regulate this signal. Our results show that, given feedback about their performance over three training sessions, participants were able to (1) learn strategies to differentially control the balance between SN and ECN activation on demand, and (2) successfully transfer this newly learned skill to a situation in which they (a) no longer received any feedback and (b) were exposed to an acute stressor in the form of the prospect of a mild electric stimulation. The current study hence constitutes an important first successful demonstration of neurofeedback training based on stress-related large-scale network balance: a novel approach with the potential to train control over the central response to real-life stressors and to build the foundation for future clinical interventions aimed at increasing resilience.
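The feedback signal described above is a difference of network averages, which can be sketched in a few lines. This is an illustrative reconstruction only, not the authors' analysis pipeline; the inputs stand in for whatever per-network activation summaries the real-time fMRI processing produces.

```python
def network_balance_feedback(sn_activation, ecn_activation):
    """Sketch of the neurofeedback signal: the difference between the
    average activation of the salience network (SN) and the executive
    control network (ECN). Positive values indicate relative SN
    dominance, negative values relative ECN dominance."""
    sn_mean = sum(sn_activation) / len(sn_activation)
    ecn_mean = sum(ecn_activation) / len(ecn_activation)
    return sn_mean - ecn_mean
```

For example, SN activations averaging 3.0 against ECN activations averaging 1.0 yield a feedback value of 2.0, which the paradigm would render visually for the participant to regulate.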


In the Internet of Things (IoT), data analytics supports multiple applications. These applications gather data from different environments; the gathered data may be homogeneous or heterogeneous, but most data collected from multiple environments are heterogeneous, and the tasks of gathering, processing, storing, and analyzing the data remain challenging. Securing all of these tasks is also challenging because of untrusted networks and big data. Big data management in the ever-expanding network raises several non-trivial concerns regarding data collection, efficient data processing, analytics, and security. Moreover, the scenarios above depend on large-scale sensor deployment. Sensors continuously transmit data to clouds for real-time use, which can raise privacy-disclosure issues because IoT devices may gather sensitive private information. In this context, we propose a two-layer model for analyzing IoT data collected from multiple applications. The first layer gathers data from multiple environments and acts as a service-oriented interface for data ingestion. The second layer is responsible for storing and analyzing the data securely. The proposed solution is implemented using open-source components.
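The two-layer architecture can be sketched as two cooperating components: an ingestion layer that normalizes heterogeneous records behind a service-oriented interface, and a storage layer that keeps records in protected form until analysis. All names are hypothetical, and the XOR transform below is a deliberately trivial placeholder for a real cipher, used only to show where encryption sits in the design.

```python
import json


class SecureStorageLayer:
    """Layer 2 sketch: stores records in obfuscated form, decrypts for analysis.

    The XOR transform is a placeholder for real encryption (e.g. AES).
    """

    def __init__(self, key):
        self.key = key
        self._records = []

    def store(self, envelope):
        raw = json.dumps(envelope).encode()
        self._records.append(self._xor(raw))  # protect data at rest

    def analyze(self):
        # Stand-in analytics: decrypt and count records per source.
        counts = {}
        for blob in self._records:
            env = json.loads(self._xor(blob).decode())
            counts[env["source"]] = counts.get(env["source"], 0) + 1
        return counts

    def _xor(self, data):
        # Symmetric placeholder transform: applying it twice restores the data.
        return bytes(b ^ self.key[i % len(self.key)] for i, b in enumerate(data))


class IngestionLayer:
    """Layer 1 sketch: service-oriented interface gathering heterogeneous data."""

    def __init__(self, storage):
        self.storage = storage

    def ingest(self, source, record):
        # Wrap heterogeneous payloads in a common envelope before storage.
        self.storage.store({"source": source, "payload": record})
```

A usage sketch: create the storage layer with a key, feed it through the ingestion interface from several hypothetical sensor sources, then run the per-source analysis on the securely stored records.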


2019 ◽  
Vol IV (II) ◽  
pp. 1-6
Author(s):  
Mark Perkins

The huge proliferation of textual (and other) data in digital and organisational sources has led to new techniques of text analysis. The potential thereby unleashed may be underpinned by further theoretical developments of the theory of Discourse Stream Analysis (DSA), as presented here. These include the notion of change in the discourse stream in terms of discourse-stream fronts, linguistic elements evolving in real time, and notions of time itself in terms of relative speed, subject orientation, and perception. Big data has also given rise to fake news: the manipulation of messages on a large scale. Fake news is conveyed in fake discourse streams and has led to a new field of description and analysis.

