Cloud Security in Middleware Architecture

2021 ◽  
Author(s):  
Jagdish Chandra Patni

The new Internet of Things (IoT) has increased the need for computing, connectivity, and storage capacities as the amount of sensitive data grows. Since it provides on-demand access to a shared pool of resources such as processors, storage, software, and services, cloud computing may seem a convenient solution. However, this comes at a cost: excessive communications burden not only the core network but also the cloud data centre. It is therefore critical to consider appropriate approaches and security middleware solutions. In this chapter, we define a middleware architecture that addresses security concerns and revisit the general concept of cloud computing to achieve a higher level of security. Because it is designed to pre-process data at the network's edge, this security middleware functions as a smart gateway. Depending on the information obtained, data can be processed and stored locally on fog nodes or sent to the cloud for further processing. Furthermore, the devices communicate via the middleware, which gives them access to more computing power and improved security capabilities, allowing them to conduct safe communications. We discuss these concepts in detail and explain how this approach copes with some of the most relevant security challenges.
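A minimal Python sketch of the gateway routing decision described above; the threshold, the `urgency` field, and the in-memory stores are hypothetical placeholders for illustration, not part of the chapter's actual middleware:

```python
import json

# Hypothetical threshold: readings at or above it are treated as sensitive/urgent
# and handled locally on the fog node instead of being forwarded to the cloud.
URGENCY_THRESHOLD = 0.8

def classify_reading(reading: dict) -> str:
    """Decide whether an IoT reading is processed at the fog layer or forwarded."""
    if reading.get("urgency", 0.0) >= URGENCY_THRESHOLD:
        return "fog"    # pre-process and store locally at the network edge
    return "cloud"      # forward to the cloud data centre for heavy processing

def gateway_handle(raw_message: str, fog_store: list, cloud_queue: list) -> None:
    """Security-middleware entry point: validate the input, then route the reading."""
    try:
        reading = json.loads(raw_message)   # basic input validation at the edge
    except json.JSONDecodeError:
        return                              # drop malformed traffic early
    if classify_reading(reading) == "fog":
        fog_store.append(reading)
    else:
        cloud_queue.append(reading)
```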

Author(s):  
Chuntao Ding ◽  
Ao Zhou ◽  
Jie Huang ◽  
Ying Liu ◽  
Shangguang Wang

Content delivery networks (CDNs) have gained increasing popularity in recent years for facilitating content delivery. Most existing CDN-based works first upload the content generated by mobile users to the cloud data center. Then, the cloud data center delivers the content to the proxy server. Finally, the mobile users request the required content from the proxy server. However, uploading all the collected content to the cloud data center increases the pressure on the core network. In addition, it wastes a lot of bandwidth because most of the content does not have to be uploaded. To make up for the shortcomings of existing CDN-based works, this article proposes an edge content delivery and update (ECDU) framework based on a mobile edge computing architecture. In the ECDU framework, we deploy a number of content servers to store raw content collected from mobile users, and cache pools at the edge of the network to store frequently requested content. Thus, it is not necessary to upload all content collected by mobile users to the cloud data center, thereby alleviating the pressure on the core network. Based on content popularity and cache pool ranking, we also propose edge content delivery (ECD) and edge content update (ECU) schemes. The ECD scheme delivers content from the cloud data center to a cache pool, and the ECU scheme migrates content to appropriate cache pools according to its request frequency and the cache pool ranking. Finally, a representative case study is provided and several open research issues are discussed.
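As a rough illustration of the popularity-based caching idea behind ECD/ECU, the following Python sketch keeps the most frequently requested items in an edge cache pool and falls back to the cloud on a miss; the class and the `fetch_from_cloud` callback are illustrative inventions, not the paper's implementation:

```python
from collections import Counter

class CachePool:
    """Toy edge cache pool holding the most frequently requested content items."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = {}             # content_id -> content
        self.requests = Counter()   # content_id -> request frequency

    def request(self, content_id: str, fetch_from_cloud):
        """Serve content from the edge if cached, otherwise fetch and maybe cache it."""
        self.requests[content_id] += 1
        if content_id in self.items:
            return self.items[content_id]        # edge hit, core network untouched
        content = fetch_from_cloud(content_id)   # miss: pull from the cloud data center
        self._maybe_cache(content_id, content)
        return content

    def _maybe_cache(self, content_id, content):
        """Keep only the most popular items; evict the least requested one when full."""
        if len(self.items) < self.capacity:
            self.items[content_id] = content
            return
        coldest = min(self.items, key=lambda cid: self.requests[cid])
        if self.requests[content_id] > self.requests[coldest]:
            del self.items[coldest]
            self.items[content_id] = content
```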


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Bakhe Nleya ◽  
Philani Khumalo ◽  
Andrew Mutsvangwa

Heterogeneous IoT-enabled networks generally accommodate both jitter-tolerant and jitter-intolerant traffic. Optical Burst Switched (OBS) backbone networks handle the resultant traffic volumes by transmitting them in large chunks called bursts. Because of the lack of, or limited, buffering capabilities within the core network, burst contentions may frequently occur and thus affect the overall supportable quality of service (QoS). Burst contention in the core network is generally characterized by frequent burst losses as well as differential delays, especially when traffic levels surge. Burst contention can be resolved in the core network by partial buffering using fiber delay lines (FDLs), wavelength conversion using wavelength converters (WCs), or deflection routing. In this paper, we assume that burst contention is resolved by deflecting contending bursts to other, less congested paths, even though this may lead to differential delays incurred by bursts as they traverse the network. These delays contribute to undesirable jitter that may ultimately compromise overall QoS. Noting that jitter is mostly caused by deflection routing, which itself results from poor wavelength and route assignment, the paper proposes a controlled deflection routing (CDR) and wavelength assignment scheme that deflects bursts to alternate paths only after preset controller buffer thresholds are exceeded. In this way, bursts (or burst fragments) intended for a common destination are most likely to be routed on the same or least-cost path end-to-end. We describe the scheme and compare its performance to existing approaches. Overall, both analytical and simulation results show that the proposed scheme lowers both congestion on deflection routes and jitter, thereby also improving throughput.
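A toy Python model of the controlled-deflection rule, deflecting a burst only once a preset buffer threshold on the primary port is exceeded; the data structures and thresholds are assumed for illustration and do not reproduce the paper's analytical model:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class OutputPort:
    """Simplified OBS core-node output port with a small burst buffer."""
    buffer_limit: int
    deflect_threshold: int            # bursts are deflected only above this occupancy
    queue: deque = field(default_factory=deque)

def route_burst(burst, primary: OutputPort, deflection: OutputPort) -> str:
    """Keep bursts on the least-cost path until the preset threshold is exceeded."""
    if len(primary.queue) < primary.deflect_threshold:
        primary.queue.append(burst)
        return "primary"              # same end-to-end path, no extra jitter
    if len(deflection.queue) < deflection.buffer_limit:
        deflection.queue.append(burst)
        return "deflected"            # alternate path accepted the contending burst
    return "dropped"                  # both paths congested: the burst is lost
```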


2021 ◽  
pp. 0271678X2110029
Author(s):  
Mitsouko van Assche ◽  
Elisabeth Dirren ◽  
Alexia Bourgeois ◽  
Andreas Kleinschmidt ◽  
Jonas Richiardi ◽  
...  

After stroke restricted to the primary motor cortex (M1), it is uncertain whether network reorganization associated with recovery involves the periinfarct or more remote regions. We studied 16 patients with focal M1 stroke and hand paresis. Motor function and resting-state MRI functional connectivity (FC) were assessed at three time points: acute (<10 days), early subacute (3 weeks), and late subacute (3 months). FC correlates of recovery were investigated at three spatial scales: (i) ipsilesional non-infarcted M1, (ii) the core motor network (M1, premotor cortex (PMC), supplementary motor area (SMA), and primary somatosensory cortex), and (iii) an extended motor network including all regions structurally connected to the upper limb representation of M1. Hand dexterity was impaired only in the acute phase (P = 0.036). At the small spatial scale, clinical recovery was more frequently associated with connections involving ipsilesional non-infarcted M1 (odds ratio = 6.29; P = 0.036). At the larger scale, recovery correlated with increased FC strength in the core network compared to the extended motor network (rho = 0.71; P = 0.006). These results suggest that FC changes associated with motor improvement involve the perilesional M1 and do not extend beyond the core motor network. Core motor regions, and more specifically ipsilesional non-infarcted M1, could hence become primary targets for restorative therapies.


2021 ◽  
Vol 13 (1) ◽  
pp. 12
Author(s):  
Juan Wang ◽  
Yang Yu ◽  
Yi Li ◽  
Chengyang Fan ◽  
Shirong Hao

Network function virtualization (NFV) provides flexible and scalable network functions for emerging platforms, such as cloud computing, edge computing, and IoT platforms. However, it faces additional security challenges, such as tampering with network policies and leaking sensitive processing state, because it runs in a shared open environment and lacks the protection of proprietary hardware. Intel® Software Guard Extensions (SGX) currently provides a promising way to build a secure and trusted VNF (virtual network function) by isolating the VNF or its sensitive data in an enclave. However, directly placing multiple VNFs in a single enclave forfeits the scalability advantage of NFV. This paper combines SGX and Click technology to design a virtual security function architecture based on multiple enclaves. In our design, the sensitive modules of a VNF are placed in different enclaves that communicate via local attestation. The system can freely combine these modules according to user requirements, increasing scalability while protecting the running state. In addition, we design a new hot-swapping scheme that lets the system modify a configured function dynamically at runtime, so that running VNFs do not need to stop when their functions are modified. We implement an IDS (intrusion detection system) based on our architecture to verify its feasibility and evaluate its performance. The results show that the overhead introduced by the system architecture is within an acceptable range.
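To make the module-composition and hot-swapping idea concrete, here is a conceptual Python sketch in which each module stands in for logic that would live in its own SGX enclave; real enclave creation and local attestation are omitted, and all names are hypothetical rather than the paper's API:

```python
from typing import Callable, Dict, List

# Each "module" stands in for a packet-processing step that the paper would isolate
# in its own SGX enclave; enclave calls and local attestation are out of scope here.
Module = Callable[[bytes], bytes]

class VNFChain:
    """Composable processing chain whose configuration can change at runtime."""
    def __init__(self, registry: Dict[str, Module], order: List[str]):
        self.registry = registry   # all available (enclave-backed) modules
        self.order = order         # currently active chain

    def process(self, packet: bytes) -> bytes:
        for name in self.order:
            packet = self.registry[name](packet)   # one isolated step per module
        return packet

    def hot_swap(self, new_order: List[str]) -> None:
        """Replace the active module chain without stopping the running VNF."""
        assert all(name in self.registry for name in new_order)
        self.order = new_order

# Usage sketch: chain a decryptor and a pattern matcher, then reconfigure at runtime.
chain = VNFChain({"decrypt": lambda p: p, "match": lambda p: p}, ["decrypt", "match"])
chain.process(b"packet bytes")
chain.hot_swap(["match"])
```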


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1339 ◽  
Author(s):  
Hasan Islam ◽  
Dmitrij Lagutin ◽  
Antti Ylä-Jääski ◽  
Nikos Fotiou ◽  
Andrei Gurtov

The Constrained Application Protocol (CoAP) is a specialized web transfer protocol intended for constrained networks and devices. CoAP and its extensions (e.g., CoAP observe and group communication) offer the potential for developing novel applications in the Internet of Things (IoT). However, a full-fledged CoAP-based application may require significant computing capability, power, and storage capacity in IoT devices. To address these challenges, we present the design, implementation, and experimental evaluation of a CoAP handler that provides transparent CoAP services through an Information-Centric Networking (ICN) core network. In addition, we demonstrate how carrying CoAP traffic over an ICN network can unleash the full potential of CoAP, shifting both overhead and complexity from the (constrained) endpoints to the ICN network. The experiments show that the CoAP handler helps decrease the required computational complexity, communication overhead, and state management of the CoAP server.
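For readers unfamiliar with CoAP, a minimal client request using the aiocoap library might look like the following; this is a plain CoAP GET shown for illustration only, unrelated to the paper's CoAP handler or ICN integration, and the URI is a placeholder:

```python
import asyncio

from aiocoap import Context, Message
from aiocoap.numbers.codes import Code

async def fetch(uri: str) -> bytes:
    """Issue a single CoAP GET from a (constrained) client and return the payload."""
    protocol = await Context.create_client_context()
    request = Message(code=Code.GET, uri=uri)
    response = await protocol.request(request).response
    return response.payload

if __name__ == "__main__":
    # Placeholder resource; any reachable CoAP server would do.
    print(asyncio.run(fetch("coap://coap.me/test")))
```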


2021 ◽  
Author(s):  
Thomas Weripuo Gyeera

The National Institute of Standards and Technology defines the fundamental characteristics of cloud computing as: on-demand computing, offered via the network, using pooled resources, with rapid elastic scaling and metered charging. The rapid dynamic allocation and release of resources on demand to meet heterogeneous computing needs is particularly challenging for data centres, which process a huge amount of data characterised by its high volume, velocity, variety, and veracity (the 4Vs model). Data centres seek to regulate this by monitoring and adaptation, typically reacting to service failures after the fact. We present a real cloud test bed capable of proactively monitoring and gathering cloud resource information for making predictions and forecasts. This contrasts with the state-of-the-art reactive monitoring of cloud data centres. We argue that the behavioural patterns and Key Performance Indicators (KPIs) characterizing virtualized servers, networks, and database applications can best be studied and analysed with predictive models. Specifically, we applied the Boosted Decision Tree machine learning algorithm to make future predictions of the KPIs of a cloud server and virtual infrastructure network, yielding an R-squared of 0.9991 at a learning rate of 0.2. This predictive framework is beneficial for making short- and long-term predictions for cloud resources.
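A hedged sketch of the kind of boosted-tree regression described above, using scikit-learn's GradientBoostingRegressor with a 0.2 learning rate on synthetic data that stands in for KPI telemetry; it does not reproduce the paper's data set, features, or reported R-squared:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for monitored KPI telemetry (CPU, memory, network counters, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = X @ np.array([0.5, -1.2, 0.3, 0.0, 0.8, -0.4]) + rng.normal(scale=0.1, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Boosted decision trees with the learning rate mentioned in the abstract.
model = GradientBoostingRegressor(learning_rate=0.2, n_estimators=300)
model.fit(X_train, y_train)

print("R^2 on held-out KPI samples:", r2_score(y_test, model.predict(X_test)))
```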


2015 ◽  
Vol 105 (10) ◽  
pp. 674-679
Author(s):  
P. Groche ◽  
J. Schreiner ◽  
J. Hohmann ◽  
S. Höhr ◽  
A. Lechler

Industrie 4.0 enables transparent and appropriately adapted value chains. Prerequisites for this include a deep understanding of the process as well as the recording, analysis, and storage of the relevant process data. This paper gives an insight into Industrie 4.0 approaches for the forming industry and presents selected results of the collaborative project "RobIN 4.0".


Author(s):  
Poovizhi. M ◽  
Raja. G

Using cloud storage, users can remotely store their data and enjoy on-demand, high-quality applications and services from a shared pool of configurable computing resources, without the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the outsourced data makes data integrity protection in cloud computing a formidable task, especially for users with constrained computing resources. From the users' perspective, including both individuals and IT systems, storing data remotely in the cloud in a flexible, on-demand manner brings tempting benefits: relief from the burden of storage management, universal data access independent of geographical location, and avoidance of capital expenditure on hardware, software, and personnel maintenance. To securely introduce an effective sanitizer and third-party auditor (TPA), two fundamental requirements have to be met: 1) the TPA should be able to audit the cloud data storage efficiently without demanding a local copy of the data, and should introduce no additional online burden to the cloud user; 2) the third-party auditing process should introduce no new vulnerabilities towards user data privacy. In this project, we utilize and uniquely combine public auditing protocols with a double encryption approach to achieve a privacy-preserving public cloud data auditing system, which supports integrity checking without any leakage of data. To support efficient handling of multiple auditing tasks, we further explore the technique of online signatures to extend our main result to a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously. We implement the double encryption algorithm to encrypt the data twice and store it on the cloud server for Electronic Health Record applications.
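The following Python sketch illustrates the general double-encryption idea (two independent symmetric keys layered over the same record) using the `cryptography` package's Fernet primitive; the key roles and record format are assumptions, and this is not the paper's exact scheme or its TPA auditing protocol:

```python
from cryptography.fernet import Fernet

# Two independent keys: one might be held by the data owner, the other by the sanitizer.
owner_key = Fernet.generate_key()
sanitizer_key = Fernet.generate_key()

def double_encrypt(record: bytes) -> bytes:
    """Encrypt a health record twice before uploading it to the cloud server."""
    inner = Fernet(owner_key).encrypt(record)
    return Fernet(sanitizer_key).encrypt(inner)

def double_decrypt(blob: bytes) -> bytes:
    """Reverse both layers; TPA auditing would operate on the ciphertext only."""
    inner = Fernet(sanitizer_key).decrypt(blob)
    return Fernet(owner_key).decrypt(inner)

ciphertext = double_encrypt(b"patient-id: 42, diagnosis: ...")
assert double_decrypt(ciphertext) == b"patient-id: 42, diagnosis: ..."
```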


2016 ◽  
Vol 16 (3) ◽  
pp. 232-256 ◽  
Author(s):  
Hans-Jörg Schulz ◽  
Thomas Nocke ◽  
Magnus Heitzler ◽  
Heidrun Schumann

Visualization has become an important ingredient of data analysis, supporting users in exploring data and confirming hypotheses. At the beginning of a visual data analysis process, data characteristics are often assessed in an initial data profiling step. These include, for example, statistical properties of the data and information on the data’s well-formedness, which can be used during the subsequent analysis to adequately parametrize views and to highlight or exclude data items. We term this information data descriptors, which can span such diverse aspects as the data’s provenance, its storage schema, or its uncertainties. Gathered descriptors encapsulate basic knowledge about the data and can thus be used as objective starting points for the visual analysis process. In this article, we bring together these different aspects in a systematic form that describes the data itself (e.g. its content and context) and its relation to the larger data gathering and visual analysis process (e.g. its provenance and its utility). Once established in general, we further detail the concept of data descriptors specifically for tabular data as the most common form of structured data today. Finally, we utilize these data descriptors for tabular data to capture domain-specific data characteristics in the field of climate impact research. This procedure from the general concept via the concrete data type to the specific application domain effectively provides a blueprint for instantiating data descriptors for other data types and domains in the future.
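As one possible instantiation of data descriptors for tabular data, the Python sketch below groups per-column schema and statistics with table-level provenance and uncertainty notes; all field names are illustrative assumptions rather than the authors' catalogue of descriptors:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ColumnDescriptor:
    """Per-column descriptors: storage schema plus basic statistical properties."""
    name: str
    dtype: str
    missing_fraction: float = 0.0
    minimum: Optional[float] = None
    maximum: Optional[float] = None

@dataclass
class TableDescriptor:
    """Data descriptors for one tabular dataset: content, context, and provenance."""
    source: str                                  # provenance, e.g. a climate model run
    columns: List[ColumnDescriptor] = field(default_factory=list)
    uncertainties: Dict[str, str] = field(default_factory=dict)

    def well_formed(self) -> bool:
        """A crude well-formedness check usable as an objective starting point."""
        return all(0.0 <= c.missing_fraction <= 1.0 for c in self.columns)
```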


Author(s):  
Magnus Olsson ◽  
Shabnam Sultana ◽  
Stefan Rommer ◽  
Lars Frid ◽  
Catherine Mulligan
