Reviewing recovery-management under real-time requirements in distributed systems

Author(s):  
R. Schumann ◽  
Pengyi Zheng

Background: Middleware has been widely adopted as an efficient approach to simplify the development of large-scale distributed systems. Although it reduces design complexity and development time, such systems place strong demands on real-time task processing and data transmission. Traditional middleware, which lacks a real-time performance guarantee mechanism, cannot be adopted by distributed systems with strict timing constraints. Methods: To meet the real-time requirements of distributed systems, this paper studies the functional requirements of middleware and designs and implements a real-time publish/subscribe middleware. A hierarchical design builds the middleware from a model layer, a communication layer, and a support layer. The middleware provides data synchronization on distributed objects, an Ethernet-based publish/subscribe mechanism, and a real-time performance guarantee mechanism. Results: Experiments show that the proposed middleware satisfies the needs of distributed applications and keeps the end-to-end data communication delay below 1 millisecond, meeting the real-time requirements of data transmission in distributed systems. Conclusion: In this paper, we design and implement a real-time middleware based on the publish/subscribe mechanism.
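The core of a publish/subscribe middleware is topic-based decoupling: publishers emit messages to named topics without knowing who consumes them. The sketch below is a minimal in-process broker for illustration only; it is not the paper's Ethernet-based, real-time implementation, and the class and method names (`Broker`, `subscribe`, `publish`) are assumptions for the example.

```python
from collections import defaultdict

class Broker:
    """Minimal in-process publish/subscribe broker (illustrative sketch only)."""

    def __init__(self):
        # Maps each topic name to the list of subscriber callbacks.
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to be invoked for every message on `topic`."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver `message` to every callback subscribed to `topic`."""
        for callback in self._subscribers[topic]:
            callback(message)

# Usage: a subscriber collects readings published on a topic.
broker = Broker()
received = []
broker.subscribe("sensor/temp", received.append)
broker.publish("sensor/temp", {"value": 21.5})
```

A real-time variant would add bounded queues, priority-aware dispatch, and deadline monitoring on top of this decoupled topic model.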


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
E. Bertino ◽  
M. R. Jahanshahi ◽  
A. Singla ◽  
R.-T. Wu

This paper addresses the problem of efficient and effective data collection and analytics for applications such as civil infrastructure monitoring and emergency management. Such a problem requires the development of techniques by which data acquisition devices, such as IoT devices, can: (a) perform local analysis of collected data; and (b) based on the results of such analysis, autonomously decide on further data acquisition. The ability to perform local analysis is critical to reducing transmission costs and latency, as the results of an analysis are usually smaller in size than the original data. For example, under strict real-time requirements, the analysis results can be transmitted in real-time, whereas the actual collected data can be uploaded later. The ability to autonomously decide about further data acquisition enhances scalability and reduces the need for real-time human involvement in data acquisition processes, especially in contexts with critical real-time requirements. The paper focuses on deep neural networks and discusses techniques for supporting transfer learning and pruning, so as to reduce the time for training the networks and the size of the networks deployed at IoT devices. We also discuss approaches based on reinforcement learning techniques that enhance the autonomy of IoT devices.
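Pruning shrinks a trained network for deployment on constrained IoT devices by zeroing low-importance parameters. A common baseline (not necessarily the authors' exact method) is magnitude pruning: remove the fraction of weights with the smallest absolute values. A minimal sketch over a flat weight list, with the function name `magnitude_prune` assumed for the example:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    weights  -- flat list of floats
    sparsity -- fraction in [0, 1) of weights to set to zero
    """
    k = int(len(weights) * sparsity)          # how many weights to drop
    if k == 0:
        return list(weights)
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = sorted(abs(w) for w in weights)[k]
    return [0.0 if abs(w) < threshold else w for w in weights]

# Usage: prune half of a toy weight vector.
pruned = magnitude_prune([0.5, -0.1, 0.9, 0.05], 0.5)  # -> [0.5, 0.0, 0.9, 0.0]
```

The zeroed weights can then be stored in a sparse format, reducing both model size and inference cost on the device.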


2021 ◽  
Vol 20 (3) ◽  
pp. 1-22
Author(s):  
David Langerman ◽  
Alan George

High-resolution, low-latency apps in computer vision are ubiquitous in today's world of mixed-reality devices. These innovations provide a platform that can leverage improving depth-sensor and embedded-accelerator technology to enable higher-resolution, lower-latency processing of 3D scenes using depth-upsampling algorithms. This research demonstrates that filter-based upsampling algorithms are feasible for mixed-reality apps using low-power hardware accelerators. The authors parallelized and evaluated a depth-upsampling algorithm on two different devices: a reconfigurable-logic FPGA embedded within a low-power SoC; and a fixed-logic embedded graphics processing unit. We demonstrate that both accelerators can meet the 11 ms real-time latency requirement of mixed-reality apps.
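Depth upsampling takes a low-resolution depth map and produces a higher-resolution one; the simplest baseline is nearest-neighbor replication, which the filter-based algorithms evaluated in the paper refine with guidance from a high-resolution color image. A minimal nearest-neighbor sketch (the function name and list-of-lists representation are assumptions for illustration; real implementations operate on GPU/FPGA buffers):

```python
def upsample_nearest(depth, factor):
    """Nearest-neighbor depth upsampling of a 2D grid by an integer factor.

    depth  -- 2D list of depth values (rows of equal length)
    factor -- integer upscaling factor (e.g. 2 doubles each dimension)
    """
    rows, cols = len(depth), len(depth[0])
    # Each output pixel copies the depth of its nearest low-res source pixel.
    return [[depth[r // factor][c // factor]
             for c in range(cols * factor)]
            for r in range(rows * factor)]

# Usage: upsample a 2x2 depth patch to 4x4.
up = upsample_nearest([[1, 2], [3, 4]], 2)
```

Filter-based methods (e.g. joint bilateral upsampling) replace the plain copy with a weighted average whose weights come from edges in the aligned color image, preserving object boundaries in the upsampled depth.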


2021 ◽  
Vol 40 (2) ◽  
pp. 65-69
Author(s):  
Richard Wai

Modern-day cloud-native applications have become broadly representative of distributed systems in the wild. However, unlike traditional distributed system models with conceptually static designs, cloud-native systems emphasize dynamic scaling and on-line iteration (CI/CD). Cloud-native systems tend to be architected around a networked collection of distinct programs ("microservices") that can be added, removed, and updated in real-time. Typically, distinct containerized programs constitute individual microservices that then communicate within the larger distributed application through heavy-weight protocols. Common communication stacks exchange JSON or XML objects over HTTP, via TCP/TLS, and incur significant overhead, particularly with small message sizes. Additionally, interpreted/JIT/VM-based languages such as JavaScript (NodeJS/Deno), Java, and Python are dominant in modern microservice programs. These language technologies, along with the high-overhead messaging, can impose superlinear cost increases (hardware demands) on scale-out, particularly towards hyperscale and/or with latency-sensitive workloads.
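The per-message overhead the passage describes is easy to see for small payloads: a text encoding like JSON carries field names and punctuation on every message, while a fixed binary layout carries only the values (and HTTP/TLS headers add further bytes on top of either). A minimal sketch using only the standard library; the field names and layout are made up for the example:

```python
import json
import struct

# One small sensor reading, as a microservice might emit it.
reading = {"id": 7, "temp_c": 21.5}

# Text encoding: field names and punctuation travel with every message.
json_payload = json.dumps(reading).encode()

# Fixed binary layout: little-endian uint32 id + float32 temperature.
binary_payload = struct.pack("<If", reading["id"], reading["temp_c"])

print(len(json_payload), len(binary_payload))  # 25 vs 8 bytes for this reading
```

At hyperscale, that 3x difference per message (before HTTP framing, which typically adds hundreds of bytes of headers) compounds into the superlinear hardware costs the passage mentions, which is why compact binary protocols such as gRPC/Protobuf are common replacements.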


2020 ◽  
Vol 13 (1) ◽  
pp. 89
Author(s):  
Manuel Carranza-García ◽  
Jesús Torres-Mateo ◽  
Pedro Lara-Benítez ◽  
Jorge García-Gutiérrez

Object detection using remote sensing data is a key task of the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability for a particular scenario such as autonomous driving. In this work, we aim to assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, which is the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy and are also more reliable in the detection of minority classes. Faster R-CNN Res2Net-101 achieves the best speed/accuracy tradeoff but needs lower-resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.
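The precision comparison described above rests on matching predicted boxes to ground truth by intersection-over-union (IoU): a prediction counts as correct only if its IoU with a ground-truth box exceeds a threshold. A minimal sketch of the standard IoU computation for axis-aligned boxes (the `(x1, y1, x2, y2)` convention is an assumption for the example):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Usage: two unit-overlap boxes of area 4 each -> IoU = 1 / 7.
score = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

Benchmarks such as the Waymo Open Dataset aggregate these matches over IoU thresholds into average-precision metrics, which is what the accuracy comparisons in the study report.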

