A Delay-Based Machine Learning Model for DMA Attack Mitigation

Cryptography ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 18
Author(s):  
Yutian Gui ◽  
Chaitanya Bhure ◽  
Marcus Hughes ◽  
Fareena Saqib

Direct Memory Access (DMA) is a state-of-the-art technique to optimize the speed of memory access and to use processing power efficiently during data transfers between the main system and a peripheral device. However, this advanced feature opens security vulnerabilities: a compromised peripheral can gain access to, and manipulate, the main memory of the victim host machine. The paper outlines a lightweight process that creates resilience against DMA attacks with minimal modification to the configuration of the DMA protocol. The proposed scheme performs device identification of the trusted PCIe devices that have DMA capabilities and constructs a database of profiling times to authenticate the trusted devices before they can access the system. The results show that the proposed scheme generates a unique identifier for trusted devices and authenticates them. Furthermore, a machine learning–based real-time authentication scheme is proposed that enables runtime authentication; the time required for training and the corresponding accuracy are reported.
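The timing-profile idea can be sketched as follows. This is a minimal illustration, not the paper's actual model: the latency values, the enrollment statistics, and the k-sigma acceptance rule are all invented stand-ins for the profiling database and ML classifier the authors describe.

```python
import statistics

def build_profile(latencies):
    """Summarize enrollment-phase DMA read latencies (hypothetical, in ns)
    into a timing fingerprint for one trusted PCIe device."""
    return {"mean": statistics.mean(latencies),
            "stdev": statistics.stdev(latencies)}

def authenticate(profile, observed, k=3.0):
    """Accept the device only if the observed mean latency lies within
    k standard deviations of the enrolled profile."""
    m = statistics.mean(observed)
    return abs(m - profile["mean"]) <= k * profile["stdev"]

# Enrollment measurements for a trusted device (invented numbers).
trusted = build_profile([812, 815, 810, 818, 814, 811])

print(authenticate(trusted, [813, 816, 812]))   # consistent timing -> True
print(authenticate(trusted, [905, 910, 898]))   # timing anomaly -> False
```

A real deployment would replace the k-sigma rule with the trained classifier, but the flow is the same: profile trusted devices offline, then gate DMA access on a runtime timing check.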

2016 ◽  
Vol 4 (1) ◽  
pp. 1-4
Author(s):  
Aman Agarwal ◽  
Arjun J Anil ◽  
Rahul Nair ◽  
K. Sivasankaran

Direct Memory Access is a method of transferring data between peripherals and memory without using the CPU. It is designed to improve system performance by allowing external devices to transfer information directly to and from system memory. The asynchronous type of DMA is generally used, as it responds directly to input. The DMA controller issues signals to the peripheral device and main memory to execute read and write commands. In this paper, a DMA controller was designed using Verilog HDL and simulated in Cadence NC-Launch. The design was synthesized using low-power constraints, decreasing power consumption to 69% of the original design.
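The controller's data-movement role can be sketched behaviorally. This is purely illustrative (the paper's design is in Verilog HDL, not Python): a hypothetical transfer routine moves a block of words from a peripheral buffer into main memory with no CPU involvement in the copy loop.

```python
# Behavioral model of a DMA block transfer (illustrative only).
def dma_transfer(memory, peripheral, dest, src, count):
    """Copy `count` words from the peripheral buffer into main memory.
    A hardware controller would do this word-by-word via read/write
    strobes, then raise an interrupt on completion."""
    for i in range(count):
        memory[dest + i] = peripheral[src + i]  # device read, memory write
    return count  # words transferred

ram = [0] * 16                       # toy main memory
device_buf = [0xA0, 0xA1, 0xA2, 0xA3]  # toy peripheral buffer
done = dma_transfer(ram, device_buf, dest=4, src=0, count=4)
print(ram[4:8])   # [160, 161, 162, 163]
```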


2020 ◽  
Vol 20 (3) ◽  
pp. 199-210
Author(s):  
Tobias Ziegler ◽  
Viktor Leis ◽  
Carsten Binnig

Abstract Remote Direct Memory Access (RDMA) is a networking protocol that provides high-bandwidth, low-latency access to a remote node’s main memory. Although there has been much work around RDMA, such as building libraries on top of RDMA or even applications leveraging RDMA, identifying the most suitable RDMA primitives, and their combination, for a given problem remains hard. While some papers include initial studies that investigate selected performance characteristics of particular design choices, there has not been a systematic study evaluating the communication patterns of scale-out systems. In this paper, we address this issue by systematically investigating how to use RDMA efficiently for building scale-out systems.
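The primitive-selection question can be illustrated with a toy cost model. The constants below are invented round numbers, not measurements; the paper evaluates these trade-offs empirically on real hardware. The sketch contrasts one-sided READs (no remote CPU, but one round trip per pointer hop) with a two-sided SEND/RECV RPC (one round trip plus server CPU time).

```python
# Invented per-operation costs in microseconds (illustrative only).
ONE_SIDED_READ_US = 2.0   # one RDMA READ round trip
TWO_SIDED_RTT_US  = 1.5   # one SEND/RECV round trip
SERVER_CPU_US     = 0.5   # server-side dispatch cost per RPC

def one_sided_lookup(hops):
    """Client traverses a remote data structure itself: one READ per
    pointer hop, remote CPU never involved."""
    return hops * ONE_SIDED_READ_US

def two_sided_lookup():
    """Client ships the whole lookup as one RPC; the server CPU walks
    the structure locally and replies."""
    return TWO_SIDED_RTT_US + SERVER_CPU_US

# For a flat structure (one hop) the one-sided READ ties the RPC,
# but each extra hop adds a full round trip, so the RPC wins.
print(one_sided_lookup(hops=1))  # 2.0
print(one_sided_lookup(hops=3))  # 6.0
print(two_sided_lookup())        # 2.0
```

Under this (made-up) model, the right primitive depends on the access pattern, which is exactly why a systematic study across communication patterns is needed.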


Author(s):  
Mark Endrei ◽  
Chao Jin ◽  
Minh Ngoc Dinh ◽  
David Abramson ◽  
Heidi Poxon ◽  
...  

Rising power costs and constraints are driving a growing focus on the energy efficiency of high performance computing systems. The unique characteristics of a particular system and workload, and their effect on performance and energy efficiency, are typically difficult for application users to assess and control. Settings for optimum performance and energy efficiency can also diverge, so we need to identify trade-off options that guide a suitable balance between energy use and performance. We present statistical and machine learning models that require only a small number of runs to make accurate Pareto-optimal trade-off predictions using parameters that users can control. We study model training and validation using several parallel kernels and more complex workloads, including Algebraic Multigrid (AMG), the Large-scale Atomic/Molecular Massively Parallel Simulator, and Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. We demonstrate that we can train the models using as few as 12 runs, with prediction error of less than 10%. Our AMG results identify trade-off options that provide up to 45% improvement in energy efficiency for around 10% performance loss. We reduce the sample measurement time required for AMG by 90%, from 13 h to 74 min.
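The trade-off analysis at the core of this approach can be sketched as a Pareto-front computation over a few measured runs. The settings and (runtime, energy) numbers below are invented for illustration; the paper's contribution is predicting such points from a handful of training runs rather than measuring every configuration.

```python
def pareto_front(points):
    """Keep only settings not dominated in BOTH runtime and energy
    (lower is better for each). `points` is (name, runtime_s, energy_J)."""
    front = []
    for name, t, e in points:
        dominated = any(t2 <= t and e2 <= e and (t2 < t or e2 < e)
                        for _, t2, e2 in points)
        if not dominated:
            front.append(name)
    return front

runs = [                                      # invented sample runs
    ("8 threads @ 2.6 GHz", 100.0, 500.0),    # fastest
    ("8 threads @ 2.0 GHz", 110.0, 430.0),    # ~10% slower, ~14% less energy
    ("4 threads @ 2.6 GHz", 150.0, 520.0),    # dominated: slower AND costlier
]
print(pareto_front(runs))   # ['8 threads @ 2.6 GHz', '8 threads @ 2.0 GHz']
```

The second setting is exactly the kind of option the paper surfaces: a modest performance loss bought at a larger energy saving.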


2021 ◽  
Vol 22 (Supplement_1) ◽  
Author(s):  
G Italiano ◽  
G Tamborini ◽  
V Mantegazza ◽  
V Volpato ◽  
L Fusini ◽  
...  

Abstract Funding Acknowledgements Type of funding sources: None. Objective. Preliminary studies showed the accuracy of machine learning based automated dynamic quantification of left ventricular (LV) and left atrial (LA) volumes. We aimed to evaluate the feasibility and accuracy of machine learning based automated dynamic quantification of LV and LA volumes in an unselected population. Methods. We enrolled 600 unselected patients (12% in atrial fibrillation) clinically referred for transthoracic echocardiography (2DTTE), who also underwent 3D echocardiography (3DE) imaging. LV ejection fraction (EF) and LV and LA volumes were obtained from 2D images; 3D images were analysed using the Dynamic Heart Model (DHM) software (Philips), resulting in LV and LA volume-time curves. A subgroup of 140 patients also underwent cardiac magnetic resonance (CMR) imaging. Average time of analysis, feasibility, and image quality were recorded, and results were compared between 2DTTE, DHM and CMR. Results. The use of DHM was feasible in 522/600 cases (87%). When feasible, the boundary position was considered accurate in 335/522 patients (64%), while major (n = 38) or minor (n = 149) border corrections were needed in the remainder. The overall time required for DHM datasets was approximately 40 seconds, resulting in physiologically appearing LV and LA volume-time curves in all cases. As expected, DHM LV volumes were larger than 2D ones (end-diastolic volume: 173 ± 64 vs 142 ± 58 mL, respectively), while no differences were found for LV EF and LA volumes (EF: 55 ± 12% vs 56 ± 14%; LA volume: 89 ± 36 vs 89 ± 38 mL, respectively). The comparison between DHM and CMR values showed a high correlation for LV volumes (r = 0.70 and r = 0.82, p < 0.001 for end-diastolic and end-systolic volume, respectively) and an excellent correlation for EF (r = 0.82, p < 0.001) and LA volumes. Conclusions.
The DHM software is feasible, accurate and quick in a large series of unselected patients, including those with suboptimal 2D images or in atrial fibrillation.

Table 1. Feasibility of DHM, image quality, and need for adjustments in the global population and in each subgroup.

Group                         Feasibility      Quality: Good    Quality: Suboptimal   Adjustment: Minor   Adjustment: Major
Total patients                522/600 (87%)    327/522 (62%)    195/522 (28%)         149/522 (29%)       38/522 (6%)
Normal subjects               39/40 (97%)      23/39 (57%)      16/39 (40%)           9/39 (21%)          1/39 (3%)
Atrial fibrillation           59/73 (81%)*     28/59 (47%)      31/59 (53%)           15/59 (25%)         6/59 (10%)
Valvular disease              271/312 (87%)    120/271 (%)      151/271 (%)           65/271 (24%)        16/271 (6%)
Coronary artery disease       47/58 (81%)*     26/47 (46%)      21/47 (37%)           16/47 (34%)         5/47 (11%)
Miscellaneous                 24/25 (96%)      18/24 (75%)      6/24 (25%)            5/24 (21%)          3/24 (12%)

Abstract Figure 1


2014 ◽  
Author(s):  
H. Shah ◽  
F. Marti ◽  
W. Noureddine ◽  
A. Eiriksson ◽  
R. Sharp

2021 ◽  
Author(s):  
Abigail Enders ◽  
Nicole North ◽  
Chase Fensore ◽  
Juan Velez-Alvarez ◽  
Heather Allen

Fourier Transform Infrared Spectroscopy (FTIR) is a ubiquitous spectroscopic technique. Spectral interpretation is a time-consuming process, but it yields important information about functional groups present in compounds and in complex substances. We develop a generalizable model via a machine learning (ML) algorithm using Convolutional Neural Networks (CNNs) to identify the presence of functional groups in gas phase FTIR spectra. The ML models reduce the amount of time required to analyze functional groups and facilitate interpretation of FTIR spectra. Through web scraping, we acquire intensity-frequency data from 8728 gas phase organic molecules within the NIST spectral database and transform the data into images. We successfully train models for 15 of the most common organic functional groups, which we then validate by identifying functional groups in previously unseen spectra. These models serve to expand the application of FTIR measurements for facile analysis of organic samples. Our approach yields broad functional group models that run inference in tandem to provide full interpretation of a spectrum. We present the first implementation of ML using image-based CNNs for predicting functional groups from a spectroscopic method.
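The spectrum-to-image transformation this approach presupposes can be sketched as follows. The grid size, the synthetic absorbance values, and the rasterization rule are all invented for illustration; the paper's actual image pipeline may differ.

```python
def spectrum_to_image(intensities, height=32):
    """Rasterize an intensity trace (values in [0, 1]) into a binary image
    a CNN could consume: each spectral point lights one pixel in its
    column at a row proportional to its intensity."""
    width = len(intensities)
    img = [[0] * width for _ in range(height)]
    for x, y in enumerate(intensities):
        row = min(height - 1, int(y * (height - 1)))
        img[height - 1 - row][x] = 1      # row 0 is the top of the image
    return img

spectrum = [0.0, 0.2, 0.9, 0.4, 0.1]      # toy absorbance values
img = spectrum_to_image(spectrum)
print(sum(map(sum, img)))                  # one lit pixel per column -> 5
```

Once spectra are images, a standard image-classification CNN (one per functional group, run in tandem) can be applied without spectroscopy-specific architecture changes, which is the appeal of the image-based route.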


2021 ◽  
Vol 35 (11) ◽  
pp. 1350-1351
Author(s):  
Gopinath Gampala ◽  
C. J. Reddy

Traditional antenna optimization solves a modified version of the original antenna design in each iteration. Thus, the total time required to optimize a given antenna design depends heavily on the convergence criteria of the selected algorithm and the time taken per iteration. Machine learning enables the antenna designer to generate a trained mathematical model that replicates the original antenna design and then apply optimization to the trained model. Using the trained model allows thousands of optimization iterations to run in a span of a few seconds.
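The surrogate-model workflow can be sketched in miniature. Everything here is invented for illustration: the "expensive simulation" is a toy quadratic standing in for a full-wave solve, and the surrogate is a simple Lagrange fit rather than the trained model an ML tool would produce.

```python
def expensive_simulation(length_mm):
    """Stand-in for a full-wave antenna solve: a cost to minimize,
    with an (invented) optimum near a 28 mm element length."""
    return (length_mm - 28.0) ** 2 + 1.0

def quadratic_surrogate(samples):
    """Exact quadratic through three (x, y) samples (Lagrange form);
    a real tool would train a richer model on more samples."""
    (x0, y0), (x1, y1), (x2, y2) = samples
    def model(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return model

# Only three expensive solver calls...
xs = [20.0, 30.0, 40.0]
model = quadratic_surrogate([(x, expensive_simulation(x)) for x in xs])

# ...then thousands of near-free evaluations on the surrogate.
candidates = [20.0 + 0.01 * i for i in range(2001)]
best = min(candidates, key=model)
print(round(best, 2))   # -> 28.0, found without further solver calls
```

The pattern mirrors the abstract: pay for a few full solves up front, then run the optimizer's many iterations against the cheap trained model.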

