Lyra2: Password Hashing Scheme with improved security against time-memory trade-offs

2017 ◽  
Author(s):  
Ewerton R. Andrade ◽  
Marcos A. Simplicio Junior

To protect against brute-force attacks, modern password-based authentication systems usually employ mechanisms known as Password Hashing Schemes (PHS). Basically, a PHS is a cryptographic algorithm that generates a sequence of pseudorandom bits from a user-defined password, allowing the user to configure the computational costs of the process and thereby raise the cost for attackers who test multiple candidate passwords in an attempt to guess the correct one. In this context, the goal of this research effort is to propose a novel and superior PHS alternative. Specifically, the objective is to improve the Lyra algorithm, a PHS built upon cryptographic sponges and designed with the authors' participation. The resulting solution, called Lyra2, preserves the efficiency and flexibility of Lyra and brings important improvements over its predecessor: (1) it provides a higher security level against attack vectors involving time-memory trade-offs; (2) it includes tweaks that increase the cost of building dedicated hardware for attacking the algorithm; (3) it balances resistance against side-channel threats with resistance against attacks relying on cheaper (and, hence, slower) storage devices. Besides describing the algorithm's design rationale in detail, the thesis also includes a detailed analysis of its security and performance.
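
The configurable-cost idea can be sketched with a memory-hard key derivation function from the Python standard library. scrypt stands in for Lyra2 here (Lyra2 itself is not in the stdlib); its cost parameters n, r, p play the same role as Lyra2's tunable time and memory cost parameters, and the 16-byte salt and 32-byte output are illustrative choices:

```python
import hashlib
import hmac
import os

def hash_password(password: str, n: int = 2**14, r: int = 8, p: int = 1):
    """Derive a 32-byte pseudorandom key from a password.

    scrypt is used as a stand-in for a memory-hard PHS: like Lyra2, it
    exposes cost parameters (n, r, p) that let the defender raise the
    time and memory cost of every guess an attacker must evaluate.
    """
    salt = os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, n=n, r=r, p=p, dklen=32)
    return salt, key

def verify_password(password: str, salt: bytes, expected: bytes,
                    n: int = 2**14, r: int = 8, p: int = 1) -> bool:
    """Recompute the key with the stored salt and the same cost settings."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=n, r=r, p=p, dklen=32)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, expected)
```

Doubling n roughly doubles both the time and the memory required per guess, which is exactly the knob the abstract describes for raising attacker costs.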

2020 ◽  
Vol 14 ◽  
Author(s):  
Khoirom Motilal Singh ◽  
Laiphrakpam Dolendro Singh ◽  
Themrichon Tuithung

Background: Data in the form of text, audio, image and video are used everywhere in our modern scientific world. These data are stored in physical storage, cloud storage and other storage devices. Some of these data are very sensitive and require efficient security in storage as well as in transmission from the sender to the receiver. Objective: With the increase in data transfer operations, enough space is also required to store these data. Many researchers have been working to develop different encryption schemes, yet many limitations remain in their works. There is always a need for encryption schemes with smaller cipher data, faster execution time and low computation cost. Methods: A text encryption based on Huffman coding and the ElGamal cryptosystem is proposed. Initially, the text data are converted to the corresponding binary bits using Huffman coding. Next, the binary bits are grouped and converted into large integer values, which are used as the input to the ElGamal cryptosystem. Results: Encryption and decryption are performed successfully, where the data size is reduced using Huffman coding and enhanced security with a smaller key size is provided by the ElGamal cryptosystem. Conclusion: Simulation results and performance analysis indicate that the proposed encryption algorithm outperforms the existing algorithms under consideration.
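
The first two steps of the pipeline, Huffman-coding the text and grouping the bits into large integers, can be sketched as follows. The ElGamal stage is omitted, and the 64-bit block size and the bookkeeping choices are illustrative, not the paper's:

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    """Build a prefix-free Huffman code table from symbol frequencies."""
    counts = Counter(text)
    if len(counts) == 1:  # degenerate single-symbol input
        return {next(iter(counts)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: partial code}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}       # left subtree
        merged.update({s: "1" + c for s, c in c2.items()})  # right subtree
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def text_to_blocks(text: str, block_bits: int = 64):
    """Huffman-encode text to a bit string, then group the bits into
    large integers. In the paper's scheme these integers become the
    plaintext inputs to the ElGamal cryptosystem."""
    codes = huffman_codes(text)
    bits = "".join(codes[ch] for ch in text)
    # NOTE: real use must also record len(bits), since leading zeros in
    # a block are lost when it is parsed back out of an integer.
    blocks = [int(bits[i:i + block_bits], 2)
              for i in range(0, len(bits), block_bits)]
    return codes, bits, blocks
```

Because Huffman codes are prefix-free, the concatenated bit string decodes unambiguously, which is what makes the compression step reversible after decryption.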


2021 ◽  
Author(s):  
Santiago Bouzas ◽  
María F. Barbarich ◽  
Eduardo M. Soto ◽  
Julián Padró ◽  
Valeria P. Carreira ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5287
Author(s):  
Hiwa Mahmoudi ◽  
Michael Hofbauer ◽  
Bernhard Goll ◽  
Horst Zimmermann

Being ready-to-detect over a certain portion of time makes the time-gated single-photon avalanche diode (SPAD) an attractive candidate for low-noise photon-counting applications. A careful SPAD noise and performance characterization, however, is critical to avoid time-consuming experimental optimization and redesign iterations for such applications. Here, we present an extensive empirical study of the breakdown voltage, as well as the dark-count and afterpulsing noise mechanisms, for a fully integrated time-gated SPAD detector in 0.35-μm CMOS based on experimental data acquired under dark conditions. An "effective" SPAD breakdown voltage is introduced to enable efficient characterization and modeling of the dark-count and afterpulsing probabilities with respect to the excess bias voltage and the gating duration. The presented breakdown and noise models will allow for accurate modeling and optimization of SPAD-based detector designs, where the SPAD noise can impose severe trade-offs with speed and sensitivity, as is shown via an example.
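
As a rough illustration of the quantities being characterized, a first-order Poisson model relates dark-count rate and gate duration to a dark-count probability, and a simple exponential decay captures afterpulse release. All parameters below are hypothetical; the paper fits models to measured data rather than assuming ideal statistics:

```python
import math

def dark_count_probability(dcr_hz: float, gate_s: float) -> float:
    """P(at least one dark count within one gate window), assuming dark
    counts arrive as a Poisson process at rate dcr_hz. This is a common
    first-order model, not the paper's fitted empirical model."""
    return 1.0 - math.exp(-dcr_hz * gate_s)

def afterpulsing_probability(p0: float, t_dead_s: float, tau_s: float) -> float:
    """Toy afterpulsing model: trapped carriers are released roughly
    exponentially, so waiting t_dead_s before re-arming the SPAD lowers
    the afterpulse probability. p0 and tau_s are hypothetical fit
    parameters (initial trap occupancy effect and release time constant)."""
    return p0 * math.exp(-t_dead_s / tau_s)
```

Both probabilities grow with excess bias in practice (higher avalanche trigger probability and more carriers per pulse), which is why the paper models them against excess bias and gate duration jointly.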


2021 ◽  
pp. 1-18
Author(s):  
ShuoYan Chou ◽  
Truong ThiThuy Duong ◽  
Nguyen Xuan Thao

Energy plays a central part in economic development, yet fossil fuels bring vast environmental impacts. In recent years, renewable energy has gradually become a viable source of clean energy that can alleviate these impacts. Still, the different types of renewable energy are not without trade-offs beyond cost and performance. Multiple-criteria decision-making (MCDM) has become one of the most prominent tools for making decisions with multiple conflicting criteria, a situation that arises in many complex real-world problems. The information available for decision making may be ambiguous or uncertain. A neutrosophic set is an extension of the fuzzy set with three membership functions: a truth membership function, a falsity membership function and an indeterminacy membership function. It is a useful tool when dealing with uncertainty. Entropy measures the uncertainty of information under neutrosophic circumstances and can be used to identify the weights of criteria in an MCDM model, while the dissimilarity measure is useful for ranking alternatives in terms of distance. This article proposes a new entropy and a new dissimilarity measure, and constructs a novel MCDM model based on them to improve the inclusiveness of the perspectives considered in decision making. The paper also presents a case study in which the model is applied and assessed in a renewable energy selection scenario in Taiwan.
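
To illustrate the weighting role that entropy plays in an MCDM model, here is the classical (crisp) entropy-weighting scheme: criteria whose scores vary more across alternatives carry more information and receive larger weights. The paper's contribution is a neutrosophic entropy that replaces this crisp formulation when ratings are (truth, indeterminacy, falsity) triples:

```python
import math

def entropy_weights(matrix):
    """Classical entropy weighting for an m x n decision matrix
    (rows = alternatives, columns = criteria, entries positive).
    A nearly uniform column has entropy close to 1 and thus weight
    close to 0; a dispersed column gets a large weight. Assumes at
    least one column is non-uniform."""
    m = len(matrix)
    k = 1.0 / math.log(m)  # normalizes entropy into [0, 1]
    raw = []
    for j in range(len(matrix[0])):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [x / s for x in col]  # column-wise normalized shares
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        raw.append(1.0 - e)  # "degree of divergence" of criterion j
    total = sum(raw)
    return [w / total for w in raw]
```

In the full model these weights would then feed a ranking step; the paper ranks alternatives with its proposed dissimilarity (distance-like) measure instead of a crisp one.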


Author(s):  
Kersten Schuster ◽  
Philip Trettner ◽  
Leif Kobbelt

We present a numerical optimization method to find highly efficient (sparse) approximations for convolutional image filters. Using a modified parallel tempering approach, we solve a constrained optimization problem that maximizes approximation quality while strictly staying within a user-prescribed performance budget. The results are multi-pass filters where each pass computes a weighted sum of bilinearly interpolated sparse image samples, exploiting hardware acceleration on the GPU. We systematically decompose the target filter into a series of sparse convolutions, trying to find good trade-offs between approximation quality and performance. Since our sparse filters are linear and translation-invariant, they do not exhibit the aliasing and temporal coherence issues that often appear in filters working on image pyramids. We show several applications, ranging from simple Gaussian or box blurs to the emulation of sophisticated Bokeh effects with user-provided masks. Our filters achieve high performance as well as high quality, often providing significant speed-up at acceptable quality even for separable filters. The optimized filters can be baked into shaders and used as a drop-in replacement for filtering tasks in image processing or rendering pipelines.
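
The per-pass structure can be sketched in plain Python: each output pixel is a weighted sum of a few sparsely placed taps, and passes are chained. Nearest-neighbor sampling with border clamping is used here for simplicity, whereas the paper relies on hardware bilinear interpolation on the GPU; the tap positions and weights below are illustrative, not optimized:

```python
def apply_sparse_pass(img, taps):
    """One filter pass over a 2D grayscale image (list of rows).
    taps = [(dx, dy, weight), ...]: each output pixel gathers a weighted
    sum of sparse input samples, clamped at the image border."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dx, dy, wt in taps:
                sx = min(max(x + dx, 0), w - 1)  # clamp-to-edge sampling
                sy = min(max(y + dy, 0), h - 1)
                acc += wt * img[sy][sx]
            out[y][x] = acc
    return out

def apply_filter(img, passes):
    """Chain several sparse passes; their composition approximates the
    dense target kernel."""
    for taps in passes:
        img = apply_sparse_pass(img, taps)
    return img
```

Because each pass is linear and translation-invariant, the composed filter is too, which is the property the abstract cites for avoiding aliasing and temporal-coherence artifacts.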


Author(s):  
Gaurav Chaurasia ◽  
Arthur Nieuwoudt ◽  
Alexandru-Eugen Ichim ◽  
Richard Szeliski ◽  
Alexander Sorkine-Hornung

We present an end-to-end system for real-time environment capture, 3D reconstruction, and stereoscopic view synthesis on a mobile VR headset. Our solution allows the user to use the cameras on their VR headset as their eyes to see and interact with the real world while still wearing their headset, a feature often referred to as Passthrough. The central challenge when building such a system is the choice and implementation of algorithms under the strict compute, power, and performance constraints imposed by the target user experience and mobile platform. A key contribution of this paper is a complete description of a corresponding system that performs temporally stable passthrough rendering at 72 Hz with only 200 mW power consumption on a mobile Snapdragon 835 platform. Our algorithmic contributions for enabling this performance include the computation of a coarse 3D scene proxy on the embedded video encoding hardware, followed by a depth densification and filtering step, and finally stereoscopic texturing and spatio-temporal up-sampling. We provide a detailed discussion and evaluation of the challenges we encountered, as well as algorithm and performance trade-offs in terms of compute and resulting passthrough quality.

The described system is available to users as the Passthrough+ feature on Oculus Quest. We believe that by publishing the underlying system and methods, we provide valuable insights to the community on how to design and implement real-time environment sensing and rendering on heavily resource constrained hardware.
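
As a toy illustration of the depth densification step, the sketch below fills unknown depth pixels by iteratively averaging known neighbors until the sparse measurements have propagated everywhere. This is a generic hole-filling scheme, not the paper's mobile-optimized densification and filtering pipeline:

```python
def densify_depth(depth, iters=100):
    """Fill unknown depth values (None) in a 2D grid by repeatedly
    averaging the already-known 4-neighbors. Known (measured) values
    are never modified; filled values propagate outward each sweep."""
    h, w = len(depth), len(depth[0])
    grid = [row[:] for row in depth]
    for _ in range(iters):
        nxt = [row[:] for row in grid]
        changed = False
        for y in range(h):
            for x in range(w):
                if grid[y][x] is None:
                    vals = [grid[ny][nx]
                            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] is not None]
                    if vals:
                        nxt[y][x] = sum(vals) / len(vals)
                        changed = True
        grid = nxt
        if not changed:  # all reachable pixels filled
            break
    return grid
```

A real-time system would replace this with a filtering pass that also rejects outliers and preserves depth edges; the point here is only the sparse-to-dense direction of the computation.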


Author(s):  
Mostafa Rizk ◽  
Amer Baghdadi ◽  
Michel Jézéquel

Emergent wireless communication standards, which are employed in different transmission environments, support various modulation schemes. High-order constellations are targeted to achieve high bandwidth efficiency. However, the complexity of the symbol-by-symbol Maximum A Posteriori (MAP) algorithm increases dramatically for these high-order modulation schemes. In order to reduce the hardware complexity, the suboptimal Max-Log-MAP algorithm, the direct transformation of the MAP algorithm into the logarithmic domain, is implemented instead. In the literature, a great deal of research effort has been invested in Max-Log-MAP demapping, and several simplifications have been presented to suit specific constellations. In addition, the hardware implementations dedicated to Max-Log-MAP demapping vary greatly in terms of design choices, supported flexibility and performance criteria, making them a challenge to compare. This paper explores the published Max-Log-MAP algorithm simplifications and existing hardware demapper designs and presents an extensive review of the current literature. In-depth comparisons are drawn amongst the designs, and the key performance characteristics are described, namely achieved throughput, hardware resource requirements and flexibility. This survey should facilitate fair comparisons of future designs, as well as opportunities for improving the design of Max-Log-MAP demappers.
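
The Max-Log-MAP approximation itself is compact: the MAP demapper's log-sum-exp over constellation points is replaced, for each bit, by a minimum over squared Euclidean distances. A generic, unoptimized sketch follows, using the convention that a positive LLR favors bit 0; the hardware simplifications surveyed in the paper specialize exactly this computation to particular constellations:

```python
def max_log_map_llrs(r, constellation, sigma2):
    """Max-Log-MAP soft demapping of one received sample r (complex).

    constellation: dict mapping bit tuples -> complex symbols.
    For each bit position i:
        LLR_i ~= (min over symbols with bit=1 of |r - s|^2
                  - min over symbols with bit=0 of |r - s|^2) / (2 * sigma2)
    i.e. the exact MAP log-sum-exp is replaced by the nearest symbol
    in each bit partition (the max-log approximation)."""
    nbits = len(next(iter(constellation)))
    llrs = []
    for i in range(nbits):
        d0 = min(abs(r - s) ** 2 for bits, s in constellation.items() if bits[i] == 0)
        d1 = min(abs(r - s) ** 2 for bits, s in constellation.items() if bits[i] == 1)
        llrs.append((d1 - d0) / (2 * sigma2))
    return llrs
```

For an M-point constellation this costs one distance per symbol per received sample; the constellation-specific simplifications in the literature reduce it further by exploiting Gray mapping and the rectangular structure of QAM.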

