A Large-Scale Analysis of the Semantic Password Model and Linguistic Patterns in Passwords

2021 · Vol 24 (3) · pp. 1-21
Author(s): Rafael Veras, Christopher Collins, Julie Thorpe

In this article, we present a thorough evaluation of semantic password grammars. We report multifactorial experiments that test the impact of sample size, probability smoothing, and linguistic information on password cracking. The semantic grammars are compared with state-of-the-art probabilistic context-free grammar (PCFG) and neural network models, and tested in cross-validation and A vs. B scenarios. We present results that reveal the contributions of part-of-speech (syntactic) and semantic patterns, and suggest that the former are more consequential to the security of passwords. Our results show that in many cases PCFGs are still competitive models compared to their latest neural network counterparts. In addition, we show that there is little performance gain in training PCFGs with more than 1 million passwords. We present qualitative analyses of four password leaks (Mate1, 000webhost, Comcast, and RockYou) based on trained semantic grammars, and derive graphical models that capture high-level dependencies between token classes. Finally, we confirm the similarity inferences from our qualitative analysis by examining the effectiveness of grammars trained and tested on all pairs of leaks.
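The PCFG baseline the abstract compares against can be illustrated with a toy version. This sketch assumes the classic letters/digits/symbols segmentation rather than the semantic token classes the article actually studies, and the leak, helper names, and scoring are invented for illustration:

```python
import re
from collections import Counter, defaultdict
from itertools import product

def segments(pw):
    """Split a password into (class, run) pairs: L=letters, D=digits, S=symbols."""
    runs = re.findall(r"[A-Za-z]+|[0-9]+|[^A-Za-z0-9]+", pw)
    return [("L" if r[0].isalpha() else "D" if r[0].isdigit() else "S", r)
            for r in runs]

def train(passwords):
    structs = Counter()          # base structures, e.g. ("L4", "D3")
    fill = defaultdict(Counter)  # fillers per slot, e.g. "D3" -> {"123": 2}
    for pw in passwords:
        segs = segments(pw)
        structs[tuple(f"{c}{len(r)}" for c, r in segs)] += 1
        for c, r in segs:
            fill[f"{c}{len(r)}"][r] += 1
    return structs, fill

def ranked_guesses(structs, fill, limit=10):
    """Expand every structure with every filler combination, best first."""
    n = sum(structs.values())
    cands = []
    for struct, count in structs.items():
        slot_opts = [[(r, c / sum(fill[slot].values()))
                      for r, c in fill[slot].items()] for slot in struct]
        for combo in product(*slot_opts):
            p = count / n
            for _, fp in combo:
                p *= fp
            cands.append(("".join(r for r, _ in combo), p))
    cands.sort(key=lambda g: -g[1])
    return [g for g, _ in cands[:limit]]

leak = ["love123", "love2020", "cat123", "qwerty!", "dog2020"]
structs, fill = train(leak)
top = ranked_guesses(structs, fill)
```

The mix-and-match expansion is what gives such grammars their power: the toy model above produces guesses like "dog123" that never appeared in the training leak.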

2021
Author(s): Aristeidis Seretis

A fundamental challenge for machine learning models for electromagnetics is their ability to predict output quantities of interest (such as fields and scattering parameters) in geometries that the model has not been trained for. Addressing this challenge is key to fulfilling one of the most appealing promises of machine learning for computational electromagnetics: the rapid solution of problems of interest just by processing the geometry and the sources involved. The impact of such models that can "generalize" to new geometries is more profound for large-scale computations, such as those encountered in wireless propagation scenarios. We present generalizable models for indoor propagation that can predict received signal strengths within new geometries, beyond those of the training set of the model, for transmitters and receivers at multiple positions, and for new frequencies. We show that a convolutional neural network can "learn" the physics of indoor radiowave propagation from ray-tracing solutions of a small set of training geometries, so that it can eventually deal with substantially different geometries. We emphasize the role of exploiting physical insights in the training of the network, by defining input parameters and cost functions that assist the network to efficiently learn basic and complex propagation mechanisms.
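The abstract does not specify the network's input encoding, but one way to inject physical insight through an input parameter, sketched here purely as an assumption, is to precompute a free-space path-loss map around the transmitter and feed it to the CNN as an extra channel next to the geometry grid:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def fspl_db(dist_m, freq_hz):
    """Free-space path loss in dB: 20log10(d) + 20log10(f) + 20log10(4*pi/c)."""
    return (20 * np.log10(dist_m) + 20 * np.log10(freq_hz)
            + 20 * np.log10(4 * np.pi / C))

def physics_channel(shape, tx, cell_m, freq_hz):
    """Per-cell free-space loss map, usable as an extra CNN input channel
    alongside the geometry grid of an indoor scene."""
    ys, xs = np.indices(shape)
    d = np.hypot(ys - tx[0], xs - tx[1]) * cell_m
    d = np.maximum(d, cell_m / 2)  # avoid log(0) at the transmitter cell
    return fspl_db(d, freq_hz)

# 64 x 64 grid of 0.5 m cells, transmitter in the middle, 2.4 GHz
loss = physics_channel((64, 64), tx=(32, 32), cell_m=0.5, freq_hz=2.4e9)
```

A channel like this hands the network the dominant distance and frequency dependence for free, leaving it to learn only the deviations caused by walls and multipath; it also makes new frequencies a change of input rather than a new task.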



1997 · pp. 931-935
Author(s): Anders Lansner, Örjan Ekeberg, Erik Fransén, Per Hammarlund, Tomas Wilhelmsson

2018 · Vol 7 (3.15) · pp. 95
Author(s): M Zabir, N Fazira, Zaidah Ibrahim, Nurbaity Sabri

This paper evaluates the accuracy of two pre-trained Convolutional Neural Network (CNN) models, AlexNet and GoogLeNet, together with one custom CNN. AlexNet and GoogLeNet have proven capabilities, having produced strong results in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The evaluation is based on the accuracy, loss, and time taken by the training and validation processes. The dataset used is Caltech101, published by the California Institute of Technology (Caltech), which contains 101 object categories. The results reveal that the custom CNN architecture achieves 91.05% accuracy, whereas AlexNet and GoogLeNet both achieve a similar accuracy of 99.65%. GoogLeNet converges at an earlier training stage and yields the smallest error of the three models.


2020 · Vol 31 (3) · pp. 287-296
Author(s): Ahmed A. Moustafa, Angela Porter, Ahmed M. Megreya

Many students suffer from anxiety when performing numerical calculations. Mathematics anxiety is a condition that has a negative effect on educational outcomes and future employment prospects. While there is a multitude of behavioral studies on mathematics anxiety, its underlying cognitive and neural mechanisms remain unclear. This article provides a systematic review of cognitive studies that investigated mathematics anxiety. As there are no prior neural network models of mathematics anxiety, this article discusses how previous neural network models of mathematical cognition could be adapted to simulate the neural and behavioral findings on mathematics anxiety. In other words, we provide a novel integrative network theory of the links between mathematics anxiety, cognition, and brain substrates. This theoretical framework may explain the impact of mathematics anxiety on a range of cognitive and neuropsychological tests, and could therefore improve our understanding of the cognitive and neurological mechanisms underlying mathematics anxiety. It also has important applications: a better understanding of mathematics anxiety could inform more effective therapeutic techniques that, in turn, could lead to significant improvements in educational outcomes.


Author(s): Ratish Puduppully, Li Dong, Mirella Lapata

Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models that are trained end-to-end, without explicitly modeling what to say and in what order. In this work, we present a neural network architecture that incorporates content selection and planning without sacrificing end-to-end training. We decompose the generation task into two stages. Given a corpus of data records (paired with descriptive documents), we first generate a content plan highlighting which information should be mentioned and in which order, and then generate the document while taking the content plan into account. Automatic and human-based evaluation experiments show that our model outperforms strong baselines, improving the state of the art on the recently released RotoWIRE dataset.
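The two-stage decomposition can be sketched with plain Python. The salience rule, templates, and records below are invented stand-ins for what the paper's model learns end-to-end with neural networks:

```python
def plan(records, k=2):
    """Stage 1 (content selection and planning): pick the k most salient
    records and fix the order in which they will be mentioned. Salience
    here is a hand-written stand-in rule (more points = more report-worthy);
    the paper learns this stage from data."""
    return sorted(records, key=lambda r: -r["pts"])[:k]

def realize(content_plan):
    """Stage 2 (generation): verbalize the plan, following its order.
    The paper conditions a neural decoder on the plan; here a template
    stands in for the decoder."""
    sents = [f'{r["name"]} scored {r["pts"]} points' for r in content_plan]
    return ". ".join(sents) + "."

# Toy box-score records in the spirit of RotoWIRE (names illustrative)
box_score = [
    {"name": "Player A", "pts": 32},
    {"name": "Player B", "pts": 18},
    {"name": "Player C", "pts": 6},
]
summary = realize(plan(box_score))
```

The point of the split is that "what to say" (Player C is dropped) and "in what order" (highest scorer first) are decided explicitly before a single word is generated, while both stages can still be trained jointly.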


Electronics · 2021 · Vol 10 (21) · pp. 2687
Author(s): Eun-Hun Lee, Hyeoncheol Kim

The significant advantage of deep neural networks is that, by stacking layers deeply, each upper layer can capture high-level features of the data based on information acquired from the lower layers. Since it is challenging to interpret what knowledge a neural network has learned, various studies on explaining neural networks have emerged to overcome this problem. However, these studies generate local explanations of single instances rather than a generalized, global interpretation of the neural network model itself. To overcome these drawbacks of previous approaches, we propose a global interpretation method for deep neural networks based on high-level features of the model. We first analyzed the relationship between the input and hidden layers to represent the high-level features of the model, then interpreted the decision-making process of the neural network through those high-level features. In addition, we applied network pruning techniques to produce concise explanations and analyzed the effect of layer complexity on interpretability. We present experiments on the proposed approach using three different datasets and show that it can generate global explanations of deep neural network models with high accuracy and fidelity.


2020 · Vol 2 (3) · pp. 156-164
Author(s): Dr. Akey Sungheetha, Dr. Rajesh Sharma R

In the field of image processing, nearly all computational models attempt to solve problems through encoded neurons. However, decoding orientation and regression analysis remain open problems because of their complexity. Current technologies use two steps: decoding the intermediate terms, and reconstruction using the decoded information. Regression performance lags because of these decoded intermediate terms. Conventional neural network models perform well in feature classification and representation, but their performance degrades when handling high-level features. Considering these issues in image classification and regression, the proposed model is designed around a capsule network, an architecture well suited to handling high-level features. The experimental results of the proposed model are compared with those of conventional neural network models such as BPNN and CNN to validate its superior performance. The proposed model achieves a retrieval efficiency of 95.4%, which is much better than the other neural network models.
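The abstract does not detail its capsule network, but the standard ingredient that suits capsules to high-level features can be sketched: the squash nonlinearity, which keeps a capsule vector's direction (the feature's pose) while mapping its length into [0, 1) so it reads as a presence probability:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule squash nonlinearity: v = (|s|^2 / (1 + |s|^2)) * s / |s|.
    Preserves the vector's direction but bounds its length below 1, so
    the length of a capsule's output can act as the probability that
    the feature it encodes is present in the input."""
    sq = np.sum(s * s, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

v = squash(np.array([3.0, 4.0]))  # input length 5
```

Here an input of length 5 squashes to length 25/26 ≈ 0.96 along the same direction: a long activity vector stays "confident" while short ones shrink toward zero.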

