Estimating product evolution graph using Kolmogorov complexity

Author(s):  
Yasuhiro Hayase ◽  
Tetsuya Kanda ◽  
Takashi Ishio
2020 ◽  
pp. 1-28
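The title names Kolmogorov complexity as the yardstick for relating product variants. Since Kolmogorov complexity is uncomputable, work in this vein typically approximates it with a real compressor; a minimal sketch of the normalized compression distance (NCD), a standard such proxy (function names and toy data here are illustrative, not taken from the paper):

```python
import zlib

def clen(data: bytes) -> int:
    """Compressed length as a crude stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy "product versions": two near-identical variants vs. an unrelated blob.
v1 = b"int main() { return 0; }" * 50
v2 = b"int main() { return 1; }" * 50
other = bytes(range(256)) * 5

print(round(ncd(v1, v2), 3), round(ncd(v1, other), 3))
```

With inputs like these, the two near-identical versions come out much closer than the unrelated data; pairwise NCD values between releases can then be used to propose edges of an evolution graph.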

Author(s):  
Nikita Moriakov

Abstract A theorem of Brudno says that the Kolmogorov–Sinai entropy of an ergodic subshift over $\mathbb{N}$ equals the asymptotic Kolmogorov complexity of almost every word in the subshift. The purpose of this paper is to extend this result to subshifts over computable groups that admit computable regular symmetric Følner monotilings, which we introduce in this work. For every $d \in \mathbb{N}$, the groups $\mathbb{Z}^d$ and $\mathsf{UT}_{d+1}(\mathbb{Z})$ admit computable regular symmetric Følner monotilings, for which the required computing algorithms are provided.
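For context, Brudno's classical theorem can be stated as follows (standard notation, not quoted from the paper): for an ergodic subshift $X$ with shift map $\sigma$ and ergodic measure $\mu$,

```latex
\[
  h_\mu(X,\sigma) \;=\; \lim_{n\to\infty} \frac{K\bigl(x_0 x_1 \cdots x_{n-1}\bigr)}{n}
  \qquad \text{for $\mu$-almost every } x \in X,
\]
```

where $K$ denotes Kolmogorov complexity of the initial word and $h_\mu$ the Kolmogorov–Sinai entropy. The paper's contribution is to replace $\mathbb{N}$ by suitable computable groups.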


2007 ◽  
Vol 72 (3) ◽  
pp. 1003-1018 ◽  
Author(s):  
John Chisholm ◽  
Jennifer Chubb ◽  
Valentina S. Harizanov ◽  
Denis R. Hirschfeldt ◽  
Carl G. Jockusch ◽  
...  

Abstract We study the weak truth-table and truth-table degrees of the images of subsets of computable structures under isomorphisms between computable structures. In particular, we show that there is a low c.e. set that is not weak truth-table reducible to any initial segment of any scattered computable linear ordering. Countable subsets of $2^{\omega}$ and Kolmogorov complexity play a major role in the proof.


Author(s):  
Alessandro Achille ◽  
Giovanni Paolini ◽  
Glen Mbeng ◽  
Stefano Soatto

Abstract We introduce an asymmetric distance on the space of learning tasks and a framework to compute their complexity. These concepts are foundational to the practice of transfer learning, whereby a parametric model is pre-trained on one task and then fine-tuned on another. The framework we develop is non-asymptotic, captures the finite nature of the training dataset, and allows distinguishing learning from memorization. It encompasses, as special cases, classical notions from Kolmogorov complexity and from Shannon and Fisher information. Unlike some of those frameworks, however, it can be applied to large-scale models and real-world datasets. Our framework is the first to measure complexity in a way that accounts for the effect of the optimization scheme, which is critical in deep learning.
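The abstract cites Kolmogorov complexity as a special case of its framework. Purely as an illustration (this is not the authors' definition), an asymmetric, complexity-flavored distance between two datasets can be approximated with a compressor via the standard identity $C(y \mid x) \approx C(xy) - C(x)$, which holds up to logarithmic terms:

```python
import zlib

def clen(b: bytes) -> int:
    # Compressed length as a crude stand-in for Kolmogorov complexity.
    return len(zlib.compress(b, 9))

def cond_complexity(target: bytes, given: bytes) -> int:
    # C(target | given) ~= C(given + target) - C(given), up to log terms.
    return clen(given + target) - clen(given)

# Toy "tasks": task_b strictly extends task_a with extra material.
task_a = b"the quick brown fox jumps over the lazy dog " * 40
task_b = task_a + b"a few extra sentences about foxes " * 10

print(cond_complexity(task_a, task_b), cond_complexity(task_b, task_a))
```

The asymmetry is the point: describing the subset task given the superset is cheap, while the reverse direction must pay for the extra material, mirroring the intuition that transfer from a richer task to a contained one is easier than the converse.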


2021 ◽  
Author(s):  
Cheng Chen ◽  
Jesse Mullis ◽  
Beshoy Morkos

Abstract Risk management is vital to a product’s lifecycle. The current practice of reducing risk relies on domain experts or management tools to identify unexpected engineering changes, an approach that is prone to human error and laborious operations. This study presents a framework that contributes to requirements management by implementing a generative probabilistic model, supervised latent Dirichlet allocation (LDA) with collapsed Gibbs sampling (CGS), to study the topic composition of three unlabeled and unstructured industrial requirements documents. Because choosing the number of topics remains an open question, a case study estimates an appropriate number of topics to represent each requirements document based on both perplexity and coherence values. Using human evaluations and interpretable visualizations, the results demonstrate the different levels of design detail obtained by varying the number of topics. Further, a relevance measurement provides the flexibility to improve the quality of topics. Designers can increase design efficiency by understanding, organizing, and analyzing high-volume requirements documents in configuration management based on topics across different domains. With domain knowledge and purposeful interpretation of topics, designers can make informed decisions on product evolution and mitigate the risks of unexpected engineering changes.
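The collapsed Gibbs sampler at the heart of the abstract can be sketched in its plain, unsupervised form (toy corpus and all names here are illustrative, not the authors' code): each token's topic is resampled from the collapsed conditional, which multiplies a document-topic factor by a topic-word factor.

```python
import random
from collections import defaultdict

def lda_cgs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for (unsupervised) LDA.

    docs: list of documents, each a list of integer word ids.
    Returns per-document topic counts and per-topic word counts.
    """
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})            # vocabulary size
    ndk = [[0] * n_topics for _ in docs]             # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                              # topic totals
    z = []                                           # topic of every token
    for d, doc in enumerate(docs):                   # random initialization
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)
            zd.append(k)
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the token's current assignment from the counts.
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # Collapsed conditional: (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta).
                weights = [
                    (ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                    for t in range(n_topics)
                ]
                r = rng.random() * sum(weights)
                for t, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = t
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk, nkw

# Toy "requirements" corpus: two groups with disjoint vocabularies.
docs = [[0, 1, 0, 1, 2]] * 4 + [[3, 4, 3, 4, 5]] * 4
ndk, nkw = lda_cgs(docs, n_topics=2)
```

The supervised variant the study uses additionally conditions on document labels, and in practice the number of topics would be swept and scored by perplexity and coherence, as the abstract describes.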

