Flaw Selection Strategies for Partial-Order Planning

1997 ◽  
Vol 6 ◽  
pp. 223-262 ◽  
Author(s):  
M. E. Pollack ◽  
D. Joslin ◽  
M. Paolucci

Several recent studies have compared the relative efficiency of alternative flaw selection strategies for partial-order causal link (POCL) planning. We review this literature, and present new experimental results that generalize the earlier work and explain some of the discrepancies in it. In particular, we describe the Least-Cost Flaw Repair (LCFR) strategy developed and analyzed by Joslin and Pollack (1994), and compare it with other strategies, including Gerevini and Schubert's (1996) ZLIFO strategy. LCFR and ZLIFO make very different, and apparently conflicting claims about the most effective way to reduce search-space size in POCL planning. We resolve this conflict, arguing that much of the benefit that Gerevini and Schubert ascribe to the LIFO component of their ZLIFO strategy is better attributed to other causes. We show that for many problems, a strategy that combines least-cost flaw selection with the delay of separable threats will be effective in reducing search-space size, and will do so without excessive computational overhead. Although such a strategy thus provides a good default, we also show that certain domain characteristics may reduce its effectiveness.
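
The combined strategy above is easy to state concretely. Below is a minimal Python sketch of least-cost flaw selection: each flaw exposes its candidate repairs, and LCFR picks the flaw with the fewest of them, minimizing the branching factor at the current search node. The Flaw structure and repair lists are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Flaw:
    kind: str                                    # "open-condition" or "threat"
    repairs: list = field(default_factory=list)  # candidate refinements

def select_flaw_lcfr(flaws):
    """LCFR: choose the flaw with the fewest candidate repairs, so the
    branching factor at this node of the search space is minimized."""
    return min(flaws, key=lambda f: len(f.repairs))

# A threat with one repair is selected before an open condition with
# three candidate establishers.
flaws = [Flaw("open-condition", ["a1", "a2", "a3"]),
         Flaw("threat", ["promote"])]
assert select_flaw_lcfr(flaws).kind == "threat"
```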

2003 ◽  
Vol 20 ◽  
pp. 405-430 ◽  
Author(s):  
H. L. S. Younes ◽  
R. G. Simmons

VHPOP is a partial order causal link (POCL) planner loosely based on UCPOP. It draws on the experience gained in the early to mid 1990s with flaw selection strategies for POCL planning, and combines this with more recent developments in domain-independent planning, such as distance-based heuristics and reachability analysis. We present an adaptation of the additive heuristic for plan space planning, and modify it to account for possible reuse of existing actions in a plan. We also propose a large set of novel flaw selection strategies, and show how these can help POCL planners solve more problems than previously possible. VHPOP also supports planning with durative actions by incorporating standard techniques for temporal constraint reasoning. We demonstrate that the same heuristic techniques used to boost the performance of classical POCL planning can be effective in domains with durative actions as well. The result is a versatile heuristic POCL planner competitive with established CSP-based and heuristic state space planners.
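
As a rough illustration of the plan-space adaptation of the additive heuristic, the sketch below (a hedged approximation, not VHPOP's code) estimates the cost of closing one open condition: zero if an existing plan step already achieves it (action reuse), otherwise one new step plus the summed estimates for that step's preconditions.

```python
def h_add(goal, actions, achieved_by_plan, memo=None):
    """Additive-heuristic estimate for one open condition.

    `actions` is a list of dicts with "preconds" and "effects" lists;
    `achieved_by_plan` is the set of literals some existing step achieves.
    """
    memo = {} if memo is None else memo
    if goal in achieved_by_plan:          # reuse an existing step: cost 0
        return 0
    if goal in memo:
        return memo[goal]
    memo[goal] = float("inf")             # guard against cyclic recursion
    best = float("inf")
    for act in actions:
        if goal in act["effects"]:
            cost = 1 + sum(h_add(p, actions, achieved_by_plan, memo)
                           for p in act["preconds"])
            best = min(best, cost)
    memo[goal] = best
    return best
```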


2010 ◽  
Vol 39 ◽  
pp. 217-268 ◽  
Author(s):  
M. O. Riedl ◽  
R. M. Young

Narrative, and in particular storytelling, is an important part of the human experience. Consequently, computational systems that can reason about narrative can be more effective communicators, entertainers, educators, and trainers. One of the central challenges in computational narrative reasoning is narrative generation, the automated creation of meaningful event sequences. There are many factors -- logical and aesthetic -- that contribute to the success of a narrative artifact. Central to this success is its understandability. We argue that the following two attributes of narratives are universal: (a) the logical causal progression of plot, and (b) character believability. Character believability is the perception by the audience that the actions performed by characters do not negatively impact the audience's suspension of disbelief. Specifically, characters must be perceived by the audience to be intentional agents. In this article, we explore the use of refinement search as a technique for solving the narrative generation problem -- to find a sound and believable sequence of character actions that transforms an initial world state into a world state in which goal propositions hold. We describe a novel refinement search planning algorithm -- the Intent-based Partial Order Causal Link (IPOCL) planner -- that, in addition to creating causally sound plot progression, reasons about character intentionality by identifying possible character goals that explain their actions and creating plan structures that explain why those characters commit to their goals. We present the results of an empirical evaluation that demonstrates that narrative plans generated by the IPOCL algorithm support audience comprehension of character intentions better than plans generated by conventional partial-order planners.
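
The refinement-search core that IPOCL extends can be sketched in a few lines. The sketch below is a generic best-first plan-space loop in Python, with the paper-specific detail that the flaw list also contains intentionality flaws; the `flaws`, `refine`, and `rank` callables are assumptions for illustration, not IPOCL's actual interfaces.

```python
import heapq

def refinement_search(root, flaws, refine, rank):
    """Best-first plan-space search: pop the most promising partial plan,
    pick one flaw, and push every child plan that repairs it."""
    frontier = [(rank(root), 0, root)]
    tie = 1                         # tiebreaker so plans are never compared
    while frontier:
        _, _, plan = heapq.heappop(frontier)
        remaining = flaws(plan)     # open conditions, threats, and, in
        if not remaining:           # IPOCL, unmotivated-action flaws
            return plan             # a flawless partial plan is a solution
        for child in refine(plan, remaining[0]):
            heapq.heappush(frontier, (rank(child), tie, child))
            tie += 1
    return None                     # search space exhausted
```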


Time-lock encryption is a type of encryption in which the process is bound by a factor of time, enabling previously impossible applications such as secure auctions, mortgage payment, key escrow, and fair multiparty computations. Existing time-lock approaches either incur computational overhead to measure the passage of time or rely on analogues of real-world time, and hence lack reliability. We propose a reliable time-lock encryption scheme in which even receivers with relatively weak computational resources can decrypt the cipher after an accurate real-world deadline, without any interaction with the sender. The proposed solution fetches time from timeservers over a secured HTTPS channel for time-lock accuracy and uses strong AES-256 encryption/decryption for reliability. The paper briefly discusses a Java-based prototype implementation of the proposed approach and the experimental results.
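
The decryption gate is straightforward to sketch. The paper's prototype is Java-based; the Python sketch below is only illustrative, and the timeserver URL, the use of the HTTPS Date header, and the choice of AES-256 in GCM mode are assumptions rather than the paper's exact design.

```python
import urllib.request
from datetime import datetime
from email.utils import parsedate_to_datetime

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def fetch_trusted_time(url="https://www.example.com"):
    """Read the Date header of an HTTPS response as the trusted clock."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return parsedate_to_datetime(resp.headers["Date"])

def time_lock_decrypt(key: bytes, nonce: bytes, ciphertext: bytes,
                      deadline: datetime) -> bytes:
    """Decrypt with AES-256-GCM (32-byte key) only after the deadline.

    `deadline` must be timezone-aware, like the fetched Date header.
    """
    if fetch_trusted_time() < deadline:
        raise PermissionError("deadline not yet reached")
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```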


2021 ◽  
Author(s):  
Che-Hang Cliff Chan

The thesis presents a Genetic Algorithm with Adaptive Search Space (GAASS) proposed to improve both the convergence performance and the solution accuracy of traditional Genetic Algorithms (GAs). The proposed GAASS method has been hybridized with a real-coded genetic algorithm to perform hysteresis parameter identification and hysteresis inverse compensation of an electromechanical valve actuator installed on a pneumatic system. The experimental results demonstrate the superior performance of the proposed GAASS in the search for optimum solutions.
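
The adaptive-search-space idea can be sketched as a bound-contraction step applied between generations of a real-coded GA. The contraction factor and the recentring rule below are assumptions for illustration, not the thesis's exact GAASS update.

```python
import random

def adapt_bounds(bounds, best, shrink=0.9):
    """Contract each (lo, hi) parameter interval around the best individual,
    so later generations sample a progressively tighter search space."""
    adapted = []
    for (lo, hi), b in zip(bounds, best):
        half = 0.5 * (hi - lo) * shrink
        adapted.append((max(lo, b - half), min(hi, b + half)))
    return adapted

def sample_individual(bounds):
    """Draw one real-coded individual uniformly from the current bounds."""
    return [random.uniform(lo, hi) for lo, hi in bounds]
```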


2021 ◽  
Vol 27 (11) ◽  
pp. 563-574
Author(s):  
V. V. Kureychik ◽  
S. I. Rodzin

Computational models of bio heuristics based on physical and cognitive processes are presented. Bio heuristics (including evolutionary and swarm bio heuristics) are compared on characteristics such as rate of convergence, computational complexity, required amount of memory, configuration of algorithm parameters, and difficulty of software implementation. The balance between the convergence rate of bio heuristics and the diversification of the search space for solutions to optimization problems is estimated. Experimental results are presented for the problem of placing Peco graphs in a lattice with the minimum total length of the graph edges.


Author(s):  
Tüze Kuyucu ◽  
Ivan Tanev ◽  
Katsunori Shimohara

In Genetic Programming (GP), the search space most often grows faster than linearly as the number of tasks to be accomplished increases. This causes one of the greatest problems in Evolutionary Computation (EC): scalability. The aim of the work presented here is to facilitate the evolution of control systems for complex robotic systems. The authors use a combination of mechanisms specifically designed to facilitate the fast evolution of systems with multiple objectives: a genetic transposition inspired seeding, a strongly typed crossover, and multiobjective optimization. The authors demonstrate that, when used together, these mechanisms improve not only the performance of GP but also the reliability of the final designs. They investigate the effect of the aforementioned mechanisms on the efficiency of GP employed for the coevolution of locomotion gaits and sensing of a simulated snake-like robot (Snakebot). Experimental results show that the mechanisms set forth contribute to a significant increase in the efficiency of evolving fast-moving and sensing Snakebots, as well as to the robustness of the final designs.
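
Of the three mechanisms, strongly typed crossover is the most self-contained to illustrate. In the hedged Python sketch below, crossover points are restricted to subtree pairs with matching return types, so offspring always remain type-correct; the tree representation is an assumption, not the authors' encoding.

```python
import random

class Node:
    def __init__(self, label, rtype, children=()):
        self.label, self.rtype, self.children = label, rtype, list(children)

def subtrees(node):
    yield node
    for child in node.children:
        yield from subtrees(child)

def typed_crossover(a, b):
    """Swap two random subtrees with identical return types (in place)."""
    pairs = [(x, y) for x in subtrees(a) for y in subtrees(b)
             if x.rtype == y.rtype]
    if not pairs:                  # no type-compatible points: no-op
        return a, b
    x, y = random.choice(pairs)
    x.label, y.label = y.label, x.label
    x.children, y.children = y.children, x.children
    return a, b
```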


2014 ◽  
Vol 5 (1) ◽  
pp. 39-58 ◽  
Author(s):  
Rashmi Malhotra

To make sound decisions, managers analyze data from multiple sources using different dimensions and eventually integrate the results of their analysis. This study proposes the design of a multi-attribute decision support system that combines the analytical power of two different tools: data envelopment analysis (DEA) and particle swarm optimization (PSO), one of the major swarm intelligence algorithms. DEA measures the relative efficiency of decision-making units that use multiple inputs and outputs, providing objective measures without making any specific assumptions about the data. PSO's main strength, on the other hand, lies in exploring the entire search space. This study proposes a modeling technique that uses the two techniques jointly to benefit from both methodologies.
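
PSO's exploratory strength comes from its velocity update, sketched below for a single particle. The inertia and acceleration coefficients are common defaults, not values from this study.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update: inertia plus attraction toward the
    particle's own best (pbest) and the swarm's best (gbest)."""
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)
             + c2 * random.random() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```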


2020 ◽  
pp. 1-16
Author(s):  
Rui Sun ◽  
Meng Han ◽  
Chunyan Zhang ◽  
Mingyao Shen ◽  
Shiyu Du

High utility itemset mining (HUIM) with negative utility is an emerging data mining task. However, setting the minimum utility threshold is always a challenge when mining high utility itemsets (HUIs) with negative items. Although the top-k HUIM method is very common, it can only mine itemsets with positive items, and itemsets are missed when mining itemsets with negative items. To solve this problem, we propose an effective algorithm called THN (Top-k High Utility Itemset Mining with Negative Utility). THN uses a strategy for automatically raising the minimum utility threshold. To avoid multiple scans of the database, it uses transaction merging and dataset projection techniques. It prunes the search space using a redefined sub-tree utility value and a redefined local utility value. Experimental results on real datasets show that THN is efficient in terms of runtime and memory usage, and has excellent scalability. Moreover, experiments show that THN performs particularly well on dense datasets.
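
The threshold-raising strategy common to top-k miners can be sketched with a min-heap: keep the k highest utilities found so far and raise the minimum utility threshold to the smallest of them once k itemsets are known. This is a generic sketch of the idea; THN's redefined sub-tree and local utility pruning are not shown.

```python
import heapq

class TopKThreshold:
    """Track the k best itemset utilities and the induced threshold."""

    def __init__(self, k):
        self.k, self.heap = k, []

    def update(self, utility):
        """Record a found itemset's utility; return the current threshold."""
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, utility)
        elif utility > self.heap[0]:
            heapq.heapreplace(self.heap, utility)
        # Until k itemsets have been found, the threshold stays minimal.
        return self.heap[0] if len(self.heap) == self.k else 0
```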


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Zhaojuan Zhang ◽  
Wanliang Wang ◽  
Ruofan Xia ◽  
Gaofeng Pan ◽  
Jiandong Wang ◽  
...  

Background: Reconstructing ancestral genomes is one of the central problems in genome rearrangement analysis, since finding the most likely true ancestor is of significant importance in phylogenetic reconstruction. Large-scale genome rearrangements can provide essential insights into evolutionary processes. However, when the genomes are large and distant, classical median solvers have failed to adequately address these challenges due to the exponential growth of the search space. Ancestral genome inference thus remains a task of paramount importance that continues to challenge current methods, and its difficulty is further increased by the ongoing rapid accumulation of whole-genome data.

Results: In response to these challenges, we provide two contributions to ancestral genome inference. First, an improved discrete quantum-behaved particle swarm optimization algorithm (IDQPSO), which averages two of the fitness values, is proposed to address the discrete search space. Second, we incorporate DCJ sorting into IDQPSO (IDQPSO-Median). In comparison with other methods, when the genomes are large and distant, IDQPSO-Median has the lowest median score, the highest adjacency accuracy, and the closest distance to the true ancestor. In addition, we have integrated our IDQPSO-Median approach with the GRAPPA framework, and our experiments show that the resulting phylogenetic method is very accurate and effective.

Conclusions: Our experimental results demonstrate the advantages of the IDQPSO-Median approach over other methods when the genomes are large and distant, and show that it achieves better scalability than existing algorithms. Moreover, experiments on simulated and real datasets confirm that IDQPSO-Median, when integrated with the GRAPPA framework, outperforms other heuristics in terms of accuracy, while continuing to infer phylogenies equivalent or close to the true trees within 5 days of computation, far beyond the difficulty level that GRAPPA alone can handle.
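
The continuous core that IDQPSO discretizes is the quantum-behaved PSO (QPSO) position update, sketched below for one particle. Parameter values and the discretization onto genome orderings are assumptions here, not the paper's specification.

```python
import math
import random

def qpso_step(x, pbest, gbest, mbest, beta=0.75):
    """One QPSO update: sample a new position around the local attractor.

    `mbest` is the mean of all particles' personal bests; `beta` is the
    contraction-expansion coefficient.
    """
    new_x = []
    for xi, pb, gb, mb in zip(x, pbest, gbest, mbest):
        phi = random.random()
        p = phi * pb + (1 - phi) * gb              # local attractor
        u = random.random()
        step = beta * abs(mb - xi) * math.log(1.0 / u)
        new_x.append(p + step if random.random() < 0.5 else p - step)
    return new_x
```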

