An Algorithm for the Generation of an Optimum CMM Inspection Path

1994 ◽  
Vol 116 (3) ◽  
pp. 396-404 ◽  
Author(s):  
E. Lu ◽  
J. Ni ◽  
S. M. Wu

An algorithm for generating an optimum CMM inspection path is developed to improve the throughput of CMMs. In this algorithm, a modified 3-D ray-tracing technique is applied to an octree database of the CMM configuration space to detect obstacles between any two target points. When an obstacle is detected, collision-free silhouette-contour vertices of the object are generated, according to a selection criterion, as candidate points of a vertex path. As the ray advances, a sequential decision-making technique derives the suboptimum vertex path from the possible collision-free vertex paths. Once the suboptimum vertex path is generated, a selection strategy ensures a correct edge-path sequence for deriving an optimum edge-point path. A 3-D simulation shows that the proposed global algorithm eliminates the dynamically undesirable characteristics of octree-based algorithms and saves search time in congested workspaces by finding paths around colliding objects. Actual measurement of a test part indicates that the proposed method reduces inspection time to less than half that of the interactive graphic method.
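The obstacle-detection step above can be illustrated with a minimal sketch: stepping a ray between two target points through a voxel occupancy grid (a simple stand-in for the paper's octree configuration-space database; the grid, points, and step count are all illustrative assumptions, not the authors' implementation).

```python
import numpy as np

def ray_blocked(occ, p0, p1, steps=200):
    """Check whether the straight segment p0 -> p1 passes through any
    occupied voxel of the boolean occupancy grid `occ` (a stand-in for
    an octree configuration-space database)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    for t in np.linspace(0.0, 1.0, steps):
        i, j, k = np.floor(p0 + t * (p1 - p0)).astype(int)
        if occ[i, j, k]:
            return True   # obstacle hit: a detour path must be planned
    return False

# Toy workspace: a slab of occupied voxels between the two target points.
occ = np.zeros((10, 10, 10), bool)
occ[5, :, :] = True                              # obstacle slab at x = 5
print(ray_blocked(occ, (1, 5, 5), (8, 5, 5)))    # True: ray hits the slab
print(ray_blocked(occ, (1, 5, 5), (1, 8, 5)))    # False: clear path
```

An octree replaces the dense grid with a hierarchical lookup, so large empty regions are skipped in a single test rather than voxel by voxel.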

Author(s):  
Ming-Sheng Ying ◽  
Yuan Feng ◽  
Sheng-Gang Ying

Abstract
A Markov decision process (MDP) offers a general framework for modelling sequential decision making where outcomes are random. In particular, it serves as a mathematical framework for reinforcement learning. This paper introduces an extension of MDP, namely quantum MDP (qMDP), that can serve as a mathematical model of decision making about quantum systems. We develop dynamic programming algorithms for policy evaluation and for finding optimal policies for qMDPs in the finite-horizon case. The results obtained in this paper provide some useful mathematical tools for reinforcement learning techniques applied to the quantum world.
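The finite-horizon dynamic programming the abstract refers to can be sketched in its classical special case: backward induction over a small MDP. The transition tensor and rewards below are illustrative assumptions, and this is the classical analogue, not the quantum extension the paper develops.

```python
import numpy as np

# Two states, two actions. P[a, s, s'] is the transition probability,
# r[s, a] the immediate reward (all values illustrative).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # action 0
              [[0.5, 0.5], [0.6, 0.4]]])    # action 1
r = np.array([[1.0, 0.0],                   # r[s, a]
              [0.0, 2.0]])

def finite_horizon(P, r, H):
    """Backward induction: returns optimal values V[t, s] and a
    (generally non-stationary) optimal policy pi[t, s]."""
    nA, nS, _ = P.shape
    V = np.zeros((H + 1, nS))        # V[H] = 0 is the terminal value
    pi = np.zeros((H, nS), int)
    for t in range(H - 1, -1, -1):
        Q = r.T + P @ V[t + 1]       # Q[a, s]: reward now + value-to-go
        pi[t] = Q.argmax(axis=0)
        V[t] = Q.max(axis=0)
    return V, pi

V, pi = finite_horizon(P, r, H=5)
```

Policy evaluation is the same backward pass with the fixed policy's action substituted for the argmax; the qMDP algorithms replace states and transitions with quantum states and quantum operations.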


2021 ◽  
pp. 1-16
Author(s):  
Pegah Alizadeh ◽  
Emiliano Traversi ◽  
Aomar Osmani

Markov decision process models (MDPs) are a powerful tool for planning tasks and sequential decision-making problems. In this work we deal with MDPs with imprecise rewards, often used in situations where the data is uncertain. In this context, we provide algorithms for finding the policy that minimizes the maximum regret. To the best of our knowledge, all the regret-based methods proposed in the literature focus on providing an optimal stochastic policy. We introduce for the first time a method to calculate an optimal deterministic policy using optimization approaches. Deterministic policies are easily interpretable for users because, for a given state, they provide a unique choice. To better motivate the use of an exact procedure for finding a deterministic policy, we show some (theoretical and experimental) cases where the intuitive idea of using a deterministic policy obtained by “determinizing” the optimal stochastic policy leads to a policy far from the exact deterministic policy.

