Topological Reconfigurations Based on a Concatenation of Bennett and RPRP Mechanisms

Author(s):  
Kuan-Lun Hsu ◽  
Kwun-Lon Ting

This paper presents a family of over-constrained mechanisms with revolute and prismatic joints, constructed by concatenating a Bennett 4R and a spatial RPRP mechanism. This is a notable advance: for the first time, an assembly of two different source modules is used in the modular construction. A Bennett 4R mechanism and a spatial RPRP mechanism are mated for the purpose of demonstration. Topological reconfigurations of the synthesized mechanisms are also discussed. The results indicate that the synthesized mechanisms can be topologically reconfigured into either a plane-symmetric structure or a spatial four-bar RCRC loop. These synthesized mechanisms and their reconfigurations represent a first and unique contribution to theoretical and applied kinematics. Academically, the proposed methodology can be used to synthesize several families of over-constrained mechanisms. Each family of new mechanisms is unique and has its own academic significance, because these mechanisms are theoretical exceptions to the Chebychev–Grübler–Kutzbach criterion. The geometrical principles that address the combination of hybrid loops turn the topological synthesis of over-constrained mechanisms into a systematic approach rather than a random search. Industrially, such paradoxical mechanisms could also be valuable. The ambiguity of their structural synthesis has kept engineers unaware of these theoretical exceptions, and as a result such mechanisms have rarely been implemented in real-world applications. The findings of this research can help engineers understand how to configure such mechanisms with the desired mobility. From a practical point of view, over-constrained mechanisms can transmit motion with fewer links than general-type mechanisms require, so engineers can achieve a compact design with fewer components. These features could be an attractive advantage in real-world applications.
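The over-constraint discussed above can be made concrete with the Chebychev–Grübler–Kutzbach criterion itself. Applied to the Bennett 4R linkage (four links, four single-degree-of-freedom revolute joints), the spatial criterion predicts a negative mobility, i.e. a rigid structure, even though the linkage actually moves with one degree of freedom. A minimal sketch (function name is illustrative):

```python
def ckg_mobility(num_links, joint_freedoms):
    """Spatial Chebychev-Grubler-Kutzbach mobility criterion:
    M = 6*(n - 1 - j) + sum(f_i), where n is the number of links,
    j the number of joints, and f_i the freedoms of joint i."""
    j = len(joint_freedoms)
    return 6 * (num_links - 1 - j) + sum(joint_freedoms)

# Bennett 4R: four links, four 1-DOF revolute joints.
# The criterion gives M = -2, predicting a rigid structure,
# yet the linkage moves with one DOF -- hence "over-constrained".
print(ckg_mobility(4, [1, 1, 1, 1]))   # -2

# A spatial RPRP loop (two revolute, two prismatic 1-DOF joints)
# gets the same formal verdict.
print(ckg_mobility(4, [1, 1, 1, 1]))   # -2
```

A serial 6R arm, by contrast, gets the expected `ckg_mobility(7, [1]*6) == 6`, which is why the criterion works well for general-type mechanisms and fails only for these paradoxical geometries.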

Author(s):  
Adnan Darwiche ◽  
Knot Pipatsrisawat

Complete SAT algorithms form an important part of the SAT literature. From a theoretical perspective, complete algorithms can be used as tools for studying the complexity of different proof systems. From a practical point of view, these algorithms form the basis for tackling SAT problems arising from real-world applications. The practicality of modern complete SAT solvers undoubtedly contributes to the growing interest in the class of complete SAT algorithms. We review these algorithms in this chapter, including Davis–Putnam resolution, Stålmarck's algorithm, symbolic SAT solving, the DPLL algorithm, and modern clause-learning SAT solvers. We also discuss the issue of certifying the answers of modern complete SAT solvers.
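Of the algorithms surveyed, DPLL is the backbone of modern complete solvers. A minimal Python sketch of its two core steps, unit propagation and branching, is shown below; the clause representation and helper names are our own, not the chapter's:

```python
def dpll(clauses):
    """Minimal DPLL. Clauses: list of lists of nonzero ints,
    where -v means "not v". Returns a satisfying set of literals,
    or None if the formula is unsatisfiable."""

    def assign(lit, cls):
        """Condition the clause set on literal `lit`."""
        out = []
        for c in cls:
            if lit in c:
                continue                      # clause satisfied, drop it
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                   # empty clause: conflict
            out.append(reduced)
        return out

    def solve(cls, model):
        # Unit propagation: repeatedly assign forced literals.
        while True:
            unit = next((c[0] for c in cls if len(c) == 1), None)
            if unit is None:
                break
            cls = assign(unit, cls)
            if cls is None:
                return None
            model = model | {unit}
        if not cls:
            return model                      # all clauses satisfied
        # Branch on the first literal of the first remaining clause.
        lit = cls[0][0]
        for choice in (lit, -lit):
            reduced = assign(choice, cls)
            if reduced is not None:
                result = solve(reduced, model | {choice})
                if result is not None:
                    return result
        return None

    return solve([list(c) for c in clauses], frozenset())
```

Clause-learning solvers extend this skeleton with conflict analysis, non-chronological backtracking and learned clauses, but the propagate-then-branch loop is the same.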


2012 ◽  
Vol 22 (02) ◽  
pp. 1250024 ◽  
Author(s):  
HONGCHUN WANG ◽  
KEQING HE ◽  
BING LI ◽  
JINHU LÜ

Complex software networks, a typical kind of man-made complex network, have attracted increasing attention from various fields of science and engineering over the past ten years. With the dramatic increase in the scale and complexity of software systems, it is essential to develop a systematic approach for further investigating complex software systems using the theories and methods of complex networks and complex adaptive systems. This paper briefly reviews some recent advances in complex software networks and also develops some novel tools for their further analysis, covering modeling, analysis, evolution, measurement, and some potential real-world applications. More precisely, the paper first describes some effective modeling approaches for characterizing various complex software systems. Based on these theoretical and practical models, it then introduces recent advances in analyzing the static and dynamical behaviors of complex software networks, followed by further discussion of potential real-world applications. Finally, the paper outlines some future research topics from an engineering point of view.


Author(s):  
Chunsheng Yang ◽  
Yanni Zou ◽  
Jie Liu ◽  
Kyle R Mulligan

In the past decades, machine learning techniques, particularly classifiers, have been widely applied to various real-world applications such as prognostics and health management (PHM). In developing high-performance classifiers or machine-learning-based predictive models for PHM, model evaluation remains a challenge. Generic methods such as accuracy may not fully meet the needs of model evaluation for prognostic applications. This paper addresses this issue from the point of view of PHM systems. Generic methods are first reviewed, outlining their limitations and deficiencies with respect to PHM. Then, two approaches developed for evaluating predictive models are presented, with emphasis on the specificities and requirements of PHM. A real prognostic application is studied to demonstrate the usefulness of the two proposed methods for predictive model evaluation. We argue that predictive models for PHM must be evaluated not only with generic methods, but also with domain-oriented approaches, in order to deploy the models in real-world applications.
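The contrast between generic and domain-oriented evaluation can be illustrated with a small sketch: plain classification accuracy alongside an asymmetric remaining-useful-life (RUL) scoring rule of the kind used in prognostics benchmarks such as the PHM08 data challenge, which penalises late failure predictions more heavily than early ones. The paper's own two approaches are not specified in the abstract; the function names and coefficients below are illustrative:

```python
import math

def accuracy(y_true, y_pred):
    """Generic classifier metric: fraction of exact matches."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def phm_score(rul_true, rul_pred, a_early=13.0, a_late=10.0):
    """Asymmetric prognostic score in the style of the PHM08
    challenge: predicting failure too late (positive error) is
    penalised exponentially more than predicting it too early.
    Lower is better; 0 means every RUL prediction is exact."""
    total = 0.0
    for t, p in zip(rul_true, rul_pred):
        d = p - t  # positive = predicted remaining life too long (late warning)
        if d >= 0:
            total += math.exp(d / a_late) - 1
        else:
            total += math.exp(-d / a_early) - 1
    return total
```

Two RUL predictions with the same absolute error thus score differently (`phm_score([100], [105]) > phm_score([100], [95])`), which plain accuracy or mean absolute error cannot express.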


1994 ◽  
Vol 6 (2) ◽  
pp. 150-154
Author(s):  
Shigeki Abe ◽  
Michitaka Kameyama ◽  
Tatsuo Higuchi ◽  
...  

To achieve safety in an intelligent digital system for real-world applications, not only hardware faults in the processors but also any other faults and errors related to the real world, such as sensor faults, actuator faults and human errors, must be removed. From this point of view, an intelligent fault-tolerant system for real-world applications is proposed based on triple-modular redundancy. The system consists of a master processor that performs the actual control operations and two redundant processors that simulate the real-world process together with the control operations using a knowledge-based inference strategy. To realize independence among the triplicated modules, the simulation for error detection and recovery is performed without the actual external sensor signals used in the master processor.
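The voting principle behind triple-modular redundancy can be sketched as follows. This is the generic majority voter that TMR rests on, not the paper's knowledge-based recovery logic; the function name is illustrative:

```python
def tmr_vote(outputs):
    """Majority vote over the outputs of three redundant modules.
    Returns (voted_value, fault_detected): fault_detected is True
    whenever the three modules do not all agree, which is the
    trigger for error recovery in a TMR system."""
    a, b, c = outputs
    if a == b or a == c:
        return a, not (a == b == c)   # a is in the majority
    if b == c:
        return b, True                # a is the outvoted (faulty) module
    raise RuntimeError("no majority: all three modules disagree")
```

A single faulty module is thus masked (`tmr_vote([7, 7, 9])` still yields 7) while its disagreement is flagged for recovery, which is exactly the property the triplicated processors exploit.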


Sensor Review ◽  
2016 ◽  
Vol 36 (3) ◽  
pp. 277-286 ◽  
Author(s):  
Wenhao Zhang ◽  
Melvyn Lionel Smith ◽  
Lyndon Neal Smith ◽  
Abdul Rehman Farooq

Purpose This paper aims to introduce an unsupervised modular approach for eye centre localisation in images and videos following a coarse-to-fine, global-to-regional scheme. The algorithm is designed for excellent accuracy, robustness and real-time performance in real-world applications. Design/methodology/approach A modular approach has been designed that uses isophote and gradient features to estimate eye centre locations. It embraces two main modalities that progressively reduce global facial features to local levels for more precise inspection. A novel selective oriented gradient (SOG) filter has been specifically designed to remove strong gradients from eyebrows, eye corners and self-shadows, which defeat most eye centre localisation methods. The proposed algorithm, tested on the BioID database, has shown superior accuracy. Findings The eye centre localisation algorithm has been compared with 11 other methods on the BioID database and six other methods on the GI4E database. It outperformed all compared algorithms in localisation accuracy while exhibiting excellent real-time performance, and it is inherently robust against head poses, partial eye occlusions and shadows. Originality/value The eye centre localisation method uses two mutually complementary modalities as a novel, fast, accurate and robust approach. Beyond eye centre localisation, the SOG filter can also address general tasks involving the detection of curved shapes. From an applied point of view, the proposed method has great potential to benefit a wide range of real-world human-computer interaction (HCI) applications.
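The abstract does not give the SOG filter's exact formulation, but the general idea of suppressing strong gradients in an unwanted orientation band (e.g. the near-horizontal edges produced by eyebrows) can be sketched generically. This is a stand-in under our own assumptions, with illustrative names and thresholds, not the paper's filter:

```python
import math

def suppress_oriented_gradients(grad, theta_min, theta_max, mag_thresh):
    """grad: list of (gx, gy) gradient vectors, one per pixel.
    Zeroes every gradient whose magnitude exceeds mag_thresh AND
    whose orientation (atan2, in radians) lies in the band
    [theta_min, theta_max] -- a generic form of selective
    oriented-gradient suppression."""
    out = []
    for gx, gy in grad:
        mag = math.hypot(gx, gy)
        theta = math.atan2(gy, gx)
        if mag > mag_thresh and theta_min <= theta <= theta_max:
            out.append((0.0, 0.0))   # suppressed, e.g. an eyebrow edge
        else:
            out.append((gx, gy))     # kept for eye centre estimation
    return out
```

Isophote- and gradient-based centre estimators then operate on the cleaned field, where the strong distractor edges no longer dominate the vote.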


1998 ◽  
Vol 4 (3) ◽  
pp. 237-257 ◽  
Author(s):  
Moshe Sipper

The study of artificial self-replicating structures or machines has been taking place now for almost half a century. My goal in this article is to present an overview of research carried out in the domain of self-replication over the past 50 years, starting from von Neumann's work in the late 1940s and continuing to the most recent research efforts. I shall concentrate on computational models, that is, ones that have been studied from a computer science point of view, be it theoretical or experimental. The systems are divided into four major classes, according to the model on which they are based: cellular automata, computer programs, strings (or strands), or an altogether different approach. With the advent of new materials, such as synthetic molecules and nanomachines, it is quite possible that we shall see this somewhat theoretical domain of study producing practical, real-world applications.


1998 ◽  
Vol 13 (2) ◽  
pp. 185-194 ◽  
Author(s):  
PATRICK BRÉZILLON ◽  
MARCOS CAVALCANTI

The first International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT-97) was held in Rio de Janeiro, Brazil, on February 4–6, 1997. This article provides a summary of the presentations and discussions during the three days, with a focus on context in applications. The notion of context is far from settled, and its interpretation depends on whether one takes a cognitive-science or an engineering (system-building) point of view. However, the conference made it possible to identify new trends in the formalization of context at a theoretical level, as well as in the use of context in real-world applications. Results presented at the conference are situated within the work on context carried out over the past few years at specific workshops and symposia. The diversity of the attendees' backgrounds (artificial intelligence, linguistics, philosophy, psychology, etc.) demonstrates that there are different types of context, not a unique one. For instance, logicians model context at the level of knowledge representation and reasoning mechanisms, while cognitive scientists consider context at the level of the interaction between two agents (i.e. two humans, or a human and a machine). In the latter case, there are now strong arguments that one can speak of context only in reference to its use (e.g. the context of an item or of a problem-solving exercise). Moreover, there are different types of context that are interdependent. This helps explain why, despite consensus on some aspects of context, agreement on the notion of context has not yet been achieved.


2020 ◽  
Vol 34 (03) ◽  
pp. 2442-2449
Author(s):  
Yi Zhou ◽  
Jingwei Xu ◽  
Zhenyu Guo ◽  
Mingyu Xiao ◽  
Yan Jin

The problem of enumerating all maximal cliques in a graph is a key primitive in a variety of real-world applications such as community detection. In practice, however, communities are rarely formed as cliques, due to data noise. Hence the k-plex, a subgraph in which any vertex is adjacent to all but at most k vertices, is introduced as a relaxation of the clique. In this paper, we investigate the problem of enumerating all maximal k-plexes and present FaPlexen, an enumeration algorithm that integrates the "pivot" heuristic with new branching schemes. To the best of our knowledge, FaPlexen is the first algorithm to list all maximal k-plexes with a provable worst-case running time of O(n²γⁿ) in a graph with n vertices, where γ < 2. We then propose another algorithm, CommuPlex, which non-trivially extends FaPlexen to find all maximal k-plexes of prescribed size for community detection in massive real-life networks. We finally carry out experiments on both real and synthetic graphs and demonstrate that our algorithms run much faster than the state-of-the-art algorithms.
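The k-plex relaxation at the heart of the paper is easy to state in code: a vertex set S is a k-plex when every vertex of S is adjacent to at least |S| − k vertices of S (each vertex may "miss" at most k, counting itself). A minimal membership check, illustrative only and not the paper's enumeration algorithm:

```python
def is_k_plex(adj, vertices, k):
    """adj: dict mapping each vertex to its set of neighbours.
    Returns True iff `vertices` induces a k-plex: every vertex is
    adjacent to at least |S| - k other vertices of the set S.
    With k = 1 this is exactly the clique condition."""
    s = set(vertices)
    return all(len(s & adj[v]) >= len(s) - k for v in s)
```

For example, a triangle with one edge removed is no longer a 1-plex (clique), but it still qualifies as a 2-plex, which is why k-plexes tolerate the noisy, not-quite-complete communities found in real networks.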


2020 ◽  
Vol 4 (1) ◽  
pp. 67-86
Author(s):  
Elisabeth Heyne

Although visual culture of the 21st century increasingly focuses on the representation of death and dying, contemporary discourses still lack a language of death adequate to the event shown by pictures and visual images from an outside point of view. Following this observation, this article suggests a re-reading of the 20th-century author Elias Canetti. His lifelong notes were edited and published posthumously for the first time in 2014. Thanks to this edition, Canetti's short texts and aphorisms can be read as a textual laboratory in which he tries to model a language of death on the experimental practices of the natural sciences. These miniature series of experiments address the problem of death, which is not representable in the discourses of cultural studies, systems theory or the history of knowledge; in doing so, Canetti creates liminal texts at the margins of Western concepts of (human) life, science and established textual form.

