Vertex-labeling algorithms for the Hilbert spacefilling curve

2001 ◽  
Vol 31 (5) ◽  
pp. 395-408 ◽  
Author(s):  
John J. Bartholdi ◽  
Paul Goldsman

2015 ◽  
Vol 1 (1) ◽  
pp. 1-30 ◽  
Author(s):  
Andreas Gemsa ◽  
Jan-Henrik Haunert ◽  
Martin Nöllenburg

2007 ◽  
Vol 111 (1120) ◽  
pp. 389-396 ◽  
Author(s):  
G. Campa ◽  
M. R. Napolitano ◽  
M. Perhinschi ◽  
M. L. Fravolini ◽  
L. Pollini ◽  
...  

Abstract This paper describes the results of an analysis of the performance of specific ‘pose estimation’ algorithms within a Machine Vision-based approach to the problem of aerial refuelling for unmanned aerial vehicles. The approach assumes the availability of a camera on the unmanned aircraft for acquiring images of the refuelling tanker; it also assumes that a number of active or passive light sources – the ‘markers’ – are installed at specific known locations on the tanker. A sequence of machine vision algorithms on the on-board computer of the unmanned aircraft is tasked with processing the images of the tanker. Specifically, detection and labeling algorithms are used to detect and identify the markers, and a ‘pose estimation’ algorithm is used to estimate the relative position and orientation between the two aircraft. Detailed closed-loop simulation studies have been performed to compare the performance of two ‘pose estimation’ algorithms within a simulation environment that was specifically developed for the study of aerial refuelling problems. Special emphasis is placed on the analysis of the required computational effort as well as on the accuracy and the error propagation characteristics of the two methods. The general trade-offs involved in the selection of the pose estimation algorithm are discussed. Finally, simulation results are presented and analysed.
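The abstract does not name the two algorithms compared, so the following is only a generic sketch of marker-based pose estimation: Gauss-Newton minimisation of reprojection error under a pinhole camera with a small-angle rotation model. All function names, the focal length, and the initial guess are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project(points3d, pose, f=800.0):
    """Pinhole projection of 3-D marker positions for a pose
    (rx, ry, rz, tx, ty, tz); small-angle rotation for brevity."""
    rx, ry, rz, tx, ty, tz = pose
    R = np.array([[1, -rz, ry],
                  [rz, 1, -rx],
                  [-ry, rx, 1]])          # small-angle approximation
    p = points3d @ R.T + [tx, ty, tz]     # camera-frame coordinates
    return f * p[:, :2] / p[:, 2:3]       # perspective division

def estimate_pose(markers3d, observed2d, iters=50):
    """Gauss-Newton refinement of the 6-DOF pose that minimises
    the reprojection error of the detected markers."""
    pose = np.array([0, 0, 0, 0, 0, 10.0])  # illustrative initial guess
    eps = 1e-6
    for _ in range(iters):
        r = (project(markers3d, pose) - observed2d).ravel()
        J = np.empty((r.size, 6))
        for j in range(6):                 # numeric Jacobian, column by column
            d = np.zeros(6); d[j] = eps
            J[:, j] = ((project(markers3d, pose + d) - observed2d).ravel() - r) / eps
        pose = pose - np.linalg.lstsq(J, r, rcond=None)[0]
    return pose
```

Error propagation, which the paper analyses in detail, would enter here through the sensitivity of the recovered pose to noise in the detected marker positions.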


2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Stav Hertz ◽  
Benjamin Weiner ◽  
Nisim Perets ◽  
Michael London

Abstract Mice emit sequences of ultrasonic vocalizations (USVs), but little is known about the rules governing their temporal order, and no consensus exists on the classification of USVs into syllables. To address these questions, we recorded USVs during male-female courtship and found a significant temporal structure. We labeled USVs using three popular algorithms and found that there was no one-to-one relationship between their labels. As label assignment affects the high-order temporal structure, we developed the Syntax Information Score (based on information theory) to rank labeling algorithms by how well they predict the next syllable in a sequence. Finally, we derived a novel algorithm (Syntax Information Maximization) that utilizes sequence statistics to improve the clustering of individual USVs with respect to the underlying sequence structure. Improvement in USV classification is crucial for understanding neural control of vocalization. We demonstrate that USV syntax holds valuable information towards achieving this goal.
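The abstract does not give the exact definition of the Syntax Information Score. As a rough information-theoretic stand-in for "how well a labeling predicts the next syllable", the mutual information between consecutive syllable labels can be estimated from bigram counts:

```python
from collections import Counter
from math import log2

def syntax_information(sequence):
    """Mutual information (bits) between consecutive syllable labels,
    estimated from bigram counts. Higher values mean the current label
    predicts the next one better; a stand-in, not the paper's score."""
    bigrams = list(zip(sequence, sequence[1:]))
    n = len(bigrams)
    pxy = Counter(bigrams)                 # joint counts of (current, next)
    px = Counter(a for a, _ in bigrams)    # marginal of the current label
    py = Counter(b for _, b in bigrams)    # marginal of the next label
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())
```

A labeling under which every syllable carries no information about its successor scores zero; a labeling that makes the sequence deterministic scores highest, which is the ranking behaviour the paper's score is designed to capture.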


1998 ◽  
Vol 8 (2) ◽  
pp. 206-220 ◽  
Author(s):  
Chun Wang ◽  
H.Q. Cao ◽  
Weiping Li ◽  
K.K. Tzeng

Networks ◽  
2018 ◽  
Vol 72 (1) ◽  
pp. 84-127 ◽  
Author(s):  
Andrea Raith ◽  
Marie Schmidt ◽  
Anita Schöbel ◽  
Lisa Thom

Author(s):  
M. Sumathi ◽  
T. Balaji

The main objective of this paper is to carry out a detailed analysis of the most popular Connected Component Labeling (CCL) algorithms for remote sensing image classification. These algorithms scan the image line by line, top to bottom, assigning a blob label to each current pixel that is connected to a blob. This paper presents two new strategies that greatly improve the speed of connected component labeling. To assign a label to a new object, most labeling algorithms use a scanning step that examines some of its neighbors. The first strategy exploits the dependencies among the neighbors to reduce the number of neighbors examined. The second strategy uses an array to store the equivalence information among the labels, replacing the pointer-based rooted trees conventionally used to store the same information; this reduces the memory required and also produces consecutive final labels. Connected component labeling assigns labels to pixels such that adjacent pixels with the same features receive the same label. The paper presents a modification of this algorithm that allows the resolution of merged labels, and experimental results demonstrate that the proposed method is much more efficient than conventional methods for various kinds of color images. The method improves on existing labeling algorithms and also benefits other applications in computer vision and pattern recognition.
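The scan-and-merge scheme described above can be sketched as a minimal two-pass labeler with equivalences kept in a flat integer array rather than pointer-based trees. This is an illustration under assumed 4-connectivity on a binary image, not the authors' implementation:

```python
import numpy as np

def label_components(img):
    """Two-pass connected component labeling (4-connectivity).
    Equivalences live in a flat 'parent' array; the second pass
    resolves merged labels into consecutive final labels."""
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]            # parent[i] is the representative of label i
    next_label = 1

    def find(x):            # follow the array to the root label
        while parent[x] != x:
            x = parent[x]
        return x

    # First pass: scan line by line, top to bottom.
    for y in range(h):
        for x in range(w):
            if not img[y, x]:
                continue
            left = labels[y, x - 1] if x > 0 else 0
            up = labels[y - 1, x] if y > 0 else 0
            neighbors = [l for l in (left, up) if l]
            if not neighbors:                    # new blob starts here
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            else:                                # merge into smallest root
                m = min(find(l) for l in neighbors)
                labels[y, x] = m
                for l in neighbors:
                    parent[find(l)] = m

    # Second pass: resolve merged labels to consecutive final labels.
    remap = {}
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                r = find(labels[y, x])
                if r not in remap:
                    remap[r] = len(remap) + 1
                labels[y, x] = remap[r]
    return labels
```

A U-shaped blob exercises the merge step: its two arms receive different provisional labels on the first pass, and the equivalence array resolves them to one final label on the second.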


Author(s):  
Malek Zakarya Alksasbeh ◽  
Ahmad H AL-Omari ◽  
Bassam A. Y. Alqaralleh ◽  
Tamer Abukhalil ◽  
Anas Abukarki ◽  
...  

<span>Sign languages are the most basic and natural form of language, in use even before the evolution of spoken languages. These sign languages were developed using various sign "gestures" made with the palm of the hand, called "hand gestures". Hand gestures are widely used as an international assistive communication method for deaf people and in many aspects of life such as sports, traffic control, and religious acts. However, the meanings of hand gestures vary across cultures. Given the importance of understanding their meanings, this study presents a procedure which can translate such gestures into an annotated explanation. The proposed system implements image and video processing, among the most important technologies in this area. The system initially analyzes a classroom video as input, and then extracts a vocabulary of twenty gestures. Various methods are applied sequentially, namely: motion detection, RGB to HSV conversion, and noise removal using labeling algorithms. The extracted hand parameters are classified by a K-NN algorithm to determine the hand gesture and hence show its meaning. To estimate the performance of the proposed method, an experiment using a hand gesture database was performed. The results showed that the suggested method has an average recognition rate of 97%.</span>
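The final K-NN stage of the pipeline above could be sketched as follows; the feature vectors, labels, and parameter k are hypothetical, and the earlier motion-detection, HSV-conversion, and labeling stages are omitted:

```python
import numpy as np

def knn_classify(train_feats, train_labels, query, k=3):
    """Classify a hand-parameter vector by majority vote among its
    k nearest training samples under Euclidean distance."""
    d = np.linalg.norm(train_feats - query, axis=1)  # distance to each sample
    nearest = np.argsort(d)[:k]                      # indices of k closest
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)          # majority vote
```

In the paper's setting, each training sample would be the parameter vector extracted from one hand-gesture image, and the returned label would index the annotated explanation shown to the user.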

