Diversity-Enhanced Condensation Algorithm and Its Application for Robust and Accurate Endoscope Three-Dimensional Motion Tracking

Author(s):  
Xiongbiao Luo ◽  
Ying Wan ◽  
Xiangjian He ◽  
Jie Yang ◽  
Kensaku Mori
2017 ◽  
Vol 14 (5) ◽  
pp. 172988141773275 ◽  
Author(s):  
Francisco J Perez-Grau ◽  
Fernando Caballero ◽  
Antidio Viguria ◽  
Anibal Ollero

This article presents an enhanced version of the Monte Carlo localization algorithm, commonly used for robot navigation in indoor environments, adapted to aerial robots moving in a three-dimensional environment. It fuses measurements from a red-green-blue-depth (RGB-D) sensor, distances to several radio tags placed in the environment, and an inertial measurement unit. The approach is demonstrated with an unmanned aerial vehicle flying indoors for 10 min and validated against a high-precision motion tracking system. It has been implemented in the Robot Operating System (ROS) framework and runs smoothly on a regular i7 computer, leaving ample computational capacity for other navigation tasks such as motion planning or control.
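The core of Monte Carlo localization is a particle filter: propagate a cloud of pose hypotheses with the motion estimate, weight each hypothesis by how well it explains the sensor readings, then resample. A minimal sketch of that loop for the range-to-radio-tag case is below; the tag positions, particle count, and noise levels are illustrative assumptions, not values from the article.

```python
# Minimal sketch of 3D Monte Carlo localization from range-to-tag
# measurements. All constants here are illustrative assumptions.
import math
import random

TAGS = [(0.0, 0.0, 1.0), (5.0, 0.0, 1.5),
        (0.0, 5.0, 2.0), (5.0, 5.0, 0.5)]  # assumed known tag positions (m)
N = 500            # particle count
RANGE_SIGMA = 0.3  # assumed range-measurement noise (m)

def predict(particles, motion, sigma=0.05):
    """Propagate each particle by the IMU/odometry motion estimate plus noise."""
    dx, dy, dz = motion
    return [(x + dx + random.gauss(0, sigma),
             y + dy + random.gauss(0, sigma),
             z + dz + random.gauss(0, sigma)) for x, y, z in particles]

def weight(particles, ranges):
    """Score particles by how well they explain the measured tag distances."""
    w = []
    for p in particles:
        log_w = sum(-((math.dist(p, tag) - r) ** 2) / (2 * RANGE_SIGMA ** 2)
                    for tag, r in zip(TAGS, ranges))
        w.append(math.exp(log_w))
    s = sum(w)
    if s == 0.0:                      # all weights underflowed: fall back to uniform
        return [1.0 / len(w)] * len(w)
    return [wi / s for wi in w]

def resample(particles, weights):
    """Draw a new particle set proportional to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))

def estimate(particles):
    """Mean of the particle cloud as the pose estimate."""
    n = len(particles)
    return tuple(sum(c) / n for c in zip(*particles))
```

In use, each filter cycle is `predict` with the latest motion increment, then `weight` with the measured tag ranges, then `resample`; `estimate` gives the fused position. A real implementation would also fuse the RGB-D observations into the weighting step.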


2019 ◽  
Vol 1 ◽  
pp. 1-1
Author(s):  
Bernhard Jenny ◽  
Kadek Ananta Satriadi ◽  
Yalong Yang ◽  
Christopher R. Austin ◽  
Simond Lee ◽  
...  

Abstract. Augmented reality (AR) and virtual reality (VR) technologies are increasingly used for the analysis and visualisation of geospatial data. It has become simple to create an immersive three-dimensional AR or VR map by combining game engines (e.g., Unity), software development kits for streaming and rendering geospatial data (e.g., Mapbox), and affordable hardware (e.g., HTC Vive). However, it is not clear how best to interact with geospatial visualisations in AR and VR. For example, there are no established standards for efficiently zooming and panning, selecting map features, or placing markers on AR and VR maps. In this paper, we explore interaction with AR and VR maps using gestures and handheld controllers.

For gesture-controlled interaction, we present the results of recent research projects exploring how body gestures can control basic AR and VR map operations. We use motion-tracking controllers (e.g., Leap Motion) to capture and interpret gestures. We conducted a set of user studies to identify, explore and compare various gestures for controlling map-related operations, including, for example, mid-air hand gestures for zooming and panning (Satriadi et al. 2019), selecting points of interest, adjusting the orientation of maps, and placing markers on maps. Additionally, we present novel VR interfaces and interaction methods for controlling the content of maps with gestures.

For handheld controllers, we discuss interaction with exocentric globes, egocentric globes (where the user stands inside a large virtual globe), flat maps, and curved maps in VR. We demonstrate controller-based interaction for adjusting the centre of world maps displayed on these four types of projection surfaces (Yang et al. 2018), and illustrate the utility of interactively movable VR maps with the example of three-dimensional origin-destination flow maps (Yang et al. 2019).
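One of the basic operations discussed above, zooming by a two-handed mid-air gesture, can be sketched as a mapping from tracked hand positions to a zoom level. This is a hypothetical illustration, not the mapping used in the cited studies; the function name, bounds, and coordinate convention are assumptions.

```python
# Hypothetical two-handed "stretch to zoom" mapping: the map zoom level is
# scaled by the ratio of the inter-hand distance between two tracker frames.
import math

def pinch_zoom(prev_hands, curr_hands, zoom, min_zoom=1.0, max_zoom=20.0):
    """Scale the zoom level by the change in distance between the two hands.

    prev_hands/curr_hands: pairs of (x, y, z) hand positions from a tracker.
    """
    d_prev = math.dist(prev_hands[0], prev_hands[1])
    d_curr = math.dist(curr_hands[0], curr_hands[1])
    if d_prev < 1e-6:          # hands coincident: leave zoom unchanged
        return zoom
    return max(min_zoom, min(max_zoom, zoom * d_curr / d_prev))
```

Clamping to a zoom range mirrors the usual practice in map interfaces of bounding scale so the user cannot zoom into degenerate views.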


Leonardo ◽  
2016 ◽  
Vol 49 (3) ◽  
pp. 203-210 ◽  
Author(s):  
Shaltiel Eloul ◽  
Gil Zissu ◽  
Yehiel H. Amo ◽  
Nori Jacoby

The authors have mapped the three-dimensional motion of a fish onto various electronic music performance gestures, including loops, melodies, arpeggios and DJ-like interventions. They combine an element of visualization, using an LED screen installed on the back of an aquarium, to create a link between the fish’s motion and the sonified music. This visual addition provides extra information about the fish’s role in the music, enabling the perception of versatile and developing auditory structures during the performance that extend beyond the sonification of the momentary motion of objects.
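A sonification of this kind reduces to a mapping from tank coordinates to sound parameters. The sketch below shows one possible position-to-MIDI mapping; the authors' actual mapping is more elaborate, and the tank dimensions, scale, and parameter ranges here are assumptions for illustration.

```python
# Illustrative position-to-sound mapping: height picks pitch on a pentatonic
# scale, depth picks loudness, left-right position pans. All constants are
# assumptions, not the installation's actual values.

PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets of a major pentatonic scale

def sonify(pos, tank=(1.0, 1.0, 0.5), base_note=60):
    """Map a 3D position in the tank to (MIDI note, velocity, stereo pan)."""
    x, y, z = (p / b for p, b in zip(pos, tank))  # normalise to [0, 1]
    octave, step = divmod(int(y * 10), len(PENTATONIC))
    note = base_note + 12 * octave + PENTATONIC[step]  # height -> pitch
    velocity = int(40 + 87 * z)                        # depth -> loudness (max 127)
    pan = x                                            # width -> stereo position
    return note, velocity, pan
```

Quantising pitch to a scale rather than mapping position to frequency directly is what lets continuous motion produce musical, rather than siren-like, output.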


2014 ◽  
Vol 111 ◽  
pp. S86
Author(s):  
L. Brix ◽  
S. Ringgaard ◽  
T. Sangild Sørensen ◽  
P. Rugaard Poulsen

2015 ◽  
Vol 137 (11) ◽  
Author(s):  
Jennifer N. Jackson ◽  
Chris J. Hass ◽  
Benjamin J. Fregly

Patient-specific gait optimizations capable of predicting post-treatment changes in joint motions and loads could improve treatment design for gait-related disorders. To maximize potential clinical utility, such optimizations should utilize full-body three-dimensional patient-specific musculoskeletal models, generate dynamically consistent gait motions that reproduce pretreatment marker measurements closely, and achieve accurate foot motion tracking to permit deformable foot-ground contact modeling. This study enhances an existing residual elimination algorithm (REA) (Remy, C. D., and Thelen, D. G., 2009, “Optimal Estimation of Dynamically Consistent Kinematics and Kinetics for Forward Dynamic Simulation of Gait,” ASME J. Biomech. Eng., 131(3), p. 031005) to achieve all three requirements within a single gait optimization framework. We investigated four primary enhancements to the original REA: (1) manual modification of tracked marker weights, (2) automatic modification of tracked joint acceleration curves, (3) automatic modification of algorithm feedback gains, and (4) automatic calibration of model joint and inertial parameter values. We evaluated the enhanced REA using a full-body three-dimensional dynamic skeletal model and movement data collected from a subject who performed four distinct gait patterns: walking, marching, running, and bounding. When all four enhancements were implemented together, the enhanced REA achieved dynamic consistency with lower marker tracking errors for all segments, especially the feet (mean root-mean-square (RMS) errors of 3.1 versus 18.4 mm), compared to the original REA. When the enhancements were implemented separately and in combinations, the most important one was automatic modification of tracked joint acceleration curves, while the least important was automatic modification of algorithm feedback gains.
The enhanced REA provides a framework for future gait optimization studies that seek to predict subject-specific post-treatment gait patterns involving large changes in foot-ground contact patterns made possible through deformable foot-ground contact models.
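The headline comparison above (3.1 versus 18.4 mm) is a mean RMS marker tracking error: the root-mean-square of the frame-wise Euclidean distances between measured and model-predicted marker positions. A small sketch of that metric, with data shapes and units (mm) assumed for illustration:

```python
# RMS marker tracking error: root-mean-square of frame-wise Euclidean
# distances between a measured and a model-predicted 3D marker trajectory.
import math

def rms_marker_error(measured, predicted):
    """RMS distance (same units as input) between two equal-length 3D trajectories."""
    sq = [math.dist(m, p) ** 2 for m, p in zip(measured, predicted)]
    return math.sqrt(sum(sq) / len(sq))
```

Averaging this quantity over all markers on a segment gives the per-segment numbers quoted in the abstract.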


2005 ◽  
Vol 62 (7) ◽  
pp. 1513-1522 ◽  
Author(s):  
Zhiqun Deng ◽  
Gregory R Guensch ◽  
Craig A McKinstry ◽  
Robert P Mueller ◽  
Dennis D Dauble ◽  
...  

Understanding the factors that injure or kill turbine-passed fish is important to turbine operation and design. Motion-tracking analysis was performed on high-speed, high-resolution digital videos of juvenile salmonids exposed to a laboratory-generated shear environment to isolate injury mechanisms. Hatchery-reared fall chinook salmon (Oncorhynchus tshawytscha, 93–128 mm in length) were introduced into a submerged, 6.35-cm-diameter water jet at velocities ranging from 12.2 to 19.8 m·s–1, with a reference control group released at 3 m·s–1. Injuries typical of turbine-passed fish were observed and recorded. Three-dimensional trajectories were generated for four locations on each fish released. Time series of velocity, acceleration, force, jerk, and bending angle were computed from the three-dimensional trajectories. The onset of minor, major, and fatal injuries occurred at nozzle velocities of 12.2, 13.7, and 16.8 m·s–1, respectively. Opercle injuries occurred at a nozzle velocity of 12.2 m·s–1, while eye injuries, bruising, and loss of equilibrium were common at velocities of 16.8 m·s–1 and above. Of the computed dynamic parameters, acceleration showed the strongest predictive power for eye and opercle injuries and overall injury level, and it may provide the best potential link between laboratory studies of fish injury, field studies designed to collect similar data in situ, and numerical modeling.
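Deriving velocity, acceleration, and jerk from sampled 3D trajectories is typically done by successive numerical differentiation. The sketch below uses central finite differences on one coordinate; the sampling interval and any smoothing the study applied are not specified in the abstract, so this is a generic illustration.

```python
# Velocity, acceleration, and jerk of a sampled coordinate by repeated
# central finite differencing. Each differentiation shortens the series by
# two samples (one at each end).

def central_diff(series, dt):
    """Central finite difference of a scalar time series sampled every dt."""
    return [(series[i + 1] - series[i - 1]) / (2 * dt)
            for i in range(1, len(series) - 1)]

def kinematics(x, dt):
    """Velocity, acceleration and jerk of one trajectory coordinate."""
    v = central_diff(x, dt)
    a = central_diff(v, dt)
    j = central_diff(a, dt)
    return v, a, j
```

In practice, high-speed video trajectories are noisy, and differentiation amplifies noise, so a low-pass filter or spline fit would normally precede or replace raw differencing, especially for jerk.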

