3D Recovery with Free Hand Camera Motion

Author(s):  
G. Sosa-Ramirez ◽  
M. Arias-Estrada
Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 598
Author(s):  
Massimiliano Pau ◽  
Bruno Leban ◽  
Michela Deidda ◽  
Federica Putzolu ◽  
Micaela Porta ◽  
...  

The majority of people with Multiple Sclerosis (pwMS) report lower limb motor dysfunctions, which may relevantly affect postural control, gait and a wide range of activities of daily living. While it is quite common to observe a different impact of the disease on the two limbs (i.e., one limb is more affected than the other), the effects of such asymmetry on gait performance are less clear. The present retrospective cross-sectional study aimed to characterize the magnitude of interlimb asymmetry in pwMS, particularly as regards joint kinematics, using parameters derived from angle-angle diagrams. To this end, we analyzed the gait patterns of 101 pwMS (55 women, 46 men, mean age 46.3, average Expanded Disability Status Scale (EDSS) score 3.5, range 1–6.5) and 81 age- and sex-matched unaffected individuals who underwent 3D computerized gait analysis carried out with an eight-camera motion capture system. Spatio-temporal parameters and sagittal-plane kinematics at the hip, knee and ankle joints were considered for the analysis. The angular trends of the left and right sides were processed to build synchronized angle–angle diagrams (cyclograms) for each joint, and symmetry was assessed by computing several geometric features such as area, orientation and Trend Symmetry. Based on cyclogram orientation and Trend Symmetry, the results show that pwMS exhibit significantly greater asymmetry in all three joints than unaffected individuals. In particular, orientation values were 5.1 for pwMS vs. 1.6 for unaffected individuals at the hip, 7.0 vs. 1.5 at the knee and 6.4 vs. 3.0 at the ankle (p < 0.001 in all cases), while Trend Symmetry values were 1.7 for pwMS vs. 0.3 for unaffected individuals at the hip, 4.2 vs. 0.5 at the knee and 8.5 vs. 1.5 at the ankle (p < 0.001 in all cases). Moreover, the same parameters were sensitive enough to discriminate individuals of different disability levels.
With few exceptions, all the calculated symmetry parameters were significantly correlated with the main spatio-temporal parameters of gait and with the EDSS score. In particular, large correlations were detected between Trend Symmetry and gait speed (rho in the range of –0.58 to –0.63 depending on the joint considered, p < 0.001) and between Trend Symmetry and EDSS score (rho = 0.62 to 0.69, p < 0.001). These results suggest not only that MS is associated with markedly greater interlimb asymmetry during gait, but also that this asymmetry worsens as the disease progresses and has a relevant impact on gait performance.
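The cyclogram orientation metric used above has a direct geometric reading: plot each time-normalized left-angle/right-angle pair, fit the principal axis of the resulting point cloud, and measure how far that axis deviates from the 45° line of perfect symmetry. A minimal sketch of that computation (an illustration of the general idea only, not the authors' implementation, which also computes area and Trend Symmetry):

```python
import numpy as np

def cyclogram_orientation(left, right):
    """Angle (degrees) between the cyclogram's principal axis and the
    45-degree line of perfect interlimb symmetry. 0 = symmetric.

    left, right: time-normalized joint-angle curves of equal length,
    e.g. 101 samples of sagittal hip flexion over one gait cycle.
    """
    pts = np.column_stack([left, right])
    pts = pts - pts.mean(axis=0)
    # Principal axis = eigenvector of the 2x2 covariance matrix
    # belonging to the largest eigenvalue.
    w, v = np.linalg.eigh(np.cov(pts.T))
    axis = v[:, np.argmax(w)]
    if axis[0] < 0:          # fix the arbitrary eigenvector sign
        axis = -axis
    angle = np.degrees(np.arctan2(axis[1], axis[0]))
    return abs(angle - 45.0)

# Identical left/right curves lie on the 45-degree line exactly.
t = np.linspace(0, 2 * np.pi, 101)
curve = 30 * np.sin(t)
print(round(cyclogram_orientation(curve, curve), 6))  # → 0.0
```

If one limb's excursion is scaled relative to the other (say, right = 1.2 × left), the principal axis tilts away from 45° and the metric grows, which is the asymmetry effect the study quantifies.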


2020 ◽  
Vol 39 (6) ◽  
pp. 1-14
Author(s):  
Ana Serrano ◽  
Daniel Martin ◽  
Diego Gutierrez ◽  
Karol Myszkowski ◽  
Belen Masia

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 507
Author(s):  
Le Wang ◽  
Lirong Xiang ◽  
Lie Tang ◽  
Huanyu Jiang

Accurate corn stand counts in the field early in the season are of great interest to corn breeders and plant geneticists. However, the commonly used manual counting method is time-consuming, laborious, and prone to error. Unmanned aerial vehicles (UAVs) have become a popular platform for collecting plant images, but detecting corn stands in the field remains a challenging task, primarily because of camera motion, leaf fluttering caused by wind, plant shadows caused by direct sunlight, and the complex soil background. UAV systems also face two main limitations for early seedling detection and counting. First, the flying height cannot ensure a high resolution for small objects; it is especially difficult to detect corn seedlings at around one week after planting, because the plants are small and hard to differentiate from the background. Second, the battery life and payload of UAV systems cannot support long-duration online counting work. In this research project, we developed an automated, robust, and high-throughput method for corn stand counting based on color images extracted from video clips. A pipeline based on the YOLOv3 network and a Kalman filter was used to count corn seedlings online. The results demonstrate that our method is accurate and reliable for stand counting, achieving an accuracy of over 98% at growth stages V2 and V3 (vegetative stages with two and three visible collars) with an average frame rate of 47 frames per second (FPS). The pipeline can also be mounted easily on manned carts, tractors, or field robotic systems for online corn counting.
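The counting half of such a detector-plus-Kalman pipeline can be sketched independently of the detector: each per-frame detection either updates the nearest existing track or, if no predicted track is close enough, starts a new one, and the running count is the number of tracks ever created. A minimal sketch with a constant-velocity filter and a simple nearest-neighbor gate, where hand-written detections stand in for YOLOv3 output and the gating logic is an assumption, not the authors' association scheme:

```python
import numpy as np

class Track:
    """Constant-velocity Kalman filter for one seedling's image position."""
    def __init__(self, xy):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])  # [px, py, vx, vy]
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0            # position += velocity
        self.H = np.eye(2, 4)                        # observe position only
        self.Q = np.eye(4) * 0.5                     # process noise
        self.R = np.eye(2) * 2.0                     # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def count_seedlings(frames, gate=20.0):
    """frames: list of per-frame detection lists of (x, y) centroids.
    Returns the number of distinct plants seen: every detection that
    falls outside the gate of all predicted tracks starts a new track."""
    tracks, total = [], 0
    for dets in frames:
        preds = [t.predict() for t in tracks]
        used = set()
        for d in dets:
            d = np.asarray(d, float)
            best, best_j = gate, None
            for j, p in enumerate(preds):
                if j in used:
                    continue
                dist = np.linalg.norm(d - p)
                if dist < best:
                    best, best_j = dist, j
            if best_j is None:
                tracks.append(Track(d))
                total += 1
            else:
                tracks[best_j].update(d)
                used.add(best_j)
    return total
```

With two seedlings drifting across the frame (camera motion) and a third entering later, the count settles at three: the drifting detections keep matching their predicted tracks instead of being double-counted.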


2020 ◽  
pp. 1-11
Author(s):  
Shufang Li ◽  
Wang Juan

For an English classroom teaching video denoising algorithm, it is necessary to consider not only how thoroughly noise is removed from the output video, but also the actual operating efficiency and robustness of the algorithm. After reviewing a large body of domestic and international literature on video denoising algorithms and analyzing the pros and cons of the various approaches, this paper proposes a new video denoising algorithm that uses the recently proposed grid flow motion model, based on camera motion compensation, to generate the denoised video. Compared with current state-of-the-art video denoising schemes, our method processes noisy frames faster and is more robust. In addition, we improve the algorithm framework so that it can handle both offline and online video denoising.
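The abstract does not detail the grid flow model itself, but the underlying idea of camera-motion-compensated denoising can be sketched in its simplest global form: estimate the translation between each neighboring frame and the reference, warp the neighbors onto the reference, and average. A minimal illustration assuming integer translations estimated by phase correlation (a crude global stand-in for the per-cell grid flow the paper presumably uses):

```python
import numpy as np

def global_shift(ref, frame):
    """Estimate the integer (dy, dx) that aligns `frame` to `ref`,
    via FFT phase correlation (assumes pure cyclic translation)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross /= np.abs(cross) + 1e-9          # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                        # unwrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def denoise_frame(frames, idx, radius=2):
    """Temporal average of the frames around `idx`, after compensating
    global camera motion by shifting each neighbor onto the reference."""
    ref = frames[idx]
    stack = [ref.astype(float)]
    for j in range(max(0, idx - radius), min(len(frames), idx + radius + 1)):
        if j == idx:
            continue
        dy, dx = global_shift(ref, frames[j])
        stack.append(np.roll(frames[j].astype(float), (dy, dx), axis=(0, 1)))
    return np.mean(stack, axis=0)
```

Averaging N aligned frames reduces zero-mean noise variance by roughly a factor of N; without the alignment step, camera motion would blur the average instead, which is why the motion model is the heart of the method.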


Biology Open ◽  
2016 ◽  
Vol 5 (9) ◽  
pp. 1334-1342 ◽  
Author(s):  
Brandon E. Jackson ◽  
Dennis J. Evangelista ◽  
Dylan D. Ray ◽  
Tyson L. Hedrick

Author(s):  
Christian M. Puttlitz ◽  
Robert P. Melcher ◽  
Vedat Deviren ◽  
Dezsoe Jeszenszky ◽  
Jürgen Harms

Reconstruction of C2 after tumor destruction and resection remains a significant challenge. Most constructs utilize a strut graft with plate or screw fixation. A novel C2 prosthesis combining a titanium mesh cage with bilateral C1 shelves and a T-plate has been used successfully in 18 patients. Supplemental posterior instrumentation includes C0-C3 or C1-C3. Biomechanical comparisons of this C2 prosthesis with traditional fixation options have not been reported. Five fresh-frozen human cadaveric cervical spines (C0-C5) were tested intact. Next, the C2 prosthesis and a strut graft with anterior plate were each tested with occiput-C3 and with C1-C3 posterior fixation. Pure moment loads (up to 1.5 N-m) were applied in flexion and extension, lateral bending, and axial rotation. C1-C3 motion was evaluated using three-camera motion analysis. Statistical significance was evaluated using one-way repeated-measures ANOVA with Student-Newman-Keuls post hoc pairwise comparisons. All constructs provided a statistically significant decrease in motion in this C2 corpectomy model as compared with the intact condition. There was no significant difference in C1-C3 motion among the four constructs, regardless of whether the occiput was included in the fixation. Under these loading conditions, both the C2 prosthesis and the strut-graft-with-plate constructs provided initial C1-C3 stability beyond that of the intact specimen. The occiput does not need to be included in the posterior instrumentation.
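The one-way repeated-measures ANOVA used in this design partitions variance across conditions (constructs) and across subjects (specimens, since each spine is tested under every construct), so specimen-to-specimen variability is removed from the error term. The textbook F statistic can be sketched as follows (an illustration of the statistic itself, not the authors' analysis software):

```python
import numpy as np

def rm_anova_f(data):
    """One-way repeated-measures ANOVA F statistic.

    data: (n_subjects, k_conditions) array, one score per cell
    (e.g. range of motion for each specimen under each construct).
    Returns (F, (df_conditions, df_error))."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    # Between-conditions and between-subjects sums of squares.
    ss_cond = n * np.sum((data.mean(axis=0) - grand) ** 2)
    ss_subj = k * np.sum((data.mean(axis=1) - grand) ** 2)
    ss_total = np.sum((data - grand) ** 2)
    # Error = what's left after removing condition and subject effects.
    ss_err = ss_total - ss_cond - ss_subj
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), (df_cond, df_err)
```

With only five specimens, subtracting the subject effect from the error term is what gives the design enough power to detect construct differences; post hoc pairwise tests such as Student-Newman-Keuls then follow a significant omnibus F.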


2013 ◽  
Vol 117 (1197) ◽  
pp. 1075-1101 ◽  
Author(s):  
S. M. Parkes ◽  
I. Martin ◽  
M. N. Dunstan ◽  
N. Rowell ◽  
O. Dubois-Matra ◽  
...  

Abstract The use of machine vision to guide robotic spacecraft is being considered for a wide range of missions, such as planetary approach and landing, asteroid and small body sampling operations and in-orbit rendezvous and docking. Numerical simulation plays an essential role in the development and testing of such systems, which in the context of vision-guidance means that realistic sequences of navigation images are required, together with knowledge of the ground-truth camera motion. Computer generated imagery (CGI) offers a variety of benefits over real images, such as availability, cost, flexibility and knowledge of the ground truth camera motion to high precision. However, standard CGI methods developed for terrestrial applications lack the realism, fidelity and performance required for engineering simulations. In this paper, we present the results of our ongoing work to develop a suitable CGI-based test environment for spacecraft vision guidance systems. We focus on the various issues involved with image simulation, including the selection of standard CGI techniques and the adaptations required for use in space applications. We also describe our approach to integration with high-fidelity end-to-end mission simulators, and summarise a variety of European Space Agency research and development projects that used our test environment.


2021 ◽  
Author(s):  
Yaqing Ding ◽  
Yingna Su ◽  
Chengzhong Xu ◽  
Jian Yang ◽  
Hui Kong
