Automatic Myotendinous Junction Tracking in Ultrasound Images with Phase-Based Segmentation

2018 ◽  
Vol 2018 ◽  
pp. 1-12
Author(s):  
Guang-Quan Zhou ◽  
Yi Zhang ◽  
Ruo-Li Wang ◽  
Ping Zhou ◽  
Yong-Ping Zheng ◽  
...  

Displacement of the myotendinous junction (MTJ) obtained by ultrasound imaging is crucial to quantify the interactive length changes of muscles and tendons for understanding the mechanics and pathological conditions of the muscle-tendon unit during motion. However, the lack of a reliable automatic measurement method restricts its application in human motion analysis. This paper presents an automated measurement of MTJ displacement using prior knowledge of tendinous tissues and the MTJ, precluding the influence of nontendinous components on the estimation of MTJ displacement. It is based on the perception of tendinous features from musculoskeletal ultrasound images using Radon transform and thresholding methods, with information about the symmetric measures obtained from phase congruency. The MTJ displacement is obtained by tracking manually marked points on tendinous tissues with the Lucas-Kanade optical flow algorithm applied over the segmented MTJ region. The performance of this method was evaluated on ultrasound images of the gastrocnemius obtained from 10 healthy subjects (26.0±2.9 years of age). Waveform similarity between the manual and automatic measurements was assessed by calculating the overall similarity with the coefficient of multiple correlation (CMC). In vivo experiments demonstrated that MTJ tracking with the proposed method (CMC = 0.97±0.02) was more consistent with the manual measurements than existing optical flow tracking methods (CMC = 0.79±0.11). This study demonstrated that the proposed method was robust to the interference of nontendinous components, resulting in a more reliable measurement of MTJ displacement, which may facilitate further research and applications related to the architectural change of muscles and tendons.
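The point-tracking step rests on the classic Lucas-Kanade least-squares formulation: within a small window around each marked point, the spatial and temporal image gradients are stacked into an overdetermined linear system whose solution is the displacement. The following is an illustrative single-point, single-scale sketch only (not the authors' implementation; the window size and the synthetic test image are assumptions):

```python
import numpy as np

def lucas_kanade_displacement(img1, img2, point, window=15):
    """Estimate the (dx, dy) displacement of `point` (row, col) from
    img1 to img2 via the classic Lucas-Kanade least-squares solution."""
    r = window // 2
    row, col = point
    # spatial gradients of the first frame and the temporal gradient
    Iy, Ix = np.gradient(img1)          # axis 0 = rows (y), axis 1 = cols (x)
    It = img2 - img1
    ys = slice(row - r, row + r + 1)
    xs = slice(col - r, col + r + 1)
    # brightness constancy, first order: Ix*dx + Iy*dy = -It in the window
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = -It[ys, xs].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                            # (dx, dy)

# synthetic check: a smooth Gaussian blob shifted right by 0.5 px
yy, xx = np.mgrid[0:64, 0:64].astype(float)
blob = lambda cx: np.exp(-((xx - cx) ** 2 + (yy - 32) ** 2) / 50.0)
img1, img2 = blob(32.0), blob(32.5)
dx, dy = lucas_kanade_displacement(img1, img2, (32, 32))
print(f"dx={dx:.2f} dy={dy:.2f}")       # dx ≈ 0.5, dy ≈ 0.0
```

In practice a pyramidal, iterative variant (as in common computer-vision libraries) is used so that displacements larger than a pixel or two remain recoverable.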

2009 ◽  
Vol 21 (03) ◽  
pp. 223-232 ◽  
Author(s):  
Tsung-Yuan Tsai ◽  
Tung-Wu Lu ◽  
Mei-Ying Kuo ◽  
Horng-Chaung Hsu

Skin marker-based stereophotogrammetry has been widely used in the in vivo, noninvasive measurement of three-dimensional (3D) joint kinematics in many clinical applications. However, the measured poses of body segments are subject to errors called soft tissue artifacts (STA). No study has reported the unrestricted STA of markers on the thigh and shank in normal subjects during functional activities. The purpose of this study was to assess the 3D movement of skin markers relative to the underlying bones in normal subjects during functional activities using a noninvasive method based on the integration of 3D fluoroscopy and stereophotogrammetry. Generally, thigh markers had greater STA than shank ones and the STA of the markers were in nonlinear relationships with knee flexion angles. The STA of a marker also appeared to vary among subjects and were affected by activities. This suggests that correction of STA in human motion analysis may have to consider the multijoint nature of functional activities such as using a global compensation approach with individual anthropometric data. The results of the current study may be helpful for establishing guidelines of marker location selection and for developing STA compensation methods in human motion analysis.


2019 ◽  
Vol 8 (3) ◽  
pp. 839-846
Author(s):  
Nur Ayuni Mohamed ◽  
Mohd Asyraf Zulkifley

There is a growing demand for surveillance systems that can detect fall-down events because of the increased number of surveillance cameras being installed in many public indoor and outdoor locations. Fall-down event detection has been vigorously and extensively researched for safety purposes, particularly to monitor elderly people, patients, and toddlers. This computer vision detector has become more affordable with the development of high-speed computer networks and low-cost video cameras. This paper proposes a moving object detection method based on human motion analysis for human fall-down events. The method comprises three parts: a preprocessing part to reduce image noise, a motion detection part using the TV-L1 optical flow algorithm, and a performance measurement part. The last part analyzes the results of the detection part in terms of bounding boxes, which are compared against the given ground truth. The proposed method is tested on the Fall Down Detection (FDD) dataset and compared with Gunnar Farneback optical flow by measuring the intersection over union (IoU) of the output with respect to the ground truth bounding box. The experimental results show that the proposed method achieves an average IoU of 0.92524.
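The IoU score used for evaluation has a standard closed form: the overlap area of the two boxes divided by the area of their union. A minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) form (the coordinate convention is an assumption; the paper does not specify one):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # zero if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))   # 50/150 ≈ 0.333
```

An average IoU of 0.925 therefore means the predicted and ground-truth boxes overlap almost completely across the test frames.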


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Kristi Powers ◽  
Raymond Chang ◽  
Justin Torello ◽  
Rhonda Silva ◽  
Yannick Cadoret ◽  
...  

Echocardiography is a widely used and clinically translatable imaging modality for the evaluation of cardiac structure and function in preclinical drug discovery and development. Echocardiograms are among the first in vivo diagnostic tools utilized to evaluate the heart due to their relatively low cost, high-throughput acquisition, and non-invasive nature; however, lengthy manual image analysis, intra- and inter-operator variability, and subjective image analysis present a challenge for reproducible data generation in preclinical research. To combat the image-processing bottleneck and address both variability and reproducibility challenges, we developed a semi-automated analysis algorithm workflow to analyze long- and short-axis murine left ventricle (LV) ultrasound images. The long-axis B-mode algorithm executes a script protocol that is trained using a reference library of 322 manually segmented LV ultrasound images. The short-axis script was engineered to analyze M-mode ultrasound images in a semi-automated fashion using a pixel intensity evaluation approach, allowing analysts to place two seed-points to triangulate the local maxima of LV wall boundary annotations. Blinded operator evaluation of the semi-automated analysis tool was performed and compared to the current manual segmentation methodology for testing inter- and intra-operator reproducibility at baseline and after a pharmacologic challenge. Comparisons between manual and semi-automatic derivation of LV ejection fraction resulted in a relative difference of 1% for long-axis (B-mode) images and 2.7% for short-axis (M-mode) images. Our semi-automatic workflow approach reduces image analysis time and subjective bias, as well as decreases inter- and intra-operator variability, thereby enhancing throughput and improving data quality for pre-clinical in vivo studies that incorporate cardiac structure and function endpoints.
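The seed-point, pixel-intensity idea can be illustrated with a toy helper that snaps a user-placed seed to the nearest local intensity maximum along one M-mode image column, which is one plausible way to anchor a bright wall boundary. The helper name `snap_to_wall`, the search radius, and the synthetic column below are all hypothetical, not the authors' algorithm:

```python
import numpy as np

def snap_to_wall(column, seed_row, search=10):
    """Move a user-placed seed row to the brightest pixel within
    `search` rows along one M-mode image column (hypothetical helper)."""
    lo = max(0, seed_row - search)
    hi = min(len(column), seed_row + search + 1)
    return lo + int(np.argmax(column[lo:hi]))

# synthetic column with two bright "wall" echoes at rows 3 and 8
col = np.array([0, 1, 2, 8, 3, 1, 0, 2, 9, 4, 1], float)
print(snap_to_wall(col, seed_row=2, search=2))  # snaps to the echo at row 3
print(snap_to_wall(col, seed_row=9, search=2))  # snaps to the echo at row 8
```

Restricting the search to a small window around each seed is what keeps the step semi-automated: the analyst's two clicks disambiguate which local maximum belongs to which LV wall.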


2021 ◽  
Vol 43 (2) ◽  
pp. 74-87
Author(s):  
Weimin Zheng ◽  
Shangkun Liu ◽  
Qing-Wei Chai ◽  
Jeng-Shyang Pan ◽  
Shu-Chuan Chu

In this study, an automatic pennation angle measuring approach based on deep learning is proposed. Firstly, the Local Radon Transform (LRT) is used to detect the superficial and deep aponeuroses on the ultrasound image. Secondly, a reference line is introduced between the deep and superficial aponeuroses to assist the detection of the orientation of muscle fibers. Deep Residual Networks (ResNets) are used to judge the relative orientation of the reference line and the muscle fibers. The reference line is then revised until it is parallel to the orientation of the muscle fibers. Finally, the pennation angle is obtained according to the directions of the detected aponeuroses and the muscle fibers. The angle detected by our proposed method differs by about 1° from the manually labeled angle. With a CPU, the average inference time with the proposed method is around 1.6 s for a single muscle-fiber image, compared to 0.47 s per frame of a sequential image sequence. Experimental results show that the proposed method can achieve accurate and robust measurements of pennation angle.
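Once the aponeurosis and fiber orientations are known, the final step reduces to the acute angle between two line directions. A minimal sketch, assuming both orientations are given in degrees (lines have a 180° periodicity, so the difference is folded into [0°, 90°]):

```python
def pennation_angle(aponeurosis_deg, fiber_deg):
    """Acute angle between the deep aponeurosis and the muscle-fiber
    orientation, both given as line orientations in degrees."""
    diff = abs(aponeurosis_deg - fiber_deg) % 180.0   # lines repeat every 180°
    return min(diff, 180.0 - diff)                    # fold into [0, 90]

print(pennation_angle(5.0, 25.0))    # 20.0
print(pennation_angle(170.0, 10.0))  # 20.0 (wraps around the 0°/180° seam)
```

Handling the 0°/180° wrap-around matters here, since near-horizontal aponeuroses can be reported as either ~0° or ~180° by a line detector.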


2021 ◽  
Vol 10 ◽  
pp. 117957272110223
Author(s):  
Thomas Hellsten ◽  
Jonny Karlsson ◽  
Muhammed Shamsuzzaman ◽  
Göran Pulkkis

Background: Several factors, including the aging population and the recent coronavirus pandemic, have increased the need for cost-effective, easy-to-use and reliable telerehabilitation services. Computer vision-based marker-less human pose estimation is a promising variant of telerehabilitation and is currently an intensive research topic. It has attracted significant interest for detailed motion analysis, as it does not need arrangement of external fiducials while capturing motion data from images. This is promising for rehabilitation applications, as it enables analysis and supervision of clients’ exercises and reduces clients’ need for visiting physiotherapists in person. However, development of a marker-less motion analysis system with precise accuracy for joint identification, joint angle measurements and advanced motion analysis is an open challenge. Objectives: The main objective of this paper is to provide a critical overview of recent computer vision-based marker-less human pose estimation systems and their applicability for rehabilitation applications. An overview of some existing marker-less rehabilitation applications is also provided. Methods: This paper presents a critical review of recent computer vision-based marker-less human pose estimation systems with a focus on their joint localization accuracy in comparison to physiotherapy requirements and ease of use. The accuracy, in terms of the capability to measure the knee angle, is analysed using simulation. Results: Current pose estimation systems use 2D, 3D, multiple and single view-based techniques. The most promising techniques from a physiotherapy point of view are 3D marker-less pose estimation based on a single view, as these can perform advanced motion analysis of the human body while only requiring a single camera and a computing device. Preliminary simulations reveal that some proposed systems already provide a sufficient accuracy for 2D joint angle estimations.
Conclusions: Even though test results of different applications for some proposed techniques are promising, more rigorous testing is required to validate their accuracy before they can be widely adopted in advanced rehabilitation applications.
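A 2D knee angle of the kind evaluated in the simulations can be computed from three estimated joint coordinates as the angle at the knee between the thigh and shank vectors. A minimal sketch (the coordinate convention and the straight-leg-equals-180° convention are assumptions, not taken from the paper):

```python
import math

def knee_angle(hip, knee, ankle):
    """Knee angle in degrees from 2D joint coordinates: the angle at the
    knee between the thigh (knee->hip) and shank (knee->ankle) vectors."""
    v1 = (hip[0] - knee[0], hip[1] - knee[1])
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_t = dot / (math.hypot(*v1) * math.hypot(*v2))
    cos_t = max(-1.0, min(1.0, cos_t))        # guard against rounding error
    return math.degrees(math.acos(cos_t))

print(knee_angle((0, 0), (0, 1), (0, 2)))  # collinear joints ≈ 180° (straight leg)
print(knee_angle((0, 0), (0, 1), (1, 1)))  # ≈ 90° of flexion
```

Because the angle is derived from joint positions, localization errors of even a few pixels at the knee propagate directly into the angle estimate, which is why the review compares localization accuracy against physiotherapy requirements.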


Author(s):  
Bappaditya Debnath ◽  
Mary O’Brien ◽  
Motonori Yamaguchi ◽  
Ardhendu Behera

The computer vision community has extensively researched the area of human motion analysis, which primarily focuses on pose estimation, activity recognition, pose or gesture recognition and so on. However, for many applications, like monitoring of functional rehabilitation of patients with musculoskeletal or physical impairments, the requirement is to comparatively evaluate human motion. In this survey, we capture important literature on vision-based monitoring and physical rehabilitation that focuses on comparative evaluation of human motion during the past two decades and discuss the state of current research in this area. Unlike other reviews in this area, which are written from a clinical objective, this article presents research in this area from a computer vision application perspective. We propose our own taxonomy of computer vision-based rehabilitation and assessment research, which is further divided into sub-categories to capture the novelties of each research direction. The review discusses the challenges of this domain due to the wide-ranging human motion abnormalities and the difficulty of automatically assessing those abnormalities. Finally, suggestions on the future direction of research are offered.

