CT‐less electron radiotherapy simulation and planning with a consumer 3D camera

Author(s):  
Lawrie Skinner ◽  
Rick Knopp ◽  
Yi‐Chun Wang ◽  
Piotr Dubrowski ◽  
Karl K. Bush ◽  
...  
Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 103
Author(s):  
Jan Kohout ◽  
Ludmila Verešpejová ◽  
Pavel Kříž ◽  
Lenka Červená ◽  
Karel Štícha ◽  
...  

An advanced statistical analysis of patients’ faces after specific surgical procedures that temporarily impair the patient’s mimetic muscles is presented. For effective planning of rehabilitation, which typically lasts several months, it is crucial to correctly evaluate the improvement of mimetic muscle function. The current way of describing the progress of rehabilitation depends on the subjective opinion and expertise of the clinician and is not very precise, especially when the most common classification (the House–Brackmann scale) is used. Our system is based on a stereovision Kinect camera and an advanced mathematical approach that objectively quantifies mimetic muscle function independently of the clinician’s opinion. To deal effectively with the complexity of the 3D camera input data and the uncertainty of the evaluation process, we designed a three-stage data-analytic procedure combining the calculation of indicators determined by clinicians with advanced statistical methods, including functional data analysis and ordinal (multiple) logistic regression. We worked with a dataset of 93 distinct patients and 122 sets of measurements. In comparison to classification with the House–Brackmann scale, the developed system is able to automatically monitor the reinnervation of mimetic muscles, giving us the opportunity to discriminate even small improvements during the course of rehabilitation.
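The final stage described above maps continuous muscle-function indicators to discrete clinical grades via ordinal logistic regression. A minimal sketch of a proportional-odds predictor; the coefficient `beta`, the thresholds, and the indicator value are purely hypothetical placeholders, not the paper's fitted parameters:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grade_probabilities(indicator, beta, thresholds):
    """Proportional-odds model: P(grade <= k) = sigmoid(theta_k - beta * x).
    Per-grade probabilities are differences of consecutive cumulative ones."""
    cumulative = [sigmoid(t - beta * indicator) for t in thresholds] + [1.0]
    probs, prev = [], 0.0
    for c in cumulative:
        probs.append(c - prev)
        prev = c
    return probs

# Hypothetical indicator value and parameters, for illustration only:
p = grade_probabilities(indicator=0.8, beta=2.0, thresholds=[-1.0, 0.5, 2.0])
```

With ascending thresholds the cumulative probabilities are ascending, so the per-grade probabilities are non-negative and sum to one, yielding a distribution over the ordinal grades.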


2021 ◽  
Vol 11 (4) ◽  
pp. 1953
Author(s):  
Francisco Martín ◽  
Fernando González ◽  
José Miguel Guerrero ◽  
Manuel Fernández ◽  
Jonatan Ginés

The perception and identification of visual stimuli from the environment is a fundamental capacity of autonomous mobile robots. Current deep learning techniques make it possible to identify and segment objects of interest in an image. This paper presents a novel algorithm to segment an object’s space from a deep segmentation of an image taken by a 3D camera. The proposed approach solves the boundary-pixel problem that appears when a direct mapping from segmented pixels to their correspondences in the point cloud is used. We validate our approach by comparing it with baseline approaches on real images taken by a 3D camera, showing that our method outperforms them in terms of accuracy and reliability. As an application of the proposed algorithm, we present a semantic mapping approach for a mobile robot’s indoor environments.
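The boundary-pixel problem mentioned above arises because pixels on a segmentation border back-project to depths that mix foreground and background. A minimal NumPy sketch of the naive pixel-to-point mapping with a depth-discontinuity filter; the intrinsics `fx, fy, cx, cy` and the `edge_jump` threshold are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def depth_jump(depth):
    """Largest absolute depth difference to any 4-neighbour, per pixel."""
    d = np.zeros_like(depth)
    h = np.abs(np.diff(depth, axis=1))
    v = np.abs(np.diff(depth, axis=0))
    d[:, 1:] = np.maximum(d[:, 1:], h)
    d[:, :-1] = np.maximum(d[:, :-1], h)
    d[1:, :] = np.maximum(d[1:, :], v)
    d[:-1, :] = np.maximum(d[:-1, :], v)
    return d

def mask_to_points(depth, mask, fx, fy, cx, cy, edge_jump=0.05):
    """Back-project masked pixels to 3D camera coordinates, discarding
    pixels next to a depth discontinuity (likely mixed fg/bg points)."""
    keep = mask & (depth > 0) & (depth_jump(depth) < edge_jump)
    v, u = np.nonzero(keep)
    z = depth[v, u]
    return np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
```

Without the discontinuity filter, mask pixels straddling the object silhouette would produce stray 3D points floating between the object and the background surface.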


2021 ◽  
Vol 11 (9) ◽  
pp. 4248
Author(s):  
Hong Hai Hoang ◽  
Bao Long Tran

With the rapid development of cameras and deep learning technologies, computer vision tasks such as object detection, object segmentation and object tracking are being widely applied in many fields of life. For robot grasping tasks, object segmentation aims to classify and localize objects, which helps robots pick objects accurately. The state-of-the-art instance segmentation framework, Mask Region-based Convolutional Neural Network (Mask R-CNN), does not always segment accurately at the edges or borders of objects. Approaches using a 3D camera, on the other hand, can easily extract entire (foreground) objects but have difficulty, or require a large amount of computational effort, classifying them. We propose a novel approach in which we combine Mask R-CNN with 3D algorithms by adding a 3D processing branch for instance segmentation. The outcomes of the two branches are used together to classify the pixels at object edges by exploiting the spatial relationship between the edge region and the mask region. We analyze the effectiveness of the method on harsh cases of object placement, for example, objects that are close together, overlapping, or obscuring each other, to focus on edge and border segmentation. Our proposed method is about 4 to 7% higher and more stable in IoU (intersection over union), reaching 46% mAP (mean Average Precision), a higher accuracy than its counterpart. The feasibility experiment shows that our method could be a remarkable contribution to research on grasping robots.
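The two-branch fusion idea, keeping the interior of the CNN mask while letting the 3D branch's foreground mask decide the boundary pixels, can be sketched in NumPy. The one-pixel erosion-based boundary band and the simple fusion rule below are an assumed simplification, not the paper's exact spatial-relationship procedure:

```python
import numpy as np

def erode(mask):
    """One step of 4-neighbour binary erosion on a boolean mask."""
    e = mask.copy()
    e[1:, :] &= mask[:-1, :]
    e[:-1, :] &= mask[1:, :]
    e[:, 1:] &= mask[:, :-1]
    e[:, :-1] &= mask[:, 1:]
    return e

def refine_edges(cnn_mask, depth_mask):
    """Keep the CNN mask interior; in the one-pixel boundary band,
    trust the 3D branch's foreground mask instead."""
    interior = erode(cnn_mask)
    boundary = cnn_mask & ~interior
    return interior | (boundary & depth_mask)

def mask_iou(a, b):
    """Intersection over union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0
```

Under this rule, whenever the depth foreground is more accurate at object borders than the CNN mask, the refined mask's IoU against the true extent can only improve in the boundary band.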


Author(s):  
Cuong Vo-Le ◽  
Pham Van Muoi ◽  
Nguyen Hong Son ◽  
Nguyen Van San ◽  
Vu Khac Duong ◽  
...  

2015 ◽  
Vol 115 ◽  
pp. S522
Author(s):  
J. Lopez-Tarjuelo ◽  
A. Santos-Serra ◽  
V. Morillo-Macías ◽  
A. Bouché-Babiloni ◽  
N. Luquero-Llopis ◽  
...  

Author(s):  
A. A. Grigorieva ◽  
A. A. Bulavskaya ◽  
D. A. Belousov ◽  
I. A. Miloichikova ◽  
Yu. M. Cherepennikov ◽  
...  
