Hyperspectral Imaging for Skin Feature Detection: Advances in Markerless Tracking for Spine Surgery

2020 ◽  
Vol 10 (12) ◽  
pp. 4078 ◽  
Author(s):  
Francesca Manni ◽  
Fons van der Sommen ◽  
Svitlana Zinger ◽  
Caifeng Shan ◽  
Ronald Holthuizen ◽  
...  

In spinal surgery, surgical navigation is an essential tool for safe intervention, including the placement of pedicle screws without injury to nerves and blood vessels. Commercially available systems typically rely on the tracking of a dynamic reference frame attached to the spine of the patient. However, the reference frame can be dislodged or obscured during the surgical procedure, resulting in loss of navigation. Hyperspectral imaging (HSI) captures a large number of spectral bands across the electromagnetic spectrum, providing image information invisible to the human eye. We aim to exploit HSI to detect skin features in a novel methodology for tracking patient position in navigated spinal surgery. In our approach, we adopt two local feature detection methods, namely a conventional handcrafted local feature detector and a deep learning-based feature detection method, which are compared to estimate the feature displacement between frames due to motion. To demonstrate the ability of the system to track skin features, we acquire hyperspectral images of the skin of 17 healthy volunteers. Deep-learned skin features are detected and localized with an average error of only 0.25 mm, outperforming the handcrafted local features with respect to a ground truth based on optical markers.
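The handcrafted branch of such a pipeline can be illustrated with a minimal patch-matching sketch: track a skin feature between two frames by searching for the best normalized-cross-correlation match of the patch around it. This is an illustration only, not the authors' implementation; the patch size, search radius, and score function are assumptions.

```python
import numpy as np

def estimate_displacement(frame_a, frame_b, feat_rc, patch=7, search=5):
    """Estimate the motion of one feature between two grayscale frames.

    feat_rc is the (row, col) of the feature in frame_a. The patch around
    it is compared, via normalized cross-correlation, against patches at
    all shifts within +/- search pixels in frame_b. Returns the (dr, dc)
    shift with the highest correlation score.
    """
    r, c = feat_rc
    h = patch // 2
    ref = frame_a[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)  # zero-mean, unit-norm
    best_score, best_shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = frame_b[r + dr - h:r + dr + h + 1,
                           c + dc - h:c + dc + h + 1].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            score = float((ref * cand).mean())
            if score > best_score:
                best_score, best_shift = score, (dr, dc)
    return best_shift
```

Averaging such per-feature displacements over many detected skin features is one simple way to estimate overall patient motion between frames.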

2018 ◽  
Vol 1 (2) ◽  
pp. 2
Author(s):  
Chiung Chyi Shen

Use of pedicle screws is widespread in spinal surgery for degenerative, traumatic, and oncological diseases. The conventional technique is based on the recognition of anatomic landmarks and the preparation and palpation of cortices of the pedicle under control of an intraoperative C-arm (iC-arm) fluoroscopy. With these conventional methods, the median pedicle screw accuracy ranges from 86.7% to 93.8%, even though perforation rates range from 21.1% to 39.8%. The development of novel intraoperative navigational techniques, commonly referred to as image-guided surgery (IGS), provides simultaneous and multiplanar views of spinal anatomy. IGS technology can increase the accuracy of spinal instrumentation procedures and improve patient safety. These systems, such as fluoroscopy-based image guidance ("virtual fluoroscopy") and computed tomography (CT)-based computer-guidance systems, have substantially reduced the risk of pedicle screw misplacement, with overall perforation rates of 14.3% and 9.3%, respectively. "Virtual fluoroscopy" allows simultaneous two-dimensional (2D) guidance in multiple planes, but does not provide any axial images; the quality of the images is directly dependent on the resolution of the acquired fluoroscopic projections. Furthermore, computer-assisted surgical navigation systems decrease the reliance on intraoperative imaging, thus reducing the use of intraprocedural ionizing radiation. The major limitation of this technique is the variation between the position of the patient in the preoperative CT scan, usually obtained before surgery in a supine position, and the operative (prone) position. The next technological evolution is the use of an intraoperative CT (iCT) scan, which would solve these position-dependent changes, granting a higher accuracy in the navigation system.


2021 ◽  
Vol 11 (13) ◽  
pp. 6006
Author(s):  
Huy Le ◽  
Minh Nguyen ◽  
Wei Qi Yan ◽  
Hoa Nguyen

Augmented reality is one of the fastest growing fields, receiving increased funding in recent years as people realise the potential benefits of rendering virtual information in the real world. Most of today's marker-based augmented reality applications use local feature detection and tracking techniques. The disadvantage of these techniques is that the markers must be modified to match the specific classification algorithm, or they suffer from low detection accuracy. Machine learning is an ideal solution to overcome the current drawbacks of image processing in augmented reality applications. However, traditional data annotation requires extensive time and labour, as it is usually done manually. This study incorporates machine learning to detect and track augmented reality marker targets in an application using deep neural networks. We first implement an auto-generated dataset tool, which is used to prepare the machine learning dataset. The final iOS prototype application incorporates object detection, object tracking and augmented reality. The machine learning model is trained to recognise the differences between targets using YOLO, one of the most well-known object detection methods. The final product makes use of ARKit, a valuable toolkit for developing augmented reality applications.
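A standard building block when combining an object detector such as YOLO with tracking, as the prototype does, is associating each new detection with an existing track. The abstract does not describe the association method, so the following is only a generic sketch of IoU-based greedy matching; the function names and threshold are illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def associate(tracks, detections, thresh=0.5):
    """Greedily match each track to its highest-IoU unmatched detection.

    tracks: {track_id: box}; detections: list of boxes.
    Returns (matches {track_id: detection_index}, unmatched indices).
    """
    matches, unmatched = {}, list(range(len(detections)))
    for tid, tbox in tracks.items():
        scored = sorted(((iou(tbox, detections[d]), d) for d in unmatched),
                        reverse=True)
        if scored and scored[0][0] >= thresh:
            matches[tid] = scored[0][1]
            unmatched.remove(scored[0][1])
    return matches, unmatched
```

Detections left unmatched would typically spawn new tracks, while tracks that go unmatched for several frames are dropped.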


2014 ◽  
Vol 36 (3) ◽  
pp. E5 ◽  
Author(s):  
Kern H. Guppy ◽  
Indro Chakrabarti ◽  
Amit Banerjee

Imaging guidance using intraoperative CT (O-arm surgical imaging system) combined with a navigation system has been shown to increase accuracy in the placement of spinal instrumentation. The authors describe 4 complex upper cervical spine cases in which the O-arm combined with the StealthStation surgical navigation system was used to accurately place occipital screws, C-1 screws anteriorly and posteriorly, C-2 lateral mass screws, and pedicle screws in C-6. This combination was also used to navigate through complex bony anatomy altered by tumor growth and bony overgrowth. The 4 cases presented are: 1) a developmental deformity case in which the C-1 lateral mass was in the center of the cervical canal causing cord compression; 2) a case of odontoid compression of the spinal cord requiring an odontoidectomy in a patient with cerebral palsy; 3) a case of an en bloc resection of a C2–3 chordoma with instrumentation from the occiput to C-6 and placement of C-1 lateral mass screws anteriorly and posteriorly; and 4) a case of repeat surgery for a non-union at C1–2 with distortion of the anatomy and overgrowth of the bony structure at C-2.


2021 ◽  
Author(s):  
Andreas Beckert ◽  
Lea Eisenstein ◽  
Tim Hewson ◽  
George C. Craig ◽  
Marc Rautenhaus

Atmospheric fronts, a widely used conceptual model in meteorology, describe sharp boundaries between two air masses of different thermal properties. In the mid-latitudes, these sharp boundaries are commonly associated with extratropical cyclones. The passage of a frontal system is accompanied by significant weather changes, and fronts are therefore of particular interest in weather forecasting. Over the past decades, several two-dimensional, horizontal feature detection methods to objectively identify atmospheric fronts in numerical weather prediction (NWP) data have been proposed in the literature (e.g. Hewson, Met. Apps., 1998). In addition, recent research (Kern et al., IEEE Trans. Visual. Comput. Graphics, 2019) has shown the feasibility of detecting atmospheric fronts as three-dimensional surfaces representing the full 3D frontal structure. In our work, we build on the studies by Hewson (1998) and Kern et al. (2019) to make front detection usable for forecasting purposes in an interactive 3D visualization environment. We consider the following aspects: (a) As NWP models have evolved in recent years to resolve atmospheric processes on scales far smaller than the scale of midlatitude cyclone fronts, we evaluate whether previously developed detection methods are still capable of detecting fronts in current high-resolution NWP data. (b) We present the integration of our implementation into the open-source "Met.3D" software (http://met3d.wavestoweather.de) and analyze two- and three-dimensional frontal structures in selected cases of European winter storms, comparing different models and model resolutions. (c) The considered front detection methods rely on threshold parameters, which mostly refer to the magnitude of the thermal gradient within the adjacent frontal zone, the frontal strength. If the frontal strength exceeds the threshold, a so-called feature candidate is classified as a front; otherwise it is discarded. If a single, fixed threshold is used, unwanted "holes" can be observed in the detected fronts. Hence, we use transparency mapping with fuzzy thresholds to generate continuous frontal features. We pay particular attention to the adjustment of the filter thresholds and evaluate how the choice of thresholds depends on the resolution of the underlying data.
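The fuzzy-threshold idea can be sketched as a simple opacity mapping: instead of a hard cutoff on frontal strength, ramp opacity smoothly between a lower and an upper threshold. This is a minimal illustration under assumed parameter names and an assumed smoothstep ramp, not the Met.3D implementation.

```python
def frontal_opacity(strength, t_lo, t_hi):
    """Map frontal strength (thermal-gradient magnitude) to opacity.

    Below t_lo the candidate is fully transparent (effectively discarded),
    above t_hi it is fully opaque, and in between a smoothstep ramp avoids
    the abrupt "holes" produced by a single hard threshold.
    """
    if strength <= t_lo:
        return 0.0
    if strength >= t_hi:
        return 1.0
    x = (strength - t_lo) / (t_hi - t_lo)  # normalize to [0, 1]
    return x * x * (3.0 - 2.0 * x)         # smoothstep interpolation
```

A hard threshold corresponds to the limit t_lo = t_hi; widening the gap between the two thresholds trades sharp front boundaries for visual continuity.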


2019 ◽  
Vol 18 (5) ◽  
pp. 496-502 ◽  
Author(s):  
Erik Edström ◽  
Gustav Burström ◽  
Rami Nachabe ◽  
Paul Gerdhem ◽  
Adrian Elmi Terander

Abstract BACKGROUND Treatment of several spine disorders requires placement of pedicle screws. Detailed 3-dimensional (3D) anatomic information facilitates this process and improves accuracy. OBJECTIVE To present a workflow for a novel augmented-reality-based surgical navigation (ARSN) system installed in a hybrid operating room for anatomy visualization and instrument guidance during pedicle screw placement. METHODS The workflow includes surgical exposure, imaging, automatic creation of a 3D model, and pedicle screw path planning for instrument guidance during surgery, as well as the actual screw placement, spinal fixation, wound closure, and intraoperative verification of the treatment results. Special focus was given to process integration and minimization of overhead time. Efforts were made to manage staff radiation exposure, avoiding the need for lead aprons. Time was kept throughout the procedure and subdivided to reflect key steps. The navigation workflow was validated in a trial with 20 cases requiring pedicle screw placement (13/20 scoliosis). RESULTS Navigated interventions were performed with a median total time of 379 min per procedure (range 232-548 min for 4-24 implanted pedicle screws). The total procedure time was subdivided into surgical exposure (28%), cone beam computed tomography imaging and 3D segmentation (2%), software planning (6%), navigated surgery for screw placement (17%), and non-navigated instrumentation, wound closure, etc. (47%). CONCLUSION Intraoperative imaging and preparation for surgical navigation totaled 8% of the surgical time. Consequently, ARSN can routinely be used to perform highly accurate surgery, potentially decreasing the risk of complications and revision surgery while minimizing radiation exposure to the staff.
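The reported time breakdown can be checked with a few lines of arithmetic: the step percentages sum to 100, and the imaging plus planning steps account for the 8% navigation overhead stated in the conclusion (step labels abbreviated from the abstract).

```python
# Percentage breakdown of the median 379-minute procedure, as reported.
steps = {
    "surgical exposure": 28,
    "CBCT imaging and 3D segmentation": 2,
    "software planning": 6,
    "navigated screw placement": 17,
    "non-navigated instrumentation, closure, etc.": 47,
}
total_min = 379  # median total procedure time

# Convert each step's share into minutes of the median procedure.
minutes = {name: total_min * pct / 100 for name, pct in steps.items()}

# Navigation overhead = imaging/segmentation + software planning.
overhead_pct = (steps["CBCT imaging and 3D segmentation"]
                + steps["software planning"])
```

The percentages are exhaustive (they sum to 100), and the 8% overhead corresponds to roughly 30 minutes of the median 379-minute procedure.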
