Evaluation of HMDs by QFD for Augmented Reality Applications in the Maxillofacial Surgery Domain

2021 ◽  
Vol 11 (22) ◽  
pp. 11053
Author(s):  
Alessandro Carpinello ◽  
Enrico Vezzetti ◽  
Guglielmo Ramieri ◽  
Sandro Moos ◽  
Andrea Novaresio ◽  
...  

Today, surgical operations are less invasive than they were a few decades ago, and medicine shows a growing trend towards precision surgery. Among many technological advancements, augmented reality (AR) can be a powerful tool for improving surgical practice through its ability to superimpose the 3D geometry of the pre-planned operation over the surgical field, together with medical and instrumental information gathered from operating room equipment. AR is fundamental to reaching new standards in maxillofacial surgery. Surgeons will no longer need to shift their focus from the patient to look at monitors. Osteotomies will not require physical guides fixed to the patient's bones for making resections. Handling grafts and 3D models directly in the operating room will permit fine-tuning of the procedure before harvesting the implant. This article studies the application of AR head-mounted displays (HMDs) in three operative scenarios (oncological and reconstructive surgery, orthognathic surgery, and maxillofacial trauma surgery) by means of quantitative logic, using the Quality Function Deployment (QFD) tool to determine their requirements. The article evaluates the readiness of HMDs currently on the market and highlights the features they still lack.
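At its core, a QFD evaluation reduces to a weighted relationship matrix between user requirements and device characteristics. A minimal sketch of such a scoring follows; the requirements, weights, and devices are illustrative placeholders, not values taken from the study:

```python
# Minimal sketch of a QFD-style weighted scoring of HMDs (illustrative
# requirements and weights; not the actual house-of-quality from the study).
REQUIREMENTS = {            # surgeon requirement -> importance weight (1-5)
    "display_resolution": 5,
    "tracking_latency": 4,
    "field_of_view": 3,
    "weight_comfort": 2,
}

# Relationship scores (0/1/3/9, as conventional in QFD) between each
# requirement and each hypothetical HMD's technical performance.
HMD_SCORES = {
    "hmd_a": {"display_resolution": 9, "tracking_latency": 3,
              "field_of_view": 3, "weight_comfort": 1},
    "hmd_b": {"display_resolution": 3, "tracking_latency": 9,
              "field_of_view": 9, "weight_comfort": 3},
}

def qfd_score(scores: dict) -> int:
    """Weighted sum of relationship scores, as in one QFD matrix column."""
    return sum(REQUIREMENTS[req] * s for req, s in scores.items())

ranking = sorted(HMD_SCORES, key=lambda h: qfd_score(HMD_SCORES[h]),
                 reverse=True)
for hmd in ranking:
    print(hmd, qfd_score(HMD_SCORES[hmd]))
```

The relative weights, not the raw scores, drive the ranking: a device that excels only on low-weight requirements can still rank below a more balanced one.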

2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Christina Gsaxner ◽  
Jürgen Wallner ◽  
Xiaojun Chen ◽  
Wolfgang Zemann ◽  
Jan Egger

Abstract Medical augmented reality (AR) is an increasingly important topic in many medical fields. AR enables an "x-ray vision" that lets the user see through real-world objects. In medicine, this offers pre-, intra-, or post-interventional visualization of "hidden" structures. In contrast to a classical monitor view, AR applications provide visualization not only on but also in relation to the patient. However, research and development of medical AR applications are challenging because of unique patient-specific anatomies and pathologies. Working with several patients over the weeks or even months of development is not feasible. One alternative is commercial patient phantoms, which are very expensive. Hence, this data set provides a unique collection of head and neck cancer patient PET-CT scans with corresponding 3D models, provided as stereolithography (STL) files. The 3D models are optimized for effective 3D printing at low cost. This data can be used in the development and evaluation of AR applications for head and neck surgery.
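Binary STL files such as those in this data set follow a fixed layout (an 80-byte header, a little-endian uint32 facet count, then 50 bytes per triangle) and can be inspected without third-party libraries; a minimal parser sketch:

```python
import struct

def read_binary_stl(data: bytes):
    """Parse a binary STL blob into a list of triangles.

    Each triangle is returned as (normal, (v0, v1, v2)), where every
    vector is a 3-tuple of floats. Layout per the binary STL format:
    80-byte header, uint32 triangle count, then 50 bytes per facet
    (12 little-endian float32s plus a 2-byte attribute count).
    """
    n_triangles = struct.unpack_from("<I", data, 80)[0]
    triangles = []
    offset = 84
    for _ in range(n_triangles):
        floats = struct.unpack_from("<12f", data, offset)
        normal = floats[0:3]
        verts = (floats[3:6], floats[6:9], floats[9:12])
        triangles.append((normal, verts))
        offset += 50  # 48 bytes of floats + 2-byte attribute count
    return triangles
```

For real workflows a dedicated mesh library is more practical, but a reader like this is enough to count facets or sanity-check a download.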


2021 ◽  
Vol 51 (2) ◽  
pp. E14
Author(s):  
Tim Fick ◽  
Jesse A. M. van Doormaal ◽  
Lazar Tosic ◽  
Renate J. van Zoest ◽  
Jene W. Meulstee ◽  
...  

OBJECTIVE For currently available augmented reality workflows, 3D models need to be created with manual or semiautomatic segmentation, which is a time-consuming process. The authors created an automatic segmentation algorithm that generates 3D models of skin, brain, ventricles, and contrast-enhancing tumor from a single T1-weighted MR sequence and embedded this model in an automatic workflow for 3D evaluation of anatomical structures with augmented reality in a cloud environment. In this study, the authors validated the accuracy and efficiency of this automatic segmentation algorithm for brain tumors and compared it with a manually segmented ground truth set. METHODS Fifty contrast-enhanced T1-weighted sequences of patients with contrast-enhancing lesions measuring at least 5 cm3 were included. All slices of the ground truth set were manually segmented. The same scans were subsequently run in the cloud environment for automatic segmentation. Segmentation times were recorded. The accuracy of the algorithm was compared with that of manual segmentation and evaluated in terms of the Sørensen-Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95th percentile of the Hausdorff distance (HD95). RESULTS The mean ± SD computation time of the automatic segmentation algorithm was 753 ± 128 seconds. The mean ± SD DSC was 0.868 ± 0.07, ASSD was 1.31 ± 0.63 mm, and HD95 was 4.80 ± 3.18 mm. Meningiomas (mean 0.89, median 0.92) showed greater DSC than metastases (mean 0.84, median 0.85). Automatic segmentation was more accurate for supratentorial metastases (DSC: mean 0.86, median 0.87; HD95: mean 3.62 mm, median 3.11 mm) than for infratentorial metastases (DSC: mean 0.82, median 0.81; HD95: mean 5.26 mm, median 4.72 mm).
CONCLUSIONS The automatic cloud-based segmentation algorithm is reliable, accurate, and fast enough to aid neurosurgeons in everyday clinical practice by providing 3D augmented reality visualization of contrast-enhancing intracranial lesions measuring at least 5 cm3. The next steps involve incorporation of other sequences and improving accuracy with 3D fine-tuning in order to expand the scope of the augmented reality workflow.
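The overlap and distance metrics reported above have compact definitions; a toy sketch on small sets of voxel coordinates (using a simple nearest-rank percentile for HD95 — production pipelines typically compute these on surface voxels with dedicated tooling):

```python
import math

def dice(a: set, b: set) -> float:
    """Sørensen-Dice similarity coefficient between two voxel sets."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def hd95(a: set, b: set) -> float:
    """95th-percentile symmetric Hausdorff distance between point sets.

    Uses a simple nearest-rank percentile over the pooled directed
    nearest-neighbour distances; fine for a toy example.
    """
    def nearest(p, pts):
        return min(math.dist(p, q) for q in pts)
    dists = sorted([nearest(p, b) for p in a] +
                   [nearest(q, a) for q in b])
    idx = min(len(dists) - 1, int(0.95 * (len(dists) - 1)))
    return dists[idx]
```

DSC rewards volume overlap, while HD95 penalizes outlying boundary errors, which is why the abstract reports both: a segmentation can score well on one and poorly on the other.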


2019 ◽  
Vol 31 (1) ◽  
pp. 139-146 ◽  
Author(s):  
Camilo A. Molina ◽  
Nicholas Theodore ◽  
A. Karim Ahmed ◽  
Erick M. Westbroek ◽  
Yigal Mirovsky ◽  
...  

OBJECTIVE Augmented reality (AR) is a novel technology that has the potential to increase the technical feasibility, accuracy, and safety of conventional manual and robotic computer-navigated pedicle insertion methods. Visual data are directly projected to the operator’s retina and overlaid onto the surgical field, thereby removing the requirement to shift attention to a remote display. The objective of this study was to assess the comparative accuracy of AR-assisted pedicle screw insertion in comparison to conventional pedicle screw insertion methods. METHODS Five cadaveric male torsos were instrumented bilaterally from T6 to L5 for a total of 120 inserted pedicle screws. Postprocedural CT scans were obtained, and screw insertion accuracy was graded by 2 independent neuroradiologists using both the Gertzbein scale (GS) and a combination of that scale and the Heary classification, referred to in this paper as the Heary-Gertzbein scale (HGS). Non-inferiority analysis was performed, comparing the accuracy to freehand, manual computer-navigated, and robotics-assisted computer-navigated insertion accuracy rates reported in the literature. User experience analysis was conducted via a user experience questionnaire filled out by operators after the procedures. RESULTS The overall screw placement accuracy achieved with the AR system was 96.7% based on the HGS and 94.6% based on the GS. Insertion accuracy was non-inferior to accuracy reported for manual computer-navigated pedicle insertion based on both the GS and the HGS scores. When compared to accuracy reported for robotics-assisted computer-navigated insertion, accuracy achieved with the AR system was found to be non-inferior when assessed with the GS, but superior when assessed with the HGS. Last, accuracy results achieved with the AR system were found to be superior to results obtained with freehand insertion based on both the HGS and the GS scores. Accuracy results were not found to be inferior in any comparison. User experience analysis yielded an “excellent” usability classification. CONCLUSIONS AR-assisted pedicle screw insertion is a technically feasible and accurate insertion method.
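A non-inferiority comparison of an observed accuracy rate against a rate from the literature can be sketched as a one-sided z test; the reference rate and margin below are illustrative assumptions, not the study's actual analysis:

```python
import math

def noninferiority_z(successes: int, n: int,
                     p_ref: float, margin: float) -> float:
    """One-sided z statistic for H0: p <= p_ref - margin (non-inferiority).

    Normal approximation for a single observed proportion compared with a
    fixed literature rate; z > 1.645 rejects H0 at alpha = 0.05.
    """
    p_hat = successes / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - (p_ref - margin)) / se

# Illustrative numbers only: 116/120 screws graded accurate, compared with
# a hypothetical 94% literature rate and a 10-percentage-point margin.
z = noninferiority_z(116, 120, 0.94, 0.10)
```

The choice of margin dominates such an analysis: the same observed rate can be non-inferior under a generous margin and inconclusive under a strict one, which is why published non-inferiority margins are pre-specified.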


2021 ◽  
Vol 186 (Supplement_1) ◽  
pp. 295-299
Author(s):  
Debbie L Teodorescu ◽  
Stephen Okajima ◽  
Asad Moten ◽  
Mike H M Teodorescu ◽  
Majed El Hechi ◽  
...  

ABSTRACT Introduction Scarcity of operating rooms and personal protective equipment in far-forward field settings makes surgical infections a potential concern for combat mortality and morbidity. Surgical and transport personnel also face infectious risks from bodily fluid exposures. Our study aimed to describe the serial, proof-of-concept testing of the SurgiBox technology: an inflatable sterile environment that addresses the aforementioned problems, fits on gurneys and backpacks, and drapes over incisions. Materials and Methods The SurgiBox environmental control unit and inflatable enclosure were optimized over five generations based on iterative feedback from stakeholders experienced in surgery in austere settings. The airflow system was developed by analytic modeling, verified through in silico modeling in SOLIDWORKS, and confirmed with prototype smoke-trail checking. Particulate counts evaluated the enclosure’s ability to control and mitigate users’ exposures to potentially infectious contaminants from the surgical field in various settings. SurgiBox enclosures were set up over a mannequin’s torso, in a configuration and position for either thoracic or abdominal surgery. A particle counter was serially positioned in sternotomy and laparotomy positions, as well as bilateral flank positions. This setup was repeated with open ports exposing the enclosure to the external environment. To simulate stress scenarios, sampling was repeated with enclosure measurements during an increase in external particulate concentration. Results The airflow technology effectively kept contaminants away from the incision and maintained a pressure differential to reduce particle entry. Benchtop testing demonstrated that even when ports were opened or the external environment had a high contaminant burden, the enclosed surgical field consistently registered a particle count of 0 in all positions.
Time from kit opening to incision averaged 54.5 seconds, with the rate-limiting step being connecting the environmental control unit to the enclosure. The portable kit weighed 5.9 lbs. Conclusions Analytic, in silico, and mechanical airflow modeling and benchtop testing have helped to quantify the SurgiBox system’s reliability in creating and maintaining an operating room-quality surgical field within the enclosure, as well as protecting the surgical team outside the enclosure. More recent and ongoing work has focused on specifying optimal use settings in the casualty chain of care, expanding support for circumferential procedures, automating airflow control, and accelerating system setup. SurgiBox’s ultimate goal is to take timely, safe surgery to patients in even the most austere of settings.


ORL ◽  
2021 ◽  
pp. 1-10
Author(s):  
Claudia Scherl ◽  
Johanna Stratemeier ◽  
Nicole Rotter ◽  
Jürgen Hesser ◽  
Stefan O. Schönberg ◽  
...  

<b><i>Introduction:</i></b> Augmented reality can improve the planning and execution of surgical procedures. Head-mounted devices such as the HoloLens® (Microsoft, Redmond, WA, USA) are particularly suitable for these aims because they are controlled by hand gestures and enable contactless handling in a sterile environment. <b><i>Objectives:</i></b> So far, these systems have not yet found their way into the operating room for surgery of the parotid gland. This study explored the feasibility and accuracy of augmented reality-assisted parotid surgery. <b><i>Methods:</i></b> 2D MRI holographic images were created, and 3D holograms were reconstructed from MRI DICOM files and made visible via the HoloLens. 2D MRI slices were scrolled through, 3D images were rotated, and 3D structures were shown and hidden using only hand gestures. The 3D model and the patient were aligned manually. <b><i>Results:</i></b> The use of augmented reality with the HoloLens in parotid surgery was feasible. Gestures were recognized correctly. The mean accuracy of superimposition of the holographic model onto the patient’s anatomy was 1.3 cm. Highly significant differences in registration position error were seen between central and peripheral structures (<i>p</i> = 0.0059), with the smallest deviation for central structures (10.9 mm) and the largest for peripheral structures (19.6 mm). <b><i>Conclusion:</i></b> This pilot study offers a first proof of concept of the clinical feasibility of the HoloLens for parotid tumor surgery. The workflow is not affected, but additional information is provided. Surgical performance could become safer through the navigation-like application of reality-fused 3D holograms, and ergonomics improve without compromising sterility. Superimposition of the 3D holograms onto the surgical field was possible, but further refinement is necessary to improve the accuracy.
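Superimposition accuracy of this kind is typically quantified as the Euclidean distance between corresponding landmarks on the hologram and on the patient; a sketch with hypothetical coordinates chosen to mirror the central-versus-peripheral pattern reported above:

```python
import math
from statistics import mean

def registration_errors(model_pts, patient_pts):
    """Euclidean distance between each holographic landmark and its
    anatomical counterpart (both expressed in the same millimetre frame)."""
    return [math.dist(m, p) for m, p in zip(model_pts, patient_pts)]

# Hypothetical landmark pairs (mm), grouped as central vs peripheral;
# not measurements from the study.
central_model = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
central_patient = [(8.0, 6.0, 0.0), (18.0, 6.0, 0.0)]    # each 10 mm off
peripheral_model = [(60.0, 0.0, 0.0)]
peripheral_patient = [(72.0, 16.0, 0.0)]                 # 20 mm off

central_err = mean(registration_errors(central_model, central_patient))
peripheral_err = mean(registration_errors(peripheral_model,
                                          peripheral_patient))
```

Errors growing with distance from the registration centre, as in this toy data, are characteristic of a small rotational misalignment, which may partly explain the central-versus-peripheral difference the study observed.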


2021 ◽  
Vol 45 (5) ◽  
Author(s):  
Yuri Nagayo ◽  
Toki Saito ◽  
Hiroshi Oyama

Abstract The surgical education environment has been changing significantly due to restricted work hours, limited resources, and increasing public concern for safety and quality, leading to the evolution of simulation-based training in surgery. Of the various simulators, low-fidelity simulators are widely used to practice surgical skills such as suturing because they are portable, inexpensive, and easy to use without requiring complicated settings. However, since low-fidelity simulators do not offer any teaching information, trainees practice with them on their own, referring to textbooks or videos, which are insufficient for learning open surgical procedures. This study aimed to develop a new suture training system for open surgery that provides trainees with three-dimensional information on exemplary procedures performed by experts and allows them to observe and imitate the procedures during self-practice. The proposed system consists of a motion capture system for surgical instruments and a three-dimensional replication system that reproduces captured procedures on the surgical field. Motion capture of surgical instruments was achieved inexpensively by using cylindrical augmented reality (AR) markers, and replication of captured procedures was realized by visualizing them three-dimensionally at the same position and orientation as captured, using an AR device. For the subcuticular interrupted suture, it was confirmed that the proposed system enabled users to observe experts’ procedures from any angle and imitate them by manipulating the actual surgical instruments during self-practice. We expect that this training system will contribute to developing a novel surgical training method that enables trainees to learn surgical skills by themselves in the absence of experts.
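Replaying a captured procedure at the same position and orientation amounts, at its simplest, to interpolating timestamped instrument poses; a minimal sketch with hypothetical frames (positions only — a full system would also interpolate orientation, e.g. by quaternion slerp):

```python
from bisect import bisect_right

# Hypothetical recorded trajectory of a surgical instrument tip: each
# frame is (time_s, (x, y, z) position in mm) as captured from AR markers.
frames = [(0.0, (0.0, 0.0, 0.0)),
          (1.0, (10.0, 0.0, 0.0)),
          (2.0, (10.0, 5.0, 0.0))]

def pose_at(t: float):
    """Linearly interpolate the captured position at time t for replay."""
    times = [f[0] for f in frames]
    if t <= times[0]:
        return frames[0][1]
    if t >= times[-1]:
        return frames[-1][1]
    i = bisect_right(times, t)               # first keyframe after t
    (t0, p0), (t1, p1) = frames[i - 1], frames[i]
    w = (t - t0) / (t1 - t0)                 # blend weight in [0, 1)
    return tuple(a + w * (b - a) for a, b in zip(p0, p1))
```

Sampling `pose_at` at the display's refresh rate lets the AR device render the expert's instrument smoothly even when the capture rate was lower.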


2014 ◽  
Vol 42 (8) ◽  
pp. 1970-1976 ◽  
Author(s):  
Giovanni Badiali ◽  
Vincenzo Ferrari ◽  
Fabrizio Cutolo ◽  
Cinzia Freschi ◽  
Davide Caramella ◽  
...  

2020 ◽  
Vol 4 (1) ◽  
pp. 11-22
Author(s):  
Deli Deli

Implementation of Augmented Reality for Earth Layer Structure on Android as a Learning Medium is a research project that aims to help present material to elementary school children. The research method chosen for this study is the 4D method (Define, Design, Develop, and Disseminate), with data collected using the Technology Acceptance Model (TAM), built on one construct with three dimensions of user assessment of technology acceptance, to support the design of the questionnaire. The AR design is supported by 3D models so that it can convey the details of each part of the material, helping users understand the content and interact easily with the medium. The final result of this research is that the application is stated to help the school: it is used as a display medium in the classroom, so students no longer need to rely on imagination alone; the learning medium itself presents the material to them. Keywords: Learning Media, 4D Method, User Acceptance Test, Augmented Reality, Android.


Author(s):  
Vivek Parashar

Augmented reality is a technology with which we can integrate 3D virtual objects into our physical environment in real time. Augmented reality brings the virtual world closer to the physical world and gives us the ability to interact with our surroundings. This paper gives an idea of how augmented reality can transform the education industry. In this paper, we use augmented reality to simplify the learning process and allow people to interact with 3D models through gestures. This advancement in technology is changing the way we interact with our surroundings: rather than watching videos or looking at a static diagram in a textbook, augmented reality enables you to do more. Rather than putting someone into an animated world, the goal of augmented reality is to blend virtual objects into the real world.


2021 ◽  
Author(s):  
Phathompat Boonyasaknanon ◽  
Raymond Pols ◽  
Katja Schulze ◽  
Robert Rundle

Abstract An augmented reality (AR) system is presented which enhances the real-time collaboration of domain experts involved in the geologic modeling of complex reservoirs. Traditional techniques are evaluated and compared with this new approach. The objective of geologic modeling is to describe the subsurface as accurately and in as much detail as possible given the available data. This is necessarily an iterative process since, as new wells are drilled, more data become available that either validate current assumptions or force a re-evaluation of the model. As the speed of reservoir development increases, there is a need for expeditious updates of the subsurface model, as working with an outdated model can lead to costly mistakes. Common practice is for a geologist to maintain the geologic model while working closely with other domain experts who are frequently not co-located with the geologist. Time-critical analysis can be hampered by the fact that reservoirs, which are inherently 3D objects, are traditionally viewed on 2D screens. The system presented here allows the geologic model to be rendered as a hologram in multiple locations so that domain experts can collaborate and analyze the reservoir in real time. Collaboration on 3D models has not changed significantly in a generation. For co-located personnel, the approach is to gather around a 2D screen. For remote personnel, the approach has been to share a model through a 2D screen along with video chat. These approaches are not optimal for many reasons. Over the years, various attempts have been made to enhance the collaboration experience, and all have fallen short. In particular, virtual reality (VR) has been seen as a solution to this problem. However, we have found that augmented reality (AR) is a much better solution for many subtle reasons, which are explored in the paper. AR has already acquired an impressive track record in various industries and is likely to find applications in nearly all of them. For various historical reasons, the uptake of AR is much faster in some industries than in others. It is too early to tell whether the use of augmented reality in geological applications will be transformative, but the results of this initial work are promising.

