Real-Time Tool Detection for Workflow Identification in Open Cranial Vault Remodeling

Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 817
Author(s):  
Alicia Pose Díez de la Lastra ◽  
Lucía García-Duarte Sáenz ◽  
David García-Mato ◽  
Luis Hernández-Álvarez ◽  
Santiago Ochandiano ◽  
...  

Deep learning is a recent technology that has shown excellent capabilities for recognition and identification tasks. This study applies these techniques in open cranial vault remodeling surgeries performed to correct craniosynostosis. The objective was to automatically recognize surgical tools in real-time and estimate the surgical phase based on those predictions. For this purpose, we implemented, trained, and tested three algorithms based on previously proposed Convolutional Neural Network architectures (VGG16, MobileNetV2, and InceptionV3) and one new architecture with fewer parameters (CranioNet). A novel 3D Slicer module was specifically developed to implement these networks and recognize surgical tools in real time via video streaming. The training and test data were acquired during a surgical simulation using a 3D printed patient-based realistic phantom of an infant’s head. The results showed that CranioNet presents the lowest accuracy for tool recognition (93.4%), while the highest accuracy is achieved by the MobileNetV2 model (99.6%), followed by VGG16 and InceptionV3 (98.8% and 97.2%, respectively). Regarding phase detection, InceptionV3 and VGG16 obtained the best results (94.5% and 94.4%), whereas MobileNetV2 and CranioNet presented worse values (91.1% and 89.8%). Our results prove the feasibility of applying deep learning architectures for real-time tool detection and phase estimation in craniosynostosis surgeries.
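The second stage described above, estimating the surgical phase from per-frame tool predictions, can be sketched in a few lines. The tool names, phase labels, and sliding-window majority vote below are illustrative assumptions, not the paper's actual mapping:

```python
from collections import Counter, deque

# Hypothetical mapping from a detected tool to the phase it implies;
# the real tool set and phase list are not specified in the abstract.
TOOL_TO_PHASE = {
    "scalpel": "incision",
    "periosteal_elevator": "exposure",
    "craniotome": "osteotomy",
    "plate_bender": "remodeling",
    "needle_holder": "closure",
}

def estimate_phase(frame_tools, window=5):
    """Smooth noisy per-frame tool predictions with a sliding-window
    majority vote, then map the winning tool to a phase label."""
    recent = deque(maxlen=window)
    phases = []
    for tool in frame_tools:
        recent.append(tool)
        winner, _ = Counter(recent).most_common(1)[0]
        phases.append(TOOL_TO_PHASE.get(winner, "unknown"))
    return phases

# A single stray "craniotome" frame is voted down; a sustained run flips the phase.
frames = ["scalpel", "scalpel", "craniotome", "scalpel", "scalpel",
          "craniotome", "craniotome", "craniotome"]
print(estimate_phase(frames, window=3))
```

The vote makes the phase estimate robust to isolated misclassifications, which matters when a single mispredicted frame should not flip the reported phase.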

AI ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 1-16
Author(s):  
Juan Cruz-Benito ◽  
Sanjay Vishwakarma ◽  
Francisco Martin-Fernandez ◽  
Ismael Faro

In recent years, the use of deep learning in language models has gained much attention. Some research projects claim that they can generate text that can be interpreted as human writing, enabling new possibilities in many application areas. Among the different areas related to language processing, one of the most notable in applying this type of modeling is programming languages. For years, the machine learning community has been researching this software engineering area, pursuing goals like applying different approaches to auto-complete, generate, fix, or evaluate code programmed by humans. Considering the increasing popularity of the deep learning-enabled language model approach, we found a lack of empirical papers that compare different deep learning architectures to create and use language models based on programming code. This paper compares neural network architectures such as Average Stochastic Gradient Descent (ASGD) Weight-Dropped LSTMs (AWD-LSTMs), AWD-Quasi-Recurrent Neural Networks (QRNNs), and the Transformer, while using transfer learning and different forms of tokenization, to see how they behave in building language models on a Python dataset for code generation and fill-mask tasks. Considering the results, we discuss each approach's strengths and weaknesses and the gaps we found in evaluating the language models or applying them in a real programming context.
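The tokenization choices such a comparison turns on can be illustrated with Python's own lexer, which is one natural "word-level" granularity for code. This is a toy sketch; the paper's actual tokenizers, vocabularies, and dataset are not reproduced here:

```python
import io
import tokenize

# Two tokenization granularities for the same snippet:
# raw characters versus Python's lexical tokens.
code = "def add(a, b):\n    return a + b\n"

char_tokens = list(code)

# Python's stdlib tokenizer yields NAME/OP/NEWLINE/... tokens;
# whitespace-only tokens (NEWLINE, INDENT, ENDMARKER) are dropped.
word_tokens = [
    tok.string
    for tok in tokenize.generate_tokens(io.StringIO(code).readline)
    if tok.string.strip()
]

print(len(char_tokens), "characters vs", len(word_tokens), "lexical tokens")
print(word_tokens)
```

The trade-off a language model faces follows directly: character vocabularies are tiny but produce long sequences, while lexical or subword vocabularies shorten sequences at the cost of a larger embedding table.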


Author(s):  
Luis C. García-Peraza-Herrera ◽  
Wenqi Li ◽  
Caspar Gruijthuijsen ◽  
Alain Devreker ◽  
George Attilakos ◽  
...  

2019 ◽  
Vol 24 (6) ◽  
pp. 632-641 ◽  
Author(s):  
Du Cheng ◽  
Melissa Yuan ◽  
Imali Perera ◽  
Ashley O’Connor ◽  
Alexander I. Evins ◽  
...  

OBJECTIVE: Craniosynostosis correction, including cranial vault remodeling, fronto-orbital advancement (FOA), and endoscopic suturectomy, requires practical experience with complex anatomy and tools. The infrequent exposure to complex neurosurgical procedures such as these during residency limits extraoperative training. Lack of cadaveric teaching tools given the pediatric nature of synostosis compounds this challenge. The authors sought to create lifelike 3D printed models based on actual cases of craniosynostosis in infants and incorporate them into a practical course for endoscopic and open correction. The authors hypothesized that this training tool would increase extraoperative facility and familiarity with cranial vault reconstruction to better prepare surgeons for in vivo procedures. METHODS: The authors utilized representative craniosynostosis patient scans to create 3D printed models of the calvaria, soft tissues, and cranial contents. Two annual courses implementing these models were held, and surveys were completed by participants (n = 18: 5 attending physicians, 4 fellows, 9 residents) on the day of the course. These participants were surveyed during the course and 1 year later to assess the impact of this training tool. A comparable cohort of trainees who did not participate in the course (n = 11) was also surveyed at the time of the 1-year follow-up to assess their preparation and confidence with performing craniosynostosis surgeries. RESULTS: An iterative process using multiple materials and various printing parameters was used to create representative models. Participants performed all major surgical steps, and the fidelity and utility of the model were quantified through surveys. All attendees reported that the model was a valuable training tool for open reconstruction (n = 18/18 [100%]) and endoscopic suturectomy (n = 17/18 [94%]). In the first year, 83% of course participants (n = 14/17) agreed or strongly agreed that the skin and bone materials were realistic and appropriately detailed; in the second year, 100% (n = 16/16) agreed or strongly agreed that the skin material was realistic and appropriately detailed, and 88% (n = 14/16) agreed or strongly agreed that the bone material was realistic and appropriately detailed. All participants responded that they would use the models for their own personal training and the training of residents and fellows in their programs. CONCLUSIONS: The authors have developed realistic 3D printed models of craniosynostosis, including soft tissues, that allow for surgical practice simulation. The use of these models in surgical simulation provides a level of preparedness that exceeds what currently exists through traditional resident training experience. Employing practical modules using such models as part of a standardized resident curriculum is a logical evolution in neurosurgical education and training.


Author(s):  
Helen Chen ◽  
Shubhankar Mohapatra ◽  
George Michalopoulos ◽  
Xi He ◽  
Ian McKillop

Using deep learning to advance personalized healthcare requires data about patients to be collected and aggregated from disparate sources that often span institutions and geographies. Researchers regularly come face-to-face with legitimate security and privacy policies that constrain access to these data. In this work, we present a vision for privacy-preserving federated neural network architectures that permit data to remain at a custodian’s institution while enabling the data to be discovered and used in neural network modeling. Using a diabetes dataset, we demonstrate that accuracy and processing efficiencies using federated deep learning architectures are equivalent to the models built on centralized datasets.
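Federated learning of the kind described can be sketched without any framework: each site runs local gradient steps, and a server averages only the resulting weights, so patient records never leave the custodian's institution. The one-parameter linear model, learning rate, and two-site setup below are illustrative assumptions, not the paper's architecture:

```python
# A minimal federated-averaging sketch: data stays at each site;
# only model weights travel to the aggregation server.

def local_update(weight, local_data, lr=0.1):
    """One pass of gradient descent for a 1-D linear model y = w*x,
    computed entirely at the data custodian's site."""
    w = weight
    for x, y in local_data:
        grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def federated_average(site_weights, site_sizes):
    """Server-side aggregation, weighted by local dataset size."""
    total = sum(site_sizes)
    return sum(sw * n for sw, n in zip(site_weights, site_sizes)) / total

# Two sites whose local data are each consistent with w = 2.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(1.0, 2.0), (3.0, 6.0)]
w = 0.0
for _ in range(20):  # communication rounds
    w = federated_average(
        [local_update(w, site_a), local_update(w, site_b)],
        [len(site_a), len(site_b)],
    )
print(round(w, 2))  # → 2.0
```

The same loop structure scales to neural network weight vectors, which is how federated models can match centralized training when the sites' data distributions are compatible.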


2021 ◽  
Vol 1 (2) ◽  
pp. 387-413
Author(s):  
Chowdhury Erfan Shourov ◽  
Mahasweta Sarkar ◽  
Arash Jahangiri ◽  
Christopher Paolini

Skateboarding as a method of transportation has become prevalent, increasing the occurrence and likelihood of pedestrian–skateboarder collisions and near-collision scenarios in shared-use roadway areas. Collisions between pedestrians and skateboarders can result in significant injury. New approaches are needed to evaluate shared-use areas prone to hazardous pedestrian–skateboarder interactions and to perform real-time, in situ (e.g., on-device) predictions of pedestrian–skateboarder collisions as road conditions vary due to changes in land usage and construction. A mechanism called Surrogate Safety Measures for skateboarder–pedestrian interaction can be computed to evaluate high-risk conditions on roads and sidewalks using deep learning object detection models. In this paper, we present the first skateboarder–pedestrian safety study leveraging deep learning architectures. We review and analyze state-of-the-art deep learning architectures, namely Faster R-CNN and two variants of the Single Shot Multibox Detector (SSD) model, to select the model that best suits each of two tasks: automated calculation of Post Encroachment Time (PET) and finding hazardous conflict zones in real time. We also contribute a new annotated dataset of skateboarder–pedestrian interactions collected for this study. Both selected models can detect and classify pedestrians and skateboarders correctly and efficiently. However, owing to differences in their architectures and the advantages and disadvantages of each model, each was used for a different task: the Faster R-CNN model, with its improved accuracy, was used to automate the calculation of Post Encroachment Time, whereas the Single Shot Multibox MobileNet V1 model, with its extremely fast inference rate, was used to determine hazardous regions in real time.
An outcome of this work is a model that can be deployed on low-cost, small-footprint mobile and IoT devices at traffic intersections with existing cameras to perform on-device inferencing for in situ Surrogate Safety Measurement (SSM), such as Time-To-Collision (TTC) and Post Encroachment Time (PET). SSM values that exceed a hazard threshold can be published to a Message Queuing Telemetry Transport (MQTT) broker, where messages are received by an intersection traffic signal controller for real-time signal adjustment, thus contributing to state-of-the-art vehicle and pedestrian safety at hazard-prone intersections.
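Once detections yield entry and exit timestamps for a conflict zone, the PET calculation itself reduces to a time difference. A minimal sketch, with an illustrative hazard threshold (the paper's actual threshold is not stated in the abstract):

```python
# Post Encroachment Time (PET): the gap between the first road user
# leaving a conflict zone and the second entering it. Timestamps would
# come from per-frame detections; the values below are illustrative.

def post_encroachment_time(exit_first, entry_second):
    """PET in seconds; smaller values mean a more hazardous conflict."""
    return entry_second - exit_first

def is_hazardous(pet, threshold=1.5):
    """Flag conflicts whose PET falls below a hazard threshold (seconds)."""
    return pet < threshold

# Pedestrian leaves the zone at t = 10.2 s; skateboarder enters at t = 11.0 s.
pet = post_encroachment_time(10.2, 11.0)
print(round(pet, 1), is_hazardous(pet))  # → 0.8 True
```

A PET of 0.8 s would fall below the example threshold and could be published to the broker as a hazard event.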


2020 ◽  
Vol 2 (3) ◽  
pp. 186-194
Author(s):  
Smys S. ◽  
Joy Iong Zong Chen ◽  
Subarna Shakya

In the present research era, machine learning is an important and unavoidable area that provides better solutions to various domains. Deep learning in particular is a cost-efficient, effective supervised learning model that can be applied to many complicated problems. Deep learning offers rich representational features and does not depend on any limited learning method, which helps it obtain better solutions. Owing to its significant performance and steady advancement, it is widely used in applications such as image classification, face recognition, visual recognition, language processing, speech recognition, object detection, and various science and business analyses. This survey provides insight into deep learning through an intensive analysis of deep learning architectures and their characteristics, along with their limitations. It also analyzes recent trends in deep learning through the literature to explore the present evolution of deep learning models.


2020 ◽  
Vol 39 (4) ◽  
pp. 5699-5711
Author(s):  
Shirong Long ◽  
Xuekong Zhao

The smart teaching mode overcomes the shortcomings of traditional online and offline teaching, but real-time feature extraction for teachers and students remains deficient. In view of this, this study uses particle swarm image recognition and deep learning technology to process intelligent classroom video, extract classroom task features in real time, and send them to the teacher. To overcome the premature convergence of the standard particle swarm optimization (PSO) algorithm, an improved multi-swarm strategy is proposed. This paper combines PSO with useful attributes of other algorithms to increase particle diversity, enhance the particles' global search ability, and achieve effective feature extraction. The research indicates that the proposed method has practical effects and can provide a theoretical reference for subsequent related research.
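Standard PSO, the baseline whose premature convergence the study targets, can be sketched in plain Python. The hyperparameters and sphere objective below are illustrative; the multi-swarm diversity improvements described above are not reproduced here:

```python
import random

random.seed(0)

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimization minimizing f. Each particle is
    pulled toward its personal best and the global best; with a single
    swarm, all particles can collapse on one region (premature convergence)."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
print([round(v, 3) for v in best])
```

On a unimodal objective like the sphere function this converges quickly; the single shared global best is precisely what multi-swarm variants relax to preserve diversity on multimodal problems.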


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem through different image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background–foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors propose a deep learning based solution. They contribute a new whiteboard image dataset and adopt two deep convolutional neural network architectures for whiteboard image quality enhancement. Evaluations of the trained models demonstrated superior performance over the conventional methods.
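The kind of conventional baseline the authors compare against can be sketched as a simple contrast stretch on a grayscale patch. This is a toy illustration of one classical enhancement step, not the paper's CNN models:

```python
def contrast_stretch(img, lo=0, hi=255):
    """Linearly rescale pixel intensities to the full [lo, hi] range,
    a basic conventional enhancement step for faded pen strokes."""
    flat = [p for row in img for p in row]
    mn, mx = min(flat), max(flat)
    scale = (hi - lo) / (mx - mn) if mx > mn else 0
    return [[round(lo + (p - mn) * scale) for p in row] for row in img]

# A faded whiteboard patch: background ~200, pen stroke ~120.
patch = [[200, 198, 200],
         [200, 120, 200],
         [199, 125, 200]]
print(contrast_stretch(patch))
```

Such global rescaling darkens strokes and brightens the background, but, as the abstract notes, it cannot recover strokes that are severely degraded, which is the gap the learned models address.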


2019 ◽  
Author(s):  
Giraso Kabandana ◽  
Curtis G. Jones ◽  
Sahra Khan Sharifi ◽  
Chengpeng Chen

We developed a novel microfluidic system that enables automated and near real-time quantitation of indole release kinetics from biofilms.

