Augmented reality visualisation for orthopaedic surgical guidance with pre‐ and intra‐operative multimodal image data fusion

2018 ◽ Vol 5 (5) ◽ pp. 189-193
Author(s): Houssam El‐Hariri, Prashant Pandey, Antony J. Hodgson, Rafeef Garbi
2007 ◽ Vol 62 (2) ◽ pp. 192-198
Author(s): Stefan Franz Nemec, Markus Alexander Donat, Sheida Mehrain, Klaus Friedrich, Christian Krestan, ...

Agriculture ◽ 2022 ◽ Vol 12 (1) ◽ pp. 77
Author(s): Tsu Chiang Lei, Shiuan Wan, You Cheng Wu, Hsin-Ping Wang, Chia-Wen Hsieh

This study employed a data fusion method to extract a high-similarity time series feature index from a dataset by integrating MS (multi-spectral) and SAR (synthetic aperture radar) images. In Taiwan, farmland is divided into small parcels whose planting contents vary with individual farmers' practices, so the conventional image classification process cannot produce good outcomes. Crop phenological information is therefore a core factor for multi-period image data. Accordingly, the study addressed this problem using three SPOT6 satellite images and nine Sentinel-1A synthetic aperture radar images acquired in 2019, from which features such as texture and indicator information were calculated. Because a Dynamic Time Warping (DTW) index (i) can integrate different image data sources, (ii) can integrate data of different lengths, and (iii) can generate information with time characteristics, this type of index can resolve certain classification problems in long-term crop classification and monitoring. More specifically, this study used DTW time series analysis to produce "multi-scale time series feature similarity indicators". Three approaches (Support Vector Machine, Neural Network, and Decision Tree) were used to classify paddy patches in two groups: (a) the first group did not apply a DTW index, and (b) the second group applied a DTW index to the conflicting predictions extracted from (a). The second group outperformed the first in overall accuracy (OA) and kappa. Among the classifiers, the Neural Network showed the largest improvement, with OA and kappa rising from 89.51 and 0.66 to 92.63 and 0.74, respectively. The other two classifiers also improved. The best classification result was obtained by the Decision Tree, at 94.71 and 0.81.
These outcomes show that interference effects in the imagery were successfully resolved by combining the spectral and radar images for paddy rice classification. Overall accuracy and kappa both improved, with the maximum kappa enhanced by about 8%. Classification performance was improved by incorporating the DTW index.
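The DTW index at the heart of the abstract above can be illustrated with a minimal sketch. The function below is a textbook DTW distance, not the authors' implementation; it demonstrates properties (ii) and (iii): the two series may differ in length, and the cumulative alignment cost carries temporal structure.

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two numeric series.

    A textbook O(len(a) * len(b)) dynamic program; the series may
    have different lengths, which is property (ii) in the abstract.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j - 1],  # match
                                 D[i - 1][j],      # insertion
                                 D[i][j - 1])      # deletion
    return D[n][m]

# Series of different lengths that trace the same temporal curve
# align with zero cost, while a flat series accumulates cost.
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # → 0.0
print(dtw_distance([1, 2, 3], [2, 2, 2]))     # → 2.0
```

In the study's setting, each parcel contributes a per-date feature series (e.g. a vegetation index), and the DTW cost against reference crop curves serves as the similarity indicator fed to the classifiers.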


2009
Author(s): Cristian A. Linte, John Moore, Andrew Wiles, Jennifer Lo, Chris Wedlake, ...

2010 ◽ Vol 73 (2) ◽ pp. 224-229
Author(s): Stefan Franz Nemec, Philipp Peloschek, Maria Theresa Schmook, Christian Robert Krestan, Wolfgang Hauff, ...

Author(s): B. Aiazzi, L. Alparone, S. Baronti, V. Cappellini, R. Carlà, ...

Sensors ◽ 2018 ◽ Vol 18 (11) ◽ pp. 3960
Author(s): Jeremy Castagno, Ella Atkins

Geographic information systems (GIS) provide accurate maps of terrain, roads, waterways, and building footprints and heights. Aircraft, particularly small unmanned aircraft systems (UAS), can exploit this and additional information, such as building roof structure, to improve navigation accuracy and safely perform contingency landings, particularly in urban regions. However, building roof structure is not fully provided in maps. This paper proposes a method to automatically label building roof shape from publicly available GIS data. Satellite imagery and airborne LiDAR data are processed and manually labeled to create a diverse annotated roof image dataset covering small to large urban cities. Multiple convolutional neural network (CNN) architectures are trained and tested, with the best-performing networks providing a condensed feature set for support vector machine and decision tree classifiers. Satellite image and LiDAR data fusion is shown to provide greater classification accuracy than either data type alone. Adjusting model confidence thresholds leads to significant increases in model precision. Networks trained on roof data from Witten, Germany and Manhattan (New York City) are evaluated on independent data from these cities and from Ann Arbor, Michigan.
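The confidence-thresholding step mentioned in the abstract can be sketched in a few lines. This is not the paper's code, and the probabilities and labels below are made-up illustrative values: raising the threshold makes the model abstain on low-confidence roofs, trading coverage for precision.

```python
def precision_at_threshold(probs, labels, thresh):
    """Precision of the positive class, counting only predictions
    whose confidence max(p, 1 - p) meets the threshold."""
    tp = fp = 0
    for p, y in zip(probs, labels):
        if max(p, 1.0 - p) < thresh:
            continue  # abstain on low-confidence predictions
        if p >= 0.5:  # predicted positive
            if y == 1:
                tp += 1
            else:
                fp += 1
    return tp / (tp + fp) if (tp + fp) else None

# Hypothetical classifier outputs: probability of class 1, true label.
probs = [0.95, 0.90, 0.60, 0.55, 0.52, 0.10]
labels = [1, 1, 1, 0, 0, 0]

print(precision_at_threshold(probs, labels, 0.5))  # → 0.6
print(precision_at_threshold(probs, labels, 0.8))  # → 1.0
```

At the higher threshold the two borderline false positives (0.55, 0.52) are rejected, so the remaining predictions are all correct; the cost is that the correct but uncertain prediction at 0.60 is also discarded.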

