Incorporating Population-Level Variability in Orthopedic Biomechanical Analysis: A Review

2014 ◽  
Vol 136 (2) ◽  
Author(s):  
Jeffrey E. Bischoff ◽  
Yifei Dai ◽  
Casey Goodlett ◽  
Brad Davis ◽  
Marc Bandi

Effectively addressing population-level variability in orthopedic analyses requires robust data sets that span the target population, and the task can be greatly facilitated by statistical methods for incorporating such data into functional biomechanical models. Data sets continue to be disseminated that include not just anatomical information but also key mechanical data, including tissue or joint stiffness, gait patterns, and other inputs relevant to the analysis of joint function across a range of anatomies and physiologies. Statistical modeling can be used to establish correlations between a variety of structural and functional biometrics rooted in these data and to quantify how these correlations change from health to disease and, finally, to joint reconstruction or other clinical intervention. Principal component analysis provides a basis for effectively and efficiently integrating variability in anatomy, tissue properties, joint kinetics, and kinematics into mechanistic models of joint function. With such models, bioengineers are able to study the effects of variability on biomechanical performance, not just on a patient-specific basis but in a way that may be predictive of a larger patient population. The goal of this paper is to demonstrate the broad use of statistical modeling within orthopedics and to discuss ways to continue to leverage these techniques to improve biomechanical understanding of orthopedic systems across populations.
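The review itself contains no code; as an illustrative sketch of the PCA-based approach it describes (population data matrix, modes of variation, generation of a "virtual subject"), with purely synthetic stand-in data and hypothetical feature names, one might write:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population data: one row per subject, columns mixing
# anatomical measures (e.g., bone dimensions) and functional ones
# (e.g., joint stiffness). Values are synthetic, for illustration only.
n_subjects, n_features = 200, 6
X = rng.normal(size=(n_subjects, n_features)) @ rng.normal(size=(n_features, n_features))

# PCA via SVD of the mean-centred data matrix.
mu = X.mean(axis=0)
Xc = X - mu
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = S**2 / np.sum(S**2)

# A "virtual subject" sampled by perturbing the mean along the first
# two modes by +/- one standard deviation.
sd = S / np.sqrt(n_subjects - 1)
virtual = mu + 1.0 * sd[0] * Vt[0] - 1.0 * sd[1] * Vt[1]

print(var_explained[:2], virtual.shape)
```

Sampling along the retained modes is what lets such models predict beyond the individual specimens in the training set.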

Author(s):  
Sebastiano Caprara ◽  
Fabio Carrillo ◽  
Jess G. Snedeker ◽  
Mazda Farshad ◽  
Marco Senteler

State-of-the-art preoperative biomechanical analysis for the planning of spinal surgery requires not only the generation of three-dimensional patient-specific models but also an accurate biomechanical representation of the vertebral joints. The benefits offered by computational models suitable for such purposes are still outweighed by the time and effort required for their generation, compromising their applicability in a clinical environment. In this work, we aim to ease the integration of computerized methods into patient-specific planning of spinal surgery. We present the first pipeline combining deep learning and finite element methods to allow completely automated generation of models of functional spine units (FSUs) of the lumbar spine for patient-specific FE simulations (FEBio). The pipeline consists of three steps: (a) multiclass segmentation of cropped 3D CT images containing lumbar vertebrae using the DenseVNet network, (b) automatic landmark-based fitting of statistical shape models onto the 3D semantically segmented meshes of the vertebral models, and (c) automatic generation of patient-specific FE models of lumbar segments for the simulation of flexion-extension, lateral bending, and axial rotation movements. The automatic segmentation of FSUs was evaluated against the gold standard (manual segmentation) using 10-fold cross-validation. The obtained Dice coefficient was 93.7% on average, with a mean surface distance of 0.88 mm and a mean Hausdorff distance of 11.16 mm (N = 150). Automatic generation of finite element models to simulate the range of motion (ROM) was successfully performed for five healthy and five pathological FSUs. The results of the simulations were evaluated against the literature and showed comparable ROMs in both healthy and pathological cases, including the alteration of ROM typically observed in severely degenerated FSUs.
The major intent of this work is to automate the creation of anatomically accurate patient-specific models through a single pipeline that allows functional modeling of spinal motion in healthy and pathological FSUs. Our approach reduces manual effort to a minimum, and execution of the entire pipeline, including simulations, takes approximately 2 h. The automation, time efficiency, and robustness of the pipeline represent a first step toward its clinical integration.
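The segmentation metrics quoted above (Dice coefficient, Hausdorff distance) can be sketched as follows; the data here are toy binary masks and point clouds, not the study's CT segmentations:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice coefficient of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two surface point clouds."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# Toy example: two largely overlapping 3D masks.
a = np.zeros((10, 10, 10), dtype=bool); a[2:8, 2:8, 2:8] = True
b = np.zeros((10, 10, 10), dtype=bool); b[3:8, 2:8, 2:8] = True
print(dice(a, b))  # high overlap -> Dice close to 1
```

Dice rewards volumetric overlap, while the Hausdorff distance reports the worst-case surface deviation, which is why the two are usually reported together, as in this study.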


2007 ◽  
Vol 46 (01) ◽  
pp. 38-42 ◽  
Author(s):  
V. Schulz ◽  
I. Nickel ◽  
A. Nömayr ◽  
A. H. Vija ◽  
C. Hocke ◽  
...  

Summary: The aim of this study was to determine the clinical relevance of compensating SPECT data for patient-specific attenuation by the use of CT data simultaneously acquired with SPECT/CT when analyzing the skeletal uptake of polyphosphonates (DPD). Furthermore, the influence of misregistration between SPECT and CT data on uptake ratios was investigated. Methods: Thirty-six data sets from bone SPECTs performed on a hybrid SPECT/CT system were retrospectively analyzed. Using regions of interest (ROIs), raw counts were determined in the fifth lumbar vertebral body, its facet joints, both anterior iliac spines, and the whole transverse slice. ROI measurements were performed in uncorrected (NAC) and attenuation-corrected (AC) images. Furthermore, the ROI measurements were also performed in AC scans in which SPECT and CT images had been misaligned by 1 cm in one dimension beforehand (ACX, ACY, ACZ). Results: After AC, DPD uptake ratios differed significantly from the NAC values in all regions studied, ranging from 32% for the left facet joint to 39% for the vertebral body. AC using misaligned pairs of patient data sets led to a significant change in whole-slice uptake ratios, with differences ranging from 3.5 to 25%. For ACX, the average left-to-right ratio of the facet joints was 8% lower, and that of the superior iliac spines 31% lower, than the values determined for the matched images (p < 0.05). Conclusions: AC significantly affects DPD uptake ratios. Furthermore, misalignment between SPECT and CT may introduce significant errors in quantification, potentially also affecting left-to-right ratios. Therefore, in the clinical evaluation of attenuation-corrected scans, special attention should be given to possible misalignments between SPECT and CT.
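The ROI-based left-to-right ratios central to these results reduce to count sums over masks; a toy sketch (function names hypothetical, the "slice" a uniform dummy image rather than a DPD scan) might look like:

```python
import numpy as np

def roi_counts(image, mask):
    """Sum of counts inside a region of interest."""
    return float(image[mask].sum())

def left_to_right_ratio(image, left_mask, right_mask):
    """Left-to-right uptake ratio, the quantity compared between
    matched and deliberately misaligned attenuation-corrected scans."""
    return roi_counts(image, left_mask) / roi_counts(image, right_mask)

# Toy 2D "slice" with uniform uptake: the ratio should be exactly 1.
img = np.ones((64, 64))
left = np.zeros_like(img, dtype=bool);  left[20:30, 10:20] = True
right = np.zeros_like(img, dtype=bool); right[20:30, 44:54] = True
print(left_to_right_ratio(img, left, right))  # 1.0
```

A misaligned attenuation map changes the corrected counts under each mask unevenly, which is how the 8% and 31% shifts in these ratios arise.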


2021 ◽  
Vol 11 (7) ◽  
pp. 592
Author(s):  
Sonja A. G. A. Grothues ◽  
Klaus Radermacher

The native femoral J-Curve is known to be a relevant determinant of knee biomechanics. Similarly, after total knee arthroplasty, the J-Curve of the femoral implant component is reported to have a high impact on knee kinematics. The shape of the native femoral J-Curve has previously been analyzed in 2D; however, knee motion is not planar. In this study, we investigated the J-Curve in 3D by principal component analysis (PCA) and analyzed the resulting mean shapes and modes by geometric parameter analysis. Surface models of 90 cadaveric femora were available: 56 male, 32 female, and two without sex information. After translation to a bone-specific coordinate system, relevant contours of the femoral condyles were derived using virtual rotating cutting planes. For each derived contour, an extremum search was performed, and the extremum points were used to define the 3D J-Curve of each condyle. Afterwards, a PCA and a geometric parameter analysis were performed on the medial and lateral 3D J-Curves. The normalized measures of the mean shapes and the aspects of shape variation of the male and female 3D J-Curves were found to be similar. When female and male J-Curves were considered in a combined analysis, the first mode of the PCA primarily consisted of changes in size, highlighting size differences between female and male femora. Apart from changes in size, variation was found in aspect ratio, arc length, orientation, circularity, and the relative location of the 3D J-Curves. The results of this study are in agreement with those of previous 2D analyses of the shape and shape variation of the femoral J-Curves. The presented 3D analysis highlights new aspects of shape variability, e.g., regarding curvature and relative location in the transverse plane. Finally, the analysis presented may support the design of (patient-specific) femoral implant components for TKA.
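A PCA over 3D curves of this kind treats each curve as one flattened sample vector; the sketch below uses a synthetic template arc with per-specimen scaling (not the study's cadaveric data) to show why, in a combined analysis, the first mode is dominated by size:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the 3D J-Curves: each curve is 50 points in 3D,
# sampled from a template arc and scaled per specimen (size variation).
t = np.linspace(0, np.pi, 50)
template = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)  # (50, 3)
scales = rng.normal(30.0, 3.0, size=90)                       # illustrative sizes
curves = np.array([s * template + rng.normal(0, 0.2, template.shape)
                   for s in scales])                           # (90, 50, 3)

# Flatten each curve to one row and run PCA via SVD.
X = curves.reshape(len(curves), -1)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var = S**2 / (S**2).sum()
print(var[0])  # first mode dominates: it encodes overall size
```

When size variation is much larger than residual shape noise, as here, the leading mode captures it almost entirely, mirroring the male/female size effect reported above.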


2007 ◽  
Vol 56 (6) ◽  
pp. 75-83 ◽  
Author(s):  
X. Flores ◽  
J. Comas ◽  
I.R. Roda ◽  
L. Jiménez ◽  
K.V. Gernaey

The main objective of this paper is to present the application of selected multivariable statistical techniques to the analysis of plant-wide wastewater treatment plant (WWTP) control strategies. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA), and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No. 2 (BSM2). These techniques make it possible (i) to determine natural groups or clusters of control strategies with similar behaviour, (ii) to find and interpret hidden, complex, and causal relationships in the data set, and (iii) to identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both analysis and interpretation of complex multicriteria data sets and allows an improved use of information for effective evaluation of control strategies.
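The three-step workflow (i)-(iii) maps directly onto standard scikit-learn tools; the sketch below runs it on a synthetic stand-in for the BSM2 evaluation matrix, with the two behaviour groups built into the fake data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

# Stand-in for the evaluation matrix: rows = control strategies,
# columns = performance criteria (effluent quality, cost, ...); synthetic.
X = np.vstack([rng.normal(0, 1, (15, 5)),
               rng.normal(4, 1, (15, 5))])   # two behaviour groups by construction

# (i) Cluster analysis: group strategies with similar behaviour.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# (ii) PCA: expose the dominant relationships among the criteria.
scores = PCA(n_components=2).fit_transform(X)

# (iii) Discriminant analysis: which criteria separate the clusters?
lda = LinearDiscriminantAnalysis().fit(X, labels)
print(scores.shape, np.abs(lda.coef_).argmax())  # index of most discriminant criterion
```

In practice the cluster labels come first and the LDA coefficients are then read off to name the criteria that drive the grouping, exactly the interpretation step the paper emphasises.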


Author(s):  
Pradeep Lall ◽  
Tony Thomas

Electronics in automotive underhood environments are used for a number of safety-critical functions. Reliable continued operation of electronic safety systems without catastrophic failure is important for safe operation of the vehicle. There is a need for prognostication methods that can be integrated with on-board sensors for assessment of accrued damage and impending failure. In this paper, lead-free electronic assemblies consisting of daisy-chained parts have been subjected to high-temperature vibration at 5g and 155°C. Spectrograms have been used to identify the emergence of new low-frequency components with damage progression in electronic assemblies. Principal component analysis has been used to reduce the dimensionality of large data sets and identify patterns without the loss of features that signify damage progression and impending failure. The variance of the principal components of the instantaneous frequency has been shown to exhibit an increasing trend during initial damage progression, attaining a maximum value and then decreasing prior to failure. This unique behavior of the instantaneous frequency over the period of vibration can be used as a health-monitoring feature for identifying impending failures in automotive electronics. Further, damage progression has been studied using the Empirical Mode Decomposition (EMD) technique in order to decompose the signals into Intrinsic Mode Functions (IMFs). The IMFs were investigated based on their kurtosis values, and a reconstructed strain signal was formulated from all IMFs with a kurtosis value greater than three. PCA on the reconstructed strain signal gave better patterns that can be used for prognostication of the life of the components.
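The kurtosis-based reconstruction step can be sketched in a few lines; the EMD decomposition itself is assumed to have been done elsewhere (e.g., by a dedicated EMD library), so the toy "IMFs" below are hand-built signals, not real strain data:

```python
import numpy as np
from scipy.stats import kurtosis

def reconstruct_from_imfs(imfs, threshold=3.0):
    """Keep IMFs whose (Pearson) kurtosis exceeds the threshold and sum them,
    mirroring the kurtosis-based reconstruction described above."""
    kept = [imf for imf in imfs if kurtosis(imf, fisher=False) > threshold]
    return np.sum(kept, axis=0) if kept else np.zeros_like(imfs[0])

# Toy IMFs: an impulsive (high-kurtosis) component and a sinusoid (low kurtosis).
t = np.linspace(0, 1, 1000)
impulsive = np.zeros_like(t); impulsive[::100] = 1.0   # spiky -> kurtosis >> 3
sine = np.sin(2 * np.pi * 5 * t)                        # kurtosis ~ 1.5
rec = reconstruct_from_imfs([impulsive, sine])
print(np.allclose(rec, impulsive))  # True: only the impulsive IMF survives
```

A kurtosis threshold of three corresponds to a Gaussian reference, so the filter keeps the impulsive, damage-related content and discards smooth oscillatory components before the final PCA.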


2014 ◽  
Vol 11 (4) ◽  
pp. 597-608
Author(s):  
Dragan Antic ◽  
Miroslav Milovanovic ◽  
Stanisa Peric ◽  
Sasa Nikolic ◽  
Marko Milojkovic

The aim of this paper is to present a method for the selection and preprocessing of neural network input parameters. The purpose of the network is to forecast foreign exchange rates using artificial intelligence. Two data sets are formed for two different economic systems. Each system is represented by six categories with 70 economic parameters, which are used in the analysis. Reduction of these parameters within each category was performed using the principal component analysis method. Component interdependencies were established, relations between them were formed, and the newly formed relations were used to create the input vectors of a neural network. A multilayer feed-forward neural network was formed and trained using batch training. Finally, simulation results are presented, and it is concluded that the input data preparation method is an effective way to preprocess neural network data.
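The per-category reduction step can be sketched as below; the category sizes, sample count, and variance threshold are illustrative placeholders, not the paper's actual 70-parameter setup:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Synthetic stand-in: six economic categories, each with several raw
# parameters observed over 120 samples (values are random, for illustration).
categories = [rng.normal(size=(120, 12)) for _ in range(6)]

# Reduce each category separately, keeping enough components to explain
# 90% of its variance, then concatenate the scores into the network input.
reduced = [PCA(n_components=0.90).fit_transform(C) for C in categories]
inputs = np.hstack(reduced)
print(inputs.shape)  # (samples, total retained components)
```

Reducing within each category, rather than over the pooled 70 parameters, preserves the category structure the paper uses to build relations between components before forming the network's input vectors.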


2020 ◽  
Author(s):  
Sarah C. Brüningk ◽  
Juliane Klatt ◽  
Madlen Stange ◽  
Alfredo Mari ◽  
Myrta Brunner ◽  
...  

Transmission chains within cities provide an important contribution to case burden and economic impact during the ongoing COVID-19 pandemic and should be a major focus of preventive measures to achieve containment. Here, at very high spatio-temporal resolution, we analysed determinants of SARS-CoV-2 transmission in a medium-sized European city. We combined detailed epidemiological, mobility, and socioeconomic data sets with whole genome sequencing during the first SARS-CoV-2 wave. Both phylogenetic clustering and compartmental modelling analyses were performed based on the dominant viral variant (B.1-C15324T; 60% of all cases). We show that transmissions at the city population level are driven by the socioeconomically weaker and highly mobile groups. Simulated vaccination scenarios showed that vaccinating a third of the population at 90% efficacy while prioritising these groups would induce a stronger preventive effect than vaccinating senior population groups exclusively first. Our analysis accounts for both social interaction and mobility on the basis of molecularly related cases, thereby providing high-confidence estimates of the underlying epidemic dynamics that may readily be translatable to other municipal areas.
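The vaccination-prioritisation comparison can be illustrated with a minimal two-group SIR model; every parameter below (contact matrix, group sizes, recovery rate) is invented for illustration and is not fitted to the study's data:

```python
import numpy as np

def sir_two_group(beta, days=150, dt=0.1, vacc_frac=(0.0, 0.0), efficacy=0.9):
    """Minimal two-group SIR model (groups: 'mobile' and 'senior').
    vacc_frac removes a fraction of each group's susceptibles at t=0,
    scaled by vaccine efficacy. Returns cumulative infections."""
    N = np.array([0.5, 0.5])                       # group population fractions
    S = N * (1 - efficacy * np.array(vacc_frac))   # effective susceptibles
    I = np.array([1e-4, 1e-4])
    R = N - S - I
    gamma = 0.1                                    # recovery rate, illustrative
    total_inf = 0.0
    for _ in range(int(days / dt)):
        lam = beta @ (I / N)                       # force of infection per group
        new = lam * S * dt
        S, I, R = S - new, I + new - gamma * I * dt, R + gamma * I * dt
        total_inf += new.sum()
    return total_inf

# The mobile group mixes more (higher contact rates on its row/column).
beta = np.array([[0.35, 0.10],
                 [0.10, 0.15]])

prior_mobile = sir_two_group(beta, vacc_frac=(2/3, 0.0))  # a third of the population,
prior_senior = sir_two_group(beta, vacc_frac=(0.0, 2/3))  # allocated to one group
print(prior_mobile < prior_senior)  # targeting the mobile group prevents more cases
```

Because the highly mobile group sustains most transmission in this toy contact matrix, protecting it suppresses the epidemic more than protecting the other group with the same number of doses, the qualitative effect the study reports.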


1995 ◽  
Vol 7 (3) ◽  
pp. 507-517 ◽  
Author(s):  
Marco Idiart ◽  
Barry Berk ◽  
L. F. Abbott

Model neural networks can perform dimensional reductions of input data sets using correlation-based learning rules to adjust their weights. Simple Hebbian learning rules lead to an optimal reduction at the single unit level but result in highly redundant network representations. More complex rules designed to reduce or remove this redundancy can develop optimal principal component representations, but they are not very compelling from a biological perspective. Neurons in biological networks have restricted receptive fields limiting their access to the input data space. We find that, within this restricted receptive field architecture, simple correlation-based learning rules can produce surprisingly efficient reduced representations. When noise is present, the size of the receptive fields can be optimally tuned to maximize the accuracy of reconstructions of input data from a reduced representation.
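A simple correlation-based rule of the kind discussed here is Oja's rule, which drives a single unit's weights toward the leading principal component of whatever inputs it sees; the sketch below restricts the unit's receptive field to a window of the input space, with synthetic data standing in for any real stimulus set:

```python
import numpy as np

rng = np.random.default_rng(4)

# Input data with one dominant correlated direction.
n_dim = 20
base = rng.normal(size=n_dim)
data = rng.normal(size=(5000, n_dim)) + 3.0 * np.outer(rng.normal(size=5000), base)

# One unit with a restricted receptive field: it sees only dimensions 5..15.
field = slice(5, 15)
w = rng.normal(scale=0.1, size=10)
eta = 1e-4
for x_full in data:
    x = x_full[field]
    y = w @ x
    w += eta * y * (x - y * w)          # Oja's rule: Hebbian term plus decay

# Within its receptive field, the weights align with the leading eigenvector.
C = np.cov(data[:, field], rowvar=False)
v = np.linalg.eigh(C)[1][:, -1]
print(abs(w @ v) / np.linalg.norm(w))   # close to 1 when aligned
```

A population of such units with staggered receptive fields yields the reduced representations the paper analyses, with field size trading off redundancy against noise robustness.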


Author(s):  
Andrew J. Connolly ◽  
Jacob T. VanderPlas ◽  
Alexander Gray ◽  
...  

With the dramatic increase in data available from a new generation of astronomical telescopes and instruments, many analyses must address the question of the complexity as well as size of the data set. This chapter deals with how we can learn which measurements, properties, or combinations thereof carry the most information within a data set. It describes techniques that are related to concepts discussed when describing Gaussian distributions, density estimation, and the concepts of information content. The chapter begins with an exploration of the problems posed by high-dimensional data. It then describes the data sets used in this chapter, and introduces perhaps the most important and widely used dimensionality reduction technique, principal component analysis (PCA). The remainder of the chapter discusses several alternative techniques which address some of the weaknesses of PCA.


Author(s):  
Petr Praus

In this chapter, the principles and applications of principal component analysis (PCA) applied to hydrological data are presented. Four case studies show the ability of PCA to extract information about a wastewater treatment process and drinking water quality in a city network, and to find similarities in data sets of ground water quality results and water-related images. In the first case study, the composition of raw and cleaned wastewater was characterised and its temporal changes were displayed. In the second case study, drinking water samples were divided into clusters consistent with their sampling localities. In case study III, similar samples of ground water were recognised by calculating cosine similarity and the Euclidean and Manhattan distances. In case study IV, 32 water-related images were transformed into a large image matrix whose dimensionality was reduced by PCA; the images were then clustered using the PCA scatter plots.
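The three similarity measures used in case study III are one-liners over sample vectors; the two "samples" below are invented concentration vectors, not the chapter's ground water data:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two sample vectors (1 = identical direction)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def euclidean(a, b):
    """Straight-line distance between two samples."""
    return np.linalg.norm(a - b)

def manhattan(a, b):
    """Sum of absolute coordinate differences."""
    return np.abs(a - b).sum()

# Two hypothetical ground-water samples (e.g., ion concentrations).
s1 = np.array([40.0, 12.0, 3.5, 0.8])
s2 = np.array([38.0, 11.0, 3.0, 0.9])
print(cosine_similarity(s1, s2), euclidean(s1, s2), manhattan(s1, s2))
```

Cosine similarity compares composition profiles regardless of overall magnitude, whereas the Euclidean and Manhattan distances are magnitude-sensitive, which is why comparing all three helps identify genuinely similar samples.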

