Exploring a model-based analysis of patient derived xenograft studies in oncology drug development

PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e10681
Author(s):  
Jake Dickinson ◽  
Marcel de Matas ◽  
Paul A. Dickinson ◽  
Hitesh B. Mistry

Purpose: To assess whether a model-based analysis increases statistical power over an analysis of final-day volumes, and to provide insights into more efficient patient derived xenograft (PDX) study designs.
Methods: Tumour xenograft time-series data were extracted from a public PDX drug-treatment database. For all 2-arm studies the percent tumour growth inhibition (TGI) at days 14, 21 and 28 was calculated. Treatment effect was analysed using an un-paired, two-tailed t-test (empirical) and a model-based analysis with a likelihood ratio test (LRT). In addition, a simulation study was performed to assess the difference in power between the two data-analysis approaches for PDX and standard cell-line derived xenografts (CDX).
Results: The model-based analysis had greater statistical power than the empirical approach within the PDX data-set. The model-based approach was able to detect TGI values as low as 25%, whereas the empirical approach required at least 50% TGI. The simulation study confirmed these findings and highlighted that CDX studies require fewer animals than PDX studies showing the equivalent level of TGI.
Conclusions: The study adds to the growing literature showing that a model-based analysis of xenograft data improves statistical power over the common empirical approach. The model-based approach, based on the first mathematical model of tumour growth, was able to detect a smaller effect size than the empirical approach common to such studies. A model-based analysis should allow studies to reduce animal use and experiment length while providing effective insights into compound anti-tumour activity.
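The two endpoints compared in the abstract can be sketched in a few lines; this is an illustrative reconstruction, not the authors' code, and the model-based LRT step (fitting a growth model to the full time-series) is omitted for brevity. Only the percent-TGI summary and the empirical pooled-variance t statistic are shown:

```python
import math

def percent_tgi(treated, control):
    # Percent tumour growth inhibition at a given day:
    # TGI = 100 * (1 - mean treated volume / mean control volume)
    mt = sum(treated) / len(treated)
    mc = sum(control) / len(control)
    return 100.0 * (1.0 - mt / mc)

def unpaired_t(treated, control):
    # Two-sample t statistic with pooled variance: the "empirical"
    # final-day comparison the abstract contrasts with the LRT.
    n1, n2 = len(treated), len(control)
    m1, m2 = sum(treated) / n1, sum(control) / n2
    ss1 = sum((x - m1) ** 2 for x in treated)
    ss2 = sum((x - m2) ** 2 for x in control)
    sp2 = (ss1 + ss2) / (n1 + n2 - 2)  # pooled variance
    return (m2 - m1) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
```

For example, final-day volumes of [50, 60, 40, 50] (treated) versus [100, 110, 90, 100] (control) give 50% TGI.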

2019 ◽  
Author(s):  
Jake Dickinson ◽  
Marcel de Matas ◽  
Paul A Dickinson ◽  
Hitesh Mistry

Abstract
Background: Preclinical oncology drug development is heavily reliant on xenograft studies to assess the anti-tumour effect of new compounds. Patient derived xenografts (PDX) have become popular as they may better represent the clinical disease; however, variability is greater than in cell-line derived xenografts. The typical approach to analysing these studies involves performing an un-paired t-test on the mean tumour volumes of the treated and control groups at the end of the study. This approach ignores the time-series and may lead to false conclusions, especially given the increased variability of PDX studies.
Aim: To test the hypothesis that a model-based analysis provides greater power than an analysis of final-day volumes, and to provide insights into more efficient PDX study designs.
Methods: Tumour xenograft time-series data were extracted from a large publicly available PDX drug-treatment database released by Novartis. For all 2-arm studies the percent tumour growth inhibition (TGI) at two time-points, day 10 and day 14, was calculated. For each study, the effect of treatment was assessed using an un-paired t-test and a model-based analysis using the likelihood ratio test. In addition, a simulation study was performed to assess the difference in power between the two data-analysis approaches for different levels of TGI in PDX or standard cell-line derived xenografts (CDX).
Results: The model-based analysis had greater statistical power than the un-paired t-test approach within the PDX data-set. The model-based approach was able to detect TGI values as low as 25 percent, whereas the un-paired t-test approach required at least 50 percent TGI. These findings were confirmed by the simulation study, which also highlighted that CDX studies require fewer animals than PDX studies showing the equivalent level of TGI.
Conclusion: The analysis of 59 2-arm PDX studies highlighted that a model-based approach gave greater statistical power than simply performing an un-paired t-test on the final study day. Importantly, the model-based approach was able to detect a smaller effect size than the un-paired t-test approach common to such studies. These findings were confirmed within simulated studies, which also highlighted that the sample sizes used for CDX studies would lead to inadequately powered PDX studies. Application of a model-based analysis should allow studies to use fewer animals and run experiments for a shorter period, thus providing effective insight into compound anti-tumour activity.


Author(s):  
Haji A. Haji ◽  
Kusman Sadik ◽  
Agus Mohamad Soleh

A simulation study is used when real-world data are hard to find or time-consuming to gather; it involves generating data sets from a specific statistical model or by random sampling. Simulating the process is useful for testing theories and understanding the behaviour of statistical methods. This study aimed to compare ARIMA and Fuzzy Time Series (FTS) models in order to identify the best model for forecasting time-series data, based on 100 replicates of data generated from an ARIMA(1,0,1) model. Sixteen scenarios were used, combining 4 error-variance values for data generation (0.5, 1, 3, 5) with 4 ARMA(1,1) parameter values. Performance was evaluated using three metrics, mean absolute percentage error (MAPE), root mean squared error (RMSE) and bias, to determine the more appropriate method and model performance. The results show the lowest bias for the Chen fuzzy time series model, whose error measures were smaller than those of the other models. The results also showed that the Chen method is competitive with advanced forecasting techniques, providing better forecasting accuracy in all of the considered situations.
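The simulation design described above can be sketched as follows; this is a minimal illustration of the data-generating process and the three evaluation metrics, not the authors' code, and the ARIMA/FTS fitting step is omitted:

```python
import math
import random

def simulate_arma11(n, phi, theta, sigma, seed=0):
    # Generate one ARMA(1,1) (i.e. ARIMA(1,0,1)) series:
    # y_t = phi * y_{t-1} + e_t + theta * e_{t-1},  e_t ~ N(0, sigma^2)
    rng = random.Random(seed)
    y, y_prev, e_prev = [], 0.0, 0.0
    for _ in range(n):
        e = rng.gauss(0, sigma)
        y_t = phi * y_prev + e + theta * e_prev
        y.append(y_t)
        y_prev, e_prev = y_t, e
    return y

def mape(actual, forecast):
    # Mean absolute percentage error (actual values must be non-zero)
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    # Root mean squared error
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def bias(actual, forecast):
    # Mean forecast error; positive values indicate over-forecasting
    return sum(f - a for a, f in zip(actual, forecast)) / len(actual)
```

Each of the 16 scenarios would call `simulate_arma11` 100 times with its (phi, theta, sigma) combination, fit both candidate models to each replicate, and average the three metrics over replicates.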


Author(s):  
Iris K. Minichmayr ◽  
Mats O. Karlsson ◽  
Siv Jönsson

Abstract
Purpose: Pharmacometric models provide useful tools to aid the rational design of clinical trials. This study evaluates study design-, drug-, and patient-related features as well as analysis methods for their influence on the power to demonstrate a benefit of pharmacogenomics (PGx)-based dosing regarding myelotoxicity.
Methods: Two pharmacokinetic models and one myelosuppression model were assembled to predict concentrations of irinotecan and its metabolite SN-38 given different UGT1A1 genotypes (poor metabolizers: CLSN-38: -36%) and neutropenia following conventional versus PGx-based dosing (350 versus 245 mg/m2 (-30%)). Study power was assessed for diverse scenarios (n = 50–400 patients/arm, parallel/crossover design, varying magnitude of CLSN-38 reduction, exposure-response relationship, inter-individual variability), using model-based data analysis versus conventional statistical testing.
Results: The magnitude of CLSN-38 reduction in poor metabolizers and the myelosuppressive potency of SN-38 markedly influenced the power to show a difference in grade 4 neutropenia (<0.5·109 cells/L) after PGx-based versus standard dosing. To achieve >80% power with traditional statistical analysis (χ2/McNemar’s test, α = 0.05), 220/100 patients per treatment arm/sequence (parallel/crossover study) were required. The model-based analysis resulted in considerably smaller total sample sizes (n = 100/15 for the parallel/crossover design) to obtain the same statistical power.
Conclusions: The presented findings may help to avoid unfeasible trials and to rationalize the design of pharmacogenetic studies.
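The kind of power assessment described for the parallel design can be sketched by Monte-Carlo simulation; this is a simplified stand-in (a two-proportion z-test in place of the χ2 test, with hypothetical neutropenia rates), not the authors' model-based workflow:

```python
import math
import random

def simulate_power(p_std, p_pgx, n_per_arm, n_sim=2000, seed=7):
    # Monte-Carlo power for comparing grade-4 neutropenia rates between a
    # standard-dosing arm and a PGx-dosing arm, via a two-proportion z-test.
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided 5% critical value
    hits = 0
    for _ in range(n_sim):
        x1 = sum(rng.random() < p_std for _ in range(n_per_arm))
        x2 = sum(rng.random() < p_pgx for _ in range(n_per_arm))
        p1, p2 = x1 / n_per_arm, x2 / n_per_arm
        p_pool = (x1 + x2) / (2 * n_per_arm)
        se = math.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
        if se > 0 and abs(p1 - p2) / se > z_crit:
            hits += 1
    return hits / n_sim
```

Sweeping `n_per_arm` upward until the returned power exceeds 0.8 reproduces the sample-size search sketched in the abstract; the event rates here are illustrative placeholders, not values from the study.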


2020 ◽  
Vol 39 (5) ◽  
pp. 6419-6430
Author(s):  
Dusan Marcek

To forecast time-series data, two methodological frameworks are considered: statistical and computational-intelligence modelling. The statistical approach is based on the theory of invertible ARIMA (Auto-Regressive Integrated Moving Average) models with Maximum Likelihood (ML) estimation. As a competitive tool to statistical forecasting models, we use the popular classic neural network (NN) of perceptron type. To train the NN, the Back-Propagation (BP) algorithm and heuristics such as the genetic and micro-genetic algorithms (GA and MGA) are implemented on a large data set. A comparative analysis of the selected learning methods is performed and evaluated. The experiments indicate that the optimal population size is likely 20, giving the lowest training time of all NNs trained by the evolutionary algorithms, while the prediction accuracy is lower but still acceptable to managers.
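The evolutionary training idea can be illustrated with a toy micro-genetic search; this is a minimal sketch under assumed settings (a linear single-neuron predictor on a synthetic series, population size 20 as in the abstract's finding), not the paper's network or data:

```python
import random

def make_series(n=200, seed=1):
    # Toy autoregressive series the predictor should learn:
    # y_t = 0.6*y_{t-1} - 0.3*y_{t-2} + Gaussian noise (hypothetical example)
    rng = random.Random(seed)
    y = [0.0, 0.0]
    for _ in range(n):
        y.append(0.6 * y[-1] - 0.3 * y[-2] + rng.gauss(0, 0.1))
    return y

def fitness_mse(weights, series):
    # One-step-ahead mean squared error of a linear "perceptron" on two lags
    w1, w2, b = weights
    errs = [(w1 * series[t - 1] + w2 * series[t - 2] + b - series[t]) ** 2
            for t in range(2, len(series))]
    return sum(errs) / len(errs)

def micro_ga(series, pop_size=20, generations=50, seed=2):
    # Micro-genetic search over the three weights: keep the better half
    # each generation (elitism) and refill with Gaussian-mutated copies.
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda w: fitness_mse(w, series))
        elite = scored[: pop_size // 2]
        children = [[g + rng.gauss(0, 0.05) for g in parent] for parent in elite]
        pop = elite + children
    return min(pop, key=lambda w: fitness_mse(w, series))
```

Because the elite half is carried over unchanged, the best fitness is monotone non-increasing across generations; this elitism is what lets very small (micro) populations remain effective.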

