Use of a Bioinformatics-Based Toxicity Scoring System to Assess Serotonin Burden and Predict Population-Level Adverse Drug Events from Concomitant Serotonergic Drug Therapy

2019 ◽  
Vol 39 (2) ◽  
pp. 171-181 ◽  
Author(s):  
Vaughn L. Culbertson ◽  
Shaikh Emdadur Rahman ◽  
Grayson C. Bosen ◽  
Matthew L. Caylor ◽  
Dong Xu
Cancers ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 2136
Author(s):  
Daniel Lin ◽  
Shalini Moningi ◽  
Joseph Abi Jaoude ◽  
Ben S. Singh ◽  
Irina M. Cazacu ◽  
...  

We developed and implemented an objective toxicity scoring system for use during endoscopic evaluation of the upper gastrointestinal (GI) tract, in order to directly assess changes in toxicity during the radiation treatment of pancreatic cancer. We assessed and validated upper GI toxicity in 19 locally advanced pancreatic cancer trial patients undergoing stereotactic body radiation therapy (SBRT). Wilcoxon signed-rank tests were used to compare pre- and post-SBRT scores. Comparison of the toxicity scores measured before and after SBRT revealed a mild increase in toxicity in the stomach and duodenum (p < 0.005), with no cases of severe toxicity observed. Kappa and AC1 statistics were used to evaluate interobserver agreement. Our toxicity scoring system was reliable in determining GI toxicity, with good overall interobserver agreement for pre-treatment scores (stomach, κ = 0.71, p < 0.005; duodenum, κ = 0.88, p < 0.005) and post-treatment scores (stomach, κ = 0.71, p < 0.005; duodenum, κ = 0.76, p < 0.005). The AC1 statistics yielded similar results. We hope this scoring system will prove a useful tool for objectively and reliably assessing changes in GI toxicity during the treatment of pancreatic cancer and for GI toxicity assessments and comparisons in radiation therapy research trials.
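The two statistical procedures named in the abstract can be sketched in a few lines. The scores below are synthetic stand-ins, not the trial data, and the kappa implementation is a generic one:

```python
# Illustrative sketch (synthetic data): a Wilcoxon signed-rank test on paired
# pre-/post-SBRT toxicity scores, and Cohen's kappa for agreement between two
# observers scoring the same endoscopies.
from collections import Counter
from scipy.stats import wilcoxon

# Hypothetical paired toxicity scores for 19 patients (0 = none, higher = worse)
pre_sbrt  = [0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
post_sbrt = [1, 1, 1, 2, 1, 1, 0, 2, 1, 1, 2, 1, 1, 0, 1, 2, 1, 1, 1]
stat, p = wilcoxon(pre_sbrt, post_sbrt)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in ca) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical scores from two observers for the same ten endoscopies
observer_a = [0, 1, 1, 2, 0, 1, 2, 0, 1, 1]
observer_b = [0, 1, 1, 2, 1, 1, 2, 0, 1, 2]
kappa = cohens_kappa(observer_a, observer_b)
print(f"Wilcoxon p = {p:.4f}, Cohen's kappa = {kappa:.2f}")
```

With these made-up scores the signed-rank test detects the systematic post-SBRT increase and the kappa lands in the "good agreement" range, mirroring the kind of result the abstract reports.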


2014 ◽  
Vol 32 (3_suppl) ◽  
pp. 420-420
Author(s):  
Gillian Gresham ◽  
Jasleen Sidhu ◽  
Navraj Malhi ◽  
Winson Y. Cheung

420 Background: Baseline demographics and clinical factors may contribute to a pt’s overall risk for developing chemotherapy-related GI toxicity, such as nausea, vomiting, and diarrhea. We aimed to develop a GI toxicity scoring system to better stratify early CC pts who may be at higher risk of developing GI toxicity from adjuvant FOLFOX chemotherapy. Methods: Pts diagnosed with early CC from 2005 to 2008 and treated with FOLFOX at the British Columbia Cancer Agency were reviewed. GI toxicities of interest included (1) nausea/vomiting, (2) diarrhea, and (3) any GI side effects. Baseline variables that were analyzed consisted of age, sex, ECOG, time to adjuvant chemotherapy (TTAC), and laboratory parameters. Stepwise regression was used to develop a multivariate model for each toxicity, and a weighted risk scoring system was subsequently devised based on the magnitude of the parameter estimates in the multivariate model. Results: In total, 475 pts were included: median age was 62 years (range 26-89), 16.2% were aged >70 years, and 54.5% were men. The majority (90.1%) were ECOG 0/1. Independent predictors for nausea/vomiting included age >70 years (OR 2.46, 95% CI 1.3-4.8, p=0.011), GFR <50 (OR 1.68, 95% CI 1.1-2.7, p=0.025), and TTAC >8 weeks (OR 1.33, 95% CI 0.9-2.1, p=0.14), whereas independent predictors for diarrhea included age >70 years (OR 1.44, 95% CI 0.79-2.62, p=0.12) and GFR <50 (OR 1.67, 95% CI 1.1-2.6, p=0.018). The multivariate model for the risk of any GI toxicity included age >70 years (OR 2.61, 95% CI 1.1-6.1, p=0.049), GFR <50 (OR 1.69, 95% CI 1.1-2.6, p=0.0038), and TTAC >8 weeks (OR 1.79, 95% CI 1.2-2.7, p=0.0059). Points were assigned: 2 points for long TTAC and poor GFR and 1 point for advanced age. Pts were classified into risk groups based on their scores (Table). 
Conclusions: We developed a simple 5-point scoring system to stratify early CC pts receiving adjuvant FOLFOX into low and high risk groups for GI toxicity based on baseline clinical factors. Further validation of this scoring system is required. [Table: see text]
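The weighted score described in the abstract can be sketched directly; the thresholds come from the abstract, while the function name and interface are hypothetical:

```python
# Hypothetical re-implementation of the abstract's 5-point GI toxicity risk
# score for adjuvant FOLFOX: 2 points for TTAC > 8 weeks, 2 points for
# GFR < 50, and 1 point for age > 70 years.
def gi_toxicity_score(age_years: int, gfr: float, ttac_weeks: float) -> int:
    """Weighted baseline risk score (0-5) for GI toxicity on adjuvant FOLFOX."""
    score = 0
    if ttac_weeks > 8:   # delayed time to adjuvant chemotherapy
        score += 2
    if gfr < 50:         # impaired renal function
        score += 2
    if age_years > 70:   # advanced age
        score += 1
    return score

print(gi_toxicity_score(age_years=75, gfr=45, ttac_weeks=10))  # all three risk factors -> 5
print(gi_toxicity_score(age_years=62, gfr=80, ttac_weeks=6))   # no risk factors -> 0
```

The cut-off separating "low" from "high" risk groups is in the abstract's table, which is not reproduced here, so only the raw score is computed.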


2018 ◽  
Author(s):  
Jérémie Scire ◽  
Nathanaël Hozé ◽  
Hildegard Uecker

Abstract

Antimicrobial resistance is one of the major public health threats of the 21st century. There is a pressing need to adopt more efficient treatment strategies in order to prevent the emergence and spread of resistant strains. The common approach is to treat patients with high drug doses, both to clear the infection quickly and to reduce the risk of de novo resistance. Recently, several studies have argued that, at least in some cases, low-dose treatments could be more suitable to reduce the within-host emergence of antimicrobial resistance. However, the choice of a drug dose may have consequences at the population level, which has received little attention so far.

Here, we study the influence of the drug dose on resistance and disease management at the host and population levels. We develop a nested two-strain model and unravel trade-offs in treatment benefits between an individual and the community. We use several measures to evaluate the benefits of any dose choice. Two measures focus on the emergence of resistance, at the host level and at the population level. The other two focus on overall treatment success: the outbreak probability and the disease burden. We find that different measures can suggest different dosing strategies. In particular, we identify situations where low doses minimize the risk of emergence of resistance at the individual level, while high or intermediate doses prove most beneficial to improve treatment efficiency or even to reduce the risk of resistance in the population.

Author summary

The obvious goals of antimicrobial drug therapy are rapid patient recovery and low disease prevalence in the population. However, achieving these goals is complicated by the rapid evolution and spread of antimicrobial resistance. A sustainable treatment strategy needs to account for the risk of resistance and keep it in check. One parameter of treatment is the drug dosage, which can vary within certain limits. 
It has been proposed that lower doses may, in some cases, be more suitable than higher doses to reduce the risk of resistance evolution in any one patient. However, if lower doses prolong the period of infectiousness, such a strategy has consequences for the pathogen dynamics of both strains at the population level. Here, we set up a nested model of within-host and between-host dynamics for an acute self-limiting infection. We explore the consequences of drug dosing on several measures of treatment success: the risk of resistance at the individual and population levels and the outbreak probability and the disease burden of an epidemic. Our analysis shows that trade-offs may exist between optimal treatments under these various criteria. The criterion given most weight in the decision process ultimately depends on the disease and population under consideration.
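As a rough illustration of the kind of two-strain dynamics such models track, here is a plain two-strain SIR sketch with made-up parameters; it is not the authors' nested within-/between-host model, but it shows how dose-dependent recovery and de novo resistance enter the population-level equations:

```python
# Minimal two-strain SIR sketch. The drug dose enters only through the
# sensitive strain's recovery rate (treatment clears sensitive infections
# faster) and a small rate of within-host emergence of resistance.
# All parameter values are illustrative.
import numpy as np
from scipy.integrate import odeint

beta = 0.3      # transmission rate (assumed equal for both strains)
gamma_s = 0.2   # recovery rate of sensitive infections (rises with dose)
gamma_r = 0.1   # recovery rate of resistant infections (drug ineffective)
mu = 0.01       # per-capita rate of de novo resistance during treatment

def two_strain_sir(y, t):
    S, Is, Ir = y  # susceptible, sensitive-infected, resistant-infected
    dS  = -beta * S * (Is + Ir)
    dIs =  beta * S * Is - gamma_s * Is - mu * Is
    dIr =  beta * S * Ir - gamma_r * Ir + mu * Is
    return [dS, dIs, dIr]

t = np.linspace(0, 200, 2001)
S, Is, Ir = odeint(two_strain_sir, [0.99, 0.01, 0.0], t).T
print(f"final susceptible fraction: {S[-1]:.3f}")
print(f"peak resistant prevalence:  {Ir.max():.3f}")
```

With these parameters the resistant strain, seeded only by within-host emergence, eventually dominates the outbreak; raising `gamma_s` (a higher dose) clears sensitive infections faster but also shifts transmission toward the resistant strain, which is the population-level trade-off the abstract describes.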


2019 ◽  
Vol 48 (Supplement_4) ◽  
pp. iv34-iv39
Author(s):  
Renukha Sellappans ◽  
Anand Prakash ◽  
Ahlam Sundus

Abstract Polypharmacy refers to the number of medications used by a patient and is typically defined as the concurrent use of five or more medications per day, although the definition varies and there is no global consensus. Polypharmacy is an unfortunate consequence of the rapid advancement of medicine. Polypharmacy can be appropriate or problematic. Problematic polypharmacy can arise when medicines are used without good evidence, in which case the risk of harm outweighs the benefit. While there may be a clinical indication for the medications to treat the co-morbidities of a given patient, polypharmacy in older persons is associated with negative outcomes such as falls, adverse drug events and increased healthcare utilisation. This is mainly due to physiological changes in this population, which increase their risk of adverse drug events, as well as the problems associated with remembering, managing and administering medications appropriately. Hence, researchers and clinicians around the world have been actively looking at ways to reduce polypharmacy and optimise drug therapy in older persons to improve clinical, economic and social outcomes. To date, medication review by pharmacists in various settings is the most researched and best documented approach to tackling this issue. Medication review refers to the structured evaluation of a patient’s medication with the aim of reaching agreement with the patient about drug therapy, optimising the impact of medication, and minimising the number of drug-related problems. This session aims to summarise the available evidence on the various types and outcomes of medication review aimed at optimising polypharmacy among older persons around the world, and to provide practical recommendations for tackling this phenomenon.


2022 ◽  
Author(s):  
Zhizhen Zhao ◽  
Ruoqi Liu ◽  
Lei Wang ◽  
Lang Li ◽  
Chi Song ◽  
...  

The identification of associations between drugs and adverse drug events (ADEs) is crucial for drug safety surveillance. An increasing number of studies have revealed that children and seniors are susceptible to ADEs at the population level. However, comprehensive explorations of age-related risks in drug-ADE pairs are still limited. The FDA Adverse Event Reporting System (FAERS) provides individual case reports, which can be used to quantify different age risks. In this study, we developed a statistical computational framework to detect the age groups of patients who are susceptible to particular ADEs after taking specific drugs. We adopted different Chi-squared tests and conducted disproportionality analysis to detect drug-ADE pairs with age differences. We analyzed 4,580,113 drug-ADE pairs in FAERS (2004 to 2018Q3) and identified 2,523 pairs with the highest age risk. Furthermore, we conducted a case study on statin-induced ADEs in children and youth. The code and results are available at https://github.com/Zhizhen-Zhao/Age-Risk-Identification
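The age-stratified disproportionality idea can be sketched as a chi-squared test on a 2x2 table of report counts for one drug-ADE pair. The counts below are invented for illustration; this is not the authors' exact pipeline:

```python
# Sketch of age-stratified disproportionality analysis: for one drug-ADE
# pair, compare the rate of ADE reports between two age groups with a
# chi-squared test, and summarize the effect as a reporting odds ratio (ROR).
from scipy.stats import chi2_contingency

#                reports with ADE   reports without ADE
table = [[120, 880],    # children/youth exposed to the drug
         [ 45, 955]]    # adults exposed to the drug

chi2, p, dof, expected = chi2_contingency(table)

# Reporting odds ratio: odds of the ADE in the young group vs. the adult group
ror = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])
print(f"chi2={chi2:.1f}, p={p:.2e}, ROR={ror:.2f}")
```

A significant chi-squared statistic with ROR well above 1 would flag this hypothetical pair as carrying an elevated risk in children and youth, which is the kind of signal the framework screens for across millions of pairs.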


2021 ◽  
Vol 14 (5) ◽  
pp. 487
Author(s):  
Martina Hahn ◽  
Sibylle C. Roll

Drug interactions are a well-known cause of adverse drug events, and drug interaction databases can help the clinician to recognize and avoid such interactions and their adverse events. However, not every interaction leads to an adverse drug event, because the clinical relevance of drug–drug interactions also depends on the genetic profile of the patient. If inhibitors or inducers of drug-metabolising enzymes (e.g., CYP and UGT) are added to the drug therapy, phenoconversion can occur. This results in a genetic phenotype that does not match the observable phenotype. Drug–drug–gene and drug–gene–gene interactions influence the toxicity and/or ineffectiveness of drug therapy. To date, there have been few published studies on the impact of genetic variation on drug–drug interactions. This review discusses the current evidence on drug–drug–gene interactions, as well as drug–gene–gene interactions. Phenoconversion is explained, and methods to calculate the phenotype are described. Clinical recommendations are given regarding the integration of PGx results into the assessment of the relevance of drug interactions in the future.
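A toy sketch of how phenoconversion can be modeled as an adjustment to a genotype-derived activity score. The phenotype cut-offs loosely follow common CYP2D6 activity-score conventions; the inhibitor adjustment factors are simplified assumptions, not clinical guidance:

```python
# Toy model of phenoconversion: a co-administered enzyme inhibitor shifts the
# genotype-predicted metabolizer phenotype toward a slower observed phenotype.
from typing import Optional

def predicted_phenotype(activity_score: float) -> str:
    """Map a CYP activity score onto a metabolizer phenotype (approximate cut-offs)."""
    if activity_score == 0:
        return "poor metabolizer"
    if activity_score < 1.25:
        return "intermediate metabolizer"
    if activity_score <= 2.25:
        return "normal metabolizer"
    return "ultrarapid metabolizer"

def phenoconverted_score(genetic_score: float, inhibitor: Optional[str]) -> float:
    """Adjust the genotype-derived score for a co-administered enzyme inhibitor."""
    # Simplified assumption: a strong inhibitor abolishes enzyme activity,
    # a moderate inhibitor halves it.
    factors = {None: 1.0, "moderate": 0.5, "strong": 0.0}
    return genetic_score * factors[inhibitor]

genetic = 2.0  # e.g., two fully functional alleles
print(predicted_phenotype(genetic))                                  # genotype-predicted
print(predicted_phenotype(phenoconverted_score(genetic, "strong")))  # observed with strong inhibitor
```

Here a genotypic normal metabolizer is phenoconverted to a poor metabolizer by a strong inhibitor, which is exactly the genotype/observed-phenotype mismatch the review describes: a drug-interaction database consulting the genotype alone would miss it.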

