Evaluation of amikacin use and comparison of the models implemented in two Bayesian forecasting software packages to guide dosing

Author(s):  
Alice C. Ryan ◽  
Jane E. Carland ◽  
Robert C. McLeay ◽  
Cindy Lau ◽  
Deborah J.E. Marriott ◽  
...  


1992 ◽  
Vol 31 (01) ◽  
pp. 18-28 ◽  
Author(s):  
C. Combi ◽  
G. Pozzi ◽  
R. Rossi ◽  
F. Pinciroli

Abstract Many clinics are interested in using software packages in daily practice, but the lack of integration between such packages seriously limits their scope. In practice this often entails switching between programs and interrupting the run of an individual program. A multi-tasking approach would not solve this problem, as it would not eliminate the need to input the same data many times, as often occurs when using separate packages. The construction of a Multi-Service Medical Software package (MSx2) is described, which was developed as a practical example of integrating several clinically relevant functions. The package runs on a personal computer in an MS-DOS environment and integrates a time-oriented medical record management unit (TOMRU) for data of ambulatory patients and a drug information management unit (DIMU) covering posology, content, effects, and possible interactions. Of the database configurations allowed by MSx2, a cardiology patient database (MSx2/C) and a hypertensive patient database (MSx2/H) were developed and are described here. The clinical information to be included in the configurations was chosen by discussion and consensus of clinical practitioners. MSx2/C was distributed to several hundred clinical centers during computer courses held to train future users. MSx2 can easily transfer patient data to statistical processing packages.


Mousaion ◽  
2017 ◽  
Vol 34 (3) ◽  
pp. 36-59 ◽  
Author(s):  
Jan R. Maluleka ◽  
Omwoyo B. Onyancha

This study sought to assess the extent of research collaboration in Library and Information Science (LIS) schools in South Africa between 1991 and 2012. Informetric research techniques were used to obtain relevant data for the study. The data were extracted from two EBSCO-hosted databases, namely Library and Information Science Source (LISS) and Library, Information Science and Technology Abstracts (LISTA). The search was limited to scholarly, peer-reviewed articles published between 1991 and 2012, and the data were analysed using the Microsoft Excel ©2010 and UCINET for Windows ©2002 software packages. The findings revealed that research collaboration in LIS schools in South Africa has increased over the past two decades and mainly occurred between colleagues from the same department and institution. Collaborative activities also occurred at other levels, such as inter-institutional and inter-country, although to a limited extent; differences were noticeable when authors were ranked according to different computations of their collaborative contributions; and educator-practitioner collaboration was rare. Several conclusions and recommendations based on the findings are offered in the article.


Author(s):  
Irnawati Irnawati ◽  
Florentinus Dika Octa Riswanto ◽  
Sugeng Riyanto ◽  
Sudibyo Martono ◽  
Abdul Rohman

Several oils have been reported as nutritional sources providing potential benefits for human life. Oil adulteration has become a major issue due to economically motivated attempts to reduce the price of high-cost oils. FTIR spectroscopy combined with principal component analysis (PCA) can be applied in oil authentication studies. Two R software packages, factoextra and FactoMineR, were used to perform PCA on sixteen different oils obtained from markets in Yogyakarta, Indonesia. The results showed that PCA models were successfully generated using these two statistical packages. Individual plots, variable plots, and biplots were presented to visualize the PCA models. It was also confirmed that extra virgin olive oil (EVOO) has chemical characteristics similar to palm oil (PO), as reported in a previous study.
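The PCA workflow this abstract describes can be sketched in Python with scikit-learn; the study itself used the R packages FactoMineR and factoextra, so this is only an illustrative equivalent, and the spectral matrix below is simulated rather than real FTIR data.

```python
# Minimal sketch of the PCA step, assuming a matrix of FTIR absorbance
# spectra (rows = oil samples, columns = wavenumbers). Simulated data;
# scikit-learn stands in for the R packages used in the study.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
spectra = rng.normal(size=(16, 200))  # 16 hypothetical oils, 200 wavenumbers

# Centre and scale each wavenumber, then project onto two components.
scaled = StandardScaler().fit_transform(spectra)
pca = PCA(n_components=2)
scores = pca.fit_transform(scaled)    # coordinates for the "individuals" plot

print(scores.shape)                   # (16, 2)
print(pca.explained_variance_ratio_)  # variance captured per component
```

The `scores` array corresponds to the individual plot mentioned in the abstract; the component loadings (`pca.components_`) would drive the variable plot and biplot.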


2020 ◽  
Vol 98 (Supplement_3) ◽  
pp. 25-25 ◽  
Author(s):  
Austin M Putz ◽  
Patrick Charagu ◽  
Abe Huisman

Abstract Two commonly used population-structure software packages, Structure and Admixture, are freely available for breed authentication. Structure uses a Bayesian approach to model population structure, while Admixture uses a frequentist approach. More recently, an allele frequency method has been updated to use quadratic programming to constrain the coefficients of the multiple linear regression of genotype count (divided by two) on the matrix of allele frequencies for each known breed or line. This constraint forces the coefficients to sum to one and to lie between 0 and 1. The goal of this research was to compare and contrast these three methods in determining breed/line authenticity for each of five genetic lines: Large White, Landrace, a lean Duroc, a meat-quality Duroc, and a Pietrain line. Only animals genotyped on a 50K SNP panel were used in this analysis. Analyses were run five times for Structure and Admixture to check repeatability. The allele frequency method did not need to be repeated because its results remain the same as long as the reference allele frequency matrix stays constant. For Structure, breed composition results were inconsistent across replicates. Due to computational constraints, only 500 animals could be used in each run of Structure; it separated at least one of the maternal lines in three of the five replicates but kept the Duroc lines together as one population. Admixture was very consistent across runs for each animal, but it also failed to separate the two Duroc lines, instead splitting one of the two maternal lines. Finally, the allele frequency method split all five lines correctly and was 100% reproducible as long as the reference allele frequency matrix stayed the same across runs.
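The allele frequency method this abstract describes — a regression of genotype count (divided by two) on the matrix of line allele frequencies, with coefficients constrained to [0, 1] and summing to one — can be illustrated with a minimal Python sketch. The data, the mixture proportions, and the solver choice (SLSQP rather than a dedicated quadratic-programming routine) are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of the constrained-regression breed-composition method:
# simulated allele frequencies for 5 lines and one crossbred animal.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_snps, n_lines = 500, 5
F = rng.uniform(0.05, 0.95, size=(n_snps, n_lines))  # per-line allele freqs

true_mix = np.array([0.7, 0.3, 0.0, 0.0, 0.0])       # hypothetical animal
p = F @ true_mix
genotype = rng.binomial(2, p) / 2.0                  # genotype count / 2

def sse(b):
    # Sum of squared residuals of the regression described in the abstract.
    r = genotype - F @ b
    return r @ r

res = minimize(
    sse,
    x0=np.full(n_lines, 1.0 / n_lines),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n_lines,                       # 0 <= b_i <= 1
    constraints={"type": "eq", "fun": lambda b: b.sum() - 1.0},  # sum to 1
)
breed_comp = res.x
print(np.round(breed_comp, 2))  # estimated line proportions
```

Because the objective is deterministic given the reference frequency matrix `F`, repeated runs return identical estimates, which is the reproducibility property the abstract highlights.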


1977 ◽  
Vol 11 (3) ◽  
pp. 1-117 ◽  
Author(s):  
Computer Graphics staff


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Julia Mang ◽  
Helmut Küchenhoff ◽  
Sabine Meinck ◽  
Manfred Prenzel

Abstract Background Standard methods for analysing data from large-scale assessments (LSA) cannot simply be adopted when hierarchical (or multilevel) regression modelling is to be applied. Various approaches currently exist; they generally follow a design-based model of estimation using the pseudo maximum likelihood method and adjusted weights for the corresponding hierarchies. Specifically, several different approaches to using and scaling sampling weights in hierarchical models are promoted, yet no study has compared them to provide evidence of which method performs best and should therefore be preferred. Furthermore, different software programs implement different estimation algorithms, leading to different results. Objective and method In this study, we use a simulation to determine the estimation procedure showing the smallest distortion of the actual population features. We consider different estimation, optimization and acceleration methods, and different approaches to using sampling weights. Three scenarios were simulated using the statistical program R. The analyses were performed with two software packages for hierarchical modelling of LSA data, namely Mplus and SAS. Results and conclusions The simulation results revealed three weighting approaches that performed best in retrieving the true population parameters. One of them uses only level-two weights (here: final school weights) and, because of its simple implementation, is the most favourable one. This finding should provide a clear recommendation to researchers on using weights in multilevel modelling (MLM) when analysing LSA data, or data with a similar structure. Further, we found only small differences in the performance and default settings of the software programs used, with the software package Mplus providing slightly more precise estimates. Different algorithm starting settings or different acceleration methods for optimization could cause these distinctions. However, it should be emphasized that with the recommended weighting approach, both software packages perform equally well. Finally, two scaling techniques for student weights were investigated; both provide nearly identical results. We use data from the Programme for International Student Assessment (PISA) 2015 to illustrate the practical importance and relevance of weighting in analysing large-scale assessment data with hierarchical models.
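The two within-cluster scaling techniques for student weights that this abstract refers to are commonly implemented as follows; the sketch below uses made-up weights for one school (not the authors' code) and shows the two standard variants: scaling to the cluster sample size and scaling to the effective sample size.

```python
# Hedged illustration of two common scalings of level-1 (student)
# sampling weights within one cluster (school) for multilevel models.
import numpy as np

w = np.array([1.2, 0.8, 1.5, 2.0, 1.0])  # raw student weights, one school
n = len(w)

# Variant 1: scaled weights sum to the cluster sample size n.
w_size = w * n / w.sum()

# Variant 2: scaled weights sum to the effective sample size,
# n_eff = (sum w)^2 / sum(w^2).
n_eff = w.sum() ** 2 / (w ** 2).sum()
w_eff = w * n_eff / w.sum()

print(round(float(w_size.sum()), 6))  # 5.0
print(round(float(w_eff.sum()), 4), round(float(n_eff), 4))
```

Both variants preserve the relative weights within a school and differ only in the total; that is consistent with the abstract's finding that the two scalings give nearly identical results.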


Author(s):  
John Zobolas ◽  
Vasundra Touré ◽  
Martin Kuiper ◽  
Steven Vercruysse

Abstract Summary We present a set of software packages that provide uniform access to diverse biological vocabulary resources that are instrumental for current biocuration efforts and tools. The Unified Biological Dictionaries (UniBioDicts or UBDs) provide a single query-interface for accessing the online API services of leading biological data providers. Given a search string, UBDs return a list of matching term, identifier and metadata units from databases (e.g. UniProt), controlled vocabularies (e.g. PSI-MI) and ontologies (e.g. GO, via BioPortal). This functionality can be connected to input fields (user-interface components) that offer autocomplete lookup for these dictionaries. UBDs create a unified gateway for accessing life science concepts, helping curators find annotation terms across resources (based on descriptive metadata and unambiguous identifiers), and helping data users search and retrieve the right query terms. Availability and implementation The UBDs are available through npm and the code is available in the GitHub organisation UniBioDicts (https://github.com/UniBioDicts) under the Affero GPL license. Supplementary information Supplementary data are available at Bioinformatics online.

