flexible modeling
Recently Published Documents

TOTAL DOCUMENTS: 174 (five years: 14)
H-INDEX: 17 (five years: 1)

Author(s): Hong Zhu, Qinglin Sun, Jin Tao, Zengqiang Chen, Matthias Dehmer, et al.

2021
Author(s): John Harlim, Shixiao Willing Jiang, Hwanwoo Kim, Daniel Sanz-Alonso

Abstract: This paper develops manifold learning techniques for the numerical solution of PDE-constrained Bayesian inverse problems on manifolds with boundaries. We introduce graphical Matérn-type Gaussian field priors that enable flexible modeling near the boundaries, representing boundary values by superposition of harmonic functions with appropriate Dirichlet boundary conditions. We also investigate the graph-based approximation of forward models from PDE parameters to observed quantities. In the construction of graph-based prior and forward models, we leverage the ghost point diffusion map algorithm to approximate second-order elliptic operators with classical boundary conditions. Numerical results validate our graph-based approach and demonstrate the need to design prior covariance models that account for boundary conditions.
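The graph-based Matérn-type prior can be illustrated with a minimal sketch: build the graph Laplacian of a weight matrix and take a shifted, inverse power of it as the prior covariance. Everything here (the function name, the tiny path graph, the parameter values) is illustrative, and the paper's boundary treatment via the ghost point diffusion map is omitted.

```python
import numpy as np

def graph_matern_covariance(W, tau=1.0, s=2):
    """Matern-type covariance on a graph: C = (tau^2 I + L)^(-s),
    where L is the unnormalized graph Laplacian of weight matrix W."""
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    A = tau**2 * np.eye(len(W)) + L         # shifted Laplacian
    return np.linalg.matrix_power(np.linalg.inv(A), s)

# Path graph on 4 nodes (the end nodes play the role of a boundary).
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0
C = graph_matern_covariance(W)

# A draw from the prior: x ~ N(0, C)
rng = np.random.default_rng(0)
x = rng.multivariate_normal(np.zeros(4), C)
```

Larger `s` yields smoother samples; in the paper's setting the Laplacian would be built from points sampled on the manifold rather than a hand-written path graph.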


Stats, 2021, Vol 4 (3), pp. 616–633
Author(s): Ejike R. Ugba, Daniel Mörlein, Jan Gertheiss

The so-called proportional odds assumption is popular in cumulative ordinal regression. In practice, however, such an assumption is sometimes too restrictive. For instance, when modeling the perception of boar taint on an individual level, it turns out that, at least for some subjects, the effects of predictors (androstenone and skatole) vary between response categories. For more flexible modeling, we consider the use of a ‘smooth-effects-on-response penalty’ (SERP) as a connecting link between proportional and fully non-proportional odds models, assuming that parameters of the latter vary smoothly over response categories. The usefulness of SERP is further demonstrated through a simulation study. Besides flexible and accurate modeling, SERP also enables fitting of parameters in cases where the pure, unpenalized non-proportional odds model fails to converge.
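The smoothing idea behind SERP can be sketched as a penalty on differences between category-specific coefficients; the names and numbers below are illustrative, and the actual method embeds such a penalty in the cumulative-model likelihood rather than evaluating it in isolation.

```python
import numpy as np

def serp_penalty(beta, lam):
    """Smoothness penalty on category-specific coefficients.
    beta: (J-1, K) array, one row of slopes per cumulative logit.
    Penalizes squared differences between adjacent categories, so
    lam -> infinity shrinks toward the proportional odds model."""
    diffs = np.diff(beta, axis=0)           # beta_{j+1} - beta_j
    return lam * np.sum(diffs**2)

beta_prop = np.tile([0.5, -1.2], (3, 1))    # identical rows: proportional odds
beta_free = np.array([[0.2, -1.0], [0.5, -1.2], [0.9, -1.5]])

print(serp_penalty(beta_prop, lam=10.0))    # zero: no shrinkage needed
print(serp_penalty(beta_free, lam=10.0))    # positive: varying effects penalized
```

As `lam` grows, maximizing the penalized likelihood forces the rows of `beta` toward equality, which is exactly the proportional odds limit described above.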


2021, Vol 8
Author(s): Samuel Spaulding, Jocelyn Shen, Hae Won Park, Cynthia Breazeal

Across a wide variety of domains, artificial agents that can adapt and personalize to users have potential to improve and transform how social services are provided. Because of the need for personalized interaction data to drive this process, long-term (or longitudinal) interactions between users and agents, which unfold over a series of distinct interaction sessions, have attracted substantial research interest. In recognition of the expanded scope and structure of a long-term interaction, researchers are also adjusting the personalization models and algorithms used, orienting toward “continual learning” methods, which do not assume a stationary modeling target and explicitly account for the temporal context of training data. In parallel, researchers have also studied the effect of “multitask personalization,” an approach in which an agent interacts with users over multiple different task contexts throughout the course of a long-term interaction and learns personalized models of a user that are transferable across these tasks. In this paper, we unite these two paradigms under the framework of “Lifelong Personalization,” analyzing the effect of multitask personalization applied to dynamic, non-stationary targets. We extend the multitask personalization approach to the more complex and realistic scenario of modeling dynamic learners over time, focusing in particular on interactive scenarios in which the modeling agent plays an active role in teaching the student whose knowledge the agent is simultaneously attempting to model. Inspired by the way in which agents use active learning to select new training data based on domain context, we augment a Gaussian Process-based multitask personalization model with a mechanism to actively and continually manage its own training data, allowing a modeling agent to remove or reduce the weight of observed data from its training set, based on interactive context cues.
We evaluate this method in a series of simulation experiments comparing different approaches to continual and multitask learning on simulated student data. We expect this method to substantially improve learning in Gaussian Process models in dynamic domains, establishing Gaussian Processes as another flexible modeling tool for long-term human-robot interaction (HRI) studies.
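One simple way for a Gaussian Process model to downweight or effectively remove stale observations, as the active data-management mechanism described above requires, is to inflate the noise variance of individual training points. This is an illustrative sketch with a squared-exponential kernel and hypothetical parameter values, not the authors' implementation.

```python
import numpy as np

def gp_predict(X, y, Xs, weights, ls=1.0, sigma_f=1.0, sigma_n=0.1):
    """GP regression with per-point weights: each point's noise variance is
    scaled by 1/weight, so a weight near zero effectively removes it."""
    def k(A, B):
        d2 = (A[:, None] - B[None, :])**2
        return sigma_f**2 * np.exp(-0.5 * d2 / ls**2)
    K = k(X, X) + np.diag(sigma_n**2 / np.maximum(weights, 1e-9))
    alpha = np.linalg.solve(K, y)
    return k(Xs, X) @ alpha

X = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 0.0])
Xs = np.array([1.0])

full = gp_predict(X, y, Xs, weights=np.ones(3))
downweighted = gp_predict(X, y, Xs, weights=np.array([1.0, 1e-6, 1.0]))
# Downweighting the middle observation pulls the prediction at x=1
# toward what the neighboring observations alone support.
```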


Author(s): Alexander Bajic, Georg T. Becker

Abstract: With the number of exploitable vulnerabilities and attacks on networks constantly increasing, it is important to employ defensive techniques to protect one's systems. A wide range of defenses is available, and new paradigms such as Moving Target Defense (MTD) are rising in popularity. But to make informed decisions on which defenses to implement, it is necessary to evaluate their effectiveness first. In many cases, the full impact these techniques have on security is not yet well understood. In this paper we propose network defense evaluation based on detailed attack simulation. Using a flexible modeling language, networks, attacks, and defenses are described in high detail, yielding a fine-grained scenario definition. Based on this, an automated instantiator generates a wide range of realistic benchmark networks. These serve as the basis for simulations that evaluate the security impact of different defenses, both quantitatively and qualitatively. A case study based on a mid-sized corporate network scenario and different Moving Target Defenses illustrates the usefulness of this approach. Results show that virtual machine migration, a frequently suggested MTD technique, more often degrades than improves security. Hence, we argue that evaluation based on realistic attack simulation is a qualified approach to examine and verify claims of newly proposed defense techniques.


2021, Vol 10 (8), pp. 1657
Author(s): Morgane Mounier, Gaëlle Romain, Mary Callanan, Akoua Alla, Olayidé Boussari, et al.

With improvements in acute myeloid leukemia (AML) diagnosis and treatment, more patients are surviving for longer periods. A French population of 9453 AML patients aged ≥15 years diagnosed from 1995 to 2015 was studied to quantify the proportion cured (P), time to cure (TTC) and median survival of patients who are not cured (MedS). Net survival (NS) was estimated using a flexible model adjusted for age and sex in sixteen AML subtypes. When the cure assumption was acceptable, the flexible cure model was used to estimate P, TTC and MedS for the uncured patients. The 5-year NS varied from 68% to 9% in men and from 77% to 11% in women in acute promyelocytic leukemia (AML-APL) and in therapy-related AML (t-AML), respectively. Marked age-related differences in survival were observed for patients with a diagnosis of AML with recurrent cytogenetic abnormalities. Poorer survival in younger patients was found in t-AML and AML with minimal differentiation. An atypical survival profile according to age was found for acute myelomonocytic leukemia and AML without maturation in both sexes, and for AML not otherwise specified (only for men), with a better prognosis for middle-aged compared to younger patients. Sex disparity in survival was observed in younger patients with t-AML diagnosed at 25 years of age (+28% at 5 years in men compared to women) and in AML with minimal differentiation (+23% at 5 years in women compared to men). All AML subtypes included an age group for which the assumption of cure was acceptable, although P varied from 90% in younger women with AML-APL to 3% in older men with acute monoblastic and monocytic leukemia. Increased P was associated with shorter TTC. A sizeable proportion of AML patients do not achieve cure, and MedS for these patients did not exceed 23 months. We identify AML subsets where the cure assumption does not hold, thus pointing to priority areas for future research efforts.
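The cure-model quantities above (proportion cured, survival of the uncured) fit a standard mixture cure formulation, sketched here with an illustrative exponential survival for the uncured fraction; the study itself uses a flexible parametric cure model, and the numbers below are for illustration only.

```python
import numpy as np

def mixture_cure_survival(t, pi, rate):
    """Mixture cure model: S(t) = pi + (1 - pi) * S_u(t), with an
    exponential survival S_u for the uncured fraction (illustrative choice)."""
    return pi + (1.0 - pi) * np.exp(-rate * t)

# Median survival of the uncured: S_u(t) = 0.5  =>  t = ln(2) / rate.
rate = np.log(2) / 23.0            # rate giving a 23-month median, as an example
med_uncured = np.log(2) / rate     # recovers 23.0 months

t = np.linspace(0, 120, 5)
S = mixture_cure_survival(t, pi=0.4, rate=rate)
# S(t) decreases and flattens toward the cured proportion pi = 0.4 as t grows,
# which is the plateau that makes the cure assumption "acceptable".
```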


Author(s): Gregory Haber, Joshua Sampson, Katherine M. Flegal, Barry Graubard

Abstract
Background: Several studies have assessed the relation of body composition to health outcomes by using values of fat and lean mass that were not measured but instead were predicted from anthropometric variables such as weight and height. Little research has been done on how substituting predicted values for measured covariates might affect analytic results.
Objectives: We aimed to explore statistical issues causing bias in analytical studies that use predicted rather than measured values of body composition.
Methods: We used data from 8014 adults ≥40 y old included in the 1999–2006 US NHANES. We evaluated the relations of predicted total body fat (TF) and predicted total body lean mass (TLM) with all-cause mortality. We then repeated the evaluation using measured body composition variables from DXA. Quintiles and restricted cubic splines allowed flexible modeling of the HRs in unadjusted and multivariable-adjusted Cox regression models.
Results: The patterns of associations between body composition and all-cause mortality depended on whether body composition was defined using predicted values or DXA measurements. The largest differences were observed in multivariable-adjusted models, which mutually adjusted for both TF and TLM. For instance, compared with analyses based on DXA measurements, analyses using predicted values for males overestimated the HRs for TF in splines and in quintiles [HRs (95% CIs) for fourth and fifth quintiles compared with first quintile, DXA: 1.22 (0.88, 1.70) and 1.46 (0.99, 2.14); predicted: 1.86 (1.29, 2.67) and 3.24 (2.02, 5.21)].
Conclusions: It is important for researchers to be aware of the potential pitfalls and limitations inherent in the substitution of predicted values for measured covariates in order to draw proper conclusions from such studies.
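The restricted cubic splines used for flexible HR modeling can be sketched as follows; the knot locations are illustrative, and the construction guarantees linearity beyond the boundary knots, which is what distinguishes restricted (natural) from unrestricted cubic splines.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (linear beyond the boundary knots),
    as commonly used to model nonlinear hazard ratios in Cox regression."""
    x = np.asarray(x, dtype=float)
    k = np.asarray(knots, dtype=float)
    K = len(k)
    pos = lambda u: np.maximum(u, 0.0) ** 3    # truncated cubic
    cols = [x]                                  # linear term
    for j in range(K - 2):
        term = (pos(x - k[j])
                - pos(x - k[K - 2]) * (k[K - 1] - k[j]) / (k[K - 1] - k[K - 2])
                + pos(x - k[K - 1]) * (k[K - 2] - k[j]) / (k[K - 1] - k[K - 2]))
        cols.append(term)
    return np.column_stack(cols)

# 4 knots -> 3 basis columns (1 linear + K-2 spline terms).
B = rcs_basis(np.linspace(0, 10, 101), knots=[1.0, 3.0, 5.0, 8.0])
# Beyond the last knot, each column is linear in x by construction.
```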


2021
Author(s): Sergio Camelo, Dragos F. Ciocan, Dan A. Iancu, Xavier S. Warnes, Spyros I. Zoumpoulis

To respond to pandemics such as COVID-19, policy makers have relied on interventions that target specific population groups or activities. Such targeting is potentially contentious, so rigorously quantifying its benefits and downsides is critical for designing effective and equitable pandemic control policies. We propose a flexible modeling framework and a set of associated algorithms that compute optimally targeted, time-dependent interventions that coordinate across two dimensions of heterogeneity: population group characteristics and the specific activities that individuals engage in during the normal course of a day. We showcase a complete implementation in a case study focused on the Île-de-France region of France, based on commonly available hospitalization, community mobility, social contacts and economic data. We find that optimized dual-targeted policies have a simple and explainable structure, imposing less confinement on group-activity pairs that generate a relatively high economic value prorated by activity-specific social contacts. When compared to confinements based on uniform or less granular targeting, dual-targeted policies generate substantial complementarities that lead to Pareto improvements, reducing the number of deaths and the economic losses overall and reducing the time in confinement for each population group. Since dual-targeted policies could lead to increased discrepancies in the confinements faced by distinct groups, we also quantify the impact of requirements that explicitly limit such disparities, and find that satisfactory intermediate trade-offs may be achievable through limited targeting.
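The targeting rule described above, confining group-activity pairs with low economic value prorated by social contacts more heavily, can be illustrated with hypothetical numbers; every group name and value below is invented for illustration and does not come from the case study.

```python
import numpy as np

# Hypothetical group-activity pairs with illustrative economic values
# (output per unit time) and activity-specific social contact rates.
pairs = ["young-work", "young-leisure", "old-work", "old-leisure"]
econ_value = np.array([10.0, 2.0, 8.0, 1.0])   # invented units
contacts   = np.array([4.0, 6.0, 3.0, 5.0])    # contacts per pair

# Economic value prorated by contacts: higher means the pair generates
# more value per unit of epidemiological risk, so it is confined less.
prorated = econ_value / contacts
order = np.argsort(prorated)                   # lowest prorated value first
for i in order:
    print(pairs[i], round(float(prorated[i]), 2))
```

Pairs at the top of this ranking (low prorated value) would be the first candidates for confinement in a policy of this structure.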


2021, Vol 11 (1)
Author(s): Ben Lopman, Carol Y. Liu, Adrien Le Guillou, Andreas Handel, Timothy L. Lash, et al.

Abstract: University administrators face decisions about how to safely return and maintain students, staff and faculty on campus throughout the 2020–21 school year. We developed a susceptible-exposed-infectious-recovered (SEIR) deterministic compartmental transmission model of SARS-CoV-2 among university students, staff, and faculty. Our goals were to inform planning at our own university, Emory University, a medium-sized university with around 15,000 students and 15,000 faculty and staff, and to provide a flexible modeling framework to inform the planning efforts at similar academic institutions. Control strategies of isolation and quarantine are initiated by screening (regardless of symptoms) or testing (of symptomatic individuals). We explored a range of screening and testing frequencies and performed a probabilistic sensitivity analysis. We found that among students, monthly and weekly screening can reduce cumulative incidence by 59% and 87%, respectively, while testing with a 2-, 4- and 7-day delay between onset of infectiousness and testing results in an 84%, 74% and 55% reduction in cumulative incidence. Smaller reductions were observed among staff and faculty. Community introduction of SARS-CoV-2 onto campus may be controlled with testing, isolation, contact tracing and quarantine. Screening would need to be performed at least weekly to have substantial reductions beyond disease surveillance. This model can also inform resource requirements of diagnostic capacity and isolation/quarantine facilities associated with different strategies.
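A deterministic SEIR compartmental model of the kind described can be sketched with forward Euler integration; the parameter values below are illustrative only, not the calibrated values from the study, although the population size matches the campus scale mentioned.

```python
import numpy as np

def seir(beta, sigma, gamma, N, E0, days, dt=0.1):
    """Deterministic SEIR model integrated with forward Euler.
    beta: transmission rate, sigma: 1/latent period, gamma: 1/infectious period."""
    S, E, I, R = N - E0, E0, 0.0, 0.0
    traj = []
    for _ in range(int(round(days / dt))):
        new_exposed    = beta * S * I / N * dt   # S -> E
        new_infectious = sigma * E * dt          # E -> I
        new_recovered  = gamma * I * dt          # I -> R
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        traj.append((S, E, I, R))
    return np.array(traj)

# Campus-sized population; rates chosen only for illustration (R0 = beta/gamma = 2.8).
traj = seir(beta=0.4, sigma=1/3, gamma=1/7, N=15000, E0=10, days=180)
attack_rate = traj[-1, 3] / 15000   # fraction ever infected by day 180
```

Screening and testing interventions like those in the study would enter as additional flows moving individuals from E and I into isolated compartments.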


Mathematics, 2021, Vol 9 (5), pp. 485
Author(s): Marco Gribaudo, Mauro Iacono, Daniele Manini

We applied a flexible modeling technique capable of representing the dynamics of large populations interacting in space and time, namely Markovian Agents, to study the evolution of COVID-19 in Italy. Our purpose was to show that this modeling approach, which is based on mean-field analysis, performs well in describing the diffusion of phenomena such as COVID-19. The paper describes the application of this modeling approach to the Italian scenario, and results are validated against real data from official Italian documentation of the diffusion of COVID-19. The model of each agent is organized similarly to what is largely established in the literature for the Susceptible-Infected-Recovered (SIR) family of approaches. Results match the main interventions enacted by the Italian government and their effects.

