Computational psychiatry 2.0 and implications for stress research

2018 ◽  
Author(s):  
Anton A. Pashkov ◽  
Mikhail A. Berebin

Stress-related disorders are highly prevalent in modern society and pose a significant challenge to human health. Computational psychiatry, a recently emerged branch of psychiatry, is geared toward the mathematical modeling of psychiatric disorders. Harnessing the power of computer science and statistics may bridge the complex nature of psychiatric illnesses with hidden computational mechanisms in the brain. Stress represents an adaptive response to environmental threats, but when it becomes chronic it leads to a progressive deflection from homeostasis or to a buildup of allostatic load, providing researchers with a unique opportunity to track patterns of deviation from adaptive responding toward full-blown disease. The computational psychiatry toolkit enables us to quantitatively assess the extent of such deviations, to explicitly test competing hypotheses by comparing models with real data for goodness of fit and, finally, to tether these computational operations to structural or functional brain alterations as revealed by non-invasive neuroimaging and stimulation techniques.

It is worth noting that the brain does not face the environmental demands imposed on a human or animal directly, but rather detects signals and acts through bodily systems. It is therefore critically important to take homeostatic and allostatic mechanisms into account when considering the sophisticated interactions between brain and body, and how their partnership may result in the establishment of stress-susceptible or resilient profiles.

In this article, with a particular focus on brain-gut interactions, we outline several possible directions for widening the scope of application of the computational approach in mental health care, aiming to integrate computational psychiatry, psychosomatics and nutritional medicine.

Author(s):  
Russell Cheng

Parametric bootstrapping (BS) provides an attractive alternative, both theoretically and numerically, to asymptotic theory for estimating sampling distributions. This chapter summarizes its use not only for calculating confidence intervals for estimated parameters and functions of parameters, but also for obtaining log-likelihood-based confidence regions from which confidence bands for cumulative distribution and regression functions can be derived. All such BS calculations are very easy to implement. Details are also given for calculating critical values of EDF statistics used in goodness-of-fit (GoF) tests, such as the Anderson-Darling A2 statistic, whose null distribution is otherwise difficult to obtain as it varies with the null hypothesis. A simple proof shows that the parametric BS is probabilistically exact for location-scale models. A formal regression lack-of-fit test employing the parametric BS is given that can be used even when the regression data have no replications. Two real-data examples are given.
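As an illustration of the basic recipe, here is a minimal sketch (not taken from the chapter) of a percentile-type parametric bootstrap confidence interval for the rate of an exponential model; the model, seed, and sample sizes are arbitrary illustrative choices:

```python
import random

def exp_mle(sample):
    # MLE of the exponential rate: 1 / sample mean
    return len(sample) / sum(sample)

def parametric_bootstrap_ci(sample, n_boot=2000, alpha=0.05, seed=1):
    """Percentile confidence interval for the exponential rate via
    parametric bootstrap: each resample is drawn from the *fitted* model."""
    rng = random.Random(seed)
    rate_hat = exp_mle(sample)
    boots = sorted(
        exp_mle([rng.expovariate(rate_hat) for _ in range(len(sample))])
        for _ in range(n_boot)
    )
    lo = boots[int(alpha / 2 * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return rate_hat, lo, hi

rng = random.Random(0)
data = [rng.expovariate(2.0) for _ in range(200)]   # true rate 2.0
rate_hat, lo, hi = parametric_bootstrap_ci(data)
```

The key difference from nonparametric bootstrapping is that resamples come from the fitted parametric model rather than from the empirical distribution, which is what makes the method exact for location-scale families.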


Econometrics ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 10
Author(s):  
Šárka Hudecová ◽  
Marie Hušková ◽  
Simos G. Meintanis

This article considers goodness-of-fit tests for bivariate INAR and bivariate Poisson autoregression models. The test statistics are based on an L2-type distance between two estimators of the probability generating function of the observations: one entirely nonparametric and the other semiparametric, computed under the corresponding null hypothesis. The asymptotic distribution of the proposed test statistics is derived both under the null hypothesis and under alternatives, and consistency is proved. The case of testing bivariate generalized Poisson autoregression, as well as the extension of the methods to dimensions higher than two, is also discussed. The finite-sample performance of a parametric bootstrap version of the tests is illustrated via a series of Monte Carlo experiments. The article concludes with applications to real data sets and a discussion.
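The paper's construction is bivariate and semiparametric; a univariate analog conveys the idea. The sketch below (a simplification under stated assumptions, not the authors' statistic) compares the empirical probability generating function of count data with the PGF of a Poisson model fitted under the null, using a discretized L2 distance:

```python
import math
import random

def rpois(lam, rng):
    # Knuth's method for simulating one Poisson(lam) draw
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def empirical_pgf(data, u):
    # Nonparametric estimator of the PGF: the sample mean of u^X
    return sum(u ** x for x in data) / len(data)

def poisson_pgf(lam, u):
    # PGF of Poisson(lam): exp(lam * (u - 1))
    return math.exp(lam * (u - 1.0))

def l2_pgf_statistic(data):
    """Discretized L2-type statistic n * integral_0^1 (g_hat - g_fit)^2 du."""
    n = len(data)
    lam_hat = sum(data) / n          # Poisson MLE under the null
    grid = 100
    total = 0.0
    for k in range(grid):
        u = (k + 0.5) / grid         # midpoint rule on [0, 1]
        diff = empirical_pgf(data, u) - poisson_pgf(lam_hat, u)
        total += diff * diff
    return n * total / grid

rng = random.Random(3)
null_data = [rpois(4.0, rng) for _ in range(200)]
alt_data = [0] * 100 + [10] * 100    # strongly non-Poisson counts
stat_null = l2_pgf_statistic(null_data)
stat_alt = l2_pgf_statistic(alt_data)
```

Large values of the statistic indicate departure from the null; in practice its critical values would come from the parametric bootstrap, as in the article.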


2017 ◽  
Vol 29 (5) ◽  
pp. 529-542 ◽  
Author(s):  
Marko Intihar ◽  
Tomaž Kramberger ◽  
Dejan Dragan

The paper examines the impact of integrating macroeconomic indicators on the accuracy of a container throughput time series forecasting model. For this purpose, Dynamic Factor Analysis and an AutoRegressive Integrated Moving-Average model with eXogenous inputs (ARIMAX) are used. Both methodologies are integrated into a novel four-stage heuristic procedure. First, dynamic factors are extracted from external macroeconomic indicators influencing the observed throughput. Second, a family of ARIMAX models of different orders is generated based on the derived factors. In the third stage, diagnostic and goodness-of-fit testing is applied, covering statistical criteria such as fit performance, information criteria, and parsimony. Finally, the best model is heuristically selected and tested on real data from the Port of Koper. The results show that incorporating macroeconomic indicators into the forecasting model yields more accurate future throughput forecasts. The model is also used to produce forecasts for the next four years, indicating more oscillatory behaviour in 2018-2020. Hence, care must be taken by management concerning any major investment decisions. It is believed that the proposed model might be a useful reinforcement of the existing forecasting module in the observed port.
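A full ARIMAX fit needs a dedicated time series library; to make the role of the exogenous input concrete, the following stripped-down sketch fits an ARX(1) model y[t] = phi*y[t-1] + beta*x[t] by least squares on simulated data. All names and parameter values here are illustrative, not the paper's:

```python
import random

def fit_arx1(y, x):
    """Least-squares fit of y[t] = phi * y[t-1] + beta * x[t]: an ARX(1)
    model, a stripped-down cousin of ARIMAX with one lag, one exogenous
    input, and no differencing or moving-average terms."""
    s11 = s22 = s12 = b1 = b2 = 0.0
    for t in range(1, len(y)):
        s11 += y[t - 1] * y[t - 1]
        s22 += x[t] * x[t]
        s12 += y[t - 1] * x[t]
        b1 += y[t - 1] * y[t]
        b2 += x[t] * y[t]
    det = s11 * s22 - s12 * s12     # Cramer's rule on the 2x2 normal equations
    phi = (b1 * s22 - s12 * b2) / det
    beta = (s11 * b2 - s12 * b1) / det
    return phi, beta

rng = random.Random(42)
x = [rng.gauss(0.0, 1.0) for _ in range(500)]       # stand-in exogenous indicator
y = [0.0]
for t in range(1, 500):
    y.append(0.6 * y[t - 1] + 1.5 * x[t] + rng.gauss(0.0, 0.2))
phi, beta = fit_arx1(y, x)
```

With 500 observations the recovered coefficients land close to the true (0.6, 1.5), which is the basic mechanism by which an informative exogenous series sharpens throughput forecasts.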


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Huibing Hao ◽  
Chun Su

A novel reliability assessment method is proposed for a degrading product with two dependent performance characteristics (PCs), in contrast to existing work that utilizes only one-dimensional degradation data. In this model, the dependence between the two PCs is described by the Frank copula function, and each PC is governed by a nonlinear diffusion process with random effects that capture unit-to-unit differences. Because the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example based on an LED lamp demonstrates the usefulness and validity of the proposed model and method. Numerical results show that the random-effects nonlinear diffusion model fits the real data well, and that ignoring the dependence between PCs may lead to different reliability conclusions.
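As a sketch of the dependence component only (the diffusion processes and MCMC estimation are beyond a few lines), the following draws dependent uniform pairs from a Frank copula by conditional inversion; theta = 8 is an arbitrary illustrative value for strong positive dependence:

```python
import math
import random

def frank_pair(theta, rng):
    """Draw (u, v) from a Frank copula with dependence parameter theta,
    using conditional inversion: v = C^{-1}(w | u) for an independent uniform w."""
    u = rng.random()
    w = rng.random()
    num = w * (math.exp(-theta) - 1.0)
    den = 1.0 + (math.exp(-theta * u) - 1.0) * (1.0 - w)
    return u, -math.log(1.0 + num / den) / theta

def corr(a, b):
    # Plain Pearson correlation, written out for self-containment
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a)
    vb = sum((q - mb) ** 2 for q in b)
    return cov / math.sqrt(va * vb)

rng = random.Random(11)
pairs = [frank_pair(8.0, rng) for _ in range(2000)]
us = [p[0] for p in pairs]
vs = [p[1] for p in pairs]
rho = corr(us, vs)
```

In a degradation model these uniforms would be transformed to the marginal distributions of the two PCs; ignoring the copula amounts to treating rho as zero, which is exactly the simplification the paper shows can distort reliability estimates.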


Author(s):  
Lingtao Kong

The exponential distribution has been widely used in the engineering, social and biological sciences. In this paper, we propose a new goodness-of-fit test for fuzzy exponentiality using the α-pessimistic value. The test statistic is based on Kullback-Leibler information. Using the Monte Carlo method, we obtain the empirical critical points of the test statistic at four different significance levels. To evaluate the performance of the proposed test, we compare it with four commonly used tests through simulations. Experimental studies show that the proposed test has higher power than the other tests in most cases. In particular, for the uniform and linear failure rate alternatives, our method performs best. A real-data example is investigated to show the application of our test.
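The paper's test concerns fuzzy data via α-pessimistic values; for crisp data, a KL-type exponentiality statistic can be sketched by combining a Vasicek entropy estimate with the entropy of the fitted exponential. The form below is a standard construction of this kind, not necessarily the authors' exact statistic:

```python
import math
import random

def vasicek_entropy(x, m):
    """Vasicek spacing estimator of differential entropy."""
    n = len(x)
    s = sorted(x)
    total = 0.0
    for i in range(n):
        lo = s[max(i - m, 0)]
        hi = s[min(i + m, n - 1)]
        total += math.log(n * (hi - lo) / (2.0 * m))
    return total / n

def kl_exp_statistic(x, m=5):
    """KL-type distance between the sample and a fitted exponential:
    entropy of Exp(1/mean), which equals 1 + ln(mean), minus the estimated
    entropy. Small values support exponentiality; large values reject it."""
    mean = sum(x) / len(x)
    return (1.0 + math.log(mean)) - vasicek_entropy(x, m)

rng = random.Random(5)
exp_sample = [rng.expovariate(1.0) for _ in range(500)]
unif_sample = [rng.random() for _ in range(500)]
stat_exp = kl_exp_statistic(exp_sample)
stat_unif = kl_exp_statistic(unif_sample)
```

As in the paper, critical points for such a statistic have no closed form and are obtained by Monte Carlo simulation under the null.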


Entropy ◽  
2020 ◽  
Vol 22 (6) ◽  
pp. 603
Author(s):  
Abdulhakim A. Al-Babtain ◽  
Abdul Hadi N. Ahmed ◽  
Ahmed Z. Afify

In this paper, we propose and study a new probability mass function created as a natural discrete analog of the continuous Lindley distribution, expressed as a mixture of geometric and negative binomial distributions. The new distribution has many interesting properties that make it superior to many other discrete distributions, particularly in analyzing over-dispersed count data. Several statistical properties of the introduced distribution are established, including moments, the moment generating function, residual moments, characterization, entropy, and estimation of the parameter by the maximum likelihood method. A bias reduction method is applied to the derived estimator, and its existence and uniqueness are discussed. The goodness of fit of the proposed distribution has been examined and compared with other discrete distributions using three real data sets from the biological sciences.
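To see why a geometric/negative-binomial mixture produces over-dispersion, the sketch below builds such a two-component pmf; the particular weight and success probability are illustrative stand-ins, not the paper's Lindley-derived parameterization:

```python
def geom_pmf(k, p):
    # Geometric on {0, 1, 2, ...}: P(X = k) = p * (1 - p)^k
    return p * (1.0 - p) ** k

def nb2_pmf(k, p):
    # Negative binomial with r = 2: P(X = k) = (k + 1) * p^2 * (1 - p)^k
    return (k + 1) * p ** 2 * (1.0 - p) ** k

def mixture_pmf(k, p, w):
    """Two-component mixture of Geometric(p) and NB(2, p) with weight w,
    mirroring the mixture structure of the continuous Lindley density."""
    return w * geom_pmf(k, p) + (1.0 - w) * nb2_pmf(k, p)

p, w = 0.3, 0.4        # illustrative values only
probs = [mixture_pmf(k, p, w) for k in range(400)]   # truncated support
total = sum(probs)
mean = sum(k * q for k, q in enumerate(probs))
var = sum(k * k * q for k, q in enumerate(probs)) - mean * mean
```

The mixture's variance exceeds its mean (each component has variance/mean = 1/p > 1, and mixing the two different means adds further spread), which is what makes this family natural for over-dispersed counts.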


2020 ◽  
pp. 026666692096738
Author(s):  
Jia Yu ◽  
Jun Xia

As information and communication technologies (ICT) continue to impact and shape modern society, e-justice has gained momentum in recent years. China’s Supreme People’s Court (SPC) set up a trial management information system that connects all courts in China. Designed to be online, transparent, and intelligent, China’s Smart Court development began in 2016. In practice, the SPC promoted modernization of the trial systems to improve the flow of information between courts throughout China. However, significant investments in ICT e-justice services have caused some to question whether these investments have achieved the expected ends. Thus, how to evaluate e-justice services becomes an urgent theoretical and policy issue in the process of e-justice construction in China. E-justice value is not clearly defined in theory in China, nor is it easy to measure in practice. Because of the sensitive and complex nature of such evaluation, little research has been conducted in this regard. The objective of this paper is to fill this gap. Relevant literature is reviewed before the article moves on to describe various approaches to e-government and e-justice evaluation, as well as the characteristics of China’s Smart Court. Evaluation factors and constructs are identified based on the Chinese circumstances. This study contributes to the development of a holistic evaluation framework for e-justice systems. It also adds the Chinese case to the existing literature. The evaluation factors identified in this article can also serve as a foundation for future development and study of e-justice services.


2019 ◽  
Vol 44 (3) ◽  
pp. 167-181 ◽  
Author(s):  
Wenchao Ma

Limited-information fit measures appear promising for assessing the goodness-of-fit of dichotomous response cognitive diagnosis models (CDMs), but their performance has not been examined for polytomous response CDMs. This study investigates the performance of the Mord statistic and the standardized root mean square residual (SRMSR) for an ordinal response CDM: the sequential generalized deterministic inputs, noisy “and” gate model. Simulation studies showed that the Mord statistic had well-calibrated Type I error rates, but its correct detection rates were influenced by factors such as item quality, sample size, and the number of response categories. The SRMSR was also influenced by many factors, and the common practice of comparing the SRMSR against a prespecified cut-off (e.g., .05) may not be appropriate. A set of real data was also analyzed to illustrate the use of the Mord statistic and the SRMSR in practice.
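The SRMSR itself is simple to compute once observed and model-implied item correlations are in hand; a minimal sketch, with made-up 3×3 correlation matrices standing in for real CDM output:

```python
import math

def srmsr(observed_corr, expected_corr):
    """Standardized root mean square residual over unique item pairs:
    the root mean square difference between observed and model-implied
    inter-item correlations."""
    n = len(observed_corr)
    total = 0.0
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = observed_corr[i][j] - expected_corr[i][j]
            total += d * d
            count += 1
    return math.sqrt(total / count)

observed = [
    [1.00, 0.52, 0.41],
    [0.52, 1.00, 0.33],
    [0.41, 0.33, 1.00],
]
expected = [          # hypothetical model-implied correlations
    [1.00, 0.47, 0.46],
    [0.47, 1.00, 0.28],
    [0.46, 0.28, 1.00],
]
value = srmsr(observed, expected)
```

Here every pairwise residual is 0.05 in magnitude, so the SRMSR is 0.05, exactly at the conventional cut-off whose general validity the study questions.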


Author(s):  
Ibrahim Sule ◽  
Sani Ibrahim Doguwa ◽  
Audu Isah ◽  
Haruna Muhammad Jibril

Background: In the last few years, statisticians have introduced new generated families of univariate distributions. These new generators are obtained by adding one or more extra shape parameters to an underlying distribution to gain more flexibility in fitting data in areas such as the medical sciences, economics, finance and the environmental sciences. The addition of parameter(s) has proven useful for exploring tail properties and for improving the goodness-of-fit of the family of distributions under study. Methods: A new three-parameter family of distributions was introduced using the T-X methodology. Some statistical properties of the new family were derived and studied. Results: A new Topp Leone Kumaraswamy-G family of distributions was introduced. Two special sub-models, the Topp Leone Kumaraswamy exponential distribution and the Topp Leone Kumaraswamy log-logistic distribution, were investigated. Two real data sets were used to assess the flexibility of the sub-models. Conclusion: The results suggest that the two sub-models performed better than their competitors.
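One common way to write such a composed CDF (assumed here as an illustration, not necessarily the paper's exact parameterization) applies a Topp-Leone transform on top of a Kumaraswamy transform of a baseline CDF G:

```python
import math

def tlkw_g_cdf(g, alpha, a, b):
    """Assumed Topp-Leone Kumaraswamy-G form:
    F(x) = [1 - (1 - G(x)^a)^(2b)]^alpha, applied to a baseline CDF value g."""
    return (1.0 - (1.0 - g ** a) ** (2.0 * b)) ** alpha

def tlkw_exponential_cdf(x, lam, alpha, a, b):
    # Baseline: exponential CDF G(x) = 1 - exp(-lam * x), giving the
    # Topp Leone Kumaraswamy exponential sub-model
    g = 1.0 - math.exp(-lam * x)
    return tlkw_g_cdf(g, alpha, a, b)

# Illustrative shape/scale values; any positive alpha, a, b, lam work
grid = [i / 10.0 for i in range(1, 101)]
vals = [tlkw_exponential_cdf(x, 0.5, 2.0, 1.5, 0.8) for x in grid]
```

Because each layer is a monotone map of [0, 1] onto [0, 1], the composition is a valid CDF for any positive shape parameters, which is what lets the extra parameters reshape the tails without breaking distributional validity.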


2003 ◽  
Vol 33 (2) ◽  
pp. 365-381 ◽  
Author(s):  
Vytaras Brazauskas ◽  
Robert Serfling

Several recent papers treated robust and efficient estimation of tail index parameters for (equivalent) Pareto and truncated exponential models, for large and small samples. New robust estimators of “generalized median” (GM) and “trimmed mean” (T) type were introduced and shown to provide more favorable trade-offs between efficiency and robustness than several well-established estimators, including those corresponding to the methods of maximum likelihood, quantiles, and percentile matching. Here we investigate the performance of the above-mentioned estimators on real data and establish, via the use of goodness-of-fit measures, that the favorable theoretical properties of the GM and T type estimators translate into excellent practical performance. Further, we arrive at guidelines for Pareto model diagnostics, testing, and selection of particular robust estimators in practice. Model fits provided by the estimators are ranked and compared on the basis of the Kolmogorov-Smirnov, Cramér-von Mises, and Anderson-Darling statistics.
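As a sketch of the benchmark side of such a comparison (the GM and T estimators themselves are not reproduced here), the following fits the Pareto tail index by maximum likelihood with known scale and scores the fit with a Kolmogorov-Smirnov distance:

```python
import math
import random

def pareto_mle(x, sigma):
    # MLE of the Pareto tail index with known scale sigma: n / sum(ln(x_i / sigma))
    return len(x) / sum(math.log(xi / sigma) for xi in x)

def ks_statistic(x, sigma, alpha):
    """Kolmogorov-Smirnov distance between the empirical CDF and the
    fitted Pareto CDF F(x) = 1 - (sigma / x)^alpha."""
    s = sorted(x)
    n = len(s)
    d = 0.0
    for i, xi in enumerate(s):
        f = 1.0 - (sigma / xi) ** alpha
        d = max(d, abs(f - (i + 1) / n), abs(f - i / n))
    return d

rng = random.Random(7)
# Pareto(scale sigma = 1, index alpha = 2) draws by inverse transform
data = [(1.0 - rng.random()) ** (-0.5) for _ in range(1000)]
alpha_hat = pareto_mle(data, 1.0)
ks = ks_statistic(data, 1.0, alpha_hat)
```

Ranking competing estimators then amounts to computing the same KS (or Cramér-von Mises, Anderson-Darling) distance for each estimator's fitted model and comparing; a robust estimator earns its keep when contamination inflates the MLE's distance but not its own.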

