Learning grammatical structure using statistical decision-trees

Author(s):  
David M. Magerman
2020
Vol 94 (4)
pp. 1135-1149
Author(s):  
Felix M. Kluxen
Ludwig A. Hothorn

1999
Vol 38 (01)
pp. 50-55
Author(s):  
P. F. de Vries Robbé
A. L. M. Verbeek
J. L. Severens

Abstract: The problem of deciding the optimal sequence of diagnostic tests can be structured in decision trees, but unmanageably bushy decision trees result when the sequence of two or more tests is investigated. Most modelling techniques include tests on the basis of gain in certainty. The aim of this study was to explore a model for optimizing the sequence of diagnostic tests based on efficiency criteria. The probability modifying plot shows when, within a specific test sequence, further testing is redundant and which costs are involved; in this way different sequences can be compared. The model is illustrated with data on urinary tract infection. The sequence of diagnostic tests was optimized on the basis of efficiency, defined either as the test sequence with the fewest tests or as the one with the lowest total cost of testing. Further research on the model is needed to address its current limitations.
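The sequential-updating idea behind optimizing a test sequence can be sketched in a few lines of Python: apply Bayes' theorem after each test and stop once the disease probability leaves the band where further testing is still informative. All sensitivities, specificities, costs, and thresholds below are illustrative assumptions, not values from the urinary tract infection study.

```python
def post_test_prob(prior, sens, spec, positive=True):
    """Update the disease probability after one test result (Bayes' theorem)."""
    if positive:
        tp = sens * prior                 # true positives
        fp = (1 - spec) * (1 - prior)     # false positives
        return tp / (tp + fp)
    fn = (1 - sens) * prior               # false negatives
    tn = spec * (1 - prior)               # true negatives
    return fn / (fn + tn)

def run_sequence(prior, tests, treat_at=0.95, dismiss_at=0.05):
    """Apply tests in order (assuming positive results throughout) until
    the probability leaves the band where further testing is useful."""
    prob, cost, used = prior, 0.0, 0
    for name, sens, spec, test_cost in tests:
        if prob >= treat_at or prob <= dismiss_at:
            break                         # further testing is redundant
        prob = post_test_prob(prob, sens, spec, positive=True)
        cost += test_cost
        used += 1
    return prob, used, cost

# Hypothetical test characteristics: (name, sensitivity, specificity, cost).
tests = [("dipstick", 0.85, 0.70, 2.0),
         ("microscopy", 0.80, 0.85, 10.0),
         ("culture", 0.95, 0.98, 25.0)]
prob, used, cost = run_sequence(prior=0.30, tests=tests)
print(prob, used, cost)
```

Comparing different orderings of `tests` by the resulting `used` and `cost` is exactly the kind of efficiency comparison the abstract describes.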


1986
Vol 25 (04)
pp. 207-214
Author(s):  
P. Glasziou

Summary: The development of investigative strategies by decision analysis has so far been achieved by explicitly drawing the decision tree, either by hand or on computer. This paper discusses the feasibility of automatically generating and analysing decision trees from a description of the investigations and the treatment problem. The investigation of cholestatic jaundice is used to illustrate the technique. Methods to decrease the number of calculations required are presented. It is shown that this method makes the simultaneous study of at least half a dozen investigations practical. However, new problems arise from the possible complexity of the resulting optimal strategy. If protocol errors and delays due to testing are considered, simpler strategies become desirable; the generation and assessment of these simpler strategies are discussed with examples.
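Analysing a generated decision tree rests on the standard fold-back (averaging-out and rolling-back) computation: chance nodes contribute probability-weighted expected utilities, and decision nodes choose the branch with the highest expected utility. A minimal sketch, using a toy two-branch tree rather than the cholestatic jaundice model:

```python
def evaluate(node):
    """Fold back a decision tree; return (expected utility, chosen branch or None)."""
    kind = node[0]
    if kind == "leaf":                     # ("leaf", utility)
        return node[1], None
    if kind == "chance":                   # ("chance", [(prob, child), ...])
        ev = sum(p * evaluate(child)[0] for p, child in node[1])
        return ev, None
    # ("decision", [(label, child), ...]): pick the best branch
    best_label, best_ev = None, float("-inf")
    for label, child in node[1]:
        ev, _ = evaluate(child)
        if ev > best_ev:
            best_label, best_ev = label, ev
    return best_ev, best_label

# Toy problem: test first, or treat immediately? Utilities are illustrative.
tree = ("decision", [
    ("test", ("chance", [
        (0.6, ("leaf", 80)),               # test positive, then treat
        (0.4, ("leaf", 95)),               # test negative, no treatment
    ])),
    ("treat_now", ("leaf", 85)),
])
ev, choice = evaluate(tree)
print(ev, choice)
```

Automatic generation multiplies the number of such subtrees rapidly, which is why the paper's calculation-reduction methods matter.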


1998
Vol 37 (03)
pp. 235-238
Author(s):  
M. El-Taha
D. E. Clark

Abstract: A Logistic-Normal random variable (Y) is obtained from a Normal random variable (X) by the relation Y = e^X / (1 + e^X). In Monte Carlo analysis of decision trees, Logistic-Normal random variates may be used to model the branching probabilities. In some cases the probabilities to be modeled may not be independent, and a method for generating correlated Logistic-Normal random variates would be useful. A technique for generating correlated Normal random variates has been described previously. Using Taylor series approximations and the algebraic definitions of variance and covariance, we describe methods for estimating the means, variances, and covariances of Normal random variates which, after translation using the above formula, yield Logistic-Normal random variates having approximately the desired means, variances, and covariances. Multiple simulations of the method using the Mathematica computer algebra system show satisfactory agreement with the theoretical results.
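The generation step itself can be sketched directly: draw correlated Normal variates via a 2×2 Cholesky factor, then apply the logistic transform Y = e^X / (1 + e^X). The means, standard deviations, and correlation below are illustrative; the paper's contribution is the Taylor-series moment matching that picks these Normal-scale parameters to hit desired logistic-scale moments, which this sketch takes as given.

```python
import math
import random

def correlated_logistic_normal(mu1, mu2, sd1, sd2, rho, rng=random):
    """Return one pair of correlated Logistic-Normal variates."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rng.gauss(0.0, 1.0)
    # Cholesky factor of a 2x2 correlation matrix, applied by hand.
    x1 = mu1 + sd1 * z1
    x2 = mu2 + sd2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    logistic = lambda x: math.exp(x) / (1.0 + math.exp(x))
    return logistic(x1), logistic(x2)

random.seed(42)
samples = [correlated_logistic_normal(-1.0, 0.5, 0.4, 0.3, 0.7)
           for _ in range(10_000)]
# Every variate is a valid branching probability in (0, 1).
assert all(0.0 < y1 < 1.0 and 0.0 < y2 < 1.0 for y1, y2 in samples)
```

Because the logistic transform maps the whole real line into (0, 1), the generated pairs can be plugged into decision-tree branch probabilities without clipping.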


TAPPI Journal
2015
Vol 14 (6)
pp. 395-402
Author(s):  
Flávio Marcelo Correia
José Vicente Hallak d’Angelo
Sueli Aparecida Mingoti

Alkali charge is one of the most relevant variables in the continuous kraft cooking process. The white liquor mass flow rate can be determined by analyzing the bulk density of the chips fed to the process. At the mills, the total time for this analysis is usually greater than the residence time in the digester, which can lead to a growing error in the mass of white liquor added relative to the specified alkali charge. This paper proposes a new approach that uses the Box-Jenkins methodology to develop a dynamic model for predicting chip bulk density. Industrial data were gathered for 1948 observations over a period of 12 months from a Kamyr continuous digester at a bleached eucalyptus kraft pulp mill in Brazil. Autoregressive integrated moving average (ARIMA) models were evaluated against different statistical decision criteria, leading to the choice of ARIMA(2,0,2) as the best forecasting model, which was validated against a new dataset gathered during 2 months of operations. A combination of predictors gave more accurate results than those obtained by laboratory analysis, allowing a reduction of around 25% in the chip bulk density error affecting the alkali addition amount.
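Once an ARIMA(2,0,2) model is fitted, forecasting reduces to an ARMA(2,2) recursion: filter the observed series to recover residuals, then combine the last two (mean-centered) observations and residuals. The coefficients and the toy density series below are assumed placeholders, not the fitted values from the mill data; in practice they would come from a Box-Jenkins fit (e.g. with a statistics package).

```python
def arma_one_step(history, phi, theta, mean):
    """One-step-ahead ARMA forecast: recover residuals, then predict."""
    resid = []
    for t, y in enumerate(history):
        pred = mean
        for i, p in enumerate(phi):        # AR terms on past observations
            if t - 1 - i >= 0:
                pred += p * (history[t - 1 - i] - mean)
        for j, q in enumerate(theta):      # MA terms on past residuals
            if t - 1 - j >= 0:
                pred += q * resid[t - 1 - j]
        resid.append(y - pred)
    forecast = mean
    n = len(history)
    for i, p in enumerate(phi):
        forecast += p * (history[n - 1 - i] - mean)
    for j, q in enumerate(theta):
        forecast += q * resid[n - 1 - j]
    return forecast

# Toy chip bulk density readings (kg/m^3), illustrative only.
series = [152.0, 149.5, 151.2, 150.3, 150.8, 149.9, 150.5]
next_density = arma_one_step(series, phi=[0.5, 0.2], theta=[0.3, 0.1],
                             mean=150.5)
print(round(next_density, 2))
```

The practical gain described in the abstract comes from this forecast being available immediately, while the laboratory density analysis lags behind the digester's residence time.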


2018
pp. 35-38
Author(s):  
O. Hyryn

The article deals with natural language processing, namely that of English sentences. It describes the problems that may arise in the process, connected with graphic, semantic, and syntactic ambiguity. The article outlines how these problems were solved before automatic syntactic analysis was applied, and how such analysis methods can be helpful in developing new analysis algorithms. The analysis focuses on the issues underlying natural language processing, in particular parsing: the analysis of sentences according to their structure, content, and meaning, which aims to determine the grammatical structure of a sentence, divide it into constituent components, and define the links between them.
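The syntactic ambiguity the article describes can be made concrete with a CKY chart parser that counts the distinct parses of a sentence. The toy grammar below (in Chomsky normal form) is an assumption for demonstration; the classic prepositional-phrase attachment ambiguity gives the sentence two readings.

```python
from collections import defaultdict

lexicon = {"she": {"NP"}, "stars": {"NP"}, "telescopes": {"NP"},
           "saw": {"V"}, "with": {"P"}}
rules = [("S", "NP", "VP"), ("VP", "V", "NP"), ("VP", "VP", "PP"),
         ("NP", "NP", "PP"), ("PP", "P", "NP")]

def count_parses(tokens):
    """CKY chart parse that counts complete S analyses of the token list."""
    n = len(tokens)
    chart = defaultdict(int)               # (i, j, symbol) -> parse count
    for i, w in enumerate(tokens):
        for sym in lexicon[w]:
            chart[(i, i + 1, sym)] = 1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):      # try every split point
                for parent, left, right in rules:
                    chart[(i, j, parent)] += (chart[(i, k, left)] *
                                              chart[(k, j, right)])
    return chart[(0, n, "S")]

# "with telescopes" can attach to the verb phrase or to "stars": 2 parses.
print(count_parses("she saw stars with telescopes".split()))  # → 2
```

Resolving which of the counted parses is intended is exactly the disambiguation problem that statistical approaches to parsing address.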

