pivot methods
Recently Published Documents


TOTAL DOCUMENTS: 11 (FIVE YEARS: 1)

H-INDEX: 5 (FIVE YEARS: 0)

Author(s):  
Prasanna M. Rathod ◽  
Prof. Dr. Anjali B. Raut

Preparing a data set for analysis is generally the most time-consuming task in a data mining project, requiring many complex SQL queries that join tables and aggregate columns. Existing SQL aggregations are limited for preparing data sets because they return one column per aggregated group. In general, significant manual effort is required to build data sets where a horizontal layout is required. We propose simple yet powerful methods to generate SQL code that returns aggregated columns in a horizontal tabular layout, returning a set of numbers instead of one number per row. This new class of functions is called horizontal aggregations. Horizontal aggregations build data sets with a horizontal denormalized layout (e.g., point-dimension, observation-variable, instance-feature), which is the standard layout required by most data mining algorithms. We propose three fundamental methods to evaluate horizontal aggregations:

1. CASE: exploiting the programming CASE construct;
2. SPJ: based on standard relational algebra operators (SPJ queries);
3. PIVOT: using the PIVOT operator, which is offered by some DBMSs.

Experiments with large tables compare the proposed query evaluation methods. Our CASE method has speed similar to the PIVOT operator and is much faster than the SPJ method. In general, the CASE and PIVOT methods exhibit linear scalability, whereas the SPJ method does not. For query optimization, the distance computation and nearest-cluster assignment in k-means are expressed in SQL.

Workload balancing is the assignment of work to processors in a way that maximizes application performance. The process of load balancing can be generalized into four basic steps:

1. monitoring processor load and state;
2. exchanging workload and state information between processors;
3. decision making;
4. data migration.

The decision phase is triggered when a load imbalance is detected, to calculate an optimal data redistribution. In the fourth and last phase, data migrates from overloaded processors to under-loaded ones.
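As an illustrative sketch of the CASE method described above, a horizontal aggregation returns one row per group with one aggregated column per pivoted value. The table, column names, and data here are hypothetical, and SQLite stands in for the DBMS:

```python
import sqlite3

# Hypothetical toy table: sales(store, product, amount).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales(store TEXT, product TEXT, amount REAL);
INSERT INTO sales VALUES
  ('s1','A',10),('s1','B',5),('s1','A',7),
  ('s2','B',3),('s2','B',4);
""")

# A vertical aggregation (GROUP BY store, product) would return one row
# per group. The CASE method instead pivots each product value into its
# own aggregated column, yielding the horizontal layout.
rows = conn.execute("""
SELECT store,
       SUM(CASE WHEN product = 'A' THEN amount ELSE 0 END) AS amount_A,
       SUM(CASE WHEN product = 'B' THEN amount ELSE 0 END) AS amount_B
FROM sales
GROUP BY store
ORDER BY store;
""").fetchall()

print(rows)  # [('s1', 17.0, 5.0), ('s2', 0.0, 7.0)]
```

Generating such a query automatically requires one preliminary pass to collect the distinct pivot values (here `'A'` and `'B'`), which is where the manual effort the abstract mentions is saved.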


Author(s):  
Yu Shu ◽  
Aiyi Liu ◽  
Zhaohai Li

When hypotheses concerning the sensitivity and specificity of a binary medical diagnostic test are simultaneously tested using a group sequential procedure, constructing point and interval estimates of the parameters is challenging because there is no unique way to order sample points in the two-dimensional space. In this paper, upon termination of a group sequential procedure, we compare the bias and mean squared errors of the maximum-likelihood and Rao–Blackwell unbiased estimators of sensitivity and specificity. Confidence intervals (CIs) for the two parameters are constructed using normal approximation and Woodroofe's pivot methods, based on the maximum-likelihood and Rao–Blackwell unbiased estimates. The coverage probabilities and expected lengths of the CIs are compared through simulation studies.
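The normal-approximation CI mentioned above can be illustrated with a minimal Monte Carlo sketch for sensitivity alone. This fixed-sample simulation ignores the group sequential stopping rule the paper analyzes, and the true sensitivity, sample size, and replication count are assumed values chosen only for illustration:

```python
import math
import random

random.seed(0)
true_sens = 0.85   # assumed true sensitivity
n = 200            # assumed number of diseased subjects tested
reps = 2000        # Monte Carlo replications
z = 1.96           # normal quantile for a nominal 95% CI

covered = 0
for _ in range(reps):
    # Number of true positives among n diseased subjects.
    x = sum(random.random() < true_sens for _ in range(n))
    p_hat = x / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    lo, hi = p_hat - z * se, p_hat + z * se
    covered += lo <= true_sens <= hi

coverage = covered / reps
print(round(coverage, 3))  # empirical coverage of the nominal 95% interval
```

Comparing such empirical coverage and interval lengths across estimators (here only the MLE of one parameter) is the kind of simulation study the abstract describes, though the paper's setting additionally conditions on the sequential stopping stage.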


1997 ◽  
Vol 106 (17) ◽  
pp. 7170-7177 ◽  
Author(s):  
Pablo Serra ◽  
Aaron F. Stanton ◽  
Sabre Kais ◽  
Richard E. Bleil

1995 ◽  
Vol 11 (1) ◽  
pp. 51-58
Author(s):  
Chen Wanji ◽  
Chen Guoqing ◽  
Feng Enmin
