Learning to Rank Sports Teams on a Graph

2020 ◽  
Vol 10 (17) ◽  
pp. 5833
Author(s):  
Jian Shi ◽  
Xin-Yu Tian

To improve the prediction ability of ranking models in sports, a generalized PageRank model is introduced. In the model, a game graph is constructed from the perspective of Bayesian correction with game results. In the graph, nodes represent teams, and a link function is used to synthesize the information of each game to calculate the weight on the graph’s edge. The parameters of the model are estimated by minimizing the loss function, which measures the gap between the predicted rank obtained by the model and the actual rank. The application to the National Basketball Association (NBA) data shows that the proposed model can achieve better prediction performance than the existing ranking models.
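As a rough illustration of the general idea (not the authors' Bayesian correction or link function), the sketch below builds a weighted game graph in which each loss adds an edge toward the winner, weighted here simply by the score margin, and ranks teams with a PageRank power iteration; all names and values are hypothetical.

```python
import numpy as np

def rank_teams(games, n_teams, damping=0.85, iters=100):
    """PageRank-style team ranking on a game graph.

    games: list of (loser, winner, margin) tuples; each game adds an
    edge from the loser to the winner. The paper's Bayesian correction
    and link function are replaced by a plain margin weight here.
    """
    W = np.zeros((n_teams, n_teams))
    for loser, winner, margin in games:
        W[loser, winner] += margin          # credit flows toward the winner
    row_sums = W.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; teams with no losses spread uniformly
    P = np.where(row_sums > 0, W / np.maximum(row_sums, 1e-12), 1.0 / n_teams)
    r = np.full(n_teams, 1.0 / n_teams)
    for _ in range(iters):
        r = (1 - damping) / n_teams + damping * P.T @ r
    return r

# Toy example: team 2 beats everyone, team 0 loses to everyone
scores = rank_teams([(0, 2, 10), (1, 2, 5), (0, 1, 3)], n_teams=3)
print(np.argsort(-scores))   # ranking, best team first
```

In the paper, the edge weights instead come from a link function that synthesizes the game results after a Bayesian correction, and the model parameters are estimated by minimizing a loss between the predicted and actual ranks rather than fixed a priori.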

2021 ◽  
Vol 3 (4) ◽  
Author(s):  
Jianlei Zhang ◽  
Yukun Zeng ◽  
Binil Starly

Abstract Data-driven approaches for machine tool wear diagnosis and prognosis have been gaining attention in the past few years. The goal of our study is to advance the adaptability, flexibility, prediction performance, and prediction horizon for online monitoring and prediction. This paper proposes the use of a recent deep learning method based on a gated recurrent neural network architecture, specifically Long Short-Term Memory (LSTM), which captures long-term dependencies better than a regular recurrent neural network when modeling sequential data, together with a mechanism to realize online diagnosis, prognosis, and remaining useful life (RUL) prediction from indirect measurements collected during the manufacturing process. Existing models are usually tool-specific and can hardly be generalized to other scenarios such as different tools or operating environments. Different from current methods, the proposed model requires no prior knowledge about the system and thus can be generalized to different scenarios and machine tools. With inherent memory units, the proposed model can also capture long-term dependencies while learning from sequential data such as those collected by condition monitoring sensors, which means it can accommodate machine tools with varying life and increase the prediction performance. To prove the validity of the proposed approach, we conducted multiple experiments on a milling machine cutting tool and applied the model for online diagnosis and RUL prediction. Without loss of generality, we incorporated a system transition function and a system observation function into the neural network and trained it with signal data from a minimally intrusive vibration sensor. The experiment results showed that our LSTM-based model achieved the best overall accuracy among the compared methods, with minimal Mean Square Error (MSE) for both tool wear prediction and RUL prediction.
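A minimal sketch of such an LSTM regressor, assuming Keras and illustrative layer sizes, window length and feature count (none of which are taken from the paper):

```python
import numpy as np
import tensorflow as tf

# Illustrative LSTM regressor mapping windowed vibration features to a wear/RUL value.
def build_rul_model(window_len=64, n_features=3):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(window_len, n_features)),
        tf.keras.layers.LSTM(64, return_sequences=True),   # memory of the long-term wear trend
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),                           # predicted tool wear (or RUL)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# X: (n_windows, window_len, n_features) sensor windows, y: wear/RUL targets (random here)
X = np.random.randn(128, 64, 3).astype("float32")
y = np.random.rand(128).astype("float32")
model = build_rul_model()
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0).shape)   # (1, 1)
```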


2021 ◽  
Vol 16 (1) ◽  
Author(s):  
Mingli Wang ◽  
Huikuan Gu ◽  
Jiang Hu ◽  
Jian Liang ◽  
Sisi Xu ◽  
...  

Abstract Background and purpose To explore whether a highly refined dose-volume histogram (DVH) prediction model can improve the accuracy and reliability of knowledge-based volumetric modulated arc therapy (VMAT) planning for cervical cancer. Methods and materials The proposed model underwent repeated refinement through progressive training as the training samples increased from the initial 25 prior plans to 100 cases. The estimated DVHs derived from the prediction models of different runs of training were compared in 35 new cervical cancer patients to analyze the effect of this interactive plan-and-model evolution method. The reliability and efficiency of knowledge-based planning (KBP) using this highly refined model in improving the consistency and quality of the VMAT plans were also evaluated. Results The prediction ability was reinforced with the increased number of refinements in terms of normal tissue sparing. With enhanced prediction accuracy, more than 60% of automatic plan-6 (AP-6) plans (22/35) could be directly approved for clinical treatment without any manual revision. The plan quality scores for clinically approved plans (CPs) and manual plans (MPs) were on average 89.02 ± 4.83 and 86.48 ± 3.92 (p < 0.001). Knowledge-based planning significantly reduced the Dmean and V18 Gy for the kidneys (L/R), and the Dmean, V30 Gy, and V40 Gy for the bladder, rectum, and femoral heads (L/R). Conclusion The proposed model evolution method provides a practical way for KBP to enhance its prediction ability with minimal human intervention. This highly refined prediction model can better guide KBP in improving the consistency and quality of VMAT plans.
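The progressive-refinement idea can be sketched as a loop that retrains the DVH predictor each time the pool of approved prior plans grows; the regressor, features and targets below are random placeholders, not the clinical model used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Sketch of progressive model refinement: retrain the DVH predictor as the pool
# of prior plans grows (25 -> 50 -> 75 -> 100) and check accuracy on new patients.
rng = np.random.default_rng(0)
X_pool, y_pool = rng.normal(size=(100, 12)), rng.normal(size=100)    # prior-plan features / DVH targets
X_new, y_new = rng.normal(size=(35, 12)), rng.normal(size=35)        # 35 new patients

for n_train in (25, 50, 75, 100):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_pool[:n_train], y_pool[:n_train])                    # refined model of this run
    err = mean_absolute_error(y_new, model.predict(X_new))
    print(f"training plans: {n_train:3d}  MAE on new patients: {err:.3f}")
```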


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Wen-ze Wu ◽  
Wanli Xie ◽  
Chong Liu ◽  
Tao Zhang

Purpose – A new method for forecasting the wind turbine capacity of China is proposed through a grey modelling technique. Design/methodology/approach – First, the concepts of the discrete grey model are introduced into the NGBM(1,1) model to reduce the discretization error arising when passing from the differential equation to its discrete form. Then the conformable fractional accumulation is incorporated into the discrete NGBM(1,1) model to further improve the predictive performance. Finally, in order to effectively determine the emerging coefficients, namely the fractional order and the nonlinear coefficient, the whale optimization algorithm (WOA) is employed. Findings – The empirical results show that the newly proposed model has better prediction performance than the benchmark models; the wind turbine capacity is forecast for 2019 to 2021 and is expected to reach 275,954.42 megawatts in 2021. Based on the forecasts, policy suggestions are provided for policy-makers. Originality/value – By combining the fractional accumulation and the concepts of the discrete grey model, a new method to improve the prediction performance of the NGBM(1,1) model is proposed. The newly proposed model is applied for the first time to predict the wind turbine capacity of China.
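For illustration, the sketch below implements the classical binomial-coefficient fractional accumulation used in fractional grey models; the conformable fractional accumulation used in the paper differs in form, but the accumulate-then-model pattern and the role of the fractional order r are the same. In the paper, r and the nonlinear coefficient are then chosen by the WOA so as to minimize the fitting error.

```python
import numpy as np
from math import gamma

def fractional_ago(x, r):
    """r-order fractional accumulated generating operation (AGO).

    Classical binomial-coefficient form: x_r(k) = sum_i C(k-i+r-1, k-i) * x(i).
    Shown only to illustrate the idea; the paper uses a conformable variant.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    out = np.zeros(n)
    for k in range(n):
        out[k] = sum(gamma(r + k - i) / (gamma(k - i + 1) * gamma(r)) * x[i]
                     for i in range(k + 1))
    return out

x = [10.2, 11.8, 13.1, 15.0, 17.4, 19.9]
print(np.allclose(fractional_ago(x, 1.0), np.cumsum(x)))  # True: order 1 is the usual AGO
print(fractional_ago(x, 0.5))                              # a "softer" accumulation
```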


Author(s):  
Ashwini Rahangdale ◽  
Shital Raut

Learning-to-rank (LTR) is a very active research topic in information retrieval (IR). An LTR framework usually learns the ranking function from available training data, which are costly and time-consuming to obtain and often biased. When a sufficient amount of labeled training data is not available, semi-supervised learning is one of the machine learning paradigms that can be applied to obtain pseudo labels from unlabeled data. Cluster-and-label is a basic semi-supervised approach that identifies high-density regions in the data space, which are mainly used to support the supervised learning. However, clustering with conventional methods may lead to prediction performance that is worse than that of supervised learning algorithms when applied to LTR. Thus, we propose rank-preserving clustering (RPC) with PLocalSearch to obtain pseudo labels for unlabeled data. We present a semi-supervised learning method that adopts a clustering-based transductive approach and combines it with a non-measure-specific listwise approach to learn the LTR model. Moreover, each cluster follows the multi-task learning setting to avoid optimizing multiple loss functions. This reduces the training complexity of the adopted listwise approach from an exponential order to a polynomial order. Empirical analysis on the standard datasets (LETOR) shows that the proposed model gives better results compared to other state-of-the-art methods.
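The basic cluster-and-label step can be sketched with a plain KMeans clustering and majority-grade propagation, as below; the paper's rank-preserving clustering (RPC) with PLocalSearch replaces this simple scheme, and all data here are simulated.

```python
import numpy as np
from sklearn.cluster import KMeans

# Generic cluster-and-label pseudo-labeling for LTR feature vectors.
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(200, 20))
y_labeled = rng.integers(0, 3, size=200)        # relevance grades 0-2
X_unlabeled = rng.normal(size=(1000, 20))

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(np.vstack([X_labeled, X_unlabeled]))
labeled_clusters = km.labels_[:len(X_labeled)]
unlabeled_clusters = km.labels_[len(X_labeled):]

pseudo_labels = np.empty(len(X_unlabeled))
for c in range(km.n_clusters):
    mask = labeled_clusters == c
    # majority relevance grade among labeled points in the cluster (global majority if empty)
    grade = np.bincount(y_labeled[mask]).argmax() if mask.any() else np.bincount(y_labeled).argmax()
    pseudo_labels[unlabeled_clusters == c] = grade

print(np.unique(pseudo_labels, return_counts=True))
```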


2019 ◽  
Vol 2019 ◽  
pp. 1-6 ◽  
Author(s):  
Wen-Ze Wu ◽  
Jianming Jiang ◽  
Qi Li

This paper aims to further increase the prediction accuracy of the grey model based on the existing discrete grey model, DGM(1,1). Herein, we begin by studying the connection between the forecasts and the first entry of the original series. The results comprehensively show that the forecasts are independent of the first entry in the original series. On this basis, an effective method of inserting an arbitrary number in front of the first item of the original series to extract additional information is applied to produce a novel grey model, abbreviated as FDGM(1,1) for simplicity. Notably, the proposed model can even forecast future data using only three historical data points. To demonstrate the effectiveness of the proposed model, two classical examples concerning the tensile strength and the life of a product are employed in this paper. The numerical results indicate that FDGM(1,1) has better prediction performance than most commonly used grey models.
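For reference, a minimal implementation of the underlying DGM(1,1) recursion is sketched below (FDGM(1,1) additionally prepends an arbitrary number to the series, which is omitted here); the example data are illustrative.

```python
import numpy as np

def dgm11_forecast(x0, steps=3):
    """Standard discrete grey model DGM(1,1):
        x1(k+1) = b1 * x1(k) + b2,  where x1 is the cumulative sum of x0.
    Returns fitted values for x0(2..n) followed by `steps` forecasts.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    B = np.column_stack([x1[:-1], np.ones(len(x1) - 1)])
    b1, b2 = np.linalg.lstsq(B, x1[1:], rcond=None)[0]          # least-squares parameters
    k = np.arange(1, len(x0) + steps)
    x1_hat = b1 ** k * x1[0] + (1 - b1 ** k) / (1 - b1) * b2     # predicted accumulated series
    x1_hat = np.concatenate([[x1[0]], x1_hat])
    return np.diff(x1_hat)                                       # restore to the original scale

x0 = [2.87, 3.28, 3.34, 3.56, 3.72]
print(dgm11_forecast(x0, steps=2))   # fitted values followed by two forecasts
```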


Author(s):  
Y Cao ◽  
J Mao ◽  
H Ching ◽  
J Yang

Using the quality loss function developed by Taguchi, the manufacturing time and cost of a product can be reduced, improving the factory's competitiveness. However, the fuzziness in quality loss has not been considered in the Taguchi method. This article presents a fuzzy quality loss function model. First, fuzzy logic is used to describe the semantics of quality, and the quality level is divided into several grades. Then the fuzzy quality loss function is developed using the loss in monetary terms, which reflects the quality loss of each quality level and the normalized expected probability of each quality grade. Moreover, a new optimization model for tolerance design under the fuzzy quality loss function is established. An example is used to illustrate the validity of the proposed model. The result shows that the proposed method is more flexible and can achieve a balance of quality and cost in tolerance design. It can be easily adapted to practical engineering applications.
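A minimal sketch of the crisp Taguchi quadratic loss and of one simple way to form a membership-weighted expected loss over quality grades; the grade losses and memberships below are assumed values, not the paper's formulation.

```python
import numpy as np

def taguchi_loss(y, target, k):
    """Nominal-the-best Taguchi quality loss L(y) = k * (y - target)^2."""
    return k * (y - target) ** 2

# Illustrative fuzzy extension: the quality level is described by membership
# degrees over linguistic grades, and the expected loss is the normalized
# membership-weighted sum of the per-grade monetary losses.
grades = ["excellent", "good", "acceptable", "poor"]
grade_loss = np.array([0.0, 2.0, 8.0, 20.0])       # monetary loss per grade (assumed)
membership = np.array([0.1, 0.6, 0.3, 0.0])        # fuzzy membership of the measured part

expected_fuzzy_loss = (membership / membership.sum()) @ grade_loss
print(taguchi_loss(y=10.2, target=10.0, k=50))     # 2.0 (crisp quadratic loss)
print(expected_fuzzy_loss)                         # 3.6 (membership-weighted loss)
```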


2015 ◽  
Vol 22 (7) ◽  
pp. 1281-1300 ◽  
Author(s):  
Satyendra Kumar Sharma ◽  
Vinod Kumar

Purpose – Selection of a logistics service provider (LSP), also known as a third-party logistics (3PL) provider, is a critical decision because logistics affects both the top and bottom line. Companies consider logistics a cost driver, and at the time of the LSP selection decision many important decision criteria are left out. 3PL selection is a multi-criteria decision-making process. The purpose of this paper is to develop an integrated approach, combining quality function deployment (QFD) and the Taguchi loss function (TLF), to select the optimal 3PL. Design/methodology/approach – Multiple criteria are derived from the company requirements using the house of quality. The 3PL service attributes are developed using QFD and the relative importance of the attributes is assessed. TLFs are used to measure the performance of each 3PL on each decision variable. Composite weighted loss scores are used to rank the 3PLs. Findings – QFD is a suitable tool for connecting the attributes used in a decision problem to the decision maker's requirements. In total, 15 criteria were used, and the TLF provides performance on these criteria. Practical implications – The proposed model provides a methodology for making an informed decision on 3PL selection. The proposed model may be converted into a decision support system. Originality/value – The proposed approach is a novel approach that connects the 3PL selection problem to practice in terms of identifying criteria and provides a single numerical value in terms of Taguchi loss.
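A small numeric sketch of the composite weighted-loss ranking: QFD-style criterion weights, a smaller-the-better Taguchi loss per criterion, and a weighted sum per 3PL. All weights, specification limits and performance values are invented for illustration.

```python
import numpy as np

criteria = ["cost", "delivery delay", "damage rate"]
weights = np.array([0.5, 0.3, 0.2])                  # criterion weights from the house of quality (assumed)
spec = np.array([100.0, 2.0, 0.05])                  # acceptable limit per criterion (assumed)
k = 100.0                                            # monetary loss at the specification limit

# rows: 3PL candidates, columns: measured performance on each criterion
performance = np.array([
    [ 90.0, 1.5, 0.02],    # 3PL A
    [110.0, 1.0, 0.04],    # 3PL B
    [ 80.0, 3.0, 0.06],    # 3PL C
])

loss = k * (performance / spec) ** 2                 # smaller-the-better Taguchi loss per criterion
composite = loss @ weights                           # composite weighted loss score per 3PL
ranking = np.argsort(composite)                      # lower composite loss = better 3PL
print(dict(zip("ABC", composite.round(1))), "best:", "ABC"[ranking[0]])
```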


Author(s):  
Osval Antonio Montesinos-López ◽  
José Cricelio Montesinos-López ◽  
Abelardo Montesinos-Lopez ◽  
Juan Manuel Ramírez-Alcaraz ◽  
Jesse Poland ◽  
...  

Abstract When multi-trait data are available, the preferred models are those that are able to account for correlations between phenotypic traits because when the degree of correlation is moderate or large, this increases the genomic prediction accuracy. For this reason, in this paper we explore Bayesian multi-trait kernel methods for genomic prediction and we illustrate the power of these models with three real datasets. The kernels under study were the linear, Gaussian, polynomial and sigmoid kernels; they were compared with the conventional Ridge regression and GBLUP multi-trait models. The results show that, in general, the Gaussian kernel method outperformed conventional Bayesian Ridge and GBLUP multi-trait linear models by 2.2 to 17.45% (datasets 1 to 3) in terms of prediction performance based on the mean square error of prediction. This improvement in terms of prediction performance of the Bayesian multi-trait kernel method can be attributed to the fact that the proposed model is able to capture non-linear patterns more efficiently than linear multi-trait models. However, not all kernels perform well in the datasets used for evaluation, which is why more than one kernel should be evaluated to be able to choose the best kernel.
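As a simplified stand-in for the Bayesian multi-trait kernel model, the sketch below builds a Gaussian kernel from a simulated marker matrix and fits a single-trait kernel ridge regression; the bandwidth heuristic and regularization value are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def gaussian_kernel(X1, X2, bandwidth):
    """Gaussian (RBF) kernel K_ij = exp(-||x_i - x_j||^2 / bandwidth)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / bandwidth)

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(150, 500)).astype(float)          # marker matrix (0/1/2 genotypes)
y = M[:, :10].sum(axis=1) + rng.normal(scale=1.0, size=150)    # simulated phenotype

train, test = np.arange(120), np.arange(120, 150)
h = np.median(((M[train][:, None] - M[train][None]) ** 2).sum(-1))   # median-distance bandwidth heuristic
K = gaussian_kernel(M[train], M[train], h)
alpha = np.linalg.solve(K + 1.0 * np.eye(len(train)), y[train])      # kernel ridge with lambda = 1
y_pred = gaussian_kernel(M[test], M[train], h) @ alpha
print("MSE of prediction:", np.mean((y[test] - y_pred) ** 2).round(3))
```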


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Zhiming Hu ◽  
Chong Liu

Grey prediction models have been widely used in various fields of society due to their high prediction accuracy; accordingly, the vast majority of existing grey models are designed for equidistant sequences, while limited research focuses on nonequidistant sequences. The development of nonequidistant grey prediction models has been slow due to their complex modeling mechanism. In order to further expand grey system theory, a new nonequidistant grey prediction model is established in this paper. To further improve the prediction accuracy of the NEGM(1, 1, t²) model, the background values of the improved nonequidistant grey model are optimized based on the Simpson formula; the resulting model is abbreviated as INEGM(1, 1, t²). Meanwhile, to verify the validity of the proposed model, it is applied to two real-world cases in comparison with three other benchmark models, and the modeling results are evaluated through several commonly used indicators. The results of the two cases show that the INEGM(1, 1, t²) model has the best prediction performance among these competitive models.
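The Simpson formula referred to above is the standard quadrature rule (b−a)/6·[f(a)+4f(m)+f(b)]. The toy sketch below only shows how it approximates the interval mean of the accumulated series that a background value represents; it does not reproduce the paper's exact construction for INEGM(1, 1, t²).

```python
import numpy as np

def simpson(f, a, b):
    """Simpson's rule: integral of f over [a, b] ≈ (b-a)/6 * (f(a) + 4 f(m) + f(b))."""
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

# The background value of a grey model approximates the mean of the accumulated
# series x1 over one (possibly nonequidistant) interval:
#     z(k) ≈ (1 / (t_k - t_{k-1})) * integral of x1(t) over [t_{k-1}, t_k].
# Simpson's rule evaluates that integral exactly for quadratics, e.g.:
x1 = lambda t: 0.5 * t ** 2 + 2.0 * t + 1.0        # a quadratic stand-in for x1(t)
t_prev, t_k = 1.0, 2.7                              # a nonequidistant interval
z_k = simpson(x1, t_prev, t_k) / (t_k - t_prev)
antideriv = lambda t: t ** 3 / 6 + t ** 2 + t       # antiderivative of x1
print(z_k, (antideriv(t_k) - antideriv(t_prev)) / (t_k - t_prev))   # identical values
```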


2020 ◽  
Author(s):  
Mostafa Hussien

The problem of selecting the modulation and coding scheme (MCS) that maximizes the system throughput, known as link adaptation, has been investigated extensively, especially for the IEEE 802.11 (WiFi) standards. Recently, deep learning has been widely adopted as an efficient solution to this problem. However, in model failure cases, predicting a higher-rate MCS can result in a failed transmission. In this case, retransmission is required, which largely degrades the system throughput. To address this issue, we formulate the adaptive modulation and coding (AMC) problem as a multi-label multi-class classification problem. The proposed formulation allows more control over what the model predicts in failure cases. In this context, we propose a simple, yet powerful, loss function to reduce the number of retransmissions due to higher-rate MCS classification errors. Since wireless channels change significantly with the surrounding environment, a huge dataset is generated to cover all possible propagation conditions. However, to reduce training complexity, we train the convolutional neural network (CNN) model using part of the dataset. The effect of different subdataset selection criteria on the classification accuracy is studied. It is shown that some criteria for dataset selection consistently behave better than others. To confirm the performance, we applied the proposed model to adapting the IEEE 802.11ax standard in outdoor propagation scenarios. The simulation results show that the proposed loss function reduces retransmissions by up to 50% compared to traditional loss functions. Finally, we propose an optimal subdataset selection criterion.
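One plausible form of such a loss (not necessarily the authors' exact formulation) is a cross-entropy term plus a penalty on the probability mass assigned to MCS indices above the true one, since those errors are the ones that trigger retransmissions; the sketch below is a NumPy reference computation with invented logits.

```python
import numpy as np

def mcs_aware_loss(logits, true_mcs, penalty=2.0):
    """Cross-entropy plus a penalty on probability assigned to MCS indices higher
    than the true one (which would cause a failed transmission and a retransmission).
    One plausible form, not necessarily the paper's loss function.
    """
    logits = np.asarray(logits, dtype=float)
    p = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)                            # softmax probabilities
    idx = np.arange(len(logits))
    ce = -np.log(p[idx, true_mcs] + 1e-12)                        # standard cross-entropy
    over = np.array([p[i, true_mcs[i] + 1:].sum() for i in idx])  # mass on too-high MCS
    return (ce + penalty * over).mean()

logits = np.array([[0.2, 1.5, 0.9, -0.3],       # 4 candidate MCS levels, 2 samples
                   [2.0, 0.1, -1.0, 0.5]])
true_mcs = np.array([1, 0])
print(mcs_aware_loss(logits, true_mcs))         # larger than plain CE when mass sits above the label
```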

