weighting problem
Recently Published Documents

TOTAL DOCUMENTS: 14 (FIVE YEARS: 1)
H-INDEX: 7 (FIVE YEARS: 0)

2018 ◽  
Vol 30 (4) ◽  
pp. 104-122 ◽  
Author(s):  
Saroj Kr Biswas ◽  
Debashree Devi ◽  
Manomita Chakraborty

This article describes how the enormous volume of data in the IoT calls for efficient data mining models for information extraction, classification, and mining hidden patterns from data. Case-based reasoning (CBR) is a learning, mining and problem-solving approach that solves a new problem by relating it to similar problems solved in the past. One issue with CBR is choosing feature weights for measuring the similarity among cases when retrieving similar past cases. Neural network (NN) pruning is a popular method that extracts feature weights from a trained neural network, without losing much of the generality of the training set, using four mechanisms: sensitivity, activity, saliency and relevance. However, training a NN on imbalanced data biases the classifier towards the majority class. This article therefore proposes a hybrid CBR model with random undersampling (RUS) and a cost-sensitive back-propagation neural network for the IoT environment to deal with the feature weighting problem on imbalanced data. The proposed model is validated on six real-life datasets, and the experimental results show that it outperforms other feature weighting methods.
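The paper's full hybrid architecture is not reproduced here, but the two rebalancing ingredients it names, RUS and cost-sensitive training, are simple to sketch. A minimal illustration in Python/NumPy, assuming a label array y; the function names and the inverse-frequency cost rule are illustrative, not taken from the paper:

```python
import numpy as np

def random_undersample(X, y, seed=None):
    """Balance a dataset by randomly undersampling every majority class
    down to the size of the minority class (RUS)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = []
    for c in classes:
        idx = np.flatnonzero(y == c)
        if len(idx) > n_min:
            idx = rng.choice(idx, size=n_min, replace=False)
        keep.append(idx)
    keep = np.concatenate(keep)
    return X[keep], y[keep]

def class_costs(y):
    """Inverse-frequency misclassification costs, a usual starting point
    for a cost-sensitive back-propagation loss (an assumption here, not
    the paper's exact cost matrix)."""
    classes, counts = np.unique(y, return_counts=True)
    return {c: len(y) / (len(classes) * n) for c, n in zip(classes, counts)}
```

In a cost-sensitive back-propagation NN, such per-class costs would multiply each example's contribution to the loss, so errors on the minority class are penalized more heavily.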


2017 ◽  
Vol 7 (2) ◽  
pp. 251-275
Author(s):  
Edgar Dobriban

Researchers in data-rich disciplines—think of computational genomics and observational cosmology—often wish to mine large bodies of $P$-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the $P$-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous ‘standard’ methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
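The convex optimization at the heart of Princessp is not reproduced here; below is a minimal sketch of the weighted multiple-testing step such weights typically feed into, the weighted Benjamini–Hochberg procedure of Genovese, Roeder and Wasserman (2006), assuming strictly positive weights:

```python
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg: run BH on p_i / w_i, where the
    weights are positive and normalized to have mean 1."""
    p = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w * len(w) / w.sum()              # enforce the mean-1 normalization
    q = p / w                             # reweighted p-values
    order = np.argsort(q)
    thresh = alpha * np.arange(1, len(q) + 1) / len(q)
    passed = q[order] <= thresh
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    reject = np.zeros(len(q), dtype=bool)
    reject[order[:k]] = True              # reject the k smallest q-values
    return reject
```

Upweighting a hypothesis (w_i > 1) lowers its effective p-value and makes it easier to reject; Princessp's contribution is to choose these weights by constrained convex optimization rather than by hand.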


2017 ◽  
Vol 2017 ◽  
pp. 1-8
Author(s):  
Ying Yan ◽  
Bin Suo

Due to the complexity of the system and a lack of expertise, epistemic uncertainty may be present in experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in the expert comments. Based on evidence theory, the various types of uncertain evaluation information are expressed uniformly through interval evidence structures. A similarity matrix between the interval evidence is constructed, and the experts' information is fused. Comment grades are quantified as interval numbers, and a cumulative probability function for evaluating the importance of the indices is constructed from the fused information. Finally, the index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty and therefore has good compatibility. It avoids both the difficulty of effectively fusing highly conflicting group decision-making information and the large information loss after fusion, and it retains the original expert judgments rather objectively throughout the procedure. Constructing the cumulative probability function and the random sampling process require no human intervention or judgment, so the method is easily implemented in computer programs, giving it a clear advantage when evaluating very large index systems.
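The evidence-fusion steps are beyond a short example, but the final step, turning interval-valued importance into crisp weights by Monte Carlo sampling, can be sketched. A minimal illustration, assuming each index's fused importance is already summarized as an interval; sampling uniformly within each interval is an assumption, not the paper's exact distribution:

```python
import numpy as np

def mc_interval_weights(intervals, n_samples=10_000, seed=None):
    """Estimate crisp index weights from interval-valued importance
    judgments: sample within each interval, normalize each draw to
    sum to 1, and average over draws."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(intervals, dtype=float).T
    draws = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    draws /= draws.sum(axis=1, keepdims=True)   # normalize each sample
    return draws.mean(axis=0)                   # expected normalized weights

# e.g. three indices whose fused importance lies in these intervals
print(mc_interval_weights([(0.6, 0.8), (0.3, 0.5), (0.1, 0.2)]))
```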


2016 ◽  
Vol 08 (02) ◽  
pp. 1650035
Author(s):  
Jean-Claude Bermond ◽  
Cristiana Gomes Huiban ◽  
Patricio Reyes

In this paper, we consider the problem of gathering information at a gateway in a radio mesh access network. Due to interference, calls (transmissions) cannot all be performed simultaneously, which leads us to define a round as a set of non-interfering calls. Following the work of Klasing, Morales and Pérennes, we model the problem as a Round Weighting Problem (RWP), in which the objective is to minimize the overall period of non-interfering call activations (the total number of rounds) while providing enough capacity to satisfy the throughput demand of the nodes. We develop tools to obtain lower and upper bounds for general graphs. More precise results are then obtained for a symmetric interference model based on graph distance, the distance-d interference model (the particular case d = 1 corresponds to the primary node model). We apply these tools to derive lower bounds for grids with the gateway either in the middle or in the corner, and we obtain upper bounds that in most cases match the lower bounds, using strategies that either route the demand of a single node or route flow from several source nodes simultaneously. We thereby obtain exact and constructive results for grids, in particular for the case of uniform demands, answering a question posed by Klasing, Morales and Pérennes.
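The paper's bounds are combinatorial, but the feasibility notion underlying a round is easy to state in code. A minimal sketch using networkx and one common formalization of the distance-d constraint (every sender must be at distance greater than d from every other call's receiver); the function and variable names are illustrative:

```python
import networkx as nx
from itertools import combinations

def is_valid_round(G, calls, d=1):
    """Check whether a set of calls (sender, receiver) can be activated
    together under the distance-d interference model: call i blocks
    call j whenever sender i is within distance d of receiver j."""
    for (s1, r1), (s2, r2) in combinations(calls, 2):
        if (nx.shortest_path_length(G, s1, r2) <= d or
                nx.shortest_path_length(G, s2, r1) <= d):
            return False
    return True

# 4x4 grid with the gateway in the corner (0, 0); two calls far enough
# apart can share a round under d = 1
G = nx.grid_2d_graph(4, 4)
print(is_valid_round(G, [((0, 1), (0, 0)), ((3, 3), (3, 2))], d=1))  # True
```

The RWP then asks for nonnegative weights on such rounds, of minimum total weight, whose induced capacity on each link suffices to route every node's demand to the gateway.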


Transport ◽  
2015 ◽  
Vol 30 (3) ◽  
pp. 298-306 ◽  
Author(s):  
Paola Carolina Bueno Cadena ◽  
José Manuel Vassallo Magro

Although Multi-Criteria Decision Analysis (MCDA) has made progress towards appraising and measuring the performance of smart and sustainable transport projects, important issues remain to be addressed, such as the comparison of incomparable quantities, the inherently subjective nature of qualitative assessment, the difficulty of identifying which impacts to include and how to measure them, and the choice of the corresponding weights. The trade-off among different sustainability criteria is the main unresolved matter, and this problem may reduce the accuracy of the decision-making process. This paper presents a new methodology for setting the weights of the sustainability criteria used in MCDA so as to reduce subjectivity and imprecision. We suggest eliciting the criteria weights from both expert preferences and the importance that the sustainability criteria have in the geographical and social context where the project is developed. This methodology is applied to a real case study to quantify sustainable practices associated with the design and construction of a new roadway in Spain. The outcome demonstrates that this approach to the weighting problem is significant and generally applicable in multi-criteria evaluation.
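The paper's elicitation procedure is richer than a single formula, but the core idea, fusing expert preferences with context-dependent importance, can be sketched. A minimal illustration; the multiplicative combination rule below is an assumption chosen for simplicity, not the paper's exact method:

```python
import numpy as np

def combined_weights(expert, context):
    """Fuse expert-preference weights with context-importance scores
    multiplicatively and renormalize, so a criterion carries weight
    only when both the experts and the local context rate it highly."""
    w = np.asarray(expert, dtype=float) * np.asarray(context, dtype=float)
    return w / w.sum()

# e.g. three sustainability criteria: experts favour the first,
# but the project's context favours the second
print(combined_weights([0.5, 0.3, 0.2], [0.2, 0.5, 0.3]))
```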


Author(s):  
Yanping Lu ◽  
Shaozi Li

This chapter aims at developing effective particle swarm optimization (PSO) algorithms for two problems commonly encountered in high-dimensional data clustering: the variable weighting problem in soft projected clustering with a known number of clusters k, and the problem of automatically determining the number of clusters k. Each problem is formulated as the minimization of a nonlinear continuous objective function subject to bound constraints. Special encoding schemes and search strategies are also proposed to tailor PSO to these two problems. Experimental results on both synthetic and real high-dimensional data show that the two proposed algorithms greatly improve cluster quality; in addition, their results are much less dependent on the initial cluster centroids. The experiments indicate the promising potential of PSO for clustering high-dimensional data.
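The chapter's encoding schemes are specific to projected clustering, but the bound-constrained PSO loop underneath is generic. A minimal sketch, assuming a scalar objective f over a box [lo, hi]; the hyperparameter values are common defaults, not the chapter's settings:

```python
import numpy as np

def pso_minimize(f, lo, hi, n_particles=30, n_iter=200,
                 inertia=0.7, c1=1.5, c2=1.5, seed=None):
    """Minimal bound-constrained PSO: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend both."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)        # enforce the bound constraints
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# e.g. recover five variable weights minimizing a toy quadratic objective
w_opt, f_opt = pso_minimize(lambda z: ((z - 0.3) ** 2).sum(), [0] * 5, [1] * 5)
```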


2002 ◽  
Vol 45 (9) ◽  
pp. 325-332 ◽  
Author(s):  
A. van Griensven ◽  
A. Francos ◽  
W. Bauwens

ESWAT – the Extended Soil and Water Assessment Tool – was developed to allow integrated modelling of the water quantity and quality processes in river basins. ESWAT is a physically based, semi-distributed model with a moderate-to-large number of parameters and input and output variables (depending on the disaggregation scheme). An auto-calibration procedure was implemented for the optimisation of the process parameters. The procedure is based on a new approach to multi-objective calibration and incorporates the Shuffled Complex Evolution algorithm. The optimisation uses a global criterion whereby several output variables can be taken into account simultaneously, and a statistical method enables the aggregation of the objective functions for the individual variables, thereby avoiding the weighting problem. To select the important parameters for the optimisation, a sensitivity analysis based on the One-factor-At-a-Time (OAT) design approach precedes the calibration. The sensitivity analysis and the calibration procedure are applied to the river Dender in Belgium, a river characterised by high pollution loads and long residence times in summer.
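ESWAT itself is far too large to reproduce, but the OAT screening step generalizes. A minimal sketch of One-factor-At-a-Time sensitivity, assuming a model that maps a parameter vector to a scalar output with a nonzero baseline; the relative-perturbation form below is one common variant, not necessarily ESWAT's exact implementation:

```python
import numpy as np

def oat_sensitivity(model, params, delta=0.05):
    """One-factor-At-a-Time screening: perturb each parameter by a
    relative delta while holding the others at their base values, and
    record the relative change in the model output."""
    base = np.asarray(params, dtype=float)
    y0 = model(base)                      # baseline output (assumed nonzero)
    sens = np.empty(len(base))
    for i in range(len(base)):
        p = base.copy()
        p[i] *= 1.0 + delta               # one factor at a time
        sens[i] = abs(model(p) - y0) / (abs(y0) * delta)
    return sens

# e.g. rank three parameters of a toy model by OAT sensitivity
print(oat_sensitivity(lambda p: 2.0 * p[0] + p[1] ** 2 + 0.1 * p[2],
                      [1.0, 2.0, 3.0]))
```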

