Double-Granule Conditional-Entropies Based on Three-Level Granular Structures

Entropy ◽  
2019 ◽  
Vol 21 (7) ◽  
pp. 657 ◽  
Author(s):  
Taopin Mu ◽  
Xianyong Zhang ◽  
Zhiwen Mo

Rough set theory is an important approach for data mining, and it draws on Shannon's information measures for uncertainty measurement. Existing local conditional-entropies exhibit the second-order feature but also have application limitations. By improving hierarchical granulation, this paper establishes double-granule conditional-entropies based on three-level granular structures (i.e., micro-bottom, meso-middle, macro-top), and then investigates their relevant properties. In terms of the decision table and its decision classification, double-granule conditional-entropies are first proposed at the micro-bottom level via the dual condition-granule system. Through successive granular summation and integration, they hierarchically evolve to the meso-middle and macro-top levels, covering partial and complete condition-granulations, respectively. The new measures are then given their number distribution, calculation algorithm, three bounds, and granulation non-monotonicity at the three corresponding levels. Finally, the hierarchical constructions and established properties are verified by decision table examples and data set experiments. Double-granule conditional-entropies carry the second-order characteristic and hierarchical granulation to deepen both the classical entropy system and local conditional-entropies, and thus serve as novel uncertainty measures for information processing and knowledge reasoning.
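The double-granule construction builds on the classical Shannon conditional entropy H(D | C) of a decision table. As a minimal sketch of that baseline quantity only (not the authors' double-granule measure), on an invented toy table:

```python
import math
from collections import defaultdict

def conditional_entropy(rows, cond_idx, dec_idx):
    """Shannon conditional entropy H(D | C) of a decision table.

    rows     : list of tuples, one per object
    cond_idx : indices of the condition attributes C
    dec_idx  : index of the decision attribute D
    """
    n = len(rows)
    # Partition the universe by the condition attributes (condition granules).
    granules = defaultdict(list)
    for row in rows:
        granules[tuple(row[i] for i in cond_idx)].append(row[dec_idx])
    h = 0.0
    for values in granules.values():
        p_granule = len(values) / n
        counts = defaultdict(int)
        for v in values:
            counts[v] += 1
        # Entropy of the decision distribution inside this granule.
        h_local = -sum((c / len(values)) * math.log2(c / len(values))
                       for c in counts.values())
        h += p_granule * h_local
    return h

table = [
    ('sunny', 'hot', 'no'),
    ('sunny', 'mild', 'no'),
    ('rainy', 'hot', 'yes'),
    ('rainy', 'mild', 'yes'),
    ('rainy', 'mild', 'no'),
]
print(round(conditional_entropy(table, (0, 1), 2), 4))  # → 0.4
```

Per the abstract, the proposed measures start instead from a dual (two-granule) condition system at the micro-bottom and aggregate by granular summation up to the meso-middle and macro-top levels.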

Author(s):  
Ayaho Miyamoto

This paper describes a method for acquiring rule-type knowledge from field inspection data on highway bridges. The proposed method enhances a traditional data mining technique by applying rough set theory to the traditional decision table reduction method. Field inspection data contain numerous inconsistent records, and the traditional reduction method simply removes exceptional and contradictory cases from the analysis, causing data loss. Instead of automatically removing all apparently contradictory cases, the proposed rough set approach determines whether the data really are contradictory and therefore must be removed. The new method reveals some generally unrecognized decision rules in addition to generally accepted knowledge. It has been tested with real data on bridge members, including girders and filled joints, in bridges owned and managed by a highway corporation in Japan. Finally, a computer program was developed to perform the calculation routines, and field inspection data on highway bridges are used to show the applicability of the proposed method.
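A minimal sketch of the first step described above: isolating apparently contradictory objects (identical condition attributes, different decisions) for closer inspection instead of discarding them outright. The inspection records are invented for illustration:

```python
from collections import defaultdict

def contradictory_objects(rows, cond_idx, dec_idx):
    """Return objects whose condition values coincide but decisions differ.

    A traditional decision table reduction would simply drop all such
    rows; the idea sketched here is to first collect them so they can
    be examined before any removal.
    """
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[i] for i in cond_idx)].append(row)
    conflicted = []
    for group in groups.values():
        decisions = {row[dec_idx] for row in group}
        if len(decisions) > 1:  # same conditions, different decisions
            conflicted.extend(group)
    return conflicted

inspections = [
    ('corrosion', 'severe', 'repair'),
    ('corrosion', 'severe', 'monitor'),  # conflicts with the row above
    ('crack', 'minor', 'monitor'),
]
print(len(contradictory_objects(inspections, (0, 1), 2)))  # → 2
```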


2013 ◽  
Vol 347-350 ◽  
pp. 3119-3122
Author(s):  
Yan Xue Dong ◽  
Fu Hai Huang

The basic theory of rough sets is given and a method for texture classification is proposed. According to gray-level co-occurrence matrix (GLCM) theory, texture features are extracted and 32 feature vectors are generated to form a decision table. After attribute discretization and knowledge reduction, a minimum set of rules is found for classification. Experimental results show that using rough set theory in texture classification, together with an appropriate discretization method and reduction algorithm, can yield better classification results.
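A rough, self-contained illustration of the co-occurrence features mentioned above: one GLCM for a single pixel offset and the classic contrast statistic, on a tiny invented 3-level image (the paper's 32-element feature set, discretization and reduction are not reproduced):

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
    return m

def contrast(m):
    """One classic GLCM texture feature: sum of (i - j)^2 * p(i, j)."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j] / total
               for i in range(len(m)) for j in range(len(m)))

img = [[0, 0, 1],
       [1, 2, 2],
       [2, 2, 0]]
M = glcm(img, levels=3)
print(contrast(M))  # → 1.0
```

In practice several offsets and several statistics (contrast, energy, homogeneity, correlation, ...) are combined to build the feature vectors that populate the decision table.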


2021 ◽  
pp. 1-21
Author(s):  
Wenguang Wang ◽  
Yonglin Xu ◽  
Chunhui Du ◽  
Yunwen Chen ◽  
Yijie Wang ◽  
...  

With the development of entity extraction, relationship extraction, knowledge reasoning, and entity linking, knowledge graph technology has been in full swing in recent years. To better promote the development of knowledge graph technology, especially for the Chinese language and the financial industry, we built a high-quality data set named the financial research report knowledge graph (FR2KG), and organized the automated financial knowledge graph construction evaluation at the 2020 China Knowledge Graph and Semantic Computing Conference (CCKS2020). FR2KG consists of 17,799 entities, 26,798 relationship triples, and 1,328 attribute triples, covering 10 entity types, 19 relationship types, and 6 attributes. Participants were required to develop a constructor that automatically builds a financial knowledge graph based on FR2KG. In addition, we summarize the technologies for automatically constructing knowledge graphs, and introduce the methods used by the winners and the results of this evaluation.


Mathematics ◽  
2019 ◽  
Vol 7 (8) ◽  
pp. 761 ◽  
Author(s):  
Rupšys

This study focuses on the stochastic differential calculus of Itô as an effective tool for the analysis of noise in forest growth and yield modeling. The idea of modeling a state (tree size) variable with a univariate stochastic differential equation is extended to a multivariate stochastic differential equation. The newly developed multivariate probability density function and its marginal univariate, bivariate and trivariate distributions, and conditional univariate, bivariate and trivariate probability density functions, can be applied to the modeling of tree size variables and various stand attributes such as mean diameter, height, crown base height, crown width, volume, basal area, slenderness ratio, increments, and more. This study introduces generalized multivariate interaction information measures based on differential entropy to capture multivariate dependencies between state variables. The present study experimentally confirms the effectiveness of using multivariate interaction information measures to reconstruct multivariate relationships of state variables from measurements obtained on a real-world data set.
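One common way to make such measures concrete is the closed-form differential entropy of a multivariate normal, h = ½ ln((2πe)^k |Σ|), from which a three-way interaction information follows (McGill sign convention; conventions differ across the literature). A sketch under the simplifying assumption of Gaussian state variables, with an invented covariance matrix:

```python
import math

def det(m):
    """Determinant of a 1x1, 2x2 or 3x3 matrix (enough for this sketch)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    if n == 2:
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def gaussian_entropy(cov):
    """Differential entropy (nats) of a multivariate normal."""
    k = len(cov)
    return 0.5 * math.log((2 * math.pi * math.e) ** k * det(cov))

def submatrix(cov, idx):
    return [[cov[i][j] for j in idx] for i in idx]

def interaction_information(cov):
    """I(X;Y;Z) for a trivariate normal (McGill sign convention)."""
    h = lambda idx: gaussian_entropy(submatrix(cov, idx))
    return (h([0]) + h([1]) + h([2])
            - h([0, 1]) - h([0, 2]) - h([1, 2])
            + h([0, 1, 2]))

# Invented covariance of, say, diameter, height and crown width.
cov = [[1.0, 0.5, 0.3],
       [0.5, 1.0, 0.4],
       [0.3, 0.4, 1.0]]
print(round(interaction_information(cov), 4))
```

For independent variables (diagonal Σ) the interaction information is zero, which is a handy sanity check.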


2018 ◽  
Vol 7 (2) ◽  
pp. 75-84 ◽  
Author(s):  
Shivam Shreevastava ◽  
Anoop Kumar Tiwari ◽  
Tanmoy Som

Feature selection is one of the most widely used pre-processing techniques for dealing with large data sets. In this context, rough set theory has been successfully applied to feature selection on discrete data sets, but for continuous data sets it requires discretization, which may cause information loss. Fuzzy rough set approaches have also been used successfully to resolve this issue, as they can handle continuous data directly. Moreover, almost all feature selection techniques handle only homogeneous data sets. In this article, the focus is on heterogeneous feature subset reduction. A novel intuitionistic fuzzy neighborhood model is proposed by combining intuitionistic fuzzy sets and neighborhood rough set models through an appropriate pair of lower and upper approximations, and is generalized for feature selection, supported by theory and its validation. An appropriate algorithm, along with an application to a data set, is also provided.
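For orientation, a minimal sketch of the classical neighborhood rough set dependency degree that the proposed intuitionistic fuzzy neighborhood model generalizes (not the article's model itself; the data are invented):

```python
def neighborhood(rows, i, delta):
    """Indices of samples within Euclidean distance delta of sample i."""
    xi = rows[i]
    return [j for j, xj in enumerate(rows)
            if sum((a - b) ** 2 for a, b in zip(xi, xj)) ** 0.5 <= delta]

def dependency(rows, labels, delta):
    """gamma(C, D): fraction of samples whose delta-neighborhood is
    decision-pure, i.e. lies inside a single decision class."""
    pos = [i for i in range(len(rows))
           if all(labels[j] == labels[i] for j in neighborhood(rows, i, delta))]
    return len(pos) / len(rows)

X = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.0)]
y = ['a', 'a', 'b', 'b']
print(dependency(X, y, delta=0.3))  # → 1.0 (every neighborhood is pure)
```

Feature subsets are then scored by how much of this dependency they preserve; the intuitionistic fuzzy extension replaces the crisp neighborhood with membership and non-membership degrees.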


2016 ◽  
Vol 2016 ◽  
pp. 1-7
Author(s):  
Zhizheng Liang

Feature scaling has attracted considerable attention during the past several decades because of its important role in feature selection. In this paper, a novel algorithm for learning the scaling factors of features is proposed. It first assigns a nonnegative scaling factor to each feature and then adopts a generalized performance measure to learn the optimal scaling factors. It is of interest to note that the proposed model can be transformed into a convex optimization problem, namely second-order cone programming (SOCP); the scaling factors obtained by our method are therefore globally optimal in this sense. Several experiments on simulated data, UCI data sets, and a gene data set demonstrate that the proposed method is more effective than previous methods.
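The paper obtains the factors by solving an SOCP. As a loose, self-contained illustration of the role the factors play, the sketch below scores candidate nonnegative factors with a crude between/within-class distance ratio standing in for the generalized performance measure, using a naive grid search rather than the convex formulation (all names and data here are invented):

```python
from itertools import product

def separability(X, y, w):
    """Mean between-class over mean within-class squared distance after
    scaling feature k by w[k] (a crude stand-in for the paper's
    generalized performance measure)."""
    def d2(a, b):
        return sum(wk * (ak - bk) ** 2 for wk, ak, bk in zip(w, a, b))
    between = within = 0.0
    nb = nw = 0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if y[i] == y[j]:
                within += d2(X[i], X[j]); nw += 1
            else:
                between += d2(X[i], X[j]); nb += 1
    return (between / nb) / (within / nw + 1e-12)

# Feature 0 separates the classes; feature 1 is pure noise.
X = [(0.0, 3.1), (0.2, 0.2), (1.0, 3.0), (1.2, 0.1)]
y = [0, 0, 1, 1]

grid = [0.0, 0.5, 1.0]
best = max(product(grid, repeat=2), key=lambda w: separability(X, y, w))
print(best)  # the noisy feature gets weight 0
```

Casting a criterion of this kind as an SOCP is what gives the paper's method its global-optimality guarantee; the grid search above has no such guarantee.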


2008 ◽  
Vol 178 (1) ◽  
pp. 181-202 ◽  
Author(s):  
Yuhua Qian ◽  
Jiye Liang ◽  
Deyu Li ◽  
Haiyun Zhang ◽  
Chuangyin Dang

2014 ◽  
Vol 521 ◽  
pp. 418-422 ◽  
Author(s):  
Yan Xu ◽  
Xin Chen

Transient Stability Assessment (TSA) aims to quickly assess the stability of a power system operating state. This paper introduces rough set theory and clustering analysis to assess power system transient stability. First, the operating parameters and fault locations are taken as feature attributes based on the characteristics of power system transient stability. The K-means algorithm is used to discretize the continuous attributes among the feature attributes. Then the feature attributes and stability types are taken as conditional attributes and decision attributes, respectively, and an initial decision table is established. Finally, rough set theory is used to form the final decision table, and rules for TSA are obtained. The IEEE 9-bus system is employed to demonstrate the validity of the proposed approach.
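A minimal sketch of the discretization step described above: plain 1-D k-means on a single continuous feature attribute, with invented per-unit load values (the rough set reduction that follows is not shown):

```python
def kmeans_1d(values, k, iters=20):
    """Plain 1-D k-means; returns sorted cluster centroids."""
    # Seed with evenly spaced order statistics.
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        buckets = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)),
                          key=lambda c: abs(v - centroids[c]))
            buckets[nearest].append(v)
        centroids = [sum(b) / len(b) if b else centroids[i]
                     for i, b in enumerate(buckets)]
    return sorted(centroids)

def discretize(value, centroids):
    """Map a continuous attribute value to its nearest centroid's index."""
    return min(range(len(centroids)), key=lambda c: abs(value - centroids[c]))

loads = [0.31, 0.29, 0.33, 0.71, 0.69, 0.70]  # e.g. a per-unit load level
cents = kmeans_1d(loads, k=2)
print([discretize(v, cents) for v in loads])  # → [0, 0, 0, 1, 1, 1]
```

Each continuous column of the decision table is replaced by these cluster indices before the rough set reduction is applied.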


2014 ◽  
Vol 533 ◽  
pp. 237-241
Author(s):  
Xiao Jing Liu ◽  
Wei Feng Du ◽  
Xiao Min

Measuring the significance of attributes and attribute reduction is one of the core topics of rough set theory. The classical rough set model is based on equivalence relations and is suitable for dealing with discrete-valued attributes. Fuzzy rough set theory, which integrates fuzzy set and rough set theory and extends equivalence relations to fuzzy relations, can deal with fuzzy-valued attributes. By analyzing three problems of FRAR, a widely used attribute reduction algorithm for fuzzy decision tables, this paper proposes a new reduction algorithm that better overcomes these problems and can handle larger fuzzy decision tables. Experimental results show that the new reduction algorithm is much faster than the FRAR algorithm.
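For context, a minimal sketch of the kind of fuzzy-rough lower approximation that fuzzy attribute reduction algorithms evaluate, using a simple similarity relation and the standard inf-max (Kleene-Dienes style) formula on one invented normalized attribute (this is not the FRAR algorithm itself):

```python
def fuzzy_similarity(a, b, sigma=1.0):
    """Simple fuzzy similarity on a single normalized attribute."""
    return max(0.0, 1.0 - abs(a - b) / sigma)

def lower_approx(x_idx, values, member, sigma=1.0):
    """Fuzzy-rough lower approximation membership of object x in a
    crisp decision class A: inf over y of max(1 - R(x, y), A(y))."""
    return min(max(1.0 - fuzzy_similarity(values[x_idx], v, sigma), member[j])
               for j, v in enumerate(values))

values = [0.1, 0.2, 0.8, 0.9]    # one normalized condition attribute
decision = [1.0, 1.0, 0.0, 0.0]  # crisp membership in class "yes"

# Positive region: each object's best lower-approximation membership.
pos = [max(lower_approx(i, values, decision),
           lower_approx(i, values, [1 - d for d in decision]))
       for i in range(len(values))]
gamma = sum(pos) / len(pos)      # fuzzy dependency degree
print(round(gamma, 2))  # → 0.65
```

Reduction algorithms such as FRAR repeatedly recompute a dependency of this kind over candidate attribute subsets, which is why their per-evaluation cost dominates the running time on large fuzzy decision tables.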

