The Principle of Justifiable Granularity and an Optimization of Information Granularity Allocation as Fundamentals of Granular Computing

2011 · Vol 7 (3) · pp. 397-412
Author(s): Witold Pedrycz

In spite of their striking diversity, numerous tasks and architectures of intelligent systems, such as those permeating multivariable data analysis, decision-making processes along with their underlying models, and recommender systems, exhibit two evident commonalities: they promote (a) human centricity and (b) vigorously engage perceptions (rather than plain numeric entities) in the realization and subsequent use of the systems. Information granules play a pivotal role in such settings. Granular Computing delivers a cohesive framework supporting the formation of information granules and facilitating their processing. The author exploits two essential concepts of Granular Computing. The first deals with the construction of information granules. The second helps endow the constructs of intelligent systems with much-needed conceptual and modeling flexibility. The study elaborates in detail on three representative studies. The first, focused on the Analytic Hierarchy Process (AHP) used in decision-making, shows how an optimal allocation of granularity helps improve the quality of the solution and facilitates collaborative activities in models of group decision-making. The second is concerned with a granular interpretation of temporal data, where the role of information granularity is profoundly visible in effectively supporting a human-centric description of relationships existing in the data. The third concerns the formation of granular logic descriptors on the basis of a family of logic descriptors.
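To make the first study's theme concrete, here is a minimal sketch, not Pedrycz's actual algorithm, of how a granularity budget might be allocated across the entries of an AHP reciprocal comparison matrix: each numeric entry is widened into an interval, the per-entry widths share a fixed overall budget, and a random search looks for the allocation whose intervals admit the most consistent matrix. The interval protocol, the search strategy, and all function names are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def consistency_index(A):
    """Saaty's CI = (lambda_max - n) / (n - 1) for a positive reciprocal matrix."""
    n = A.shape[0]
    lam = np.max(np.linalg.eigvals(A).real)
    return (lam - n) / (n - 1)

def sample_from_intervals(A, e):
    """Draw a reciprocal matrix whose upper-triangle entries come from the
    granular intervals [a_ij*(1 - e_ij), a_ij*(1 + e_ij)]."""
    n = A.shape[0]
    B = np.ones_like(A)
    for i in range(n):
        for j in range(i + 1, n):
            B[i, j] = rng.uniform(A[i, j] * (1 - e[i, j]), A[i, j] * (1 + e[i, j]))
            B[j, i] = 1.0 / B[i, j]
    return B

def allocate_granularity(A, eps=0.2, n_alloc=200, n_draws=50):
    """Random search for an allocation e (mean width fixed at eps) whose
    intervals admit the most consistent reciprocal matrix."""
    n = A.shape[0]
    best_e, best_ci = None, np.inf
    for _ in range(n_alloc):
        e = rng.uniform(0, 1, size=(n, n))
        e *= eps / e.mean()              # keep the mean width at eps (a simple budget proxy)
        np.clip(e, 0.0, 0.99, out=e)     # keep interval bounds strictly positive
        ci = min(consistency_index(sample_from_intervals(A, e))
                 for _ in range(n_draws))
        if ci < best_ci:
            best_e, best_ci = e, ci
    return best_e, best_ci

# Toy 4x4 reciprocal comparison matrix (upper triangle chosen arbitrarily).
A = np.array([[1, 3, 5, 2],
              [1/3, 1, 2, 1/2],
              [1/5, 1/2, 1, 1/4],
              [1/2, 2, 4, 1]], dtype=float)
e, ci = allocate_granularity(A)
print(f"numeric CI = {consistency_index(A):.4f}, best granular CI = {ci:.4f}")
```

On the toy matrix the granular CI falls below the numeric CI, which is the intended effect: widening the entries gives decision-makers enough flexibility to reach a more consistent, and in the group setting more agreeable, comparison matrix.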


Author(s): Witold Pedrycz

In this study, we highlight some fundamental issues of knowledge management and cast them in the setting of Granular Computing (GrC). We show how its formal constructs, information granules, are instrumental in knowledge representation and in specifying its level of abstraction.


Author(s): Georg Peters

It is well accepted that in many real-life situations information is not certain and precise but rather uncertain or imprecise. Probability theory emerged in the 17th and 18th centuries to describe uncertainty; Bernoulli, Laplace and Pascal are considered the fathers of the field. Today probability can still be considered the prevalent theory for describing uncertainty. However, in 1965 Zadeh challenged probability theory by introducing fuzzy sets as a theory dealing with uncertainty (Zadeh, 1965). Since then it has been debated whether probability and fuzzy set theory are complementary or rather competitive (Zadeh, 1995). Sometimes fuzzy set theory is even considered a subset of probability theory and therefore dispensable. Although the discussion on the relationship between probability and fuzziness has lost the intensity of its early years, it continues today. Nevertheless, fuzzy set theory has established itself as a central approach to tackling uncertainty. For a discussion of the relationship between probability and fuzziness the reader is referred to, e.g., Dubois and Prade (1993), Ross et al. (2002) or Zadeh (1995). In the meantime, further ideas for dealing with uncertainty have been suggested. For example, Pawlak introduced rough sets in the early 1980s (Pawlak, 1982), a theory that has attracted increasing attention in recent years. For a comparison of probability, fuzzy sets and rough sets the reader is referred to Lin (2002). Presently, research is being conducted to develop a Generalized Theory of Uncertainty (GTU) as a framework for any kind of uncertainty, whether based on probability, fuzziness or other grounds (Zadeh, 2005). Cornerstones of this theory are the concepts of information granularity (Zadeh, 1979) and generalized constraints (Zadeh, 1986). In this context the term Granular Computing was first suggested by Lin (1998a, 1998b); however, it still lacks a unique and well-accepted definition. Zadeh (2006a), for example, colorfully calls granular computing "ballpark computing" or, more precisely, "a mode of computation in which the objects of computation are generalized constraints".
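For readers new to the two granulation ideas contrasted above, a minimal sketch may help: a fuzzy set assigns graded membership to elements, while a rough set brackets a crisp target set between lower and upper approximations built from equivalence classes. The universe, the attribute, and the membership function below are toy data chosen purely for illustration.

```python
def fuzzy_tall(height_cm):
    """Graded membership in the fuzzy set 'tall' (piecewise linear)."""
    return min(1.0, max(0.0, (height_cm - 160) / 30))

# Universe granulated by an equivalence relation: "same decade of age".
universe = {"a": 23, "b": 27, "c": 34, "d": 38, "e": 41}
blocks = {}
for name, age in universe.items():
    blocks.setdefault(age // 10, set()).add(name)
classes = list(blocks.values())          # [{'a','b'}, {'c','d'}, {'e'}]

target = {"b", "c", "d"}                 # the crisp set we try to describe
lower = set().union(*(c for c in classes if c <= target))   # surely inside
upper = set().union(*(c for c in classes if c & target))    # possibly inside

print(fuzzy_tall(175))   # 0.5 -- partial membership, no granulation involved
print(lower, upper)      # {'c', 'd'} and {'a', 'b', 'c', 'd'} bracket the target
```

Both constructs granulate or grade the universe rather than describe it point by point, which is the common thread that the GTU framework and Granular Computing build on.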


2021 · pp. 1-16
Author(s): Wen Sheng Du

Granular computing is a relatively new platform for constructing, describing and processing information or knowledge. For crisp information granulation, the universe is decomposed into granules by binary relations on the universe, such as preorder, tolerance and equivalence relations. A knowledge structure is composed of all the information granules induced by the relation that performs the granulation. This paper establishes a novel theoretical framework for measuring the information granularity of knowledge structures. First, two new relations between knowledge structures are introduced through their respective Boolean relation matrices: the granular equality relation is defined via an orthogonal transformation whose transformation matrix is a permutation matrix, and the granularly finer relation is obtained by combining the classical finer relation with the orthogonal transformation. Then, it is demonstrated that the simplified knowledge structure base with the granularly finer relation is a partially ordered set, which can be represented by a Hasse diagram. Subsequently, an axiomatic definition of information granularity is proposed that satisfies the constraints imposed by these two relations. Moreover, a general form of the information granularity is given, and some existing measures are proved to be its special cases. Finally, as an application of the proposed measure, an attribute significance measure is developed based on the information granularity.
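A minimal sketch of the underlying objects, under simplifying assumptions rather than the paper's exact axiomatics: from a Boolean relation matrix over a finite universe one can read off the neighborhood granules forming the knowledge structure, evaluate a classical granularity measure GK (the kind of measure the paper's general form recovers as a special case), and check that conjugating the matrix by a permutation matrix, the transformation behind granular equality, leaves the measure unchanged. Function names and the toy relations are illustrative.

```python
import numpy as np

def granules(M):
    """Neighborhood granules R(x) = { y : M[x, y] = 1 }, one per row of M."""
    return [frozenset(np.flatnonzero(row)) for row in M]

def granularity(M):
    """GK(R) = (1/|U|^2) * sum_x |R(x)|: 1/|U| for the identity relation
    (finest structure), 1 for the all-ones relation (coarsest)."""
    n = M.shape[0]
    return sum(len(g) for g in granules(M)) / n**2

# Two equivalence relations on U = {0, 1, 2, 3}: R1 partitions U into
# {0, 1} and {2, 3}; R2 lumps everything into a single block.
R1 = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [0, 0, 1, 1],
               [0, 0, 1, 1]])
R2 = np.ones((4, 4), dtype=int)

print(granularity(R1))   # 0.5 -- the finer knowledge structure
print(granularity(R2))   # 1.0 -- the coarsest possible

# Granular equality is tested via a permutation matrix P: the structures of
# R and P R P^T coincide up to a relabeling of the universe.
P = np.eye(4, dtype=int)[[2, 3, 0, 1]]
print(granularity(P @ R1 @ P.T) == granularity(R1))   # True
```

The permutation-invariance check mirrors the paper's motivation for granular equality: the measure should depend on the sizes and arrangement of granules, not on how the universe's elements happen to be labeled.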



