Novel Developments in Granular Computing
Latest Publications


TOTAL DOCUMENTS: 19 (FIVE YEARS: 0)

H-INDEX: 3 (FIVE YEARS: 0)

Published By IGI Global

ISBN: 9781605663241, 9781605663258

Author(s):  
Jianchao Han

Granular computing as a methodology of problem solving has a long history of application across a variety of fields, but dedicated research interest in granular computing has developed only in recent decades. So far, most granular computing researchers address the mathematical foundations and/or the computational models of granular computing. However, granular computing is not only a computing model for computer-centered problem solving, but also a thinking model for human-centered problem solving. Fortunately, some authors have presented the structures of such models and investigated various perspectives of granular computing from different application points of view. In this paper we present the principles, models, components, strategies, and applications of granular computing. Our focus is on the applications of granular computing in various aspects and phases of the object-oriented software development process, including user requirement specification and analysis, software system analysis and design, algorithm design, structured programming, software testing, and system deployment design. Our objective is to reveal the importance and usefulness of granular computing as a human-centered problem-solving strategy in the object-oriented software development process.


Author(s):  
Witold Pedrycz ◽  
Athanasios Vasilakos

In contrast to numeric models, granular models produce results in the form of information granules. Owing to the granularity of information these constructs dwell upon, such models become highly transparent and interpretable as well as operationally effective. Given also the fact that information granules come with a clearly defined semantics, granular models are often referred to as linguistic models. The design of the linguistic models studied in this paper exhibits two important features. First, the model is constructed on a basis of information granules assembled in the form of a web of associations between the granules formed in the output and input spaces. Given the semantics of information granules, we envision that a blueprint of the granular model can be formed effortlessly and with very limited computing overhead. Second, the interpretability of the model is retained, as the entire construct dwells on conceptual entities of well-defined semantics. The granulation of available data is accomplished by a carefully designed mechanism of fuzzy clustering which takes into consideration specific problem-driven requirements expressed by the designer at the time of the conceptualization of the model. We elaborate on the so-called context-based (conditional) Fuzzy C-Means (cond-FCM, for short) to demonstrate how fuzzy clustering is engaged in the design process. The clusters formed in the input space are induced (implied) by the context fuzzy sets predefined in the output space. The context fuzzy sets are defined in advance by the designer of the model, so this design facet provides an active way of forming the model and in this manner becomes instrumental in determining the perspective at which a certain phenomenon is to be captured and modeled. This stands in sharp contrast with most modeling approaches, where the development is somewhat passive, being predominantly based on the existing data.
The linkages between the fuzzy clusters induced by the given context fuzzy sets in the output space are combined to form a blueprint of the overall granular model. The membership functions of the context fuzzy sets are used as granular weights (connections) of the output processing unit (a linear neuron), which subsequently leads to the granular output of the model, thus identifying a feasible region of possible output values for the given input. While the above design is quite generic, addressing the way in which information granules are assembled into the model, we discuss further refinements which include (a) optimization of the context fuzzy sets, and (b) inclusion of a bias in the linear neuron at the output layer.
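The cond-FCM iteration described above can be sketched as follows. The update rule comes from the standard conditional FCM formulation, in which the memberships of each data point sum to its context degree f_k rather than to 1; all names, parameters, and the random initialization here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def cond_fcm(X, f, c=2, m=2.0, n_iter=50, seed=0):
    """Sketch of context-based (conditional) Fuzzy C-Means.

    X : (n, d) input data; f : (n,) context membership degrees
    induced by a context fuzzy set defined in the output space.
    Memberships for each data point sum to f[k] instead of 1.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U = f * U / U.sum(axis=0)              # normalize each column to f[k]
    for _ in range(n_iter):
        W = U ** m
        V = W @ X / W.sum(axis=1, keepdims=True)   # prototype update
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        inv = D ** (-2.0 / (m - 1.0))
        U = f * inv / inv.sum(axis=0)      # conditional membership update
    return V, U
```

With f identically equal to 1 this reduces to plain FCM; a context fuzzy set in the output space simply rescales how strongly each datum participates in forming the input-space clusters.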


Author(s):  
Hung Son Nguyen ◽  
Andrzej Jankowski ◽  
James F. Peters ◽  
Andrzej Skowron ◽  
Jaroslaw Stepaniuk ◽  
...  

The rapid expansion of the Internet has resulted not only in the ever-growing amount of data stored therein, but also in the burgeoning complexity of the concepts and phenomena pertaining to those data. This change has been vividly compared by the renowned Stanford statistician J. H. Friedman (Friedman, 1997) to the advances in human mobility from the era of traveling on foot to the era of jet travel. These essential changes in data have brought about new challenges in the discovery of data mining methods, especially in the treatment of data that increasingly involve complex processes eluding classic modeling paradigms. "Hot" datasets such as biomedical, financial, or net user behavior data are just a few examples. Mining such temporal or stream data is a focal point on the agenda of many research centers and companies worldwide (see, e.g., Roddick et al., 2001; Aggarwal, 2007). In the data mining community, there is rapidly growing interest in developing methods for process mining, e.g., for the discovery of structures of temporal processes from observed sample data. Research on process mining (e.g., Unnikrishnan et al., 2006; de Medeiros et al., 2007; Wu, 2007; Borrett et al., 2007) has been undertaken by many renowned centers worldwide. This research is also related to functional data analysis (see, e.g., Ramsay & Silverman, 2002), cognitive networks (see, e.g., Papageorgiou & Stylios, 2008), and dynamical system modeling, e.g., in biology (see, e.g., Feng et al., 2007). We outline an approach to the discovery of processes from data and domain knowledge. The proposed approach to the discovery of process models is based on rough-granular computing. In particular, we discuss how changes along trajectories of such processes can be discovered from sample data and domain knowledge.


Author(s):  
Carlos Pinheiro ◽  
Fernando Gomide ◽  
Otávio Carpinteiro ◽  
Isaías Lima

This chapter suggests a new method to develop rule-based models using concepts from rough set theory. The rules encapsulate relations among variables and provide a mechanism to link granular descriptions of the models with their computational procedures. An estimation procedure is suggested to compute values from the granular representations encoded by rule sets. The method is useful for developing granular models of static and dynamic nonlinear systems and processes. Numerical examples illustrate the main features and usefulness of the method.
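To make the rough set machinery behind such rule-based models concrete, the sketch below (an illustrative reading, not the authors' procedure) computes the lower and upper approximations of a target concept under the indiscernibility relation; blocks inside the lower approximation would yield certain rules, while boundary blocks would yield only possible rules:

```python
from collections import defaultdict

def approximations(objects, attrs, target):
    """Lower/upper approximation of `target` under indiscernibility on `attrs`.

    objects : dict mapping object name -> attribute-value dict
    target  : set of object names (the concept to approximate)
    """
    blocks = defaultdict(set)
    for name, row in objects.items():
        blocks[tuple(row[a] for a in attrs)].add(name)   # equivalence blocks
    lower, upper = set(), set()
    for b in blocks.values():
        if b <= target:        # block entirely inside the concept
            lower |= b
        if b & target:         # block overlapping the concept
            upper |= b
    return lower, upper
```

The boundary region `upper - lower` contains exactly the objects whose attribute values cannot decide membership in the concept.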


Author(s):  
Yan Zhao

A granular structure includes granules, levels, hierarchies and multiple hierarchies. Classification can be modeled by granular computing in terms of these components. More specifically, classification tasks can be understood as a search of a certain search space represented by a granule network. This chapter discusses the basic components of a granular structure, followed by the modeling of classification in terms of these components. Top-down and bottom-up strategies for searching for classification solutions within different granule networks are discussed.


Author(s):  
Dariusz Malyszko ◽  
Jaroslaw Stepaniuk

Clustering, understood as a data grouping technique, represents a fundamental procedure in image processing. The present chapter concerns combining the concepts of rough sets and entropy measures in the area of image segmentation. In this context, comprehensive investigations into rough set entropy based clustering techniques for image segmentation have been performed. Segmentation comprises low-level image transformation routines concerned with partitioning an image into distinct, disjoint and homogeneous regions. Among segmentation routines, threshold-based algorithms and clustering algorithms are most often applied in practical solutions when there is a pressing need for simplicity and robustness. Rough entropy threshold-based segmentation algorithms combine optimal threshold determination with rough region approximations and region entropy measures. In the present chapter, new algorithmic schemes, RECA, are proposed in the area of rough entropy based partitioning routines. Rough entropy clustering incorporates the notion of rough entropy into clustering models, taking advantage of the ability to deal with some degree of uncertainty in the analyzed data. The RECA algorithmic schemes usually performed as robustly as standard k-means algorithms; at the same time, in many runs they yielded slightly better performance, making future implementation in clustering applications possible.
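The rough entropy thresholding idea that such schemes build on can be sketched as follows. The roughness and entropy formulas follow the rough-entropy thresholding literature; the choice of non-overlapping square windows as granules, and all names here, are our own illustrative assumptions rather than the RECA specification:

```python
import numpy as np

def rough_entropy_threshold(img, win=2):
    """Sketch of rough-entropy-based threshold selection.

    Granules are non-overlapping win x win windows. For a threshold t,
    a window is in the lower approximation of the object if all its
    pixels exceed t, and in the upper approximation if any do (dually
    for the background). Roughness R = 1 - |lower|/|upper|; the
    threshold maximizing -(e/2)(R_o ln R_o + R_b ln R_b) is kept.
    """
    h, w = img.shape
    gran = img[:h - h % win, :w - w % win].reshape(h // win, win, w // win, win)
    gran = gran.transpose(0, 2, 1, 3).reshape(-1, win * win)  # one row per window
    best_t, best_re = 0, -np.inf
    for t in range(int(img.min()), int(img.max())):
        obj_low = np.sum(np.all(gran > t, axis=1))
        obj_up = np.sum(np.any(gran > t, axis=1))
        bg_low = np.sum(np.all(gran <= t, axis=1))
        bg_up = np.sum(np.any(gran <= t, axis=1))
        if obj_up == 0 or bg_up == 0:
            continue
        r_o = 1.0 - obj_low / obj_up
        r_b = 1.0 - bg_low / bg_up
        re = -(np.e / 2.0) * sum(r * np.log(r) for r in (r_o, r_b) if r > 0)
        if re > best_re:
            best_re, best_t = re, t
    return best_t
```

Windows straddling the object/background boundary land in an upper but not a lower approximation, so thresholds producing many such ambiguous windows score high roughness, which is what the entropy criterion trades off.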


Author(s):  
Yan Chen ◽  
Yan-Qing Zhang

For most Web searching applications, queries are commonly ambiguous because words or phrases have different linguistic meanings for different Web users. Conventional keyword-based search engines cannot disambiguate queries to provide relevant results matching Web users' intents. Traditional Word Sense Disambiguation (WSD) methods use statistical models or ontology-based knowledge systems to measure associations among words, and the contexts of queries are used for disambiguation. However, because numerous combinations of words may appear in queries and documents, it is difficult to extract relations among concepts for all possible combinations. Moreover, queries are usually short, so contexts in queries do not always provide enough information for disambiguation. Therefore, traditional WSD methods are not sufficient to provide accurate search results for ambiguous queries. In this chapter, a new model, the Granular Semantic Tree (GST), is introduced for representing associations among concepts more conveniently than the traditional WSD methods. Additionally, users' preferences are used to provide personalized search results that better adapt to users' unique intents. Fuzzy logic is used to determine the most appropriate concepts related to queries based on contexts and users' preferences. Finally, Web pages are analyzed by the GST model: the concepts of pages for the queries are evaluated, and the pages are re-ranked according to the similarity of concepts between pages and queries.
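The fuzzy combination of context and preference can be illustrated with a deliberately tiny sketch. Everything here is hypothetical: the sense names, the membership values, and the aggregation operator (algebraic product) are our assumptions, not the chapter's GST formulas:

```python
def disambiguate(senses, context_match, preference):
    """Hypothetical fuzzy sense selection in the spirit of the GST model.

    context_match : fuzzy degree to which each candidate sense fits the
                    query context; preference : fuzzy degree of the user's
                    prior interest in each sense (0.5 when unknown).
    The sense with the highest combined membership is chosen.
    """
    scores = {s: context_match[s] * preference.get(s, 0.5) for s in senses}
    return max(scores, key=scores.get)
```

For an ambiguous query like "bank", a strong financial context together with a user's recorded preference for finance pages would select the financial sense even when the query itself is too short to decide.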


Author(s):  
James F. Peters

The problem considered in this chapter is how to discover perceptual granules that are, in some sense, near each other. One approach to solving this problem comes from near set theory. This is made clear by considering various nearness relations that define coverings of sets of perceptual objects that are near each other. A perceptual granule is something that is graspable by the senses or by the mind. Every perceptual granule is represented by a set of perceptual objects that have their origin in the physical world; this means that a perceptual granule does not include the empty set. Hence, each family of perceptual granules is a dual chopped lattice. Both perceptual near sets and tolerance near sets are presented, together with the theory and applications of perceptually near granules.
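A minimal sketch of how a tolerance relation generates such a covering: two perceptual objects count as tolerant when their feature vectors (values of probe functions) differ by at most eps, and the resulting neighbourhoods overlap rather than partition the set. The Euclidean norm and all names here are our assumptions, not the chapter's definitions:

```python
import numpy as np

def tolerance_classes(features, eps):
    """Tolerance neighbourhood of each perceptual object.

    features : list of feature vectors (one per object), eps : tolerance.
    Returns, for each object i, the set of indices within eps of it;
    the neighbourhoods form a covering, not a partition.
    """
    F = np.asarray(features, dtype=float)
    # pairwise Euclidean distances between all feature vectors
    D = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)
    return [set(np.flatnonzero(D[i] <= eps)) for i in range(len(F))]
```

Because tolerance is reflexive and symmetric but not transitive, an object can belong to several neighbourhoods at once, which is exactly why families of perceptual granules form coverings rather than partitions.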


Author(s):  
Qing Liu

In this chapter, we analyze the semantics of rough logic. Related operations and properties of the semantics based on rough logic are also discussed, and reasoning over the semantics is studied. The significance of studying the semantics of rough logic is that it will hopefully offer a new idea for applications of classical logic and other nonstandard logics, as well as a new theory and methodology for problem solving in artificial intelligence.


Author(s):  
Guoyin Wang ◽  
Jun Hu ◽  
Qinghua Zhang ◽  
Xianquan Liu ◽  
Jiaqing Zhou

Granular computing (GrC) is a label for theories, methodologies, techniques, and tools that make use of granules in the process of problem solving. The philosophy of granular computing has appeared in many fields, and it is likely to play a more and more important role in data mining. Rough set theory and fuzzy set theory, as two very important paradigms of granular computing, are often used to process vague information in data mining. In this chapter, based on the view that data is also a format for knowledge representation, a new understanding of data mining, domain-oriented data-driven data mining (3DM), is first introduced. Its key idea is that data mining is a process of knowledge transformation. Then, the relationship between 3DM and GrC, especially from the view of rough sets and fuzzy sets, is discussed. Finally, some examples are used to illustrate how to solve real problems in data mining using granular computing. Combining rough set theory and fuzzy set theory, a flexible way of processing incomplete information systems is introduced first. Then, the uncertainty measure of covering-based rough sets is studied by converting a covering into a partition using an equivalence domain relation. Thirdly, a highly efficient attribute reduction algorithm is developed by translating set operations on granules into logical operations on bit strings with bitmap technology. Finally, two rule generation algorithms are introduced, and experimental results show that the rule sets generated by these two algorithms are simpler than those of other similar algorithms.
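The bitmap idea can be sketched in a few lines: each equivalence block becomes a bit string (a Python integer), so union of granules is bitwise OR and inclusion is a bitwise AND test. The greedy elimination below is our own illustrative reduction scheme, not the chapter's algorithm:

```python
from collections import defaultdict

def blocks_as_bits(rows, attrs):
    """Equivalence blocks of the indiscernibility relation on `attrs`,
    each encoded as an int with bit k set when object k is in the block."""
    groups = defaultdict(int)
    for k, row in enumerate(rows):
        groups[tuple(row[a] for a in attrs)] |= 1 << k
    return list(groups.values())

def positive_region(rows, attrs, dec_blocks):
    """Union (bitwise OR) of condition blocks contained (bitwise AND test)
    in some decision block -- set operations become logic on bit strings."""
    pos = 0
    for b in blocks_as_bits(rows, attrs):
        if any(b & d == b for d in dec_blocks):
            pos |= b
    return pos

def reduct(rows, attrs, dec):
    """Greedy sketch: drop attributes whose removal preserves the positive region."""
    dec_blocks = blocks_as_bits(rows, [dec])
    full = positive_region(rows, attrs, dec_blocks)
    red = list(attrs)
    for a in list(attrs):
        trial = [x for x in red if x != a]
        if trial and positive_region(rows, trial, dec_blocks) == full:
            red = trial
    return red
```

Because every granule is a single machine word (or big integer), containment and union checks over large universes cost a handful of bitwise operations instead of explicit set manipulation, which is the source of the efficiency the chapter points to.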

