A New Dynamic Neighbourhood-Based Semantic Dissimilarity Measure for Ontology

2019 ◽  
Vol 15 (3) ◽  
pp. 24-41 ◽  
Author(s):  
Sathiya Balasubramanian ◽  
Geetha T. V.

The semantic web is a global initiative that employs ontologies to offer rich, semantics-based knowledge representation. Concepts in these ontologies are explored to find (dis)similarities between them using (dis)similarity measures. Despite the existence of numerous (dis)similarity measures, none has dynamically determined the quantum of information required to discover (dis)similarities between concepts. In this article, a new, efficient, feature-based semantic dissimilarity measure is proposed whose prime novelty lies in the dynamic selection of the semantic neighbourhood (features) of the concepts. The neighbourhood is selected dynamically according to the local density of the concept and the density of the ontology, as determined by the proposed density coefficient. Further, the proposed measure scales down the dissimilarity value according to the depth of the concept pair, using the novel Depth Coefficient.
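
A minimal sketch of how such a measure could be organised, in Python. The toy ontology, the radius rule, and the exact forms of the density and depth coefficients below are illustrative assumptions; the paper defines its own coefficients:

```python
from statistics import mean

# Toy ontology: concept -> set of directly linked concepts (parents, children).
ONTOLOGY = {
    "entity": {"animal", "plant"},
    "animal": {"entity", "dog", "cat"},
    "plant":  {"entity", "tree"},
    "dog":    {"animal"},
    "cat":    {"animal"},
    "tree":   {"plant"},
}
DEPTH = {"entity": 0, "animal": 1, "plant": 1, "dog": 2, "cat": 2, "tree": 2}

def density_coefficient(concept):
    """Local degree of the concept relative to the ontology's mean degree."""
    return len(ONTOLOGY[concept]) / mean(len(n) for n in ONTOLOGY.values())

def neighbourhood(concept):
    """Expand the semantic neighbourhood; denser concepts get a larger
    radius (a hypothetical rule standing in for the paper's own)."""
    radius = max(1, round(density_coefficient(concept)))
    frontier, seen = {concept}, {concept}
    for _ in range(radius):
        frontier = {m for c in frontier for m in ONTOLOGY[c]} - seen
        seen |= frontier
    return seen

def dissimilarity(a, b):
    """Feature-based dissimilarity (1 - Jaccard overlap of neighbourhoods),
    scaled down for deeper concept pairs by an assumed depth coefficient."""
    fa, fb = neighbourhood(a), neighbourhood(b)
    base = 1 - len(fa & fb) / len(fa | fb)
    depth_coeff = 2 / (2 + DEPTH[a] + DEPTH[b])   # deeper pair -> smaller value
    return base * depth_coeff

print(dissimilarity("dog", "cat"))    # ~0.22: siblings, shared neighbourhood
print(dissimilarity("dog", "tree"))   # ~0.33: concepts from distinct branches
```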

2019 ◽  
Vol 16 (05) ◽  
pp. 1950029 ◽
Author(s):  
Mohammed Abdul Rahman AlShehri ◽  
Shailendra Mishra

Software-defined network (SDN) controller selection is a key challenge for the network administrator. In SDN, the control plane is an isolated process that operates at the control layer. The controller provides a global view of the entire network and supports applications and services. The three parameters considered for controller selection are productivity, suitability for a campus network, and open-source availability. In SDN, it is vital to have a good controller for the efficient processing of all requests made by the switches and for good network behaviour. To select the best controller against these parameters, decision logic has to be developed that allows a comparison of the available controllers. Therefore, in this research we propose a methodology that uses the analytic hierarchy process (AHP) to find the best controller. The approach has been studied and verified on the large organisational network of Al-Majmaah University, Saudi Arabia, and is found to be effective, increasing network performance significantly.
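
A minimal sketch of the AHP step, assuming Saaty's 1-9 comparison scale: criterion weights come from the principal eigenvector of a pairwise comparison matrix and are checked for consistency. The judgment values, controller scores, and controller list below are illustrative placeholders, not the study's actual data:

```python
import numpy as np

# Pairwise comparison of the three criteria from the study
# (productivity, campus-network fit, open source). The 1-9 Saaty
# judgments below are illustrative, not the paper's values.
criteria = ["productivity", "campus network", "open source"]
A = np.array([
    [1,   3,   5],     # productivity vs. the others
    [1/3, 1,   2],
    [1/5, 1/2, 1],
], dtype=float)

# AHP weights: principal eigenvector of A, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Consistency ratio (RI = 0.58 for a 3x3 matrix per Saaty's table);
# judgments are conventionally accepted when CR < 0.1.
lam = np.max(np.real(eigvals))
CI = (lam - len(A)) / (len(A) - 1)
CR = CI / 0.58
print(dict(zip(criteria, w.round(3))), "CR =", round(CR, 3))

# Hypothetical per-criterion scores for candidate controllers
# (each column already normalised to sum to 1).
controllers = ["ONOS", "OpenDaylight", "Ryu", "Floodlight"]
S = np.array([
    [0.35, 0.30, 0.20],
    [0.30, 0.35, 0.25],
    [0.20, 0.15, 0.30],
    [0.15, 0.20, 0.25],
])
ranking = sorted(zip(controllers, S @ w), key=lambda x: -x[1])
print(ranking)   # controllers ordered by weighted AHP score
```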


2000 ◽  
Vol 11 (1) ◽  
pp. 73-81 ◽  
Author(s):  
V. Subramaniam ◽  
G. K. Lee ◽  
G. S. Hong ◽  
Y. S. Wong ◽  
T. Ramesh

2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Jia-Rou Liu ◽  
Po-Hsiu Kuo ◽  
Hung Hung

Large-p-small-n datasets are commonly encountered in modern biomedical studies. To detect the difference between two groups, conventional methods fail to apply, owing to the instability of variance estimates in the t-test and a high proportion of tied values in AUC (area under the receiver operating characteristic curve) estimates. The significance analysis of microarrays (SAM) may also be unsatisfactory, since its performance is sensitive to its tuning parameter, whose selection is not straightforward. In this work, we propose a robust rerank approach to overcome the above-mentioned difficulties. In particular, we obtain a rank-based statistic for each feature based on the concept of “rank-over-variable.” Techniques of “random subset” and “rerank” are then iteratively applied to rank the features, and the leading features are selected for further study. The proposed rerank approach is especially applicable to large-p-small-n datasets. Moreover, it is insensitive to the selection of tuning parameters, an appealing property for practical implementation. Simulation studies and real data analysis of pooling-based genome-wide association (GWA) studies demonstrate the usefulness of our method.
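
A rough sketch of the idea in Python, under stated assumptions: the per-feature statistic here is a simple contrast of mean within-group ranks, and the subset size and iteration count are arbitrary choices; the paper's exact statistic and rerank schedule differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_statistic(X, y):
    """'Rank-over-variable' (illustrative form): rank each feature's values
    across samples, then contrast the mean ranks of the two groups."""
    ranks = X.argsort(axis=0).argsort(axis=0)   # per-feature ranks 0..n-1
    return np.abs(ranks[y == 1].mean(axis=0) - ranks[y == 0].mean(axis=0))

def rerank(X, y, n_iter=50, subset_frac=0.5):
    """Accumulate each feature's rank position over random, group-balanced
    sample subsets, then order the features by the accumulated ranks."""
    totals = np.zeros(X.shape[1])
    for _ in range(n_iter):
        idx = np.concatenate([
            rng.choice(np.flatnonzero(y == g),
                       size=int((y == g).sum() * subset_frac), replace=False)
            for g in (0, 1)
        ])
        stat = rank_statistic(X[idx], y[idx])
        totals += stat.argsort().argsort()    # rerank features on this subset
    return totals.argsort()[::-1]             # feature indices, strongest first

# Toy large-p-small-n data: p = 200 features, n = 20 samples,
# with a mean shift planted in the first five features of group 1.
X = rng.normal(size=(20, 200))
y = np.repeat([0, 1], 10)
X[y == 1, :5] += 2.0
print(rerank(X, y)[:10])   # features 0-4 should dominate the top ranks
```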


Author(s):  
Cristina Garrigós

Forgetting and remembering are as inevitably linked as life and death. Sometimes forgetting is caused by a biological disorder or brain damage, or it is the product of an unconscious desire derived from a traumatic event (psychological repression). But in some cases we can motivate forgetting consciously (thought suppression). It is through the conscious repression of memories that we can find self-preservation and move forward, although this means that we create a fable of our lives, as Nietzsche says in his essay “On the Uses and Disadvantages of History for Life” (1997). In Jonathan Franzen’s novel Purity (2015), forgetting is an active and conscious process by which the characters choose to forget certain episodes of their lives in order to construct new identities. The erased memories include murder, economic privileges derived from illegal or unethical commercial dealings, and dark sexual episodes. The obsession with forgetting the past links the lives of the main characters and structures the narrative of the novel. The motivated erasure of memories thus becomes a way for the characters to survive and face the present according to a (fake) narrative that they have constructed. But is motivated forgetting possible? Can one completely suppress facts in an active way? This paper analyses the role of forgetting in Franzen’s novel in relation to the need in our contemporary society to deny, hide, or erase uncomfortable data from our historical or personal archives; the need to make stories disappear that we do not want to accept, recognise, and much less make known to the public. This is related to how we manage information in the age of technology, to the “selection” of what is to be the official story, and to how we rewrite our own history.

