Multiagent Hierarchical Cognition Difference Policy for Multiagent Cooperation

Algorithms ◽  
2021 ◽  
Vol 14 (3) ◽  
pp. 98
Author(s):  
Huimu Wang ◽  
Zhen Liu ◽  
Jianqiang Yi ◽  
Zhiqiang Pu

Multiagent cooperation is one of the most attractive research fields in multiagent systems, and researchers have made many attempts to promote cooperative behavior. However, several issues remain, such as complex interactions among different groups of agents and redundant communication from irrelevant agents, which hinder the learning and convergence of cooperative behaviors. To address these limitations, a novel method called multiagent hierarchical cognition difference policy (MA-HCDP) is proposed in this paper. It includes a hierarchical group network (HGN), a cognition difference network (CDN), and a soft communication network (SCN). HGN is designed to distinguish the underlying information in the observations of diverse groups (a friendly group, an enemy group, and an object group) and to extract a different high-dimensional state representation for each group. CDN, built on a variational auto-encoder, allows each agent to choose its neighbors (communication targets) adaptively according to its environment cognition difference. SCN handles the complex interactions among the agents with a soft attention mechanism. Simulation results demonstrate the superior effectiveness of our method compared with existing methods.
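The soft attention idea behind a communication network like SCN can be illustrated with a minimal sketch. This is generic scaled dot-product attention over neighbours' messages, not the authors' actual network; the function name, shapes, and temperature parameter are illustrative assumptions:

```python
import numpy as np

def soft_attention_messages(query, keys, values, temperature=1.0):
    """Aggregate neighbours' messages with scaled dot-product soft attention.

    query:  (d,)   the agent's own state embedding
    keys:   (n, d) embeddings of the n neighbours
    values: (n, d) the messages those neighbours send
    Returns the attention-weighted aggregate message, shape (d,).
    """
    scores = keys @ query / (np.sqrt(len(query)) * temperature)
    scores -= scores.max()                       # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ values

# Toy example: one agent attending over 3 neighbours with 4-d messages.
rng = np.random.default_rng(0)
q = rng.normal(size=4)
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
msg = soft_attention_messages(q, K, V)
```

The soft weights let gradients flow to every neighbour while still concentrating the aggregate on the most relevant ones; a very high temperature degrades to a plain average.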

2021 ◽  
Vol 108 (Supplement_6) ◽  
Author(s):  
A Rajgor ◽  
A McQueen ◽  
T Ali ◽  
E Aboagye ◽  
B Obara ◽  
...  

Abstract Background Radiomics is a novel method of extracting data from medical images that are difficult to visualise with the naked eye. This technique transforms digital images that hold information on pathology into high-dimensional data for analysis. Radiomics has the potential to enhance laryngeal cancer care and, to date, has shown promise in various other specialties. Aim The aim of this review is to summarise the applications of this technique to laryngeal cancer and its potential future benefits. Method A comprehensive systematic review-informed search of the MEDLINE and EMBASE online databases was undertaken. The keywords ‘laryngeal cancer’ OR ‘larynx’ OR ‘larynx cancer’ OR ‘head and neck cancer’ were combined with ‘radiomic’ OR ‘signature’ OR ‘machine learning’ OR ‘artificial intelligence’. Additional articles were obtained from bibliographies using the ‘snowball’ method. Results Seventeen articles were identified that evaluated the role of radiomics in laryngeal cancer. Two studies affirmed the value of radiomics in improving the accuracy of staging, whilst fifteen studies highlighted its potential prognostic value in laryngeal cancer. Twelve (of thirteen) studies incorporated an array of different head and neck cancers in the analysis, and only one study assessed laryngeal cancer exclusively. Conclusions The literature to date has various limitations, including small and heterogeneous cohorts incorporating patients with head and neck cancers of distinct anatomical subsites and stages. The lack of uniform data on solely laryngeal cancer and radiomics makes drawing conclusions difficult, although these studies have affirmed its value. Further large prospective studies exclusively in laryngeal cancer are required to unlock its true potential.


2022 ◽  
Vol 13 (1) ◽  
pp. 1-17
Author(s):  
Ankit Kumar ◽  
Abhishek Kumar ◽  
Ali Kashif Bashir ◽  
Mamoon Rashid ◽  
V. D. Ambeth Kumar ◽  
...  

Detection of outliers or anomalies is one of the vital issues in pattern-driven data mining. Outlier detection identifies the inconsistent behavior of individual objects. It is an important area of the data mining field with several applications, such as detecting credit card fraud, discovering hacking, and uncovering criminal activities. It is necessary to develop tools that uncover the critical information embedded in extensive data. This paper investigates a novel method for detecting cluster outliers in a multidimensional dataset, capable of identifying the clusters and outliers for datasets containing noise. The proposed method can detect the groups and outliers left by the clustering process, such as instant irregular sets of clusters (C) and outliers (O), to boost the results. The results obtained after applying the algorithm to the dataset improved in terms of several parameters. For the comparative analysis, the average accuracy and recall values are computed. The average accuracy is 74.05% for the existing COID algorithm, while our proposed algorithm achieves 77.21%. The average recall values are 81.19% and 89.51% for the existing and proposed algorithms, respectively, which shows that the proposed method is more efficient than the existing COID algorithm.
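The general idea of separating cluster members from outliers can be sketched with a simple distance-to-centroid rule. This is a generic illustration, not the paper's algorithm or COID; the threshold factor and centroids are assumptions:

```python
import numpy as np

def cluster_outliers(X, centroids, factor=2.0):
    """Assign points to their nearest centroid and flag distant points.

    A point is an outlier if its distance to the nearest centroid exceeds
    `factor` times the mean nearest-centroid distance over the dataset.
    Returns (labels, outlier_mask).
    """
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)          # cluster membership
    nearest = d.min(axis=1)            # distance to the chosen centroid
    mask = nearest > factor * nearest.mean()
    return labels, mask

# Two tight clusters plus one far-away noise point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [20.0, 20.0]])
C = np.array([[0.05, 0.0], [5.05, 5.0]])
labels, outliers = cluster_outliers(X, C)
```

Here only the last point is flagged: it is assigned to the nearest cluster but lies far beyond the typical within-cluster distance.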


2019 ◽  
Author(s):  
Oriol Tintó Prims ◽  
Mario C. Acosta ◽  
Andrew M. Moore ◽  
Miguel Castrillo ◽  
Kim Serradell ◽  
...  

Abstract. Mixed-precision approaches can provide substantial speed-ups for both computing- and memory-bound codes with little effort. Most scientific codes have over-engineered their numerical precision, leading to a situation where models use more resources than required, without any indication of where these resources are unnecessary and where they are really needed. Consequently, there is an opportunity to obtain performance benefits from a more appropriate choice of precision, and the only thing needed is a method to determine which real variables can be represented with fewer bits without affecting the accuracy of the results. This paper presents a novel method that enables modern and legacy codes to benefit from a reduction of precision without sacrificing accuracy. It builds on a simple idea: if we can measure how reducing the precision of a group of variables affects the outputs, we can evaluate the level of precision this group of variables needs. Modifying and recompiling the code for each case to be evaluated would require a prohibitive amount of effort. Instead, the method presented in this paper relies on a tool called the Reduced Precision Emulator (RPE), which significantly streamlines the process. Using the RPE and a list of parameters containing the precision to be used for each real variable in the code, it is possible, within a single binary, to emulate the effect of a specific choice of precision on the outputs. Once we can emulate the effects of reduced precision, we can design the tests required to obtain knowledge about all the variables in the model. The number of possible combinations is prohibitively large and impossible to explore exhaustively.
The alternative of screening the variables individually can give some insight into the precision needed by each variable, but more complex interactions involving several variables may remain hidden. Instead, we use a divide-and-conquer algorithm that identifies the parts that cannot handle reduced precision and builds a set of variables that can. The method has been put to the test using two state-of-the-art ocean models, NEMO and ROMS, with very promising results. This information is crucial for subsequently building an actual mixed-precision version of the code that will deliver the promised performance benefits.
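Both ingredients can be sketched in a few lines. The first function emulates reduced precision by zeroing trailing mantissa bits of an IEEE-754 double (a rough stand-in: the real RPE is a Fortran library and rounds rather than truncates). The second is a simplified divide-and-conquer search in which `ok(subset)` stands for "run the model with this subset at reduced precision and check the outputs against a tolerance"; the real algorithm is more involved:

```python
import struct

def reduce_precision(x, sbits):
    """Emulate a float with `sbits` significand bits by zeroing the trailing
    mantissa bits of an IEEE-754 double (truncation, not rounding)."""
    bits = struct.unpack('<Q', struct.pack('<d', float(x)))[0]
    mask = ~((1 << (52 - sbits)) - 1) & 0xFFFFFFFFFFFFFFFF
    return struct.unpack('<d', struct.pack('<Q', bits & mask))[0]

def search(variables, ok):
    """Divide and conquer: return a subset of variables that tolerates
    reduced precision together. `ok` is the model-accuracy check."""
    if ok(variables):
        return list(variables)
    if len(variables) == 1:
        return []                    # this variable needs full precision
    mid = len(variables) // 2
    return search(variables[:mid], ok) + search(variables[mid:], ok)

# Toy stand-in for the model check: pretend only 'c' needs full precision.
reducible = search(['a', 'b', 'c'], lambda subset: 'c' not in subset)
```

With 10 significand bits the relative error of the truncation is below 2^-10, which is the kind of perturbation whose effect on the model outputs the tests measure.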


2019 ◽  
Vol 7 (1) ◽  
pp. 5
Author(s):  
Akhila CNV ◽  
Ravi Prakash A ◽  
Rajini Kanth M ◽  
Sreenath G ◽  
Sowmya K ◽  
...  

Most human diseases result from complex interactions occurring at the cellular and molecular level. Research today is focused on revealing precisely how cells evolve into pathogenesis, across a vast array of research fields including molecular biology, imaging techniques, etc. One such field recently advancing worldwide is “organotyping”, the successor of two-dimensional cell culture. Miniature organs and disease models can be produced from cells with the ability to proliferate and differentiate, by adopting definite protocols. Organoids are potential tools to probe human biology and disease; they may thereby change how diseases are studied and treated, in a way more beneficial to the patient. Organoids are also used in vaccine production, cancer research, microbiology, tissue regeneration, drug testing, etc. Clinical trials can be devastating and may cost the lives of the patients included in a study; organoids can therefore be included in clinical trial protocols, through which the outcome of the study can be estimated. They open the doors to newer research methods and innovations, which are urgently needed in the present-day scenario, where new diseases are emerging and the diseases that already exist are not yet cured.


2019 ◽  
Vol 9 (14) ◽  
pp. 2841 ◽  
Author(s):  
Nan Zhang ◽  
Xueyi Gao ◽  
Tianyou Yu

Attribute reduction is a challenging problem in rough set theory, which has been applied in many research fields, including knowledge representation, machine learning, and artificial intelligence. The main objective of attribute reduction is to obtain a minimal attribute subset that can retain the same classification or discernibility properties as the original information system. Recently, many attribute reduction algorithms, such as positive region preservation, generalized decision preservation, and distribution preservation, have been proposed. The existing attribute reduction algorithms for generalized decision preservation are mainly based on the discernibility matrix and are, thus, computationally very expensive and hard to use in large-scale and high-dimensional data sets. To overcome this problem, we introduce the similarity degree for generalized decision preservation. On this basis, the inner and outer significance measures are proposed. By using heuristic strategies, we develop two quick reduction algorithms for generalized decision preservation. Finally, theoretical and experimental results show that the proposed heuristic reduction algorithms are effective and efficient.
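The heuristic style described above (significance-driven forward selection instead of a discernibility matrix) can be sketched on a toy decision table. This sketch uses the classical positive-region dependency degree as the significance measure, not the paper's similarity degree for generalized decision preservation; all names are illustrative:

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group row indices into equivalence classes by their values on `attrs`."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency(rows, decisions, attrs):
    """Positive-region dependency degree: the fraction of rows whose
    equivalence class under `attrs` is decision-consistent."""
    if not attrs:
        return 0.0
    pos = sum(len(b) for b in partition(rows, attrs)
              if len({decisions[i] for i in b}) == 1)
    return pos / len(rows)

def greedy_reduct(rows, decisions, all_attrs):
    """Heuristic forward selection: repeatedly add the attribute whose
    addition raises the dependency most, until the full degree is reached."""
    red = []
    target = dependency(rows, decisions, all_attrs)
    while dependency(rows, decisions, red) < target:
        best = max((a for a in all_attrs if a not in red),
                   key=lambda a: dependency(rows, decisions, red + [a]))
        red.append(best)
    return red

# Tiny decision table: attribute 0 alone determines the decision.
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
decisions = [0, 0, 1, 1]
```

The greedy loop avoids building the full discernibility matrix, which is what makes heuristic reduction feasible on large, high-dimensional data sets.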


2020 ◽  
Vol 130 (1) ◽  
pp. 178-194 ◽  
Author(s):  
Margot Bon ◽  
Carla Bardua ◽  
Anjali Goswami ◽  
Anne-Claire Fabre

Abstract Phenotypic integration and modularity are concepts that represent the pattern of connectivity of morphological structures within an organism. Integration describes the coordinated variation of traits, and analyses of these relationships among traits often reveal the presence of modules, sets of traits that are highly integrated but relatively independent of other traits. Phenotypic integration and modularity have been studied at both the evolutionary and static level across a variety of clades, although most studies thus far have focused on amniotes, and especially mammals. Using a high-dimensional geometric morphometric approach, we investigated the pattern of cranial integration and modularity of the Italian fire salamander (Salamandra salamandra giglioli). We recovered a highly modular pattern, but this pattern did not support either entirely developmental or functional hypotheses of cranial organisation, possibly reflecting complex interactions amongst multiple influencing factors. We found that size had no significant effect on cranial shape, and that morphological variance of individual modules had no significant relationship with degree of within-module integration. The pattern of cranial integration in the fire salamander is similar to that previously recovered for caecilians, with highly integrated jaw suspensorium and occipital regions, suggesting possible conservation of patterns across lissamphibians.


2016 ◽  
Vol 2016 ◽  
pp. 1-8 ◽  
Author(s):  
Song Guo ◽  
Chunhua Liu ◽  
Peng Zhou ◽  
Yanling Li

Tyrosine sulfation is a ubiquitous protein posttranslational modification in which sulfate groups are added to tyrosine residues. It plays significant roles in various physiological processes in eukaryotic cells. To explore the molecular mechanism of tyrosine sulfation, one of the prerequisites is to correctly identify possible tyrosine sulfation residues. In this paper, a novel method is presented to predict protein tyrosine sulfation residues from primary sequences. By means of informative feature construction and an elaborate feature selection and parameter optimization scheme, the proposed predictor achieved promising results and outperformed many other state-of-the-art predictors. Using the optimal feature subset, the proposed method achieved a mean MCC of 94.41% on the benchmark dataset and an MCC of 90.09% on the independent dataset. The experimental performance indicates that our method can be effective in identifying important protein posttranslational modifications and that the feature selection scheme will be powerful in protein functional residue prediction research.
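The MCC figures quoted above come from confusion-matrix counts; for reference, a small generic helper (not the authors' code) computes it:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts.

    Ranges from -1 (total disagreement) through 0 (chance level)
    to +1 (perfect prediction). Returns 0.0 when undefined.
    """
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Unlike plain accuracy, MCC stays informative on the imbalanced datasets typical of functional-residue prediction, which is why it is the metric reported here.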


Author(s):  
Zhenqiu Liu ◽  
Feng Jiang ◽  
Guoliang Tian ◽  
Suna Wang ◽  
Fumiaki Sato ◽  
...  

In this paper, we propose a novel method for sparse logistic regression with non-convex Lp (p < 1) regularization. Based on a smooth approximation, we develop several fast algorithms for learning the classifier that are applicable to high-dimensional datasets such as gene expression data. To the best of our knowledge, these are the first algorithms to perform sparse logistic regression with an Lp and an elastic net (Le) penalty. The regularization parameters are chosen by maximizing the area under the ROC curve (AUC) on the test data. Experimental results on methylation and microarray data attest to the accuracy, sparsity, and efficiency of the proposed algorithms. Biomarkers identified with our methods are compared with those in the literature. Our computational results show that Lp logistic regression (p < 1) outperforms L1 logistic regression and SCAD SVM. Software is available upon request from the first author.
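The smooth-approximation idea can be sketched as plain gradient descent on the logistic loss plus a smoothed Lp term. The surrogate lam * sum((w^2 + eps)^(p/2)) is a common smooth stand-in for the non-convex |w|^p; this is an illustration of the general technique, not the authors' algorithms, and all hyperparameter values are assumptions:

```python
import numpy as np

def fit_lp_logistic(X, y, p=0.5, lam=0.1, eps=1e-8, lr=0.1, iters=2000):
    """Gradient descent on logistic loss plus a smoothed Lp penalty
    lam * sum((w_j^2 + eps)^(p/2)), a differentiable surrogate for
    the non-convex |w_j|^p with p < 1."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-(X @ w)))      # sigmoid predictions
        grad_loss = X.T @ (prob - y) / len(y)      # logistic-loss gradient
        grad_pen = lam * p * w * (w ** 2 + eps) ** (p / 2 - 1)
        w -= lr * (grad_loss + grad_pen)
    return w

# Toy data: feature 0 separates the classes, feature 1 is pure noise.
rng = np.random.default_rng(1)
n = 200
y = rng.integers(0, 2, n).astype(float)
X = np.column_stack([y + 0.1 * rng.normal(size=n),  # informative
                     rng.normal(size=n)])           # noise
w = fit_lp_logistic(X, y)
```

Because the Lp gradient grows sharply near zero, the noise weight is driven toward zero much more aggressively than under an L1 penalty, which is the source of the extra sparsity.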


2009 ◽  
Vol 2009 ◽  
pp. 1-8 ◽  
Author(s):  
Eimad E. Abusham ◽  
E. K. Wong

A novel method based on local nonlinear mapping, called Locally Linear Discriminate Embedding (LLDE), is presented in this research. LLDE preserves the local linear structure of a high-dimensional space and obtains a compact data representation, as accurately as possible, in a low-dimensional embedding space before recognition. For computational simplicity and fast processing, a Radial Basis Function (RBF) classifier is integrated with LLDE. The RBF classifier operates on the low-dimensional embedding with reference to the variance of the data. To validate the proposed method, the CMU-PIE database was used; the experiments conducted in this research demonstrate the efficiency of the proposed method in face recognition compared with linear and non-linear approaches.
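The classification stage can be illustrated with a minimal Parzen-style RBF classifier applied to already-embedded points. This is a generic sketch, not the paper's exact classifier; the embedding step is omitted and `gamma` is an assumed kernel-width parameter:

```python
import numpy as np

def rbf_classify(X_train, y_train, X_test, gamma=1.0):
    """Score each class by the summed Gaussian (RBF) similarity of a test
    point to that class's training points, then pick the best class."""
    classes = np.unique(y_train)
    # Squared distances between every test and training point.
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)                      # (n_test, n_train) kernel
    scores = np.stack([K[:, y_train == c].sum(1) for c in classes], axis=1)
    return classes[scores.argmax(1)]

# Toy "embedded" faces: two well-separated classes in 2-D.
Xtr = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
ytr = np.array([0, 0, 1, 1])
Xte = np.array([[0.0, 0.5], [5.0, 5.5]])
preds = rbf_classify(Xtr, ytr, Xte)
```

In practice `gamma` would be tied to the variance of the embedded data, which matches the abstract's note that the classifier is run "with reference to the variance of the data".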

