Spikyball Sampling: Exploring Large Networks via an Inhomogeneous Filtered Diffusion

Algorithms, 2020, Vol 13 (11), pp. 275
Author(s): Benjamin Ricaud, Nicolas Aspert, Volodymyr Miz

Studying real-world networks such as social networks or web networks is a challenge. These networks often combine a complex, highly connected structure with a large size. We propose a new approach for large-scale networks that automatically samples user-defined relevant parts of a network. Starting from a few selected places in the network and a reduced set of expansion rules, the method adopts a filtered breadth-first search that expands through edges and nodes matching these properties. Moreover, the expansion is performed over a random subset of neighbors at each step to further mitigate the overwhelming number of connections that may exist in large graphs. This carries the image of a "spiky" expansion. We show that this approach generalizes and extends previous exploration sampling methods such as Snowball and Forest Fire. We demonstrate its ability to capture groups of nodes with high interactions while discarding weakly connected nodes, which are often numerous in social networks and may hide important structures.
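The expansion scheme described above can be sketched in a few lines. The graph representation, filter predicate, and per-node neighbor budget `k` below are illustrative assumptions, not the authors' exact interface:

```python
import random

def spiky_sample(graph, seeds, node_filter, k, max_layers):
    """Filtered BFS that, at each layer, expands each frontier node
    through at most k randomly chosen neighbors passing node_filter."""
    sampled = set(seeds)
    frontier = list(seeds)
    for _ in range(max_layers):
        next_frontier = []
        for node in frontier:
            # keep only unvisited neighbors matching the expansion rule
            candidates = [n for n in graph.get(node, [])
                          if node_filter(n) and n not in sampled]
            # the "spiky" step: follow a random subset, not every neighbor
            for n in random.sample(candidates, min(k, len(candidates))):
                sampled.add(n)
                next_frontier.append(n)
        frontier = next_frontier
        if not frontier:
            break
    return sampled
```

With `k` at least the maximum degree and a filter that always accepts, this reduces to plain Snowball sampling, which is the sense in which the method generalizes earlier exploration schemes.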

2019, Vol 7 (5), pp. 641-658
Author(s): Zeynab Samei, Mahdi Jalili

Abstract Many real-world complex systems can be better modelled as multiplex networks, where the same individuals develop connections in multiple layers. Examples include social networks between individuals on multiple social networking platforms, and transportation networks between cities based on air, rail and road networks. Accurately predicting spurious links in multiplex networks is a challenging issue. In this article, we show that one can effectively use interlayer information to build an algorithm for spurious link prediction. We propose a similarity index that combines intralayer similarity with interlayer relevance for the link prediction purpose. The proposed similarity index is used to rank the node pairs and identify those that are likely to be spurious. Our experimental results show that the proposed metric is much more accurate than intralayer similarity measures in correctly predicting the spurious links. The proposed method is unsupervised and has low computational complexity, and thus can be effectively applied for spurious link prediction in large-scale networks.
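A toy version of such an index can be sketched as follows. The specific combination below (common-neighbor similarity plus a count of other layers that also contain the link, mixed by a weight `alpha`) is an illustrative assumption, not the exact index proposed in the article:

```python
def common_neighbors(adj, u, v):
    """Intralayer similarity: shared neighbors of u and v in one layer."""
    return len(adj.get(u, set()) & adj.get(v, set()))

def spurious_score(layers, target, u, v, alpha=0.5):
    """Score an observed link (u, v) in the target layer; lower scores
    mark links more likely to be spurious. Combines intralayer
    common-neighbor similarity with interlayer support: the number of
    other layers that also contain the link."""
    intra = common_neighbors(layers[target], u, v)
    inter = sum(1 for name, adj in layers.items()
                if name != target and v in adj.get(u, set()))
    return alpha * intra + (1 - alpha) * inter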


2019, Vol 9 (1)
Author(s): Dayong Zhang, Yang Wang, Zhaoxin Zhang

Abstract Quantifying the nodal spreading abilities and identifying the potential influential spreaders have been among the most engaging topics recently, being essential and beneficial to facilitate information flow and ensure the stable operation of social networks. However, most existing algorithms consider only a basic quantification, combining a single attribute of the nodes to measure their importance. Moreover, reaching a balance between the accuracy and the simplicity of these algorithms is difficult. In order to accurately identify the potential super-spreaders, the CumulativeRank algorithm is proposed in the present study. This algorithm combines the local and global performance of nodes to measure their spreading abilities. For the local performance, the proposed algorithm considers both the direct influence from a node's neighbourhood and the indirect influence from the nearest and next-nearest neighbours. For the global performance, the concept of tenacity is introduced to assess a node's prominence in maintaining network connectivity. Extensive experiments carried out with the Susceptible-Infected-Recovered (SIR) model on real-world social networks demonstrate the accuracy and stability of the proposed algorithm. Furthermore, comparison with existing well-known algorithms shows that the proposed algorithm has lower time complexity and is applicable to large-scale networks.
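The SIR evaluation mentioned above can be reproduced in miniature with a discrete-time Monte-Carlo simulation; the infection rate `beta`, recovery rate `gamma`, and trial count below are illustrative parameters, not values from the study:

```python
import random

def sir_spread(adj, seed, beta, gamma, trials=200, rng=None):
    """Average final epidemic size (recovered nodes) of a discrete-time
    SIR process started from a single seed node. A larger average marks
    the seed as a stronger spreader."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    total = 0
    for _ in range(trials):
        infected, recovered = {seed}, set()
        while infected:
            new_infected = set()
            for u in infected:
                # each infected node infects susceptible neighbors w.p. beta
                for v in adj.get(u, ()):
                    if v not in infected and v not in recovered and rng.random() < beta:
                        new_infected.add(v)
            # each infected node recovers w.p. gamma
            recovered |= {u for u in infected if rng.random() < gamma}
            infected = (infected | new_infected) - recovered
        total += len(recovered)
    return total / trials
```

A ranking algorithm such as CumulativeRank is then judged by how well its node ordering correlates with these simulated spreading sizes.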


Author(s): Petter Nielsen

As a result of a steady increase in reach, range, and processing capabilities, information systems no longer appear as independent parts, but rather as integrated parts of large-scale networks. These networks offer a shared resource for information delivery and exchange to communities, which appropriate them for their respective purposes. Such information infrastructures are complex in several ways. As they are composed of a variety of different components, their openness and heterogeneity make them inherently uncontrollable; through their expansion, these various interconnected networks enter new interdependencies; while they are based on extending existing technical and social networks, they also need to develop and grow over a long period of time; and they are developed as a distributed activity. Examples of such information infrastructures include the Internet, National Information Infrastructure (NII) initiatives and industry-wide EDI networks, as well as corporate-wide implementations of enterprise systems.


2020, Vol 508, pp. 200-213
Author(s): Bin Zheng, Ouyang Liu, Jing Li, Yong Lin, Chong Chang, ...

2014, Vol 644-650, pp. 2562-2567
Author(s): Yi Tong Cui, Bing Yi Zhang, Guo Zheng Rao

Due to the advancement of technology, modern networks such as social networks, citation networks and Web networks have become extremely large, reaching millions of nodes. But most existing graph clustering algorithms can only handle small or medium-sized networks. In this paper, we introduce a new method that achieves high graph clustering quality for large-scale networks by optimizing the modularity function. It is based on an iterative idea and takes advantage of existing multilevel local search heuristics. After introducing this modularity-based method, we evaluate its performance by applying it to several well-known network datasets. At the cost of more, yet acceptable, running time, it outperforms the best algorithms in the literature in terms of modularity optimization quality.
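The objective being optimized here is Newman's modularity, Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j), where m is the number of edges, k_i the degree of node i, and δ matches nodes in the same community. A direct, unoptimized O(n²) evaluation of Q for a given partition can be sketched as follows; this is only the scoring function, not the paper's multilevel local search:

```python
def modularity(adj, communities):
    """Newman modularity Q of a partition, for an undirected graph
    given as a dict of neighbor sets (no self-loops)."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    degree = {u: len(nbrs) for u, nbrs in adj.items()}
    # map each node to the index of its community
    comm = {u: c for c, nodes in enumerate(communities) for u in nodes}
    q = 0.0
    for u in adj:
        for v in adj:
            if comm[u] == comm[v]:
                a = 1.0 if v in adj[u] else 0.0  # adjacency term A_uv
                q += a - degree[u] * degree[v] / (2 * m)
    return q / (2 * m)
```

Practical optimizers never evaluate Q from scratch like this; they maintain per-community degree sums so that the modularity gain of moving one node can be computed in constant time.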


Author(s): V. Skibchyk, V. Dnes, R. Kudrynetskyi, O. Krypuch

Annotation Purpose. To increase the efficiency of the technological processes of grain harvesting by large-scale agricultural producers through the rational use of the combine harvesters available on the farm. Methods. In the course of the research, the methods of system analysis and synthesis, induction and deduction, system-factor and system-event approaches, and the graphic method were used. Results. Characteristic events that occur during the harvesting of grain crops, both within a single production unit and across the entire agricultural enterprise, are identified. A method for predicting the time intervals of use and downtime of the combine harvesters of production units has been developed. A roadmap for substantiating a rational seasonal scenario for the use of grain harvesters by large-scale agricultural producers is developed, which allows estimating the efficiency of each scenario of multivariate placement of grain harvesters on fields, taking into account the influence of natural production and agrometeorological factors. Conclusions. 1. Known scientific and methodological approaches to the optimization of machine use in agriculture take into account neither the risk of crop losses due to late harvesting nor the seasonal natural and agrometeorological conditions of each production unit of the farm, which calls for a new approach to the rational seasonal use of the combines of large agricultural producers. 2. The developed approach to substantiating a rational seasonal scenario for the use of combine harvesters by large-scale agricultural producers takes into account both the cost of grain harvesting and the cost of the crop lost through late harvesting when evaluating variants that attract additional free combine harvesters, and thus provides more profit. 3. The practical application of the developed roadmap will allow large-scale agricultural producers to use combine harvesters more efficiently and reduce harvesting costs. Keywords: combine harvesters, use, production divisions, risk, seasonal scenario, large-scale agricultural producers.


Author(s): S. Pragati, S. Kuldeep, S. Ashok, M. Satheesh

One of the challenges in the treatment of disease is the delivery of efficacious medication of appropriate concentration to the site of action in a controlled and continual manner. Nanoparticles represent an important particulate carrier system developed accordingly. Nanoparticles are solid colloidal particles ranging in size from 1 to 1000 nm and composed of macromolecular material; they can be polymeric or lipidic (SLNs). Industry estimates suggest that approximately 40% of lipophilic drug candidates fail due to solubility and formulation stability issues, prompting significant research into advanced lipophile delivery technologies. Solid lipid nanoparticle technology represents a promising new approach to lipophile drug delivery, and solid lipid nanoparticles (SLNs) are an important advancement in this area. The bioacceptable and biodegradable nature of SLNs makes them less toxic than polymeric nanoparticles. Combined with their small size, which prolongs circulation time in blood, feasible scale-up for large-scale production, and the absence of burst effect, this makes them interesting candidates for study. In the present review, this new approach is discussed in terms of preparation, advantages, characterization and special features.


Author(s): M. E. J. Newman, R. G. Palmer

Developed after a meeting at the Santa Fe Institute on extinction modeling, this book comments critically on the various modeling approaches. In the last decade or so, scientists have started to examine a new approach to the patterns of evolution and extinction in the fossil record. This approach may be called "statistical paleontology," since it looks at large-scale patterns in the record and attempts to understand and model their average statistical features, rather than their detailed structure. Examples of the patterns these studies examine are the distribution of the sizes of mass extinction events over time, the distribution of species lifetimes, or the apparent increase in the number of species alive over the last half a billion years. In attempting to model these patterns, researchers have drawn on ideas not only from paleontology, but from evolutionary biology, ecology, physics, and applied mathematics, including fitness landscapes, competitive exclusion, interaction matrices, and self-organized criticality. A self-contained review of work in this field.

