Realistic Aspects of Simulation Models for Fake News Epidemics over Social Networks

2021 ◽  
Vol 13 (3) ◽  
pp. 76
Author(s):  
Quintino Francesco Lotito ◽  
Davide Zanella ◽  
Paolo Casari

The pervasiveness of online social networks has reshaped the way people access information. Online social networks make it common for users to inform themselves online and share news among their peers, but they also favor the spreading of reliable and fake news alike. Because fake news may have a profound impact on society at large, realistically simulating its spreading process helps evaluate the most effective countermeasures to adopt. It is customary to model the spreading of fake news via the same epidemic models used for common diseases; however, these models often miss concepts and dynamics that are peculiar to fake news spreading. In this paper, we fill this gap by enriching typical epidemic models for fake news spreading with network topologies and dynamics that are typical of realistic social networks. Specifically, we introduce agents with the role of influencers and bots into the model and consider the effects of dynamical network access patterns, time-varying engagement, and different degrees of trust in the sources of circulating information. These factors combine to make the simulations more realistic. Among other results, we show that influencers who share fake news help the spreading process reach nodes that would otherwise remain unaffected. Moreover, we emphasize that bots dramatically speed up the spreading process and that time-varying engagement and network access change the effectiveness of fake news spreading.
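A minimal sketch of such a simulation, assuming a discrete-time SIR-style process on a random graph; the graph model, parameters, and bot behaviour here are illustrative, not the paper's exact model:

```python
import random

def simulate_spread(n, avg_degree, p_infect, p_recover, bot_ids=(), steps=50, seed=0):
    """Discrete-time SIR-style fake-news spread on a random graph.
    Bots start infected and never recover, so they keep seeding the network."""
    rng = random.Random(seed)
    p_edge = avg_degree / (n - 1)  # Erdos-Renyi edge probability
    neigh = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                neigh[i].append(j)
                neigh[j].append(i)
    state = ["S"] * n  # Susceptible / Infected (sharing) / Recovered
    bots = set(bot_ids)
    for b in bots:
        state[b] = "I"
    state[0] = "I"  # an ordinary "patient zero" account
    ever_infected = {0} | bots
    for _ in range(steps):
        newly = [v for u in range(n) if state[u] == "I"
                 for v in neigh[u] if state[v] == "S" and rng.random() < p_infect]
        for v in newly:
            state[v] = "I"
            ever_infected.add(v)
        for u in range(n):
            if state[u] == "I" and u not in bots and rng.random() < p_recover:
                state[u] = "R"
    return len(ever_infected)
```

Comparing runs with and without `bot_ids` set is the kind of experiment the abstract describes: bots keep re-sharing, so the cascade typically reaches more nodes and reaches them faster.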

2022 ◽  
pp. 255-263
Author(s):  
Chirag Visani ◽  
Vishal Sorathiya ◽  
Sunil Lavadiya

The popularity of the internet has increased the use of e-commerce websites and news channels. Fake news has existed for many years, but the arrival of social media, easy access to online platforms, and the exponential growth of content shared on social networks have made it difficult to differentiate between true and false information, with significant effects on offline society already. A crucial goal in improving the trustworthiness of data in online social networks is to spot fake news, so detecting spam news becomes important. For sentiment mining, the authors focus on Facebook, Twitter, and WhatsApp, among the most prominent social platforms. They illustrate how to automatically assemble a corpus for sentiment analysis and opinion mining, and use that corpus to build a sentiment classifier that can distinguish fake, real, and neutral opinions in a document.
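A corpus-trained classifier of this kind can be sketched with a generic Naive-Bayes-style bag-of-words scorer; the tiny hand-labelled corpus and the scoring rule here are illustrative, not the authors' actual classifier:

```python
import math
from collections import Counter, defaultdict

def train(corpus):
    """corpus: list of (text, label) pairs, label in {"fake", "real", "neutral"}.
    Returns per-label word counts for a naive bag-of-words scorer."""
    counts = defaultdict(Counter)
    for text, label in corpus:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text, alpha=1.0):
    """Pick the label whose Laplace-smoothed word frequencies best fit the text."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values())
        score = sum(math.log((c[w] + alpha) / (total + alpha * len(vocab)))
                    for w in text.lower().split())
        if score > best_score:
            best, best_score = label, score
    return best
```

For example, training on a few labelled posts and calling `classify(counts, "miracle cure")` assigns the document to whichever of the three classes its vocabulary fits best.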


2019 ◽  
Vol 2019 ◽  
pp. 1-8
Author(s):  
Yuan Xu ◽  
Renjie Mei ◽  
Yujie Yang ◽  
Zhengmin Kong

It is of great practical significance to figure out the propagation mechanism and outbreak conditions of rumor spreading on online social networks. In this paper, we propose a multi-state reinforcement diffusion model for rumor spreading, in which a reinforcement mechanism is introduced to depict individual willingness towards rumor spreading. Multiple intermediate states characterize the process by which an individual's diffusion willingness is enhanced step by step. We study the rumor spreading process with the proposed reinforcement diffusion mechanism on two typical networks and obtain the outbreak thresholds of rumor spreading on both. Numerical simulations and Monte Carlo simulations are conducted to illustrate the spreading process and verify the correctness of the theoretical results. We believe that our work sheds some light on how human sociality affects rumor spreading on online social networks.
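The step-by-step reinforcement can be sketched as a toy synchronous update, assuming a node needs k reinforcing exposures before it starts spreading; this is an illustration of the mechanism, not the paper's exact model:

```python
def reinforcement_step(levels, spreaders, neigh, k):
    """One synchronous step of a multi-state reinforcement diffusion.
    Each exposure from a spreading neighbour raises a node's willingness
    level by one; the node starts spreading only after k exposures."""
    new_levels = dict(levels)
    new_spreaders = set(spreaders)
    for u in spreaders:
        for v in neigh[u]:
            if v in new_spreaders:
                continue
            new_levels[v] = new_levels.get(v, 0) + 1
            if new_levels[v] >= k:
                new_spreaders.add(v)  # willingness threshold reached
    return new_levels, new_spreaders
```

With k=1 this reduces to a plain SI contact process; larger k delays adoption, which is how intermediate willingness states can raise the outbreak threshold.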


Author(s):  
Tianyi Hao ◽  
Longbo Huang

In this paper, we consider the problem of user modeling in online social networks and propose a user vectorization framework based on social interaction activity, called time-varying user vectorization (Tuv), to infer and make use of important user features. Tuv is designed around a novel combination of word2vec, negative sampling, and a smoothing technique for model training. It jointly handles multi-format user data and computes user-representing vectors by taking into consideration user feature variation, self-similarity, and pairwise interactions among users. The framework enables us to extract hidden user properties and to produce user vectors. We conduct extensive experiments on a real-world dataset, which show that Tuv significantly outperforms several state-of-the-art user vectorization methods.
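A single negative-sampling update of the kind word2vec-style training performs can be sketched as follows; the vector sizes and interaction data are illustrative, and Tuv's actual objective is more involved:

```python
import math

def sgns_update(user_vecs, ctx_vecs, user, pos, negs, lr=0.1):
    """One skip-gram-with-negative-sampling step: pull the user's vector
    toward an observed interaction partner `pos` and push it away from
    sampled non-partners `negs`."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    u = user_vecs[user]
    for other, label in [(pos, 1.0)] + [(n, 0.0) for n in negs]:
        c = ctx_vecs[other]
        dot = sum(a * b for a, b in zip(u, c))
        g = lr * (label - sigmoid(dot))  # gradient of the log-likelihood
        for i in range(len(u)):
            u[i], c[i] = u[i] + g * c[i], c[i] + g * u[i]
```

Repeating such updates over logged interactions makes users who interact often end up with similar vectors, which is the basis for downstream user modeling.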


2021 ◽  
Vol 13 (5) ◽  
pp. 107
Author(s):  
Vincenza Carchiolo ◽  
Alessandro Longheu ◽  
Michele Malgeri ◽  
Giuseppe Mangioni ◽  
Marialaura Previti

Real-time news spreading is now available to everyone, especially thanks to Online Social Networks (OSNs), which easily support gatewatching: the collective intelligence and knowledge of dedicated communities are exploited to filter the news flow and to highlight and debate relevant topics. The main drawback is that the responsibility for judging the content and accuracy of information moves from editors and journalists to online information users, with the side effect of the potential growth of fake news. In such a scenario, the trustworthiness of information providers can no longer be overlooked; rather, it increasingly helps in discerning real news from fake ones. In this paper we evaluate how trustworthiness among OSN users influences the news spreading process. To this purpose, we model news spreading as a Susceptible-Infected-Recovered (SIR) process in an OSN, adding the credibility of users as a layer on top of the OSN. Simulations with both fake and true news spreading on such a multiplex network show that credibility improves the diffusion of real news while limiting the propagation of fake ones. The proposed approach can also be extended to real social networks.
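One simple way to realise a credibility layer on top of an SIR process is to scale the acceptance probability by the sharer's credibility score; the parameter names and coupling below are a hypothetical sketch, not the paper's exact multiplex model:

```python
import random

def sir_with_credibility(neigh, cred, beta, gamma, seed_node=0, steps=100, rng_seed=1):
    """SIR news spread in which the chance that v accepts an item shared
    by u is beta scaled by u's credibility score in [0, 1], acting as a
    trust layer on top of the social graph."""
    rng = random.Random(rng_seed)
    state = {u: "S" for u in neigh}
    state[seed_node] = "I"
    ever = {seed_node}
    for _ in range(steps):
        new_inf, new_rec = [], []
        for u in neigh:
            if state[u] != "I":
                continue
            for v in neigh[u]:
                if state[v] == "S" and rng.random() < beta * cred[u]:
                    new_inf.append(v)
            if rng.random() < gamma:
                new_rec.append(u)
        for v in new_inf:
            state[v] = "I"
            ever.add(v)
        for u in new_rec:
            state[u] = "R"
    return len(ever)  # number of users the item ever reached
```

Running the same network with low credibility assigned to fake-news sharers and high credibility to reliable sources reproduces the qualitative effect the abstract reports: real news travels further, fakes stall.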


2020 ◽  
Vol 7 (5) ◽  
pp. 1159-1167
Author(s):  
Gulshan Shrivastava ◽  
Prabhat Kumar ◽  
Rudra Pratap Ojha ◽  
Pramod Kumar Srivastava ◽  
Senthilkumar Mohan ◽  
...  

2021 ◽  
Vol 118 (50) ◽  
pp. e2102141118 ◽  
Author(s):  
Fernando P. Santos ◽  
Yphtach Lelkes ◽  
Simon A. Levin

The level of antagonism between political groups has risen in recent years. Supporters of a given party increasingly dislike members of the opposing group and avoid intergroup interactions, leading to homophilic social networks. While new connections offline are driven largely by human decisions, new connections on online social platforms are intermediated by link recommendation algorithms, e.g., "People you may know" or "Whom to follow" suggestions. The long-term impacts of link recommendation on polarization are unclear, particularly as exposure to opposing viewpoints has a dual effect: connections with out-group members can lead to opinion convergence and prevent group polarization, or further separate opinions. Here, we provide a complex adaptive systems perspective on the effects of link recommendation algorithms. While several models justify polarization through rewiring based on opinion similarity, here we explain it through rewiring grounded in structural similarity, defined as similarity based on network properties. We observe that preferentially establishing links with structurally similar nodes (i.e., those sharing many neighbors) results in network topologies that are amenable to opinion polarization. Hence, polarization occurs not because of a desire to shield oneself from disagreeable attitudes but, instead, due to the creation of inadvertent echo chambers. When networks are composed of nodes that react differently to out-group contacts, either converging or polarizing, we find that connecting structurally dissimilar nodes moderates opinions. Overall, our study sheds light on the impacts of social-network algorithms and unveils avenues to steer the dynamics of radicalization and polarization in online social networks.
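Structural similarity in the sense used here (sharing many neighbors) can be sketched as a Jaccard overlap of neighborhoods, with a recommender that maximises it; this is an illustration of the mechanism, not the paper's exact algorithm:

```python
def structural_similarity(neigh, u, v):
    """Jaccard overlap of the neighborhoods of u and v (excluding each
    other), i.e. how many neighbors they share."""
    a, b = set(neigh[u]) - {v}, set(neigh[v]) - {u}
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def recommend(neigh, u):
    """Link recommendation: suggest the non-neighbor most structurally
    similar to u, the rewiring rule the abstract argues breeds
    inadvertent echo chambers."""
    candidates = [v for v in neigh if v != u and v not in neigh[u]]
    return max(candidates,
               key=lambda v: structural_similarity(neigh, u, v),
               default=None)
```

Recommending the candidate with maximal neighborhood overlap closes triangles and densifies existing clusters, producing exactly the kind of topology the abstract describes as amenable to opinion polarization.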


Author(s):  
Nisha P. Shetty ◽  
Balachandra Muniyal ◽  
Arshia Anand ◽  
Sushant Kumar

Sybil accounts are proliferating on popular social networking sites such as Twitter and Facebook, owing to cheap subscriptions and easy access to large audiences. A malicious person creates multiple fake identities to extend and grow his network. People blindly trust their online connections and fall into traps set by these fake perpetrators. Sybil nodes exploit an OSN's ready-made connectivity to spread fake news, send spam, influence polls, recommendations, and advertisements, masquerade to obtain critical information, launch phishing attacks, and more. Such accounts are surging on a wide scale, so it has become vital to detect them effectively. In this research a new classifier (a combination of SybilGuard, Twitter engagement rate, and a profile statistics analyser) is developed to combat such Sybil nodes. The proposed classifier overcomes the limitations of structure-based, machine-learning-based, and behaviour-based classifiers and is shown to be more accurate and robust than the base SybilGuard algorithm.
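A toy version of the engagement-rate and profile-statistics features might look like the following; the thresholds and field names are hypothetical, and the paper's actual classifier also incorporates SybilGuard's structural test:

```python
def engagement_rate(likes, retweets, replies, followers):
    """Per-follower engagement score; genuine accounts tend to score far
    higher than mass-produced Sybil accounts."""
    if followers == 0:
        return 0.0
    return (likes + retweets + replies) / followers

def flag_sybil(profile, rate_threshold=0.001, max_following_ratio=20.0):
    """Toy two-feature rule: near-zero engagement combined with a lopsided
    following/followers ratio is a common Sybil signature."""
    rate = engagement_rate(profile["likes"], profile["retweets"],
                           profile["replies"], profile["followers"])
    ratio = profile["following"] / max(profile["followers"], 1)
    return rate < rate_threshold and ratio > max_following_ratio
```

Combining behavioural features like these with a structural test is the design idea behind the hybrid classifier described above: each family of features covers failure modes of the others.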

