Visual Social Media and Big Data. Interpreting Instagram Images Posted on Twitter

2016 ◽  
Vol 2 (2) ◽  
pp. 113-134 ◽  
Author(s):  
Dhiraj Murthy ◽  
Alexander Gross ◽  
Marisa McGarry

Abstract Social media such as Twitter and Instagram are fast, free, and multicast. These attributes make them particularly useful for crisis communication. However, their speed and volume also make them challenging to study. Historically, journalists controlled what and how images represented crises. Large volumes of social media can change the politics of representing disasters. However, visual social media data are methodologically challenging to study. Specifically, the process is usually labour-intensive, relying on human coding of images to discern themes and subjects. For this reason, studies investigating social media during crises tend to examine text. In addition, application programming interfaces (APIs) for visual social media services such as Instagram and Snapchat are restrictive or even non-existent. Our work uses images posted by Instagram users on Twitter during Hurricane Sandy as a case study. This case is unique: it is perhaps the first US disaster in which Instagram played a key role in how victims experienced Sandy, and it is also the last major US disaster to take place before Instagram images were removed from Twitter feeds. Our sample consists of 11,964 Instagram images embedded in tweets during a two-week timeline surrounding Hurricane Sandy. We found that the production and consumption of selfies, food/drink, pets, and humorous image macros highlight possible changes in the politics of representing disasters: a potential turn from top-down understandings of disasters to bottom-up, citizen-informed views. Ultimately, we argue that image data produced during crises have potential value in helping us understand the social experience of disasters, but studying these types of data presents theoretical and methodological challenges.

2019 ◽  
Vol 1 ◽  
pp. 1-2
Author(s):  
Lei Zou

<p><strong>Abstract.</strong> The ability of a community to prepare for, absorb, recover from, and successfully adapt to disastrous events is defined as disaster resilience. Disaster resilience can be better understood by investigating human behaviors during the four phases of emergency management – preparedness, response, recovery, and mitigation. However, a major challenge is that data describing communities’ behaviors in different phases of emergency management are difficult to access through traditional databases. Social media platforms such as Twitter are increasingly used as effective platforms for observing human behaviors in disastrous events. These responses and behaviors could be better understood by analyzing real-time social media data and categorizing them into the different phases of emergency management.</p><p>This research studies Twitter use during 2012 Hurricane Sandy and 2017 Hurricane Harvey, which struck the U.S. northeast and south coasts, respectively. The objectives are fourfold: (1) to develop a Twitter data mining and visualization framework and a set of indexes for emergency management and resilience analysis; (2) to visualize the spatial-temporal patterns of disaster-related Twitter activities during the two hurricane events; (3) to examine and compare the social-geographical disparities of disaster-related Twitter activities during Sandy and Harvey; and (4) to build applications using social media data for smart management, including surveying human behaviors and emergency rescue.</p>
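The phase-categorization step described above can be sketched as a simple keyword-lexicon classifier. This is an illustrative assumption, not the study's actual framework; the phase names come from the abstract, but the keywords and function names are hypothetical.

```python
# Hypothetical sketch: assigning tweets to the four phases of emergency
# management (preparedness, response, recovery, mitigation) with a tiny
# keyword lexicon. The keywords are illustrative only.
PHASE_KEYWORDS = {
    "preparedness": {"evacuate", "forecast", "warning", "stock up", "sandbags"},
    "response": {"rescue", "shelter", "flooding", "stranded"},
    "recovery": {"rebuild", "insurance", "cleanup", "donate", "repair"},
    "mitigation": {"levee", "zoning", "resilience", "infrastructure"},
}

def classify_phase(tweet_text):
    """Return the phase whose keywords match the tweet most often,
    or None if no keyword matches."""
    text = tweet_text.lower()
    scores = {
        phase: sum(1 for kw in kws if kw in text)
        for phase, kws in PHASE_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

A real pipeline would replace the lexicon with a trained classifier, but the lexicon version makes the phase-labeling idea concrete.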


2019 ◽  
Vol 3 (3) ◽  
pp. 44
Author(s):  
Davis ◽  
Sedig ◽  
Lizotte

Existing keyword-based search techniques suffer from limitations owing to unknown, mismatched, and obscure vocabulary. These challenges are particularly prevalent in social media, where slang, jargon, and memes are abundant. We develop a new technique, Archetype-Based Modeling and Search, that can mitigate these challenges as they are encountered in social media. This technique learns to identify new relevant documents based on a specified set of archetypes from which both vocabulary and relevance information are extracted. We present a case study using social media data from Reddit, in which authors from /r/Opiates are used to characterize discourse around opioid use and to find additional relevant authors on this topic.
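The archetype idea can be sketched minimally: pool a weighted vocabulary from a set of archetype documents, then rank candidates by how strongly their own text overlaps it. This is an assumed toy version, not the published algorithm, and all names and weights here are illustrative.

```python
# Illustrative sketch of archetype-based retrieval: build term weights
# from archetype documents, then rank candidates by the average weight
# of their tokens. Not the authors' implementation.
from collections import Counter
import re

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def archetype_vocabulary(archetype_docs):
    """Term frequencies pooled over all archetype documents."""
    vocab = Counter()
    for doc in archetype_docs:
        vocab.update(tokenize(doc))
    return vocab

def relevance(candidate, vocab):
    """Average archetype weight over the candidate's tokens."""
    tokens = tokenize(candidate)
    if not tokens:
        return 0.0
    return sum(vocab[t] for t in tokens) / len(tokens)

def rank(candidates, archetype_docs):
    vocab = archetype_vocabulary(archetype_docs)
    return sorted(candidates, key=lambda c: relevance(c, vocab), reverse=True)
```

Because the vocabulary is learned from the archetypes rather than specified up front, slang and jargon present in the archetype documents automatically contribute to the relevance score.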


2021 ◽  
Vol 13 (7) ◽  
pp. 3836
Author(s):  
David Flores-Ruiz ◽  
Adolfo Elizondo-Salto ◽  
María de la O. Barroso-González

This paper explores the role of social media in tourist sentiment analysis. To do this, it describes previous studies that have carried out tourist sentiment analysis using social media data, before analyzing changes in tourists’ sentiments and behaviors during the COVID-19 pandemic. In the case study, which focuses on Andalusia, the changes experienced by the tourism sector in the southern Spanish region as a result of the COVID-19 pandemic are assessed using the Andalusian Tourism Situation Survey (ECTA). This information is then compared with data obtained from a sentiment analysis based on the social network Twitter. On the basis of this comparative analysis, the paper concludes that it is possible to identify and classify tourists’ perceptions using sentiment analysis on a mass scale with the help of statistical software (RStudio and Knime). The sentiment analysis using Twitter data correlates with and is supplemented by information from the ECTA survey, with both analyses showing that tourists placed greater value on safety and preferred to travel individually to nearby, less crowded destinations since the pandemic began. Of the two analytical tools, sentiment analysis can be carried out on social media on a continuous basis and offers cost savings.
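The study performed its sentiment analysis in RStudio and Knime; as a language-neutral illustration of the lexicon-based approach such tools apply to tweets, a minimal Python sketch follows. The lexicon entries and function name are assumptions for illustration only.

```python
# Minimal lexicon-based sentiment scorer of the kind applied to tourist
# tweets. The tiny positive/negative lexicons are illustrative, not the
# resources used in the paper.
POSITIVE = {"safe", "quiet", "beautiful", "relaxing", "uncrowded"}
NEGATIVE = {"crowded", "unsafe", "cancelled", "closed", "risky"}

def sentiment_score(tweet):
    """(positives - negatives) / total tokens, in [-1, 1]."""
    tokens = tweet.lower().split()
    if not tokens:
        return 0.0
    pos = sum(t.strip(".,!?") in POSITIVE for t in tokens)
    neg = sum(t.strip(".,!?") in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)
```

Aggregating such per-tweet scores over time is what allows the continuous, low-cost monitoring the paper contrasts with periodic surveys like the ECTA.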


Author(s):  
Mohamad Hasan

This paper presents a model to collect, store, geocode, and analyze social media data. The model is used to collect and process social media data concerning the ISIS terrorist group (the Islamic State in Iraq and Syria), and to map the areas in Syria most affected by ISIS according to those data. The mapping process consists of the automated compilation of a density map of the geocoded tweets. Data mined from social media (e.g., Twitter and Facebook) are recognized as a dynamic and easily accessible resource that can serve as a data source for spatial analysis and geographical information systems. Social media data can be represented as topic data and geocoded based on the mined text, processed using Natural Language Processing (NLP) methods. NLP is a subdomain of artificial intelligence concerned with programming computers to analyze natural human language and texts. NLP makes it possible to identify the words used as input by the developed geocoding algorithm. In this study, the needed words were identified using two corpora. The first corpus contained the names of populated places in Syria. The second corpus was compiled through a statistical analysis of tweet counts, picking out words with a locational meaning (e.g., schools, temples, etc.). After identifying the words, the algorithm used the Google Maps Geocoding API to obtain coordinates for the posts.
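The two-corpus word-identification step can be sketched as follows. The corpora entries here are assumed examples, and an offline coordinate table stands in for the Google Maps Geocoding API call that the actual model makes.

```python
# Sketch of the two-corpus approach: match tweet tokens against a
# gazetteer of Syrian populated places (corpus 1) and a set of words
# with locational meaning (corpus 2). Entries are assumed examples.
PLACE_NAMES = {"aleppo", "raqqa", "homs", "palmyra"}       # corpus 1
LOCATION_WORDS = {"school", "temple", "mosque", "market"}  # corpus 2

# Offline stand-in for the Google Maps Geocoding API (approximate coords).
COORDS = {"aleppo": (36.20, 37.13), "raqqa": (35.95, 39.01)}

def extract_location_terms(tweet):
    """Tokens from either corpus, in order of appearance."""
    tokens = tweet.lower().replace(",", " ").split()
    return [t for t in tokens if t in PLACE_NAMES or t in LOCATION_WORDS]

def geocode_tweet(tweet):
    """Return (lat, lon) for the first resolvable place name, else None."""
    for term in extract_location_terms(tweet):
        if term in COORDS:
            return COORDS[term]
    return None
```

In the paper's pipeline, the geocoded points would then be aggregated into a density map; here the sketch stops at the per-post coordinate lookup.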


2020 ◽  
Vol 111 ◽  
pp. 819-828 ◽  
Author(s):  
Joseph T. Yun ◽  
Nickolas Vance ◽  
Chen Wang ◽  
Luigi Marini ◽  
Joseph Troy ◽  
...  
