Linguistic Landscapes on Street-Level Images

2020 ◽  
Vol 9 (1) ◽  
pp. 57
Author(s):  
Seong-Yun Hong

Linguistic landscape research focuses on relationships between written languages in public spaces and the sociodemographic structure of a city. While a great deal of work has been done on the evaluation of linguistic landscapes in different cities, most of the studies are based on ad hoc interpretation of data collected from fieldwork. The purpose of this paper is to develop a new methodological framework that combines computer vision and machine learning techniques for assessing the diversity of languages from street-level images. As demonstrated with an analysis of a small Chinese community in Seoul, South Korea, the proposed approach can reveal the spatiotemporal pattern of linguistic variations effectively and provide insights into the demographic composition as well as social changes in the neighborhood. Although the method presented in this work is at a conceptual stage, it has the potential to open new opportunities to conduct linguistic landscape research at a large scale and in a reproducible manner. It is also capable of yielding a more objective description of a linguistic landscape than arbitrary classification and interpretation of on-site observations. The proposed approach offers a new direction for the study of linguistic landscapes that builds upon urban analytics methodology, and it will help both geographers and sociolinguists explore and understand our society.
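A minimal sketch of one piece of such a pipeline: once sign text has been extracted from street-level images (e.g., by an OCR step, omitted here), the script of each sign can be classified by Unicode block. The function and block ranges below are illustrative assumptions, not the paper's actual method.

```python
def classify_script(text: str) -> str:
    """Label a sign's text by its dominant Unicode script block.

    Illustrative sketch: counts Hangul, Han (CJK), and Latin characters
    and returns the most frequent script, or "other" if none appear.
    """
    counts = {"hangul": 0, "han": 0, "latin": 0}
    for ch in text:
        cp = ord(ch)
        if 0xAC00 <= cp <= 0xD7A3 or 0x1100 <= cp <= 0x11FF:
            counts["hangul"] += 1          # Hangul syllables / jamo
        elif 0x4E00 <= cp <= 0x9FFF:
            counts["han"] += 1             # CJK Unified Ideographs
        elif ch.isascii() and ch.isalpha():
            counts["latin"] += 1
    if not any(counts.values()):
        return "other"
    return max(counts, key=counts.get)

# Hypothetical sign texts from a mixed Korean/Chinese/English streetscape.
signs = ["김밥천국", "中華料理", "Coffee Shop"]
labels = [classify_script(s) for s in signs]
print(labels)  # ['hangul', 'han', 'latin']
```

Aggregating such labels per image and per street segment would give the kind of spatial language-diversity measure the paper describes.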

Author(s):  
Prachi

This chapter describes how botnets have become the leading cyber threat on the web and now serve as the key platform for carrying out large-scale distributed attacks. Although a substantial amount of research has been devoted to botnet detection and analysis, bot-masters continually adopt new techniques, such as code encryption and obfuscation, to make botnets more sophisticated, destructive, and hard to detect. This chapter proposes a new model to detect botnet behavior on the basis of traffic analysis and machine learning techniques. Because traffic analysis does not depend on payload inspection, the proposed technique is immune to code encryption and other evasion techniques generally used by bot-masters. The chapter analyzes benchmark datasets as well as real-time generated traffic to determine the feasibility of botnet detection using traffic-flow analysis. Experimental results indicate that the proposed model is able to classify network traffic as botnet or normal traffic with high accuracy and low false-positive rates.
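As an illustrative sketch of flow-based (payload-independent) classification, the toy 1-nearest-neighbour classifier below labels a flow from per-flow statistics alone. The features and values are invented for illustration and are not taken from the chapter's datasets or model; real work would use benchmark traces and a stronger learner.

```python
import math

def euclid(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(flow, training):
    """Label a flow by its nearest labelled training flow (1-NN)."""
    return min(training, key=lambda ex: euclid(flow, ex[0]))[1]

# Hypothetical features: (packets_per_sec, mean_packet_bytes, duration_sec).
# Bot command-and-control traffic often looks like many small, short,
# beacon-like flows; normal bulk transfers are larger and longer-lived.
training = [
    ((120.0, 90.0, 2.0), "botnet"),
    ((150.0, 80.0, 1.5), "botnet"),
    ((10.0, 900.0, 30.0), "normal"),
    ((8.0, 1100.0, 45.0), "normal"),
]

print(predict((130.0, 85.0, 1.8), training))  # botnet
```

Note that none of these features touch the payload, which is why such a classifier is unaffected by payload encryption.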


2020 ◽  
pp. 146144482093944
Author(s):  
Aimei Yang ◽  
Adam J Saffer

Social media can offer strategic communicators cost-effective opportunities to reach millions of individuals. In practice, however, it can be difficult to be heard in these crowded digital spaces. This study takes a strategic network perspective and draws on recent research in network science to propose the network contingency model of public attention. The model argues that in the networked, social-mediated environment, an organization’s ability to attract public attention on social media is contingent on how well its network position fits the network structure of the communication context. To test the model, we combine data mining, social network analysis, and machine learning techniques to analyze a large-scale Twitter discussion network. The results of our analysis of the Twitter discussion around the 2016 refugee crisis suggest that in high core-periphery network contexts, “star” positions were most influential, whereas in low core-periphery network contexts, a “community” strategy was crucial to attracting public attention.
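One simple, illustrative way to quantify how strongly a discussion network exhibits a core-periphery, "star"-dominated structure is Freeman degree centralization. This is a sketch of the general idea only, and not necessarily the measure used in the study.

```python
def degree_centralization(edges, n):
    """Freeman degree centralization of an undirected graph on n nodes.

    1.0 for a perfect star (one dominant hub), 0.0 when all nodes
    have equal degree (no core-periphery contrast).
    """
    deg = {i: 0 for i in range(n)}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    dmax = max(deg.values())
    return sum(dmax - d for d in deg.values()) / ((n - 1) * (n - 2))

star = [(0, i) for i in range(1, 5)]          # one hub connected to all
ring = [(i, (i + 1) % 5) for i in range(5)]   # every node has degree 2

print(degree_centralization(star, 5))  # 1.0
print(degree_centralization(ring, 5))  # 0.0
```

Under the model described above, a communicator might favor occupying a hub position in networks scoring high on such a measure, and community-building in networks scoring low.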


2019 ◽  
Vol 20 (3) ◽  
pp. 185-193 ◽  
Author(s):  
Natalie Stephenson ◽  
Emily Shane ◽  
Jessica Chase ◽  
Jason Rowland ◽  
David Ries ◽  
...  

Background: Drug discovery, the process of identifying new candidate medications, is very important for the pharmaceutical industry. At its current stage, discovering new drugs is still a very expensive and time-consuming process, requiring Phase I, II and III clinical trials. Recently, machine learning techniques in Artificial Intelligence (AI), especially deep learning techniques, which allow a computational model to learn multiple layers of representation, have been widely applied and have achieved state-of-the-art performance in fields such as speech recognition, image classification, and bioinformatics. One very important application of these AI techniques is in drug discovery. Methods: We performed a large-scale literature search on existing scientific websites (e.g., ScienceDirect, arXiv) and surveyed startup companies to understand the current status of machine learning techniques in drug discovery. Results: Our analysis revealed distinct patterns across the machine learning and drug discovery fields. For example, keywords such as prediction, brain, discovery, and treatment appear frequently in drug discovery publications. Also, the total number of papers published in drug discovery using machine learning techniques is increasing every year. Conclusion: The main focus of this survey is to understand the current status of machine learning techniques in the drug discovery field within both academic and industrial settings, and to discuss their potential future applications. Several interesting patterns in the use of machine learning techniques for drug discovery are discussed in this survey.
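The keyword-pattern analysis described above can be sketched as a simple frequency count over paper titles. The titles and keyword list below are made up for illustration and are not the survey's actual corpus.

```python
from collections import Counter

# Hypothetical paper titles standing in for a literature-search result set.
titles = [
    "Deep learning for drug discovery and treatment prediction",
    "Machine learning prediction of brain disorders",
    "Graph networks for molecular property prediction",
]

# Keywords of interest (illustrative; cf. the patterns noted in Results).
keywords = ("prediction", "brain", "discovery", "treatment")

counts = Counter(
    word
    for title in titles
    for word in title.lower().split()
    if word in keywords
)
print(counts["prediction"])  # 3
```

A real pipeline would also normalize tokens (stemming, stopword removal) and bucket counts by publication year to see the growth trend the survey reports.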


Author(s):  
SANDA M. HARABAGIU

This paper presents a novel methodology for disambiguating prepositional phrase attachments. We create attachment patterns by classifying a collection of prepositional relations derived from Treebank parses. As a by-product, the arguments of every prepositional relation are semantically disambiguated. Attachment decisions are generated as the result of a learning process that builds upon some of the most popular current statistical and machine learning techniques. We have tested this methodology on (1) Wall Street Journal articles, (2) textual definitions of concepts from a dictionary and (3) an ad hoc corpus of Web documents used for conceptual indexing and information extraction.
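A minimal sketch of a statistical attachment decision over (verb, noun, preposition, noun) quadruples, in the back-off spirit of the statistical approaches the paper builds on; the counts, priors, and examples below are invented for illustration.

```python
# counts[(verb, n1, prep, n2)] = (verb_attach_count, noun_attach_count)
# observed in a hypothetical labelled corpus.
counts = {
    ("eat", "pizza", "with", "fork"): (8, 1),       # "with fork" modifies eat
    ("eat", "pizza", "with", "anchovies"): (1, 9),  # modifies pizza
}

# Back-off prior keyed on the preposition alone (illustrative values).
prep_prior = {"of": "N", "with": "V"}

def attach(verb, n1, prep, n2):
    """Return 'V' (verb attachment) or 'N' (noun attachment).

    Uses the full quadruple when seen in training, otherwise backs off
    to a per-preposition prior, defaulting to noun attachment.
    """
    key = (verb, n1, prep, n2)
    if key in counts:
        v_ct, n_ct = counts[key]
        return "V" if v_ct >= n_ct else "N"
    return prep_prior.get(prep, "N")

print(attach("eat", "pizza", "with", "fork"))     # V  (seen quadruple)
print(attach("see", "man", "with", "telescope"))  # V  (back-off on "with")
```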


2017 ◽  
Vol 3 (1) ◽  
Author(s):  
Giorgos Borboudakis ◽  
Taxiarchis Stergiannakos ◽  
Maria Frysali ◽  
Emmanuel Klontzas ◽  
Ioannis Tsamardinos ◽  
...  

2017 ◽  
Author(s):  
Sook-Lei Liew ◽  
Julia M. Anglin ◽  
Nick W. Banks ◽  
Matt Sondag ◽  
Kaori L. Ito ◽  
...  

Abstract: Stroke is the leading cause of adult disability worldwide, with up to two-thirds of individuals experiencing long-term disabilities. Large-scale neuroimaging studies have shown promise in identifying robust biomarkers (e.g., measures of brain structure) of long-term stroke recovery following rehabilitation. However, analyzing large rehabilitation-related datasets is problematic due to barriers in accurate stroke lesion segmentation. Manual lesion tracing is currently the gold standard for lesion segmentation on T1-weighted MRIs, but it is labor intensive and requires anatomical expertise. While algorithms have been developed to automate this process, the results often lack accuracy. Newer algorithms that employ machine learning techniques are promising, yet these require large training datasets to optimize performance. Here we present ATLAS (Anatomical Tracings of Lesions After Stroke), an open-source dataset of 304 T1-weighted MRIs with manually segmented lesions and metadata. This large, diverse dataset can be used to train and test lesion segmentation algorithms and provides a standardized dataset for comparing the performance of different segmentation methods. We hope ATLAS release 1.1 will be a useful resource for assessing and improving the accuracy of current lesion segmentation methods.
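Comparing an automated segmentation against a manual trace such as those in ATLAS is commonly scored with the Dice similarity coefficient. The sketch below uses toy 1-D binary masks rather than real MRI volumes, and is not the dataset's evaluation code.

```python
def dice(mask_a, mask_b):
    """Dice overlap between two binary masks (iterables of 0/1).

    Returns 2|A ∩ B| / (|A| + |B|); 1.0 means perfect agreement.
    """
    a = [bool(x) for x in mask_a]
    b = [bool(x) for x in mask_b]
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 1.0 if total == 0 else 2.0 * inter / total

manual    = [0, 1, 1, 1, 0, 0]   # hand-traced lesion voxels (toy)
automated = [0, 1, 1, 0, 0, 1]   # algorithm output (toy)

print(round(dice(manual, automated), 3))  # 0.667
```

On real data the same formula is applied voxel-wise over 3-D masks, which is one standardized way a dataset like ATLAS enables head-to-head comparison of segmentation methods.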

