Dispersed/Networked Open Social Discovery Research: Applications for Humanistic Machine Learning & Topic Modelling

2019 ◽  
Author(s):  
Richard J. Lane

One of the benefits of open social scholarship also presents researchers with a challenge: the dispersed nature of the knowledge breakthroughs produced by a diverse network of scholars inside and outside of the academy. Accessibility enhances the broad reach of open social scholarship, leading to democratic engagement across a culturally rich spectrum of participants. But such processes do not necessarily provide coherent critical constellations or knowledge clusters from the perspective of the broad audience. Further, because of the benefits of functioning as a group, open social scholarship groups may ignore, or simply fail to register, potential discovery research breakthroughs that do not meet the criteria for the groups’ success. In all three instances (knowledge dispersal; lack of knowledge development coherence for community and non-community members across a network; parallel knowledge breakthroughs that remain dispersed or unrecognized), machine learning and topic modelling can provide a methodology for recognizing and understanding open social knowledge creation.

2021 ◽  
Author(s):  
Norberto Sánchez-Cruz ◽  
Jose L. Medina-Franco

Epigenetic targets are a significant focus of drug discovery research, as demonstrated by the eight epigenetic drugs approved for the treatment of cancer and the increasing availability of chemogenomic data related to epigenetics. These data represent a large body of structure-activity relationships that has not yet been exploited for the development of predictive models to support medicinal chemistry efforts. Herein, we report the first large-scale study of 26,318 compounds with a quantitative measure of biological activity for 55 protein targets with epigenetic activity. Through a systematic comparison of machine learning models trained on molecular fingerprints of different design, we built predictive models with high accuracy for the epigenetic target profiling of small molecules. The models were thoroughly validated, showing mean precisions up to 0.952 for the epigenetic target prediction task. Our results indicate that the models reported herein have considerable potential to identify small molecules with epigenetic activity. They have therefore been implemented as a freely accessible and easy-to-use web application.
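The fingerprint-based modelling workflow the abstract describes can be sketched as follows. This is not the authors' pipeline: real workflows would compute molecular fingerprints (e.g. ECFP via RDKit) from actual compound structures, whereas this self-contained illustration stands in synthetic bit vectors and a hypothetical activity label.

```python
# Hedged sketch of fingerprint-based activity classification, in the spirit of
# the study above but using synthetic data rather than the authors' dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
n_compounds, n_bits = 500, 2048  # stand-in for ECFP-like binary fingerprints

# Synthetic fingerprints: each compound is a random 2048-bit vector.
X = rng.integers(0, 2, size=(n_compounds, n_bits))
# Hypothetical activity label driven by a few "pharmacophore" bits.
y = (X[:, :8].sum(axis=1) > 4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# The abstract reports mean precision as the headline validation metric.
precision = precision_score(y_te, clf.predict(X_te))
```

On real fingerprints the same pattern applies per target: one trained classifier per epigenetic target yields the multi-target activity profile of a query molecule.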


2021 ◽  
Vol 51 (3) ◽  
pp. 9-16
Author(s):  
José Suárez-Varela ◽  
Miquel Ferriol-Galmés ◽  
Albert López ◽  
Paul Almasan ◽  
Guillermo Bernárdez ◽  
...  

During the last decade, Machine Learning (ML) has increasingly become a hot topic in the field of Computer Networks and is expected to be gradually adopted for a plethora of control, monitoring and management tasks in real-world deployments. This creates the need for new generations of students, researchers and practitioners with a solid background in ML applied to networks. In 2020, the International Telecommunication Union (ITU) organized the "ITU AI/ML in 5G challenge", an open global competition that introduced to a broad audience some of the current main challenges in ML for networks. This large-scale initiative gathered 23 different challenges proposed by network operators, equipment manufacturers and academia, and attracted a total of 1300+ participants from 60+ countries. This paper narrates our experience organizing one of the proposed challenges: the "Graph Neural Networking Challenge 2020". We describe the problem presented to participants, the tools and resources provided, some organizational aspects and participation statistics, an outline of the top-3 awarded solutions, and a summary of lessons learned along the way. As a result, this challenge leaves a curated set of educational resources openly available to anyone interested in the topic.


Author(s):  
Dean Seeman ◽  
Heather Dean

Standardization both reflects and facilitates the collaborative and networked approach to metadata creation within the fields of librarianship and archival studies. These standards—such as Resource Description and Access and Rules for Archival Description—and the theoretical frameworks they embody enable professionals to work more effectively together. Yet such guidelines also determine who is qualified to undertake the work of cataloging and processing in libraries and archives. Both fields are sympathetic to user-generated metadata and have taken steps towards collaborating with their research communities (as illustrated, for example, by social tagging and folksonomies), but these initial experiments cannot yet be regarded as widely adopted or as radically open and social. This paper explores the recent histories of descriptive work in libraries and archives and the challenges involved in departing from deeply established models of metadata creation.


2019 ◽  
pp. 178-200
Author(s):  
Martin Lodge ◽  
Andrea Mennicken

This chapter focuses on the potentials and challenges posed by the utilization of machine learning algorithms in the regulation of public services, that is, services supplied by or on behalf of government to a particular jurisdiction’s community, including healthcare, education, or correctional services. It argues that the widespread enthusiasm for algorithmic regulation hides much deeper differences in worldviews about regulatory approaches, and that advancing the utilization of algorithmic regulation potentially transforms existing mixes of regulatory approaches in unanticipated ways. It also argues that regulating through algorithmic regulation presents distinct administrative problems in terms of knowledge creation, coordination, and integration, as well as ambiguity over objectives. These challenges for the use of machine learning algorithms in public service regulation require renewed attention to questions of the ‘regulation of regulators’.


2020 ◽  
Vol 9 (1) ◽  
pp. 132-156
Author(s):  
Nachshon (Sean) Goltz ◽  
John Zeleznikow ◽  
Tracey Dowdeswell

Abstract This article discusses the regulation of artificial intelligence from a Jewish perspective, with an emphasis on the regulation of machine learning and its application to autonomous vehicles. Through the Biblical story of Adam and Eve as well as Golem legends from Jewish folklore, we derive several basic principles that underlie a Jewish perspective on the moral and legal personhood of robots and other artificially intelligent agents. We argue that religious ethics in general, and Jewish ethics in particular, show us that the dangers of granting moral personhood to robots, and in particular to autonomous vehicles, lie not in the fact that they lack a soul—or consciousness or feelings or interests—but in the fact that doing so weakens our own ability to develop as fully autonomous legal and moral persons. Instead, we argue that existing legal persons should continue to maintain legal control over artificial agents, while natural persons assume ultimate moral responsibility for choices made by artificial agents they employ in their service. In the final section of the article we discuss the trolley dilemma in the context of governing autonomous vehicles and sketch out an application of Jewish ethics in a case where we are asking Artificial Intelligence to make life and death decisions. Our novel contribution is twofold: first, we bring a religious approach to the discussion of the ethics of Artificial Intelligence, which has hitherto been dominated by secular Western philosophies; second, we raise the idea that artificial entities trained through machine learning can be ethically trained in much the same way that humans are—through reading and reflecting on core religious texts. This both ensures the ethical regulation of artificial intelligence and promotes other core values of regulation, such as democratic engagement and user choice.


Author(s):  
John Girard ◽  
Andy Bertsch

This paper chronicles an exploratory, in-progress research project that compares the findings of Hofstede’s cross-cultural research with those of Forrester’s Social Technographics research. The aim of the project is to determine whether a relationship exists between cultural differences and social knowledge creation and exchange. Part one of the study mapped Davenport and Prusak’s information and knowledge creation theories to the six components of Forrester’s Social Technographics study (creators, critics, collectors, joiners, spectators, and inactives). Next, the Social Technographics results from 13 nations were compared with Hofstede’s four cultural dimensions (power distance, individualism, uncertainty avoidance, masculinity). The analysis included exploring each relationship visually using 24 scatter diagrams, computing correlation coefficients (Pearson’s r), testing the significance of Pearson’s r, and finally conducting regression analyses on each relationship. Although the authors believe that culture influences behaviours, this study did not reveal any reasonable relationships between culture and placement along the Social Technographics. However, it is possible that problems exist in the Hofstede scales, which have been heavily criticized in the literature. Other cross-cultural models, such as GLOBE, Schwartz, Triandis, or others, may yield different results. In this regard, further research is necessary. The next phase of the project will compare Social Technographics with the GLOBE project findings.
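The correlation-and-regression workflow described above can be sketched in a few lines. The scores below are invented for illustration, not the study's data: they pair a hypothetical Hofstede power-distance score with a hypothetical share of "creators" for eight nations.

```python
# Hedged sketch of the Pearson correlation + significance test + regression
# steps from the analysis above, using made-up scores (not the study's data).
import numpy as np
from scipy import stats

# Hypothetical values: power-distance score vs. share of "creators" (%).
power_distance = np.array([11, 35, 40, 54, 68, 77, 80, 95], dtype=float)
creators_pct = np.array([24, 21, 18, 15, 14, 12, 10, 9], dtype=float)

# Pearson's r with its two-sided significance test.
r, p_value = stats.pearsonr(power_distance, creators_pct)

# Simple linear regression of creators on power distance.
result = stats.linregress(power_distance, creators_pct)
```

In the study, this triple (scatter plot, Pearson's r with significance test, regression) was repeated for each of the 24 dimension-segment pairs; only relationships passing the significance test would count as evidence of a cultural effect.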


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Claus Boye Asmussen ◽  
Charles Møller

Abstract Manual exploratory literature reviews should be a thing of the past, as technology and the development of machine learning methods have matured. The learning curve for using machine learning methods is rapidly declining, enabling new possibilities for all researchers. A framework is presented on how to use topic modelling on a large collection of papers for an exploratory literature review and how that can be used for a full literature review. The aim of the paper is to enable the use of topic modelling for researchers by presenting a step-by-step framework on a case and sharing a code template. The framework consists of three steps: pre-processing, topic modelling, and post-processing, with Latent Dirichlet Allocation as the topic model. The framework enables large collections of papers to be reviewed in a transparent, reliable, faster, and reproducible way.


2020 ◽  
Vol 02 ◽  
Author(s):  
Luis Meneses

The Social Media Engine draws on interactive computer-mediated technologies and the increased impact, readership, and altmetrics of open access repositories, fostering public engagement, open social scholarship, and social knowledge creation by matching readers with publications. In this paper I explore the possibilities of integrating a search engine that ranks its results according to trends in social media with large-scale open access repositories. Ultimately, this discussion aims to explore the implications of creating tools that emphasize the connections between documents, which can themselves be treated as objects of study.

