A Multilayer CARU Framework to Obtain Probability Distribution for Paragraph-Based Sentiment Analysis

2021 ◽  
Vol 11 (23) ◽  
pp. 11344
Author(s):  
Wei Ke ◽  
Ka-Hou Chan

Paragraph-based datasets are hard for a simple RNN to analyze, because long sequences suffer from long-term dependency problems. In this work, we propose a Multilayer Content-Adaptive Recurrent Unit (CARU) network for paragraph information extraction. In addition, we present a CNN-based model that acts as an extractor to explore and capture useful features in the hidden state, which represents the content of the entire paragraph. In particular, we introduce Chebyshev pooling at the end of the CNN-based extractor in place of maximum pooling. This projects the features into a probability distribution, providing an interpretable evaluation for the final analysis. Experimental results demonstrate the superiority of the proposed approach compared with state-of-the-art models.
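The abstract does not give the exact form of the Chebyshev pooling layer, but the idea of using Chebyshev's inequality to turn pooled features into a probability distribution can be sketched as follows. The weighting scheme here (scoring each channel by how far its peak deviates from its mean, then normalising) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def chebyshev_pool(features):
    """Pool a (channels, length) feature map into a probability distribution.

    Chebyshev's inequality bounds the probability that a random variable
    deviates from its mean by at least k standard deviations:
        P(|X - mu| >= k * sigma) <= 1 / k**2.
    Per channel, we measure k at the channel's peak and use 1 minus that
    (clipped) bound as a confidence score, then normalise the scores so
    they sum to one -- an interpretable alternative to max pooling.
    """
    mu = features.mean(axis=1)
    sigma = features.std(axis=1) + 1e-8
    k = np.abs(features.max(axis=1) - mu) / sigma   # peak's deviation in sigmas
    bound = np.minimum(1.0, 1.0 / np.maximum(k, 1e-8) ** 2)
    score = 1.0 - bound                              # sharper peak -> higher score
    return score / score.sum()                       # normalise to a distribution
```

The returned vector sums to one, so downstream sentiment analysis can read it directly as class probabilities.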

2020 ◽  
Vol 34 (05) ◽  
pp. 9122-9129
Author(s):  
Hai Wan ◽  
Yufei Yang ◽  
Jianfeng Du ◽  
Yanan Liu ◽  
Kunxun Qi ◽  
...  

Aspect-based sentiment analysis (ABSA) aims to detect targets (spans of consecutive words), aspects and sentiment polarities in text. Published datasets from SemEval-2015 and SemEval-2016 reveal that a sentiment polarity depends on both the target and the aspect. However, most existing methods predict sentiment polarities from either targets or aspects but not both, so they easily make wrong predictions. In particular, when the target is implicit, i.e., it does not appear in the given text, methods that predict sentiment polarities from targets do not work at all. To tackle these limitations of ABSA, this paper proposes a novel method for joint target-aspect-sentiment detection. It relies on a pre-trained language model and can capture the dependence on both targets and aspects for sentiment prediction. Experimental results on the SemEval-2015 and SemEval-2016 restaurant datasets show that the proposed method achieves high performance in detecting target-aspect-sentiment triples even in the implicit-target case; moreover, it outperforms the state-of-the-art methods on those subtasks of target-aspect-sentiment detection that they are able to handle.


Author(s):  
Salvatore Manfreda ◽  
Oscar Link ◽  
Alonso Pizarro

Building on recent contributions on the treatment of unsteady hydraulic conditions in the state-of-the-art scour literature, the theoretically derived probability distribution of bridge scour is introduced. The model is derived assuming a rectangular hydrograph of given duration and a random flood peak following a Gumbel distribution. A model extension to a more complex flood event, assuming a synthetic exponential hydrograph shape, is also presented. The mathematical formulation can be extended to any flood-peak probability distribution. The aim of the manuscript is to advance current bridge-design approaches by coupling hydrological, hydraulic, and erosional models in closed mathematical form.
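The paper derives the scour distribution in closed form; the same quantity can be approximated numerically by propagating Gumbel-distributed flood peaks through a scour relation. This is a minimal Monte Carlo sketch: the Gumbel parameters and the power-law scour relation `z = k * Q**m` are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

def scour_depth_samples(n=100_000, loc=500.0, scale=120.0,
                        k=0.02, m=0.6, seed=0):
    """Monte Carlo sketch of a bridge-scour depth distribution.

    Flood peaks Q (m^3/s) follow a Gumbel distribution, as assumed in the
    article for a rectangular hydrograph; the scour relation z = k * Q**m
    is a hypothetical power law used purely for illustration.
    """
    rng = np.random.default_rng(seed)
    q = np.maximum(rng.gumbel(loc, scale, size=n), 0.0)  # sampled peak discharge
    return k * q ** m                                     # scour depth (m)
```

Empirical quantiles of the returned samples then stand in for the theoretically derived distribution, and any other flood-peak distribution can be swapped in by changing the sampler.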


2020 ◽  
Vol 34 (06) ◽  
pp. 10352-10360
Author(s):  
Jing Bi ◽  
Vikas Dhiman ◽  
Tianyou Xiao ◽  
Chenliang Xu

Learning from Demonstrations (LfD) via Behavior Cloning (BC) works well on multiple complex tasks. However, a limitation of the typical LfD approach is that it requires expert demonstrations for all scenarios, including those in which the algorithm is already well-trained. The recently proposed Learning from Interventions (LfI) overcomes this limitation by using an expert overseer, who intervenes only when an unsafe action appears about to be taken. Although LfI significantly improves over LfD, the state-of-the-art LfI fails to account for the delay caused by the expert's reaction time and only learns short-term behavior. We address these limitations by 1) interpolating the expert's interventions back in time, and 2) splitting the policy into two hierarchical levels, one that generates sub-goals for the future and another that generates actions to reach those desired sub-goals. This sub-goal prediction forces the algorithm to learn long-term behavior while also being robust to the expert's reaction time. Our experiments show that LfI using sub-goals in a hierarchical policy framework trains faster and achieves better asymptotic performance than typical LfD.
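The first contribution, interpolating interventions back in time, can be sketched concretely: because the expert reacts with a delay, the steps just before the intervention are relabelled by blending the policy's original action toward the expert's corrective one. The linear blending weights below are an illustrative assumption, not the paper's exact scheme:

```python
import numpy as np

def backfill_interventions(actions, intervention_idx, expert_action, delay=3):
    """Hedged sketch of interpolating an expert intervention back in time.

    `actions` is the trajectory of policy actions, `intervention_idx` the
    step at which the expert intervened, and `delay` the assumed reaction
    time in steps.  The `delay` steps preceding the intervention are
    relabelled by linearly interpolating toward the expert's action.
    """
    relabelled = np.array(actions, dtype=float)
    start = max(0, intervention_idx - delay)
    for i, t in enumerate(range(start, intervention_idx + 1)):
        w = (i + 1) / (intervention_idx - start + 1)   # ramps from ~0 up to 1
        relabelled[t] = (1 - w) * relabelled[t] + w * expert_action
    return relabelled
```

Training on the relabelled trajectory attributes the correction to the states where the unsafe behavior actually began, rather than only to the (late) intervention step.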


2021 ◽  
Vol 4 ◽  
Author(s):  
Tiina Laamanen ◽  
Veera Norros ◽  
Sanna Suikkanen ◽  
Mikko Tolkkinen ◽  
Kristiina Vuorio ◽  
...  

Environmental DNA (eDNA) and other molecular-based approaches are revolutionizing the field of biomonitoring. These approaches undergo rapid modification, and it is crucial to develop best practices by sharing the newest information and knowledge. In our ongoing project we: assess the state of the art of eDNA methods at the Finnish Environment Institute SYKE; identify concrete next steps towards the long-term aim of implementing eDNA methods into environmental monitoring and biomonitoring; and promote information exchange on eDNA methods and advance future research efforts both within SYKE and with our national and international partners.
Scientific background. Well-functioning and intact natural ecosystems are essential for human well-being, provide a variety of ecosystem services and contain a high diversity of organisms. However, human activities such as eutrophication, pollution, land use or invasive species are threatening the state and functioning of ecosystems from local to global scales (e.g. Benateau et al. 2019; Reid et al. 2018; Vörösmarty et al. 2010). New molecular techniques in the field and in the laboratory have enabled sampling and identification of much of terrestrial, marine and freshwater biodiversity. These include environmental DNA (eDNA, e.g. Valentini et al. 2016), bulk-sample DNA metabarcoding approaches (e.g. Elbrecht et al. 2017) and targeted RNA-based methods (e.g. Mäki and Tiirola 2018). The eDNA technique uses DNA that organisms release into their environment, from which a signal of the organisms' presence in the system can be obtained.
For example, in aquatic ecosystems, eDNA is typically extracted from sediment or filtered water samples (e.g. Deiner et al. 2016); this approach is distinguished from bulk DNA metabarcoding, where organisms are identified directly from, e.g., complete biological monitoring samples (e.g. Elbrecht et al. 2017). Despite the demonstrated potential of environmental and bulk-sample DNA metabarcoding approaches in recent years, there are still significant bottlenecks to their routine use that need to be addressed (e.g. Pawlowski et al. 2020).
Methods and implementation. The project is divided into three work packages: WP1 Gathering existing knowledge, identifying knowledge gaps and proposing best practices; WP2 Roadmap to implementation; and WP3 eDNA monitoring pilot. See Fig. 1 for more details.


2021 ◽  
Vol 04 ◽  
Author(s):  
Diego Moreira Schlemper ◽  
Sérgio Henrique Pezzin

Self-healing coatings are intended to increase long-term durability and reliability and can be enabled by the presence of microcapsules containing a self-healing agent capable of interacting with the matrix and regenerating the system. This review article provides an overview of the state of the art, focusing on patents published in the field of microcapsule-based self-healing organic coatings since the early 2000s. Coatings for corrosion protection and the different self-healing approaches and mechanisms are also discussed, as well as future challenges and expectations for this kind of coating.


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Jiaxi Ye ◽  
Ruilin Li ◽  
Bin Zhang

Directed fuzzing is a practical technique that concentrates its testing energy on the path toward the target code areas while spending little on other, unconcerned components. It is a promising way to make better use of available resources, especially when testing large-scale programs. However, by observing the state-of-the-art directed fuzzing engine (AFLGo), we argue that there are two universal limitations: the balance problem between exploration and exploitation, and the blindness of mutation toward the target code areas. In this paper, we present a new prototype, RDFuzz, to address these two limitations. In RDFuzz, we first introduce a frequency-guided strategy in exploration and improve its accuracy by adopting branch-level instead of path-level frequency. Then, we introduce an input-distance-based evaluation strategy in the exploitation stage and present an optimized mutation to distinguish and protect the distance-sensitive input content. Moreover, an intertwined testing schedule is leveraged to perform exploration and exploitation in turn. We test RDFuzz on 7 benchmarks, and the experimental results demonstrate that RDFuzz is skilled at driving the program toward the target code areas and is not easily stuck by the balance problem between exploration and exploitation.
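The input-distance evaluation the abstract mentions follows the general AFLGo idea of scoring an input by how close its execution trace runs to the target code. A minimal sketch, assuming precomputed per-basic-block distances to the target (the function and data names are illustrative, not RDFuzz's actual interface):

```python
def seed_distance(trace, block_distances):
    """Sketch of an input-distance-based seed evaluation.

    `trace` is the sequence of basic blocks an input executed, and
    `block_distances` maps each block that can reach the target to its
    precomputed call-graph/CFG distance.  The seed's score is the mean
    distance over those blocks; smaller means closer to the target.
    """
    reachable = [block_distances[b] for b in trace if b in block_distances]
    if not reachable:
        return float("inf")          # the input never approaches the target
    return sum(reachable) / len(reachable)
```

During exploitation, the fuzzer would then prioritize seeds with the smallest distance and protect the input bytes whose mutation increases it.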


2018 ◽  
Vol 44 (4) ◽  
pp. 651-658
Author(s):  
Ralph Weischedel ◽  
Elizabeth Boschee

Though information extraction (IE) research has a history of more than 25 years, F1 scores remain low. Thus, one could question continued investment in IE research. In this article, we present three applications where information extraction of entities, relations, and/or events has been used, and note the common features that seem to have led to success. We also identify key research challenges whose solution seems essential for broader successes. Because a few practical deployments already exist and because breakthroughs on particular challenges would greatly broaden the technology's deployment, further R&D investments are justified.


2020 ◽  
Vol 10 (8) ◽  
pp. 2864 ◽  
Author(s):  
Muhammad Asad ◽  
Ahmed Moustafa ◽  
Takayuki Ito

Artificial Intelligence (AI) has been applied to solve various challenges of real-world problems in recent years. However, the emergence of new AI technologies has brought several problems, especially with regard to communication efficiency, security threats and privacy violations. Towards this end, Federated Learning (FL) has received widespread attention due to its ability to facilitate the collaborative training of local learning models without compromising the privacy of data. However, recent studies have shown that FL still consumes considerable amounts of communication resources, which are vital for updating the learning models. In addition, the privacy of data can still be compromised when the parameters of the local learning models are shared to update the global model. Towards this end, we propose a new approach, Federated Optimisation (FedOpt), to promote communication efficiency and privacy preservation in FL. To implement FedOpt, we design a novel compression algorithm, the Sparse Compression Algorithm (SCA), for efficient communication, and then integrate additively homomorphic encryption with differential privacy to prevent data from being leaked. Thus, the proposed FedOpt smoothly trades off communication efficiency and privacy preservation to accomplish the learning task. The experimental results demonstrate that FedOpt outperforms the state-of-the-art FL approaches. In particular, we consider three evaluation criteria: model accuracy, communication efficiency and computation overhead. We compare the proposed FedOpt with the baseline configurations and the state-of-the-art approaches, i.e., Federated Averaging (FedAvg) and Paillier-encryption-based privacy-preserving deep learning (PPDL), on all three evaluation criteria. The experimental results show that FedOpt is able to converge within fewer training epochs and a smaller privacy budget.
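The abstract does not spell out the Sparse Compression Algorithm, but its goal (shrinking the model update each client transmits) is commonly achieved by top-k sparsification. This is a generic sketch of that idea under the assumption that SCA is magnitude-based; it is not the paper's exact algorithm:

```python
import numpy as np

def topk_sparsify(grad, k):
    """Generic top-k gradient sparsification sketch.

    Keeps the k largest-magnitude entries of a local update and zeros the
    rest, so only k (index, value) pairs need to be transmitted -- and only
    those values would then need homomorphic encryption before upload.
    """
    flat = grad.ravel()
    if k >= flat.size:
        return grad.copy()
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the top-k magnitudes
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(grad.shape)
```

In practice the dropped residual is usually accumulated locally and added to the next round's update, so the compression stays unbiased over time.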


Physics ◽  
2020 ◽  
Vol 2 (1) ◽  
pp. 49-66 ◽  
Author(s):  
Vyacheslav I. Yukalov

The article presents the state of the art and reviews the literature on the long-standing problem of the possibility for a sample to be at the same time solid and superfluid. Theoretical models, numerical simulations, and experimental results are discussed.

