Toward Data-Driven Collaborative Dialogue Systems: The JILDA Dataset

2021 ◽  
Vol 7 (1 | 2) ◽  
pp. 67-90
Author(s):  
Irene Sucameli ◽  
Alessandro Lenci ◽  
Bernardo Magnini ◽  
Manuela Speranza ◽  
Maria Simi

Author(s):  
Kevin K. Bowden ◽  
Shereen Oraby ◽  
Amita Misra ◽  
Jiaqi Wu ◽  
Stephanie Lukin ◽  
...  

2021 ◽  
Author(s):  
Anish Acharya ◽  
Suranjit Adhikari ◽  
Sanchit Agarwal ◽  
Vincent Auvray ◽  
Nehal Belgamwar ◽  
...  

Author(s):  
Peter Henderson ◽  
Koustuv Sinha ◽  
Nicolas Angelard-Gontier ◽  
Nan Rosemary Ke ◽  
Genevieve Fried ◽  
...  

Argumentation ◽  
2021 ◽  
Author(s):  
Olena Yaskorska-Shah

Current formal dialectical models postulate normative rules that enable discussants to conduct dialogical interactions without committing fallacies. Though the rules for conducting a dialogue are supposed to apply to interactions between actual arguers, they are without exception theoretically motivated. This creates a gap between model and reality, because dialogue participants typically leave important content-related elements implicit. Therefore, analysts cannot readily relate normative rules to actual debates in ways that will be empirically confirmable. This paper details a new, data-driven method for describing discussants’ actual reply structures, wherein corpus studies serve to acknowledge the complexity of natural argumentation (itself understood as a function of context). Rather than refer exclusively to propositional content as an indicator of arguing pro/contra a given claim, the proposed approach to dialogue structure tracks the sequence of dialogical moves itself. This arguably improves the applicability of theoretical dialectical models to empirical data, and thus advances the study of dialogue systems.


2020 ◽  
Vol 34 (09) ◽  
pp. 13622-13623
Author(s):  
Zhaojiang Lin ◽  
Peng Xu ◽  
Genta Indra Winata ◽  
Farhad Bin Siddique ◽  
Zihan Liu ◽  
...  

We present CAiRE, an end-to-end generative empathetic chatbot designed to recognize user emotions and respond in an empathetic manner. Our system adapts the Generative Pre-trained Transformer (GPT) to the empathetic response generation task via transfer learning. CAiRE is built primarily to focus on empathy integration in fully data-driven generative dialogue systems. We create a web-based user interface which allows multiple users to asynchronously chat with CAiRE. CAiRE also collects user feedback and continues to improve its response quality by discarding undesirable generations via active learning and negative training.
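The feedback mechanism described in this abstract, discarding undesirable generations via active learning and negative training, can be pictured as a buffer that routes user-rated responses into separate training pools. The class, rating scale, and threshold below are illustrative assumptions for a minimal sketch, not CAiRE's implementation:

```python
class FeedbackBuffer:
    """Routes user-rated responses into a pool for ordinary likelihood
    fine-tuning (good responses) and a pool for negative training
    (undesirable generations to be penalized)."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold  # assumed cutoff on a 0..1 rating scale
        self.positive = []          # responses to reinforce
        self.negative = []          # responses to penalize

    def add(self, context, response, rating):
        pool = self.positive if rating >= self.threshold else self.negative
        pool.append((context, response))

    def training_batches(self):
        # Positive pairs would get the usual likelihood objective; negative
        # pairs would receive an unlikelihood-style penalty.
        return {"likelihood": self.positive, "unlikelihood": self.negative}


buf = FeedbackBuffer()
buf.add("How are you?", "I'm here for you. Do you want to talk?", 0.9)
buf.add("How are you?", "ERROR ERROR ERROR", 0.1)
batches = buf.training_batches()
```

In a full system, the `unlikelihood` pool would feed a negative-training loss that lowers the model's probability of reproducing the flagged responses.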


2020 ◽  
Vol 34 (05) ◽  
pp. 7472-7479
Author(s):  
Hengyi Cai ◽  
Hongshen Chen ◽  
Cheng Zhang ◽  
Yonghao Song ◽  
Xiaofang Zhao ◽  
...  

Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly. The noise and uneven complexity of query-response pairs impede the learning efficiency and effectiveness of neural dialogue generation models. Moreover, there is as yet no unified measurement of dialogue complexity, which spans multiple attributes: specificity, repetitiveness, relevance, etc. Inspired by how humans learn to converse, with children progressing from easy dialogues to complex ones and dynamically adjusting their learning pace, in this paper we first analyze five dialogue attributes to measure dialogue complexity from multiple perspectives on three publicly available corpora. Then, we propose an adaptive multi-curricula learning framework to schedule a committee of the organized curricula. The framework is established upon the reinforcement learning paradigm and automatically chooses among the curricula as training evolves, according to the learning status of the neural dialogue generation model. Extensive experiments conducted on five state-of-the-art models demonstrate its learning efficiency and effectiveness with respect to 13 automatic evaluation metrics and human judgments.
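The committee scheduling idea can be pictured as a bandit problem: each curriculum (one per dialogue attribute) is an arm, and the scheduler shifts sampling probability toward curricula that currently yield the largest improvement in the dialogue model. The EXP3-style scheduler below, including its reward signal, learning rate, and the restriction to the three attributes named in the abstract, is a hedged illustration rather than the paper's exact algorithm:

```python
import math
import random

class CurriculumScheduler:
    """Exponential-weights (EXP3-style) scheduler over a committee of
    curricula: curricula whose batches most improve the dialogue model
    gain sampling probability."""

    def __init__(self, curricula, lr=0.1):
        self.curricula = list(curricula)
        self.lr = lr
        self.log_weights = {c: 0.0 for c in self.curricula}

    def probabilities(self):
        # Softmax over log-weights gives the sampling distribution.
        z = sum(math.exp(w) for w in self.log_weights.values())
        return {c: math.exp(w) / z for c, w in self.log_weights.items()}

    def choose(self):
        r, acc = random.random(), 0.0
        for c, p in self.probabilities().items():
            acc += p
            if r <= acc:
                return c
        return self.curricula[-1]

    def update(self, curriculum, reward):
        # Importance-weighted reward update, as in EXP3.
        p = self.probabilities()[curriculum]
        self.log_weights[curriculum] += self.lr * reward / p


random.seed(0)
sched = CurriculumScheduler(["specificity", "repetitiveness", "relevance"])
for _ in range(200):
    chosen = sched.choose()
    # Assumed reward: validation-score gain after training one step on the
    # chosen curriculum; here "relevance" is pretended to help most.
    reward = 1.0 if chosen == "relevance" else 0.1
    sched.update(chosen, reward)
```

After a few hundred rounds, the scheduler concentrates most of its probability mass on the curriculum with the highest simulated reward, mimicking the adaptive choice of curricula according to the model's learning status.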


2018 ◽  
Vol 9 (1) ◽  
pp. 1-49 ◽  
Author(s):  
Iulian Vlad Serban ◽  
Ryan Lowe ◽  
Peter Henderson ◽  
Laurent Charlin ◽  
Joelle Pineau

During the past decade, several areas of speech and language understanding have witnessed substantial breakthroughs from the use of data-driven models. In the area of dialogue systems, the trend is less obvious, and most practical systems are still built through significant engineering and expert knowledge. Nevertheless, several recent results suggest that data-driven approaches are feasible and quite promising. To facilitate research in this area, we have carried out a broad survey of publicly available datasets suitable for data-driven learning of dialogue systems. We discuss important characteristics of these datasets, how they can be used to learn diverse dialogue strategies, and their other potential uses. We also examine methods for transfer learning between datasets and the use of external knowledge. Finally, we discuss the appropriate choice of evaluation metrics for the learning objective.

