The use of natural language processing in computer-assisted language instruction

1988 ◽  
Vol 22 (2) ◽  
pp. 99-110 ◽  
Author(s):  
Alan Bailin ◽  
Philip Thomson


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Fridah Katushemererwe ◽  
Andrew Caines ◽  
Paula Buttery

Abstract: This paper describes an endeavour to build natural language processing (NLP) tools for Runyakitara, a group of four closely related Bantu languages spoken in western Uganda. In contrast with major world languages such as English, for which corpora are comparatively abundant and NLP tools are well developed, computational linguistic resources for Runyakitara are in short supply. We therefore first need to collect corpora for these languages before we can proceed to the design of a spell-checker, grammar-checker and applications for computer-assisted language learning (CALL). We explain how we are collecting primary data for a new Runya Corpus of speech and writing, we outline the design of a morphological analyser, and discuss how we can use these new resources to build NLP tools. We are initially working with Runyankore–Rukiga, a closely-related pair of Runyakitara languages, and we frame our project in the context of NLP for low-resource languages, as well as CALL for the preservation of endangered languages. We put our project forward as a test case for the revitalization of endangered languages through education and technology.
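The abstract does not specify the analyser's internals; as a rough, hypothetical sketch, a rule-based morphological analyser for an agglutinative Bantu language can be approximated by stripping noun-class prefixes before lexicon lookup (the affix table and glosses below are illustrative placeholders, not actual project data):

```python
# Toy rule-based morphological analyser: strip a noun-class prefix,
# then look the remaining stem up in a small lexicon.
# Affix classes and glosses here are illustrative placeholders.
PREFIXES = {"aba": "class2", "omu": "class1", "eki": "class7"}
STEMS = {"ntu": "person", "kozi": "worker"}

def analyse(word):
    """Return (noun_class, stem_gloss) candidates for a surface form."""
    results = []
    for prefix, noun_class in PREFIXES.items():
        if word.startswith(prefix):
            stem = word[len(prefix):]
            if stem in STEMS:
                results.append((noun_class, STEMS[stem]))
    return results

print(analyse("abantu"))  # prefix 'aba' + stem 'ntu'
```

A production analyser would of course need full affix ordering, stem alternations, and a real lexicon; this only illustrates the lookup pattern.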


2003 ◽  
Vol 17 (5) ◽  
Author(s):  
Anne Vandeventer Faltin

This paper illustrates the usefulness of natural language processing (NLP) tools for computer assisted language learning (CALL) through the presentation of three NLP tools integrated within CALL software for French. These tools are (i) a sentence structure viewer; (ii) an error diagnosis system; and (iii) a conjugation tool. The sentence structure viewer helps language learners grasp the structure of a sentence, by providing lexical and grammatical information. This information is derived from a deep syntactic analysis. Two different outputs are presented. The error diagnosis system is composed of a spell checker, a grammar checker, and a coherence checker. The spell checker makes use of alpha-codes, phonological reinterpretation, and some ad hoc rules to provide correction proposals. The grammar checker employs constraint relaxation and phonological reinterpretation as diagnosis techniques. The coherence checker compares the underlying "semantic" structures of a stored answer and of the learners' input to detect semantic discrepancies. The conjugation tool is a resource with enhanced capabilities in an electronic format, enabling searches from inflected and ambiguous verb forms.
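The abstract names alpha-codes and phonological reinterpretation as the spell checker's diagnosis techniques; as a much simpler stand-in, generating correction proposals can be sketched with plain string similarity against a lexicon (the French verb forms below are illustrative):

```python
import difflib

# Minimal sketch of correction-proposal generation. The system in the
# paper uses alpha-codes and phonological reinterpretation; here we
# merely rank lexicon entries by string similarity.
LEXICON = ["manger", "mange", "mangé", "manges", "mangent"]

def proposals(misspelled, n=3):
    """Return up to n correction candidates from the lexicon."""
    return difflib.get_close_matches(misspelled, LEXICON, n=n, cutoff=0.6)

print(proposals("mangee"))
```

Phonological reinterpretation would additionally map sound-alike spellings (e.g. learner renderings of nasal vowels) onto lexicon entries, which plain edit similarity cannot do.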


2015 ◽  
Vol 10 (5) ◽  
pp. 830-844 ◽  
Author(s):  
Kentaro Inui ◽  
Yotaro Watanabe ◽  
Kenshi Yamaguchi ◽  
Shingo Suzuki ◽  
...  

During times of disaster, local government departments and divisions need to communicate a broad range of information for disaster management to share an understanding of the changing situation. This paper addresses the issues of how to effectively use a computer database system to communicate disaster management information and how to apply natural language processing technology to reduce the human labor for databasing a vast amount of information. The database schema was designed based on analyzing a collection of real-life disaster management information and the specifications of existing standardized systems. Our data analysis reveals that our database schema sufficiently covers the information exchanged in a local government during the Great East Japan Earthquake. Our prototype system is designed so as to allow local governments to introduce it at a low cost: (i) the system’s user interface facilitates the operations for databasing given information, (ii) the system can be easily customized to each local municipality by simply replacing the dictionary and the sample data for training the system, and (iii) the system can be automatically adapted to each local municipality or each disaster incident through its capability of automatic learning from the user’s corrections to the system’s language processing outputs.
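As a hedged illustration of points (ii) and (iii), dictionary-based extraction plus learning from user corrections can be sketched as follows (the facility names and report text are invented, not from the paper's data):

```python
# Sketch of dictionary-driven field extraction for disaster-report
# records. Replacing the dictionary customises the extractor for a
# different municipality; user corrections extend it automatically.
FACILITIES = {"Central Elementary School", "City Hospital"}

def extract_facility(report):
    """Return the first known facility mentioned in a report, if any."""
    for name in FACILITIES:
        if name in report:
            return name
    return None

def learn_from_correction(corrected_name):
    """Adaptation step: add a facility name supplied by a user fix."""
    FACILITIES.add(corrected_name)

msg = "Water outage reported near the North Gymnasium"
assert extract_facility(msg) is None   # unknown before the correction
learn_from_correction("North Gymnasium")
print(extract_facility(msg))           # recognised afterwards
```

The paper's system presumably retrains a statistical component from corrections; a plain dictionary update is the simplest form of the same feedback loop.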


Author(s):  
Monica Ward

Intelligent Computer-Assisted Language Learning (ICALL) involves using tools and techniques from computational linguistics and Natural Language Processing (NLP) in the language learning process. It is an inherently complex endeavour and is multi-, inter-, and trans-disciplinary in nature. Often these tools and techniques are designed for tasks and purposes other than language learning, and this makes their adaptation and use in the CALL domain difficult. For Less-Resourced Languages (LRLs), it can be even more challenging for CALL researchers to adapt or incorporate NLP into CALL artefacts. This paper reports on how two existing NLP resources for Irish, a morphological analyser and a parser, were used to develop an app for Irish. The app, Irish Word Bricks (IWB), was adapted from an existing CALL app – Word Bricks (Mozgovoy & Efimov, 2013). Without this ‘joining the blocks together’ approach, the development of the IWB app would certainly have taken longer, may not have been as efficient or effective, and may not even have been accomplished at all.


Author(s):  
John Nerbonne

This article examines the application of natural language processing to computer-assisted language learning (CALL), including the history of work in this field over the last thirty-five years, and focuses on current developments and opportunities. The term CALL always refers to programs designed to help people learn foreign languages. CALL is a large field, much larger than computational linguistics. This article outlines the areas of CALL to which computational linguistics (CL) can be applied. CL programs process natural languages such as English and Spanish, and the techniques are therefore often referred to as natural language processing (NLP). NLP is enlisted in several ways in CALL: to provide lemmatized access to corpora for advanced learners seeking subtleties unavailable in grammars and dictionaries; to provide morphological analysis and subsequent dictionary access for words unknown to readers; and to parse user input and diagnose morphological and syntactic errors.
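A minimal sketch of the lemmatized corpus access mentioned above, assuming a toy lemma table in place of a real morphological analyser:

```python
from collections import defaultdict

# Toy lemmatized concordance: inflected forms are mapped to lemmas so
# a learner's query retrieves sentences containing any inflection.
LEMMA = {"went": "go", "goes": "go", "going": "go", "go": "go"}
corpus = ["she went home", "he goes often", "they talk a lot"]

index = defaultdict(list)
for i, sentence in enumerate(corpus):
    for token in sentence.split():
        index[LEMMA.get(token, token)].append(i)

def concordance(lemma):
    """Return every corpus sentence containing any form of the lemma."""
    return [corpus[i] for i in index[lemma]]

print(concordance("go"))  # matches both 'went' and 'goes'
```

A surface-form search for "go" would miss "went" entirely; indexing by lemma is what makes the corpus useful to a learner exploring usage patterns.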


2016 ◽  
Vol 9 (3) ◽  
pp. 49-67 ◽  
Author(s):  
Safa Ben Salem ◽  
Lilia Cheniti-Belcadhi ◽  
Rafik Braham ◽  
Nicolas Delestre

Computer-assisted assessment of short and open answers has attracted a great deal of work in recent years, driven by the need to evaluate learners' deep understanding of lesson concepts, which, according to most teachers, cannot be done with simple MCQ testing. In this paper we review the techniques underpinning such systems, describe currently available systems for marking short and open text answers, and propose a system that evaluates answers using Natural Language Processing. We compare the results obtained by human expert graders with those of the proposed system, and also compare the proposed system's results with those of some existing systems.
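The abstract does not fix a particular scoring method; as a deliberately simple sketch, a short answer can be compared to a reference answer by token overlap (Jaccard similarity), a crude stand-in for the NLP techniques such systems actually use:

```python
import re

# Baseline short-answer scorer: Jaccard overlap between the token sets
# of the learner's answer and a reference answer. Real systems add
# synonym handling, syntax, and semantic matching on top of this.
def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def score(answer, reference):
    """Return a similarity score in [0, 1]."""
    a, r = tokens(answer), tokens(reference)
    return len(a & r) / len(a | r) if a | r else 0.0

ref = "Photosynthesis converts light energy into chemical energy"
print(round(score("plants convert light into chemical energy", ref), 2))
```

The obvious failure mode, and the reason the reviewed systems go further, is that "convert" and "converts" do not match here; lemmatization and synonym resolution close that gap.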


2021 ◽  
pp. 6-11
Author(s):  
Brendon Albertson

A Computer-Assisted Language Learning (CALL) application, TextMix, was developed as a proof-of-concept for applying Natural Language Processing (NLP) sentence chunking techniques to creating ‘sentence scramble’ learning tasks. TextMix addresses limitations of existing applications for creating sentence scrambles by using NLP to parse and scramble syntactic components of sentences, while connecting with Application Programming Interfaces (APIs) to provide repeated exposure to authentic sentences in the context of texts such as Wikipedia articles. In addition to identifying a novel application of NLP and APIs in CALL, this project highlights the need for teacher-friendly interfaces that prioritize pedagogically useful ways of chunking text.
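A minimal sketch of the sentence-scramble idea, with chunk boundaries supplied by hand rather than produced by an NLP parser as in TextMix:

```python
import random

# Build a 'sentence scramble' task from syntactic chunks rather than
# single words. TextMix derives the chunks with NLP parsing; here the
# chunk boundaries are given by hand for illustration.
def make_scramble(chunks, seed=0):
    """Shuffle chunks, retrying until the order differs from the original."""
    scrambled = chunks[:]
    rng = random.Random(seed)
    while scrambled == chunks:
        rng.shuffle(scrambled)
    return scrambled

chunks = ["The quick brown fox", "jumps", "over the lazy dog"]
task = make_scramble(chunks)
print(task)
```

Scrambling whole phrases rather than individual words is the pedagogical point: the learner reassembles constituents, not a bag of tokens.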

