Votic and Ingrian Core Lexicon in the Finnic Context: Swadesh Lists of Five Related Varieties

2019 ◽  
Vol 55 (2) ◽  
pp. 81
Author(s):  
F Rozhanskiy ◽  
M Zhivlov
2019 ◽  
Vol 116 (15) ◽  
pp. 7397-7402 ◽  
Author(s):  
Mark Pagel ◽  
Mark Beaumont ◽  
Andrew Meade ◽  
Annemarie Verkerk ◽  
Andreea Calude

A puzzle of language is how speakers come to use the same words for particular meanings, given that there are often many competing alternatives (e.g., “sofa,” “couch,” “settee”), and there is seldom a necessary connection between a word and its meaning. The well-known process of random drift—roughly corresponding in this context to “say what you hear”—can cause the frequencies of alternative words to fluctuate over time, and it is even possible for one of the words to replace all others, without any form of selection being involved. However, is drift alone an adequate explanation of a shared vocabulary? Darwin thought not. Here, we apply models of neutral drift, directional selection, and positive frequency-dependent selection to explain over 417,000 word-use choices for 418 meanings in two natural populations of speakers. We find that neutral drift does not in general explain word use. Instead, some form of selection governs word choice in over 91% of the meanings we studied. In cases where one word dominates all others for a particular meaning—such as is typical of the words in the core lexicon of a language—word choice is guided by positive frequency-dependent selection—a bias that makes speakers disproportionately likely to use the words that most others use. This bias grants an increasing advantage to the common form as it becomes more popular and provides a mechanism to explain how a shared vocabulary can spontaneously self-organize and then be maintained for centuries or even millennia, despite new words continually entering the lexicon.
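The contrast the abstract draws between neutral drift ("say what you hear") and positive frequency-dependent selection can be sketched with a toy Wright-Fisher-style simulation. This is not the authors' actual model; the function name, variant names, and parameter values are illustrative assumptions.

```python
import random

def simulate_word_choice(n_speakers=200, n_generations=200, bias=0.0, seed=1):
    """Toy simulation of competing words for one meaning.

    bias = 0.0 reproduces neutral drift: each speaker copies a variant
    in proportion to its current frequency.  bias > 0.0 adds positive
    frequency-dependent selection: a variant's adoption probability is
    proportional to frequency ** (1 + bias), so the common form gains a
    disproportionate, self-reinforcing advantage.
    """
    rng = random.Random(seed)
    variants = ["sofa", "couch", "settee"]
    population = [rng.choice(variants) for _ in range(n_speakers)]
    for _ in range(n_generations):
        freqs = {w: population.count(w) / n_speakers for w in set(population)}
        weights = {w: f ** (1 + bias) for w, f in freqs.items()}
        total = sum(weights.values())
        words = list(weights)
        probs = [weights[w] / total for w in words]
        population = rng.choices(words, probs, k=n_speakers)
    return {w: population.count(w) for w in set(population)}
```

With `bias > 0`, runs typically fix on a single variant much faster than under pure drift, mirroring the paper's point that conformity bias can spontaneously produce and then maintain a shared vocabulary.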


Author(s):  
Herbert Ernst Wiegand

Abstract
By means of a review of the use of the term gloss over the past three decades it becomes evident that the term gloss needs to be explained more precisely. After a brief discussion of the practice of lexicographic glossing it is clear that the main reason for the use of the method of glossing lies in the fact that many words from the core lexicon have numerous cotext-specific senses and, for various reasons, restricted ways of being used. Because of the lack of an overview of the different types of glosses, a typology of glosses with regard to their structure and topic is developed. In addition, different types of elementary and extended glosses are distinguished from sequences of glosses. Furthermore, different types of semantic glosses are distinguished, e.g. semantically non-identified semantic glosses and (up to now uninvestigated) semantically identified semantic glosses with numerous subtypes, as well as a variety of further types of semantic-pragmatic glosses. The latter types have not yet been investigated, although numerous subtypes can be distinguished, e.g. pragmatic-commenting semantic glosses and pragmatic-commented semantic glosses. It is furthermore shown that the functionally equivalent post-glosses and inner glosses occupy different places in a typology of lexicographic text segments because they are structurally different. For extended glosses, different types of gloss segments are distinguished, e.g. pragmatic-commenting, semantic-commenting and -identifying, as well as glossing gloss segments with numerous subtypes. A further section is dedicated to gloss addressing: gloss addressing, gloss-internal and gloss-excurrent gloss segment addressing are presented, and different gloss-accompanying addressing constellations are distinguished. All relations in which glosses occur are then investigated. Finally, the different non-hierarchical as well as hierarchical gloss structures are presented.


2019 ◽  
Vol 41 (01) ◽  
pp. 010-019
Author(s):  
Davida Fromm ◽  
Margaret Forbes ◽  
Audrey Holland ◽  
Brian MacWhinney

Abstract
AphasiaBank is a shared, multimedia database for the study of communication in aphasia. This article describes a variety of discourse measurement tools and teaching resources available at the AphasiaBank website. The discourse measurement tools include main concept analysis, core lexicon checklists, correct information unit computation techniques, and other automated analyses using the CLAN program. These tools can be used to measure a variety of aspects of language production for assessment as well as treatment evaluation and clinical research purposes. Importantly, they are intended to help make the discourse analysis process more efficient and reliable. Teaching resources include an online tutorial on aphasia, videos of typical behaviors seen in aphasia, group treatment videos, classroom activities, tutorial screencasts, and conference posters. These resources can be used for a variety of clinical and educational purposes. The AphasiaBank website is part of the larger TalkBank project, which provides many other shared databases and resources that are relevant to professionals interested in communication and communication disorders.


2018 ◽  
Vol 54 (1) ◽  
pp. 62-78 ◽  
Author(s):  
Hana Kim ◽  
Stephen Kintz ◽  
Kristen Zelnosky ◽  
Heather Harris Wright

2020 ◽  
Vol 29 (1) ◽  
pp. 101-110
Author(s):  
Hana Kim ◽  
Heather Harris Wright

Purpose: General agreement exists in the literature that clinicians struggle with quantifying discourse-level performance in clinical settings. Core lexicon analysis has gained recent attention as an alternative tool that may address difficulties that clinicians face. Although previous studies have demonstrated that core lexicon measures are an efficient means of assessing discourse in persons with aphasia (PWAs), the psychometric properties of core lexicon measures have yet to be investigated. The purpose of this study was (a) to examine concurrent validity by using microlinguistic and macrolinguistic measures and (b) to demonstrate interrater reliability without transcription by raters with minimal training.
Method: Eleven language samples collected from PWAs were used in this study. Concurrent validity was assessed by correlating performance on the core lexicon measure with microlinguistic and macrolinguistic measures. For interrater reliability, 4 raters used the core lexicon checklists to score audio-recorded discourse samples from 10 PWAs.
Results: The core lexicon measures significantly correlated with microlinguistic and macrolinguistic measures. Acceptable interrater reliability was obtained among the 4 raters.
Conclusions: Core lexicon analysis is potentially useful for measuring word retrieval impairments at the discourse level. It may also be a feasible solution because it reduces the amount of preparatory work for discourse assessment.
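The checklist-based scoring the abstract describes can be sketched in a few lines. This is a minimal illustration, not the study's instrument: the checklist items and the exact-match rule are assumptions (in practice a rater ticks items while listening and handles inflected forms by judgment).

```python
def core_lexicon_score(sample_words, checklist):
    """Credit each checklist item once if it occurs anywhere in the
    discourse sample; return the number of core words produced and the
    proportion of the checklist covered.  Matching is exact after
    lowercasing, so 'threw' does not credit the item 'throw'.
    """
    produced = {w.lower() for w in sample_words}
    hits = sum(1 for w in checklist if w.lower() in produced)
    return hits, hits / len(checklist)

# hypothetical 5-item checklist and a short picture-description sample
checklist = ["boy", "girl", "dog", "ball", "throw"]
sample = "the boy threw the ball to the dog".split()
print(core_lexicon_score(sample, checklist))  # (3, 0.6)
```

Because scoring needs only a checklist and the audio, not a full transcript, it supports the paper's point that the measure reduces preparatory work for discourse assessment.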


Author(s):  
Daiho Kitaoka

This paper demonstrates the repair strategies that apply when the place features of the special moras in Japanese (the second half of a long vowel, moraic nasals, and the first half of a double consonant) fail to be specified in the usual manner. Based on observations of marked environments (loanwords, a word game called Sakasa Kotoba, and blending), I posit three repair processes: (i) over-application of regular structures from the core lexicon, (ii) irregular structures produced through The Emergence of the Unmarked (TETU), and (iii) game-specific structures. I illustrate that even in marked environments, these repair strategies make output structures as unmarked as possible. Drawing mainly on the Sakasa Kotoba data, I further discuss the process of morification and the underlying representations of the special moras.
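For readers unfamiliar with the special moras, kana-level mora segmentation can be sketched as follows. This is a rough illustration independent of the paper's analysis; the helper name and the small-kana set are mine.

```python
# small glide/vowel kana that attach to the preceding kana within one mora;
# deliberately excludes っ/ッ (sokuon) and ん/ン (moraic nasal), which are
# special moras of their own, as is the long-vowel mark ー
SMALL = set("ゃゅょぁぃぅぇぉゎャュョァィゥェォヮ")

def moras(word):
    """Segment a Japanese word written in kana into moras.

    Each of the special moras counts as an independent mora: the sokuon
    (first half of a double consonant), the moraic nasal, and the second
    half of a long vowel (here, the mark ー or a plain vowel kana).
    """
    out = []
    for ch in word:
        if ch in SMALL and out:
            out[-1] += ch  # glide joins the preceding kana's mora
        else:
            out.append(ch)
    return out

print(moras("がっこう"))  # ['が', 'っ', 'こ', 'う'] — gakkou, 4 moras
```

Note how っ and the long-vowel continuation each occupy a mora slot despite carrying no place specification of their own, which is exactly what makes them targets for the repair strategies discussed above.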

