linguistic regularity
Recently Published Documents

TOTAL DOCUMENTS: 13 (FIVE YEARS: 2)
H-INDEX: 1 (FIVE YEARS: 0)

2021 ◽  
pp. 136-150

The article addresses topical problems of modern linguistics concerning the status of linguistic units in general, and of simple and complex sentences in particular, as signs of a nominative and communicative nature, as well as their structural-semantic and functional-communicative features and their constituent parts, such as the subject, the predicate, the object, the adverbial modifier, the introductory part and the introductory component, which the author characterizes, defines and classifies in a distinctive way. Proceeding from an integral approach to linguistic phenomena in the modern directions of linguistics that make up its anthropocentric paradigm, the author states his position on a number of controversial issues faced by specialists in the field of syntax. Approaching these issues from a comparative-typological point of view, the author argues that, like any significant unit of a language, a sentence and all its types and varieties should be considered, from both a constructive and a communicative-pragmatic standpoint, as integral monolithic signs like the word. Building on this general linguistic regularity, the author takes a new approach to those units that perform the syntactic function of a part of a sentence and its extenders, establishing and revealing their substantive status within sentences of three modern languages: English, representing a predominantly analytical type; Uzbek, representing a predominantly agglutinative type; and Russian, representing a predominantly inflectional type. 
According to the author, there is a universal law whereby any significant linguistic unit used in a sentence, alone or in combination with another unit, can and must function as a specific syntactic part of it; consequently, a sentence cannot contain a member that lacks a syntactic function. On this basis the author characterizes and classifies the members of the sentence and their extenders from linguo-cognitive and linguo-pragmatic points of view, in contrast to the traditional approach, according to which the extenders of the sentence are its parts while the extenders of the parts cannot be members of the sentence. The article also attempts to clarify the role and functions of the subject and predicate in the constructive structure of the sentence and their relationships with the other parts of the sentence. As a result, a new taxonomy of the simple sentence has been worked out, depending on the ability of the subject and predicate to create their own "peaks". It is argued that a correct solution to the problems raised in the article contributes to improving the learning and teaching of languages and the process of translation from one language into another.


2021 ◽  
pp. 1-15
Author(s):  
Caterina Suitner ◽  
Anne Maass ◽  
Eduardo Navarrete ◽  
Magdalena Formanowicz ◽  
Boyka Bratanova ◽  
...  

Abstract The spatial agency bias predicts that people whose native language is rightward written will predominantly envisage action along the same direction. Two mechanisms contribute jointly to this asymmetry: (a) an embodied process related to writing/reading; (b) a linguistic regularity according to which sentence subjects (typically the agent) tend to precede objects (typically the recipient). Here we test a novel hypothesis in relation to the second mechanism, namely, that this asymmetry will be most pronounced in languages with rigid word order. A preregistered study on 14 European languages (n = 420) varying in word order flexibility confirmed a rightward bias in drawings of interactions between two people (agent and recipient). This bias was weaker in more flexible languages, confirming that embodied and linguistic features of language interact in producing it.


2020 ◽  
Author(s):  
Hassan Khamis El-Malkh

Using an analytical method, the research presents an approach that answers the question of how the community of Arab grammarians responded, in structuring Arabic grammar, to the view that the Arabic language consists of regular usage habits at the level of the nation, the group, or the individual. It shows that, after describing the Arabic language, the grammarians succeeded in sorting these habits according to their degree of linguistic regularity, accommodating most dialectal customs within the rulings of grammatical permissibility as a form of linguistic tolerance, and treating the teaching of grammar as a pivotal tool in forming a nearly unified general grammatical opinion, even though some of its rulings carry shades of anomalies on which no analogy can be based. The research concludes that, after the era of citation (iḥtijāj), grammatical correctness became an educational outcome pointing to a nearly unified vision of the rules governing rightness in Arabic grammar, even if the grammarians differ in their interpretation. Keywords: Arabic grammar; language habits; dialects.


2019 ◽  
Author(s):  
Zachary N. Flamholz ◽  
Lyle H. Ungar ◽  
Gary E. Weissman

Abstract
Rationale: Word embeddings are used to create vector representations of text data, but not all embeddings appropriately capture clinical information, are free of protected health information, and are computationally accessible to most researchers.
Methods: We trained word embeddings on published case reports because their language mimics that of clinical notes, the manuscripts are already de-identified by virtue of being published, and the corpus is much smaller than the large, publicly available datasets used elsewhere. We tested the performance of these embeddings across five clinically relevant tasks and compared the results to embeddings trained on a large Wikipedia corpus, on all publicly available manuscripts, and on notes from the MIMIC-III database, using fastText, GloVe, and word2vec and different embedding dimensions. Tasks included clinical applications of lexicographic coverage, semantic similarity, clustering purity, linguistic regularity, and mortality prediction.
Results: The embeddings trained on published case reports performed as well as, if not better than, those trained on other corpora on most tasks. The embeddings trained on all published manuscripts had the most consistent performance across all tasks but required a corpus with 100 times as many tokens as the corpus comprised of only case reports. Embeddings trained on the MIMIC-III dataset had small but marginally better scores on the clustering tasks, which were also based on clinical notes from the MIMIC-III dataset. Embeddings trained on the Wikipedia corpus, although containing almost twice as many tokens as all available published manuscripts, performed poorly compared to those trained on medical and clinical corpora.
Conclusion: Word embeddings trained on freely available published case reports performed well on most clinical tasks, are free of protected health information, and are small compared to commonly used embeddings trained on larger clinical and non-clinical corpora. The optimal corpus, dimension size, and embedding model for a given task involve trade-offs in privacy, reproducibility, performance, and computational resources.
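The "linguistic regularity" task mentioned in this abstract is conventionally evaluated with vector-arithmetic analogies (a is to b as c is to ?), scored by cosine similarity. A minimal sketch of that evaluation, using hypothetical two-dimensional toy vectors rather than real fastText/GloVe/word2vec embeddings:

```python
import math

# Hypothetical toy "embeddings" chosen for illustration; real embeddings
# have hundreds of dimensions and are learned from a corpus.
vectors = {
    "man":   [1.0, 0.0],
    "woman": [-1.0, 0.0],
    "king":  [1.0, 1.0],
    "queen": [-1.0, 1.0],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via b - a + c, excluding the query words."""
    target = [vb - va + vc for va, vb, vc in zip(vectors[a], vectors[b], vectors[c])]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

print(analogy("man", "king", "woman"))  # prints "queen"
```

An embedding scores well on the regularity task when the nearest neighbor of the offset vector matches the expected answer across a benchmark set of such analogies.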


Linguistics ◽  
2019 ◽  
Author(s):  
Thanasis Georgakopoulos

A semantic map is a method for visually representing cross-linguistic regularity or universality in semantic structure. This method has proved attractive to typologists because it provides a convenient graphical display of the interrelationships between meanings or functions across languages, while at the same time differentiating what is universal from what is language-specific. The semantic map model was initially conceived to describe patterns of polysemy (or, more generally, of co-expression) in grammatical categories. However, several studies have shown that it can be fruitfully extended to lexical items and even constructions, suggesting that any type of meaning can be integrated in a map. The main idea of the method is that the spatial arrangement of the various meanings reflects their degree of (dis)similarity: the more similar the meanings, the closer they are placed, in accordance with the so-called connectivity hypothesis. Within the semantic map tradition, closeness has taken different forms depending on the approach adopted. In classical semantic maps (alternative terms: "first generation," "implicational," "connectivity" maps), the relation between meanings is represented as a line. This is the graph-based approach. In proximity maps (alternative terms: "similarity," "second generation," "statistical," "probabilistic" maps), the distance between two meanings in space (represented as points) indicates the degree of their similarity. In this scale- or distance-based approach, the maps are constructed using multivariate statistical techniques, including the family of methods known as multidimensional scaling (MDS). Both classical and proximity maps have been widely used, although the latter have recently gained interest and popularity under the assumption that they can cope with large data more efficiently than classical semantic maps. However, classical semantic maps continue to be useful for studies aiming to discover universal semantic structures. 
Most importantly, classical maps can integrate information about directionality of change by drawing an arrow on the line connecting two meanings or functions. Beyond the choice between the two types of maps, one of the issues that has sparked debate and critical reflection among researchers is the universal relevance of semantic maps. The main question that these researchers address is whether semantic maps reflect the global geography of the human mind. Another much discussed issue is the identification of the factors that increase the accuracy of semantic maps in a way that allows for valid cross‐linguistic generalizations. Such factors include the choice of a representative language sample, the quality of the collected cross‐linguistic material, and the establishment of valid cross-linguistic comparators. Acknowledgments: The author wishes to thank one anonymous reviewer for their useful comments. For discussion of the material in this article, the author is grateful to Stéphane Polis.
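On a classical (graph-based) map, the connectivity hypothesis mentioned above is mechanically checkable: the meanings co-expressed by a single form must occupy a connected subgraph of the map. A minimal sketch, using a hypothetical miniature map whose nodes and edges are illustrative rather than taken from any published map:

```python
from collections import deque

# Hypothetical miniature semantic map: nodes are meanings, edges connect
# closely related meanings (the "lines" of a classical map).
semantic_map = {
    "recipient": {"addressee"},
    "addressee": {"recipient", "direction"},
    "direction": {"addressee", "purpose"},
    "purpose":   {"direction"},
}

def is_connected(meanings, graph):
    """Return True if `meanings` form a connected subgraph of `graph`,
    i.e., the connectivity hypothesis holds for a form with these meanings."""
    meanings = set(meanings)
    if not meanings:
        return True
    start = next(iter(meanings))
    seen, queue = {start}, deque([start])
    while queue:  # breadth-first search restricted to the chosen meanings
        node = queue.popleft()
        for neighbor in graph[node] & meanings:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen == meanings

# Adjacent meanings satisfy the hypothesis...
print(is_connected({"direction", "purpose"}, semantic_map))  # prints True
# ...but a form skipping the meanings between two nodes would violate it.
print(is_connected({"recipient", "purpose"}, semantic_map))  # prints False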

