Literary Education and Digital Learning
Latest Publications


TOTAL DOCUMENTS

9
(FIVE YEARS 0)

H-INDEX

2
(FIVE YEARS 0)

Published By IGI Global

9781605669328, 9781605669335

Author(s):  
Lisa Lena Opas-Hänninen

This study investigates the expression of stance in Samuel Beckett’s prose work. Following Biber and Finegan (1989), a wide variety of stance markers is identified and counted in the texts. A multivariate statistical methodology is then used to analyze how these markers of stance interact in the texts. The results are plotted two-dimensionally to visualize the similarities and differences between the texts, and are also illustrated with examples from the texts. Some of the findings are a little surprising; a new tool is therefore used to plot the results three-dimensionally, enabling a better understanding of how stance is reflected and how the texts resemble and deviate from one another. Finally, the usefulness of this analysis is discussed.
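The pipeline described in this abstract — count stance markers per text, then project the resulting feature matrix into two dimensions — can be sketched as follows. This is an illustrative sketch only: the marker lists, texts, and the use of PCA here are assumptions for demonstration (the study itself follows Biber and Finegan’s much larger taxonomy and its own multivariate method).

```python
# Minimal sketch: count stance markers per 1,000 words, then project
# the texts-by-markers matrix onto two principal components.
# Marker lists and texts below are toy assumptions, not the study's data.
import numpy as np

STANCE_MARKERS = {
    "hedges": ["perhaps", "maybe", "possibly", "seem"],
    "certainty": ["certainly", "clearly", "undoubtedly", "must"],
}

def stance_profile(text):
    """Frequency of each marker category per 1,000 words."""
    words = text.lower().split()
    n = max(len(words), 1)
    return [sum(words.count(m) for m in markers) * 1000 / n
            for markers in STANCE_MARKERS.values()]

def pca_2d(matrix):
    """Project rows onto the first two principal components
    (the two-dimensional plot the abstract describes)."""
    X = np.asarray(matrix, dtype=float)
    X -= X.mean(axis=0)          # center each marker dimension
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :2] * S[:2]      # 2-D coordinates, one row per text

texts = [
    "perhaps it was maybe so it would seem perhaps",
    "certainly it was clearly so it must be certainly",
]
profiles = [stance_profile(t) for t in texts]
coords = pca_2d(profiles)
print(coords.shape)  # one 2-D point per text
```

Each row of `coords` can then be scattered on a plane, so that texts with similar stance profiles cluster together.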


Author(s):  
Patrick Juola

Although authorship attribution is simply the determination of who wrote a document by analysis of its content, it is a long-standing problem both in the humanities and in computational text analysis. While traditional methods involve identifying key aspects of style through close reading, new developments in computational science permit a more objective approach through the statistical analysis of superficial characteristics such as vocabulary and word choice. If a writer can be shown (statistically) to have a particular stylistic quirk (‘stylome’) that appears broadly across his or her writing, then other writings displaying that quirk are good candidates to be by the same author. The present chapter describes some of the statistical techniques used to make such judgments and presents one particular computer program (JGAAP) that is freely available for this purpose. This type of analysis is capable of determining authorship with relatively high accuracy. This potential creates significant implications for authorship questions across the humanities curriculum, as well as broader impacts in the world outside the academy. In light of these implications, I argue for the inclusion of more mathematics in the humanities curriculum.
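The statistical core of such attribution can be sketched very simply: represent each text by its relative frequencies of common function words, then attribute a disputed text to the known author whose profile is nearest. This is a hedged illustration in the spirit of stylometric distance measures such as Burrows’ Delta, not JGAAP’s actual implementation; the word list and texts are toy assumptions.

```python
# Toy stylometric attribution: function-word frequency profiles plus a
# nearest-profile decision. Word list, texts, and distance are
# illustrative assumptions, not JGAAP's method.
from collections import Counter

FUNCTION_WORDS = ["the", "and", "of", "to", "a", "in", "that", "it"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    n = max(len(words), 1)
    return [counts[w] / n for w in FUNCTION_WORDS]

def distance(p, q):
    """Mean absolute difference between two frequency profiles."""
    return sum(abs(a - b) for a, b in zip(p, q)) / len(p)

def attribute(disputed, candidates):
    """Return the candidate author whose profile is nearest."""
    dp = profile(disputed)
    return min(candidates,
               key=lambda name: distance(dp, profile(candidates[name])))

candidates = {
    "A": "the cat sat on the mat and the dog sat in the hall",
    "B": "it is a truth that a mind in want of it seeks it",
}
print(attribute("the bird sat on the fence and the cat watched",
                candidates))  # prints "A"
```

Real systems use hundreds of features and more robust distance measures, but the shape of the computation — feature vector per text, distance, nearest candidate — is the same.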


Author(s):  
Bill Louw

Until fairly recently, linguistics has been classified as a ‘science’ by definition, averral, and ideology rather than because of the uniformity of its practices across its many schools of thought. It is seldom the case in any discipline that a particular phenomenon begins to question that discipline’s raison d’être, withdraws the option and luxury of its often directionless and eclectic practices, and proceeds to force unwelcome and sweeping changes upon the discipline by beginning to dictate its method. This paper re-states its author’s earlier proofs as claims that collocation as instrumentation for meaning is a scientific fact. The burden of this proof has acquired a renewed, interdisciplinary urgency that makes this paper both timely and necessary. The claim for collocation as science is reinforced by a number of new discoveries, notably the fact that all devices are brought about by relexicalisation as a marked form, rather than by the purported markedness that is mentalist and hence merely averred.
Keywords: collocation; corpus; stylistics; instrumentation; delexicalisation; relexicalisation; science; empiricism; philosophy of language; chunking; context of situation; context of culture; worlds; intuition; subtext; symbolism; co-selection.
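Collocation as instrumentation can be made concrete with a standard corpus-linguistic measure: score co-occurring word pairs by pointwise mutual information (PMI), so that pairs which co-occur more often than chance stand out. The toy corpus and window size below are assumptions for demonstration only; they are not the author’s data or method.

```python
# Illustrative collocation scoring by pointwise mutual information (PMI):
# pairs that co-occur more often than their individual frequencies
# predict get high scores. Corpus and window are toy assumptions.
import math
from collections import Counter

def pmi_collocations(sentences, window=2):
    """Score word pairs co-occurring within a window by PMI."""
    word_counts = Counter()
    pair_counts = Counter()
    total = 0
    for s in sentences:
        words = s.lower().split()
        word_counts.update(words)
        total += len(words)
        for i, w in enumerate(words):
            for v in words[i + 1:i + 1 + window]:
                pair_counts[(w, v)] += 1
    scores = {}
    for (w, v), c in pair_counts.items():
        p_pair = c / total
        p_w = word_counts[w] / total
        p_v = word_counts[v] / total
        scores[(w, v)] = math.log2(p_pair / (p_w * p_v))
    return scores

corpus = ["strong tea was served",
          "strong tea again",
          "weak coffee was served"]
scores = pmi_collocations(corpus)
# ("strong", "tea") co-occurs every time either word appears, so it
# outscores looser pairings such as ("tea", "was").
```

Ranked over a large corpus, such scores are the kind of instrumental, empirical evidence for collocational meaning that the chapter argues for.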


Author(s):  
Lars Borin ◽  
Dimitrios Kokkinakis

In this chapter, the authors describe the development and application of language technology for intelligent information access to the content of digitized cultural heritage collections in the form of Swedish classical literary works. This technology offers sophisticated and flexible support functions to literary scholars and researchers. The authors focus on one kind of text processing technology (named entity recognition) and one research field (literary onomastics), but try to argue that the techniques involved are quite general and can be further developed in a number of directions. This way, the authors aim at supporting the users of digitized literature collections with tools that enable semantic search, browsing and indexing of texts. In this sense, the authors offer new ways for exploring the large volumes of literary texts being made available through national cultural heritage digitization projects.
Keywords: Language technology; Computational linguistics; Natural language processing; Literary onomastics; Named entity recognition; Corpus linguistics; Corpus annotation; Digital resources; Text technology; Cultural heritage
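At its simplest, the named entity recognition step described here can be sketched as a gazetteer lookup: match known names against the text and label each hit with its category. The authors’ production system is far richer (contextual rules, Swedish morphology); the gazetteer and sample sentence below are illustrative assumptions only.

```python
# Minimal gazetteer-based entity tagging: find known person and place
# names in a text and record their category and offset. The gazetteer
# and sentence are toy assumptions, not the authors' system.
import re

GAZETTEER = {
    "person": {"Nils", "Selma Lagerlöf"},
    "place": {"Stockholm", "Vimmerby"},
}

def tag_entities(text):
    """Return (entity, category, offset) triples found in the text."""
    hits = []
    for category, names in GAZETTEER.items():
        for name in names:
            for m in re.finditer(re.escape(name), text):
                hits.append((name, category, m.start()))
    return sorted(hits, key=lambda h: h[2])  # order by position

sentence = "Nils flew over Stockholm in Selma Lagerlöf's famous tale."
for entity, category, offset in tag_entities(sentence):
    print(f"{offset:3d} {category:7s} {entity}")
```

Tagged offsets like these are what makes the semantic search, browsing, and indexing mentioned above possible: a query for places in a novel reduces to filtering the annotations by category.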


Author(s):  
William L. Heller

In order to learn whether Shakespeare can be taught successfully in the elementary school, the author devised and implemented a unit designed to teach Macbeth to one fifth-grade class using dramatic activities, theatrical production, and technology integration. The work challenges the use of standardized testing as the final measure of student achievement. It demonstrates how Vygotsky’s (1978) zone of proximal development exposes the limitations of measuring only what students can demonstrate under testing conditions, and how Gardner’s (1993) Theory of Multiple Intelligences offers a variety of avenues for learning more effectively. This approach is identified with that of a reflective practitioner, and is designed to assist professionals who are looking for practical models for using Shakespeare’s plays in their classrooms. The underlying motive is to help bring them to a wider audience.


Author(s):  
Jon Saklofske

The purpose of this chapter is to discuss issues and solutions surrounding the incorporation of interactive video games into university-level literary education. A comparative use of participatory games alongside more traditional texts and critical ideas in the classroom will encourage engaged learning, promote multiple literacies, and facilitate awareness of the nature of reading and the operations of narrative across media forms. While obstacles and challenges to the use of digital games in the university classroom include technology, programming ability, time, budget and platform longevity, the author will demonstrate how, by heavily customising enCore Xpress, an open-source, web-based, multi-user database and constructing two interactive fictions based on Romantic period novels, he has been able to circumvent these difficulties, engage students as lucid players and builders, and support metacritical reflection.


Author(s):  
Stefan Hofer ◽  
René Bauer ◽  
Imre Hofmann

The Humanities, and cultural studies in particular, have traditionally been distinguished by the specialty of their scientific practices. Since the objects of their analyses can be broadly considered as meaningful texts, they usually emphasize hermeneutical, qualitative and discursive analytical procedures such as reading, text analysis, interpretation and comparison. The new media offer fresh possibilities in this field of research by permitting web-based discursive text interpretation for a community of scholars. In this chapter, the authors focus on the e-learning environment tEXtMACHINA, exploring the question of how these methodological particularities of the Humanities can be adequately accommodated by the new technical facilities. The didactic e-learning concept of tEXtMACHINA is based on the virtual simulation of scientific practices in class. By offering a set of techniques, such as highlighting text passages, communication tools, and the flexible combination of different media, all of which allow for the collaborative, discursive and analytical interpretation of texts, the environment enables students to acquire the practical and theoretical scientific competencies of their field in a blended learning setting.

