Encountering Difference: Images of Otherness in Contemporary Spanish Film

Hispania ◽  
2015 ◽  
Vol 98 (3) ◽  
pp. 570-582 ◽  
Author(s):  
Andrea Meador Smith ◽  
Sarah Cox Campbell


2020 ◽  
Vol 8 (1) ◽  
pp. 105-125
Author(s):  
Carlos de Pablos-Ortega

The main aim of the study is to ascertain contrastively, in English and Spanish, how directive speech acts are represented in film discourse. For the purpose of the investigation, the directive speech acts of 24 films, 12 in English and 12 in Spanish, were extracted and analysed. A classification taxonomy, inspired by previous research, was created in order to categorize the different types of directive speech acts and determine their level of (in)directness. The results show that indirectness is more widely represented in the English than in the Spanish film scripts, thus confirming the assertion that being indirect is a distinctive feature of English native speakers (Grundy, 2008). This research makes a valuable contribution to the exploration of speech acts in filmspeak and informs the existing local grammar descriptions of the linguistic patterns of directive speech acts.


2021 ◽  
Vol 7 (2) ◽  
pp. 27
Author(s):  
Dieter P. Gruber ◽  
Matthias Haselmann

This paper proposes a new machine vision method for testing the quality of semi-transparent automotive illuminant components. Difference images of Frangi-filtered surface images are used to enhance defect-like image structures. To distinguish permitted structures from defective ones, morphological features are extracted and used to compute a nearest-neighbor-based anomaly score. The experiments demonstrate that defects occurring on transparent illuminant parts can be segmented in this way. The method proved fast and accurate and is therefore also suited to in-production testing.
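The abstract gives no code, but the nearest-neighbor anomaly score it mentions can be sketched in a few lines of NumPy: each candidate region's morphological feature vector is scored by its distance to the closest feature vector extracted from known defect-free parts. The function name, the feature dimensionality, and the toy data below are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nn_anomaly_score(features, reference, k=1):
    """Score each candidate feature vector by the Euclidean distance
    to its k-th nearest neighbour among defect-free reference vectors.
    Larger score = less similar to allowed structures = more anomalous."""
    # pairwise distances: (n_candidates, n_reference)
    d = np.linalg.norm(features[:, None, :] - reference[None, :, :], axis=2)
    d.sort(axis=1)                 # ascending per candidate
    return d[:, k - 1]             # distance to k-th nearest reference

# toy example: defect-free reference features cluster near the origin;
# candidate 0 resembles them, candidate 1 is a clear outlier ("defect")
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 0.1, size=(50, 4))
candidates = np.vstack([np.zeros(4), np.full(4, 5.0)])
scores = nn_anomaly_score(candidates, reference)
```

A thresholded version of `scores` would then separate allowed structures from defective ones; the paper's Frangi-filtered difference images would supply the regions from which such features are extracted.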


2021 ◽  
Vol 11 (12) ◽  
pp. 5563
Author(s):  
Jinsol Ha ◽  
Joongchol Shin ◽  
Hasil Park ◽  
Joonki Paik

Action recognition requires accurate analysis of the action elements in a video clip as well as of their temporal ordering. Solving these two sub-problems requires learning both spatio-temporal information and the temporal relationships between different action elements. Existing convolutional neural network (CNN)-based action recognition methods have focused on learning only spatial or temporal information, without considering the temporal relations between action elements. In this paper, we create short-term pixel-difference images from the input video and feed them to a bidirectional exponential moving average sub-network to analyze the action elements and their temporal relations. The proposed method consists of: (i) generation of RGB and differential images; (ii) extraction of deep feature maps using an image classification sub-network; (iii) weight assignment to the extracted feature maps using a bidirectional exponential moving average sub-network; and (iv) late fusion with a three-dimensional convolutional (C3D) sub-network to improve the accuracy of action recognition. Experimental results show that the proposed method outperforms existing baseline methods. In addition, the proposed network takes only 0.075 seconds per action class, which makes it suitable for high-speed and real-time applications such as abnormal action classification, human–computer interaction, and intelligent visual surveillance.
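Two of the ingredients above are simple enough to illustrate outside a deep network: short-term pixel-difference images are just absolute differences of consecutive frames, and a bidirectional exponential moving average combines a forward and a backward recursive smoothing pass so every time step sees context from both directions. The sketch below is a minimal NumPy illustration of those two operations only; the function name, the smoothing factor `alpha`, and the random toy video are assumptions, and the paper's actual sub-network learns its weights rather than using a fixed formula.

```python
import numpy as np

def bidirectional_ema(frames, alpha=0.5):
    """Average of a forward and a backward exponential moving average
    over a sequence, giving each step two-sided temporal context."""
    fwd = np.empty_like(frames)
    bwd = np.empty_like(frames)
    fwd[0] = frames[0]
    for t in range(1, len(frames)):                # forward pass
        fwd[t] = alpha * frames[t] + (1 - alpha) * fwd[t - 1]
    bwd[-1] = frames[-1]
    for t in range(len(frames) - 2, -1, -1):       # backward pass
        bwd[t] = alpha * frames[t] + (1 - alpha) * bwd[t + 1]
    return 0.5 * (fwd + bwd)

# toy video: 8 RGB frames of 4x4 pixels
video = np.random.default_rng(1).random((8, 4, 4, 3))
# short-term pixel-difference images from consecutive frames
diffs = np.abs(np.diff(video, axis=0))             # 7 difference images
smoothed = bidirectional_ema(diffs)
```

In the paper these smoothed difference representations would be fused with deep feature maps; here the example only shows that the difference sequence keeps one fewer frame than the input and that the bidirectional average preserves its shape.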


2010 ◽  
Vol 42 (11) ◽  
pp. 2945-2963 ◽  
Author(s):  
Sarah E. Blackwell