Real-time annotation of video objects on tablet computers

Author(s): João Silva, Diogo Cabral, Carla Fernandes, Nuno Correia

Author(s): Jako Olivier

Effective interaction and real-time feedback are challenges in any classroom, and mobile technologies may serve as a supplement in a so-called blended context. This article investigates the role that a bring-your-own-device approach, compared with an approach in which similar tablet computers are provided, plays in interaction and feedback in a third-year university linguistics class. In this way the study addresses the gap in the literature on the implementation of blended learning in higher education in South Africa, particularly regarding the bring-your-own-device approach. A case study design was used, and data from two groups of third-year students were gathered by means of qualitative methods. User statistics from a learning management system were collected, but the main data consisted of the lecturer's observations and reflections, as well as student feedback obtained through short questionnaires. These data were analysed inductively to identify relevant themes and codes. Both the bring-your-own-device approach and the use of provided tablet computers could facilitate effective interaction and real-time feedback. Alongside these benefits, some limitations were identified in students' access and skills. In this specific context the bring-your-own-device approach appears to be the better option, but solutions will have to be customised for other contexts.


2013, Vol. 2013, pp. 1-21

Author(s): Petr Motlicek, Stefan Duffner, Danil Korchagin, Hervé Bourlard, Carl Scheffler, ...

We describe the design of a system consisting of several state-of-the-art real-time audio and video processing components enabling multimodal stream manipulation (e.g., automatic online editing for multiparty videoconferencing applications) in open, unconstrained environments. The underlying algorithms are designed to allow multiple people to enter, interact, and leave the observable scene with no constraints. They comprise continuous localisation of audio objects and its application for spatial audio object coding; detection and tracking of faces; estimation of head poses and visual focus of attention; detection and localisation of verbal and paralinguistic events; and the association and fusion of these different events. Combined, these components represent multimodal streams with audio objects and semantic video objects and provide semantic information for stream manipulation systems (such as a virtual director). Various experiments have been performed to evaluate the performance of the system. The obtained results demonstrate the effectiveness of the proposed design, the various algorithms, and the benefit of fusing different modalities in this scenario.
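As a minimal illustrative sketch only (not the paper's actual association/fusion model, which is considerably richer), the association of verbal events with tracked visual objects can be approximated by pairing timestamped events whose times fall within a small window; all names, thresholds, and the pairing rule below are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str   # "audio" or "video"
    label: str      # e.g. a speaker label or a face track ID
    time: float     # event timestamp in seconds

def fuse_events(audio_events, video_events, window=0.5):
    """Associate each audio event with every video event whose
    timestamp lies within `window` seconds of it (hypothetical
    nearest-in-time rule, used here only for illustration)."""
    fused = []
    for a in audio_events:
        for v in video_events:
            if abs(a.time - v.time) <= window:
                # Fused event carries both labels and the mean time.
                fused.append((a.label, v.label, (a.time + v.time) / 2))
    return fused

# Example: a detected verbal event near a tracked face is associated,
# yielding a semantic "this face is speaking" style pairing.
audio = [Event("audio", "speech:personA", 10.2)]
video = [Event("video", "face:track3", 10.4)]
print(fuse_events(audio, video))
```

A real system of this kind would additionally weigh spatial cues (audio source direction versus face position) and model uncertainty rather than relying on a fixed temporal threshold.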

