CamCutter: Impromptu Vision-Based Cross-Device Application Sharing

2019 ◽  
Vol 31 (6) ◽  
pp. 539-554
Author(s):  
Takuma Hagiwara ◽  
Kazuki Takashima ◽  
Morten Fjeld ◽  
Yoshifumi Kitamura

As the range of handheld, mobile, and desktop devices expands and worldwide demand for collaborative application tools increases, there is a growing need for faster, impromptu cross-device application sharing that keeps up with workplace requirements for on-site or remote collaboration. To address this, we have developed CamCutter, a cross-device interaction technique that enables a user to quickly select and share an application running on another screen using the camera of a handheld device. The technique accurately identifies the targeted application on a display using our adapted computer vision algorithm, system architecture, and software implementation, allowing impromptu, real-time, synchronized application sharing between devices. For desktop and meeting-room set-ups, we performed a technical evaluation measuring accuracy and speed of migration. For a single-user reading task and a collaborative composition task, we carried out a user study comparing our technique with commercial screen-sharing applications. The results showed both higher performance and stronger preference for our system. Finally, we discuss CamCutter's limitations and present insights for future vision-based cross-device application sharing.
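The vision-based selection step can be pictured as matching the handheld camera frame against screenshots of the candidate application windows and picking the best match. A minimal sketch of that idea, assuming OpenCV, ORB feature matching, and BGR images as inputs (an illustration of the general approach, not the authors' actual pipeline):

```python
# Illustrative sketch of vision-based window selection (not the CamCutter
# implementation): match the handheld camera frame against screenshots of
# candidate application windows and return the best-matching window.
import cv2

def identify_window(camera_frame, window_screenshots):
    """camera_frame: BGR image from the handheld camera.
    window_screenshots: list of BGR screenshots, one per candidate window.
    Returns the index of the best-matching window, or None."""
    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    frame_gray = cv2.cvtColor(camera_frame, cv2.COLOR_BGR2GRAY)
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    if des_f is None:
        return None

    best_index, best_score = None, 0
    for i, shot in enumerate(window_screenshots):
        shot_gray = cv2.cvtColor(shot, cv2.COLOR_BGR2GRAY)
        kp_s, des_s = orb.detectAndCompute(shot_gray, None)
        if des_s is None:
            continue
        matches = matcher.match(des_f, des_s)
        # Count only close descriptor matches as evidence for this window.
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_index, best_score = i, score
    return best_index
```

In practice such a matcher would run against live screen captures of each open window, so the score reflects the window's current contents rather than a stale thumbnail.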

2017 ◽  
Vol 18 (1) ◽  
pp. 68-93 ◽  
Author(s):  
Sriram Karthik Badam ◽  
Niklas Elmqvist

Going beyond the desktop to leverage novel devices such as smartphones, tablets, or large displays for visual sensemaking typically requires supporting extraneous operations for device discovery, interaction sharing, and view management. Such operations can be time-consuming and tedious, and they distract the user from the actual analysis. Embodied interaction models in these multi-device environments can take advantage of the natural interaction and physicality afforded by multimodal devices and help carry out these operations effectively during visual sensemaking. In this article, we present embodied cross-device interaction models for visualization spaces, derived from a user study that elicited from participants the actions they would use to trigger a portrayed effect of sharing visualizations (and therefore information) across devices. We then explore one common interaction style from this design elicitation in Visfer, a technique for effortlessly sharing visualizations across devices through the visual medium. More specifically, the technique involves taking pictures of visualizations, or rather of the QR codes augmenting them, on a display using the built-in camera of a handheld device. Our contributions include a conceptual framework for cross-device interaction and the Visfer technique itself, as well as transformation guidelines for exploiting the capabilities of each specific device and a web framework for encoding visualization components into animated QR codes, which cycle through multiple QR frames to embed more information. We also present results from a performance evaluation of the visual data transfer enabled by Visfer, and we end the article with application examples of the Visfer framework.
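The animated-QR transfer can be pictured as chunking a serialized visualization specification and emitting one QR frame per chunk, which the receiver reassembles after scanning. A minimal sketch, assuming the Python `qrcode` package and an invented `index/total|chunk` header convention (not Visfer's actual encoding):

```python
# Illustrative sketch of the animated-QR idea: split a serialized payload
# into chunks and render one QR frame per chunk; the receiver reassembles
# the scanned frames. The "index/total|chunk" header is an invented
# convention for this sketch, not Visfer's actual encoding.
import json
import qrcode

def encode_frames(payload, chunk_size=300):
    """Return a list of PIL images, one QR frame per chunk of the payload."""
    data = json.dumps(payload)
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [qrcode.make(f"{i}/{len(chunks)}|{chunk}")
            for i, chunk in enumerate(chunks)]

def decode_frames(scanned_texts):
    """Reassemble decoded QR frame texts (any order) back into the payload."""
    parts, total = {}, 0
    for text in scanned_texts:
        header, chunk = text.split("|", 1)
        index, total = (int(v) for v in header.split("/"))
        parts[index] = chunk
    return json.loads("".join(parts[i] for i in range(total)))
```

Cycling through the frames as an animation lets a single on-screen code carry far more data than one QR image could hold, at the cost of a slightly longer capture time on the handheld side.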


2010 ◽  
Vol 2 (3) ◽  
pp. 15-30 ◽  
Author(s):  
Andrew Greaves ◽  
Enrico Rukzio

Co-present viewing and sharing of images on mobile devices is a popular but cumbersome activity. First, it is difficult to show a picture to a group of friends on a small mobile phone screen; second, it is difficult to share media among multiple friends, for example because of Bluetooth's technical limitations, limited input, and repeated user interactions. This paper introduces the View & Share system, which allows mobile phone users to spontaneously form a group and engage in viewing and sharing images. One member of the group has a personal projector (e.g., a projector phone) that is used to view pictures collaboratively. View & Share supports sharing with a single user, multiple users, or all users; allows members to borrow the projected display; and provides a private viewing mode between co-located users. This paper reports on the View & Share system, its implementation, and an exploratory user study with 12 participants showing the advantages of our system and user feedback.


2013 ◽  
Vol 5 (4) ◽  
pp. 56-80
Author(s):  
Abdallah El Ali ◽  
Hamed Ketabdar

Around-Device Interaction (ADI) has expanded the interaction space of mobile devices to allow 3D gesture interaction around the device. In this paper, the authors look specifically at magnet-based ADI and its applied use in a playful, music-related context. Using three musical applications developed under the magnet-based ADI paradigm (Air Disc-Jockey, Air Guitar, Air GuitaRhythm), the authors investigate whether the paradigm can effectively support playful music composition and gaming on mobile devices. Based on results from a controlled user study (usability and user-experience questionnaire responses, users' direct feedback, and video observations), the authors 1) showed how magnet-based ADI can be used to create natural, playful, and creative mobile music interactions among both musically trained and non-musically trained users, and 2) distilled magnet-based ADI design considerations for optimizing playful and creative music interactions on today's smartphones.
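At its core, magnet-based ADI watches for deviations of the magnetometer signal from the ambient magnetic field as a hand-held or ring-mounted magnet moves around the device. A minimal sketch of that detection step, with placeholder thresholds and the sensor stream assumed to arrive sample by sample (not the authors' actual signal processing):

```python
# Illustrative sketch of magnet-based around-device gesture detection:
# flag a gesture when the magnetometer magnitude deviates strongly from a
# running baseline of the ambient field. Thresholds are placeholder values.
import math

class MagnetGestureDetector:
    def __init__(self, threshold_ut=80.0, alpha=0.02):
        self.baseline = None      # slowly adapting estimate of the ambient field (uT)
        self.threshold = threshold_ut
        self.alpha = alpha        # baseline adaptation rate

    def update(self, x, y, z):
        """Feed one magnetometer sample (microtesla); return True on a gesture."""
        magnitude = math.sqrt(x * x + y * y + z * z)
        if self.baseline is None:
            self.baseline = magnitude
            return False
        deviation = abs(magnitude - self.baseline)
        # Only adapt the baseline when no magnet is nearby, so gestures
        # do not get absorbed into the ambient-field estimate.
        if deviation < self.threshold:
            self.baseline += self.alpha * (magnitude - self.baseline)
            return False
        return True
```

The slow baseline adaptation keeps the detector robust to drift in the local magnetic environment while still reacting to the much larger disturbance a nearby magnet produces.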


Author(s):  
Jinmiao Huang ◽  
Rahul Rai

We introduce an intuitive gesture-based interaction technique for creating and manipulating simple three-dimensional (3D) shapes. Specifically, the developed interface uses a low-cost depth camera to capture the user's hand gestures as input, maps different gestures to system commands, and generates 3D models from midair 3D sketches (as opposed to traditional two-dimensional (2D) sketches). Our primary contribution is an intuitive gesture-based interface that enables novice users to rapidly construct conceptual 3D models. Our development extends current work by proposing both design and technical solutions to the challenges of a gestural modeling interface for conceptual 3D shapes. Preliminary user study results suggest that the developed framework is intuitive to use and able to create a variety of conceptual 3D models.
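One way to picture the sketch-to-model step is to fit a simple solid to the midair stroke, for example by extruding a roughly planar, closed 3D stroke along its estimated normal. A minimal NumPy sketch under those assumptions (an illustration only, not the paper's reconstruction method):

```python
# Illustrative sketch of turning a closed midair stroke into a simple 3D
# solid by extruding it along its estimated plane normal. Assumes the
# stroke points are roughly planar and ordered; not the paper's method.
import numpy as np

def extrude_stroke(points, height=0.1):
    """points: (N, 3) array of a closed midair stroke; returns (vertices, faces)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Estimate the stroke's plane normal as the direction of least variance.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[2]
    top = pts + height * normal        # extruded copy of the stroke
    vertices = np.vstack([pts, top])
    n = len(pts)
    faces = []
    for i in range(n):
        j = (i + 1) % n
        # Two triangles per side wall of the resulting prism.
        faces.append((i, j, n + j))
        faces.append((i, n + j, n + i))
    return vertices, faces
```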


2019 ◽  
Author(s):  
Mauro Carlos Pichiliani ◽  
Prasun Dewan ◽  
Celso Massaki Hirata

Nowadays, there is little support for developers to transform single-user applications into collaborative ones in the mobile domain. We present Lacomo, a new software layer for building collaborative mobile applications using accessibility, screen sharing, and application rewriting technologies, which reduce the cost of prototyping collaboration features and thereby increase the range of supported applications without requiring deep application knowledge. We compare it to an ad hoc approach: users working with Lacomo performed a testing task faster, with less effort, fewer errors, and a higher completion rate.
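The general pattern behind retrofitting collaboration onto single-user applications is to capture UI events on one device (for example via accessibility services), forward them to peers, and replay them locally. A minimal socket-based sketch of that pattern, with an invented event format (this illustrates the generic idea only, not Lacomo's architecture):

```python
# Illustrative sketch of a generic event-mirroring pattern for retrofitted
# collaboration: captured UI events are serialized, forwarded to a peer,
# and replayed there. The event format is invented for this sketch.
import json
import socket

def forward_event(peer_address, event):
    """Send one captured UI event, e.g. {'type': 'tap', 'x': 10, 'y': 20}, to a peer."""
    with socket.create_connection(peer_address, timeout=2.0) as conn:
        conn.sendall(json.dumps(event).encode("utf-8") + b"\n")

def replay_events(listen_port, apply_event):
    """Accept forwarded events and hand each one to a local replay callback."""
    with socket.create_server(("", listen_port)) as server:
        conn, _ = server.accept()
        with conn, conn.makefile("r", encoding="utf-8") as stream:
            for line in stream:
                apply_event(json.loads(line))
```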


Author(s):  
Vikram S. Vyawahare ◽  
Richard T. Stone

Kinesthetic haptic devices, due to their typically small workspaces, have limited reach in virtual environments. To overcome this limitation, a new interaction technique, Bimanual Stretched String Control of Haptic Workspace Mapping (BS-SCHWM), is developed using a unique combination of spatial interaction devices: a kinesthetic haptic device (Phantom Omni®) and a magnetically tracked device (Razer Hydra), each held in one of the user's hands. The technique is implemented in the domain of virtual assembly. The virtual assembly simulation implemented in this research is based on a physically based modeling approach using the Voxmap PointShell library. Immersive stereo vision and spatial interaction devices enable natural interactions with the CAD models within the virtual assembly environment. The BS-SCHWM technique uses scene motion to map the haptic device's workspace to different parts of the scene and provides a means of controlling the direction and speed of scene motion. A bimanual cursor helps the user visualize the bimanual interaction paradigm. The ability to transport objects using the technique is also implemented. Schemes for measuring task completion and metrics for analyzing interaction characteristics are designed. A preliminary evaluation of the BS-SCHWM technique, with a comparative analysis of its characteristics against an existing unimanual technique for haptic workspace expansion, was carried out through a within-subject user study. Participants were screened for normal visual acuity, stereopsis, and manual dexterity. Analysis of the generated data provides good indicators for evaluating hypotheses regarding participant performance, ease of use, hand motion, and intuitiveness.
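The workspace mapping can be pictured as a scene offset that drifts in a direction and at a speed steered by the tracked device, while the scaled haptic stylus position is added on top to yield the scene-space cursor. A minimal sketch of such a mapping, with placeholder gains and device poses as assumed inputs (not the paper's exact scheme):

```python
# Illustrative sketch of mapping a small haptic workspace onto a larger
# virtual scene: a scene offset drifts in a direction and at a speed set by
# the second, tracked device, and the scaled haptic stylus position is
# applied on top of that offset. Gains are placeholder values.
import numpy as np

class WorkspaceMapper:
    def __init__(self, scale=4.0, max_speed=0.5):
        self.scene_offset = np.zeros(3)  # where the haptic workspace sits in the scene
        self.scale = scale               # workspace-to-scene scaling of stylus motion
        self.max_speed = max_speed       # cap on scene motion speed (scene units / s)

    def update(self, stylus_pos, steer_direction, steer_amount, dt):
        """stylus_pos: haptic stylus position in the device frame (metres).
        steer_direction: unit vector derived from the tracked device's pose.
        steer_amount: value in [0, 1], e.g. how far a trigger is pulled."""
        speed = min(max(steer_amount, 0.0), 1.0) * self.max_speed
        self.scene_offset = self.scene_offset + speed * dt * np.asarray(steer_direction)
        # Scene-space cursor = drifting offset + scaled stylus position.
        return self.scene_offset + self.scale * np.asarray(stylus_pos)
```

Keeping the drift under explicit user control, rather than triggering it at workspace edges, is what lets the non-dominant hand "stretch" the reachable region while the dominant hand keeps fine haptic control.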

