Experimental Performance Evaluation of Enhanced User Interaction Components for Web-Based Collaborative Extended Reality

2021 ◽  
Vol 11 (9) ◽  
pp. 3811
Author(s):  
Štefan Korečko ◽  
Marián Hudák ◽  
Branislav Sobota ◽  
Martin Sivý ◽  
Matúš Pleva ◽  
...  

COVID-19-related quarantine measures resulted in a significant increase in interest in online collaboration tools. These include virtual reality (VR) or, more generally, extended reality (XR) solutions. Shared XR allows activities such as presentations, personnel training or therapy to take place in a virtual space instead of a real one. To make online XR as accessible as possible, significant effort has been put into developing solutions that run directly in web browsers. One of the most recognized is the A-Frame software framework, created by the Mozilla VR team, which supports most contemporary XR hardware. In addition, an extension called Networked-Aframe allows multiple users to share virtual environments created with A-Frame in real time. In this article, we introduce and experimentally evaluate three components that extend the functionality of A-Frame and Networked-Aframe. The first extends Networked-Aframe with the ability to monitor and control users in a shared virtual scene. The second implements six-degrees-of-freedom motion tracking for smartphone-based VR headsets. The third brings hand-gesture support to the Microsoft HoloLens holographic computer. The evaluation was performed in a dedicated local network environment with 5, 10, 15 and 20 client computers, each representing one user in a shared virtual scene. Since the experiments were carried out both with and without the introduced components, the results presented here can also be regarded as a performance evaluation of A-Frame and Networked-Aframe themselves.
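The abstract does not say how per-client performance was summarized across the 5/10/15/20-client conditions. Purely as an illustration (the condition labels, sample values, and function name below are hypothetical, not from the paper), per-frame render times could be reduced to FPS statistics per client count along these lines:

```python
from statistics import mean, stdev

def summarize_fps(frame_times_ms):
    """Convert per-frame render times (ms) to frames-per-second statistics."""
    fps = [1000.0 / t for t in frame_times_ms]
    return {"mean_fps": mean(fps), "sd_fps": stdev(fps)}

# Hypothetical logs: client count -> frame times (ms) sampled on one client
conditions = {
    5:  [16.6, 16.8, 17.0, 16.7],
    20: [33.1, 34.0, 33.5, 33.8],
}

for n_clients, times in sorted(conditions.items()):
    stats = summarize_fps(times)
    print(f"{n_clients:>2} clients: {stats['mean_fps']:.1f} FPS "
          f"(sd {stats['sd_fps']:.2f})")
```

Comparing such summaries with and without the extra components would isolate the components' overhead from the baseline cost of A-Frame/Networked-Aframe.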

Author(s):  
Ivanka Petkova Veneva ◽  
Dimitar Chakarov ◽  
Mihail Tsveov ◽  
Dimitar Stefanov Trifonov ◽  
Evgeni Zlatanov ◽  
...  

An active orthosis (exoskeleton) is an assistive device with a wearable structure that corresponds to the natural motions of the human body. This chapter focuses on developing an active/assistive orthosis system (AOS) that enhances movement. The AOS design is inspired by the biological musculoskeletal system of the human upper and lower limbs and mimics the muscle-tendon-ligament structure. The exoskeleton structure includes left and right upper limbs, left and right lower limbs, and a central structure for the torso and waist, and provides support, balance, and control of the different body segments. The device was fabricated from light materials and is powered by pneumatic artificial muscles that provide more than fifteen degrees of freedom across the joints. The AOS can operate in three modes: as a motion tracking system exchanging data with virtual reality; as a haptic and rehabilitation device; and in assistive mode, with the active orthosis supporting impaired muscles.



2020 ◽  
pp. 67-73
Author(s):  
N.D. Yusubov ◽  
G.M. Abbasova

The accuracy of two-tool machining on automatic lathes is analyzed. The study uses full-factor models of the distortions and scattering fields of the machined dimensions, taking into account the flexibility of the technological system in six degrees of freedom, i.e., including angular displacements in the technological system. Possibilities for the design and control of two-tool setups are considered. Keywords: turning, cutting mode, two-tool setup, full-factor model, accuracy, angular displacement, control, calculation


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2234
Author(s):  
Sebastian Kapp ◽  
Michael Barz ◽  
Sergey Mukhametov ◽  
Daniel Sonntag ◽  
Jochen Kuhn

Currently, an increasing number of head-mounted displays (HMDs) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases for these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research, for example in the cognitive or educational sciences. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye tracking toolkit for reliable gaze data acquisition in AR, based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n = 21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is resting, which is on par with state-of-the-art mobile eye trackers.
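The abstract reports angular accuracy and precision but not how the toolkit computes them. A minimal sketch, assuming the common eye-tracking definitions (accuracy as the mean absolute angular offset between gaze and target, precision as the RMS of sample-to-sample angular differences; the function names are illustrative, not from the toolkit):

```python
import math

def angular_accuracy(offsets_deg):
    """Accuracy: mean absolute angular offset between gaze and target, in degrees."""
    return sum(abs(o) for o in offsets_deg) / len(offsets_deg)

def angular_precision_rms(samples_deg):
    """Precision: RMS of angular differences between successive gaze samples, in degrees.
    Smaller values mean less sample-to-sample scatter during a fixation."""
    diffs = [b - a for a, b in zip(samples_deg, samples_deg[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Under these definitions, the reported 0.83 degrees would be the offset averaged over fixation targets, and the 0.27 degrees the scatter of samples within fixations.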


2021 ◽  
Vol 21 (1) ◽  
pp. 1-32
Author(s):  
Vikram Mehta ◽  
Daniel Gooch ◽  
Arosha Bandara ◽  
Blaine Price ◽  
Bashar Nuseibeh

The emergence of ubiquitous computing (UbiComp) environments has increased the risk of undesired access to individuals' physical space or their information, anytime and anywhere, raising potentially serious privacy concerns. Individuals lack awareness and control of the vulnerabilities in everyday contexts and need support and care in regulating disclosures to their physical and digital selves. Existing GUI-based solutions, however, often feel physically interruptive, socially disruptive, time-consuming and cumbersome. To address such challenges, we investigate the user interaction experience and discuss the need for more tangible and embodied interactions for effective and seamless privacy management in everyday UbiComp settings. We propose the Privacy Care interaction framework, which is rooted in the literature of privacy management and tangible computing. Keeping users at the center, Awareness and Control are established as the core parts of our framework. This is supported by three interrelated interaction tenets: Direct, Ready-to-Hand, and Contextual. Direct refers to intuitiveness through metaphor usage. Ready-to-Hand supports granularity, non-intrusiveness, and ad hoc management, through periphery-to-center style attention transitions. Contextual supports customization through modularity and configurability. Together, they aim to provide an experience of embodied privacy care with varied interactions that are calming and yet actively empowering. The framework gives designers of such care a basis from which to generate effective tangible tools for privacy management in everyday settings. Through five semi-structured focus groups, we explore the privacy challenges faced by a sample of 15 older adults (aged 60+) across their cyber-physical-social spaces. The results show conformity to our framework, demonstrating the relevance of its facets to the design of privacy management tools in everyday UbiComp contexts.


Author(s):  
Alireza Marzbanrad ◽  
Jalil Sharafi ◽  
Mohammad Eghtesad ◽  
Reza Kamali

This is a report on the design, construction and control of "Ariana-I", an underwater remotely operated vehicle (ROV) built in the Shiraz University Robotics Lab. The ROV is equipped with roll, pitch, heading, and depth sensors, which provide sufficient feedback signals to give the system actuation in six degrees of freedom. Although its centers of gravity and buoyancy are positioned so that the Ariana-I ROV is self-stabilizing, the combination of sensors and speed-controlled drivers provides additional stability without operator involvement. Video is provided over an Ethernet link to the operation unit. Control commands and sensor feedback are transferred on an RS485 bus; the video signal, water leakage alarm, and battery charging wires run in the same multi-core cable. While simple PI controllers improve the pitch and roll stability of the system, various control schemes can be applied to heading in order to track different paths. The net weight of the ROV out of water is about 130 kg, with frame dimensions of 130 × 100 × 65 cm. The Ariana-I ROV is designed so that it can be equipped with different tools, such as mechanical arms, thanks to a microprocessor-based control system with bidirectional high-speed communication cables for online vision and the operation unit.
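The abstract mentions simple PI controllers for pitch and roll but gives no gains or implementation details. The following is only a generic discrete-time PI sketch of the kind of loop involved (the class name, gains, and time step are illustrative assumptions, not the paper's values):

```python
class PIController:
    """Discrete-time PI controller, e.g. for holding ROV pitch at a setpoint."""

    def __init__(self, kp, ki, dt):
        self.kp = kp          # proportional gain
        self.ki = ki          # integral gain
        self.dt = dt          # sampling period in seconds
        self.integral = 0.0   # accumulated error

    def update(self, setpoint, measurement):
        """Return the actuator command for one control step."""
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Illustrative use: correct a 1-degree pitch disturbance toward level (0 degrees)
pitch_pid = PIController(kp=2.0, ki=0.5, dt=0.1)
command = pitch_pid.update(setpoint=0.0, measurement=1.0)
```

The integral term is what lets such a controller cancel steady disturbances (e.g. a constant buoyancy offset) that a purely proportional controller would leave as a residual error.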


Author(s):  
Lee-Huang Chen ◽  
Kyunam Kim ◽  
Ellande Tang ◽  
Kevin Li ◽  
Richard House ◽  
...  

This paper presents the design, analysis and testing of a fully actuated modular spherical tensegrity robot for co-robotic and space exploration applications. Robots built from tensegrity structures (composed of purely tensile and compressive elements) have many potential benefits, including high robustness through redundancy, many degrees of freedom in movement and flexible design. However, to fully take advantage of these properties, a significant fraction of the tensile elements should be active, leading to a potential increase in complexity, messy cable and power routing, and increased design difficulty. Here we describe an elegant solution for a fully actuated tensegrity robot: the TT-3 (version 3) tensegrity robot, developed at UC Berkeley in collaboration with NASA Ames, is a lightweight, low-cost, modular, and rapidly prototyped spherical tensegrity robot. The robot is based on a ball-shaped six-bar tensegrity structure and features a unique modular rod-centered distributed actuation and control architecture. This paper presents the novel mechanism design, architecture and simulations of TT-3, the first untethered, fully actuated cable-driven six-bar tensegrity spherical robot ever built and tested for mobility. Furthermore, the paper discusses the controls and preliminary testing performed to observe the system's behavior and performance.


Author(s):  
A. N. Brysin ◽  
Yu. A. Zhuravleva ◽  
A. S. Mikaeva ◽  
S. A. Mikaeva

The article describes the SEM-3 electronic multifunctional adder for electricity metering. The authors give its technical characteristics, its design and its principle of operation. The adder is designed to monitor and account for generated and consumed electric energy and power directly at consumers, as well as in automated centralized metering and control systems, and is intended for round-the-clock operation. It can collect and transmit information over six independent serial interfaces. With a built-in GSM module, the adder provides bidirectional information exchange with remote devices via a cellular modem and transfers the accumulated data to the upper level of the automated electricity metering system. It also provides bidirectional information exchange with a PC over a local network via the built-in 10/100Base-T Ethernet interface.

