Real-Time Application for Generating Multiple Experiences from 360° Panoramic Video by Tracking Arbitrary Objects and Viewer’s Orientations

2020 ◽  
Vol 10 (7) ◽  
pp. 2248
Author(s):  
Syed Hammad Hussain Shah ◽  
Kyungjin Han ◽  
Jong Weon Lee

We propose a novel authoring and viewing system for generating multiple experiences from a single 360° video and efficiently transferring these experiences to the user. An immersive video contains far more interesting information within the 360° environment than a normal video, and there can be multiple interesting areas within a 360° frame at the same time. Owing to the narrow field of view of virtual reality head-mounted displays, a user can only view a limited area of a 360° video at once. Hence, our system aims to generate multiple experiences based on interesting information in different regions of a 360° video and to transfer these experiences efficiently to prospective users. The proposed system generates experiences in two ways: (1) recording the user’s experience as the user watches a panoramic video through a virtual reality head-mounted display, and (2) tracking an arbitrary interesting object in a 360° video selected by the user. For tracking an arbitrary object, we developed a pipeline around an existing simple object tracker to adapt it to 360° videos; the tracking runs in real time on a CPU with high precision. Moreover, to the best of our knowledge, no existing system can generate a variety of experiences from a single 360° video and let the viewer watch one piece of 360° visual content from various interesting perspectives in immersive virtual reality. Furthermore, we provide an adaptive focus-assistance technique for efficiently transferring the generated experiences to other users in virtual reality. In this study, a technical evaluation of the system and a detailed user study were performed to assess the system’s applicability. The findings showed that a single piece of 360° multimedia content can generate multiple experiences that transfer among users. Moreover, sharing 360° experiences enabled viewers to watch multiple interesting contents with less effort.
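The abstract does not detail how tracker output is mapped around the 360° frame, but the basic coordinate bookkeeping any such pipeline needs — converting between equirectangular pixel positions and (yaw, pitch) viewing angles, with yaw wrapping at the panorama seam — can be sketched as below. The function names and conventions (yaw in [−180°, 180°], pitch in [−90°, 90°]) are illustrative assumptions, not the authors’ implementation:

```python
def pixel_to_angles(u, v, width, height):
    """Equirectangular pixel (u, v) -> (yaw, pitch) in degrees.
    u runs left-to-right across the full 360°, v top-to-bottom across 180°."""
    yaw = (u / width - 0.5) * 360.0
    pitch = (0.5 - v / height) * 180.0
    return yaw, pitch

def angles_to_pixel(yaw, pitch, width, height):
    """(yaw, pitch) in degrees -> equirectangular pixel, wrapping yaw
    so that a tracked object crossing the seam stays continuous."""
    u = ((yaw / 360.0 + 0.5) % 1.0) * width
    v = (0.5 - pitch / 180.0) * height
    return u, v
```

The yaw wraparound is the part a conventional tracker lacks: an object leaving the right edge of the frame re-enters on the left, so the pipeline must reproject rather than clamp.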

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 397
Author(s):  
Qimeng Zhang ◽  
Ji-Su Ban ◽  
Mingyu Kim ◽  
Hae Won Byun ◽  
Chang-Hun Kim

We propose a low-asymmetry interface to improve the presence of non-head-mounted-display (non-HMD) users in shared virtual reality (VR) experiences with HMD users. The low-asymmetry interface ensures that the HMD and non-HMD users’ perceptions of the VR environment are almost identical; that is, the point-of-view (PoV) asymmetry and behavior asymmetry between HMD and non-HMD users are reduced. Our system comprises a portable mobile device as a visual display to provide a changing PoV for the non-HMD user, and a walking simulator as an in-place walking-detection sensor to enable the same level of realistic, unrestricted physical-walking-based locomotion for all users. Because this allows non-HMD users to experience the same level of visualization and free movement as HMD users, both can engage as main actors in movement scenarios. Our user study revealed that the low-asymmetry interface enables non-HMD users to feel a presence similar to that of HMD users when performing equivalent locomotion tasks in a virtual environment. Furthermore, our system enables one HMD user and multiple non-HMD users to participate together in a virtual world, and our experiments show that non-HMD user satisfaction increases with the number of non-HMD participants owing to increased presence and enjoyment.


Author(s):  
Thiago D'Angelo ◽  
Saul Emanuel Delabrida Silva ◽  
Ricardo A. R. Oliveira ◽  
Antonio A. F. Loureiro

Virtual Reality (VR) and Augmented Reality (AR) Head-Mounted Displays (HMDs) have been emerging in recent years and look set to remain a hot topic for years to come. HMDs have been developed for many different purposes, and users can enjoy these technologies for entertainment, work tasks, and many other daily activities. Despite the recent release of many AR and VR HMDs, two major problems are keeping AR HMDs from reaching the mainstream market: extremely high costs and user-experience issues. To mitigate these problems, we developed an AR HMD prototype based on a smartphone and other low-cost materials. The prototype can run eye-tracking algorithms, which can be used to improve user interaction and user experience. To assess our AR HMD prototype, we chose a state-of-the-art eye-center-location method from the literature and evaluated its real-time performance on different development boards.


2008 ◽  
Vol 41 (1) ◽  
pp. 161-181 ◽  
Author(s):  
Beatriz Sousa Santos ◽  
Paulo Dias ◽  
Angela Pimentel ◽  
Jan-Willem Baggerman ◽  
Carlos Ferreira ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4956
Author(s):  
Jose Llanes-Jurado ◽  
Javier Marín-Morales ◽  
Jaime Guixeres ◽  
Mariano Alcañiz

Fixation identification is an essential task in extracting relevant information from gaze patterns, and various algorithms are used in the identification process. However, the thresholds used in these algorithms greatly affect their sensitivity. Moreover, the application of these algorithms to eye-tracking technologies integrated into head-mounted displays, where the subject’s head position is unrestricted, is still an open issue. Therefore, the adaptation of eye-tracking algorithms and their thresholds to immersive virtual reality frameworks needs to be validated. This study presents the development of a dispersion-threshold identification algorithm applied to data obtained from an eye-tracking system integrated into a head-mounted display. Rule-based criteria are proposed to calibrate the algorithm’s thresholds through different features, such as the number of fixations and the percentage of points that belong to a fixation. The results show that dispersion thresholds between 1° and 1.6° and time windows between 0.25 and 0.4 s are acceptable parameter ranges, with 1° and 0.25 s being optimal. The work presents a calibrated algorithm to be applied in future experiments with eye tracking integrated into head-mounted displays, along with guidelines for calibrating fixation-identification algorithms.
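The dispersion-threshold (I-DT) scheme the study calibrates can be sketched as follows, using the reported optimum of a 1° dispersion threshold and a 0.25 s time window. This is a minimal illustration of the standard I-DT algorithm, not the authors’ code; the input format and function names are assumptions:

```python
def idt_fixations(points, timestamps, disp_thresh_deg=1.0, min_dur_s=0.25):
    """Dispersion-threshold (I-DT) fixation identification.

    points: list of (azimuth_deg, elevation_deg) gaze samples
    timestamps: matching sample times in seconds
    Returns a list of (start_idx, end_idx) fixation windows.
    """
    def dispersion(a, b):
        # Dispersion = horizontal extent + vertical extent of the window.
        xs = [p[0] for p in points[a:b + 1]]
        ys = [p[1] for p in points[a:b + 1]]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations = []
    i, n = 0, len(points)
    while i < n:
        # Grow an initial window covering the minimum duration.
        j = i
        while j < n and timestamps[j] - timestamps[i] < min_dur_s:
            j += 1
        if j >= n:
            break
        if dispersion(i, j) <= disp_thresh_deg:
            # Extend while the samples stay within the dispersion threshold.
            while j + 1 < n and dispersion(i, j + 1) <= disp_thresh_deg:
                j += 1
            fixations.append((i, j))
            i = j + 1
        else:
            i += 1
    return fixations
```

Calibrating the algorithm, as the study does, amounts to sweeping `disp_thresh_deg` and `min_dur_s` and scoring the resulting fixation counts and coverage.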


2011 ◽  
Vol 268-270 ◽  
pp. 523-527
Author(s):  
Hai Tao Song ◽  
Huan Yu Liu ◽  
Dong Yi Chen

Head-Mounted Displays (HMDs) have been widely used in wearable computing, augmented/virtual reality, and related areas. The different image sources and optical subsystems of HMDs result in diverse display and optical properties. The traditional way to evaluate an HMD is the modulation transfer function, which requires special instruments, software, and professional background knowledge, and thus lacks usability when applied by common users. On the basis of the human transfer function and Fitts’ model, in this paper we propose an evaluation method that has the advantage of being easy to apply. We evaluated this method on a commercial HMD and a custom HMD, and the results show that our method has high usability.
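Fitts’ model, which the proposed evaluation method builds on, predicts movement time from a task’s index of difficulty. A minimal sketch follows (Shannon formulation; the regression constants `a` and `b` are placeholder values for illustration, not figures from the paper):

```python
import math

def fitts_id(amplitude, width):
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(A / W + 1), for movement amplitude A and target width W."""
    return math.log2(amplitude / width + 1)

def predicted_movement_time(amplitude, width, a=0.1, b=0.15):
    """Predicted movement time MT = a + b * ID (seconds).
    a, b are device/display-specific regression constants fit from trials;
    the defaults here are arbitrary placeholders."""
    return a + b * fitts_id(amplitude, width)
```

An HMD evaluation in this style would fit `a` and `b` per display from pointing trials and compare throughputs, rather than measuring optics directly.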


Author(s):  
Thomas Kersten ◽  
Daniel Drenkhan ◽  
Simon Deggim

Technological advances in Virtual Reality (VR) in recent years have the potential to fundamentally impact our everyday lives. VR makes it possible to explore a digital world with a Head-Mounted Display (HMD) in an immersive, embodied way. Combined with current tools for 3D documentation and modelling and with software for creating interactive virtual worlds, VR can play an important role in the conservation and visualisation of cultural heritage (CH) for museums, educational institutions, and other cultural areas. Corresponding game engines offer tools for interactive 3D visualisation of CH objects, making a new form of knowledge transfer possible with the direct participation of users in the virtual world. However, to ensure smooth and optimal real-time visualisation of the data in the HMD, VR applications should run at 90 frames per second (fps). This frame rate depends on several criteria, including the amount of data and the number of dynamic objects. In this contribution, the performance of a VR application is investigated using digital 3D models of the fortress Al Zubarah in Qatar at various resolutions. We demonstrate how the amount of data and the hardware equipment influence real-time performance, and that developers of VR applications should find a compromise between the amount of data and the available computer hardware to guarantee smooth real-time visualisation at approximately 90 fps. CAD models therefore offer better performance for real-time VR visualisation than meshed models, owing to their significantly reduced data volume.
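The 90 fps target translates directly into a per-frame time budget, which is the quantity developers trade off against scene complexity. A trivial illustration (the triangle-throughput figure in the usage note is hypothetical, not taken from the paper):

```python
def frame_budget_ms(target_fps=90.0):
    """Per-frame time budget in milliseconds for a target frame rate.
    At 90 fps the whole render must fit in roughly 11.1 ms."""
    return 1000.0 / target_fps

def max_triangles(triangle_rate_per_s, target_fps=90.0):
    """Rough upper bound on scene triangles per frame, given a GPU's
    sustained triangle throughput (triangles/second). Ignores fill rate,
    draw-call overhead, and stereo rendering, so treat it as a ceiling."""
    return int(triangle_rate_per_s / target_fps)
```

For example, hardware sustaining a hypothetical 450 million triangles per second could draw at most 5 million triangles per frame at 90 fps, which is why reduced-data CAD models outperform dense meshed models here.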


1999 ◽  
Vol 8 (4) ◽  
pp. 462-468 ◽  
Author(s):  
Giuseppe Riva

Virtual reality (VR) is usually described by the media as a particular collection of technological hardware: a computer capable of 3-D real-time animation, a head-mounted display, and data gloves equipped with one or more position trackers. However, this focus on technology is somewhat disappointing for communication researchers and VR designers. To overcome this limitation, this paper describes VR as a communication tool: a communication medium in the case of multiuser VR and a communication interface in single-user VR. The consequences of this approach for the design and the development of VR systems are presented, together with the methodological and technical implications for the study of interactive communication via computers.


2021 ◽  
Author(s):  
◽  
Thomas Iorns

The application of the newly popular content medium of 360 degree panoramic video to the widely used offline lighting technique of image based lighting is explored, and a system solution for real-time image based lighting of virtual objects using only the provided 360 degree video for lighting is developed. The system solution is suitable for use on live streaming video input, and is shown to run on consumer grade graphics hardware at the high resolutions and framerates necessary for comfortable viewing on head mounted displays, rendering at over 60 frames per second for stereo output at 1182x1464 per eye on a mid-range graphics card. Its use in several real-world applications is also studied, and extension to consider real-time shadowing and reflection is explored.
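Image-based lighting from a 360 degree panorama hinges on treating the equirectangular frame as a spherical environment map. A minimal sketch of two core operations — mapping a direction to panorama coordinates, and forming a crude solid-angle-weighted ambient term — is given below; the coordinate convention and function names are assumptions, not the thesis’s implementation:

```python
import math

def dir_to_equirect_uv(x, y, z):
    """Map a unit direction vector to (u, v) in [0, 1]^2 on an
    equirectangular panorama (y up, -z forward by assumption)."""
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi
    return u, v

def ambient_from_panorama(pixels, width, height):
    """Crude ambient term: solid-angle-weighted average of the panorama.
    pixels[row][col] is an (r, g, b) tuple; rows near the poles are
    down-weighted by cos(latitude) to correct equirectangular stretching."""
    total = [0.0, 0.0, 0.0]
    weight_sum = 0.0
    for row in range(height):
        lat = math.pi * ((row + 0.5) / height - 0.5)
        w = math.cos(lat)
        for col in range(width):
            r, g, b = pixels[row][col]
            total[0] += w * r
            total[1] += w * g
            total[2] += w * b
            weight_sum += w
    return tuple(c / weight_sum for c in total)
```

A real-time system like the one described would run such convolutions on the GPU per video frame; this CPU sketch only shows the geometry and weighting involved.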


Electronics ◽  
2018 ◽  
Vol 7 (9) ◽  
pp. 171 ◽  
Author(s):  
Song-Woo Choi ◽  
Siyeong Lee ◽  
Min-Woo Seo ◽  
Suk-Ju Kang

As interest in virtual reality (VR) has increased recently, studies on head-mounted displays (HMDs) have been actively conducted. However, HMDs can cause motion sickness and dizziness in the user, and these effects are most strongly influenced by motion-to-photon latency. Equipment for measuring and quantifying this latency is therefore very necessary. This paper proposes a novel system to measure and visualize the time-sequential motion-to-photon latency of HMDs in real time. Conventional motion-to-photon latency measurement methods can measure the latency only at the beginning of the physical motion; the proposed method can measure the latency in real time at every input instant. Specifically, it generates rotation data from the intensity levels of pixels in the measurement area and can obtain motion-to-photon latency data over the entire temporal range. Concurrently, encoders measure the actual motion from a motion generator designed to control the physical posture of the HMD device. The proposed system compares the two motions from the encoders with the output image on the display and calculates the motion-to-photon latency for all time points. The experiment shows that the latency increases from a minimum of 46.55 ms to a maximum of 154.63 ms depending on the workload level.
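The system’s core comparison — aligning the encoder-measured motion with the motion recovered from the displayed image — amounts to finding the time shift that best matches the two traces. A simplified sketch, assuming both signals are resampled to a common rate (this brute-force alignment stands in for whatever matching the authors actually use):

```python
def estimate_latency(encoder, display, dt_ms):
    """Estimate motion-to-photon latency from two equally sampled angle
    traces: the encoder (ground-truth motion) and the motion recovered
    from the display. Finds the sample shift minimizing mean squared
    error between display[t + shift] and encoder[t]; returns latency in ms."""
    n = len(encoder)
    best_shift, best_err = 0, float("inf")
    for shift in range(n // 2):
        err = sum((display[i + shift] - encoder[i]) ** 2
                  for i in range(n - shift))
        err /= (n - shift)  # normalize so shorter overlaps aren't favored
        if err < best_err:
            best_err, best_shift = err, shift
    return best_shift * dt_ms
```

A per-time-point measurement, as the paper describes, would apply such alignment over a sliding window rather than the whole trace.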

