A Case Study of Augmented Reality for Mobile Platforms

Author(s): Gabriela Tinti Vasselai, Dalton Solano dos Reis, Paulo Cesar Rodacki Gomes
2011, Vol 2 (3), pp. 1

Author(s): Luciano Soares, Veronica Teichrieb

This special issue of JIS (SBC Journal on 3D Interactive Systems) acknowledges, for the second year, the best papers of the Symposium on Virtual and Augmented Reality. SVR 2011 featured several interesting research projects in the field of virtual and augmented reality, and from among its best papers this issue presents two selected papers to the readers of JIS. SVR is the most important event on virtual and augmented reality in Brazil, organized by academic members of the Brazilian Computer Society (SBC), which has supported the conference for many years. Although the two papers differ in their technical implementation, both address problems of communication and location. It is clear that virtual and augmented reality are changing our lifestyle, and these papers present ideas that can be directly applied by people around the world.

The paper "xGroupware: Supporting Collaborative Cross-Reality Environments using Multiagents System", authored by Katia Vega, Débora Cardador, Hugo Fuks and Carlos Lucena, presents a very modern proposal for meetings, combining the virtual and the real so that users can take full advantage of the available resources to improve their communication. The clever use of multi-agents as an autonomous way to perceive and react to meeting situations is remarkable, and the integration of wearable computing devices makes the system particularly interesting, probably reflecting the meetings of the future.

The second paper, "A Case Study of Augmented Reality for Mobile Platforms", authored by Gabriela Tinti Vasselai, Dalton Solano dos Reis and Paulo Cesar Rodacki Gomes, shows how mobile devices can help with localization through the resources of augmented reality. The paper demonstrates that the resources available in mobile devices can be used for augmented reality purposes, although some improvements to the mobile platforms are still needed.

We would like to thank all the reviewers, the editors and staff who supported us with the submission system, and of course the authors, who extended and adapted their papers with new content that makes this special issue an important reference point for virtual and augmented reality research.


2011, Vol 2 (3), pp. 1
Author(s): Paulo Cesar Rodacki Gomes, Gabriela Tinti Vasselai, Dalton Solano dos Reis

This work proposes an architecture that brings the concept of geolocated augmented reality to mobile device platforms. The architecture allows the development of applications that draw points of interest on screen, represented by arrows and panels that point toward each point of interest's location, using the OpenGL ES 1.0 library for rendering. User interaction happens by moving the device, which activates the compass and accelerometer sensors, and the device's location determines how far away each point of interest is. This work also presents development resources that allow developers to use the camera, sensors and geographic coordinates in applications running inside device simulators. Finally, we present an example application along with performance results on an Android platform simulator and on an Android device.
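The "how far away the points of interest are" computation that the abstract mentions is typically done with the great-circle (haversine) distance plus an initial bearing for orienting the on-screen arrow. The sketch below is illustrative only; the function names and the spherical-Earth radius are assumptions, not part of the paper's architecture.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, spherical approximation

def haversine_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84-style coordinates."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Compass bearing (0 = north, 90 = east) from the device to a point of interest.

    Comparing this bearing against the compass sensor's heading tells the
    application where to draw the arrow for that point of interest.
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlam = math.radians(lon2 - lon1)
    y = math.sin(dlam) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
```

On Android itself, `Location.distanceTo` and `Location.bearingTo` provide equivalent results without hand-rolled math; the formulas above just make the geometry explicit.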


2021
Author(s): Patrick Dallasega, Felix Schulze, Andrea Revolti, Martin Martinelli

Author(s): Geoffrey Momin, Raj Panchal, Daniel Liu, Sharman Perera

Human error accounts for about 60% of the annual power loss due to maintenance incidents in the fossil power industry. The International Atomic Energy Agency reports that 80% of industrial accidents in the nuclear industry can be attributed to human error and 20% to equipment failure. The Personal Augmented Reality Reference System (PARRS) is a suite of computer-mediated reality applications that aims to minimize human error by digitizing manual procedures and providing real-time monitoring of hazards present in an environment. Our mission is to provide critical feedback that informs personnel in real time and protects them from avoidable hazards. PARRS seeks to minimize human error and increase worker productivity by bringing innovation to safety and procedural compliance, leveraging technologies such as augmented reality, LiDAR, machine learning and particulate mapping using remote systems.


Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker

The method presented in this work reduces the frequency of virtual objects incorrectly occluding real-world objects in Augmented Reality (AR) applications. Current AR rendering methods cannot properly represent occlusion between real and virtual objects because the objects are not represented in a common coordinate system. These occlusion errors can give users an incorrect perception of their surroundings: a real-world object may go unnoticed because a virtual object incorrectly occludes it, and incorrect occlusions distort the user's perception of depth and distance. The authors of this paper present a method that brings both real-world and virtual objects into a common coordinate system so that distant virtual objects do not obscure nearby real-world objects in an AR application. The method captures and processes RGB-D data in real time, allowing it to be used in a variety of environments and scenarios. A case study shows the effectiveness and usability of the proposed method in correctly occluding real-world and virtual objects, providing a more realistic representation of the combined real and virtual environments in an AR application. The results of the case study show that the proposed method can detect at least 20 real-world objects with the potential to be incorrectly occluded while processing and fixing occlusion errors at least 5 times per second.
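The core idea behind depth-based occlusion resolution can be illustrated with a minimal per-pixel depth test: once real and virtual geometry share a coordinate system, a virtual fragment is drawn only if it is nearer to the camera than the real surface sampled by the RGB-D sensor at the same pixel. This is a simplified sketch under that assumption, with a hypothetical function name; it is not the authors' actual pipeline.

```python
def resolve_occlusion(real_depth, virtual_depth, no_hit=float("inf")):
    """Return a boolean mask: True where the virtual fragment should be drawn.

    real_depth    -- 2D list of metric depths from the RGB-D camera
    virtual_depth -- 2D list of metric depths of rendered virtual fragments,
                     with `no_hit` where no virtual geometry covers the pixel

    Both maps are assumed to be aligned in the same camera coordinate system.
    """
    rows, cols = len(real_depth), len(real_depth[0])
    visible = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            v = virtual_depth[r][c]
            # Draw the virtual fragment only when it lies in front of the
            # real surface seen by the depth sensor at this pixel.
            visible[r][c] = v != no_hit and v < real_depth[r][c]
    return visible
```

For example, with a real wall 2.0 m away, a virtual object rendered at 3.0 m is masked out (the wall correctly hides it), while one at 1.0 m remains visible. A production renderer would do this on the GPU by writing the sensor depth into the depth buffer rather than looping per pixel.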


Author(s): Katrin E. Brunner, Christine Perey, Philipp A. Rauschnabel, Mark Sage
