Automatic Interior Design in Augmented Reality Based on Hierarchical Tree of Procedural Rules

Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 245
Author(s):  
Peter Kán ◽  
Andrija Kurtic ◽  
Mohamed Radwan ◽  
Jorge M. Loáiciga Rodríguez

Augmented reality has high potential in interior design due to its capability of visualizing numerous prospective designs directly in a target room. In this paper, we present our research on the utilization of augmented reality for interactive and personalized furnishing. We propose a new algorithm for automated interior design which generates sensible and personalized furniture configurations. This algorithm is combined with a mobile augmented reality system to provide the user with an interactive interior design try-out tool. Personalized design is achieved via a recommender system which uses user preferences and room data as input. We conducted three user studies to explore different aspects of our research. The first study investigated user preference between augmented reality and on-screen visualization for interactive interior design. In the second study, we examined user preference between our algorithm for automated interior design and an optimization-based algorithm. Finally, the third study evaluated the probability of sensible design generation by the compared algorithms. The main outcome of our research suggests that augmented reality is a viable technology for interactive home furnishing.
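The abstract does not detail the rule hierarchy, but the core idea of a hierarchical tree of procedural rules can be illustrated with a minimal sketch: each node places one furniture item and then applies its child rules, so a depth-first expansion yields a furniture configuration. The `Rule` class and the sofa/coffee-table hierarchy below are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """A procedural rule: places one item, then applies its child rules."""
    item: str
    children: list = field(default_factory=list)

    def expand(self, placed=None):
        # Depth-first expansion collects the placed items in order.
        placed = placed if placed is not None else []
        placed.append(self.item)
        for child in self.children:
            child.expand(placed)
        return placed

# Toy hierarchy: a sofa anchors a coffee table (which anchors a rug) and a lamp.
root = Rule("sofa", [Rule("coffee_table", [Rule("rug")]), Rule("floor_lamp")])
print(root.expand())
```

In a full system, each rule would also carry placement constraints (position, orientation, clearance) rather than just an item name.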

10.2196/18637 ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. e18637
Author(s):  
Theerapat Muangpoon ◽  
Reza Haghighi Osgouei ◽  
David Escobar-Castillejos ◽  
Christos Kontovounisios ◽  
Fernando Bello

Background Digital rectal examination is a difficult examination to learn and teach because of limited opportunities for practice; the main challenge, however, is that students and tutors cannot see the finger while it is palpating the anal canal and prostate gland inside the patient. Objective This paper presents an augmented reality system to be used with benchtop models commonly available in medical schools, with the aim of addressing this lack of visualization. The system enables visualization of the examining finger, as well as of the internal organs, when performing digital rectal examinations. Magnetic tracking sensors are used to track the movement of the finger, and a pressure sensor is used to monitor the applied pressure. By overlaying a virtual finger on the real finger and a virtual model on the benchtop model, students can see the finger maneuvers throughout the examination. Methods The system was implemented in the Unity game engine (Unity Technologies) and uses a first-generation HoloLens (Microsoft Inc) as an augmented reality device. To evaluate the system, 19 participants (9 clinicians who routinely performed digital rectal examinations and 10 medical students) were asked to use the system and answer 12 questions regarding its usefulness. Results The system showed the movement of an examining finger in real time at a frame rate of 60 fps on the HoloLens and accurately aligned the virtual and real models with a mean error of 3.9 mm. Users found the movement of the finger realistic (mean 3.9, SD 1.2); moreover, they found the visualization of the finger and internal organs useful for teaching, learning, and assessment of digital rectal examinations (finger: mean 4.1, SD 1.1; organs: mean 4.6, SD 0.8), mainly targeting a novice group.
Conclusions The proposed augmented reality system was designed to improve teaching and learning of digital rectal examination skills by providing visualization of the finger and internal organs. The initial user study demonstrated its applicability and usefulness.


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4286
Author(s):  
Joel Murithi Runji ◽  
Chyi-Yeu Lin

Augmented reality (AR) has been demonstrated to improve efficiency by up to three times compared with traditional methods. Specifically, visual AR is widely adopted using handheld and head-mounted technologies. Although spatial augmented reality (SAR) addresses several shortcomings of wearable AR, its potential is yet to be fully explored. It enhances cooperation among users through its wide field of view and supports hands-free mobile operation, yet it has remained a challenge to provide references without relying on restrictive static empty surfaces of the same object or of nearby objects for projection. To this end, we propose a novel approach that contextualizes projected references in real time and on demand, onto and through the surface, across a wireless network. To demonstrate the effectiveness of the approach, we apply the method to the safe inspection of printed circuit board assembly (PCBA) wirelessly networked to a remote automatic optical inspection (AOI) system. A defect detected and localized by the AOI system is wirelessly transmitted to the proposed remote inspection system, which promptly guides the inspector by augmenting a rectangular bracket and a reference image. The rectangular bracket, transmitted through the switchable glass, aids defect localization over the PCBA, whereas the image is projected over the opaque cells of the switchable glass to provide a reference to the user. The developed system is evaluated in a user study for its robustness, precision, and performance. Results indicate that the resulting contextualization under varying occlusion levels not only positively affects inspection performance but also surpasses the state of the art in user preference. Furthermore, it supports a variety of complex visualization needs, including varied sizes, contrast, and online or offline tracking, with a simple, robust integration requiring no additional calibration for registration.


2018 ◽  
Vol 2 (3) ◽  
pp. 20
Author(s):  
Gabriel de A. Pereira ◽  
João Bravo ◽  
Jorge Centeno

Recent technological advancements in many areas have changed the way that individuals interact with the world. Some daily tasks require visualization skills, especially in a map-reading context. Augmented Reality systems could substantially improve geovisualization, since they enhance a real scene with virtual information. However, relatively little research has assessed the effective contribution of such systems during map reading. This research therefore aims to provide a first look at the usability of an Augmented Reality system prototype for interaction with geoinformation. For this purpose, we designed an activity with volunteers in order to assess the prototype's usability. We interviewed 14 users (three experts and 11 non-experts), where experts were subjects with the following characteristics: a professor; holding a PhD degree in Cartography, GIS, Geography, or Environmental Sciences/Water Resources; and with experience handling spatial information related to water resources. The activity aimed to detect where the system really helps the user to interpret a hydrographic map and how the users were helped by the Augmented Reality system prototype. We conclude that the Augmented Reality system helped the users during map reading and supported the construction of spatial knowledge within the proposed scenario.


2021 ◽  
Vol 11 (13) ◽  
pp. 6047
Author(s):  
Soheil Rezaee ◽  
Abolghasem Sadeghi-Niaraki ◽  
Maryam Shakeri ◽  
Soo-Mi Choi

A lack of required data resources is one of the challenges in adopting Augmented Reality (AR) to provide the right services to users, whereas the amount of spatial information produced by people is increasing daily. This research aims to design a personalized AR-based tourist system that retrieves big data according to the users' demographic contexts in order to enrich the AR data source in tourism. The research is conducted in two main steps. First, the type of tourist attraction the user is interested in is predicted from the user's demographic contexts, which include age, gender, and education level, using a machine learning method. Second, the appropriate data for the user are extracted from the big data by considering time, distance, popularity, and the neighborhood of the tourist places, using the VIKOR and SWARA decision-making methods. The results show that the decision tree performs about 6% better than the SVM method in predicting the type of tourist attraction. In addition, the results of the user study show the overall satisfaction of the participants in terms of ease of use (about 55%) and in terms of the system's usefulness (about 56%).
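The first step described above, predicting a preferred attraction type from age, gender, and education level with a decision tree, can be sketched as follows. The feature encoding, the synthetic records, and the label set are illustrative assumptions; the paper's actual dataset and encoding are not given in the abstract.

```python
from sklearn.tree import DecisionTreeClassifier

# Synthetic demographic records: [age, gender (0/1), education level (0-2)].
X = [[25, 0, 1], [62, 1, 2], [31, 0, 0], [45, 1, 2], [19, 0, 1], [55, 1, 0]]
# Hypothetical attraction-type labels: 0 = museum, 1 = park, 2 = historic site.
y = [1, 0, 1, 0, 1, 2]

# Fit a decision tree on the demographic features.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# Predict the attraction type for a new user profile.
print(clf.predict([[28, 0, 1]]))
```

An SVM baseline, as compared in the paper, could be swapped in by replacing the classifier with `sklearn.svm.SVC` on the same features.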


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3061
Author(s):  
Alice Lo Valvo ◽  
Daniele Croce ◽  
Domenico Garlisi ◽  
Fabrizio Giuliano ◽  
Laura Giarré ◽  
...  

In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms, based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones are able to estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for visually impaired people for indoor and outdoor localization and navigation. While ARIANNA is based on the assumption that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile pavings) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ adds the possibility for users to have enhanced interactions with the surrounding environment, through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to contents associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to easily navigate indoor and outdoor scenarios simply by loading a previously recorded virtual path, and provides automatic guidance along the route through haptic, speech, and sound feedback.


2021 ◽  
Vol 11 (3) ◽  
pp. 1064
Author(s):  
Jenq-Haur Wang ◽  
Yen-Tsang Wu ◽  
Long Wang

In social networks, users can easily share information and express their opinions. Given the huge amount of data posted by many users, it is difficult to search for relevant information. In addition to individual posts, it would be useful if we could recommend groups of people with similar interests. Past studies on user preference learning focused on single-modal features such as review contents or demographic information of users. However, such information is usually not easy to obtain in most social media without explicit user feedback. In this paper, we propose a multimodal feature fusion approach to implicit user preference prediction which combines text and image features from user posts for recommending similar users in social media. First, we use convolutional neural network (CNN) and TextCNN models to extract image and text features, respectively. Then, these features are combined using early and late fusion methods as a representation of user preferences. Lastly, a list of users with the most similar preferences is recommended. The experimental results on real-world Instagram data show that the best performance is achieved when we apply late fusion of individual classification results for images and texts, with a best average top-k accuracy of 0.491. This validates the effectiveness of utilizing deep learning methods for fusing multimodal features to represent social user preferences. Further investigation is needed to verify the performance in different types of social media.
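The late-fusion step that performed best here, combining the per-class outputs of the image and text classifiers rather than their raw features, can be sketched as a weighted average of the two probability vectors. The function name, the equal weighting, and the example scores are assumptions for illustration; the paper's exact fusion rule is not specified in the abstract.

```python
import numpy as np

def late_fusion(image_probs, text_probs, w_image=0.5):
    """Combine per-class probabilities from two classifiers by weighted average."""
    fused = w_image * np.asarray(image_probs) + (1 - w_image) * np.asarray(text_probs)
    return fused / fused.sum()  # renormalize to a probability distribution

# Hypothetical per-class scores from a CNN (images) and a TextCNN (post texts).
image_probs = [0.7, 0.2, 0.1]
text_probs = [0.4, 0.5, 0.1]
fused = late_fusion(image_probs, text_probs)
print(fused.argmax())  # class chosen after fusion
```

Early fusion, by contrast, would concatenate the image and text feature vectors before a single classifier, rather than combining two classifiers' outputs as above.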


2020 ◽  
Vol 4 (4) ◽  
pp. 78
Author(s):  
Andoni Rivera Pinto ◽  
Johan Kildal ◽  
Elena Lazkano

In the context of industrial production, a worker who wants to program a robot using the hand-guidance technique needs the robot to be available for programming and not in operation, meaning that production with that robot is stopped during that time. A way around this constraint is to perform the same manual guidance steps on a holographic representation of the robot's digital twin, using augmented reality technologies. However, this approach suffers from a lack of tangibility of the visual holograms that the user tries to grab. We present an interface in which some of that tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study evaluating the impact that such haptic feedback may have on a pick-and-place task with the wrist of a holographic robot arm, which we found to be beneficial.

