Mixed Reality Enhanced User Interactive Path Planning for Omnidirectional Mobile Robot

2020 ◽  
Vol 10 (3) ◽  
pp. 1135 ◽  
Author(s):  
Mulun Wu ◽  
Shi-Lu Dai ◽  
Chenguang Yang

This paper proposes a novel control system for the path planning of an omnidirectional mobile robot based on mixed reality. Most research on mobile robots is carried out in a completely real or a completely virtual environment; however, a real environment containing virtual objects has important practical applications. The proposed system can control the movement of the mobile robot in the real environment, as well as the interaction between the mobile robot's motion and virtual objects added to that environment. First, an interactive interface is presented in the mixed reality device HoloLens. The interface displays the map, path, control commands, and other information related to the mobile robot, and it can add virtual objects to the real map to realize real-time interaction between the mobile robot and the virtual objects. Then, the original path planning algorithm, vector field histogram* (VFH*), is modified in its threshold, candidate direction selection, and cost function, making it more suitable for scenes with virtual objects, reducing the number of calculations required, and improving safety. Experimental results demonstrate that the proposed method can generate a motion path for the mobile robot according to the operator's specific requirements and achieves good obstacle avoidance performance.
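To make the VFH*-style pipeline above concrete, the following is a minimal 2D polar-histogram steering sketch: obstacle points (real or virtual alike) are binned into angular sectors weighted by proximity, sectors below a threshold become candidate directions, and a weighted cost function picks among them. This is only an illustration of the general VFH idea; the threshold scheme, candidate selection, and cost weights here are placeholders, not the paper's modified versions.

```python
import numpy as np

def vfh_direction(obstacles, robot_xy, target_dir, heading, prev_dir,
                  n_sectors=72, threshold=1.0, weights=(5.0, 2.0, 2.0)):
    """Pick a steering direction from a polar obstacle histogram.

    obstacles : (N, 2) obstacle points (real or virtual) in the world frame.
    target_dir, heading, prev_dir : angles in radians.
    weights : cost weights for (target direction, current heading, previous choice).
    """
    # Build the polar histogram: accumulate obstacle density per angular
    # sector, with closer obstacles contributing more.
    sectors = np.linspace(-np.pi, np.pi, n_sectors, endpoint=False)
    hist = np.zeros(n_sectors)
    diffs = obstacles - robot_xy
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])
    dists = np.hypot(diffs[:, 0], diffs[:, 1])
    idx = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    np.add.at(hist, idx, 1.0 / np.maximum(dists, 0.1))

    # Candidate directions: centres of sectors whose density is below threshold.
    candidates = sectors[hist < threshold] + np.pi / n_sectors
    if candidates.size == 0:
        return None  # fully blocked: no free sector

    # Cost function: weighted angular distances; the smallest cost wins.
    def ang(a, b):
        return np.abs(np.arctan2(np.sin(a - b), np.cos(a - b)))
    wt, wh, wp = weights
    cost = (wt * ang(candidates, target_dir)
            + wh * ang(candidates, heading)
            + wp * ang(candidates, prev_dir))
    return float(candidates[np.argmin(cost)])
```

Virtual objects injected through the HoloLens interface would simply contribute extra rows to `obstacles`, which is what lets one avoidance pass handle real and virtual obstacles uniformly.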

2019 ◽  
Vol 7 (1) ◽  
pp. 35-52 ◽  
Author(s):  
Balamurali Gunji ◽  
Deepak B.B.V.L. ◽  
Saraswathi M.B.L. ◽  
Umamaheswara Rao Mogili

Purpose
The purpose of this paper is to obtain optimal mobile robot path planning via a hybrid algorithm developed from two nature-inspired meta-heuristic algorithms, namely, cuckoo search and the bat algorithm (BA), in an unknown or partially known environment. The cuckoo-search algorithm is based on the brood-parasitic behavior of the cuckoo, and the BA is based on the echolocation behavior of bats.

Design/methodology/approach
The developed algorithm starts by sensing obstacles in the environment using an ultrasonic sensor. If there are obstacles in the path, the authors apply the developed algorithm to find the optimal path; otherwise, the robot reaches the target point directly along the diagonal distance.

Findings
The developed algorithm is implemented in MATLAB, and simulations test its efficiency in different environments. The same path is then used to carry out the experiment in a real-world environment. An Arduino microcontroller, together with the ultrasonic sensor, is used to obtain the path length and the robot's travel time to the goal point.

Originality/value
In this paper, a new hybrid algorithm is developed to find the optimal path of a mobile robot using the cuckoo-search and bat algorithms. The developed algorithm is tested in a real-world environment using the mobile robot.
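As a rough sketch of how cuckoo-search and bat moves can be hybridized for path planning (the paper's exact update rules and parameters are not given in the abstract, so everything below is a generic textbook combination, not the authors' method), consider optimizing a single intermediate waypoint that minimizes path length while penalizing proximity to a circular obstacle:

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

START, GOAL = np.array([0.0, 0.0]), np.array([10.0, 10.0])
OBST, R = np.array([5.0, 5.0]), 2.0  # circular obstacle: centre, radius

def path_cost(wp):
    """Length of start -> waypoint -> goal plus a penalty near the obstacle."""
    length = np.linalg.norm(wp - START) + np.linalg.norm(GOAL - wp)
    penalty = max(0.0, R + 0.5 - np.linalg.norm(wp - OBST)) * 100.0
    return length + penalty

def levy_step(beta=1.5, size=2):
    # Mantegna's algorithm for Levy-distributed steps (the cuckoo-search move).
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def hybrid_plan(n=20, iters=200, pa=0.25):
    nests = rng.uniform(0, 10, (n, 2))  # candidate waypoints ("nests")
    vel = np.zeros((n, 2))              # bat velocities
    costs = np.array([path_cost(x) for x in nests])
    for _ in range(iters):
        best = nests[np.argmin(costs)]
        for i in range(n):
            # Cuckoo move: Levy flight scaled by the distance to the best nest.
            trial = nests[i] + 0.1 * levy_step() * (nests[i] - best)
            # Bat move: frequency-tuned velocity pull toward the best nest.
            freq = rng.uniform(0, 1)
            vel[i] += (best - nests[i]) * freq
            trial = trial + vel[i]
            c = path_cost(trial)
            if c < costs[i]:            # greedy acceptance
                nests[i], costs[i] = trial, c
        # Abandon a fraction pa of the worst nests (egg discovery in cuckoo search).
        worst = np.argsort(costs)[-int(pa * n):]
        nests[worst] = rng.uniform(0, 10, (len(worst), 2))
        costs[worst] = [path_cost(x) for x in nests[worst]]
    return nests[np.argmin(costs)], costs.min()
```

A full planner would optimize a whole sequence of waypoints and replan as the ultrasonic sensor reveals new obstacles; this single-waypoint version only shows how the two metaheuristics' moves interleave.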


2019 ◽  
Vol 9 (9) ◽  
pp. 1797 ◽  
Author(s):  
Chen ◽  
Lin

Augmented reality (AR) is an emerging technology that allows users to interact with simulated environments, including those emulating scenes in the real world. Most current AR technologies involve the placement of virtual objects within these scenes. However, difficulties in modeling real-world objects greatly limit the scope of the simulation, and thus the depth of the user experience. In this study, we developed a process by which to realize virtual environments that are based entirely on scenes in the real world. In modeling the real world, the proposed scheme divides scenes into discrete objects, which are then replaced with virtual objects. This enables users to interact in and with virtual environments without limitations. An RGB-D camera is used in conjunction with simultaneous localization and mapping (SLAM) to obtain the movement trajectory of the user and derive information related to the real environment. In modeling the environment, graph-based segmentation is used to segment point clouds and perform object segmentation to enable the subsequent replacement of objects with equivalent virtual entities. Superquadrics are used to derive shape parameters and location information from the segmentation results in order to ensure that the scale of the virtual objects matches that of the original objects in the real world. Only after the objects have been replaced with their virtual counterparts is the real environment converted into a virtual scene. Experiments involving the emulation of real-world locations demonstrated the feasibility of the proposed rendering scheme. A rock-climbing application scenario is finally presented to illustrate the potential use of the proposed system in AR applications.
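The superquadric step can be illustrated with its standard inside-outside function. The sketch below recovers only the size parameters (a1, a2, a3) of a segmented, roughly axis-aligned cluster so that a virtual proxy matches the real object's scale; the full method in the paper also recovers pose and the shape exponents, which here are fixed as an assumption.

```python
import numpy as np

def superquadric_F(pts, a, e1=1.0, e2=1.0):
    """Inside-outside function of an axis-aligned superquadric:
    F < 1 inside, F = 1 on the surface, F > 1 outside.
    With e1 = e2 = 1 this reduces to an ellipsoid."""
    x, y, z = np.abs(pts).T
    a1, a2, a3 = a
    return ((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1) \
        + (z / a3) ** (2 / e1)

def fit_scale(points):
    """Estimate size parameters (a1, a2, a3) of a segmented cluster from the
    per-axis extent of its centred points. A simplification: the real fitting
    problem is a nonlinear least-squares minimization of F over pose, size,
    and shape exponents."""
    centred = points - points.mean(axis=0)
    return np.abs(centred).max(axis=0)
```

Given the fitted parameters, a virtual replacement object is simply scaled to (a1, a2, a3) and placed at the cluster centroid, which is what keeps virtual and real object scales consistent.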


1996 ◽  
Vol 5 (1) ◽  
pp. 122-135 ◽  
Author(s):  
Takashi Oishi ◽  
Susumu Tachi

See-through head-mounted displays (STHMDs), which superimpose a virtual environment generated by computer graphics (CG) on the real world, are expected to vividly display various simulations and designs by using both the real and the virtual environments around us. However, because both environments are visible, the virtual environment must be superimposed exactly on the real one. Mismatches in location and size between real and virtual objects are likely to occur between the world coordinates of the real environment, where the STHMD user actually exists, and those of the virtual environment, described by CG parameters. This disagreement directly displaces the locations at which virtual objects are superimposed, so the STHMD must be calibrated to superimpose the virtual environment properly. Among the causes of such errors, we focus on systematic errors in the projection transformation parameters introduced during manufacturing and on differences between the actual and assumed locations of the user's eye relative to the STHMD in use, and we propose a calibration method to eliminate these effects. In this method, a virtual cursor drawn in the virtual environment is fitted directly onto targets in the real environment. Based on the fitting results, the least-squares method identifies the parameter values that minimize the differences between the locations of the virtual cursor in the virtual environment and the targets in the real environment. After describing the calibration method, we report the results of applying it to an STHMD that we have built. The accuracy achieved demonstrates the effectiveness of the calibration method.
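The core of this calibration is an overdetermined linear least-squares fit between cursor positions and target positions. As a simplified stand-in for the paper's projection-parameter identification (the real problem involves the full projection transformation; here the correction is reduced to a planar affine map, which is an assumption for illustration):

```python
import numpy as np

def fit_affine(virtual_pts, real_pts):
    """Identify, by linear least squares, the affine correction mapping
    virtual-cursor positions onto their real-world targets.
    virtual_pts, real_pts : (N, 2) matched point sets, N >= 3."""
    n = len(virtual_pts)
    A = np.hstack([virtual_pts, np.ones((n, 1))])      # homogeneous rows [x y 1]
    M, *_ = np.linalg.lstsq(A, real_pts, rcond=None)   # solve A @ M ~= real_pts
    return M                                           # (3, 2) parameter matrix

def apply_correction(M, pts):
    """Apply the identified correction to new cursor positions."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```

The fitted `M` minimizes the summed squared displacement between cursor and target, which is exactly the residual the abstract describes; each extra fitting point beyond three overdetermines the system and averages out per-fit noise.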


2012 ◽  
Vol 16 (4) ◽  
pp. 514-518 ◽  
Author(s):  
Tae Hyon Kim ◽  
Kiyohiro Goto ◽  
Hiroki Igarashi ◽  
Kazuyuki Kon ◽  
Noritaka Sato ◽  
...  

Author(s):  
Jaakko Konttinen ◽  
Charles E. Hughes ◽  
Sumanta N. Pattanaik

Military training, concept design, and pre-acquisition studies often are carried out in virtual settings in which one can experience that which is, in the real world, too dangerous, too costly, or even beyond current technology. Purely virtual environments, however, have limitations in that they remove the participant from the physical world with its visual, auditory, and tactile complexities. In contrast, mixed reality (MR) seeks to blend the real and synthetic. How well that blending works is critical to the effectiveness of a user's experience within an MR scenario. The focus of this paper is on the visual aspects of this blending, or more specifically, on the interactions between the real and virtual in the contexts of proper inter-occlusion, illumination, and inter-shadowing. This means that the virtual objects must react properly to changes in real lighting and that the real must react properly to the insertion of virtual lights (e.g., a virtual flashlight or a simulated change in the time of day). Even more challenging, virtual objects must cast shadows on real objects and vice versa. The proper casting of shadows is critical to military training, in that shadows often provide clues of others' movements, and of our own to others, long before visual contact is made. Realistic shadows can improve training greatly; their omission or the insertion of physically incorrect shadowing can lead to negative training. To be effective, visual realism requires that all such interactions occur at interactive rates (30+ frames per second). Our research focuses on algorithmic development and implementation of these procedures on programmable graphics processing units (GPUs) found commonly on today's commodity graphics cards. The algorithms we develop are tailored to take advantage of the parallel pipeline architecture of GPUs. Our primary application is training of dismounted infantry for the complexities of military operations in urban terrain (MOUT).
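The mutual shadowing the abstract describes is conventionally handled with shadow mapping: render depth from the light's viewpoint, then test each visible point against that depth map, treating real and virtual geometry uniformly. A 2D NumPy analogue of that depth test (purely illustrative; the actual systems run this per fragment on the GPU, and this is not the paper's specific algorithm):

```python
import numpy as np

def build_shadow_map(occluders, light, n_bins=64):
    """Depth map from a point light: for each angular bin, the distance to
    the nearest occluder. Real and virtual geometry go into the same map,
    so each can shadow the other."""
    depth = np.full(n_bins, np.inf)
    d = occluders - light
    ang = np.arctan2(d[:, 1], d[:, 0])
    idx = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    np.minimum.at(depth, idx, np.hypot(d[:, 0], d[:, 1]))  # keep nearest
    return depth

def in_shadow(point, light, depth, n_bins=64, eps=1e-3):
    """A point is shadowed if something sits closer to the light in its bin.
    The eps bias is the usual guard against shadow acne (self-shadowing)."""
    d = point - light
    ang = np.arctan2(d[1], d[0])
    idx = int((ang + np.pi) / (2 * np.pi) * n_bins) % n_bins
    return bool(np.hypot(d[0], d[1]) > depth[idx] + eps)
```

Because the map is rebuilt whenever a light moves, inserting a virtual flashlight or changing the simulated time of day automatically re-shadows real and virtual surfaces alike, which is the interaction the paper targets at interactive rates.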

