2014 ◽  
Vol 26 (2) ◽  
pp. 185-195 ◽  
Author(s):  
Masanobu Saito ◽  
◽  
Kentaro Kiuchi ◽  
Shogo Shimizu ◽  
Takayuki Yokota ◽  
...  

This paper describes navigation systems for autonomous mobile robots taking part in the real-world Tsukuba Challenge 2013 robot competition. Tsukuba Challenge 2013 allows any information on the route to be collected beforehand and used on the day of the challenge. At the same time, however, autonomous mobile robots should function appropriately in daily human life even in areas they have never visited before, so systems should not depend on details captured during pre-runs. We analyzed traverses in complex urban areas without prior environmental information using light detection and ranging (LIDAR). We also determined robot status, such as position and orientation, using Gauss maps derived from LIDAR, without gyro sensors. Dead reckoning combined wheel odometry with the orientation estimated above. We corrected 2D robot poses by matching against electronic maps from the Web. Because drift inevitably causes errors such as slippage and failure, our robot also traced waypoints derived beforehand from the same electronic map, so localization remains consistent even if we do not drive through an area ahead of time. Trajectory candidates are generated along global planning routes based on these waypoints, and an optimal trajectory is selected. Tsukuba Challenge 2013 required that robots find specified human targets indicated by features released on the Web. To find the targets correctly without driving in Tsukuba beforehand, we searched for point cloud clusters similar to the specified human targets based on predefined features. These point clouds were then projected onto the camera image, and we extracted points of interest such as SURF to apply fast appearance-based mapping (FAB-MAP). This enabled us to find specified targets highly accurately. To demonstrate the feasibility of our system, experiments were conducted on a route at our university and on the Tsukuba Challenge course.
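The dead-reckoning step described in this abstract — wheel odometry for distance, an externally estimated heading for direction — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and sample values are assumptions.

```python
import math

def dead_reckon(x, y, distance, heading):
    """Advance a 2D pose by a wheel-odometry distance along an
    externally estimated heading (e.g. from LIDAR-derived orientation)."""
    return x + distance * math.cos(heading), y + distance * math.sin(heading)

# Accumulate a short run of (distance, heading) samples into a pose.
pose = (0.0, 0.0)
for d, theta in [(1.0, 0.0), (1.0, math.pi / 2)]:
    pose = dead_reckon(*pose, d, theta)
# pose is now roughly (1.0, 1.0): 1 m east, then 1 m north.
```

In practice such a pose estimate drifts, which is why the paper additionally corrects it against electronic map waypoints.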


2016 ◽  
Vol 28 (4) ◽  
pp. 441-450 ◽  
Author(s):  
Naoki Akai ◽  
◽  
Yasunari Kakigi ◽  
Shogo Yoneyama ◽  
Koichi Ozaki ◽  
...  

[abstFig src='/00280004/02.jpg' width='300' text='Navigation under strong rainy condition' ] The Real World Robot Challenge (RWRC), a technical challenge for outdoor mobile robots, has robots automatically navigate a predetermined path of over 1 km with the objective of detecting specific persons. RWRC 2015 was conducted in the rain, and no robot could complete the mission. This was because sensors on the robots detected raindrops and the robots then generated unexpected behavior, indicating the need to study the influence of rain on mobile navigation systems – a study clearly not yet sufficient. We begin by describing our robot’s waterproofing, then investigate the influence of rain on the external sensors commonly used in mobile robot navigation and discuss how the robot navigates autonomously in the rain. We conducted navigation experiments in artificial and actual rainy environments, and the results showed that the robot navigates stably in the rain.


2014 ◽  
Vol 1036 ◽  
pp. 737-742 ◽  
Author(s):  
Krzysztof Foit

Mixed reality is a term which covers a wide range of interaction between computers and the real world. The name refers to the picture shown on a display device, which consists of real and virtual elements. These elements are mixed in some proportion, so we may distinguish between augmented reality (where the real world dominates) and augmented virtuality (where the virtual world dominates). In most cases a flat (2D) live image is processed, which makes this technology available in smartphones, car navigation systems, etc. In the engineering world, mixed reality is used mainly in simulators and trainers, but also as a supporting technology during manufacturing, assembly, or maintenance processes. This paper discusses a different approach: mixed reality as a tool supporting robot programming. The main goal was to reproduce a path given by the operator in off-line programming software using the visual representation of a real object. The system should be regarded as experimental, as it is at an early stage of development. It allows the coordinates of discrete points to be defined, or the path to be discretized, and this data to be saved for further processing. The operator must handle the rest of the programming task, but the use of the robot itself is reduced to a minimum at this stage. The major disadvantages of the method are problems with accuracy and the fixed orientation of the tool used by the robot during the process; these may be eliminated by using better video equipment and specialized image processing routines. This paper presents the main assumptions of the method along with a possible solution of the problem.
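The path discretization mentioned above — turning an operator-given path into discrete points saved for further processing — can be illustrated with a simple polyline resampler. This is a generic sketch under assumed names, not the paper's actual routine.

```python
import math

def discretize_path(points, step):
    """Resample a 2D polyline (list of (x, y) vertices) at a fixed
    arc-length step, returning discrete points for later processing."""
    out = [points[0]]
    leftover = 0.0  # arc length already covered on the previous segment
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        t = step - leftover
        while t <= seg:
            r = t / seg
            out.append((x0 + r * (x1 - x0), y0 + r * (y1 - y0)))
            t += step
        leftover = seg - (t - step)
    return out

pts = discretize_path([(0, 0), (4, 0)], 1.0)
# pts: [(0, 0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
```

Each resulting point could then be exported to the off-line programming software as a target coordinate.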


2019 ◽  
Vol 27 (1) ◽  
pp. 32-45
Author(s):  
Sanna M. Pampel ◽  
Katherine Lamb ◽  
Gary Burnett ◽  
Lee Skrypchuk ◽  
Chrisminder Hare ◽  
...  

Although drivers gain experience with age, many older drivers are faced with age-related deteriorations that can lead to a higher crash risk. Head-Up Displays (HUDs) have been linked to significant improvements in driving performance for older drivers by tackling issues related to aging. For this study, two Augmented Reality (AR) HUD virtual car navigation solutions were tested (one screen-fixed, one world-fixed), aiming to improve navigation performance and reduce the discrepancy between younger and older drivers by aiding the appropriate allocation of attention and easing interpretation of navigational information. Twenty-five participants (12 younger, 13 older) undertook a series of drives within a medium-fidelity simulator with three different navigational conditions (virtual car HUD, static HUD arrow graphic, and traditional head-down satnav). Results showed that older drivers tended to achieve navigational success rates similar to the younger group, but experienced higher objective mental workload. Solely for the static HUD arrow graphic, differences in most workload questionnaire items and objective workload between younger and older participants were not significant. The virtual car led to improved navigation performance of all drivers, compared to the other systems. Hence, both AR HUD systems show potential for older drivers, which needs to be further investigated in a real-world driving context.


Author(s):  
Laura Pérez-Pachón ◽  
Parivrudh Sharma ◽  
Helena Brech ◽  
Jenny Gregory ◽  
Terry Lowe ◽  
...  

Abstract

Purpose: Emerging holographic headsets can be used to register patient-specific virtual models obtained from medical scans with the patient’s body. Maximising accuracy of the virtual models’ inclination angle and position (ideally, ≤ 2° and ≤ 2 mm, respectively, as in currently approved navigation systems) is vital for this application to be useful. This study investigated the accuracy with which a holographic headset registers virtual models with real-world features based on the position and size of image markers.

Methods: HoloLens® and the image-pattern-recognition tool Vuforia Engine™ were used to overlay a 5-cm-radius virtual hexagon on a monitor’s surface in a predefined position. The headset’s camera detection of an image marker (displayed on the monitor) triggered the rendering of the virtual hexagon on the headset’s lenses. 4 × 4, 8 × 8 and 12 × 12 cm image markers displayed at nine different positions were used. In total, the position and dimensions of 114 virtual hexagons were measured on photographs captured by the headset’s camera.

Results: Some image marker positions and the smallest image marker (4 × 4 cm) led to larger errors in the perceived dimensions of the virtual models than other image marker positions and larger markers (8 × 8 and 12 × 12 cm). ≤ 2° and ≤ 2 mm errors were found in 70.7% and 76% of cases, respectively.

Conclusion: Errors obtained in a non-negligible percentage of cases are not acceptable for certain surgical tasks (e.g. the identification of correct trajectories of surgical instruments). Achieving sufficient accuracy with image marker sizes that meet surgical needs and regardless of image marker position remains a challenge.
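The pass-rate figures reported in the Results (the fraction of cases within the ≤ 2° and ≤ 2 mm limits) amount to a simple threshold count over measured errors. The sketch below shows the computation on illustrative numbers; the sample values are assumptions, not the study's data.

```python
def within_threshold(errors, limit):
    """Fraction of measurements whose absolute error is within the limit."""
    return sum(abs(e) <= limit for e in errors) / len(errors)

# Illustrative error samples (degrees and millimetres), not the study's data.
angle_errors = [0.5, 1.8, 2.5, 1.1]   # degrees
pos_errors = [1.0, 2.2, 0.4, 1.9]     # millimetres

angle_rate = within_threshold(angle_errors, 2.0)  # 0.75
pos_rate = within_threshold(pos_errors, 2.0)      # 0.75
```

Applied to the study's 114 measured hexagons, this kind of count yields the reported 70.7% and 76% rates.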


1979 ◽  
Vol 32 (2) ◽  
pp. 200-209 ◽  
Author(s):  
D. K. Gillman

Modern ship control and navigation systems involve a complex variety of intricately interwoven electronic, electrical, mechanical and hydraulic devices. It has become appropriate therefore to devise a convenient method for representing mathematically the dynamic characteristics of these systems and the elements which comprise them, so that block-diagram modelling can be implemented. This is ideal for displaying the interrelationships of such systems pictorially and has the added advantage of simplicity and of indicating realistically the actual processes which are taking place, as the models refer to the relationship between mathematics and the real world.


Author(s):  
Mahdi Hashemi ◽  
Abolghasem Sadeghi-Niaraki

You may forget where you left your keys when you need them. In a ubiquitous computing space, your keys will find you and tell you where they are. Ubiquitous computing, the third generation of computing spaces after mainframes and personal computers, is in its early stages of evolution. In a ubiquitous computing space, sensors and computing nodes are invisibly, inconspicuously, and pervasively embedded in all real-world objects and are all connected to each other through omnipresent wireless networks. The goal is to make real-world objects seem intelligent and autonomous in providing users with electronic and Internet services, without users even noticing how these services are provided. The real world, cyberspace, modeling, and mathematics are identified as the main constituents of ubiquitous computing in this study. These four areas are investigated one by one and in combination to show how they create a solid foundation for ubiquitous computing. An application of ubiquitous computing in car navigation systems is used to demonstrate the reliability of the proposed framework.


Author(s):  
Noor Abdul Khaleq Zghair ◽  
Ahmed S. Al-Araji

<span lang="EN-US">Recently, autonomous mobile robots have gained popularity in the modern world due to their relevant technology and applications in real-world situations. The global market for mobile robots will grow significantly over the next 20 years. Autonomous mobile robots are found in many fields, including institutions, industry, business, hospitals, agriculture, and private households, for the purpose of improving day-to-day activities and services. Technological development has raised the requirements for mobile robots because of the services and tasks they provide, such as rescue and research operations, surveillance, and carrying heavy objects. Researchers have conducted many studies on the importance of robots, their uses, and their problems. This article aims to analyze the control system of mobile robots and the way robots move in the real world to achieve their goals. Several technological components of the mobile robot industry must be observed and integrated so that the robot functions properly: navigation systems, localization systems, detection (sensor) systems, and motion, kinematics, and dynamics systems. All these systems should be united through a control unit; thus, the mission or work of the mobile robot is conducted reliably.</span>


2018 ◽  
Vol 41 ◽  
Author(s):  
Michał Białek

Abstract

If we want psychological science to have a meaningful real-world impact, it has to be trusted by the public. Scientific progress is noisy; accordingly, replications sometimes fail even for true findings. We need to communicate the acceptability of uncertainty to the public and our peers, to prevent psychology from being perceived as having nothing to say about reality.

