Virtual Prototyping of an Advanced Leveling Light System Using a Virtual Reality-Based Night Drive Simulator

Author(s):  
Jan Berssenbrügge ◽  
Sven Kreft ◽  
Jürgen Gausemeier

Modern automobiles contain various mechatronic components to support the task of driving. To enhance driver vision and driving safety at night, advanced lighting systems, such as a predictive advanced front lighting system (PAFS), enhance automotive lighting by swiveling the headlights horizontally into approaching curves on a winding road. In addition, basic leveling light systems tilt the headlights vertically in order to compensate for chassis pitch caused by the vehicle load or by suspension effects when driving on a rough road. More advanced leveling systems even account for the vertical course of an undulating road, using GPS data to locate the vehicle’s position plus digital map data to predict the vertical course of the road in front of the vehicle. That way, the headlights follow the road curvature and illuminate the road ahead of the vehicle without glaring oncoming traffic. In order to design, evaluate, and optimize the control algorithm within the electronic control unit (ECU) of the leveling light system, various control parameter values need to be adjusted and fine-tuned to ensure an optimal response of the system to the current road scenario. For this task, numerous time-consuming and costly test drives at night are necessary. This paper proposes to use a virtual reality-based night driving simulator as a tool to simulate and evaluate an advanced leveling light system. The PC-based night driving simulator visualizes the complex beam patterns of automotive headlights in high detail and in real time. The user drives a simulated vehicle over a virtual test track at night, while the vehicle motion directly affects the lighting direction of the headlights. Thus, the effect of the vehicle dynamics on the lighting can be evaluated directly in the simulator. The system is connected to the control algorithm of the advanced leveling light system, which controls the headlights’ tilting angle. 
This provides a close-to-reality simulation of the advanced leveling light system during a simulated drive at night. That way, within the virtual prototyping process of the advanced leveling light system, good combinations of control parameter values can be identified, based on virtual test drives in the night driving simulator, and the number of real test drives can be reduced significantly. Promising combinations of the control parameter values can then be validated during a real test drive at night.
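The tuning task described above can be illustrated with a minimal sketch of a leveling controller: the commanded tilt counteracts the measured chassis pitch and adds a share of the predicted road slope ahead. All names, gains, and limits here are illustrative assumptions, not the actual control law of the system in the paper.

```python
def leveling_tilt_deg(chassis_pitch_deg: float,
                      predicted_slope_deg: float,
                      k_pitch: float = 1.0,        # gain on measured chassis pitch (assumed)
                      k_slope: float = 0.6,        # gain on predicted road slope (assumed)
                      tilt_limit_deg: float = 2.5) -> float:
    """Return the commanded headlight tilt angle in degrees."""
    # Counteract chassis pitch; anticipate the vertical road course ahead.
    tilt = -k_pitch * chassis_pitch_deg + k_slope * predicted_slope_deg
    # Clamp to the mechanical range of the leveling actuator.
    return max(-tilt_limit_deg, min(tilt_limit_deg, tilt))
```

In a virtual test drive, candidate values for `k_pitch` and `k_slope` could be swept and scored (e.g. by how often the simulated beam glares oncoming traffic), before validating the best combinations on a real road.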


Author(s):  
Jan Berssenbrügge ◽  
Jörg Stöcklein ◽  
Andre Koza ◽  
Iris Gräßler

Advanced driver assistance systems (ADAS) are increasingly being tested during simulated test drives in a test and training environment based on a driving simulator, in order to reduce the number of extensive real test drives. Running numerous virtual test drives in the driving simulator requires detailed, realistic-looking 3D models of real test tracks. Manually reproducing real tracks is a cumbersome and time-intensive task. In previous work, we introduced a method to create virtual test tracks with minimal manual effort using data from various sources, such as navigation systems, digital elevation models, aerial images, and digital landscape models [1]. However, these virtual test tracks still do not appear very realistic to the test driver, since that method generated no detailed vegetation. In this paper, we propose an approach to enrich a virtual terrain with authentic vegetation. The aim is to increase the perceived realism of the landscape, in order to provide the same input for the sensors of an ADAS under test in the driving simulator as on the real track. The requirements are to automate the vegetation generation as far as possible and to support real-time rendering of the resulting, very complex 3D model, which is crucial for a usable sensor feed. The basis for the generation of vegetation in this work is data from digital landscape models. These data define, in geographic coordinates, where areas such as woodlands and agricultural zones are located. These areas are refined by color detection applied to the corresponding aerial images, in order to identify various tree and plant species. The actual plants are then placed in the refined areas by applying a procedural rule system. The rule system imitates the natural growth behavior of plants and is based on terrain characteristics such as gradient, direction of a slope, or competition for resources. 
By combining terrain data, color detection on aerial images, and procedural rules, a planting method is developed that generates natural-looking vegetation. The prototype implementation of our approach, based on the Unity3D game engine, which supports easy creation of complex sceneries, showed that it is possible to create vegetation for a virtual test track with minimal manual effort. By placing vegetation at realistic locations and accounting for the natural spread of plants, the perceived realism of the scene was improved. A performance analysis showed that interactive frame rates are achievable even with the generated vegetation.
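The planting rules described above can be imitated in a few lines: candidate positions inside an area from the landscape model are rejected when the terrain is too steep or when an already-placed plant is too close (a crude stand-in for competition for resources). The thresholds and the simple pairwise spacing check are assumptions for illustration, not the paper's actual rule system.

```python
import random

def place_plants(candidates, slope_at, max_slope=0.6, min_spacing=4.0, seed=1):
    """candidates: iterable of (x, y); slope_at: callable (x, y) -> slope."""
    rng = random.Random(seed)
    order = list(candidates)
    rng.shuffle(order)          # avoid grid-like artifacts in the result
    placed = []
    for (x, y) in order:
        if slope_at(x, y) > max_slope:
            continue            # too steep for this plant species
        if any((x - px) ** 2 + (y - py) ** 2 < min_spacing ** 2
               for (px, py) in placed):
            continue            # competition: too close to an established plant
        placed.append((x, y))
    return placed
```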


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 26 ◽
Author(s):  
David González-Ortega ◽  
Francisco Javier Díaz-Pernas ◽  
Mario Martínez-Zarzuela ◽  
Míriam Antón-Rodríguez

A driver’s gaze information can be crucial in driving research because of its relation to driver attention. In particular, including gaze data in driving simulators broadens the scope of research studies, as drivers’ gaze patterns can be related to their features and performance. In this paper, we present two gaze region estimation modules integrated into a driving simulator. One uses the 3D Kinect device and the other uses the virtual reality Oculus Rift device. The modules are able to detect the region, out of seven into which the driving scene was divided, at which a driver is gazing in every processed frame of the route. Four methods, which learn the relation between gaze displacement and head movement, were implemented and compared for gaze estimation. Two are simpler and based on points that try to capture this relation, and two are based on classifiers such as a multilayer perceptron (MLP) and a support vector machine (SVM). Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display: first with a big screen and later with the Oculus Rift. On the whole, the Oculus Rift outperformed the Kinect as the hardware for gaze estimation. The Oculus-based gaze region estimation method with the highest performance achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and makes a multimodal driving-performance analysis possible, in addition to the immersion and realism provided by the virtual reality experience of the Oculus.
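The simpler "point-based" idea can be sketched as a nearest-neighbour lookup: record a mean head pose (yaw, pitch) per scene region during calibration, then classify each frame by the closest calibration point. The seven region names and poses below are illustrative assumptions, not the paper's calibration data.

```python
import math

# Assumed calibration: mean head pose (yaw, pitch, in degrees) per region.
CALIBRATION = {
    "windshield":   (0.0, 0.0),
    "left_mirror":  (-35.0, -5.0),
    "right_mirror": (35.0, -5.0),
    "rear_mirror":  (10.0, 10.0),
    "dashboard":    (0.0, -20.0),
    "left_window":  (-60.0, 0.0),
    "right_window": (60.0, 0.0),
}

def gaze_region(yaw: float, pitch: float) -> str:
    """Classify a head pose into the nearest of the seven scene regions."""
    return min(CALIBRATION,
               key=lambda r: math.hypot(yaw - CALIBRATION[r][0],
                                        pitch - CALIBRATION[r][1]))
```

The MLP- and SVM-based variants would replace this nearest-point rule with a classifier trained on the same calibration frames.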


Author(s):  
S. Aihara ◽  
T. Emura ◽  
R. Nomura ◽  
T. Sunada ◽  
M. Kumagai ◽  
...  

2000 ◽  
Vol 4 (4) ◽  
pp. 110-120 ◽  
Author(s):  
Chiyi Cheng ◽  
Mingmin Zhang ◽  
Zhigeng Pan

The benefits of multi-resolution modeling techniques in virtual reality are vast, but one essential question is how such models can be used to speed up virtual design and virtual prototyping. In this paper we propose a new multi-resolution representation scheme called MRM, which supports efficient extraction of both fixed- and variable-resolution modeling data for handling multiple objects in the same scene. One important feature of the MRM scheme is that it supports unified selective simplification and selective refinement over the mesh representation of the object. In addition, multi-resolution models may be used to support real-time geometric transmission of data in collaborative virtual design and prototyping applications. These key features make MRM applicable to a variety of VR applications.


Author(s):  
Jyun-Ming Chen ◽  
Chih-Chang Hsieh

Abstract The incorporation of VR (virtual reality) technology in the CAD/CAM community shows a promising future. Virtual prototyping uses VR techniques to simulate various functionalities of a candidate design. Downstream aspects of the product can be examined early at the design stage, saving the time and money required for repetitive design iterations. Real-time rendering is essential for interactive VR applications. This is especially challenging when dealing with complex geometric databases. Various methods have been proposed in the literature to tackle this problem. Level-of-detail is a methodology that incorporates multiple representations of a model in the viewing environment. It reduces the rendering load by presenting the model at the most appropriate level of detail. However, these simplified representations often require laborious redesign efforts. In this paper, several model simplification techniques are reviewed. An automatic simplification procedure for CSG models is also devised. This method incorporates both geometric simplification and dimensional reduction schemes. Implemented on a non-manifold topological kernel, the system has been shown to produce promising results.
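The level-of-detail idea described above can be reduced to a distance-based selector: the farther the model is from the viewer, the coarser the representation rendered. The distance thresholds here are illustrative assumptions.

```python
def select_lod(distance: float, lod_distances=(10.0, 30.0, 80.0)) -> int:
    """Return LOD index: 0 is full detail, len(lod_distances) is coarsest."""
    for i, d in enumerate(lod_distances):
        if distance < d:
            return i
    return len(lod_distances)
```

A renderer would call this per object per frame and draw the mesh representation stored at the returned index.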


Author(s):  
Sankar Jayaram ◽  
Scott R. Angster ◽  
Sanjay Gowda ◽  
Uma Jayaram ◽  
Robert R. Kreitzer

Abstract Virtual prototyping is a relatively new field which is significantly changing the product development process. In many applications, virtual prototyping relies on virtual reality tools for analysis of designs. This paper presents an architecture for a virtual prototyping system which was created for the analysis of automotive interiors. This flexible and open architecture allows the integration of various virtual reality software and hardware tools with conventional state-of-the-art CAD/CAM tools to provide an integrated virtual prototyping environment. This architecture supports the automatic transfer of data from and to parametric CAD systems, human modeling for ergonomic evaluations (first person and third person perspectives), design modifications in the virtual environment, distributed evaluations of virtual prototypes, reverse transfer of design modifications to the CAD system, and preservation of design intent and assembly intent during modifications in the virtual environment.


Author(s):  
S. H. Choi ◽  
H. H. Cheung ◽  
W. K. Zhu

Biomedical objects are used as prostheses to repair damaged bone structures and missing body parts, as well as to study complex human organs and plan surgical procedures. They are, however, not economical to make by traditional manufacturing processes. Researchers have therefore explored the multi-material layered manufacturing (MMLM) technology to fabricate biomedical objects from CAD models. Yet, current MMLM systems remain experimental with limited practicality; they are slow, expensive, and can only handle small, simple objects. To address these limitations, this chapter presents the multi-material virtual prototyping (MMVP) technology for digital fabrication of complex biomedical objects cost-effectively. MMVP integrates MMLM with virtual reality to fabricate biomedical objects for stereoscopic visualisation and analyses to serve biomedical engineering purposes. This chapter describes the principle of MMVP and the processes of digital fabrication of biomedical objects. Case studies are presented to demonstrate these processes and their applications in biomedical engineering.


2018 ◽  
Vol 2 (1) ◽  
pp. 48-58 ◽  
Author(s):  
Otmar Bock ◽  
Uwe Drescher ◽  
Wim van Winsum ◽  
Thomas F Kesnerus ◽  
Claudia Voelcker-Rehage

Virtual reality technology can be used for ecologically valid assessment and rehabilitation of cognitive deficits. This article expands the scope of applications to ecologically valid multitasking. A commercially available driving simulator was upgraded by adding an ever-changing sequence of concurrent, everyday-like tasks. Furthermore, the simulator software was modified and interfaced with a non-motorized treadmill to yield a pedestrian street-crossing simulator. In the latter simulator, participants walk through a virtual city, stop at busy streets to wait for a gap in traffic, and then cross. Again, a sequence of everyday-like tasks is added. A feasibility study yielded adequate “presence” in both virtual scenarios, and plausible data about performance decrements under multi-task compared to single-task conditions. The present approach could be suitable for the assessment and training of multitasking skills in older adults and neurological patients.

