A new approach to simulate buildings and their crucial characteristics in a comprehensive urban simulation environment

2015 ◽  
Author(s):  
M. Ziegler ◽  
T. Bednar

Author(s):  
Óscar Pérez-Gil ◽  
Rafael Barea ◽  
Elena López-Guillén ◽  
Luis M. Bergasa ◽  
Carlos Gómez-Huélamo ◽  
...  

Nowadays, Artificial Intelligence (AI) is growing by leaps and bounds in almost all fields of technology, and Autonomous Vehicles (AV) research is no exception. This paper proposes the use of algorithms based on Deep Learning (DL) in the control layer of an autonomous vehicle. More specifically, Deep Reinforcement Learning (DRL) algorithms such as Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG) are implemented in order to compare their results. The aim of this work is to obtain, by applying a DRL algorithm, a trained model capable of sending control commands to the vehicle so that it navigates properly and efficiently along a predetermined route. In addition, for each algorithm several agents are presented as solutions, each using different data sources to derive the vehicle control commands. For this purpose, the open-source simulator CARLA is used, giving the system the ability to perform a multitude of tests without any risk in a hyper-realistic urban simulation environment, something that is unthinkable in the real world. The results show that both DQN and DDPG reach the goal, but DDPG achieves better performance, producing trajectories very similar to those of a classic controller such as LQR. In both cases the RMSE is lower than 0.1 m when following trajectories 180–700 m in length. Finally, conclusions and future work are discussed.
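The abstract reports route-tracking RMSE below 0.1 m. A minimal sketch of how such a tracking-error metric could be computed from logged vehicle and reference positions (the function name and array layout are assumptions, not the paper's code):

```python
import numpy as np

def rmse_tracking(actual_xy, reference_xy):
    """RMSE (in metres) of the point-wise Euclidean distance between the
    driven trajectory and the reference route, sampled at matching points."""
    errors = np.linalg.norm(np.asarray(actual_xy, dtype=float)
                            - np.asarray(reference_xy, dtype=float), axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))
```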


2020 ◽  
Author(s):  
Andjelka Kelic ◽  
Walter Beyeler ◽  
Roger Mitchell ◽  
Michael Bernard ◽  
Casey Doyle ◽  
...  

Author(s):  
Bradley Stoor ◽  
Stanley Pruett ◽  
Mathrew Duquette ◽  
Robert Subr ◽  
Tim MtCastle

2009 ◽  
Vol 147-149 ◽  
pp. 203-214
Author(s):  
Lucas Ginzinger ◽  
Roland Zander ◽  
Heinz Ulbrich

A new approach to control a rubbing rotor by applying an active auxiliary bearing is developed. The auxiliary bearing is attached to the foundation via two unidirectional actuators. The control force is applied indirectly, using the active auxiliary bearing only in case of rubbing. A framework for the development of a feedback controller for an active auxiliary bearing is presented, along with the theory of a robust two-phase control strategy which guarantees a smooth transition from free rotor motion to the state of full annular rub. A simulation environment for the elastic rotor and the auxiliary bearing, including the non-smooth nonlinear dynamics of the rubbing contact, is used to develop the feedback controller. Experimental studies have been carried out at a rotor test rig, and various experiments show the success of the strategy: in case of rubbing, the contact forces are reduced by up to 90%.
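A hypothetical sketch of the two-phase idea described above: one gain while the rotor approaches the auxiliary bearing, another once full annular rub is established (the class, gains, and switching condition are illustrative assumptions, not the authors' controller):

```python
class TwoPhaseRubController:
    """Phase 1: soften the approach to first contact.
    Phase 2: hold full annular rub with a stiffer command."""

    def __init__(self, contact_gap, approach_gain, hold_gain):
        self.contact_gap = contact_gap      # radial clearance to the bearing
        self.approach_gain = approach_gain  # gentle gain before contact
        self.hold_gain = hold_gain          # stiffer gain during annular rub

    def actuator_command(self, rotor_deflection):
        if abs(rotor_deflection) < self.contact_gap:   # phase 1: free motion
            return self.approach_gain * rotor_deflection
        return self.hold_gain * rotor_deflection       # phase 2: annular rub
```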


2021 ◽  
pp. 1-11
Author(s):  
Zaouche Mohammed ◽  
Foughali Khaled

In this work, a new approach for aircraft aerodynamic behavior identification using virtual simulation is proposed. Both theoretical and experimental aspects are presented. A simulation environment, Microsoft Flight Simulator, is used as the test platform. To make communication with this environment possible, a real-time interface that allows reading from and writing to the shared memory layer of this flight simulator is developed. Using this interface, the virtual aircraft's sensors are read and the commands are written to the control inputs (thrust, elevators, ailerons, trims, and rudder). Also, an identification of the aerodynamic coefficients' derivatives using the total least squares technique is presented. The piloting law expression is tightly tied to those derivatives, which are unknown and not always available. The aircraft aerodynamic model is then used to calculate the aerodynamic coefficients. We determine the aerodynamic performance of the wing based on the drag polar, the computation of the maximum lift-to-drag ratio, and the determination of the point at which the aerodynamic stall phenomenon appears.
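The total least squares identification mentioned above can be sketched with the classic SVD formulation, which accounts for noise in both the regressor matrix and the measurements (a generic TLS solver, not the authors' implementation; variable names are assumptions):

```python
import numpy as np

def total_least_squares(A, b):
    """Solve A x ≈ b in the total-least-squares sense: the solution is
    built from the right singular vector of [A | b] associated with the
    smallest singular value."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([A, b]))
    v = Vt[-1]                # singular vector for the smallest singular value
    return -v[:n] / v[n]      # x = -v_A / v_b
```

With noise-free data the TLS solution coincides with ordinary least squares; it differs when both regressors and measurements are perturbed.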


2011 ◽  
Vol 403-408 ◽  
pp. 3555-3558
Author(s):  
Abbas Khosravi ◽  
Mohammad Kazem Farhadipour ◽  
Abdolreza Moghimi ◽  
Najme Roozmand

This article explains how the RoboGenius team developed an intelligent agent in a war simulation environment. The main focus of the team is the application of artificial intelligence in the war simulation server [1]. For this purpose we used a combined code base, part of which comes from the Robotoos team, while its general parts come from UVA2003 in 2D soccer simulation [2]. In this article we analyze the environment from an intelligent agent's point of view rather than exploring software engineering issues, and, given the kind of simulation environment, we study exploration issues within it. The simulation environment is treated as a partially visible, dynamic environment. The robot's decision-making problems and the priority of its decisions have been explored and implemented using a decision tree. For the problem of environment exploration in war simulation we present a new approach. Finally, we explore the possibility of using artificial intelligence to further develop the war simulation agent.
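The priority-ordered decision making described above can be pictured as a tiny hand-built decision tree (the state keys, thresholds, and actions are illustrative assumptions, not the team's agent):

```python
def decide(state):
    """Toy priority tree for a war-simulation agent:
    survival first, then combat, then exploration."""
    if state["health"] < 0.2:          # badly damaged: survival dominates
        return "retreat"
    if state["enemy_in_range"]:        # combat has next priority
        return "attack"
    if state["unexplored_nearby"]:     # explore the partially visible map
        return "explore"
    return "patrol"                    # default behaviour
```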


SIMULATION ◽  
2018 ◽  
Vol 94 (11) ◽  
pp. 979-992
Author(s):  
Hyo-Cheol Lee ◽  
Seok-Won Lee

Modeling and simulation are methods of validating new systems that are too risky to deploy directly in the real world. During the simulation, the simulation environment continuously changes and simulation objects behave according to the changing situations. In general, modeling the behavior for all possible situations is extremely difficult when the rationale is unknown. Therefore, in order to adapt to the changing situation, it is important to recognize the rationale behind the behaviors of the simulation object. However, in many cases, even though the rationale is unknown or difficult to recognize, the simulation requires reasonable behaviors, such as a commander's decision in a war game simulation or a driver's behavior during rush hour. In this study, we propose a new approach to determine the behavior of simulation objects under changing situations. The proposal is a unified learning approach that integrates two methods, data-driven and knowledge-driven, which allows simulation objects to learn behavioral knowledge from experience as well as from domain experts performing the simulation, and to reuse verified knowledge. By combining both approaches, we supplement the shortcomings of one method with the strengths of the other. To verify our method, we apply the proposed approach to a military training simulation.
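One way to picture the unified approach: knowledge-driven rules from domain experts take precedence, and a data-driven model learned from simulation logs covers situations no rule matches (a structural sketch under assumed names, not the authors' architecture):

```python
def make_unified_policy(rules, learned_model):
    """rules: list of (condition, action) pairs elicited from experts.
    learned_model: callable trained on logged behaviour; used as fallback."""
    def policy(situation):
        for condition, action in rules:   # knowledge-driven pass first
            if condition(situation):
                return action
        return learned_model(situation)   # data-driven fallback
    return policy
```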

