A Fast Lax–Hopf Algorithm to Solve the Lighthill–Whitham–Richards Traffic Flow Model on Networks

2020 ◽  
Vol 54 (6) ◽  
pp. 1516-1534
Author(s):  
Michele D. Simoni ◽  
Christian G. Claudel

Efficient and exact algorithms are important for performing fast and accurate traffic network simulations with macroscopic traffic models. In this paper, we extend the semianalytical Lax–Hopf algorithm to compute link inflows and outflows with the Lighthill–Whitham–Richards (LWR) model. Our proposed fast Lax–Hopf algorithm has a very low computational complexity. We demonstrate that some of the original algorithm’s operations (associated with the initial conditions) can be discarded, leading to a faster computation of boundary demands/supplies in network simulation problems for general concave fundamental diagrams (FDs). Moreover, the computational cost can be further reduced for triangular FDs and specific space–time discretizations. The resulting formulation has a performance comparable to that of the link transmission model, and because it solves the original LWR model for a wide range of FD shapes with any initial configuration, it is suitable for a broad range of traffic operations problems. As part of the analysis, we compare the performance of the proposed scheme with that of other well-known computational methods.
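For a triangular FD, the boundary demand/supply computation that such link-based LWR solvers rely on reduces to simple minimum operations. A minimal sketch follows; the function names and all parameter values are illustrative, not the paper's implementation:

```python
def demand(k, vf=30.0, kc=0.05, qmax=1.5):
    """Sending flow (demand) for a triangular fundamental diagram.
    vf: free-flow speed, kc: critical density, qmax = vf * kc."""
    return min(vf * k, qmax)

def supply(k, w=6.0, kj=0.3, qmax=1.5):
    """Receiving flow (supply). w: congested wave speed, kj: jam density;
    consistency requires w * (kj - kc) == qmax."""
    return min(w * (kj - k), qmax)

def boundary_flow(k_up, k_down):
    """Flux across a link boundary: min of upstream demand and downstream supply."""
    return min(demand(k_up), supply(k_down))
```

In free flow the upstream demand binds (e.g. `boundary_flow(0.02, 0.02)` returns the free-flow flux `30 * 0.02 = 0.6`), while under downstream congestion the supply binds.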

Energies ◽  
2021 ◽  
Vol 14 (23) ◽  
pp. 7851
Author(s):  
Majid Haghshenas ◽  
Peetak Mitra ◽  
Niccolò Dal Santo ◽  
David P. Schmidt

In this work, a data-driven methodology for modeling combustion kinetics, Learned Intelligent Tabulation (LIT), is presented. LIT aims to accelerate the tabulation of combustion mechanisms via machine learning algorithms such as Deep Neural Networks (DNNs). The high-dimensional composition space is sampled from high-fidelity simulations covering a wide range of initial conditions to train these DNNs. The input data are clustered into subspaces, and each subspace is trained with a DNN regression model targeted to a particular part of the high-dimensional composition space. This localized approach has proven to be more tractable than a single global ANN regression model, which fails to generalize across various composition spaces. The clustering is performed using an unsupervised method, the Self-Organizing Map (SOM), which automatically subdivides the space. A dense network comprising fully connected layers is used for the regression model, and the network hyperparameters are optimized using Bayesian optimization. A nonlinear transformation of the parameters is used to improve sensitivity to minor species and enhance the prediction of ignition delay. The LIT method is employed to model the chemical kinetics of zero-dimensional H2–O2 and CH4–air combustion. The data-driven method achieves good agreement with the benchmark method while being cheaper in terms of computational cost. LIT is naturally extensible to different combustion models such as flamelet and PDF transport models.
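The cluster-then-regress structure described above can be sketched as follows, with a nearest-centroid assignment standing in for the SOM and per-cluster linear least-squares models standing in for the DNNs (both simplifications, along with all names, are assumptions for illustration only):

```python
import numpy as np

def assign(X, centroids):
    """Nearest-centroid cluster assignment (stand-in for the SOM)."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def fit_local_models(X, y, centroids):
    """Fit one least-squares model per cluster (stand-in for per-cluster DNNs)."""
    labels = assign(X, centroids)
    models = {}
    for c in range(len(centroids)):
        m = labels == c
        A = np.hstack([X[m], np.ones((m.sum(), 1))])  # add bias column
        models[c] = np.linalg.lstsq(A, y[m], rcond=None)[0]
    return models

def predict(X, centroids, models):
    """Route each query point to its cluster's local model."""
    labels = assign(X, centroids)
    A = np.hstack([X, np.ones((len(X), 1))])
    return np.array([A[i] @ models[l] for i, l in enumerate(labels)])
```

The point of the localized approach survives even in this toy form: each local model only has to fit its own region of composition space, which is far easier than one global fit.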


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
R. Mendes ◽  
J. C. B. da Silva ◽  
J. M. Magalhaes ◽  
B. St-Denis ◽  
D. Bourgault ◽  
...  

Abstract Internal waves (IWs) in the ocean span a wide range of time and spatial scales and are now acknowledged as important sources of turbulence and mixing, with the largest observed IWs reaching 200 m in amplitude and vertical velocities close to 0.5 m s−1. Their origin is mostly tidal, but an increasing number of non-tidal generation mechanisms have also been observed. For instance, river plumes provide horizontally propagating density fronts, which have been observed to generate IWs when transitioning from supercritical to subcritical flow. In this study, satellite imagery and autonomous underwater measurements are combined with numerical modeling to investigate IW generation from an initially subcritical density front originating at the Douro River plume (western Iberian coast). These unprecedented results may have important implications for near-shore dynamics, since they suggest that rivers of moderate flow may play an important role in IW generation between fresh riverine and coastal waters.


Author(s):  
E. Thilliez ◽  
S. T. Maddison

Abstract Numerical simulations are a crucial tool for understanding the relationship between debris discs and planetary companions. As debris disc observations now reach unprecedented levels of precision over a wide range of wavelengths, an appropriate level of accuracy and consistency is required in numerical simulations to confidently interpret this new generation of observations. However, simulations throughout the literature have been conducted with various initial conditions, often with little or no justification. In this paper, we aim to study the dependence of N-body simulations on their initial conditions when modelling the interaction of a massive, eccentric planet with an exterior debris disc. To achieve this, we first classify three broad approaches used in the literature and provide some physical context for when each category should be used. We then run a series of N-body simulations, which include radiation forces acting on small grains, with varying initial conditions across the three categories. We test the influence of the initial parent body belt width, eccentricity, and alignment with the planet on the resulting debris disc structure, and compare the final peak emission location, disc width, and offset of synthetic disc images produced with a radiative transfer code. We also track the evolution of the forced eccentricity of the dust grains induced by the planet, as well as resonant dust trapping. We find that an initially broad parent body belt always results in a broader debris disc than an initially narrow parent body belt. While simulations with a parent body belt of low initial eccentricity (e ~ 0) and of high initial eccentricity (0 < e < 0.3) result in similarly broad discs, we find that purely secular forced initial conditions, where the initial disc eccentricity is set to the forced value and the disc is aligned with the planet, always result in a narrower disc.
We conclude that broad debris discs can be modelled using either a dynamically cold or a dynamically warm parent belt, while eccentric narrow debris rings are reproduced using a secularly forced parent body belt.
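The secular forced eccentricity referred to above follows, at leading (Laplace-Lagrange) order, from a ratio of Laplace coefficients. A sketch of that textbook calculation (not the paper's code; function names and the quadrature scheme are illustrative):

```python
import numpy as np

def laplace_coeff(s, j, alpha, n=20000):
    """Laplace coefficient b_s^(j)(alpha) = (1/pi) * integral over [0, 2*pi) of
    cos(j*psi) / (1 - 2*alpha*cos(psi) + alpha**2)**s, by quadrature on a
    uniform periodic grid (spectrally accurate for this smooth integrand)."""
    psi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    integrand = np.cos(j * psi) / (1.0 - 2.0 * alpha * np.cos(psi) + alpha**2) ** s
    return 2.0 * integrand.mean()

def forced_eccentricity(alpha, e_planet):
    """Laplace-Lagrange forced eccentricity of a test particle exterior to a
    planet, with alpha = a_planet / a_particle < 1."""
    return laplace_coeff(1.5, 2, alpha) / laplace_coeff(1.5, 1, alpha) * e_planet
```

For small alpha the ratio of Laplace coefficients approaches (5/4) * alpha, so a planet with e = 0.1 at alpha = 0.1 forces an eccentricity of roughly 0.0125 on distant belt particles.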


2020 ◽  
Vol 2020 (12) ◽  
Author(s):  
Federico Carta ◽  
Nicole Righi ◽  
Yvette Welling ◽  
Alexander Westphal

Abstract We present a mechanism for realizing hybrid inflation using two axion fields with a purely non-perturbatively generated scalar potential. The structure of the scalar potential is highly constrained by the discrete shift symmetries of the axions. We show that harmonic hybrid inflation generates observationally viable slow-roll inflation for a wide range of initial conditions. This is possible while accommodating certain UV arguments favoring constraints f ≲ MP and ∆ϕ60 ≲ MP on the axion periodicity and slow-roll field range, respectively. We discuss controlled ℤ2-symmetry breaking of the adjacent axion vacua as a means of avoiding cosmological domain wall problems. Including a minimal form of ℤ2-symmetry breaking into the minimally tuned setup leads to a prediction of primordial tensor modes with the tensor-to-scalar ratio in the range 10−4 ≲ r ≲ 0.01, directly accessible to upcoming CMB observations. Finally, we outline several avenues towards realizing harmonic hybrid inflation in type IIB string theory.
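For context, the slow-roll quantities and observables referenced above follow the standard definitions for an effective single-field trajectory (a generic textbook reminder, not the paper's specific two-axion derivation):

```latex
\epsilon \equiv \frac{M_P^2}{2}\left(\frac{V_{,\phi}}{V}\right)^2, \qquad
\eta \equiv M_P^2\,\frac{V_{,\phi\phi}}{V}, \qquad
r = 16\,\epsilon, \qquad
n_s \simeq 1 - 6\,\epsilon + 2\,\eta .
```

Slow roll requires $\epsilon, |\eta| \ll 1$, and the quoted tensor-to-scalar range $10^{-4} \lesssim r \lesssim 0.01$ translates directly into a range for $\epsilon$ during the observable e-folds.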


2021 ◽  
Vol 50 (1) ◽  
pp. 33-40
Author(s):  
Chenhao Ma ◽  
Yixiang Fang ◽  
Reynold Cheng ◽  
Laks V.S. Lakshmanan ◽  
Wenjie Zhang ◽  
...  

Given a directed graph G, the directed densest subgraph (DDS) problem asks for the subgraph of G whose density is the highest among all subgraphs of G. The DDS problem is fundamental to a wide range of applications, such as fraud detection, community mining, and graph compression. However, existing DDS solutions suffer from efficiency and scalability problems: on a three-thousand-edge graph, one of the best exact algorithms takes three days to complete. In this paper, we develop an efficient and scalable DDS solution. We introduce the notion of the [x, y]-core, which is a dense subgraph of G, and show that the densest subgraph can be accurately located through the [x, y]-core with theoretical guarantees. Based on the [x, y]-core, we develop both exact and approximation algorithms. We have performed an extensive evaluation of our approaches on eight large real datasets. The results show that our proposed solutions are up to six orders of magnitude faster than the state of the art.
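The density objective commonly used in the DDS problem is defined over a pair of vertex sets S and T as rho(S, T) = |E(S, T)| / sqrt(|S| * |T|), where E(S, T) is the set of edges from S to T. A minimal sketch of that objective (illustrative only; it does not implement the paper's [x, y]-core algorithm):

```python
from math import sqrt

def directed_density(edges, S, T):
    """Density rho(S, T) = |E(S, T)| / sqrt(|S| * |T|) of the pair (S, T)
    in a directed graph given as a list of (u, v) edges."""
    S, T = set(S), set(T)
    e = sum(1 for u, v in edges if u in S and v in T)
    return e / sqrt(len(S) * len(T))
```

The DDS problem is then to maximize this quantity over all (possibly unequal, possibly overlapping) pairs S, T, which is what makes naive exact search so expensive.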


1996 ◽  
Vol 324 ◽  
pp. 163-179 ◽  
Author(s):  
A. Levy ◽  
G. Ben-Dor ◽  
S. Sorek

The governing equations of the flow field obtained when a thermoelastic rigid porous medium is struck head-on by a shock wave are developed using the multiphase approach. The one-dimensional version of these equations is solved numerically using a TVD-based numerical code. The numerical predictions are compared to experimental results, and good to excellent agreement is obtained for different porous materials and a wide range of initial conditions.
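TVD schemes of the kind used here typically rely on a slope limiter such as minmod to suppress spurious oscillations at the shock front. A minimal illustration of that standard ingredient (not the paper's actual code):

```python
def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope when a and b agree in
    sign, else 0; this keeps the reconstruction total-variation diminishing."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Limited cell slopes for a 1-D list of cell averages (zero slope at
    the boundary cells for simplicity)."""
    return [0.0] + [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
                    for i in range(1, len(u) - 1)] + [0.0]
```

At a discontinuity the neighbouring slope estimates disagree in sign, so the limiter returns zero and the reconstruction stays monotone instead of overshooting.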


2021 ◽  
pp. 1-22
Author(s):  
Joohan Kim ◽  
Vyaas Gururajan ◽  
Riccardo Scarcelli ◽  
Sayan Biswas ◽  
Isaac Ekoto

Abstract Dilute combustion, either using exhaust gas recirculation or excess air, is considered a promising strategy to improve the thermal efficiency of internal combustion engines. However, the dilute air-fuel mixture, especially under intensified turbulence and high-pressure conditions, poses significant challenges for ignitability and combustion stability, which may limit the attainable efficiency benefits. In-depth knowledge of flame kernel evolution is crucial for stabilizing ignition and combustion in such a challenging environment and, in turn, for effective engine development and optimization. To date, the comprehensive understanding of ignition processes needed to develop fully predictive ignition models usable by the automotive industry does not yet exist. Spark ignition involves a wide range of physics, including electrical discharge, plasma evolution, joule heating of the gas, and flame kernel initiation and growth into a self-sustainable flame. In this study, an advanced approach is proposed to model spark-ignition energy deposition and flame kernel growth. To decouple the flame kernel growth from the electrical discharge, a nanosecond pulsed high-voltage discharge is used to trigger spark ignition in an optically accessible small ignition test vessel with a quiescent mixture of air and methane. Initial conditions for the flame kernel, including its thermodynamic state and species composition, are derived from a plasma-chemical equilibrium calculation. The geometric shape and dimensions of the kernel are characterized using a multi-dimensional thermal plasma solver. The proposed modeling approach is evaluated using a high-fidelity computational fluid dynamics procedure to compare the simulated flame kernel evolution against flame boundaries from companion schlieren images.


2018 ◽  
Author(s):  
Fabien Maussion ◽  
Anton Butenko ◽  
Julia Eis ◽  
Kévin Fourteau ◽  
Alexander H. Jarosch ◽  
...  

Abstract. Despite their importance for sea-level rise, seasonal water availability, and as a source of geohazards, mountain glaciers are one of the few remaining sub-systems of the global climate system for which no globally applicable, open source, community-driven model exists. Here we present the Open Global Glacier Model (OGGM, http://www.oggm.org), developed to provide a modular and open source numerical model framework for simulating past and future change of any glacier in the world. The modelling chain comprises data downloading tools (glacier outlines, topography, climate, validation data), a preprocessing module, a mass-balance model, a distributed ice thickness estimation model, and an ice flow model. The monthly mass balance is obtained from gridded climate data and a temperature index melt model. To our knowledge, OGGM is the first global model to explicitly simulate glacier dynamics: the model relies on the shallow ice approximation to compute the depth-integrated flux of ice along multiple connected flowlines. In this paper, we describe and illustrate each processing step by applying the model to a selection of glaciers before running global simulations under idealized climate forcings. Even without an in-depth calibration, the model shows very realistic behaviour. We are able to reproduce earlier estimates of global glacier volume by varying the ice dynamical parameters within a range of plausible values. At the same time, the increased complexity of OGGM compared to other prevalent global glacier models comes at a reasonable computational cost: dozens of glaciers can be simulated on a personal computer, while global simulations realized in a supercomputing environment take up to a few hours per century. Thanks to the modular framework, modules of various complexity can be added to the codebase, allowing new kinds of model intercomparisons to be run in a controlled environment.
Future developments will add new physical processes to the model as well as tools to calibrate the model in a more comprehensive way. OGGM spans a wide range of applications, from ice-climate interaction studies at millennial time scales to estimates of the contribution of glaciers to past and future sea-level change. It has the potential to become a self-sustained, community-driven model for global and regional glacier evolution.
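A temperature index melt model of the kind used for the monthly mass balance can be sketched as follows. The functional form (solid precipitation minus degree-day melt) is the standard one, but the parameter names, default values, and thresholds here are illustrative rather than OGGM's calibrated ones:

```python
def monthly_mass_balance(temp_c, prcp_mm, mu=8.0, t_melt=0.0, t_solid=2.0):
    """Monthly point mass balance (mm water equivalent) from a temperature
    index model: solid precipitation minus degree-day melt.
    mu: melt factor (mm w.e. per degC per month); t_melt: melt threshold
    temperature; t_solid: temperature below which precipitation is snow."""
    accumulation = prcp_mm if temp_c <= t_solid else 0.0
    ablation = mu * max(temp_c - t_melt, 0.0)
    return accumulation - ablation
```

Summing this quantity over the twelve months of a balance year, at each elevation band of a flowline, yields the annual mass balance that drives the ice flow model.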


2018 ◽  
Vol 38 (1) ◽  
pp. 74-82
Author(s):  
Edgar García-Morantes ◽  
Iván Amaya-Contreras ◽  
Rodrigo Correa-Cely

This work considers the estimation of the internal volumetric heat generation and the heat capacity of a solid spherical sample heated by a homogeneous, time-varying electromagnetic field. To that end, the numerical strategy solves the corresponding inverse problem. Three functional forms (linear, sinusoidal, and exponential) of the electromagnetic field were considered. White Gaussian noise was added to the theoretical temperature profile (i.e. the solution of the direct problem) to simulate a more realistic situation. Temperature readings were simulated as if taken by four sensors. The inverse problem was solved through three different kinds of approaches: a traditional optimizer, modern techniques, and a mixture of both. In the first case, we used the traditional, deterministic Levenberg-Marquardt (LM) algorithm. In the second, we considered three stochastic algorithms: the Spiral Optimization Algorithm (SOA), Vortex Search (VS), and the Weighted Attraction Method (WAM). In the final case, we proposed hybrids between LM and the metaheuristic algorithms. Results show that LM converges to the expected solutions only if the initial conditions (IC) are within a limited range. Conversely, the metaheuristics converge for a wide range of IC but exhibit low accuracy. The hybrid approaches converge and improve on the accuracy obtained with the metaheuristics. The differences between expected and obtained values, as well as the RMS errors, are reported and compared for all three methods.
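The hybrid strategy, a global stochastic search to land inside LM's basin of convergence followed by LM refinement, can be sketched as below. A plain random search stands in for SOA/VS/WAM, the LM loop is a minimal textbook version, and all names and the test problem are illustrative, not the paper's formulation:

```python
import numpy as np

def levenberg_marquardt(residual, jac, p0, lam=1e-2, iters=50):
    """Minimal Levenberg-Marquardt loop for nonlinear least squares."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r, J = residual(p), jac(p)
        A = J.T @ J + lam * np.eye(len(p))   # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        p_new = p - step
        if (residual(p_new) ** 2).sum() < (r ** 2).sum():
            p, lam = p_new, lam * 0.5        # accept; trust the model more
        else:
            lam *= 2.0                       # reject; damp harder
    return p

def hybrid_fit(residual, jac, bounds, n_seeds=200, seed=0):
    """Coarse random search over the IC range (stand-in for the
    metaheuristic), then LM refinement from the best seed found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    seeds = rng.uniform(lo, hi, size=(n_seeds, len(lo)))
    best = min(seeds, key=lambda p: (residual(p) ** 2).sum())
    return levenberg_marquardt(residual, jac, best)
```

The division of labour matches the abstract's finding: the global stage supplies a usable starting point over a wide IC range, and LM supplies the final accuracy.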

