Bayesian Optimization for Field Scale Geological Carbon Sequestration

2021 ◽  
Author(s):  
Xueying Lu ◽  
Kirk E. Jordan ◽  
Mary F. Wheeler ◽  
Edward O. Pyzer-Knapp ◽  
Matthew Benatan

Abstract We present a framework for applying Bayesian Optimization (BO) to well management in geological carbon sequestration. The coupled compositional flow and poroelasticity simulator, IPARS, is utilized to accurately capture the underlying physical processes during CO2 sequestration. IPARS is coupled to IBM Bayesian Optimization (IBO) for parallel optimization of CO2 injection strategies during field-scale CO2 sequestration. Bayesian optimization builds a probabilistic surrogate for the objective function using a Bayesian machine learning algorithm, Gaussian process regression, and then uses an acquisition function that leverages the uncertainty in the surrogate to decide where to sample. IBO addresses three weak points of standard BO: it supports parallel (batch) execution, scales better to high-dimensional problems, and is more robust to initialization. We demonstrate these algorithmic merits with an application to the optimization of the CO2 injection schedule at the Cranfield site using field data. The performance is benchmarked against a genetic algorithm (GA) and the covariance matrix adaptation evolution strategy (CMA-ES). Results show that IBO achieves a competitive objective function value with over 60% fewer forward model evaluations. Furthermore, the Bayesian framework that BO builds upon allows uncertainty quantification and extends naturally to optimization under uncertainty.
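The BO loop the abstract describes (a Gaussian process surrogate plus an acquisition function that trades predicted value against surrogate uncertainty) can be sketched in a few lines. The following is a minimal, self-contained illustration in plain NumPy, using expected improvement on a 1-D toy objective; the grid, kernel length scale, and initial design are invented for illustration, and IBO's parallel batching and high-dimensional scaling are not represented.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf_kernel(a, b, length=0.2):
    # Squared-exponential covariance between two sets of 1-D points
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Gaussian process regression: posterior mean and variance on grid Xs
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.clip(var, 1e-12, None)

def expected_improvement(mu, var, best):
    # EI acquisition (minimization): rewards low mean and high uncertainty
    sigma = np.sqrt(var)
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.array([erf(v / sqrt(2.0)) for v in z]))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (best - mu) * cdf + sigma * pdf

def bayes_opt(f, iters=10):
    # Serial BO loop: fit surrogate, maximize acquisition, evaluate, repeat
    grid = np.linspace(0.0, 1.0, 201)
    X = np.array([0.0, 0.5, 1.0])                 # initial design points
    y = np.array([f(x) for x in X])
    for _ in range(iters):
        mu, var = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, var, y.min()))]
        X, y = np.append(X, x_next), np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```

Each iteration spends one forward-model call where the acquisition function is largest, which is the mechanism behind the evaluation savings the abstract reports.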

2018 ◽  
Vol 21 (2) ◽  
Author(s):  
Juan Cruz Barsce ◽  
Jorge Andrés Palombarini ◽  
Ernesto Carlos Martínez

With the increasing use of machine learning by industry and scientific communities in tasks such as text mining, image recognition, and self-driving cars, automatic setting of the hyper-parameters of learning algorithms is a key factor for obtaining good performance regardless of user expertise in the inner workings of the techniques and methodologies. In particular, for a reinforcement learning algorithm, the efficiency with which an agent learns a control policy in an uncertain environment depends heavily on the hyper-parameters used to balance exploration with exploitation. In this work, an autonomous learning framework is proposed that integrates Bayesian optimization with Gaussian process regression to optimize the hyper-parameters of a reinforcement learning algorithm. In addition, a bandits-based approach is presented to balance computational cost against decreasing uncertainty about the Q-values. A gridworld example is used to highlight how hyper-parameter configurations of a learning algorithm (SARSA) are iteratively improved based on two performance functions.
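A minimal sketch of the kind of hyper-parameter-sensitive SARSA agent discussed above, on a toy 1-D gridworld: `alpha`, `gamma`, and `epsilon` are exactly the quantities a Bayesian-optimization layer would tune. The environment (a five-state corridor with a goal reward) and all constants are invented for illustration and are not the paper's gridworld.

```python
import random

def run_sarsa(alpha=0.5, gamma=0.95, epsilon=0.1, episodes=200, seed=0):
    # Toy 1-D gridworld: states 0..4, goal at 4; actions 0=left, 1=right.
    # Returns the learned Q-table; alpha/gamma/epsilon are the tunable
    # hyper-parameters a BO layer would optimize.
    rng = random.Random(seed)
    n_states, goal = 5, 4
    Q = [[0.0, 0.0] for _ in range(n_states)]

    def choose(s):
        # epsilon-greedy action selection (exploration vs. exploitation)
        if rng.random() < epsilon:
            return rng.randrange(2)
        return 0 if Q[s][0] > Q[s][1] else 1

    def step(s, a):
        s2 = max(0, min(goal, s + (1 if a == 1 else -1)))
        return s2, (1.0 if s2 == goal else -0.01), s2 == goal

    for _ in range(episodes):
        s, a, done = 0, choose(0), False
        while not done:
            s2, r, done = step(s, a)
            a2 = choose(s2)
            target = r + (0.0 if done else gamma * Q[s2][a2])
            Q[s][a] += alpha * (target - Q[s][a])   # on-policy SARSA update
            s, a = s2, a2
    return Q
```

An outer BO loop would call `run_sarsa` with candidate hyper-parameter vectors and score each run with a performance function such as the average episode return.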


2018 ◽  
Author(s):  
Montserrat Recasens ◽  
Kitson Lim ◽  
M. Mercedes Maroto-Valer ◽  
Rachael Ellen ◽  
Susana Garcia

Foods ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 763
Author(s):  
Ran Yang ◽  
Zhenbo Wang ◽  
Jiajia Chen

Mechanistic modeling has been a useful tool for helping food scientists understand complicated microwave-food interactions, but it cannot be used directly by food developers for food design due to its resource-intensive nature. This study developed and validated an integrated approach coupling mechanistic modeling and machine learning to achieve efficient food product design (thickness optimization) with better heating uniformity. The mechanistic model, which incorporates electromagnetics and heat transfer, was previously developed and extensively validated and was used directly in this study. A Bayesian optimization machine-learning algorithm was developed and integrated with the mechanistic model. The integrated approach was validated by comparing its optimization performance with a parametric sweep approach based solely on mechanistic modeling. The results showed that the integrated approach had the capability and robustness to optimize the thickness of differently shaped products using different initial training datasets with higher efficiency (45.9% to 62.1% improvement) than the parametric sweep approach. Three rectangular trays with one optimized thickness (1.56 cm) and two non-optimized thicknesses (1.20 and 2.00 cm) were 3-D printed and used in microwave heating experiments, which confirmed the feasibility of the integrated approach for thickness optimization. The integrated approach can be further developed and extended as a platform to efficiently design complicated microwavable foods with multi-parameter optimization.
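The efficiency comparison above hinges on counting forward mechanistic-model runs. The sketch below contrasts a parametric sweep with a cheaper sequential search on a hypothetical stand-in for one forward model run; the study uses Bayesian optimization, and golden-section search is substituted here purely to show the call-accounting idea. The 1.56 cm optimum is taken from the abstract, while the quadratic "uniformity index" and the thickness bounds are invented.

```python
def uniformity_index(t, calls):
    # Hypothetical stand-in for one forward mechanistic-model run:
    # lower value = more uniform heating; minimum placed at t = 1.56 cm
    calls.append(t)
    return (t - 1.56) ** 2

def parametric_sweep(lo=1.0, hi=2.2, step=0.05):
    # Baseline: evaluate every thickness on a fixed grid
    calls = []
    ts = [lo + i * step for i in range(int(round((hi - lo) / step)) + 1)]
    best = min(ts, key=lambda t: uniformity_index(t, calls))
    return best, len(calls)

def golden_section(lo=1.0, hi=2.2, tol=0.05):
    # Sequential search: reuses one evaluation per bracket shrink
    calls = []
    g = (5 ** 0.5 - 1) / 2
    c, d = hi - g * (hi - lo), lo + g * (hi - lo)
    fc, fd = uniformity_index(c, calls), uniformity_index(d, calls)
    while hi - lo > tol:
        if fc < fd:
            hi, d, fd = d, c, fc
            c = hi - g * (hi - lo)
            fc = uniformity_index(c, calls)
        else:
            lo, c, fc = c, d, fd
            d = lo + g * (hi - lo)
            fd = uniformity_index(d, calls)
    return (lo + hi) / 2, len(calls)
```

The sequential search reaches the same neighborhood of the optimum with roughly a third of the sweep's model runs, which is the kind of saving the study attributes to its BO layer.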


2020 ◽  
Author(s):  
Alberto Bemporad ◽  
Dario Piga

Abstract This paper proposes a method for solving optimization problems in which the decision-maker cannot evaluate the objective function, but can only express a preference such as "this is better than that" between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing a new comparison for the decision-maker to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision-maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate based on two possible criteria: minimize a combination of the surrogate and an inverse distance weighting function, to balance exploitation of the surrogate against exploration of the decision space; or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive in that, within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. The paper describes applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to optimal tuning of a cost-sensitive neural network classifier for object recognition from images. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
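The core idea above, fitting an RBF surrogate so that preferred samples receive lower surrogate values, can be sketched roughly as follows. The paper fits the surrogate by linear or quadratic programming; the simple margin-perceptron update below is only a stand-in for that fitting step, and the kernel width, margin, and learning rate are invented.

```python
import numpy as np

def rbf(r, eps=1.0):
    # Gaussian radial basis function of distance r
    return np.exp(-(eps * r) ** 2)

def fit_preference_surrogate(X, prefs, margin=0.1, lr=0.1, iters=500):
    # X: (n, d) sampled decision vectors; prefs: pairs (i, j) meaning the
    # decision-maker preferred x_i to x_j.  Fit weights w of the expansion
    # f(x) = sum_k w_k * rbf(||x - x_k||) so that f(x_i) + margin <= f(x_j)
    # (lower surrogate value = better).  The paper solves an LP/QP here;
    # this margin-perceptron loop is a simple iterative substitute.
    Phi = rbf(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2))
    w = np.zeros(len(X))
    for _ in range(iters):
        for i, j in prefs:
            if Phi[i] @ w + margin > Phi[j] @ w:    # preference violated
                w += lr * (Phi[j] - Phi[i])         # push f(x_i) below f(x_j)
    return lambda x: rbf(np.linalg.norm(X - x, axis=1)) @ w
```

Once fit, the surrogate can be minimized (plus an exploration term, as in the paper's first criterion) to propose the next decision vector for comparison.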


2013 ◽  
Vol 2013 ◽  
pp. 1-11
Author(s):  
Zhicong Zhang ◽  
Kaishun Hu ◽  
Shuai Li ◽  
Huiyu Huang ◽  
Shaoyong Zhao

Chip attach is the bottleneck operation in semiconductor assembly. Chip attach scheduling is in nature an unrelated parallel machine scheduling problem with practical issues such as machine-job qualification, sequence-dependent setup times, initial machine status, and engineering time. The major scheduling objective is to minimize the total weighted unsatisfied Target Production Volume over the schedule horizon. To apply the Q-learning algorithm, the scheduling problem is converted into a reinforcement learning problem by constructing an elaborate system state representation, a set of actions, and a reward function. We select five heuristics as actions and prove the equivalence of the reward function and the scheduling objective function. We also conduct experiments with industrial datasets to compare the Q-learning algorithm against the five action heuristics and the Largest Weight First (LWF) heuristic used in industry. Experimental results show that Q-learning is remarkably superior to the six heuristics. Compared with LWF, Q-learning reduces three performance measures (objective function value, unsatisfied Target Production Volume index, and unsatisfied job type index) by considerable margins of 80.92%, 52.20%, and 31.81%, respectively.
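The construction described above, dispatch heuristics as actions and a reward aligned with the scheduling objective, can be sketched on a toy single-machine instance. The three-job instance, reward scale, and hyper-parameters below are invented for illustration; the return of an episode equals minus the total weighted completion time, mirroring (in miniature) the reward/objective equivalence the paper proves for its Target Production Volume objective.

```python
import random
from collections import defaultdict

JOBS = {0: (3, 2), 1: (1, 2), 2: (2, 2)}   # job id -> (weight, processing time)
ACTIONS = ("LWF", "SPT")                   # dispatch heuristics as RL actions

def heuristic_pick(remaining, name):
    if name == "LWF":                                   # largest weight first
        return max(remaining, key=lambda j: JOBS[j][0])
    return min(remaining, key=lambda j: (JOBS[j][1], j))  # shortest time, ties by id

def episode(Q, epsilon, alpha, rng):
    # One pass over all jobs; reward = negative weighted completion time,
    # so the episode return equals minus the scheduling objective.
    remaining, t, ret = frozenset(JOBS), 0, 0.0
    while remaining:
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda h: Q[(remaining, h)])
        j = heuristic_pick(remaining, a)
        t += JOBS[j][1]
        r = -JOBS[j][0] * t
        nxt = remaining - {j}
        best_next = max(Q[(nxt, h)] for h in ACTIONS) if nxt else 0.0
        Q[(remaining, a)] += alpha * (r + best_next - Q[(remaining, a)])  # Q-learning
        remaining, ret = nxt, ret + r
    return ret

def train(episodes=300, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        episode(Q, epsilon=0.3, alpha=0.2, rng=rng)
    return Q
```

After training, the greedy policy learns to mix heuristics per state (here it selects the weight-first rule in the states where it matters), which is how Q-learning can outperform any single fixed heuristic.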


Author(s):  
Zheming Zhang ◽  
Ramesh Agarwal

With recent concerns about CO2 emissions from coal-fired electricity generation plants, there has been major emphasis worldwide on the development of safe and economical Carbon Dioxide Capture and Sequestration (CCS) technology. Saline reservoirs are attractive geological sites for CO2 sequestration because of their huge storage capacity. Over the last decade, numerical simulation codes have been developed in the U.S., Europe, and Japan to determine a priori the CO2 storage capacity of a saline aquifer and to provide risk assessment with reasonable confidence before the actual deployment of CO2 sequestration, with its enormous investment, can proceed. In the U.S., the TOUGH2 numerical simulator has been widely used for this purpose. However, at present it does not have the capability to determine optimal parameters, such as injection rate, injection pressure, and injection depth for vertical and horizontal wells, for optimizing the CO2 storage capacity and for minimizing the leakage potential by confining the plume migration. This paper describes the development of a "Genetic Algorithm (GA)" based optimizer for TOUGH2 that can be used by industry with good confidence to optimize the CO2 storage capacity of a saline aquifer of interest. The new code, combining TOUGH2 and the GA optimizer, is designated "GATOUGH2". It has been validated by conducting simulations of three benchmark problems widely used by CCS researchers worldwide: (a) study of CO2 plume evolution and leakage through an abandoned well, (b) study of enhanced CH4 recovery in combination with CO2 storage in depleted gas reservoirs, and (c) study of CO2 injection into a heterogeneous geological formation. Our results for these simulations are in excellent agreement with those of other researchers obtained with different codes.
The validated code has been employed to optimize the proposed water-alternating-gas (WAG) injection scheme for (a) a vertical CO2 injection well and (b) a horizontal CO2 injection well, with the goal of optimizing the CO2 sequestration capacity of an aquifer. These optimized calculations are compared with brute-force near-optimal results obtained by performing a large number of simulations. The comparisons demonstrate the significant efficiency and accuracy of GATOUGH2 as an optimizer for TOUGH2. This capability holds great promise for studying a host of other problems in CO2 sequestration, such as how to optimally accelerate capillary trapping, accelerate the dissolution of CO2 in water or brine, and immobilize the CO2 plume.
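A GA of the kind wrapped around TOUGH2 can be sketched generically: a population of candidate injection parameters is evolved by selection, crossover, and mutation against a fitness that, in GATOUGH2, would come from a forward simulation. Here a smooth analytic function stands in for the simulator call, and the parameter names, bounds, and GA settings are all invented for illustration.

```python
import random

def fitness(rate, depth):
    # Hypothetical smooth stand-in for a simulated CO2 storage capacity;
    # in GATOUGH2 this would be one TOUGH2 forward run (peak at 0.6, 0.3)
    return -((rate - 0.6) ** 2 + (depth - 0.3) ** 2)

def ga_optimize(pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    # Candidates are (rate, depth) pairs normalized to [0, 1]
    pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda p: fitness(*p), reverse=True)
        elite = scored[: pop_size // 4]          # truncation selection + elitism
        children = list(elite)
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)          # two elite parents
            w = rng.random()                     # blend crossover
            child = tuple(w * pa + (1 - w) * pb for pa, pb in zip(a, b))
            # Gaussian mutation, clamped to the parameter bounds
            child = tuple(min(1.0, max(0.0, c + rng.gauss(0, 0.05)))
                          for c in child)
            children.append(child)
        pop = children
    return max(pop, key=lambda p: fitness(*p))
```

Because the elite survive unmutated, the best fitness is non-decreasing across generations, which mirrors how the GA optimizer refines injection strategies between successive batches of simulator runs.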


2021 ◽  
Author(s):  
Changqing Yao ◽  
Hongquan Chen ◽  
Akhil Datta-Gupta ◽  
Sanjay Mawalkar ◽  
Srikanta Mishra ◽  
...  

Abstract Geologic CO2 sequestration and CO2 enhanced oil recovery (EOR) have received significant attention from the scientific community as a response to climate change from greenhouse gases. Safe and efficient management of a CO2 injection site requires spatio-temporal tracking of the CO2 plume in the reservoir during geologic sequestration. The goal of this paper is to develop robust modeling and monitoring technologies for imaging and visualization of the CO2 plume using routine pressure/temperature measurements. Streamline-based technology has proven effective and efficient for reconciling geologic models with various types of reservoir dynamic response. In this paper, we first extend the streamline-based data integration approach to incorporate distributed temperature sensing (DTS) data using the concept of thermal tracer travel time. Then, a hierarchical workflow composed of evolutionary and streamline methods is employed to jointly history match the DTS and pressure data. Finally, CO2 saturation and streamline maps are used to visualize the CO2 plume movement during the sequestration process. The power and utility of our approach are demonstrated using both synthetic and field applications. We first validate the streamline-based DTS data inversion using a synthetic example. Next, the hierarchical workflow is applied to a carbon sequestration project in a carbonate reef reservoir within the Northern Niagaran Pinnacle Reef Trend in Michigan, USA. The monitoring data set consists of DTS data acquired at the injection well and a monitoring well, flowing bottom-hole pressure data at the injection well, and time-lapse pressure measurements at several locations along the monitoring well. The history matching results indicate that the CO2 movement is mostly restricted to the intended injection zones, which is consistent with an independent warmback analysis of the temperature data.
The novelty of this work is the streamline-based history matching method for DTS data and its field application to the Department of Energy regional carbon sequestration project in Michigan.
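The thermal tracer travel time concept underpinning the DTS integration can be illustrated arithmetically: along a streamline, travel time accumulates as segment length over local velocity, and history matching then minimizes a misfit between observed and simulated times. The sketch below shows only this bookkeeping; the segment lengths, velocities, and misfit form are illustrative, not values from the paper.

```python
def travel_time(seg_lengths, velocities):
    # Tracer travel time along one streamline: sum of per-segment transit
    # times (segment length divided by local velocity)
    return sum(L / v for L, v in zip(seg_lengths, velocities))

def travel_time_misfit(observed, simulated):
    # Least-squares data misfit of the kind a history-matching loop
    # would drive toward zero by updating the geologic model
    return sum((o - s) ** 2 for o, s in zip(observed, simulated))

# Illustrative streamline: three segments (m) with velocities (m/day)
t_sim = travel_time([10.0, 20.0, 30.0], [2.0, 4.0, 5.0])   # 5 + 5 + 6 = 16 days
```

In the inversion, model updates change the velocity field, hence the simulated travel times, and the misfit against the DTS-derived arrival times measures the remaining disagreement.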

