Using Microseismicity to Estimate Formation Permeability for Geological Storage of CO2

2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
D. A. Angus ◽  
J. P. Verdon

We investigate two approaches for estimating formation permeability from microseismic data. The approaches differ in the mechanism assumed to trigger the seismicity: a pore-pressure triggering mechanism and the so-called seepage-force (or effective stress) triggering mechanism. Based on microseismic data from a hydraulic fracture experiment using water and supercritical CO2 injection, we estimate permeability with both approaches. The microseismic data come from two hydraulic stimulation treatments performed on two formation intervals with similar geological, geomechanical, and in situ stress conditions, but with different injection fluids. Both approaches (pore-pressure triggering and seepage-force triggering) yield permeability estimates within the same order of magnitude. However, the seepage-force mechanism (i.e., effective stress perturbation) provides more consistent permeability estimates across the two injection fluids. The results show that permeability estimation from microseismic monitoring has strong potential to constrain formation permeability for large-scale CO2 injection.
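Under the pore-pressure triggering model, a standard way to turn an event cloud into a permeability estimate (a Shapiro-style diffusion triggering front, which may differ from the authors' exact formulation) is the relation r(t) = sqrt(4*pi*D*t). A minimal sketch, with all rock and fluid parameter values invented for illustration:

```python
import math

# Hedged sketch, not the authors' code: estimate hydraulic diffusivity D from
# the pore-pressure triggering front r(t) = sqrt(4*pi*D*t) fitted to the
# envelope of microseismic event distances, then convert D to permeability
# via k = D * phi * c_t * mu. All parameter values below are assumed.

def diffusivity_from_front(r_m, t_s):
    """D (m^2/s) from one (distance, time) point on the triggering front."""
    return r_m ** 2 / (4.0 * math.pi * t_s)

def permeability_from_diffusivity(D, phi=0.1, c_t=1e-9, mu=1e-3):
    """k = D * phi * c_t * mu; porosity, compressibility, viscosity assumed."""
    return D * phi * c_t * mu

D = diffusivity_from_front(r_m=200.0, t_s=3600.0)  # event 200 m away after 1 h
k = permeability_from_diffusivity(D)
print(f"D = {D:.3f} m^2/s, k = {k:.2e} m^2")
```

A real analysis would fit the front to the full distance-time cloud rather than a single event, but the order-of-magnitude character of the estimate is the same.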

Geophysics ◽  
1986 ◽  
Vol 51 (4) ◽  
pp. 948-956 ◽  
Author(s):  
Douglas H. Green ◽  
Herbert F. Wang

The pore pressure response of saturated porous rock subjected to undrained compression at low effective stresses is investigated theoretically and experimentally. This behavior is quantified by the undrained pore pressure buildup coefficient B = (∂p_f/∂p_c)_{m_f}, where p_f is fluid pressure, p_c is confining pressure, and m_f is the mass of fluid per unit bulk volume. The measured values of B for three sandstones and a dolomite are near 1.0 at zero effective stress and decrease with increasing effective stress. In one sandstone, B is 0.62 at 13 MPa effective stress. These results agree with the theories of Gassmann (1951) and Bishop (1966), which assume a locally homogeneous solid framework. The decrease of B with increasing effective stress is probably related to crack closure and to high-compressibility materials within the rock framework. The more general theories of Biot (1955) and Brown and Korringa (1975) introduce an additional parameter, the unjacketed pore compressibility, which can be determined from induced pore pressure results. Values of B close to 1 imply that, under appropriate conditions within the crust, zones of low effective pressure characterized by low seismic wave velocity and high wave attenuation could exist. Also, in confined aquifer-reservoir systems at very low effective stress states, the calculated specific storage coefficient is an order of magnitude larger than if less overpressured conditions prevailed.
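In practice the coefficient B = (∂p_f/∂p_c)_{m_f} is estimated as the slope of induced pore pressure versus confining pressure under undrained loading. A minimal sketch, with invented (not the paper's) readings:

```python
# Hedged sketch: Skempton-type undrained coefficient B estimated as the
# least-squares slope of measured pore pressure vs. confining pressure.
# The data points below are invented for illustration.

def skempton_B(confining_MPa, pore_MPa):
    """Least-squares slope of induced pore pressure vs. confining pressure."""
    n = len(confining_MPa)
    mx = sum(confining_MPa) / n
    my = sum(pore_MPa) / n
    num = sum((x - mx) * (y - my) for x, y in zip(confining_MPa, pore_MPa))
    den = sum((x - mx) ** 2 for x in confining_MPa)
    return num / den

# Synthetic undrained loading: pore pressure rises ~0.95 MPa per MPa confining.
pc = [10.0, 12.0, 14.0, 16.0]
pf = [5.0, 6.9, 8.8, 10.7]
print(f"B = {skempton_B(pc, pf):.2f}")  # → B = 0.95
```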


2020 ◽  
Vol 224 (3) ◽  
pp. 1523-1539
Author(s):  
Lisa Winhausen ◽  
Alexandra Amann-Hildenbrand ◽  
Reinhard Fink ◽  
Mohammadreza Jalali ◽  
Kavan Khaledi ◽  
...  

SUMMARY A comprehensive characterization of clay shale behavior requires quantifying both geomechanical and hydromechanical characteristics. This paper presents a comparative laboratory study of different methods to determine the water permeability of saturated Opalinus Clay: (i) pore pressure oscillation, (ii) pressure pulse decay and (iii) pore pressure equilibration. Based on a comprehensive data set obtained on one sample under well-defined temperature and isostatic effective stress conditions, we discuss the sensitivity of permeability and storativity to the experimental boundary conditions (oscillation frequency, pore pressure amplitudes and effective stress). The results show that permeability coefficients obtained by all three methods differ by less than 15 per cent at a constant effective stress of 24 MPa (k_mean = 6.6 × 10^-21 to 7.5 × 10^-21 m^2). The pore pressure transmission technique tends towards lower permeability coefficients, whereas the pulse decay and pressure oscillation techniques result in slightly higher values. The discrepancies are considered minor, and experimental times of the techniques are similar, in the range of 1–2 d for this sample. We found that permeability coefficients determined by the pore pressure oscillation technique increase with higher frequencies, that is, oscillation periods shorter than 2 hr. No dependence is found for the applied pressure amplitudes (5, 10 and 25 per cent of the mean pore pressure). In terms of experimental handling and data density, the pore pressure oscillation technique appears to be the most efficient. Data can be recorded continuously over a user-defined period of time and yield information on both permeability and storativity. Furthermore, effective stress conditions can be held constant during the test, and pressure equilibration prior to testing is not necessary.
Electron microscopic imaging of ion-beam-polished surfaces before and after testing suggests that testing at effective stresses higher than in situ did not lead to significant pore collapse or other irreversible damage in the samples. The study also shows that unloading during the experiment did not result in a permeability increase, which is attributed to the persistent closure of microcracks at effective stresses between 24 and 6 MPa.
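For the pulse-decay method (ii), a common reduction is the single-exponential Brace-type solution, in which permeability is recovered from the fitted decay constant of the differential pressure. A hedged sketch, with the simplified formula and all sample and fluid parameters assumed for illustration (not the paper's analysis):

```python
import math

# Hedged sketch of pulse-decay analysis (Brace-type single-exponential limit,
# storage assumed only in the reservoirs; all parameters invented):
#   dp(t) = dp0 * exp(-a*t),  a = k*A/(mu*c_f*L) * (1/Vu + 1/Vd)
# so k follows from the fitted decay constant a.

def permeability_from_decay(a, A, L, mu, c_f, Vu, Vd):
    """k (m^2) from decay constant a (1/s), sample area A (m^2), length L (m),
    fluid viscosity mu (Pa s), fluid compressibility c_f (1/Pa), and
    upstream/downstream reservoir volumes Vu, Vd (m^3)."""
    return a * mu * c_f * L / (A * (1.0 / Vu + 1.0 / Vd))

a = math.log(2.0) / 3600.0   # pulse half-life of 1 h -> decay constant
k = permeability_from_decay(a, A=5e-4, L=0.02, mu=1e-3, c_f=4.6e-10,
                            Vu=1e-5, Vd=1e-5)
print(f"k = {k:.2e} m^2")
```

With these assumed values the result lands near 1e-20 m^2, i.e. the same order as the Opalinus Clay coefficients quoted above.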


2021 ◽  
Author(s):  
Parsoa Khorsand ◽  
Fereydoun Hormozdiari

Abstract Large-scale catalogs of common genetic variants (including indels and structural variants) are being created using data from second- and third-generation whole-genome sequencing technologies. However, genotyping these variants in newly sequenced samples is a nontrivial task that requires extensive computational resources. Furthermore, current approaches are mostly limited to specific types of variants and are generally prone to errors and ambiguities when genotyping complex events. We propose an ultra-efficient approach for genotyping any type of structural variation that is not limited by the shortcomings and complexities of current mapping-based approaches. Our method, Nebula, uses changes in k-mer counts to predict the genotypes of structural variants. We show that Nebula is not only an order of magnitude faster than mapping-based approaches for genotyping structural variants but also comparable in accuracy to state-of-the-art approaches. Furthermore, Nebula is a generic framework not limited to any specific type of event. Nebula is publicly available at https://github.com/Parsoa/Nebula.
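The core k-mer-count idea can be illustrated in a few lines (a toy sketch of the general principle, not Nebula's implementation): a deletion removes k-mers unique to the reference allele, and the count of those marker k-mers relative to sequencing depth separates the genotypes:

```python
from collections import Counter

# Hedged sketch of the k-mer-count idea behind mapping-free genotyping (not
# Nebula's actual code): a deletion removes k-mers unique to the reference
# allele, so observed counts of such marker k-mers separate 0/0, 0/1, 1/1.

def kmers(seq, k=5):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

ref = "ACGTACGTTTAGGCCAACGT"   # toy reference haplotype
alt = "ACGTACGTAACGT"          # same haplotype with "TTAGGCC" deleted

# k-mers that tag the reference allele only.
marker = set(kmers(ref)) - set(kmers(alt))

def genotype(read_kmers, markers, depth):
    """Call genotype from the mean marker-k-mer count relative to depth."""
    mean = sum(read_kmers.get(m, 0) for m in markers) / len(markers)
    ratio = mean / depth
    if ratio > 0.75:
        return "0/0"               # both haplotypes carry the ref allele
    return "0/1" if ratio > 0.25 else "1/1"

# Simulated homozygous-deletion sample: reads cover only the alt haplotype.
sample = kmers(alt * 30)           # ~30x coverage of the alt allele
print(genotype(sample, marker, depth=30))  # → 1/1
```

In the real setting the k-mers come from raw reads, so no alignment is needed, which is where the order-of-magnitude speed-up comes from.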


2021 ◽  
Vol 15 (3) ◽  
pp. 1-31
Author(s):  
Haida Zhang ◽  
Zengfeng Huang ◽  
Xuemin Lin ◽  
Zhe Lin ◽  
Wenjie Zhang ◽  
...  

Driven by many real applications, we study the problem of seeded graph matching. Given two graphs G1 and G2 and a small set S of pre-matched node pairs (u, v) with u in G1 and v in G2, the problem is to identify a matching between G1 and G2 growing from S, such that each pair in the matching corresponds to the same underlying entity. Recent studies on efficient and effective seeded graph matching have drawn a great deal of attention, and many popular methods are largely based on exploring the similarity between local structures to identify matching pairs. While these recent techniques work provably well on random graphs, their accuracy is low over many real networks. In this work, we propose to utilize higher-order neighboring information to improve the matching accuracy and efficiency. As a result, a new framework of seeded graph matching is proposed, which employs Personalized PageRank (PPR) to quantify the matching score of each node pair. To further boost the matching accuracy, we propose a novel postponing strategy, which postpones the selection of pairs that have competitors with similar matching scores. We show that the postponing strategy indeed significantly improves the matching accuracy. To improve the scalability of matching large graphs, we also propose efficient approximation techniques based on algorithms for computing PPR heavy hitters. Our comprehensive experimental studies on large-scale real datasets demonstrate that, compared with state-of-the-art approaches, our framework not only increases both precision and recall by a significant margin but also achieves speed-ups of more than an order of magnitude.
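The PPR-based matching score can be sketched as follows (a toy illustration of the general idea, not the paper's algorithm or its heavy-hitter approximation): run personalized PageRank from the seeds in each graph and score a candidate pair by how close the two nodes' PPR values are:

```python
# Hedged sketch of PPR-based matching scores: seeds personalize the random
# walk in each graph; a candidate pair (u, v) scores high when u and v
# receive similar PPR mass from corresponding seeds.

def ppr(adj, seeds, alpha=0.15, iters=50):
    """Power-iteration personalized PageRank on an adjacency-list graph."""
    n = len(adj)
    p = [1.0 / len(seeds) if v in seeds else 0.0 for v in range(n)]
    r = p[:]
    for _ in range(iters):
        nxt = [alpha * p[v] for v in range(n)]    # teleport to the seeds
        for v in range(n):
            if adj[v]:
                share = (1 - alpha) * r[v] / len(adj[v])
                for w in adj[v]:
                    nxt[w] += share
        r = nxt
    return r

# Two isomorphic toy graphs; node i in g1 corresponds to node i in g2,
# and the pair (0, 0) is the pre-matched seed.
g1 = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
g2 = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
r1, r2 = ppr(g1, {0}), ppr(g2, {0})

def score(u, v):
    """Matching score: closeness of the two PPR values (higher is better)."""
    return -abs(r1[u] - r2[v])

best = max(range(4), key=lambda v: score(3, v))   # best match for g1's node 3
print(best)
```

The postponing strategy in the paper would defer exactly the pairs whose scores have close runners-up; here the graphs are identical, so node 3's match is unambiguous.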


Author(s):  
F. Ma ◽  
J. H. Hwang

Abstract In analyzing a nonclassically damped linear system, one common procedure is to neglect the damping terms that are nonclassical and retain the classical ones. This approach is termed the method of approximate decoupling. For large-scale systems, the computational effort of approximate decoupling is at least an order of magnitude smaller than that of the method of complex modes. In this paper, the error introduced by approximate decoupling is evaluated. A tight error bound, which can be computed with relative ease, is given for this method of approximate solution. The role that modal coupling plays in the control of error is clarified. It is shown that if the normalized damping matrix is strongly diagonally dominant, adequate frequency separation is not necessary to ensure small errors.
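The operation itself is simple to illustrate (a toy 2-DOF sketch with invented numbers, not the paper's analysis): in modal coordinates, approximate decoupling discards the off-diagonal (nonclassical) entries of the damping matrix, and the diagonal-dominance condition mentioned above is easy to check:

```python
# Hedged sketch of approximate decoupling on a toy 2-DOF modal damping
# matrix (values invented). Dropping the off-diagonal terms lets each modal
# equation be integrated independently, avoiding complex-mode analysis.

C_modal = [[0.40, 0.05],
           [0.05, 0.30]]

# Approximate decoupling: keep only the classical (diagonal) damping terms.
C_decoupled = [[C_modal[i][j] if i == j else 0.0 for j in range(2)]
               for i in range(2)]

# Strong diagonal dominance of the normalized damping matrix, which the
# paper links to small decoupling error: |C_ii| > sum_j!=i |C_ij| per row.
dominant = all(abs(C_modal[i][i]) > sum(abs(C_modal[i][j])
               for j in range(2) if j != i) for i in range(2))
print(C_decoupled, dominant)  # → [[0.4, 0.0], [0.0, 0.3]] True
```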


2019 ◽  
Author(s):  
Harry Minas

Abstract Objective: There has been increased attention in recent years to mental health, quality of life, stress and academic performance among university students, and the possible influence of learning styles. Brief, reliable questionnaires are useful in large-scale multivariate research designs, such as the largely survey-based research on well-being and academic performance of university students. The objective of this study was to examine the psychometric properties of a briefer version of the 39-item Adelaide Diagnostic Learning Inventory. Results: In two survey samples, medical and physiotherapy students, a 21-item version, the Adelaide Diagnostic Learning Inventory - Brief (ADLIB), was shown to have the same factor structure as the parent instrument, and the factor structure of the brief instrument was found to generalise across students of medicine and physiotherapy. Sub-scale reliability estimates were of the same order of magnitude as those of the parent instrument. Sub-scale inter-correlations, inter-factor congruence coefficients, and correlations between ADLIB sub-scale scores and several external measures provide support for the construct and criterion validity of the instrument.
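Sub-scale reliability in studies like this is typically estimated with Cronbach's alpha; a minimal sketch with invented item scores (not the study's data):

```python
# Hedged sketch: internal-consistency reliability of a sub-scale via
# Cronbach's alpha. Item scores below are invented for illustration.

def cronbach_alpha(items):
    """items: one inner list of respondent scores per questionnaire item."""
    k = len(items)
    n = len(items[0])
    def var(xs):                       # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# A 3-item sub-scale answered by five respondents on a 5-point scale.
scores = [[4, 3, 5, 2, 4],
          [4, 2, 5, 3, 4],
          [5, 3, 4, 2, 5]]
print(round(cronbach_alpha(scores), 2))  # → 0.89
```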


Author(s):  
Eric Timmons ◽  
Brian C. Williams

State estimation methods based on hybrid discrete and continuous state models have emerged as a way of precisely computing belief states for real-world systems; however, they have difficulty scaling to systems with more than a handful of components. Classical consistency-based diagnosis methods scale to this level by combining best-first enumeration and conflict-directed search. While best-first methods have been developed for hybrid estimation, conflict-directed methods have thus far been elusive: conflicts summarize constraint violations, but probabilistic hybrid estimation is relatively unconstrained. In this paper we present an approach (A*BC) that unifies best-first enumeration and conflict-directed search in relatively unconstrained problems through the concept of "bounding" conflicts, an extension of conflicts that represents tighter bounds on the cost of regions of the search space. Experiments show that an A*BC-powered state estimator produces estimates up to an order of magnitude faster than the current state of the art, particularly on large systems.
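The best-first enumeration component can be sketched generically (a toy illustration, not A*BC itself, which additionally prunes with bounding conflicts): expand partial mode assignments in order of accumulated cost, so the first complete assignment popped is a minimum-cost hypothesis:

```python
import heapq

# Hedged sketch of best-first enumeration over discrete mode assignments,
# the generic backbone of A*-style hybrid estimators (not the A*BC
# algorithm). Per-component mode costs (-log prior) are invented.

costs = [{"ok": 0.1, "fail": 2.0},
         {"ok": 0.2, "fail": 1.5}]

def best_first():
    heap = [(0.0, [])]                  # (cost so far, partial assignment)
    while heap:
        cost, assign = heapq.heappop(heap)
        if len(assign) == len(costs):   # complete: first pop is optimal
            return assign, cost
        # Extend the partial assignment with each mode of the next component.
        for mode, c in costs[len(assign)].items():
            heapq.heappush(heap, (cost + c, assign + [mode]))

assign, cost = best_first()
print(assign, round(cost, 3))  # → ['ok', 'ok'] 0.3
```

A bounding conflict would let the search discard whole regions of this heap whose cost lower bound already exceeds the best complete hypothesis, which is where the reported speed-up comes from.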

