High-resolution structures from low-resolution data

2011 ◽  
Vol 8 (9) ◽  
pp. 709-709


2016 ◽  
Vol 4 (3) ◽  
pp. T387-T394 ◽  
Author(s):  
Ankur Roy ◽  
Atilla Aydin ◽  
Tapan Mukerji

It is common practice to analyze fracture spacing data collected from scanlines and wells at various resolutions for the purposes of aquifer and reservoir characterization. However, the influence of resolution on such analyses is not well studied. Lacunarity is a parameter used for multiscale analysis of spatial data. In quantitative terms, at any given scale, it is a function of the mean and variance of the distribution of masses captured by a window of that scale (size) gliding across the pattern of interest. We describe the application of lacunarity for delineating differences between the scale-dependent clustering attributes of data collected at different resolutions along a scanline. Specifically, we considered data collected at different resolutions from two outcrop exposures, a pavement and a cliff section, of the Cretaceous turbiditic sandstones of the Chatsworth Formation widely exposed in southern California. For each scanline, we analyzed data from low-resolution aerial or ground photographs and from high-resolution ground measurements for scale-dependent clustering attributes. The high-resolution data show larger values of scale-dependent lacunarity than their respective low-resolution counterparts. We further performed a bootstrap analysis for each data set to test the significance of these clustering differences: we generated 300 bootstrap realizations of each data set and ran the lacunarity analysis on them. The lacunarity of the higher-resolution data lay well above the 90th-percentile values of the realizations, indicating that the higher-resolution data differ significantly from random and that the fractures are clustered. We therefore postulate that lower-resolution data capture fracture zones that have relatively uniform spacing, whereas higher-resolution data capture the thin and short splay joints and sheared joints that contribute to fracture clustering. Such findings have important implications for understanding the organization of fractures in fracture corridors, which in turn is critical for modeling and upscaling exercises.
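As an illustration of the gliding-box measure described above, the following is a minimal Python sketch of how scale-dependent lacunarity can be computed for a 1D binary scanline. The function name, the synthetic fracture patterns, and the chosen window sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gliding_box_lacunarity(scanline, box_size):
    """Gliding-box lacunarity of a 1D binary fracture indicator at one scale.

    scanline : 1D array of 0/1 values marking fracture positions along the line.
    box_size : window length r (in samples) that glides one sample at a time.
    Returns L(r) = <M^2> / <M>^2 = 1 + var(M)/mean(M)^2, where M is the mass
    (number of fractures) captured by each window position.
    """
    scanline = np.asarray(scanline, dtype=float)
    masses = np.convolve(scanline, np.ones(box_size), mode="valid")  # mass per window
    mean = masses.mean()
    if mean == 0:
        return np.nan  # no fractures captured at this scale
    return np.mean(masses ** 2) / mean ** 2

# Illustrative comparison: evenly spaced vs. clustered fracture patterns
rng = np.random.default_rng(0)
uniform = np.zeros(1000)
uniform[::50] = 1                                        # 20 evenly spaced fractures
clustered = np.zeros(1000)
clustered[rng.choice(200, size=20, replace=False)] = 1   # 20 fractures bunched together

for r in (10, 50, 100):
    print(r, gliding_box_lacunarity(uniform, r), gliding_box_lacunarity(clustered, r))
```

At most scales the clustered pattern yields the larger lacunarity, which is the behavior the abstract attributes to the high-resolution scanlines.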


2019 ◽  
Vol 36 (5) ◽  
pp. 745-760 ◽  
Author(s):  
Lia Siegelman ◽  
Fabien Roquet ◽  
Vigan Mensah ◽  
Pascal Rivière ◽  
Etienne Pauthenet ◽  
...  

Most available CTD Satellite Relay Data Logger (CTD-SRDL) profiles are heavily compressed before satellite transmission. High-resolution profiles recorded at the sampling frequency of 0.5 Hz are, however, available upon physical retrieval of the logger. Between 2014 and 2018, several loggers deployed on elephant seals in the Southern Ocean were set in continuous recording mode, capturing both the ascent and the descent for over 60 profiles per day during several months, opening new horizons for the physical oceanography community. Taking advantage of a new dataset made of seven such loggers, a postprocessing procedure is proposed and validated to improve the quality of all CTD-SRDL data, that is, both high-resolution profiles and compressed low-resolution ones. First, temperature and conductivity are corrected for a thermal mass effect. Then salinity spiking and density inversions are removed by adjusting salinity while leaving temperature unchanged. This method, applied here to more than 50 000 profiles, yields significant and systematic improvements in both temperature and salinity, particularly in regions of rapid temperature variation. The continuous high-resolution dataset is then used to provide updated accuracy estimates for CTD-SRDL data. For high-resolution data, accuracies are estimated to be ±0.02°C for temperature and ±0.03 g kg⁻¹ for salinity. For low-resolution data, the transmitted data points have similar accuracies; however, reconstructed temperature profiles have a reduced accuracy of ±0.04°C, associated with the vertical interpolation, and a nearly unchanged salinity accuracy of ±0.03 g kg⁻¹.
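As a purely illustrative sketch of the second correction step (removing density inversions by adjusting salinity while leaving temperature unchanged), the Python function below uses a simplified linear equation of state and a single downward sweep. The published procedure relies on a full seawater equation of state and its own adjustment criterion, so the function name, the coefficient values, and the one-pass strategy here are assumptions for demonstration only.

```python
import numpy as np

def remove_density_inversions(temp, sal, alpha=2e-4, beta=7.6e-4, rho0=1027.0):
    """Make density non-decreasing with depth by increasing salinity only.

    temp, sal : 1D arrays ordered from surface to bottom; temperature is left untouched.
    Uses a simplified linear equation of state
        rho = rho0 * (1 - alpha * T + beta * S)
    for illustration only; alpha, beta, rho0 are nominal seawater values.
    """
    temp = np.asarray(temp, dtype=float)
    sal = np.array(sal, dtype=float)          # copy, so the input profile is not modified
    rho = rho0 * (1.0 - alpha * temp + beta * sal)
    for k in range(1, len(rho)):
        if rho[k] < rho[k - 1]:               # density inversion detected
            # smallest salinity increase that restores neutral stability
            sal[k] += (rho[k - 1] - rho[k]) / (rho0 * beta)
            rho[k] = rho[k - 1]
    return sal
```

Applied profile by profile, such a sweep removes salinity spikes that appear as statically unstable layers while preserving the transmitted temperature values.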


2021 ◽  
Author(s):  
Sébastien Barthélémy ◽  
Julien Brajard ◽  
Laurent Bertino

Going from low- to high-resolution models is an efficient way to improve the data assimilation process, in three ways: it makes better use of high-resolution observations, it represents the small-scale features of the dynamics more accurately, and it provides a high-resolution field that can further be used as the initial condition of a forecast. The pitfall of such an approach is, of course, the cost of computing a forecast with a high-resolution numerical model. This drawback is even more acute with an ensemble data assimilation approach, such as the ensemble Kalman filter, for which the numerical model must issue an ensemble of forecasts.

In our approach, we propose to use a cheap low-resolution model to provide the forecast while still performing the assimilation step in a high-resolution space. The principle of the algorithm is based on machine learning: from a low-resolution forecast, a neural network (NN) emulates a high-resolution field that can then be used to assimilate high-resolution observations. This NN super-resolution operator is trained on a single high-resolution simulation. The new data assimilation approach, denoted "super-resolution data assimilation" (SRDA), is built on an ensemble Kalman filter (EnKF) algorithm.

We applied SRDA to a quasi-geostrophic model representing simplified ocean dynamics of the surface layer, with a resolution up to four times coarser than the reference high resolution (so that the cost of the model is divided by 64). We show that this approach outperforms both the standard low-resolution data assimilation approach and an SRDA variant that uses standard interpolation instead of a neural network as the super-resolution operator. For the reduced cost of a low-resolution model, SRDA provides a high-resolution field with an error close to that of the field that would be obtained using a high-resolution model.
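To make the cycle concrete, here is a schematic Python sketch of one SRDA step as we read it from the abstract: forecast the ensemble with the cheap low-resolution model, emulate high-resolution members with the trained neural network, assimilate high-resolution observations with a stochastic EnKF, and project the analysis back to the low-resolution grid. All function and variable names (forecast_lr, upsample_nn, downsample, H, ...) are placeholders, not the authors' code.

```python
import numpy as np

def enkf_analysis(E, y, H, obs_err_std, rng):
    """Stochastic EnKF update of an ensemble E (n_state x n_ens) with observations y."""
    n_obs, n_ens = len(y), E.shape[1]
    Y = H @ E                                                           # members in observation space
    D = y[:, None] + obs_err_std * rng.standard_normal((n_obs, n_ens))  # perturbed observations
    A = E - E.mean(axis=1, keepdims=True)                               # state anomalies
    Ya = Y - Y.mean(axis=1, keepdims=True)                              # observation-space anomalies
    R = (obs_err_std ** 2) * np.eye(n_obs)
    K = A @ Ya.T @ np.linalg.inv(Ya @ Ya.T + (n_ens - 1) * R)           # ensemble Kalman gain
    return E + K @ (D - Y)

def srda_cycle(E_lr, forecast_lr, upsample_nn, downsample, y, H, obs_err_std, rng):
    """One super-resolution data assimilation (SRDA) cycle, schematically:
    1. forecast each low-resolution member with the cheap model,
    2. emulate a high-resolution member with the trained NN operator,
    3. assimilate high-resolution observations with the EnKF,
    4. project the analysis back to the low-resolution grid for the next cycle."""
    E_lr = np.column_stack([forecast_lr(x) for x in E_lr.T])
    E_hr = np.column_stack([upsample_nn(x) for x in E_lr.T])
    E_hr = enkf_analysis(E_hr, y, H, obs_err_std, rng)
    E_lr_next = np.column_stack([downsample(x) for x in E_hr.T])
    return E_lr_next, E_hr
```

The assimilation cost is paid in the high-resolution space, but every model integration stays on the cheap low-resolution grid, which is the trade-off the abstract describes.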


Author(s):  
Fan Hai-fu ◽  
Hao Quan ◽  
M. M. Woolfson

Conventional direct methods, which work so well for small structures, are less successful for macromolecules. Where it has been demonstrated that a solution might be found using direct methods, it is then found that the usual figures of merit are unable to distinguish the few good sets of phases from the large number of sets generated. The reasons for the difficulties with very large structures are considered from a first-principles approach, taking into account both factors: the large number of atoms and the low resolution of the data. A proposal is made for recognizing good phase sets by treating a large structure as the sum of a number of smaller structures, to each of which a conventional figure of merit can be applied.

