Enhancing Borehole Image Data on a High-Resolution PC

1989 ◽  
Author(s):  
S.A. Wong ◽  
R.A. Startzman ◽  
T-B. Kuo
2021 ◽  
Author(s):  
Nithin G R ◽  
Nitish Kumar M ◽  
Venkateswaran Narasimhan ◽  
Rajanikanth Kakani ◽  
Ujjwal Gupta ◽  
...  

Pansharpening is the task of creating a High-Resolution Multi-Spectral (HRMS) image by extracting pixel details from a High-Resolution Panchromatic image and infusing them into a Low-Resolution Multi-Spectral (LRMS) image. With the boom in the amount of satellite image data, researchers have replaced traditional approaches with deep learning models. However, existing deep learning models are not built to capture intricate pixel-level relationships. Motivated by the recent success of self-attention mechanisms in computer vision tasks, we propose Pansformers, a transformer-based self-attention architecture that computes band-wise attention. We further improve the attention network by introducing a Multi-Patch Attention mechanism, which operates on non-overlapping, local patches of the image. Our model succeeds in infusing relevant local details from the Panchromatic image while preserving the spectral integrity of the MS image. We show that our Pansformer model significantly improves the performance metrics and the output image quality on imagery from two satellite distributions, IKONOS and LANDSAT-8.
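The core idea of the Multi-Patch Attention mechanism described above can be sketched in a few lines: split the image into non-overlapping patches and compute self-attention only among the pixels of each patch. This is a minimal, illustrative sketch using identity query/key/value projections and a single band; the paper's actual architecture (learned projections, band-wise attention, multi-spectral inputs) is not reproduced here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_patch_attention(img, patch=4):
    """Self-attention computed independently inside non-overlapping patches
    of a single-band image. H and W must be divisible by `patch`.
    Q = K = V = raw pixel values (identity projections, for illustration)."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            # Tokens are the patch's pixels, each a 1-dim feature vector.
            x = img[i:i + patch, j:j + patch].reshape(-1, 1).astype(float)
            scores = softmax(x @ x.T / np.sqrt(x.shape[1]))  # (p*p, p*p)
            out[i:i + patch, j:j + patch] = (scores @ x).reshape(patch, patch)
    return out
```

Because attention is restricted to each local patch, the cost grows with patch size rather than with the full image, which is what makes the mechanism tractable for large satellite scenes.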


Author(s):  
Roger Hyam

Many of the world’s natural history collections are creating high-resolution digital images of their specimens. They often make these available on the web through some form of zoomable viewer. For historical reasons, a hotchpotch of technologies is used to achieve this. This diversity has led to two issues. Firstly, maintenance becomes costly as technologies need replacing. Secondly, there is little chance to share data between institutions or provide a unified user experience. A researcher visiting four different virtual collections may have four very different experiences. Similar issues exist in the archives and libraries disciplines, which also need to share high-resolution, annotated images of the physical objects in their care. In response to this issue, many have coalesced around the International Image Interoperability Framework (IIIF). IIIF is a set of shared application programming interface (API) specifications for interoperable functionality in digital image repositories. It separates the notion of a viewer, which may be used as part of a website or other application, from the web services that feed data to that viewer. By using a common API for serving data about images, different viewers can be used to view the same images; this provides an upgrade path that does not require replacing viewer and server software at the same time, and allows different viewers to be used for the same image data. Potentially more importantly, it facilitates the construction of applications that view data from different collections as if they were in the same place. From the researcher’s point of view, the experience could be the same whether the virtual specimen is hosted locally or in a museum on another continent. One important thing has been deliberately omitted from the IIIF standard; this omission has enabled its rapid adoption, but it also makes the framework incomplete for building research applications.
IIIF transmits no semantic data about the subject of the images, only labels. The IIIF data therefore needs to be bound, in some uniform way, to semantically rich data about the specimens being viewed. Consortium of European Taxonomic Facilities (CETAF) specimen identifiers are now widely adopted by natural history collections in Europe. Each individual collection object is designated by a URI chosen and maintained by the institution owning the specimen (Groom et al. 2017, Güntsch et al. 2018, Güntsch et al. 2017, Hyam et al. 2012). Under Linked Data conventions, content negotiation is used at the server so that users accessing an object with a web browser are redirected to a human-readable representation of the object, typically a web page, whilst software systems requiring machine-processable representations are redirected to an RDF-encoded metadata record. CETAF specimen identifiers are therefore ideal partners for IIIF representations of specimens. But how should we join the two together in a semantically rich way that will be generally understandable? SYNTHESYS+ is a European Commission funded programme that facilitates collaboration and network building among European natural history collections. It is concerned with both physical and virtual access to the 390 million specimens of plants and animals housed in participating institutions. Under Task 4.3 of this project, we have been working to create a reliable way to link from the RDF metadata about specimens to IIIF images of those specimens, and from the images back to the metadata. By January 2021, we aim to have ten exemplar institutions publishing IIIF manifest files linked to CETAF identifiers for a few million specimens, and for this to act as a catalyst for wider adoption in the natural history community. This presentation gives an update on the rollout of these implementations, paying particular attention to the challenges of semantically annotating specimens with images.
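The two mechanisms the abstract combines can be made concrete. The IIIF Image API defines a fixed request-URI pattern ({base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}), while a CETAF specimen URI is dereferenced with content negotiation (an Accept header selects HTML or RDF). The sketch below illustrates both; the host names and identifier are hypothetical examples, not real endpoints.

```python
def iiif_image_url(base, identifier, region="full", size="max",
                   rotation="0", quality="default", fmt="jpg"):
    """Compose a IIIF Image API request URI following the spec's pattern:
    {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
    ("max" is the Image API 3.0 full-size keyword; 2.x uses "full")."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Content negotiation on a CETAF specimen URI: a browser default yields HTML,
# while a machine client sends an Accept header asking for RDF instead.
HTML_ACCEPT = {"Accept": "text/html"}
RDF_ACCEPT = {"Accept": "text/turtle"}  # server redirects to the RDF metadata record
```

A client holding a specimen's RDF record and its IIIF manifest can thus move between the semantic description and the pixels of the same object, which is exactly the linkage Task 4.3 sets out to standardise.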


Author(s):  
Jingtan Li ◽  
Maolin Xu ◽  
Hongling Xiu

As the resolution of remote sensing images gets higher and higher, high-resolution remote sensing images are being widely used in many areas. Among their basic applications is image information extraction. Faced with massive high-resolution remote sensing image data, traditional target-recognition methods struggle to cope. Therefore, this paper proposes a remote sensing image building-extraction method based on the U-net network. First, the U-net semantic segmentation network is trained on the training set, with the validation set used to monitor training; finally, the test set is used for testing. The experimental results show that U-net can be applied to the extraction of buildings.


2018 ◽  
Vol 6 (3) ◽  
pp. T723-T737
Author(s):  
Tao Nian ◽  
Zaixing Jiang ◽  
Hongyu Song

Electrical borehole image logs have the potential for direct interpretation of lithofacies characteristics. The challenge is to establish a set of reliable diagnostic criteria with which electrical images can be correlated to lithofacies features such as lithology, sedimentary structures, and bedding sequences. We used a “behind-outcrop” logging procedure that can link borehole images to actual rocks and also reduce the errors associated with core shifting. To better reveal the correlation between borehole images and carbonate lithofacies for subsurface reservoir applications, and to make a comparative petrographic analysis with the aim of establishing diagnostic criteria for borehole images, a 200 m well was drilled in the Tarim Ordovician outcrop. A full set of borehole image data and cores with an approximately 100% coring recovery rate was acquired over the same depth interval, and more than 100 stained thin sections were prepared. Electrical borehole images in wells adjacent to the outcrop were further interpreted to validate the proposed criteria. Borehole image electrofacies were established according to image elements, such as stacking mode, bed thickness, conglomerate diameter, rim characteristics, and the internal structure of beds/conglomerates, to interpret depositional/diagenetic textures and platform-slope associations. Nine image electrofacies types, corresponding to mud-, wacke-, pack-, grain-, and bindstone textures, were identified and interpreted in detail. Our method yields a set of diagnostic criteria for borehole image interpretation on carbonate platform slopes, and it provides a powerful tool for direct interpretation of electrical images in similar reservoir environments.
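The workflow above amounts to a rule-based mapping from measurable image elements (bed thickness, clast diameter, fabric) to texture-based electrofacies labels. The sketch below shows the shape of such a classifier; every threshold and rule here is an illustrative placeholder, not the diagnostic criteria actually established in the study.

```python
def classify_electrofacies(bed_thickness_m, max_clast_diameter_mm, matrix_supported):
    """Toy rule-based mapping from borehole-image elements to a
    Dunham-style texture label. All thresholds are hypothetical,
    chosen only to illustrate the structure of such criteria."""
    if max_clast_diameter_mm > 2.0:
        # Coarse clasts: fabric decides between clast- and matrix-supported textures.
        return "wackestone" if matrix_supported else "grainstone"
    if bed_thickness_m < 0.1:
        # Thin, fine-grained beds read as mud-dominated on the image log.
        return "mudstone"
    return "packstone"
```

In practice each rule would be calibrated against the behind-outcrop core and thin-section observations before being applied to adjacent subsurface wells.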


2018 ◽  
Vol 15 (9) ◽  
pp. 1451-1455 ◽  
Author(s):  
Grant J. Scott ◽  
Kyle C. Hagan ◽  
Richard A. Marcum ◽  
James Alex Hurt ◽  
Derek T. Anderson ◽  
...  
