Deep Network-Based Feature Extraction and Reconstruction of Complex Material Microstructures

Author(s):  
Ruijin Cang ◽  
Max Yi Ren

Computational material design (CMD) aims to accelerate optimal design of complex material systems by integrating material science and design automation. For tractable CMD, it is required that (1) a feature space be identified to allow reconstruction of new designs, and (2) the reconstruction process be property-preserving. Existing solutions rely on the designer’s understanding of specific material systems to identify geometric and statistical features, which could be insufficient for reconstructing physically meaningful microstructures of complex material systems. This paper develops a feature learning mechanism that automates a two-way conversion between microstructures and their lower-dimensional feature representations. The proposed model is applied to four material systems: Ti-6Al-4V alloy, Pb-Sn alloy, Fontainebleau sandstone, and spherical colloids, to produce random reconstructions that are visually similar to the samples. This capability is not achieved by existing synthesis methods relying on the Markovian assumption of material systems. For Ti-6Al-4V alloy, we also show that the reconstructions preserve the mean critical fracture force of the system for a fixed processing setting. Source code and datasets are available.

2017 ◽  
Vol 139 (7) ◽  
Author(s):  
Ruijin Cang ◽  
Yaopengxiao Xu ◽  
Shaohua Chen ◽  
Yongming Liu ◽  
Yang Jiao ◽  
...  

Integrated Computational Materials Engineering (ICME) aims to accelerate optimal design of complex material systems by integrating material science and design automation. For tractable ICME, it is required that (1) a structural feature space be identified to allow reconstruction of new designs, and (2) the reconstruction process be property-preserving. The majority of existing structural representation schemes rely on the designer's understanding of specific material systems to identify geometric and statistical features, which could be biased and insufficient for reconstructing physically meaningful microstructures of complex material systems. In this paper, we develop a feature learning mechanism based on a convolutional deep belief network (CDBN) to automate a two-way conversion between microstructures and their lower-dimensional feature representations, and to achieve a 1000-fold dimension reduction from the microstructure space. The proposed model is applied to a wide spectrum of heterogeneous material systems with distinct microstructural features, including Ti–6Al–4V alloy, Pb63–Sn37 alloy, Fontainebleau sandstone, and spherical colloids, to produce material reconstructions that are close to the original samples with respect to two-point correlation functions and mean critical fracture strength. This capability is not achieved by existing synthesis methods that rely on the Markovian assumption of material microstructures.
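The two-point correlation function used above as a reconstruction metric can be estimated directly from a binary microstructure image. A minimal sketch, assuming periodic boundaries and an FFT-based estimator (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def two_point_correlation(img):
    """Estimate the two-point correlation S2(r) of a binary
    microstructure via FFT autocorrelation (periodic boundaries)."""
    img = np.asarray(img, dtype=float)
    f = np.fft.fftn(img)
    # Autocorrelation, normalized by the number of pixel pairs.
    s2 = np.real(np.fft.ifftn(f * np.conj(f))) / img.size
    return s2

# Toy check: at zero separation, S2 equals the phase volume fraction.
micro = (np.random.rand(64, 64) < 0.3).astype(float)
s2 = two_point_correlation(micro)
assert abs(s2[0, 0] - micro.mean()) < 1e-9
```

Comparing such correlation curves between original samples and reconstructions is a standard way to quantify statistical similarity of microstructures.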


Author(s):  
Ruijin Cang ◽  
Aditya Vipradas ◽  
Yi Ren

A key challenge in computational material design is to optimize for particular material properties by searching in an often high-dimensional design space of microstructures. A tractable approach to this optimization task is to identify an encoder that maps from microstructures, which are 2D or 3D images, to a lower-dimensional feature space, and a decoder that generates new microstructures based on samples from the feature space. This two-way mapping has been achieved through feature learning, as common features often exist in microstructures from the same material system. Yet existing approaches limit the size of the generated images to that of the training samples, making them less applicable to designing microstructures at arbitrary scales. This paper proposes a hybrid model that learns both common features and their spatial distributions. We show through various material systems that, unlike existing reconstruction methods, our method can generate new microstructure samples of arbitrary size that are both visually and statistically close to the training samples while preserving local microstructure patterns.


2021 ◽  
Vol 13 (4) ◽  
pp. 742
Author(s):  
Jian Peng ◽  
Xiaoming Mei ◽  
Wenbo Li ◽  
Liang Hong ◽  
Bingyu Sun ◽  
...  

Scene understanding of remote sensing images is of great significance in various applications. Its fundamental problem is how to construct representative features. Various convolutional neural network architectures have been proposed for automatically learning features from images. However, is it right to configure the same architecture to learn from all the data while ignoring the differences between images? This seems contrary to our intuition: some images are clearly easier to recognize than others. The problem is the gap between the characteristics of the images and the features learned by specific network structures. Unfortunately, the literature so far lacks an analysis of this relationship. In this paper, we explore the problem from three aspects: first, we build a visual-based evaluation pipeline of scene complexity to characterize the intrinsic differences between images; second, we analyze the relationship between semantic concepts and feature representations, i.e., the scalability and hierarchy of features, which are the essential elements in CNNs of different architectures, for remote sensing scenes of different complexity; third, we introduce class activation mapping (CAM), a visualization method that explains feature learning within neural networks, to analyze the relationship between scenes of different complexity and semantic feature representations. The experimental results show that a complex scene needs deeper and multi-scale features, whereas a simpler scene needs lower-level and single-scale features. Moreover, complex scene concepts depend more on the joint semantic representation of multiple objects. Furthermore, we propose a framework for predicting the scene complexity of an image and use it to design a depth- and scale-adaptive model, which achieves higher performance with fewer parameters than the original model, demonstrating the potential significance of scene complexity.
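CAM, the visualization method mentioned above, projects the classifier's fully connected weights back onto the last convolutional feature maps to localize what the network attends to. A minimal NumPy sketch (array shapes and the normalization are illustrative assumptions):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Weight the last conv feature maps (C, H, W) by the fully
    connected weights (num_classes, C) of one class, sum over
    channels, and normalize, yielding an (H, W) localization map."""
    w = fc_weights[class_idx]                             # (C,)
    cam = np.tensordot(w, feature_maps, axes=([0], [0]))  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()                                  # scale to [0, 1]
    return cam

fmaps = np.random.rand(8, 7, 7)   # C=8 channels of 7x7 activations
weights = np.random.rand(10, 8)   # 10-class linear classifier head
cam = class_activation_map(fmaps, weights, class_idx=3)
assert cam.shape == (7, 7)
```

Upsampling the resulting map to the input resolution and overlaying it on the image gives the familiar heatmap visualizations.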


2021 ◽  
pp. 1-13
Author(s):  
Yikai Zhang ◽  
Yong Peng ◽  
Hongyu Bian ◽  
Yuan Ge ◽  
Feiwei Qin ◽  
...  

Concept factorization (CF) is an effective matrix factorization model which has been widely used in many applications. In CF, a linear combination of data points serves as the dictionary, based on which CF can be performed both in the original feature space and in the reproducing kernel Hilbert space (RKHS). The conventional CF treats each dimension of the feature vector equally during the data reconstruction process, which contradicts the common observation that different features have different discriminative abilities and therefore contribute differently to pattern recognition. In this paper, we introduce an auto-weighting variable into the conventional CF objective function to adaptively learn the contributions of different features, and propose a new model termed Auto-Weighted Concept Factorization (AWCF). In AWCF, on one hand, feature importance can be quantitatively measured by the auto-weighting variable, which assigns larger weights to features with better discriminative abilities; on the other hand, we obtain a more effective data representation of the semantic information. A detailed optimization procedure for the AWCF objective function is derived, and its complexity and convergence are analyzed. Experiments are conducted on both synthetic and representative benchmark data sets, and the clustering results demonstrate the effectiveness of AWCF in comparison with related models.
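The conventional CF model that AWCF extends factorizes the data as X ≈ XWVᵀ, with the dictionary XW built from the data points themselves. A minimal sketch of the standard multiplicative updates for this baseline (the auto-weighting variable of AWCF is not included; names and iteration count are our own):

```python
import numpy as np

def concept_factorization(X, k, iters=200, eps=1e-9):
    """Factorize nonnegative X (d x n) as X @ W @ V.T with
    W, V of shape (n, k), using the standard multiplicative
    updates expressed through the Gram matrix K = X.T @ X."""
    n = X.shape[1]
    rng = np.random.default_rng(0)
    W = rng.random((n, k))
    V = rng.random((n, k))
    K = X.T @ X
    for _ in range(iters):
        W *= (K @ V) / (K @ W @ (V.T @ V) + eps)
        V *= (K @ W) / (V @ (W.T @ K @ W) + eps)
    return W, V

X = np.abs(np.random.default_rng(1).random((20, 30)))
W, V = concept_factorization(X, k=5)
err = np.linalg.norm(X - X @ W @ V.T) / np.linalg.norm(X)
```

Because the updates only touch X through K, the same iteration carries over to the kernelized (RKHS) setting by replacing K with a kernel matrix.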


2021 ◽  
Vol 11 (4) ◽  
pp. 1380
Author(s):  
Yingbo Zhou ◽  
Pengcheng Zhao ◽  
Weiqin Tong ◽  
Yongxin Zhu

While Generative Adversarial Networks (GANs) have shown promising performance in image generation, they suffer from numerous issues such as mode collapse and training instability. To stabilize GAN training and improve image synthesis quality and diversity, we propose a simple yet effective approach termed Contrastive Distance Learning GAN (CDL-GAN) in this paper. Specifically, we add Consistent Contrastive Distance (CoCD) and Characteristic Contrastive Distance (ChCD) into a principled framework to improve GAN performance. The CoCD explicitly maximizes the ratio of the distance between generated images to the increment between noise vectors, strengthening image feature learning for the generator. The ChCD measures the sampling distance of the encoded images in Euler space to boost feature representations for the discriminator. We model the framework by employing a Siamese network as a module in GANs without any modification to the backbone. Both qualitative and quantitative experiments conducted on three public datasets demonstrate the effectiveness of our method.
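The CoCD term described above can be sketched as a ratio between an image-space distance and the corresponding noise-space distance; maximizing it penalizes a generator that maps distinct noise vectors to near-identical images. A toy NumPy illustration (the exact loss form in the paper may differ; the function names and toy generators are ours):

```python
import numpy as np

def cocd_ratio(gen, z1, z2):
    """Toy Consistent Contrastive Distance term: distance between two
    generated images divided by the distance between their noise
    vectors. A collapsed generator scores near zero, so maximizing
    this ratio discourages mode collapse."""
    d_img = np.linalg.norm(gen(z1) - gen(z2))
    d_z = np.linalg.norm(z1 - z2)
    return d_img / (d_z + 1e-8)

collapsed = lambda z: np.zeros(64)   # ignores its input entirely
diverse = lambda z: np.tile(z, 8)    # spreads noise into the "image"
z1, z2 = np.random.rand(8), np.random.rand(8)
assert cocd_ratio(collapsed, z1, z2) < cocd_ratio(diverse, z1, z2)
```

In practice the distances would be computed on features from the Siamese encoder rather than raw pixels.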


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 653
Author(s):  
Ruihua Zhang ◽  
Fan Yang ◽  
Yan Luo ◽  
Jianyi Liu ◽  
Jinbin Li ◽  
...  

Thorax disease classification is a challenging task due to complex pathologies, subtle texture changes, and other factors. It has been extensively studied for years, largely because of its wide application in computer-aided diagnosis. Most existing methods directly learn global feature representations from whole chest X-ray (CXR) images, without considering in depth the richer visual cues lying around informative local regions. Thus, these methods often produce sub-optimal thorax disease classification performance because they ignore the very informative pathological changes around organs. In this paper, we propose a novel Part-Aware Mask-Guided Attention Network (PMGAN) that learns complementary global and local feature representations from the all-organ region and multiple single-organ regions simultaneously for thorax disease classification. Specifically, multiple soft attention modules are designed to progressively guide feature learning toward the globally informative regions of the whole CXR image. A mask-guided attention module is designed to further search for informative regions and visual cues within the all-organ or single-organ images, where attention is regularized by automatically generated organ masks without introducing extra computation at inference. In addition, a multi-task learning strategy is designed to effectively maximize the learning of complementary local and global representations. The proposed PMGAN has been evaluated on the ChestX-ray14 dataset, and the experimental results demonstrate its superior thorax disease classification performance compared with state-of-the-art methods.
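The mask-guided attention idea, in which a soft attention map is steered by an automatically generated organ mask, can be sketched as follows. The shapes and the simple element-wise gating are illustrative assumptions, not the paper's exact module:

```python
import numpy as np

def mask_guided_attention(features, organ_mask):
    """Gate a feature map (C, H, W) with a soft attention map that is
    regularized by a binary organ mask (H, W): attention outside the
    organ is suppressed, so learning focuses on cues around the organ."""
    # Soft attention from channel-averaged energy, squashed to (0, 1).
    energy = features.mean(axis=0)
    attn = 1.0 / (1.0 + np.exp(-energy))
    attn = attn * organ_mask              # hard form of mask regularization
    return features * attn[None, :, :]    # broadcast over channels

feats = np.random.rand(16, 32, 32)
mask = np.zeros((32, 32))
mask[8:24, 8:24] = 1.0                    # toy rectangular "organ" region
out = mask_guided_attention(feats, mask)
assert out.shape == feats.shape
```

Since the mask only multiplies an attention map computed anyway, this style of regularization adds no parameters and, as the abstract notes, no extra cost at inference if the mask branch is dropped.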


2020 ◽  
Vol 34 (07) ◽  
pp. 11386-11393 ◽  
Author(s):  
Shuang Li ◽  
Chi Liu ◽  
Qiuxia Lin ◽  
Binhui Xie ◽  
Zhengming Ding ◽  
...  

Tremendous research efforts have been made to advance deep domain adaptation (DA) by seeking domain-invariant features. Most existing deep DA models only focus on aligning feature representations of task-specific layers across domains while using a fully shared convolutional architecture for source and target. However, we argue that such strongly shared convolutional layers might be harmful for domain-specific feature learning when the source and target data distributions differ to a large extent. In this paper, we relax the shared-convnets assumption made by previous DA methods and propose a Domain Conditioned Adaptation Network (DCAN), which aims to excite distinct convolutional channels with a domain conditioned channel attention mechanism. As a result, critical low-level domain-dependent knowledge can be explored appropriately. As far as we know, this is the first work to explore domain-wise convolutional channel activation for deep DA networks. Moreover, to effectively align high-level feature distributions across the two domains, we further deploy domain conditioned feature correction blocks after task-specific layers, which explicitly correct the domain discrepancy. Extensive experiments on three cross-domain benchmarks demonstrate that the proposed approach outperforms existing methods by a large margin, especially on very tough cross-domain learning tasks.
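The domain conditioned channel attention mechanism can be sketched as a squeeze-and-excitation style gate with separate excitation weights per domain, so that source and target can each excite their own channels. A simplified NumPy illustration (the actual DCAN module differs in detail; parameter layout is our assumption):

```python
import numpy as np

def domain_conditioned_attention(features, domain, params):
    """Recalibrate channels of a (C, H, W) feature map using the
    excitation weights of the given domain ('source' or 'target')."""
    squeeze = features.mean(axis=(1, 2))        # (C,) global average pool
    W1, W2 = params[domain]                     # domain-specific FC pair
    hidden = np.maximum(W1 @ squeeze, 0.0)      # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(W2 @ hidden))) # sigmoid channel gate, (C,)
    return features * gate[:, None, None]

C, r = 8, 2                                     # channels, reduction ratio
rng = np.random.default_rng(0)
params = {d: (rng.standard_normal((C // r, C)),
              rng.standard_normal((C, C // r)))
          for d in ("source", "target")}
feats = rng.random((C, 5, 5))
out_s = domain_conditioned_attention(feats, "source", params)
assert out_s.shape == feats.shape
```

Only the small excitation weights are duplicated per domain; the convolutional backbone producing `features` stays shared.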


Author(s):  
Martín I Idiart ◽  
Pedro Ponte Castañeda

This work is concerned with the development of bounds for nonlinear composites with anisotropic phases by means of an appropriate generalization of the ‘linear comparison’ variational method, introduced by Ponte Castañeda for composites with isotropic phases. The bounds can be expressed in terms of a convex (concave) optimization problem, requiring the computation of certain ‘error’ functions that, in turn, depend on the solution of a non-concave/non-convex optimization problem. A simple formula is derived for the overall stress–strain relation of the composite associated with the bound, and special, simpler forms are provided for power-law materials, as well as for ideally plastic materials, where the computation of the error functions simplifies dramatically. As will be seen in part II of this work in the specific context of composites with crystalline phases (e.g. polycrystals), the new bounds have the capability of improving on earlier bounds, such as the ones proposed by deBotton and Ponte Castañeda for these specific material systems.


Author(s):  
Yudong Zhang ◽  
Wenhao Zheng ◽  
Ming Li

Semantic feature learning for natural language and programming language is a preliminary step in addressing many software mining tasks. Many existing methods leverage lexical and syntactic information to learn features for textual data. However, such information is inadequate to represent the entire semantics of either a text sentence or a code snippet. This motivates us to propose a new approach to learning semantic features for both languages by extracting three levels of information, namely global, local, and sequential information, from textual data. For tasks involving both modalities, we project the data of both types into a uniform feature space so that the complementary knowledge between them can be utilized in their representations. In this paper, we build a novel and general-purpose feature learning framework called UniEmbed to uniformly learn comprehensive semantic representations for both natural language and programming language. Experimental results on three real-world software mining tasks show that UniEmbed outperforms state-of-the-art models in feature learning, demonstrating the capacity and effectiveness of our model.

