Scalable Microstructure Reconstruction With Multi-Scale Pattern Preservation

Author(s):  
Ruijin Cang ◽  
Aditya Vipradas ◽  
Yi Ren

A key challenge in computational material design is to optimize for particular material properties by searching an often high-dimensional design space of microstructures. A tractable approach to this optimization task is to identify an encoder that maps microstructures, which are 2D or 3D images, to a lower-dimensional feature space, and a decoder that generates new microstructures from samples drawn in the feature space. This two-way mapping has been achieved through feature learning, as common features often exist in microstructures from the same material system. Yet existing approaches limit the size of the generated images to that of the training samples, making them less applicable to designing microstructures at arbitrary scales. This paper proposes a hybrid model that learns both common features and their spatial distributions. We show on various material systems that, unlike existing reconstruction methods, our method can generate new microstructure samples of arbitrary size that are both visually and statistically close to the training samples while preserving local microstructure patterns.
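The encoder/decoder interface described above can be sketched minimally. This is not the paper's model (a learned deep network); the linear maps, the 16×16 image size, and the 8-dimensional feature space are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 16x16 microstructure images, 8-D feature space.
img_dim, feat_dim = 16 * 16, 8

# Stand-in linear encoder/decoder; the paper learns these as deep networks.
W_enc = rng.standard_normal((feat_dim, img_dim)) / np.sqrt(img_dim)
W_dec = np.linalg.pinv(W_enc)  # decoder approximately inverts the encoder

def encode(micro):
    """Map a 2D microstructure image to a low-dimensional feature vector."""
    return W_enc @ micro.ravel()

def decode(z):
    """Map a feature vector back to a reconstructed 2D microstructure."""
    return (W_dec @ z).reshape(16, 16)

z = encode(rng.random((16, 16)))                 # sample -> feature space
new_micro = decode(z + 0.1 * rng.standard_normal(feat_dim))  # perturb, generate
print(z.shape, new_micro.shape)                  # (8,) (16, 16)
```

Sampling around encoded features and decoding is what makes the feature space a usable design space, though the arbitrary-size generation the paper claims requires its hybrid model rather than this fixed-size sketch.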

Author(s):  
Ruijin Cang ◽  
Max Yi Ren

Computational material design (CMD) aims to accelerate optimal design of complex material systems by integrating material science and design automation. For tractable CMD, it is required that (1) a feature space be identified to allow reconstruction of new designs, and (2) the reconstruction process be property-preserving. Existing solutions rely on the designer’s understanding of specific material systems to identify geometric and statistical features, which could be insufficient for reconstructing physically meaningful microstructures of complex material systems. This paper develops a feature learning mechanism that automates a two-way conversion between microstructures and their lower-dimensional feature representations. The proposed model is applied to four material systems: Ti-6Al-4V alloy, Pb-Sn alloy, Fontainebleau sandstone, and spherical colloids, to produce random reconstructions that are visually similar to the samples. This capability is not achieved by existing synthesis methods relying on the Markovian assumption of material systems. For Ti-6Al-4V alloy, we also show that the reconstructions preserve the mean critical fracture force of the system for a fixed processing setting. Source code and datasets are available.


2019 ◽  
Vol 9 (22) ◽  
pp. 4749
Author(s):  
Lingyun Jiang ◽  
Kai Qiao ◽  
Linyuan Wang ◽  
Chi Zhang ◽  
Jian Chen ◽  
...  

Decoding human brain activity, especially reconstructing visual stimuli via functional magnetic resonance imaging (fMRI), has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data restrict satisfactory reconstruction, especially for deep-learning-based reconstruction methods that require huge amounts of labelled samples. In contrast to such methods, humans can recognize a new image because the human visual system is naturally capable of extracting features from any object and comparing them. Inspired by this visual mechanism, we introduce comparison into the deep learning method to realize better visual reconstruction, making full use of each sample and of the relationship within each sample pair by learning to compare. On this basis, we propose a Siamese reconstruction network (SRN) method. Using the SRN, we achieve satisfactory results on two fMRI recording datasets: 72.5% accuracy on the digit dataset and 44.6% accuracy on the character dataset. Essentially, this approach increases the training data from about n samples to 2n sample pairs, taking full advantage of the limited quantity of training samples. The SRN learns to draw together sample pairs of the same class and to disperse sample pairs of different classes in feature space.
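The pairing idea above can be sketched as follows. Note this sketch pairs samples exhaustively, which yields n(n-1)/2 pairs; the abstract's "2n sample pairs" presumably reflects a specific sampling scheme not detailed here, so treat the pairing strategy as an assumption.

```python
import itertools

def make_pairs(samples, labels):
    """Build sample pairs for a Siamese setup: each pair gets label 1 if the
    two samples share a class (to be drawn together in feature space) and
    0 otherwise (to be dispersed)."""
    pairs = []
    for i, j in itertools.combinations(range(len(samples)), 2):
        pairs.append(((samples[i], samples[j]), int(labels[i] == labels[j])))
    return pairs

X = ["s0", "s1", "s2", "s3"]      # four stand-in samples
y = [0, 0, 1, 1]
pairs = make_pairs(X, y)
print(len(pairs))                  # C(4,2) = 6 pairs from only 4 samples
```

The payoff is that supervision now comes from pairwise relations, so even a small labelled set yields many more training signals than per-sample labels alone.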


2015 ◽  
Vol 35-36 ◽  
pp. 206-214 ◽  
Author(s):  
Shengfa Wang ◽  
Nannan Li ◽  
Shuai Li ◽  
Zhongxuan Luo ◽  
Zhixun Su ◽  
...  

2018 ◽  
Vol 57 (4S) ◽  
pp. 04FF04
Author(s):  
Aiwen Luo ◽  
Fengwei An ◽  
Xiangyu Zhang ◽  
Lei Chen ◽  
Zunkai Huang ◽  
...  

Algorithms ◽  
2018 ◽  
Vol 11 (8) ◽  
pp. 112 ◽  
Author(s):  
Ruhua Wang ◽  
Ling Li ◽  
Jun Li

In this paper, damage detection/identification for a seven-storey steel structure is investigated using vibration signals and deep learning techniques. Vibration characteristics, such as natural frequencies and mode shapes, are captured and used as input to a deep learning network, while the output vector represents the structural damage and its location. A deep auto-encoder with a sparsity constraint is used for effective feature extraction from the different types of signals, and another deep auto-encoder learns the relationship between these signals for the final regression. The existing SAF model, from a recent study of the same problem, processed all signals in one serial auto-encoder model. Such models have the following difficulties: (1) the natural frequencies and mode shapes are on different magnitude scales, and it is not logical to normalize them to the same scale when building the models from training samples; and (2) some frequencies and mode shapes may be unrelated to each other, so it is not appropriate to use them jointly for dimension reduction. To tackle these problems for multi-scale datasets in structural health monitoring (SHM), a novel parallel auto-encoder framework (Para-AF) is proposed in this paper. It processes the frequency signals and mode shapes separately for feature selection via dimension reduction, and then combines these features in relationship learning for regression. Furthermore, we introduce a sparsity constraint in the model reduction stage to improve performance. Two experiments are conducted for performance evaluation, and our results show the significant advantages of the proposed model in comparison with existing approaches.
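The parallel-stream idea can be sketched as below. The SVD projection stands in for a trained sparse auto-encoder, and the array shapes (50 samples, 7 frequencies, 7×7 mode-shape entries) are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def autoencoder_features(X, k):
    """Stand-in for a sparse auto-encoder: project each sample onto the
    top-k principal directions of the (centered) data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Frequencies and mode shapes live on different magnitude scales, so each
# stream is reduced separately before the features are joined for regression.
freqs  = rng.random((50, 7)) * 100.0   # e.g. Hz-scale natural frequencies
shapes = rng.random((50, 7 * 7))       # unit-scale mode-shape entries

feats = np.hstack([autoencoder_features(freqs, 3),
                   autoencoder_features(shapes, 5)])
print(feats.shape)   # (50, 8): combined features for the regression stage
```

Reducing each stream in its own model is what avoids forcing Hz-scale and unit-scale signals through one shared normalization, which is the SAF model's first difficulty above.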


Author(s):  
Yan Bai ◽  
Yihang Lou ◽  
Yongxing Dai ◽  
Jun Liu ◽  
Ziqian Chen ◽  
...  

Vehicle Re-Identification (ReID) has attracted considerable research effort due to its great significance to public security. In vehicle ReID, we aim to learn features that are powerful in discriminating the subtle differences between visually similar vehicles, and also robust to different orientations of the same vehicle. However, these two characteristics are hard to encapsulate in a single feature representation under unified supervision. Here we propose a Disentangled Feature Learning Network (DFLNet) to learn orientation-specific and common features concurrently, which are discriminative at the level of details and invariant to orientations, respectively. Moreover, to use these two types of features effectively for ReID, we further design a feature metric alignment scheme to ensure consistency of the metric scales. The experiments show the effectiveness of our method, which achieves state-of-the-art performance on three challenging datasets.
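The abstract does not specify the metric alignment scheme, so the sketch below shows one common way to make distances over two feature types comparable: L2-normalizing each type so both live on the unit sphere. The feature magnitudes are invented for illustration.

```python
import numpy as np

def align_metric(f):
    """L2-normalize features so that distances computed on the two feature
    types (orientation-specific vs. common) live on a comparable scale."""
    return f / np.linalg.norm(f, axis=-1, keepdims=True)

rng = np.random.default_rng(3)
common = 10.0 * rng.standard_normal((5, 16))   # large-magnitude features
orient = 0.1 * rng.standard_normal((5, 16))    # small-magnitude features

fc, fo = align_metric(common), align_metric(orient)
print(np.allclose(np.linalg.norm(fc, axis=1), 1.0),
      np.allclose(np.linalg.norm(fo, axis=1), 1.0))
```

Without some such alignment, the larger-magnitude feature type would dominate any combined distance, defeating the purpose of learning both.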


2020 ◽  
Vol 34 (07) ◽  
pp. 12975-12983
Author(s):  
Sicheng Zhao ◽  
Guangzhi Wang ◽  
Shanghang Zhang ◽  
Yang Gu ◽  
Yaxian Li ◽  
...  

Deep neural networks suffer from performance decay when there is domain shift between the labeled source domain and the unlabeled target domain, which motivates research on domain adaptation (DA). Conventional DA methods usually assume that the labeled data is sampled from a single source distribution. In practice, however, labeled data may be collected from multiple sources, and naive application of single-source DA algorithms may lead to suboptimal solutions. In this paper, we propose a novel multi-source distilling domain adaptation (MDDA) network, which not only considers the different distances between each source and the target, but also investigates the different similarities of the source samples to the target ones. Specifically, the proposed MDDA includes four stages: (1) pre-train the source classifiers separately using the training data from each source; (2) adversarially map the target into the feature space of each source by minimizing the empirical Wasserstein distance between source and target; (3) select the source training samples that are closer to the target to fine-tune the source classifiers; and (4) classify each encoded target feature with the corresponding source classifier, and aggregate the different predictions using the respective domain weights, which correspond to the discrepancy between each source and the target. Extensive experiments are conducted on public DA benchmarks, and the results demonstrate that the proposed MDDA significantly outperforms the state-of-the-art approaches. Our source code is released at: https://github.com/daoyuan98/MDDA.
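Stage (4) above can be sketched as follows. The inverse-distance weighting is an illustrative assumption; the abstract only states that weights correspond to source-target discrepancy, not the exact weighting function.

```python
import numpy as np

def aggregate(preds, discrepancies):
    """Weight each source classifier's prediction by how close that source
    is to the target: smaller discrepancy -> larger weight (inverse-distance
    weighting, normalized to sum to 1)."""
    w = 1.0 / np.asarray(discrepancies)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, preds))

# Two hypothetical source classifiers' class-probability outputs for one
# target sample, plus stand-in Wasserstein distances to the target.
preds = [np.array([0.9, 0.1]), np.array([0.4, 0.6])]
disc = [0.5, 2.0]

print(aggregate(preds, disc))   # the closer source (distance 0.5) dominates
```

With these numbers the weights are 0.8 and 0.2, so the aggregated prediction is 0.8·[0.9, 0.1] + 0.2·[0.4, 0.6] = [0.8, 0.2].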


2019 ◽  
Vol 11 (14) ◽  
pp. 1678 ◽  
Author(s):  
Yongyong Fu ◽  
Ziran Ye ◽  
Jinsong Deng ◽  
Xinyu Zheng ◽  
Yibo Huang ◽  
...  

Marine aquaculture plays an important role in seafood supply, economic development, and coastal ecosystem service provision. The precise delineation of marine aquaculture areas from high-spatial-resolution (HSR) imagery is vital for the sustainable development and management of coastal marine resources. However, the various sizes and detailed structures of marine objects make accurate mapping from HSR images difficult with conventional methods. Therefore, this study extracts marine aquaculture areas using an automatic labeling method based on a convolutional neural network (CNN), i.e., an end-to-end hierarchical cascade network (HCNet). Specifically, for marine objects of various sizes, we improve classification performance by utilizing multi-scale contextual information: based on the output of the CNN encoder, we employ atrous convolutions to capture multi-scale contextual information and aggregate it in a hierarchical cascade manner. Meanwhile, for marine objects with detailed structures, we refine the detailed information gradually using a series of long-span connections carrying fine-resolution features from the shallow layers. In addition, to decrease the semantic gaps between features at different levels, we refine the feature space (i.e., the channel and spatial dimensions) using an attention-based module. Experimental results show that the proposed HCNet can effectively identify and distinguish different kinds of marine aquaculture, with an overall accuracy of 98%. It also achieves better classification performance than object-based support vector machines and state-of-the-art CNN-based methods such as FCN-32s, U-Net, and DeepLabV2. Our method lays a solid foundation for the intelligent monitoring and management of coastal marine resources.
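The atrous (dilated) convolution mechanism can be sketched in 1-D. This is a generic illustration of how dilation rates enlarge the receptive field, not HCNet's actual 2-D layers; the input, kernel, and rates are invented.

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution, 'valid' mode: kernel taps are spaced
    `rate` positions apart, enlarging the receptive field without adding
    parameters or reducing resolution by pooling."""
    span = (len(kernel) - 1) * rate
    out = np.zeros(len(x) - span)
    for i in range(len(out)):
        out[i] = sum(k * x[i + j * rate] for j, k in enumerate(kernel))
    return out

x = np.arange(10, dtype=float)
k = np.array([1.0, 1.0, 1.0])

# Running several rates and aggregating them (here, just collecting the
# outputs) captures context at multiple scales, as in a hierarchical cascade.
multi_scale = [dilated_conv1d(x, k, r) for r in (1, 2, 4)]
print([len(m) for m in multi_scale])   # [8, 6, 2]
```

A rate-4 kernel of size 3 covers a span of 9 input positions with only 3 weights, which is why stacked atrous rates are an economical way to gather multi-scale context.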


Author(s):  
Caixia Sun ◽  
Lian Zou ◽  
Cien Fan ◽  
Yu Shi ◽  
Yifeng Liu

Deep neural networks are vulnerable to adversarial examples, which can fool models through carefully designed perturbations. An intriguing phenomenon is that adversarial examples often transfer between models, which makes black-box attacks effective in real-world applications. However, the adversarial examples generated by existing methods typically overfit the structure and feature representation of the source model, resulting in a low success rate in the black-box setting. To address this issue, we propose the multi-scale feature attack to boost attack transferability, which adjusts the internal feature-space representation of the adversarial image to move it far from the internal representation of the original image. We show that by selecting a low-level layer and a high-level layer of the source model at which to apply the perturbations, the crafted adversarial examples differ from the original images not just in class but also in their feature-space representations. To further improve the transferability of adversarial examples, we apply a reverse cross-entropy loss to reduce overfitting, and show that it is effective for attacking adversarially trained models with strong defensive ability. Extensive experiments show that the proposed methods consistently outperform the iterative fast gradient sign method (IFGSM) and the momentum iterative fast gradient sign method (MIFGSM) under the challenging black-box setting.
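One step of the feature-space idea can be sketched as below: take a sign-gradient (FGSM-style) step that increases the distance between the adversarial image's internal features and the original's. The linear "layer" stands in for a real network layer, and the single-layer, single-step setup is a simplifying assumption, not the paper's multi-scale method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in internal "layer": a fixed linear feature map of a hypothetical
# source model (a real attack would use activations of a chosen layer).
W = rng.standard_normal((4, 8))
feat = lambda img: W @ img

def feature_attack_step(img, eps=0.05):
    """One sign-gradient step pushing the adversarial image's internal
    features away from the original image's features."""
    f0 = feat(img)
    adv = img + 1e-3 * rng.standard_normal(img.shape)  # tiny random start
    # Gradient of ||feat(adv) - f0||^2 with respect to adv is 2 W^T (feat(adv) - f0).
    grad = 2 * W.T @ (feat(adv) - f0)
    return adv + eps * np.sign(grad)    # ascend the feature-distance loss

img = rng.random(8)
adv = feature_attack_step(img)
print(np.linalg.norm(feat(adv) - feat(img)) > 0)   # features moved apart
```

The paper's method applies this pressure at both a low-level and a high-level layer simultaneously, so the example is confused with the original at multiple representational scales rather than at one.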

