A Developed Siamese CNN with 3D Adaptive Spatial-Spectral Pyramid Pooling for Hyperspectral Image Classification

2020 ◽  
Vol 12 (12) ◽  
pp. 1964 ◽  
Author(s):  
Mengbin Rao ◽  
Ping Tang ◽  
Zheng Zhang

Since hyperspectral images (HSI) captured by different sensors often contain different numbers of bands, but most convolutional neural networks (CNN) require a fixed-size input, the generalization capability of deep CNNs to use heterogeneous input to achieve better classification performance has become a research focus. For classification tasks with limited labeled samples, the training strategy of feeding CNNs with sample-pairs instead of single samples has proven to be an efficient approach. Following this strategy, we propose a Siamese CNN with a three-dimensional (3D) adaptive spatial-spectral pyramid pooling (ASSP) layer, called ASSP-SCNN, that takes as input 3D sample-pairs of varying size and can easily be transferred to another HSI dataset regardless of the number of spectral bands. The 3D ASSP layer can also extract different levels of 3D information to improve the classification performance of the equipped CNN. To evaluate the classification and generalization performance of ASSP-SCNN, our experiments consist of two parts: experiments on ASSP-SCNN without pre-training and experiments on an ASSP-SCNN-based transfer learning framework. Experimental results on three HSI datasets demonstrate that both ASSP-SCNN without pre-training and transfer learning based on ASSP-SCNN achieve higher classification accuracies than several state-of-the-art CNN-based methods. Moreover, we also compare the performance of ASSP-SCNN on different transfer learning tasks, which further verifies that ASSP-SCNN has a strong generalization capability.
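The core idea of adaptive pooling, mapping an input cube of arbitrary size to a fixed-length descriptor, can be sketched in plain NumPy. The bin partitioning and max operation below are illustrative assumptions; the actual ASSP layer operates on CNN feature maps inside the network:

```python
import numpy as np

def assp_3d(volume, levels=(1, 2, 4)):
    """Adaptive 3D spatial-spectral pyramid pooling (sketch).

    Max-pools a (bands, height, width) cube into a fixed-length
    vector regardless of input size, by dividing each axis into
    n roughly equal bins for each pyramid level n.
    """
    d, h, w = volume.shape
    features = []
    for n in levels:
        # Bin edges: n roughly equal partitions along each axis.
        db = np.linspace(0, d, n + 1).astype(int)
        hb = np.linspace(0, h, n + 1).astype(int)
        wb = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    cell = volume[db[i]:db[i+1], hb[j]:hb[j+1], wb[k]:wb[k+1]]
                    features.append(cell.max())
    return np.array(features)
```

With levels (1, 2, 4) the output always has 1 + 8 + 64 = 73 entries, so samples from sensors with different band counts map to the same descriptor length.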

PLoS ONE ◽  
2018 ◽  
Vol 13 (1) ◽  
pp. e0188996 ◽  
Author(s):  
Muhammad Ahmad ◽  
Stanislav Protasov ◽  
Adil Mehmood Khan ◽  
Rasheed Hussain ◽  
Asad Masood Khattak ◽  
...  

2017 ◽  
Vol 59 ◽  
pp. 495-541 ◽  
Author(s):  
Ramya Ramakrishnan ◽  
Chongjie Zhang ◽  
Julie Shah

In this work, we design and evaluate a computational learning model that enables a human-robot team to co-develop joint strategies for performing novel tasks that require coordination. The joint strategies are learned through "perturbation training," a human team-training strategy that requires team members to practice variations of a given task to help their team generalize to new variants of that task. We formally define the problem of human-robot perturbation training and develop and evaluate the first end-to-end framework for such training, which incorporates a multi-agent transfer learning algorithm, a human-robot co-learning framework, and a communication protocol. Our transfer learning algorithm, Adaptive Perturbation Training (AdaPT), is a hybrid of transfer and reinforcement learning techniques that learns quickly and robustly on new task variants. We empirically validate the benefits of AdaPT through comparison to other hybrid reinforcement and transfer learning techniques aimed at transferring knowledge from multiple source tasks to a single target task. We also demonstrate that AdaPT's rapid learning supports live interaction between a person and a robot, during which the human-robot team trains to achieve a high level of performance for new task variants. We augment AdaPT with a co-learning framework and a computational bi-directional communication protocol so that the robot can co-train with a person during live interaction. Results from large-scale human subject experiments (n=48) indicate that AdaPT enables an agent to learn in a manner compatible with a human's own learning process, and that a robot undergoing perturbation training with a human achieves a high level of team performance. Finally, we demonstrate that human-robot training using AdaPT in a simulation environment produces effective performance for a team incorporating an embodied robot partner.
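AdaPT itself is specified in the paper; as a rough illustration of transferring from multiple source tasks to a target task, a new variant can warm-start tabular Q-learning from whichever source Q-table evaluates best on it. The function names and the evaluation hook below are hypothetical:

```python
from collections import defaultdict

def seed_from_sources(source_qs, evaluate):
    """Warm-start learning on a new task variant by copying the
    source Q-table that scores best on it (a sketch, not AdaPT's
    actual selection rule)."""
    best = max(source_qs, key=evaluate)
    return defaultdict(float, best)

def q_learning_step(q, s, a, r, s2, actions, alpha=0.1, gamma=0.95):
    """One tabular Q-learning update, continuing from the seed."""
    target = r + gamma * max(q[(s2, a2)] for a2 in actions)
    q[(s, a)] += alpha * (target - q[(s, a)])
    return q
```

The point of the hybrid is that the seeded table is already close to a good policy, so the remaining Q-learning updates converge quickly enough for live interaction.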


Author(s):  
Tayyip Ozcan

Abstract Coronaviruses, a large family of viruses, cause illness in both humans and animals. The novel coronavirus (COVID-19) emerged in Wuhan in December 2019. This deadly COVID-19 pandemic has spread rapidly and is currently present in several countries worldwide. The timely detection of patients who have COVID-19 is vitally important, and to this end scientists are working on different detection methods. In this paper, a convolutional neural network (CNN) model aided by grid search (GS) and pre-trained models is proposed to detect COVID-19 in X-ray images. In the proposed method, the GS method is employed to optimize the hyperparameters of the CNN, which directly affect classification performance. Three pre-trained CNN models (GoogleNet, ResNet18 and ResNet50), which can be used for classification, feature extraction and transfer learning purposes, were used for transfer learning in this study. The proposed method was trained using the training and validation subdatasets of the collected dataset, and detailed evaluations are presented according to different performance metrics. According to the experimental studies, the best results were obtained with the GS and ResNet50 aided model.
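The grid search component reduces to exhaustively scoring every hyperparameter combination and keeping the best one. A minimal sketch, assuming a caller-supplied `evaluate` function that trains the CNN with the given hyperparameters and returns validation accuracy:

```python
from itertools import product

def grid_search(param_grid, evaluate):
    """Exhaustive hyperparameter search: try every combination from
    the grid and keep the one with the highest validation score."""
    keys = list(param_grid)
    best_params, best_score = None, float('-inf')
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)  # e.g. train the CNN, score on validation
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

The grid entries here (learning rate, batch size, etc.) are placeholders; the paper's actual search space is not reproduced in this listing.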


2018 ◽  
Vol 10 (9) ◽  
pp. 1425 ◽  
Author(s):  
Xuefeng Liu ◽  
Qiaoqiao Sun ◽  
Yue Meng ◽  
Min Fu ◽  
Salah Bourennane

Recent research has shown that spatial-spectral information can help to improve the classification of hyperspectral images (HSIs). Therefore, three-dimensional convolutional neural networks (3D-CNNs) have been applied to HSI classification. However, a lack of HSI training samples restricts the performance of 3D-CNNs. To solve this problem and improve the classification, an improved method based on 3D-CNNs combined with parameter optimization, transfer learning, and virtual samples is proposed in this paper. Firstly, to optimize the network performance, the parameters of the 3D-CNN of the HSI to be classified (target data) are adjusted according to the single-variable principle. Secondly, in order to relieve the problem caused by insufficient samples, the weights in the bottom layers of the parameter-optimized 3D-CNN of the target data can be transferred from another well-trained 3D-CNN on an HSI (source data) with enough samples and the same feature space as the target data. Then, some virtual samples can be generated from the original samples of the target data to further alleviate the lack of HSI training samples. Finally, the parameter-optimized 3D-CNN with transfer learning can be trained on training samples consisting of the virtual and the original samples. Experimental results on real-world hyperspectral satellite images have shown that the proposed method has great potential in HSI classification.
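The virtual-sample step can be illustrated with a common mixing-based augmentation: new spectra are formed as convex combinations of real ones. This is a sketch under that assumption; the paper's exact generation rule may differ:

```python
import numpy as np

def virtual_samples(samples, n_new, rng=None):
    """Generate virtual training samples by randomly mixing pairs of
    real samples (a common augmentation scheme; the paper's exact
    generation rule may differ)."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i, j = rng.integers(0, len(samples), size=2)
        a = rng.uniform(0.0, 1.0)              # mixing coefficient
        out.append(a * samples[i] + (1 - a) * samples[j])
    return np.stack(out)
```

Because each virtual spectrum is a convex combination, it stays inside the value range of the originals, which keeps the augmented set physically plausible.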


2020 ◽  
Vol 12 (11) ◽  
pp. 1780 ◽  
Author(s):  
Yao Liu ◽  
Lianru Gao ◽  
Chenchao Xiao ◽  
Ying Qu ◽  
Ke Zheng ◽  
...  

Convolutional neural networks (CNNs) have been widely applied in hyperspectral imagery (HSI) classification. However, their classification performance might be limited by the scarcity of labeled data available for training and validation. In this paper, we propose a novel lightweight shuffled group convolutional neural network (abbreviated as SG-CNN) to achieve efficient training with a limited training dataset in HSI classification. SG-CNN consists of SG conv units that employ conventional and atrous convolution in different groups, followed by a channel shuffle operation and a shortcut connection. In this way, SG-CNNs have fewer trainable parameters, whilst they can still be accurately and efficiently trained with fewer labeled samples. Transfer learning between different HSI datasets is also applied to the SG-CNN to further improve the classification accuracy. To evaluate the effectiveness of SG-CNNs for HSI classification, experiments were conducted on three public HSI datasets with networks pretrained on HSIs from different sensors. SG-CNNs with different levels of complexity were tested, and their classification results were compared with fine-tuned ShuffleNet2, ResNeXt, and their original counterparts. The experimental results demonstrate that SG-CNNs can achieve competitive classification performance when the amount of labeled training data is small, while efficiently providing satisfying classification results.
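The channel shuffle operation in the SG conv units follows the well-known ShuffleNet formulation: reshape the channel axis into (groups, channels-per-group), transpose, and flatten, so information mixes across grouped convolutions. A NumPy sketch of that standard operation (the SG-CNN's surrounding convolutions are omitted):

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle on an (N, C, H, W) tensor:
    interleave channels across groups so subsequent grouped
    convolutions see features from every group."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must divide evenly into groups"
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))
```

For 6 channels in 2 groups, channel order [0 1 2 | 3 4 5] becomes [0 3 1 4 2 5], i.e. each new group contains one channel from every old group.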


2021 ◽  
Vol 13 (12) ◽  
pp. 2353
Author(s):  
Junru Yin ◽  
Changsheng Qi ◽  
Qiqiang Chen ◽  
Jiantao Qu

Recently, deep learning methods based on the combination of spatial and spectral features have been successfully applied in hyperspectral image (HSI) classification. To improve the utilization of the spatial and spectral information in the HSI, this paper proposes a unified network framework using a three-dimensional convolutional neural network (3-D CNN) and a band grouping-based bidirectional long short-term memory (Bi-LSTM) network for HSI classification. In the framework, extracting spectral features is regarded as a procedure of processing sequence data, and the Bi-LSTM network acts as the spectral feature extractor of the unified network to fully exploit the close relationships between spectral bands. The 3-D CNN has a unique advantage in processing 3-D data; therefore, it is used as the spatial-spectral feature extractor in this unified network. Finally, in order to optimize the parameters of both feature extractors simultaneously, the Bi-LSTM and 3-D CNN share a loss function to form a unified network. To evaluate the performance of the proposed framework, three datasets were tested for HSI classification. The results demonstrate that the performance of the proposed method is better than that of current state-of-the-art HSI classification methods.
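The band-grouping step, which turns a pixel's spectrum into the sequence a Bi-LSTM consumes, can be sketched as contiguous groups with the last group zero-padded. Group size and padding are illustration-level assumptions; the paper may group bands differently:

```python
import numpy as np

def band_groups(spectrum, n_groups):
    """Split a pixel's spectral vector into contiguous band groups,
    one Bi-LSTM time step per group. Zero-padding the final group
    is an assumption made here so all steps have equal length."""
    bands = len(spectrum)
    size = -(-bands // n_groups)           # ceil division: bands per group
    padded = np.zeros(size * n_groups)
    padded[:bands] = spectrum              # zero-pad the tail
    return padded.reshape(n_groups, size)  # (time steps, features)
```

A 103-band pixel with 10 groups yields a 10-step sequence of 11-band vectors, so the recurrent extractor sees band-to-band relationships group by group.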


Author(s):  
Mattia Fumagalli ◽  
Gábor Bella ◽  
Samuele Conti ◽  
Fausto Giunchiglia

The aim of transfer learning is to reuse learnt knowledge across different contexts. In the particular case of cross-domain transfer (also known as domain adaptation), reuse happens across different but related knowledge domains. While there have been promising first results in combining learning with symbolic knowledge to improve cross-domain transfer, the singular ability of ontologies to provide classificatory knowledge has not been fully exploited so far by the machine learning community. We show that ontologies, if properly designed, are able to support transfer learning by improving generalization and discrimination across classes. We propose an architecture based on direct attribute prediction for combining ontologies with a transfer learning framework, as well as an ontology-based solution for cross-domain generalization based on the integration of top-level and domain ontologies. We validate the solution in an experiment on an image classification task, demonstrating the system's improved classification performance.
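Direct attribute prediction can be sketched in its textbook form: a learned model predicts per-attribute probabilities, and each class is scored by the likelihood of its binary attribute signature, so unseen classes are recognized through shared attributes. This is a generic DAP sketch, not the paper's full ontology-backed architecture:

```python
import math

def dap_classify(attr_probs, class_signatures):
    """Direct attribute prediction (sketch): score each class by the
    log-likelihood of its binary attribute signature under the
    predicted per-attribute probabilities; return the best class."""
    def score(sig):
        return sum(math.log(p if a else 1 - p)
                   for p, a in zip(attr_probs, sig))
    return max(class_signatures, key=lambda c: score(class_signatures[c]))
```

In the paper's setting, the attribute signatures would be derived from ontology classes rather than hand-listed, which is what lets the ontology carry classificatory knowledge into the learner.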


2020 ◽  
Vol 12 (12) ◽  
pp. 2035 ◽  
Author(s):  
Peida Wu ◽  
Ziguan Cui ◽  
Zongliang Gan ◽  
Feng Liu

Recently, deep learning methods based on three-dimensional (3-D) convolution have been widely used in hyperspectral image (HSI) classification tasks and have shown good classification performance. However, affected by the irregular distribution of the various classes in HSI datasets, most previous 3-D convolutional neural network (CNN)-based models require more training samples to obtain better classification accuracies. In addition, as the network deepens, the spatial resolution of the feature maps gradually decreases, so much useful information may be lost during the training process. Therefore, ensuring efficient network training is key to HSI classification tasks. To address the issues mentioned above, in this paper we propose a 3-D-CNN-based residual group channel and space attention network (RGCSA) for HSI classification. Firstly, the proposed bottom-up top-down attention structure with residual connections improves network training efficiency by optimizing channel-wise and spatial-wise features throughout the whole training process. Secondly, the proposed residual group channel-wise attention module reduces the possibility of losing useful information, and the novel spatial-wise attention module extracts context information to strengthen the spatial features. Furthermore, our proposed RGCSA network needs only a few training samples to achieve higher classification accuracies than previous 3-D-CNN-based networks. The experimental results on three commonly used HSI datasets demonstrate the superiority of our proposed network based on the attention mechanism and the effectiveness of the proposed channel-wise and spatial-wise attention modules for HSI classification. The code and configurations are released at Github.com.
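The channel-wise attention idea resembles squeeze-and-excitation gating: pool each channel to a scalar, pass the result through a small bottleneck, and rescale channels by sigmoid gates. A NumPy sketch under that assumption (the paper's residual group module adds grouping and residual connections not shown here, and the weights `w1`, `w2` would be learned):

```python
import numpy as np

def channel_attention(fmap, w1, w2):
    """Channel-wise attention on a (C, H, W) feature map, sketched in
    squeeze-and-excitation style: global-average-pool each channel,
    apply a two-layer bottleneck, and gate channels by the result."""
    c = fmap.shape[0]
    squeeze = fmap.reshape(c, -1).mean(axis=1)   # (C,) channel descriptors
    hidden = np.maximum(0, w1 @ squeeze)         # ReLU bottleneck
    gates = 1 / (1 + np.exp(-(w2 @ hidden)))     # sigmoid gates in (0, 1)
    return fmap * gates[:, None, None]
```

Because the gates lie strictly between 0 and 1, the module can only attenuate channels, steering capacity toward the informative ones rather than amplifying noise.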


Author(s):  
Halit Dogan ◽  
Md Mahbub Alam ◽  
Navid Asadizanjani ◽  
Sina Shahbazmohamadi ◽  
Domenic Forte ◽  
...  

Abstract X-ray tomography is a promising technique that can provide micron-level, internal-structure, and three-dimensional (3D) information about an integrated circuit (IC) component without the need for serial sectioning or decapsulation. This is especially useful for counterfeit IC detection, as demonstrated by recent work. Although the components remain physically intact during tomography, the effect of radiation on their electrical functionality has not yet been fully investigated. In this paper, we analyze the impact of X-ray tomography on the reliability of ICs with different fabrication technologies. We perform 3D imaging using an advanced X-ray machine on Intel flash memories, Macronix flash memories, and Xilinx Spartan 3 and Spartan 6 FPGAs. Electrical functionality is then tested in a systematic procedure after each round of tomography to estimate the impact of X-rays on flash erase time, read margin, and program operation, and on the frequencies of ring oscillators in the FPGAs. A major finding is that erase times for flash memories of older technology are significantly degraded when exposed to tomography, eventually resulting in failure. However, the flash and Xilinx FPGAs of newer technologies seem less sensitive to tomography, as only minor degradations are observed. Further, we did not identify permanent failures for any chips in the time needed to perform tomography for counterfeit detection (approximately 2 hours).


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jonas Albers ◽  
Angelika Svetlove ◽  
Justus Alves ◽  
Alexander Kraupner ◽  
Francesca di Lillo ◽  
...  

Abstract Although X-ray based 3D virtual histology is an emerging tool for the analysis of biological tissue, it falls short in terms of specificity when compared to conventional histology. Thus, the aim was to establish a novel approach that combines the 3D information provided by microCT with the high specificity that only (immuno-)histochemistry can offer. For this purpose, we developed a software frontend, which utilises an elastic transformation technique to accurately co-register various histological and immunohistochemical stainings with free propagation phase contrast synchrotron radiation microCT. We demonstrate that the precision of the overlay of both imaging modalities is significantly improved by performing our elastic registration workflow, as evidenced by calculation of the displacement index. To illustrate the need for an elastic co-registration approach, we examined specimens from a mouse model of breast cancer with injected metal-based nanoparticles. Using the elastic transformation pipeline, we were able to co-localise the nanoparticles with specifically stained cells or tissue structures within their three-dimensional anatomical context. Additionally, we performed a semi-automated tissue structure and cell classification. This workflow provides new insights for histopathological analysis by combining CT-specific three-dimensional information with the cell- and tissue-specific information provided by classical histology.
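The displacement index quantifies residual registration error between the two modalities. Its exact definition is not given in this listing, so as a simple assumed proxy, one can measure the mean Euclidean distance between corresponding landmarks after co-registration:

```python
import numpy as np

def mean_displacement(landmarks_a, landmarks_b):
    """Mean Euclidean distance between corresponding landmark points
    in two co-registered images: a simple proxy for registration
    error (the paper's displacement index may be defined differently)."""
    a, b = np.asarray(landmarks_a, float), np.asarray(landmarks_b, float)
    return float(np.linalg.norm(a - b, axis=1).mean())
```

A successful elastic registration should drive this value down relative to a rigid-only alignment of the same landmark pairs.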

