A Rank-Deficient and Sparse Penalized Optimization Model for Compressive Indoor Radar Target Localization

Author(s):  
Van Ha Tang ◽  
Van-Giang Nguyen

This paper proposes a rank-deficient and sparse penalized optimization method for through-wall radar imaging (TWRI) in the presence of structured wall clutter. Compressive TWRI enables fast data collection and accurate target localization, but faces the challenges of incomplete data measurements and strong wall clutter. This paper handles these challenges by formulating the joint task of wall-clutter removal and target image reconstruction as a low-rank and sparse regularized minimization problem, in which the low-rank regularization captures the low-dimensional structure of the wall signals and the sparse penalty represents the image of the indoor targets. We introduce an iterative algorithm based on the forward-backward proximal gradient technique to solve this large-scale optimization problem; the algorithm simultaneously removes unwanted wall clutter and reconstructs an image of the indoor targets. Simulated and real radar data are used to validate the effectiveness of the proposed rank-deficient and sparse regularized optimization approach.
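The paper's exact solver is not reproduced here, but the forward-backward idea for a low-rank-plus-sparse split can be sketched in a few lines: take a gradient step on the data-fit term, then apply the proximal operators of the two penalties (singular-value thresholding for the nuclear norm, soft thresholding for the l1 norm). The function names and parameter values below are illustrative, not the authors':

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Soft thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lowrank_sparse_split(Y, lam_l=0.2, lam_s=0.05, step=0.5, iters=500):
    """Split Y into a low-rank part L (wall clutter) and a sparse part S
    (targets) by forward-backward iterations on
    0.5*||Y - L - S||_F^2 + lam_l*||L||_* + lam_s*||S||_1."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(iters):
        R = Y - L - S                       # shared gradient of the data-fit term
        L = svt(L + step * R, step * lam_l) # backward (prox) step for L
        S = soft(S + step * R, step * lam_s)  # backward (prox) step for S
    return L, S
```

On a synthetic scene (a rank-one "wall" matrix plus a few spikes), the iteration separates the two components: the spikes end up in S and the rank-one structure in L.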

2021 ◽  
Vol 8 (3) ◽  
pp. 526-536
Author(s):  
L. Sadek ◽  
H. Talibi Alaoui

In this paper, we present a new approach for solving large-scale differential Lyapunov equations. The approach projects the initial problem onto an extended block Krylov subspace using the extended nonsymmetric block Lanczos algorithm, yielding a low-dimensional differential Lyapunov matrix equation. This reduced matrix equation is then solved by the Backward Differentiation Formula (BDF) method or the Rosenbrock (ROS) method, and the resulting solution is used to build a low-rank approximate solution of the original problem. We also give some theoretical results. Numerical results demonstrate the performance of our approach.
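The Krylov projection itself is the paper's contribution and is not sketched here; what can be checked on a toy problem is the full-order equation it reduces. The differential Lyapunov equation X'(t) = AX + XA^T + BB^T, X(0) = 0, can be integrated directly with a BDF solver, and for a stable A its solution approaches the algebraic Lyapunov solution as t grows. A minimal sketch, with all sizes and names chosen for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_lyapunov

n = 6
rng = np.random.default_rng(1)
G = rng.normal(size=(n, n))
shift = np.abs(np.linalg.eigvals(G).real).max() + 1.0
A = G - shift * np.eye(n)        # shift so that A is stable (Re(eig) <= -1)
B = rng.normal(size=(n, 2))
Q = B @ B.T

def rhs(t, x):
    """Vectorized right-hand side of X' = A X + X A^T + Q."""
    X = x.reshape(n, n)
    return (A @ X + X @ A.T + Q).ravel()

# Small dense problem, solved directly with a BDF integrator.
sol = solve_ivp(rhs, (0.0, 50.0), np.zeros(n * n),
                method="BDF", rtol=1e-8, atol=1e-10)
X_T = sol.y[:, -1].reshape(n, n)

# Steady state satisfies A X + X A^T + Q = 0.
X_inf = solve_continuous_lyapunov(A, -Q)
```

For large-scale problems this dense approach is infeasible; the paper's point is that the same equation, projected onto an extended block Krylov subspace, becomes small enough for BDF or Rosenbrock integration.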


2019 ◽  
Author(s):  
Mark Allen Thornton ◽  
Diana Tamir

Humans engage in a wide variety of different actions and activities. These range from simple motor actions like reaching for an object, to complex activities like governing a nation. Navigating everyday life requires people to make sense of this diversity of actions. We suggest that the mind simplifies this complex domain by attending primarily to the most essential features of actions. Using a parsimonious set of action dimensions, the mind can organize action knowledge in a low-dimensional representational space. In nine studies, we derive and validate such an action taxonomy. Studies 1-3 use large-scale text analyses to generate and test potential action dimensions. Study 4 validates interpretable labels for these dimensions. Studies 5-7 demonstrate that these dimensions can explain human judgments about actions. We perform model selection on data from Studies 5-7 to arrive at the optimal set of six psychological dimensions, together forming the Abstraction, Creation, Tradition, Food, Animacy, Spiritualism Taxonomy (ACT-FAST). Study 8 demonstrates that ACT-FAST can predict socially relevant qualities of actions, including how, when, where, why, and by whom they are performed. Finally, Study 9 shows that ACT-FAST can explain action-related patterns of brain activity using naturalistic fMRI. Together, these studies reveal the dimensional structure the mind applies to organize action concepts.



2022 ◽  
pp. 17-25
Author(s):  
Nancy Jan Sliper

Experimenters today frequently quantify millions or even billions of characteristics (measurements) per sample to address critical biological questions, in the hope that machine learning tools can make correct data-driven judgments. An efficient analysis requires a low-dimensional representation that preserves the features that differentiate samples (e.g., whether a certain ailment is present) in data whose size and complexity span orders of magnitude. While several methods can handle millions of variables and still offer strong empirical and conceptual guarantees, few are clearly interpretable. This research presents an evaluation of supervised dimensionality reduction for large-scale data. We provide a methodology for extending Principal Component Analysis (PCA) by incorporating class moment estimates into low-dimensional projections. The cheapest variant, Linear Optimal Low-Rank (LOLR) projection, incorporates the class-conditional means. Using both experimental and simulated benchmark data, we show that LOLR projections and their extensions enhance data representations for subsequent classification while retaining computational flexibility and reliability. In terms of accuracy, LOLR prediction outperforms other modular linear dimension-reduction methods that require much longer computation times on conventional computers. LOLR scales to brain-imaging datasets with more than 150 million attributes and to genome-sequencing datasets with more than half a million attributes.
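The core construction can be sketched compactly: augment the top principal directions with the class-conditional mean difference, orthonormalize, and project. The function name and parameters below are illustrative; this is the general idea, not the authors' implementation:

```python
import numpy as np

def lol_project(X, y, d):
    """Supervised low-rank projection in the spirit of LOLR: prepend the
    class-mean difference to the top principal directions, orthonormalize,
    and project. X: (n, p) data, y: binary labels, d: target dimension."""
    mu0 = X[y == 0].mean(axis=0)
    mu1 = X[y == 1].mean(axis=0)
    delta = (mu1 - mu0)[:, None]              # class-mean difference, (p, 1)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = np.hstack([delta, Vt[: d - 1].T])     # mean direction first, then PCA
    Q, _ = np.linalg.qr(W)                    # orthonormalize the basis
    return X @ Q[:, :d]
```

The value of including the class means shows up when the discriminative direction carries little variance: plain PCA discards it, while this projection keeps it as the first basis vector.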


2021 ◽  
pp. 1-15
Author(s):  
Zhixuan Xu ◽  
Caikou Chen ◽  
Guojiang Han ◽  
Jun Gao

As a successful improvement on Low-Rank Representation (LRR), Latent Low-Rank Representation (LatLRR) has been one of the state-of-the-art models for subspace clustering, owing to its ability to discover the low-dimensional subspace structures of data, especially when the data samples are insufficient and/or extremely corrupted. However, LatLRR does not consider the nonlinear geometric structures within data, so locality information among data is lost in the learning phase. Moreover, the coefficients of the learnt representation matrix can be negative, which lacks interpretability. To address these drawbacks, this paper introduces Laplacian, sparsity and non-negativity constraints into the LatLRR model and proposes a novel subspace clustering method, termed latent low-rank representation with non-negative, sparse and Laplacian constraints (NNSLLatLRR), which jointly accounts for the non-negativity, sparsity and Laplacian properties of the learnt representation. As a result, NNSLLatLRR not only captures the global low-dimensional structure and intrinsic nonlinear geometric information of the data, but also enhances the interpretability of the learnt representation. Extensive experiments on two face benchmark datasets and a handwritten digit dataset show that the proposed method outperforms existing state-of-the-art subspace clustering methods.
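The Laplacian constraint mentioned above is the standard graph-regularization device: build a k-nearest-neighbour graph over the samples and penalize Tr(Z L Z^T), which forces nearby samples to receive similar representations. A minimal sketch of the Laplacian construction (names and k are illustrative; the full NNSLLatLRR solver is not reproduced):

```python
import numpy as np

def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian L = D - W of a symmetrized kNN graph.
    The regularizer Tr(Z L Z^T) then penalizes representations Z that
    differ across graph edges, preserving locality."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1 : k + 1]              # skip self at position 0
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                               # symmetrize the graph
    return np.diag(W.sum(axis=1)) - W
```

By construction L is symmetric positive semidefinite with zero row sums, so the penalty vanishes exactly when connected samples get identical representations.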


2021 ◽  
Vol 7 (7) ◽  
pp. 110
Author(s):  
Zehan Chao ◽  
Longxiu Huang ◽  
Deanna Needell

Matrix completion, the problem of completing missing entries in a data matrix with low-dimensional structure (such as rank), has seen many fruitful approaches and analyses. Tensor completion is the tensor analog that attempts to impute missing tensor entries from similar low-rank type assumptions. In this paper, we study the tensor completion problem when the sampling pattern is deterministic and possibly non-uniform. We first propose an efficient weighted Higher Order Singular Value Decomposition (HOSVD) algorithm for the recovery of the underlying low-rank tensor from noisy observations and then derive the error bounds under a properly weighted metric. Additionally, the efficiency and accuracy of our algorithm are both tested using synthetic and real datasets in numerical simulations.
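The weighting scheme for deterministic sampling is the paper's contribution and is omitted here, but the underlying HOSVD step can be sketched: compute factor matrices from the left singular vectors of each mode unfolding, then project the tensor onto the resulting multilinear subspace. Names and ranks below are illustrative:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncate(T, ranks):
    """Truncated HOSVD: per-mode factor matrices from the unfoldings'
    left singular vectors, then projection onto the multilinear subspace."""
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :r])
    G = T
    for mode, U in enumerate(Us):        # core tensor: contract each mode with U^T
        G = np.moveaxis(np.tensordot(U.T, G, axes=(1, mode)), 0, mode)
    R = G
    for mode, U in enumerate(Us):        # reconstruct: multiply the factors back
        R = np.moveaxis(np.tensordot(U, R, axes=(1, mode)), 0, mode)
    return R
```

When the tensor truly has the given multilinear rank, truncation at that rank reproduces it exactly; completion then amounts to applying this projection to (weighted) noisy observations.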


2021 ◽  
Vol 33 (1) ◽  
pp. 105-137
Author(s):  
Leah Natasha Glassow ◽  
Victoria Rolfe ◽  
Kajsa Yang Hansen

Abstract Research related to the "teacher characteristics" dimension of teacher quality has proven inconclusive and weakly related to student success, and addressing teaching contexts may be crucial for furthering this line of inquiry. International large-scale assessments are well positioned for such questions because of their systematic sampling of students, schools, and education systems, yet researchers are frequently prevented from answering them by measurement-invariance issues. This study uses traditional multiple-group confirmatory factor analysis (MGCFA) and an alignment optimization method to examine measurement invariance in several constructs from the teacher questionnaires of the Trends in International Mathematics and Science Study (TIMSS) 2015 across 46 education systems. The constructs are mathematics teachers' Job satisfaction, School emphasis on academic success, School condition and resources, Safe and orderly school, and teachers' Self-efficacy. The MGCFA results show that just three constructs achieve invariance at the metric level. When the alignment optimization method is applied, however, all five constructs fall within the threshold of acceptable measurement non-invariance. The study therefore argues that these constructs can be validly compared across education systems, and a subsequent comparison of latent factor means examines differences across the groups. Future research may use the estimated factor means from the aligned models to further investigate the role of teacher characteristics and contexts in student outcomes.


Author(s):  
Adam Gordon Kline ◽  
Stephanie Palmer

Abstract The renormalization group (RG) is a class of theoretical techniques used to explain the collective physics of interacting, many-body systems. It has been suggested that the RG formalism may be useful in finding and interpreting emergent low-dimensional structure in complex systems outside of the traditional physics context, such as in biology or computer science. In such contexts, one common dimensionality-reduction framework already in use is the information bottleneck (IB), in which the goal is to compress an "input" signal X while maximizing its mutual information with some stochastic "relevance" variable Y. IB has been applied in vertebrate and invertebrate processing systems to characterize optimal encoding of the future motion of the external world. Other recent work has shown that the RG scheme for the dimer model could be "discovered" by a neural network attempting to solve an IB-like problem. This manuscript explores whether IB and any existing formulation of RG are formally equivalent. A class of soft-cutoff non-perturbative RG techniques is defined by families of non-deterministic coarsening maps, and hence can be formally mapped onto IB, and vice versa. For concreteness, the discussion is limited entirely to Gaussian statistics (GIB), for which IB has exact, closed-form solutions. Under this constraint, GIB has a semigroup structure, in which successive transformations remain IB-optimal, and the RG cutoff scheme associated with GIB can be identified. Our results suggest that IB can be used to impose a notion of "large-scale" structure, such as biological function, on an RG procedure.
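The Gaussian closed form referred to above is the classical GIB result: for jointly Gaussian X and Y, the optimal compression directions are eigenvectors of Cov[X|Y]·Cov[X]^{-1}, with the smallest eigenvalues (the directions most predictable from Y) retained first. A minimal sketch on an analytic toy model, where x = Wy + noise; all names and sizes are illustrative:

```python
import numpy as np

# Jointly Gaussian toy model: x = W y + noise, y ~ N(0, I_k).
rng = np.random.default_rng(4)
d, k = 6, 2
W = rng.normal(size=(d, k))
sigma2 = 0.5
Sx = W @ W.T + sigma2 * np.eye(d)   # Cov[x]
Sx_given_y = sigma2 * np.eye(d)     # Cov[x|y] = Sx - W Cov[y] W^T

# GIB compression directions: eigenvectors of Cov[x|y] Cov[x]^{-1}.
# Eigenvalues lie in (0, 1]; small ones mark directions informative about y.
M = Sx_given_y @ np.linalg.inv(Sx)
evals, evecs = np.linalg.eig(M)
order = np.argsort(evals.real)
evals = evals.real[order]
evecs = evecs.real[:, order]        # keep the first few columns to compress
```

Only k directions of x carry information about y here, so exactly k eigenvalues fall below 1 and the remaining d - k equal 1 (pure noise directions, discarded first by the bottleneck).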


2020 ◽  
Vol 12 (7) ◽  
pp. 1164 ◽  
Author(s):  
Jie Kong ◽  
Quansen Sun ◽  
Mithun Mukherjee ◽  
Jaime Lloret

As remote sensing (RS) images increase dramatically, the demand for remote sensing image retrieval (RSIR) is growing and has received more and more attention. The characteristics of RS images, e.g., large volume, diversity and high complexity, make RSIR more challenging in terms of speed and accuracy. To reduce retrieval complexity, hashing techniques have been widely used for RSIR, mapping high-dimensional data into a low-dimensional Hamming space while preserving the similarity structure of the data. To improve hashing performance, we propose a new hash learning method, named low-rank hypergraph hashing (LHH), for the large-scale RSIR task. First, LHH employs an l2,1-norm to constrain the projection matrix, reducing noise and redundancy among features; low-rankness is also imposed on the projection matrix to exploit its global structure. Second, LHH uses hypergraphs to capture the high-order relationships among data, which is well suited to exploring the complex structure of RS images. Finally, an iterative algorithm is developed to generate high-quality hash codes and efficiently solve the proposed optimization problem with a theoretical convergence guarantee. Extensive experiments are conducted on three publicly available RS image datasets and one natural image dataset. The experimental results demonstrate that the proposed LHH outperforms existing hashing learning methods in RSIR tasks.
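LHH learns its projection matrix with low-rank and hypergraph regularizers; that machinery is not reproduced here. The baseline idea it builds on, mapping to Hamming space so that code distance tracks similarity, can be shown with a plain random-projection (LSH-style) sketch, with all names illustrative:

```python
import numpy as np

def hash_codes(X, n_bits, rng):
    """Sign of random projections: an LSH-style baseline mapping vectors to
    binary codes whose Hamming distance tracks angular similarity."""
    P = rng.normal(size=(X.shape[1], n_bits))   # random projection matrix
    return (X @ P > 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return int((a != b).sum())
```

Retrieval then reduces to comparing short binary codes instead of high-dimensional features: near-duplicate inputs land on nearly identical codes, while dissimilar inputs disagree on many bits.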


2020 ◽  
Vol 36 (Supplement_1) ◽  
pp. i48-i56 ◽  
Author(s):  
Kai Cao ◽  
Xiangqi Bai ◽  
Yiguang Hong ◽  
Lin Wan

Abstract Motivation: Single-cell multi-omics data provide a comprehensive molecular view of cells. However, single-cell multi-omics datasets consist of unpaired cells measured with distinct unmatched features across modalities, making data integration challenging. Results: In this study, we present a novel algorithm, termed UnionCom, for the unsupervised topological alignment of single-cell multi-omics integration. UnionCom does not require any correspondence information, either among cells or among features. It first embeds the intrinsic low-dimensional structure of each single-cell dataset into a distance matrix of cells within the same dataset and then aligns the cells across single-cell multi-omics datasets by matching the distance matrices via a matrix optimization method. Finally, it projects the distinct unmatched features across single-cell datasets into a common embedding space for feature comparability of the aligned cells. To match the complex non-linear geometrical distorted low-dimensional structures across datasets, UnionCom proposes and adjusts a global scaling parameter on distance matrices for aligning similar topological structures. It does not require one-to-one correspondence among cells across datasets, and it can accommodate samples with dataset-specific cell types. UnionCom outperforms state-of-the-art methods on both simulated and real single-cell multi-omics datasets. UnionCom is robust to parameter choices, as well as subsampling of features. Availability and implementation: UnionCom software is available at https://github.com/caokai1073/UnionCom. Supplementary information: Supplementary data are available at Bioinformatics online.
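Why within-dataset distance matrices are a workable currency for alignment can be shown on a toy example: if two modalities observe the same latent structure through different orthogonal transforms, their distance matrices coincide, and a uniform rescaling of one modality only rescales its distance matrix, which is what UnionCom's global scaling parameter absorbs. The construction below is illustrative, not the UnionCom pipeline:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import special_ortho_group

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 3))                    # shared latent structure
R = special_ortho_group.rvs(3, random_state=7)  # modality-specific rotation
Y = X @ R                                       # second "modality"

Dx = cdist(X, X)                                # within-dataset distance matrices
Dy = cdist(Y, Y)
```

Because rotations preserve pairwise distances, Dx and Dy match entry for entry even though X and Y share no coordinate system, which is what makes matching distance matrices (rather than features) sufficient for recovering cell correspondence.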

