Chi-Squared Distance Metric Learning for Histogram Data

2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Wei Yang ◽  
Luhui Xu ◽  
Xiaopan Chen ◽  
Fengbin Zheng ◽  
Yang Liu

Learning a proper distance metric for histogram data plays a crucial role in many computer vision tasks. The chi-squared distance is a nonlinear metric and is widely used to compare histograms. In this paper, we show how to learn a general form of chi-squared distance based on the nearest neighbor model. In our method, the margin of each sample is first defined with respect to its nearest hits (nearest neighbors from the same class) and nearest misses (nearest neighbors from different classes), and a simplex-preserving linear transformation is then trained by maximizing the margin while minimizing the distance between each sample and its nearest hits. Using the iterative projected gradient method for optimization, we naturally introduce the l2,1-norm regularization into the proposed method for sparse metric learning. Comparative studies with state-of-the-art approaches on five real-world datasets verify the effectiveness of the proposed method.
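To make the key quantities concrete, here is a minimal NumPy sketch (not the authors' implementation): the chi-squared distance between two histograms, and its generalized form under a simplex-preserving linear transformation L, whose nonnegative columns each sum to one so that L @ p is again a valid histogram. The names chi2_distance and transformed_chi2 are illustrative.

```python
import numpy as np

def chi2_distance(p, q, eps=1e-12):
    # chi-squared distance between two histograms (nonnegative, sum to 1);
    # eps guards against division by zero in empty bins
    return 0.5 * float(np.sum((p - q) ** 2 / (p + q + eps)))

def transformed_chi2(L, p, q):
    # generalized chi-squared distance under a simplex-preserving
    # transformation L: nonnegative entries, columns summing to one,
    # so L @ p is still a valid histogram
    return chi2_distance(L @ p, L @ q)

# toy histograms and a random simplex-preserving transformation
rng = np.random.default_rng(0)
p = rng.random(5); p /= p.sum()
q = rng.random(5); q /= q.sum()
L = rng.random((5, 5))
L /= L.sum(axis=0, keepdims=True)   # normalize each column onto the simplex
```

The paper optimizes L itself (by projected gradient, with l2,1 regularization); this sketch only shows why the simplex constraint keeps the transformed vectors comparable as histograms.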

2021 ◽  
Vol 552 ◽  
pp. 261-277
Author(s):  
Yibang Ruan ◽  
Yanshan Xiao ◽  
Zhifeng Hao ◽  
Bo Liu

Author(s):  
SHILIANG SUN ◽  
QIAONA CHEN

Distance metric learning is a powerful tool for improving performance in classification, clustering and regression tasks. Many techniques have been proposed for distance metric learning based on convex programming, kernel learning, dimension reduction and large margins. The recently proposed large margin nearest neighbor classification (LMNN) improves the performance of k-nearest neighbor classification (k-NN) with a learned global distance metric. However, it does not consider the locality of data distributions. We propose a novel local distance metric learning method called hierarchical distance metric learning (HDM), which first builds a hierarchical structure by grouping data points according to overlapping ratios that we define, and then learns distance metrics sequentially. In this paper, we combine HDM with LMNN and propose a new method named hierarchical distance metric learning for large margin nearest neighbor classification (HLMNN). Experiments are performed on artificial and real-world data sets. Comparisons with traditional k-NN and the state-of-the-art LMNN show the effectiveness of the proposed HLMNN.


Author(s):  
Yunfeng Zhao ◽  
Guoxian Yu ◽  
Lei Liu ◽  
Zhongmin Yan ◽  
Lizhen Cui ◽  
...  

Partial-label learning (PLL) generally focuses on inducing a noise-tolerant multi-class classifier by training on overly-annotated samples, each of which is annotated with a set of labels, but only one is the valid label. A basic promise of existing PLL solutions is that there are sufficient partial-label (PL) samples for training. However, it is more common than not to have just few PL samples at hand when dealing with new tasks. Furthermore, existing few-shot learning algorithms assume precise labels of the support set; as such, irrelevant labels may seriously mislead the meta-learner and thus lead to a compromised performance. How to enable PLL under a few-shot learning setting is an important problem, but not yet well studied. In this paper, we introduce an approach called FsPLL (Few-shot PLL). FsPLL first performs adaptive distance metric learning by an embedding network and rectifying prototypes on the tasks previously encountered. Next, it calculates the prototype of each class of a new task in the embedding network. An unseen example can then be classified via its distance to each prototype. Experimental results on widely-used few-shot datasets demonstrate that our FsPLL can achieve a superior performance than the state-of-the-art methods, and it needs fewer samples for quickly adapting to new tasks.


2021 ◽  
Author(s):  
Tomoki Yoshida ◽  
Ichiro Takeuchi ◽  
Masayuki Karasuyama

Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 567
Author(s):  
Donghun Yang ◽  
Kien Mai Mai Ngoc ◽  
Iksoo Shin ◽  
Kyong-Ha Lee ◽  
Myunggwon Hwang

To design an efficient deep learning model that can be used in the real-world, it is important to detect out-of-distribution (OOD) data well. Various studies have been conducted to solve the OOD problem. The current state-of-the-art approach uses a confidence score based on the Mahalanobis distance in a feature space. Although it outperformed the previous approaches, the results were sensitive to the quality of the trained model and the dataset complexity. Herein, we propose a novel OOD detection method that can train more efficient feature space for OOD detection. The proposed method uses an ensemble of the features trained using the softmax-based classifier and the network based on distance metric learning (DML). Through the complementary interaction of these two networks, the trained feature space has a more clumped distribution and can fit well on the Gaussian distribution by class. Therefore, OOD data can be efficiently detected by setting a threshold in the trained feature space. To evaluate the proposed method, we applied our method to various combinations of image datasets. The results show that the overall performance of the proposed approach is superior to those of other methods, including the state-of-the-art approach, on any combination of datasets.


Sign in / Sign up

Export Citation Format

Share Document