positive instance - Recently Published Documents

Total documents: 7 (five years: 0)
H-index: 3 (five years: 0)

Author(s): Teng Zhang, Hai Jin

Multi-instance learning (MIL) is a celebrated learning framework in which each example is represented as a bag of instances. A bag is negative if it contains no positive instances, and positive if it contains at least one. Over the past decades, various MIL algorithms have been proposed, among which large-margin-based methods form a very popular class. Recent studies on margin theory reveal that the margin distribution matters more to generalization ability than the minimal margin does. Inspired by this observation, we propose the multi-instance optimal margin distribution machine, which can identify the key instances by explicitly optimizing the margin distribution. We also extend a stochastic accelerated mirror-prox method to solve the resulting minimax problem. Extensive experiments show the superiority of the proposed method.
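The bag-labeling rule of the MIL framework described above (a bag is negative only if it contains no positive instance) can be sketched as follows; the per-instance labels here are hypothetical illustrative data, since in real MIL training only bag labels are observed:

```python
# Multi-instance learning: each example (bag) is a collection of instances.
# A bag is positive if at least one instance is positive, negative otherwise.

def bag_label(instance_labels):
    """Return +1 if any instance in the bag is positive, else -1."""
    return 1 if any(lbl == 1 for lbl in instance_labels) else -1

# Hypothetical bags with per-instance labels (these instance labels are
# hidden at training time in MIL; only the bag label would be given).
bags = [
    [0, 0, 1],   # contains one positive instance -> positive bag
    [0, 0, 0],   # no positive instance -> negative bag
]

labels = [bag_label(b) for b in bags]
print(labels)  # [1, -1]
```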


Author(s): Ya-Lin Zhang, Zhi-Hua Zhou

Multi-instance learning (MIL) deals with tasks where each example is represented by a bag of instances. A bag is positive if it contains at least one positive instance, and negative otherwise. The positive instances are also called key instances. Only bag labels are observed; instance labels are unavailable in MIL. Previous studies typically assume that training and test data follow the same distribution, an assumption that may be violated in many real-world tasks. In this paper, we address the problem in which the distribution of key instances varies between the training and test phases. We refer to this problem as MIL with key instance shift and solve it with an embedding-based method, MIKI. Specifically, to transform bags into informative vectors, we propose a weighted multi-class model that selects instances with high positiveness as instance prototypes. We then learn importance weights for the transformed bag vectors and incorporate the original instance weights into them to narrow the gap between the training and test distributions. Experimental results validate the effectiveness of our approach when key instance shift occurs.
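The abstract does not spell out MIKI's embedding, but the general idea of transforming bags into vectors via instance prototypes can be sketched with a common MIL embedding: each bag maps to its maximum similarity to each prototype. The prototypes, data, and Gaussian similarity below are hypothetical illustrative choices, not MIKI's actual construction:

```python
import math

def embed_bag(bag, prototypes):
    """Embed a bag as a vector with one coordinate per prototype:
    the maximum similarity between that prototype and any instance in the bag."""
    def similarity(x, p):
        # Gaussian similarity; the bandwidth here is an arbitrary choice.
        dist2 = sum((xi - pi) ** 2 for xi, pi in zip(x, p))
        return math.exp(-dist2)
    return [max(similarity(x, p) for x in bag) for p in prototypes]

# Hypothetical instance prototypes (e.g., instances judged highly "positive").
prototypes = [(1.0, 1.0), (0.0, 0.0)]

# A bag with one instance near each prototype embeds close to (1, 1).
bag = [(0.9, 1.1), (0.1, 0.0)]
vector = embed_bag(bag, prototypes)
print([round(v, 3) for v in vector])
```

Once every bag is a fixed-length vector, standard importance-weighting techniques can be applied to the transformed representation.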


2014, Vol. 40, pp. 19-26
Author(s): Zhan Li, Guo-Hua Geng, Jun Feng, Jin-ye Peng, Chao Wen, ...
