Combining Motion And Segmentation Information For Localization Of Occlusion Boundaries

1988
Author(s): Keith Moler, Alan Scherf

2012 ◽ Vol 12 (13) ◽ pp. 15-15
Author(s): C. DiMattina, S. A. Fox, M. S. Lewicki

Author(s): S. Hussain Raza, Ahmad Humayun, Irfan Essa, Matthias Grundmann, David Anderson

Author(s): Syed Raza, Omar Javed, Aveek Das, Harpreet Sawhney, Hui Cheng, ...

2021
Author(s): Heping Sheng, John Wilder, Dirk B. Walther

Abstract We often take people's ability to understand and produce line drawings for granted. But where should we draw lines, and why? We address fundamental principles that underlie efficient representations of complex information in line drawings. First, 58 participants with varying degrees of artistic experience produced multiple drawings of a small set of scenes by tracing contours on a digital tablet. Second, 37 independent observers ranked the drawings by how representative they were of the original photograph. Overall, artists' drawings ranked higher than non-artists'. Matching contours between drawings of the same scene revealed that the most consistently drawn contours tend to be drawn earlier. By sorting contours by their consistency scores, we generated half-images containing either the most or the least consistently drawn contours. Twenty-five observers performed significantly better in a fast scene categorization task for the most consistent half-images than for the least consistent ones. The most consistent contours were longer and more likely to depict occlusion boundaries. Using psychophysics experiments and computational analysis, we confirmed quantitatively what makes certain contours in line drawings special: longer contours mark occlusion boundaries and aid rapid scene recognition. They allow artists and non-artists alike to convey important information from the first few strokes of the drawing process.
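The half-image construction in this abstract reduces to a simple ranking step: score each contour by how consistently it appears across drawings of the same scene, sort, and split. A minimal sketch of that step follows; the data structure and the example scores are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of the consistency-based split: each contour carries a
# consistency score (e.g., how many drawings of the scene contain a matching
# contour); sorting by that score and cutting at the median yields the
# "most consistent" and "least consistent" halves described in the abstract.

def split_by_consistency(contours):
    """Return (most_consistent, least_consistent) halves of `contours`.

    `contours` is a list of (contour_id, consistency_score) pairs.
    """
    ranked = sorted(contours, key=lambda c: c[1], reverse=True)
    mid = len(ranked) // 2
    return ranked[:mid], ranked[mid:]

# Illustrative scores: occlusion-boundary contours tend to score high,
# texture or shadow contours low.
contours = [("wall", 9), ("roofline", 8), ("shadow", 2), ("texture", 1)]
most, least = split_by_consistency(contours)
```

Rendering only the `most` set gives the half-image that, per the abstract, supports faster scene categorization.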


2010 ◽ Vol 91 (3) ◽ pp. 328-346
Author(s): Derek Hoiem, Alexei A. Efros, Martial Hebert

Author(s): Derek Hoiem, Andrew N. Stein, Alexei A. Efros, Martial Hebert

2011 ◽ Vol 23 (3) ◽ pp. 593-650
Author(s): Nicolas Le Roux, Nicolas Heess, Jamie Shotton, John Winn

Computer vision has grown tremendously in the past two decades. Despite all efforts, existing attempts at matching parts of the human visual system's extraordinary ability to understand visual scenes lack either scope or power. By combining the advantages of general low-level generative models and powerful layer-based and hierarchical models, this work aims to be a first step toward richer, more flexible models of images. After comparing various types of restricted Boltzmann machines (RBMs) able to model continuous-valued data, we introduce our basic model, the masked RBM, which explicitly models occlusion boundaries in image patches by factoring the appearance of any patch region from its shape. We then propose a generative model of larger images using a field of such RBMs. Finally, we discuss how masked RBMs could be stacked to form a deep model able to generate more complicated structures and suitable for various tasks such as segmentation or object recognition.
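The core idea the abstract names, factoring a patch's appearance from its shape, can be illustrated with a per-pixel mask that composites an occluding region over an occluded one. This sketch shows only that masking/composition step with NumPy, not the RBM itself or the authors' implementation; all shapes and values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the masking idea behind the masked RBM: a binary shape
# mask decides, per pixel, whether the patch shows the occluder's appearance
# or the background's. Appearance (pixel values) and shape (the mask) are
# thus modeled as separate factors.
rng = np.random.default_rng(0)
patch_shape = (8, 8)

foreground = rng.normal(0.8, 0.1, patch_shape)   # appearance of the occluder
background = rng.normal(0.2, 0.1, patch_shape)   # appearance behind it
mask = np.zeros(patch_shape)
mask[:, :4] = 1.0                                # shape: left half occludes

# Composite patch: shape selects which appearance is visible at each pixel.
patch = mask * foreground + (1.0 - mask) * background

# The occlusion boundary lies wherever the mask changes value along a row.
boundary_cols = np.where(np.abs(np.diff(mask, axis=1)).any(axis=0))[0]
```

In the full model, both the mask and the two appearance layers would be generated by RBMs rather than fixed by hand; the composition rule is the part this sketch makes concrete.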

