Top-Down Visual Saliency via Joint CRF and Dictionary Learning

Author(s):  
Jimei Yang ◽  
Ming-Hsuan Yang
Author(s):  
Nuo Tong ◽  
Shuiping Gou ◽  
Yao Yao ◽  
Chenjiao Wang ◽  
Jing Bai

2013 ◽  
Vol 09 (02) ◽  
pp. 1350010 ◽  
Author(s):  
MATTEO CACCIOLA ◽  
GIANLUIGI OCCHIUTO ◽  
FRANCESCO CARLO MORABITO

Many computer vision problems consist of building a suitable content description of images, usually aiming to extract the relevant information content. For images representing paintings or artworks, the extracted information is rather subject-dependent, thus escaping any universal quantification. However, we have proposed a measure of complexity for such oeuvres that is related to brain processing. Artistic complexity measures the brain's inability to categorize the complex, nonsense forms represented in modern art, in a dynamic process of acquisition that mostly involves top-down mechanisms. Here, we compare the quantitative results of our analysis on a wide set of paintings by various artists with the cues extracted by a standard bottom-up approach based on the concept of visual saliency. When inspecting a painting, the brain searches for the more informative areas at different scales, then connects them in an attempt to capture the full information content. Artistic complexity can quantify information that might otherwise be lost on an individual human observer, thus identifying the artistic hand. Visual saliency highlights the most salient areas of a painting, those that stand out from their neighbours and grab our attention. Nevertheless, we will show that comparing how the two algorithms act reveals some interesting links, indicating an interplay between bottom-up and top-down modalities.


2021 ◽  
Author(s):  
Uziel Jaramillo-Avila ◽  
Jonathan M. Aitken ◽  
Kevin Gurney ◽  
Sean R. Anderson

Author(s):  
Jiawei Xu ◽  
Shigang Yue

Driver-assistance systems (DAS) have become necessary in-vehicle equipment due to the large number of road traffic accidents worldwide. An efficient DAS that detects hazardous situations robustly is key to reducing road accidents. The core of a DAS is to identify salient regions, or regions of interest relevant to visually attended objects, in real visual scenes for further processing. To achieve this goal, we present a method that locates regions of interest automatically, based on a novel adaptive mean shift segmentation algorithm, to obtain salient objects. In the proposed mean shift algorithm, we use an adaptive Bayesian bandwidth to find the convergence of all data points through iterations and k-nearest-neighbour queries. Experiments showed that the proposed algorithm is efficient and yields better visually salient regions when compared with ground-truth benchmarks. The proposed algorithm consistently outperformed other known visual saliency methods, generating higher precision and better recall rates, when challenged with natural scenes collected locally and with one of the largest publicly available data sets. Once integrated with top-down shape-biased cues, the proposed algorithm can also be extended naturally to detect moving vehicles in dynamic scenes, as demonstrated in our experiments.
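The adaptive-bandwidth mean shift at the core of this abstract can be illustrated with a minimal sketch. The paper's Bayesian bandwidth estimator is not reproduced here; a common k-nearest-neighbour surrogate (each point's bandwidth is its distance to the k-th neighbour) stands in for it, so all function names and parameter choices below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def knn_bandwidths(points, k):
    """Per-point adaptive bandwidth: distance to the k-th nearest neighbour
    (a stand-in for the paper's Bayesian bandwidth estimate)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 is the zero self-distance

def mean_shift(points, k=3, n_iter=30):
    """Mean shift with kNN-adaptive Gaussian kernels (illustrative sketch).
    Each point is iteratively moved to the weighted mean of the data,
    so points in the same density mode converge to the same location."""
    h = knn_bandwidths(points, k)
    modes = points.copy()
    for _ in range(n_iter):
        d2 = ((modes[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        # weight each data point j by its own bandwidth h[j] (sample-point estimator)
        w = np.exp(-0.5 * d2 / (h[None, :] ** 2))
        modes = (w[:, :, None] * points[None, :, :]).sum(1) / w.sum(1)[:, None]
    return modes

# two well-separated 2-D clusters collapse onto two density modes
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                 rng.normal(5.0, 0.1, (20, 2))])
modes = mean_shift(pts)
```

In a segmentation setting the same iteration would run over pixel feature vectors (colour plus position) rather than abstract 2-D points, with converged modes defining the candidate salient regions.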


2019 ◽  
Author(s):  
Louisa Kulke

Emotional faces draw attention and eye movements towards them. However, the neural mechanisms of attention have mainly been investigated during fixation, which is uncommon in everyday life, where people move their eyes to shift attention to faces. Therefore, the current study combined eye-tracking and electroencephalography (EEG) to measure the neural mechanisms of overt attention shifts to faces with happy, neutral and angry expressions, allowing participants to move their eyes freely towards the stimuli. Saccade latencies towards peripheral faces did not differ depending on expression, and early neural response (P1) amplitudes and latencies were unaffected. However, the later-occurring Early Posterior Negativity (EPN) was significantly larger for emotional than for neutral faces. This response occurs after the saccade towards the face, so emotion modulations only emerged after an overt shift of gaze towards the stimulus had already been completed. Visual saliency rather than emotional content may therefore drive early saccades, while later top-down processes reflect emotion processing.


Author(s):  
Jeremiah D. Still ◽  
Christopher M. Masciocchi

In this chapter, the authors highlight the influence of visual saliency, or local contrast, on users’ searches of interfaces, particularly web pages. Designers have traditionally focused on the importance of goals and expectations (top-down processes) for the navigation of interfaces (Diaper & Stanton, 2004), with little consideration for the influence of saliency (bottom-up processes). The Handbook of Human-Computer Interaction (Sears & Jacko, 2008), for example, does not discuss the influence of bottom-up processing, potentially neglecting an important aspect of interface-based searches. The authors review studies that demonstrate how a user’s attention is rapidly drawn to visually salient locations in a variety of tasks and scenes, including web pages. They then describe an inexpensive, rapid technique that designers can use to identify visually salient locations in web pages, and discuss its advantages over similar methods.
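The kind of inexpensive, bottom-up saliency estimate the chapter describes can be sketched in a few lines. The following is a generic local-contrast map (blurred image versus global mean, in the spirit of frequency-tuned saliency), not the authors' specific technique; the function names and the synthetic test image are assumptions for illustration only.

```python
import numpy as np

def box_blur(img, r):
    """Blur a grayscale image by averaging over a (2r+1) x (2r+1) window,
    using edge padding so the output keeps the input shape."""
    pad = np.pad(img, r, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy:r + dy + h, r + dx:r + dx + w]
    return out / (2 * r + 1) ** 2

def saliency_map(img):
    """Crude bottom-up saliency: how far each (smoothed) pixel deviates
    from the global mean intensity. High values = locally distinctive."""
    blurred = box_blur(img, 2)  # suppress high-frequency noise first
    return np.abs(blurred - img.mean())

# a bright patch on a uniform background should dominate the map,
# mimicking a salient element on an otherwise homogeneous web page
img = np.full((32, 32), 0.5)
img[12:20, 12:20] = 1.0
sal = saliency_map(img)
```

A designer could threshold such a map to see which page regions would capture first fixations; real saliency models add colour-opponency and multi-scale channels, but the contrast-against-context principle is the same.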


Author(s):  
Vasili Ramanishka ◽  
Abir Das ◽  
Jianming Zhang ◽  
Kate Saenko