Abstract
A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed. It consists of two steps: first forming activation maps on certain feature channels, and then normalizing them in a way that highlights conspicuity and admits combination with other maps. The model is simple, and biologically plausible insofar as it is naturally parallelized. This model powerfully predicts human fixations on 749 variations of 108 natural images, achieving 98% of the ROC area of a human-based control, whereas the classical algorithms of Itti & Koch ([2], [3], [4]) achieve only 84%.
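The two-step structure described above (activation, then conspicuity-highlighting normalization and combination) can be illustrated with a minimal sketch. This is not the paper's method: GBVS computes both steps via equilibrium distributions of Markov chains defined on graphs over the feature maps. Here, purely for illustration, activation is taken as absolute deviation from the channel mean, and normalization as squaring followed by unit-sum scaling; both choices are stand-in assumptions.

```python
import numpy as np

def activation(channel):
    """Toy activation map: absolute deviation from the global mean.
    (A stand-in for GBVS's Markov-chain-based activation step.)"""
    return np.abs(channel - channel.mean())

def normalize(act):
    """Toy normalization highlighting conspicuity: squaring sharpens
    peaks, and unit-sum scaling makes channels comparable so their
    maps admit combination.
    (A stand-in for GBVS's Markov-chain-based normalization step.)"""
    sharpened = act ** 2
    total = sharpened.sum()
    return sharpened / total if total > 0 else sharpened

def saliency(channels):
    """Combine normalized activation maps across feature channels
    by summation into a single saliency map."""
    return np.sum([normalize(activation(c)) for c in channels], axis=0)

# A 5x5 intensity channel with one conspicuous bright pixel:
# the saliency peak lands on the odd-one-out location.
img = np.zeros((5, 5))
img[2, 3] = 1.0
s = saliency([img])
print(np.unravel_index(int(s.argmax()), s.shape))  # → (2, 3)
```

Because each channel's map is processed independently before combination, this pipeline parallelizes naturally across feature channels, consistent with the biological-plausibility remark in the abstract.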