
Position Information in Visual Saliency and Convolutional Neural Networks

Thursday, Jan. 14, 2-3 p.m.

Webex

Faculty candidate seminar: Dr. Sen Jia (Ryerson University)

Abstract: 

Position information plays a significant role in visual saliency. Photographers tend to place the most interesting region near the center of an image. This unavoidable behaviour, known as center bias, creates a problem for saliency evaluation: blindly placing a Gaussian map at the center of every image may outperform a well-designed saliency system. To penalize the center bias, shuffled-AUC (s-AUC) is widely used as a measure. However, s-AUC does not consider the spatial relationship between positive and negative points. In our study, we propose a new metric, Farthest-Neighbour-AUC, which evaluates saliency more accurately and effectively by taking this spatial relationship into account.
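As background for the metrics discussed above, a minimal sketch of AUC-based saliency scoring is given below. This is not the speaker's implementation; the function names (`auc_score`, `shuffled_auc`) and the sampling scheme are illustrative assumptions. The key idea of s-AUC is that negative points are drawn from fixation locations of *other* images, which are themselves center-biased, so a centered Gaussian map no longer gets credit for the center bias:

```python
import numpy as np

def auc_score(pos, neg):
    """Probability that a random positive saliency value outranks a
    random negative one (ties count as half)."""
    pos = np.asarray(pos, dtype=float)[:, None]
    neg = np.asarray(neg, dtype=float)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

def shuffled_auc(sal_map, fixations, other_fixations):
    """s-AUC sketch: positives are saliency values at this image's
    fixations; negatives are values at fixations taken from OTHER
    images, which penalizes a pure center-bias predictor."""
    pos = np.array([sal_map[y, x] for (y, x) in fixations])
    neg = np.array([sal_map[y, x] for (y, x) in other_fixations])
    return auc_score(pos, neg)

# Toy example: a map that is high exactly at the true fixation scores
# perfectly; a map judged against center-biased negatives does not get
# free credit just for being bright at the center.
sal = np.arange(16, dtype=float).reshape(4, 4)
print(shuffled_auc(sal, fixations=[(3, 3)], other_fixations=[(0, 0)]))
```

The limitation the talk addresses is that s-AUC ignores *where* the negative points lie relative to the positives; Farthest-Neighbour-AUC incorporates that spatial relationship into the sampling.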

Further, our study shows that a Convolutional Neural Network (CNN) is able to encode position information implicitly. A CNN backbone pre-trained for image classification may learn the position of the input stimuli, which can be important when the task is location-dependent, e.g., saliency detection. Our work finds that the likely source of this position information in CNNs is the zero-padding strategy used for spatial alignment: removing zero-padding reduces the information significantly.
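The zero-padding effect can be illustrated with a small numpy sketch (an illustration of the mechanism, not the speaker's experiment). Convolving a constant image with an averaging kernel under "same" (zero) padding produces smaller responses near the borders, so later layers can infer distance to the image boundary; with "valid" (no) padding, the response is uniform and carries no positional signal:

```python
import numpy as np

def conv2d(image, kernel, zero_pad=True):
    """Naive single-channel 2D cross-correlation.
    zero_pad=True mimics 'same' zero-padding; False mimics 'valid'."""
    kh, kw = kernel.shape
    if zero_pad:
        image = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A constant image has no content at all -- only position.
img = np.ones((8, 8))
k = np.ones((3, 3)) / 9.0

same = conv2d(img, k, zero_pad=True)    # 8x8; borders see padded zeros
valid = conv2d(img, k, zero_pad=False)  # 6x6; every window is interior

# With zero-padding, a corner window covers only 4 of 9 ones (4/9),
# while the interior response is 1.0 -- position leaks into features.
print(same[0, 0], same[4, 4])
# Without padding, min == max: no positional signal survives.
print(valid.min() == valid.max())
```

This matches the talk's observation: removing zero-padding removes the boundary asymmetry that lets a CNN encode absolute position.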

Webex link:

https://mun.webex.com/mun/j.php?MTID=m2780f33faabce13535574c4194c867d6

Presented by Department of Computer Science
