Document Type: Original Research Paper
Authors
Artificial Intelligence Department, Faculty of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran
Abstract
Background and Objectives: Visual attention is a high-order cognitive process of the human brain that determines where a human observer attends. Dynamic computational visual attention models mimic this behavior and can predict which areas of a scene, such as a video, a human will attend to. Although several computational models have been proposed to estimate saliency maps in static and dynamic environments, most of them are tailored to specific scenes. In this paper, we propose a model that can generate saliency maps in a variety of dynamic environments with complex scenes.
Methods: We use a deep learner as a gating network that combines basic saliency maps with appropriate weights. Each basic saliency map captures an important feature of human visual attention, and the final combined saliency map closely resembles human visual behavior.
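As an illustration of the gating idea described above, and not the authors' implementation, the following sketch assumes a small hypothetical convolutional gating network that inspects a stack of basic saliency maps (e.g., intensity, color, motion channels) and emits one softmax-normalized weight per map; the weighted sum of the maps is the final saliency map.

```python
# Minimal sketch of a gating network that fuses basic saliency maps.
# All layer sizes are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class GatedSaliencyCombiner(nn.Module):
    def __init__(self, num_maps: int):
        super().__init__()
        # Gating network: looks at the stacked basic maps and predicts
        # one scalar weight per map.
        self.gate = nn.Sequential(
            nn.Conv2d(num_maps, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> (B, 16, 1, 1)
            nn.Flatten(),              # -> (B, 16)
            nn.Linear(16, num_maps),   # -> (B, num_maps)
        )

    def forward(self, basic_maps: torch.Tensor) -> torch.Tensor:
        # basic_maps: (B, num_maps, H, W), one channel per basic saliency map
        weights = torch.softmax(self.gate(basic_maps), dim=1)  # (B, num_maps)
        weights = weights[:, :, None, None]                    # broadcastable
        # Weighted sum over the map dimension -> final saliency map (B, 1, H, W)
        return (weights * basic_maps).sum(dim=1, keepdim=True)

# Usage example: fuse 4 hypothetical basic maps for a 128x128 video frame.
combiner = GatedSaliencyCombiner(num_maps=4)
final_map = combiner(torch.rand(1, 4, 128, 128))  # shape (1, 1, 128, 128)
```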
Results: The proposed model is evaluated on two datasets, and the generated saliency maps are assessed with several criteria, including ROC, CC, NSS, SIM, and KL divergence (KLdiv). The results show that the proposed model performs well compared to other similar models.
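For reference, the sketch below gives the standard formulations of the saliency metrics named above (CC, NSS, SIM, and KLdiv) as commonly used in this field; the paper's exact evaluation code may differ in normalization details.

```python
# Standard saliency evaluation metrics, sketched with NumPy.
import numpy as np

def cc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pearson linear correlation between predicted and ground-truth maps."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())

def nss(pred: np.ndarray, fixations: np.ndarray) -> float:
    """Mean normalized saliency at binary human fixation locations."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    return float(p[fixations > 0].mean())

def sim(pred: np.ndarray, gt: np.ndarray) -> float:
    """Histogram intersection of the two maps viewed as distributions."""
    p = pred / (pred.sum() + 1e-8)
    g = gt / (gt.sum() + 1e-8)
    return float(np.minimum(p, g).sum())

def kldiv(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """KL divergence of the predicted distribution from the ground truth."""
    p = pred / (pred.sum() + eps)
    g = gt / (gt.sum() + eps)
    return float((g * np.log(g / (p + eps) + eps)).sum())
```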
Conclusion: The proposed model consists of three main parts: basic saliency maps, a gating network, and a combinator. The model was implemented on the ETMD dataset, and the resulting saliency maps (visual attention areas) were compared with those of several other models in this field using the standard evaluation criteria. The results of the proposed model are acceptable and, under the accepted evaluation criteria in this area, it performs better than similar models.
Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit: http://creativecommons.org/licenses/by/4.0/
Publisher’s Note
JECEI Publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Publisher
Shahid Rajaee Teacher Training University