VisualSAF: A Novel Framework for Visual Semantic Analysis Tasks

Bibliographic Details
Main Authors: Antonio V. A. Lundgren, Byron L. D. Bezerra, Carmelo J. A. Bastos-Filho
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Access
Online Access:https://ieeexplore.ieee.org/document/10855394/
Description
Summary:We introduce VisualSAF, a novel Visual Semantic Analysis Framework designed to enhance the understanding of contextual characteristics in Visual Scene Analysis (VSA) tasks. The framework leverages semantic variables extracted using machine learning algorithms to provide additional high-level information, augmenting the capabilities of the primary task model. Comprising three main components – the General DL Model, Semantic Variables, and Output Branches – VisualSAF offers a modular and adaptable approach to addressing diverse VSA tasks. The General DL Model processes input images, extracting high-level features through a backbone network and detecting regions of interest. Semantic Variables are then extracted from these regions, incorporating a wide range of contextual information tailored to specific scenarios. Finally, the Output Branch integrates semantic variables and detections, generating high-level task information while allowing for flexible weighting of inputs to optimize task performance. The framework is demonstrated through experiments on the HOD Dataset, showing gains over the baseline of 0.05 in mean average precision (mAP) and 0.01 in mean average recall (mAR). Future research directions include exploring multiple semantic variables, developing more complex output heads, and investigating the framework's performance across context-shifting datasets.
ISSN:2169-3536
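
The abstract above describes a three-component pipeline but the record contains no implementation. The following is a minimal PyTorch-style sketch of how such a pipeline might be wired together, assuming hypothetical class names (GeneralDLModel, SemanticVariableExtractor, OutputBranch), feature dimensions, a fixed number of regions, and a single learnable weight standing in for the framework's flexible input weighting; none of these names or values come from the paper.

# Illustrative sketch only: class names, dimensions, and the fixed-region
# detector stand-in below are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class GeneralDLModel(nn.Module):
    # Backbone that extracts high-level features and proposes region embeddings.
    def __init__(self, feat_dim=128, num_regions=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Stand-in for a detector head: predicts a fixed number of region embeddings.
        self.region_head = nn.Linear(feat_dim, num_regions * feat_dim)
        self.num_regions, self.feat_dim = num_regions, feat_dim

    def forward(self, images):
        feats = self.backbone(images).flatten(1)          # (B, feat_dim)
        regions = self.region_head(feats)                 # (B, R * feat_dim)
        return regions.view(-1, self.num_regions, self.feat_dim)

class SemanticVariableExtractor(nn.Module):
    # Maps each detected region to a vector of high-level semantic variables.
    def __init__(self, feat_dim=128, num_semantic_vars=8):
        super().__init__()
        self.extractor = nn.Linear(feat_dim, num_semantic_vars)

    def forward(self, regions):
        return self.extractor(regions)                    # (B, R, num_semantic_vars)

class OutputBranch(nn.Module):
    # Fuses region detections and semantic variables with a learnable weighting,
    # echoing the abstract's "flexible weighting of inputs".
    def __init__(self, feat_dim=128, num_semantic_vars=8, num_classes=10):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))
        self.classifier = nn.Linear(feat_dim + num_semantic_vars, num_classes)

    def forward(self, regions, semantic_vars):
        w = torch.sigmoid(self.alpha)
        fused = torch.cat([w * regions, (1.0 - w) * semantic_vars], dim=-1)
        return self.classifier(fused)                     # per-region task logits

# Minimal forward pass with random data.
model, extractor, head = GeneralDLModel(), SemanticVariableExtractor(), OutputBranch()
regions = model(torch.randn(2, 3, 224, 224))
logits = head(regions, extractor(regions))
print(logits.shape)                                       # torch.Size([2, 4, 10])

In practice the fixed-size region head would be replaced by a real detector and the linear semantic-variable extractor by whatever machine learning models the scenario calls for; the sketch only illustrates how the three components described in the abstract hand data to one another.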