VisGraphVar: A Benchmark Generator for Assessing Variability in Graph Analysis Using Large Vision-Language Models
The fast advancement of Large Vision-Language Models (LVLMs) has shown immense potential. These models are increasingly capable of tackling abstract visual tasks. Geometric structures, particularly graphs with their inherent flexibility and complexity, serve as an excellent benchmark for evaluating these models' predictive capabilities.
Main Authors: | Camilo Chacon Sartori, Christian Blum, Filippo Bistaffa |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Subjects: | Benchmark; computer vision; graph theory; large vision-language models |
Online Access: | https://ieeexplore.ieee.org/document/10855899/ |
_version_ | 1832088134211338240 |
---|---|
author | Camilo Chacon Sartori; Christian Blum; Filippo Bistaffa |
author_facet | Camilo Chacon Sartori; Christian Blum; Filippo Bistaffa |
author_sort | Camilo Chacon Sartori |
collection | DOAJ |
description | The fast advancement of Large Vision-Language Models (LVLMs) has shown immense potential. These models are increasingly capable of tackling abstract visual tasks. Geometric structures, particularly graphs with their inherent flexibility and complexity, serve as an excellent benchmark for evaluating these models’ predictive capabilities. While human observers can readily identify subtle visual details and perform accurate analyses, our investigation reveals that state-of-the-art LVLMs exhibit consistent limitations in specific visual graph scenarios, especially when confronted with stylistic variations. In response to these challenges, we introduce <monospace>VisGraphVar</monospace> (Visual Graph Variability), a customizable benchmark generator able to produce graph images for seven distinct task categories (detection, classification, segmentation, pattern recognition, link prediction, reasoning, matching), designed to systematically evaluate the strengths and limitations of individual LVLMs. We use VisGraphVar to produce 990 graph images and evaluate six LVLMs, employing two distinct prompting strategies, namely zero-shot and chain-of-thought. The findings demonstrate that variations in visual attributes of images (e.g., node labeling and layout) and the deliberate inclusion of visual imperfections, such as overlapping nodes, significantly affect model performance. This research emphasizes the importance of a comprehensive evaluation across graph-related tasks, extending beyond reasoning alone. VisGraphVar offers valuable insights to guide the development of more reliable and robust systems capable of performing advanced visual graph analysis. The project URL is available at: [<uri>https://camilochs.github.io/visgraphvar-website</uri>]. |
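The description above explains that VisGraphVar renders the same underlying graph under varying visual styles (layouts, labeling, imperfections) to probe LVLM robustness. A minimal, purely illustrative sketch of that idea follows; this is not the authors' code, and every function name here is invented. It builds one random graph and emits two SVG images of it, one with a circular layout and one with a random layout:

```python
# Illustrative sketch only (not the VisGraphVar implementation): render the
# SAME random graph under two visual styles, the kind of stylistic variation
# the benchmark uses to test vision-language models. All names are invented.
import math
import random


def random_graph(n, p, seed=0):
    """Erdos-Renyi G(n, p) graph, returned as an undirected edge list."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]


def circular_layout(n, radius=100):
    """Place the n nodes evenly on a circle."""
    return {i: (radius * math.cos(2 * math.pi * i / n),
                radius * math.sin(2 * math.pi * i / n)) for i in range(n)}


def random_layout(n, size=200, seed=1):
    """Scatter the n nodes uniformly at random (a deliberately messier style)."""
    rng = random.Random(seed)
    return {i: (rng.uniform(-size / 2, size / 2),
                rng.uniform(-size / 2, size / 2)) for i in range(n)}


def to_svg(edges, pos, labels=True):
    """Render one benchmark image variant as a minimal SVG string."""
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" viewBox="-120 -120 240 240">']
    for u, v in edges:
        (x1, y1), (x2, y2) = pos[u], pos[v]
        parts.append(f'<line x1="{x1:.1f}" y1="{y1:.1f}" '
                     f'x2="{x2:.1f}" y2="{y2:.1f}" stroke="black"/>')
    for i, (x, y) in pos.items():
        parts.append(f'<circle cx="{x:.1f}" cy="{y:.1f}" r="8" '
                     'fill="white" stroke="black"/>')
        if labels:  # node labeling is itself one of the varied attributes
            parts.append(f'<text x="{x:.1f}" y="{y:.1f}" '
                         f'font-size="8" text-anchor="middle">{i}</text>')
    parts.append('</svg>')
    return ''.join(parts)


edges = random_graph(8, 0.3)
variants = {name: to_svg(edges, layout(8))
            for name, layout in [('circular', circular_layout),
                                 ('random', random_layout)]}
# Same graph structure, two stylistically different images.
```

A human reader sees both variants as the same graph; the paper's finding is that LVLM accuracy can drop sharply between two such renderings.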
format | Article |
id | doaj-art-fe98d8df6f5049d29d8fa9b01d54c3fd |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | Record: doaj-art-fe98d8df6f5049d29d8fa9b01d54c3fd · Last indexed: 2025-02-06T00:00:31Z · Language: eng · Publisher: IEEE · Journal: IEEE Access · ISSN: 2169-3536 · Published: 2025-01-01 · Volume 13, pp. 21788-21810 · DOI: 10.1109/ACCESS.2025.3535837 · IEEE article no. 10855899
Title: VisGraphVar: A Benchmark Generator for Assessing Variability in Graph Analysis Using Large Vision-Language Models
Authors: Camilo Chacon Sartori (https://orcid.org/0000-0002-8543-9893), Christian Blum (https://orcid.org/0000-0002-1736-3559), Filippo Bistaffa (https://orcid.org/0000-0003-1658-6125), all at the Artificial Intelligence Research Institute (IIIA-CSIC), Bellaterra, Barcelona, Spain
Abstract: identical to the description field above.
Online access: https://ieeexplore.ieee.org/document/10855899/
Subjects: Benchmark; computer vision; graph theory; large vision-language models |
spellingShingle | Camilo Chacon Sartori; Christian Blum; Filippo Bistaffa; VisGraphVar: A Benchmark Generator for Assessing Variability in Graph Analysis Using Large Vision-Language Models; IEEE Access; Benchmark; computer vision; graph theory; large vision-language models |
title | VisGraphVar: A Benchmark Generator for Assessing Variability in Graph Analysis Using Large Vision-Language Models |
title_full | VisGraphVar: A Benchmark Generator for Assessing Variability in Graph Analysis Using Large Vision-Language Models |
title_fullStr | VisGraphVar: A Benchmark Generator for Assessing Variability in Graph Analysis Using Large Vision-Language Models |
title_full_unstemmed | VisGraphVar: A Benchmark Generator for Assessing Variability in Graph Analysis Using Large Vision-Language Models |
title_short | VisGraphVar: A Benchmark Generator for Assessing Variability in Graph Analysis Using Large Vision-Language Models |
title_sort | visgraphvar a benchmark generator for assessing variability in graph analysis using large vision language models |
topic | Benchmark; computer vision; graph theory; large vision-language models |
url | https://ieeexplore.ieee.org/document/10855899/ |
work_keys_str_mv | AT camilochaconsartori visgraphvarabenchmarkgeneratorforassessingvariabilityingraphanalysisusinglargevisionlanguagemodels AT christianblum visgraphvarabenchmarkgeneratorforassessingvariabilityingraphanalysisusinglargevisionlanguagemodels AT filippobistaffa visgraphvarabenchmarkgeneratorforassessingvariabilityingraphanalysisusinglargevisionlanguagemodels |