Explainable Security Requirements Classification Through Transformer Models
Security and non-security requirements are two critical concerns in software development. Classifying requirements is crucial because it helps recall security needs during the early stages of development, ultimately leading to enhanced security in the final software solution. However, automatically classifying requirements into security and non-security categories remains a challenging task...
Saved in:
Main Authors: | Luca Petrillo, Fabio Martinelli, Antonella Santone, Francesco Mercaldo
---|---
Format: | Article
Language: | English
Published: | MDPI AG, 2025-01-01
Series: | Future Internet
Subjects: | requirements classification; transformers; explainability
Online Access: | https://www.mdpi.com/1999-5903/17/1/15
_version_ | 1832588426723983360 |
---|---|
author | Luca Petrillo; Fabio Martinelli; Antonella Santone; Francesco Mercaldo
author_facet | Luca Petrillo; Fabio Martinelli; Antonella Santone; Francesco Mercaldo
author_sort | Luca Petrillo |
collection | DOAJ |
description | Security and non-security requirements are two critical concerns in software development. Classifying requirements is crucial because it helps recall security needs during the early stages of development, ultimately leading to enhanced security in the final software solution. However, automatically classifying requirements into security and non-security categories remains a challenging task. In this work, we propose a novel method for automatically classifying software requirements with transformer models to address these challenges. We fine-tuned four pre-trained transformers on four datasets (the original one and three augmented versions). In addition, we employed few-shot learning with the SetFit technique, leveraging pre-trained architectures through transfer learning. The study demonstrates that these models can effectively classify security requirements with reasonable accuracy, precision, recall, and F1-score, and that fine-tuning and SetFit help smaller models generalize, making them suitable for enhancing security processes in the Software Development Cycle. Finally, we examined the explainability of the fine-tuned models, using attention visualization heatmaps to elucidate how each model extracts and interprets critical information from input sequences. |
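To illustrate the fine-tuning workflow the abstract describes, here is a minimal sketch of binary security/non-security requirement classification with the Hugging Face `transformers` library. The record does not name the four checkpoints, the datasets, or the training configuration, so the `bert-base-uncased` model, the inline toy examples, and all hyperparameters below are assumptions, not the paper's actual setup.

```python
# Hedged sketch: fine-tuning a pre-trained transformer to classify requirements
# as security (1) vs. non-security (0). Checkpoint and data are placeholders.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

requirements = [
    ("The system shall encrypt all stored passwords.", 1),     # security
    ("The UI shall display results within two seconds.", 0),   # non-security
    ("Only authenticated users may access audit logs.", 1),    # security
    ("The report module shall export data as PDF.", 0),        # non-security
]
ds = Dataset.from_dict({"text": [t for t, _ in requirements],
                        "label": [l for _, l in requirements]})

checkpoint = "bert-base-uncased"  # assumption: the paper's four checkpoints are not listed in this record
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

ds = ds.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="req-clf", num_train_epochs=3,
                           per_device_train_batch_size=2, logging_steps=1),
    train_dataset=ds,
)
trainer.train()
```

In practice the fine-tuned checkpoint saved under `output_dir` would then be evaluated on a held-out split to produce the accuracy, precision, recall, and F1 figures the abstract refers to.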
format | Article |
id | doaj-art-8d523251da6a4b5195047cbac1892aa2 |
institution | Kabale University |
issn | 1999-5903 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Future Internet |
spelling | Record ID: doaj-art-8d523251da6a4b5195047cbac1892aa2 · Indexed: 2025-01-24T13:33:34Z · Language: eng · Publisher: MDPI AG · Series: Future Internet, ISSN 1999-5903 · Published: 2025-01-01, vol. 17, no. 1, art. 15 · DOI: 10.3390/fi17010015 · Title: Explainable Security Requirements Classification Through Transformer Models · Authors: Luca Petrillo (Institute for Informatics and Telematics, National Research Council of Italy (CNR), 56124 Pisa, Italy); Fabio Martinelli (Institute for High Performance Computing and Networking, National Research Council of Italy (CNR), 87036 Rende, Italy); Antonella Santone (Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, 86100 Campobasso, Italy); Francesco Mercaldo (Institute for Informatics and Telematics, National Research Council of Italy (CNR), 56124 Pisa, Italy) · Abstract: identical to the description field above · URL: https://www.mdpi.com/1999-5903/17/1/15 · Keywords: requirements classification; transformers; explainability |
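The abstract indexed in this record also credits SetFit-based few-shot learning with helping smaller models generalize. Below is a minimal sketch under assumptions: the sentence-transformer checkpoint, the toy examples, and the `SetFitTrainer` interface (from earlier releases of the `setfit` library; newer releases expose `Trainer`/`TrainingArguments` instead) are illustrative and not taken from the paper.

```python
# Hedged sketch: SetFit-style few-shot classification of requirements.
# Checkpoint, labels, and examples are placeholders, not the paper's setup.
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

train_ds = Dataset.from_dict({
    "text": ["All network traffic shall use TLS 1.3.",
             "The dashboard shall refresh every 30 seconds.",
             "User sessions shall expire after 15 minutes of inactivity.",
             "The system shall support CSV import."],
    "label": [1, 0, 1, 0],  # 1 = security, 0 = non-security
})

# SetFit fine-tunes a sentence-transformer body with contrastive pairs,
# then fits a lightweight classification head on the embeddings.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds, num_iterations=20)
trainer.train()

print(model.predict(["Passwords shall be hashed with bcrypt."]))  # expected: the security label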
spellingShingle | Luca Petrillo; Fabio Martinelli; Antonella Santone; Francesco Mercaldo; Explainable Security Requirements Classification Through Transformer Models; Future Internet; requirements classification; transformers; explainability |
title | Explainable Security Requirements Classification Through Transformer Models |
title_full | Explainable Security Requirements Classification Through Transformer Models |
title_fullStr | Explainable Security Requirements Classification Through Transformer Models |
title_full_unstemmed | Explainable Security Requirements Classification Through Transformer Models |
title_short | Explainable Security Requirements Classification Through Transformer Models |
title_sort | explainable security requirements classification through transformer models |
topic | requirements classification; transformers; explainability |
url | https://www.mdpi.com/1999-5903/17/1/15 |
work_keys_str_mv | AT lucapetrillo explainablesecurityrequirementsclassificationthroughtransformermodels AT fabiomartinelli explainablesecurityrequirementsclassificationthroughtransformermodels AT antonellasantone explainablesecurityrequirementsclassificationthroughtransformermodels AT francescomercaldo explainablesecurityrequirementsclassificationthroughtransformermodels |
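Finally, the `explainability` keyword indexed above refers to attention visualization heatmaps over the fine-tuned models. The sketch below, under assumptions, averages the last layer's attention heads of a locally saved classifier (the `req-clf` path is hypothetical, matching the earlier fine-tuning sketch) and plots a token-by-token heatmap; the paper's actual visualization pipeline may differ.

```python
# Hedged sketch: visualizing attention weights of a fine-tuned classifier
# as a heatmap. The checkpoint path is hypothetical.
import matplotlib.pyplot as plt
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "req-clf"  # hypothetical: a locally fine-tuned model directory
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, output_attentions=True)
model.eval()

text = "The system shall encrypt all stored passwords."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one (batch, heads, seq, seq) tensor per layer.
# Average the heads of the last layer to get a single seq x seq map.
attn = outputs.attentions[-1][0].mean(dim=0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

fig, ax = plt.subplots(figsize=(6, 5))
ax.imshow(attn.numpy(), cmap="viridis")
ax.set_xticks(range(len(tokens)))
ax.set_xticklabels(tokens, rotation=90)
ax.set_yticks(range(len(tokens)))
ax.set_yticklabels(tokens)
ax.set_title("Last-layer attention (head average)")
fig.tight_layout()
plt.show()
```

Averaging heads is only one of several common ways to summarize attention; per-head plots or attention rollout would be equally plausible readings of the abstract's "attention visualization heatmaps".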