An exploratory study of interpretability for face presentation attack detection
Abstract: Biometric recognition and presentation attack detection (PAD) methods strongly rely on deep learning algorithms. Though often more accurate, these models operate as complex black boxes. Interpretability tools are now being used to delve deeper into the operation of these methods, which is why this work advocates their integration in the PAD scenario. Building upon previous work, a face PAD model based on convolutional neural networks was implemented and evaluated both through traditional PAD metrics and with interpretability tools. The stability of the explanations obtained when testing models against attacks known and unknown at the learning step is evaluated. To overcome the limitations of direct comparison, a suitable representation of the explanations is constructed to quantify how much two explanations differ from each other. From the point of view of interpretability, the results obtained in intra- and inter-class comparisons led to the conclusion that the presence of more attacks during training has a positive effect on the generalisation and robustness of the models. This exploratory study confirms the need to establish new approaches in biometrics that incorporate interpretability tools. Moreover, there is a need for methodologies to assess and compare the quality of explanations.
Main Authors: Ana F. Sequeira, Tiago Gonçalves, Wilson Silva, João Ribeiro Pinto, Jaime S. Cardoso
Format: Article
Language: English
Published: Wiley, 2021-07-01
Series: IET Biometrics
ISSN: 2047-4938, 2047-4946
Collection: DOAJ
Subjects: biometrics (access control); face recognition; deep learning (artificial intelligence)
Online Access: https://doi.org/10.1049/bme2.12045