The Walk of Guilt: Multimodal Deception Detection from Nonverbal Motion Behaviour
Main Authors: Sharifa Alghowinem, Sabrina Caldwell, Ibrahim Radwan, Michael Wagner, Tom Gedeon
Format: Article
Language: English
Published: MDPI AG, 2024-12-01
Series: Information
Subjects: deception detection; body pose; nonverbal behaviour; motion analysis; multimodal fusion
Online Access: https://www.mdpi.com/2078-2489/16/1/6
_version_ | 1832588329259892736 |
author | Sharifa Alghowinem; Sabrina Caldwell; Ibrahim Radwan; Michael Wagner; Tom Gedeon
author_sort | Sharifa Alghowinem |
collection | DOAJ |
description | Detecting deceptive behaviour is critical for surveillance, border protection, and, by extension, national security. With advances in sensing and artificial intelligence, deceptive behaviour can potentially be recognised automatically. Following the success of affective computing in recognising emotion from verbal and nonverbal cues, we apply a similar approach to deception detection. Although deception detection has been attempted before, only a few studies have analysed deceptive behaviour from gait and body movement. This research takes a multimodal approach to deception detection from gait: we fuse body-movement features extracted from a video signal, acoustic features of walking steps extracted from an audio signal, and walking dynamics captured by an accelerometer sensor. Using the walking recordings from the Whodunnit deception dataset, in which 49 subjects perform scenarios designed to elicit deceptive behaviour, we conduct multimodal two-category (guilty/not guilty) subject-independent classification. Classification accuracy reached up to 88% with feature fusion, with an average of 60% across single-modality and multimodal configurations. Among single modalities, the visual signal performed best, followed by the accelerometer and acoustic signals. Several fusion techniques were explored, including early, late, and hybrid fusion; hybrid fusion not only achieved the highest classification results but also increased the confidence of the predictions. Moreover, a systematic framework for selecting the most distinguishing features of guilty gait behaviour allowed us to interpret the performance of our models. These baseline results suggest that pattern-recognition techniques can help characterise deceptive behaviour; future work will focus on tuning and enhancing the techniques and results.
format | Article |
id | doaj-art-1dec96a447cf44b6a07eca0c51e903cd |
institution | Kabale University |
issn | 2078-2489 |
language | English |
publishDate | 2024-12-01 |
publisher | MDPI AG |
record_format | Article |
series | Information |
spelling | doaj-art-1dec96a447cf44b6a07eca0c51e903cd (record timestamp 2025-01-24T13:35:06Z). MDPI AG, Information, ISSN 2078-2489, vol. 16, no. 1, article 6, 2024-12-01. DOI: 10.3390/info16010006. Author affiliations: Sharifa Alghowinem, Media Lab, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Sabrina Caldwell, Research School of Computer Science, Australian National University, Canberra 2600, Australia; Ibrahim Radwan, Human-Centred Computing Laboratory (HCC Lab), University of Canberra, Canberra 2617, Australia; Michael Wagner, Research School of Computer Science, Australian National University, Canberra 2600, Australia; Tom Gedeon, Research School of Computer Science, Australian National University, Canberra 2600, Australia. Title, abstract, URL, and keywords as given in the fields above.
title | The Walk of Guilt: Multimodal Deception Detection from Nonverbal Motion Behaviour |
title_sort | walk of guilt multimodal deception detection from nonverbal motion behaviour |
topic | deception detection; body pose; nonverbal behaviour; motion analysis; multimodal fusion
url | https://www.mdpi.com/2078-2489/16/1/6 |
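For readers wanting a concrete picture of the fusion strategies mentioned in the description field, the sketch below illustrates early (feature-level) and late (decision-level) fusion with subject-independent (grouped) cross-validation on synthetic data. It is an illustrative sketch only, not the paper's implementation: the feature sets, their dimensions, the SVM classifier, and the 5-fold grouped split are all assumptions.

```python
# Minimal sketch of early and late multimodal fusion with subject-independent evaluation,
# loosely following the setup described in the abstract. NOT the authors' pipeline:
# synthetic features, dimensions, classifier choice, and fold count are assumptions.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, clips_per_subject = 49, 4                          # 49 subjects, as in Whodunnit
n = n_subjects * clips_per_subject
groups = np.repeat(np.arange(n_subjects), clips_per_subject)   # subject IDs for grouping
y = rng.integers(0, 2, size=n)                                 # guilty (1) / not guilty (0)

# Hypothetical per-recording feature vectors for the three modalities.
X_video = rng.normal(size=(n, 30))                             # body-movement features (video)
X_audio = rng.normal(size=(n, 20))                             # acoustics of walking steps
X_accel = rng.normal(size=(n, 10))                             # accelerometer gait dynamics


def make_clf():
    # Standardise features, then classify with an RBF-kernel SVM (an assumed choice).
    return make_pipeline(StandardScaler(), SVC(kernel="rbf"))


cv = GroupKFold(n_splits=5)                                    # folds never share subjects

# Early fusion: concatenate all modality features and train a single classifier.
X_early = np.hstack([X_video, X_audio, X_accel])
early_acc = cross_val_score(make_clf(), X_early, y, groups=groups, cv=cv).mean()

# Late fusion: train one classifier per modality and average their decision scores.
fold_accs = []
for train, test in cv.split(X_early, y, groups):
    scores = np.zeros(len(test))
    for X_mod in (X_video, X_audio, X_accel):
        scores += make_clf().fit(X_mod[train], y[train]).decision_function(X_mod[test])
    fold_accs.append(np.mean((scores > 0).astype(int) == y[test]))
late_acc = float(np.mean(fold_accs))

print(f"early fusion accuracy: {early_acc:.2f}, late fusion accuracy: {late_acc:.2f}")
# Hybrid fusion (combining feature- and decision-level fusion), which the abstract reports
# as the best-performing scheme, is omitted here for brevity.
```

The key design point is that GroupKFold keeps every subject's recordings within a single fold, which is what makes the evaluation subject-independent, as described in the abstract.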