Design and Implement Deepfake Video Detection Using VGG-16 and Long Short-Term Memory
| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2024-01-01 |
| Series: | Applied Computational Intelligence and Soft Computing |
| Online Access: | http://dx.doi.org/10.1155/2024/8729440 |
| Summary: | This study aims to design and implement deepfake video detection using VGG-16 in combination with long short-term memory (LSTM). In contrast to other studies, this study compares VGG-16, VGG-19, and the newest model, ResNet-101, each combined with LSTM. All the models were tested on the Celeb-DF video dataset. The results showed that the VGG-16 model trained for 15 epochs with a batch size of 32 exhibited the highest performance, with 96.25% accuracy, 93.04% recall, 99.20% specificity, and 99.07% precision. In conclusion, this model can be implemented practically. |
| ISSN: | 1687-9732 |
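The summary describes a pipeline in which per-frame features from a CNN backbone (VGG-16) are fed to an LSTM that classifies the frame sequence as real or fake. A minimal NumPy sketch of that sequence-level flow is shown below; the CNN is replaced by a hypothetical random-projection stub, and all dimensions, weights, and function names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_frame_features(frames):
    # Stand-in for a VGG-16 backbone: maps each frame to a 128-d feature
    # vector. Here it is a fixed random projection, purely for illustration.
    n, h, w, c = frames.shape
    W = rng.standard_normal((h * w * c, 128)) * 0.01
    return frames.reshape(n, -1) @ W  # shape (n_frames, 128)

def lstm_forward(x, Wx, Wh, b):
    # Single-layer LSTM over a (T, d) feature sequence.
    # Gate weights are packed as [input, forget, cell, output] blocks.
    T, d = x.shape
    H = Wh.shape[0]
    h = np.zeros(H)
    c = np.zeros(H)
    for t in range(T):
        z = x[t] @ Wx + h @ Wh + b              # (4H,) pre-activations
        i, f, g, o = np.split(z, 4)
        sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g                        # update cell state
        h = o * np.tanh(c)                       # update hidden state
    return h                                     # last hidden state

# Toy "video": 15 frames of 32x32 RGB noise (stands in for decoded frames).
video = rng.standard_normal((15, 32, 32, 3))
feats = extract_frame_features(video)            # (15, 128)

H = 64
Wx = rng.standard_normal((128, 4 * H)) * 0.01
Wh = rng.standard_normal((H, 4 * H)) * 0.01
b = np.zeros(4 * H)

h_last = lstm_forward(feats, Wx, Wh, b)          # (64,)

# A real/fake score would come from a dense layer + sigmoid on h_last.
w_out = rng.standard_normal(H) * 0.1
p_fake = 1.0 / (1.0 + np.exp(-(h_last @ w_out)))
print(feats.shape, h_last.shape, float(p_fake))
```

In practice the backbone would be a pretrained VGG-16 with its classifier head removed, applied per frame, and the whole CNN+LSTM stack would be trained end to end on labeled real/fake videos; this sketch only shows how the tensor shapes flow from frames to a single sequence-level probability.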