From Prediction to Explanation: Using Explainable AI to Understand Satellite-Based Riot Forecasting Models
This study investigates the application of explainable AI (XAI) techniques to understand the deep learning models used for predicting urban conflict from satellite imagery. First, a ResNet18 convolutional neural network achieved 89% accuracy in distinguishing riot and non-riot urban areas. Using the Score-CAM technique, regions critical to the model’s predictions were identified, and masking these areas caused a 20.9% drop in the classification accuracy, highlighting their importance. However, Score-CAM’s ability to consistently localize key features was found to be limited, particularly in complex, multi-object urban environments. Analysis revealed minimal alignment between the model-identified features and traditional land use metrics, suggesting that deep learning captures unique patterns not represented in existing GIS datasets. These findings underscore the potential of deep learning to uncover previously unrecognized socio-spatial dynamics while revealing the need for improved interpretability methods. This work sets the stage for future research to enhance explainable AI techniques, bridging the gap between model performance and interpretability and advancing our understanding of urban conflict drivers.
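The pipeline the abstract describes (a CNN classifier, Score-CAM saliency maps, then masking the salient regions and measuring the score drop) can be sketched generically. The following is a minimal illustrative sketch only, not the authors' code: the toy linear "classifier," the random input image, and the random activation maps are stand-ins, since the paper's ResNet18 weights and satellite data are not reproduced here.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical stand-ins for the real model and data.
rng = np.random.default_rng(0)
H = W = 8
n_maps = 4
W_readout = rng.normal(size=(2, H * W))      # 2 classes: riot / non-riot

def forward(img):
    """Toy classifier: flatten the image, apply a linear layer, softmax."""
    return softmax(W_readout @ img.ravel())

img = rng.random((H, W))                     # stand-in satellite patch
acts = rng.random((n_maps, H, W))            # stand-in activation maps,
                                             # already upsampled to input size

# Score-CAM idea: weight each normalized activation map by the target-class
# score obtained when the input is masked with that map, then sum the maps.
target = 0
raw_weights = []
for a in acts:
    m = (a - a.min()) / (np.ptp(a) + 1e-8)   # normalize map to [0, 1]
    raw_weights.append(forward(img * m)[target])
weights = softmax(np.array(raw_weights))
saliency = np.tensordot(weights, acts, axes=1)
saliency = np.maximum(saliency, 0)           # ReLU, as in Score-CAM

# Masking test analogous to the abstract's ablation: zero out the most
# salient pixels and compare the target-class score before and after.
thresh = np.quantile(saliency, 0.8)
masked = img * (saliency < thresh)
drop = forward(img)[target] - forward(masked)[target]
print(f"score change after masking top regions: {drop:.3f}")
```

In the paper this ablation is run over a test set and reported as a 20.9% accuracy drop; here the single-image score change merely illustrates the mechanics.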
Main Authors: | Scott Warnke; Daniel Runfola |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-01-01 |
Series: | Remote Sensing |
Subjects: | deep learning; convolutional neural networks; satellite imagery; explainable AI; land use/land cover |
Online Access: | https://www.mdpi.com/2072-4292/17/2/313 |
_version_ | 1832587545797459968 |
---|---|
author | Scott Warnke; Daniel Runfola |
author_sort | Scott Warnke |
collection | DOAJ |
description | This study investigates the application of explainable AI (XAI) techniques to understand the deep learning models used for predicting urban conflict from satellite imagery. First, a ResNet18 convolutional neural network achieved 89% accuracy in distinguishing riot and non-riot urban areas. Using the Score-CAM technique, regions critical to the model’s predictions were identified, and masking these areas caused a 20.9% drop in the classification accuracy, highlighting their importance. However, Score-CAM’s ability to consistently localize key features was found to be limited, particularly in complex, multi-object urban environments. Analysis revealed minimal alignment between the model-identified features and traditional land use metrics, suggesting that deep learning captures unique patterns not represented in existing GIS datasets. These findings underscore the potential of deep learning to uncover previously unrecognized socio-spatial dynamics while revealing the need for improved interpretability methods. This work sets the stage for future research to enhance explainable AI techniques, bridging the gap between model performance and interpretability and advancing our understanding of urban conflict drivers. |
format | Article |
id | doaj-art-0faf5a76e6914c219dfefb3cc0d1548c |
institution | Kabale University |
issn | 2072-4292 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Remote Sensing |
spelling | doaj-art-0faf5a76e6914c219dfefb3cc0d1548c | indexed 2025-01-24T13:48:05Z | eng | MDPI AG | Remote Sensing, ISSN 2072-4292, 2025-01-01, 17(2):313 | doi:10.3390/rs17020313 | Scott Warnke and Daniel Runfola, Department of Applied Sciences, William & Mary, Williamsburg, VA 23185, USA | https://www.mdpi.com/2072-4292/17/2/313 |
title | From Prediction to Explanation: Using Explainable AI to Understand Satellite-Based Riot Forecasting Models |
topic | deep learning; convolutional neural networks; satellite imagery; explainable AI; land use/land cover |
url | https://www.mdpi.com/2072-4292/17/2/313 |