Localizing regional signals of climate variability using integrated gradients

Authors

  • Hans Emmanuel H. Gamido, National Institute of Physics, University of the Philippines Diliman
  • Francis N. C. Paraan, National Institute of Physics, University of the Philippines Diliman

Abstract

The accuracy of deep learning models comes at the expense of the interpretability of their results. This creates an imperative to assess the validity of network predictions, particularly in fields requiring complex network architectures coupled with high data granularity for detection or forecasting. In this paper, we demonstrate a model interpretability pipeline using explainable artificial intelligence (XAI) for a climate dataset classification task. After benchmarking, we selected the best-performing algorithm (integrated gradients) to generate attribution maps that explain how an adopted convolutional neural network (CNN) made its predictions when classifying 2-m surface temperature maps according to their decade classes. Results showed that the integrated gradients algorithm was able to localize regional indicators of climate variability, with results similar to those of previous work that implemented a different XAI algorithm. The resulting composite heatmaps indicate relevant regions around the globe that were important to the CNN in assigning decade classes for their respective time periods. Moreover, this model interpretability pipeline mitigates bias in algorithm selection and extracts important spatiotemporal information from complex climate projections, making network decisions more outwardly interpretable.
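The integrated gradients method used in the abstract attributes a model's prediction to its input features by accumulating gradients along a straight path from a baseline input to the actual input. The sketch below is not the authors' implementation; it is a minimal NumPy illustration of the technique on a toy differentiable "score" function with an analytic gradient (the function, weights, and step count are all assumptions for demonstration). It also checks the completeness axiom, which states that the attributions sum to the difference between the model output at the input and at the baseline.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=200):
    """Approximate integrated gradients of a scalar function along the
    straight path from `baseline` to `x`, using a midpoint Riemann sum."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of the path
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        # Gradient of the score evaluated at an interpolated input
        total += grad_f(baseline + a * (x - baseline))
    # Scale the averaged gradient by the input-baseline difference
    return (x - baseline) * total / steps

# Toy "model": a smooth scalar score over a (flattened) input map.
w = np.array([0.5, -1.0, 2.0])            # assumed illustrative weights
f = lambda x: np.tanh(w @ x)              # scalar prediction score
grad_f = lambda x: (1 - np.tanh(w @ x) ** 2) * w  # analytic gradient

x = np.array([1.0, 0.5, -0.2])            # example input
baseline = np.zeros(3)                    # all-zeros baseline
attr = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline).
print(np.allclose(attr.sum(), f(x) - f(baseline), atol=1e-4))
```

In the paper's setting, `x` would be a flattened 2-m surface temperature map, `f` the CNN's logit for a decade class, and `attr` reshaped back into a per-pixel attribution heatmap; gradients would come from automatic differentiation rather than a closed form.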

Issue

Article ID

SPP-2024-PB-25

Section

Poster Session B (Complex Systems, Computational Physics, and Astrophysics)

Published

2024-07-01

How to Cite

[1]
HEH Gamido and FNC Paraan, Localizing regional signals of climate variability using integrated gradients, Proceedings of the Samahang Pisika ng Pilipinas 42, SPP-2024-PB-25 (2024). URL: https://proceedings.spp-online.org/article/view/SPP-2024-PB-25.