Abstract:
Predictive business process monitoring aims to enhance process execution by providing real-time predictions about the future evolution of a process instance. In recent years, several deep learning approaches, including those based on the transformer architecture, have been established as the state of the art for various predictive tasks. The transformer architecture is equipped with a powerful attention mechanism that assigns importance scores to each input element, guiding the model to focus on the most relevant parts of the sequence regardless of their position. This capability leads to more accurate and contextually grounded predictions. However, like most deep learning models, transformers largely operate as black boxes, making it challenging to trace how specific features influence the model’s predictions. In this paper, we conduct a series of experiments to examine the role of attention scores in a transformer-based next activity prediction model. Specifically, we investigate whether these scores provide meaningful explanations for the model’s decisions. Our findings reveal that attention scores can indeed serve as effective explanations. Building on these insights, we propose two novel, global, graph-based explanation approaches that illustrate the model’s understanding of the process’s control flow. Our evaluation using various metrics on both real-world and synthetic event logs demonstrates that these explainers effectively capture the model’s decision-making process. By improving interpretability, these insights not only enhance process participants’ confidence in predictive models but also offer a valuable foundation for refining model performance. Furthermore, our investigation into the reliability of attention scores sheds light on how transformer models encapsulate temporal and sequential dependencies in prediction tasks.