This repository contains the code used to generate the figures that appear in the paper "Interpretable Surrogate Modeling for Simulations: A State-of-the-Art Review and Perspectives on Explainable AI for Decision-Making".
Surrogate models play a central role in reducing the computational cost of simulating complex systems across engineering disciplines, yet their black‑box nature often hinders insight into how input variables drive system responses. This state‑of‑the‑art review surveys the integration of surrogate modeling with explainable artificial intelligence (XAI) techniques to enhance transparency and support design decision‑making. We first classify simulation paradigms—from physics‑based to agent‑based models—and describe how machine‑learning surrogates complement traditional analysis in design exploration and uncertainty quantification. We then review a wide spectrum of interpretability methods, including variance‑based sensitivity analysis, partial dependence plots, individual conditional expectation curves, local interpretable model‑agnostic explanations, Shapley value decomposition, active subspace methods, and quantile regression, highlighting their strengths for capturing interactions, handling high‑dimensional inputs and correlated variables, and facilitating human comprehension. Next, we propose a unified workflow that integrates experimental design, adaptive sampling, and reliability‑based optimization with interpretation layers. Practical case studies—ranging from hybrid‑electric aircraft sizing to social segregation simulations—demonstrate how these approaches guide co‑design among multidisciplinary teams. Finally, we identify key challenges, such as dynamic model explainability, mixed‑variable systems, transfer learning for extreme‑value estimation, and robustness of explanation metrics, and we outline a research agenda that positions interpretability as a core principle in simulation‑based engineering workflows.
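To make one of the surveyed ideas concrete, the sketch below illustrates partial dependence, one of the interpretability methods discussed in the review. It uses a hypothetical toy "simulator" (not from the paper) standing in for an expensive model or its surrogate, and recovers the average effect of one input by marginalizing over the other:

```python
import numpy as np

# Hypothetical toy response standing in for a simulator or surrogate:
# strong quadratic effect of x1, weak linear effect of x2 (assumption for illustration).
def simulator(x1, x2):
    return x1 ** 2 + 0.1 * x2

rng = np.random.default_rng(0)
x1 = rng.uniform(-1.0, 1.0, 1000)
x2 = rng.uniform(-1.0, 1.0, 1000)

# Partial dependence of the response on x1: for each grid value of x1,
# average the model output over the sampled distribution of x2.
grid = np.linspace(-1.0, 1.0, 21)
pd_x1 = np.array([simulator(g, x2).mean() for g in grid])

# The resulting curve recovers the quadratic trend in x1,
# since the small x2 contribution averages out to roughly zero.
print(pd_x1.round(2))
```

The same averaging loop applies unchanged if `simulator` is replaced by a trained surrogate's prediction function; individual conditional expectation curves are obtained by plotting the per-sample outputs instead of their mean.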