AI models can produce outputs with high accuracy. The trouble is that a highly complex model's output is simple, sometimes so simple that it frustrates us with its simplicity. We don't know what to do with it. Okay, this is the result, but why? Of course we still expect accurate predictions, but we also want to understand how the model arrives at that decision. This is why explainability is gaining importance in AI approaches…