Moreover, “explainable” is an open-ended term, and implementing XAI therefore brings other crucial notions into play. Embedding explainability in, or deriving it from, AI’s code and algorithms may be theoretically preferable but is practically problematic, because the prescriptive, rigid nature of algorithms and code clashes with the flexibility of open-ended terminology.
Indeed, when an AI system’s interpretability is assessed by examining the parameters and factors that most strongly shape a decision, questions arise as to what counts as “transparent” or “interpretable” AI, and how high such thresholds should be set.
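The tension can be made concrete with a minimal sketch, assuming a toy linear decision model whose weights, feature names, and values are all invented for illustration: ranking the factors that shape a decision is mechanical, but deciding which cutoff makes a factor “critical” remains an open, normative question that the code cannot answer by itself.

```python
# Hypothetical sketch: probing interpretability by ranking the parameters
# that most strongly shape a linear model's decision. All weights, feature
# names, and values are invented for illustration.

weights = {"income": 1.8, "age": -0.3, "postcode_risk": 0.05}

def decision(features):
    """Linear score: positive suggests approval, negative suggests rejection."""
    return sum(weights[name] * value for name, value in features.items())

def top_factors(features, k=2):
    """Rank features by |weight * value|, i.e. their contribution to the score.

    Note the open question the prose raises: the choice of k (or of any
    contribution threshold) is exactly the kind of flexible, open-ended
    judgment that resists being fixed in code.
    """
    contributions = {n: weights[n] * v for n, v in features.items()}
    return sorted(contributions, key=lambda n: abs(contributions[n]),
                  reverse=True)[:k]

applicant = {"income": 1.2, "age": 0.5, "postcode_risk": 3.0}
print(decision(applicant))        # → 2.16
print(top_factors(applicant))     # "income" dominates the decision
```

The ranking itself is deterministic, yet whether surfacing the top two factors makes the decision “interpretable”, or whether a contribution of 0.15 is negligible, are precisely the threshold questions the text identifies.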