Explainability (or interpretability) is the attempt to make the results of nonlinear models transparent, so that their predictions are not trapped in a black-box process. The approach covers several dimensions, the primary one being algorithmic transparency.
Frameworks
LIME - Local Interpretable Model-Agnostic Explanations. LIME explains an individual prediction by sampling perturbed inputs around the instance, weighting them by proximity, and fitting a simple surrogate model (typically linear) that is faithful to the black-box model locally.
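The local-surrogate idea behind LIME can be sketched in a few lines. This is a minimal one-feature illustration, not the real `lime` library: `black_box` is a hypothetical model chosen for the example, and the kernel width and sampling range are arbitrary assumptions.

```python
import math
import random

def black_box(x):
    # Hypothetical nonlinear model we want to explain (assumption for illustration).
    return x * x

def lime_1d(f, x0, n_samples=500, width=0.5, kernel_width=0.25, seed=0):
    """Sketch of LIME's core loop for a single feature:
    sample around x0, weight samples by proximity to x0,
    then fit a weighted linear surrogate y ~ a + b*x."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-width, width) for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    # Exponential proximity kernel: samples near x0 count more.
    ws = [math.exp(-((x - x0) ** 2) / (kernel_width ** 2)) for x in xs]
    # Closed-form weighted least squares.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    b = cov / var
    a = my - b * mx
    return a, b

a, b = lime_1d(black_box, x0=1.0)
# Near x0 = 1, f(x) = x^2 has slope ~2, so the surrogate slope b should be close to 2.
```

The surrogate's coefficient `b` is the "explanation": how the model responds to this feature in the neighbourhood of the instance, even though the global model is nonlinear.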
SHAP - SHapley Additive exPlanations. This framework takes a game-theory approach to AI explainability: each feature is treated as a player in a cooperative game, and its attribution is its Shapley value, i.e. its average marginal contribution to the prediction across all possible feature coalitions. Details of the framework can be found in this research paper: https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html
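The Shapley-value computation that SHAP approximates can be written exactly for a tiny game by enumerating all coalitions. This is a hedged sketch, not the `shap` library: the feature names and the additive payoff function below are invented purely for illustration.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, players):
    """Exact Shapley values by enumerating every coalition.
    value_fn(coalition: frozenset) -> payoff of that coalition.
    phi[p] = sum over coalitions S (without p) of
             |S|! * (n - |S| - 1)! / n! * (v(S + {p}) - v(S))."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coal in combinations(others, k):
                s = frozenset(coal)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(s | {p}) - value_fn(s))
        phi[p] = total
    return phi

# Hypothetical "model": the payoff is the sum of fixed per-feature contributions.
contrib = {"age": 2.0, "income": 3.0, "tenure": -1.0}
vals = shapley_values(lambda s: sum(contrib[f] for f in s), list(contrib))
# For an additive game, each feature's Shapley value equals its own contribution.
```

Enumeration costs O(2^n) coalitions, which is why the SHAP paper introduces efficient approximations (e.g. Kernel SHAP) rather than computing this sum directly for real models.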
How can we validate companies' claims about the robustness of their AI systems? Having prior knowledge of the problem we are trying to solve can help us select relevant features for modelling and mitigate bias.