Document worth reading: “The Mythos of Model Interpretability”
Supervised machine learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world? We want models to be not only good, but interpretable. And yet the task of interpretation appears underspecified. Papers provide diverse and sometimes non-overlapping motivations for interpretability, and offer myriad notions of what attributes render models interpretable. Despite this ambiguity, many papers proclaim interpretability axiomatically, absent further explanation. In this paper, we seek to refine the discourse on interpretability. First, we examine the motivations underlying interest in interpretability, finding them to be diverse and occasionally discordant. Then, we address model properties and techniques thought to confer interpretability, identifying transparency to humans and post-hoc explanations as competing notions. Throughout, we discuss the feasibility and desirability of different notions, and question the oft-made assertions that linear models are interpretable and that deep neural networks are not.

The Mythos of Model Interpretability