Deep Learning: No More Black Box
Deep Learning is often criticized for its black-box nature: it is hard to get a straightforward explanation of the model and its features, of the kind that classical machine learning or statistical models provide.
Why is Deep Learning a black box?
Unlike other techniques, Deep Learning creates its own features from the dataset. These features can be thought of as layers of neurons that activate or deactivate based on specific criteria, which makes it difficult to decode what the model has learned.
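To see why, note that the "features" a network learns are just tensors of floating-point weights; inspecting them directly reveals very little. A small PyTorch sketch (the architecture here is made up purely for illustration):

```python
import torch

# A tiny feed-forward network; the layer sizes are arbitrary.
net = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)

# The learned "features" are opaque weight matrices: 8 x 4 = 32 raw numbers
# in the first layer alone, with no self-evident meaning to a human reader.
weights = net[0].weight
print(weights.shape)  # torch.Size([8, 4])
```

This is the gap that interpretability libraries such as Captum try to bridge: they relate those raw weights and activations back to the inputs.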
Captum (Latin for "comprehension") is a model interpretability and understanding library for PyTorch.
Model developers can use it to improve and troubleshoot models, and ML researchers can easily implement interpretability algorithms that interact with PyTorch models. Captum also allows researchers to quickly benchmark their work against other algorithms already available in the library.
Captum is multi-modal: it supports various types of data, including text (NLP), images (computer vision), and more, and works with models such as BERT (NLP), ResNet trained on CIFAR (computer vision), and others.
Note: It is currently in beta and under active development!
Interpret-Text (by Microsoft) incorporates community-developed interpretability techniques for NLP models, along with a visualization dashboard to view the results.
It helps data scientists interpret their models and leverage the built-in dashboard for visualizing interpretations rather than creating them from scratch. Business executives (or stakeholders/users) can also use it to audit model predictions for potential unfairness.
Interpret-Text is not multi-modal and primarily supports NLP-based models (like BERT/RNN), but it offers a more intuitive visualization experience.
Check here for the GitHub repository.
PyTorch's co-founder has also hinted that advances in Deep Learning model interpretation could be the next breakthrough after BERT (2018) and ConvNets (2012), promising exciting times ahead for Deep Learning.
Refer to this URL for details.