This repository collects implementations of popular CNN explainability techniques. The goal is to make it easy to visualize which parts of an image a convolutional neural network focuses on when making predictions.
At the moment, only Class Activation Mapping (CAM) is implemented. More methods — including Grad-CAM, Grad-CAM++, Score-CAM, and others — will be added soon.
This project is a work in progress. Expect frequent updates as new explainability techniques are added.
CAM highlights the image regions that contribute most to a model’s prediction. It does this by taking a class-specific weighted sum of the final convolutional feature maps, using the weights that the classifier’s fully connected layer assigns to the target class; the result is upsampled to the input resolution and rendered as a heatmap.
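Below is a minimal sketch of that computation, assuming a PyTorch workflow with torchvision’s ResNet-18 as the backbone (the notebook’s actual model and preprocessing may differ); the function and variable names here are illustrative, not the repository’s API. CAM requires an architecture that ends in global average pooling followed by a single fully connected layer, which ResNet satisfies.

```python
# Illustrative CAM sketch (assumes PyTorch + torchvision; not the repo's exact code).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

feature_maps = []

def hook(module, inputs, output):
    # Capture the final convolutional feature maps, shape (1, 512, h, w).
    feature_maps.append(output.detach())

model.layer4.register_forward_hook(hook)

def compute_cam(image, class_idx=None):
    """Return (predicted class, CAM heatmap) for a preprocessed image.

    image: ImageNet-normalized tensor of shape (1, 3, H, W).
    """
    with torch.no_grad():
        logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    fmap = feature_maps.pop()[0]          # (C, h, w) feature maps
    weights = model.fc.weight[class_idx]  # (C,) FC weights for the class
    # CAM: per-pixel weighted sum of feature maps over channels.
    cam = torch.einsum("c,chw->hw", weights, fmap)
    # Min-max normalize to [0, 1] for display.
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    # Upsample to the input resolution so it can be overlaid on the image.
    cam = F.interpolate(cam[None, None], size=image.shape[-2:],
                        mode="bilinear", align_corners=False)
    return class_idx, cam[0, 0]
```

The resulting heatmap can then be overlaid on the original image (e.g., with matplotlib and some transparency) to see where the network is looking.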
Explore class_activation_mapping.ipynb to see CAM in action.
Contributions, suggestions, and ideas are welcome! Feel free to open an issue or submit a pull request.