Conversation

@naveenkcb (Contributor) commented Dec 29, 2025

Contributor: Naveen Baskaran

Contribution Type: Interpretability method, Tests, Example

Description
This PR implements the LIME (Local Interpretable Model-agnostic Explanations) interpretability method for PyHealth models, enabling users to understand which features contribute most to a model's predictions. LIME explains an individual prediction by generating perturbed variants of the input, weighting those samples by their proximity to the original input, and fitting an interpretable surrogate model (e.g., a weighted linear model) that approximates the black-box model locally.
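
For readers unfamiliar with the method, here is a minimal sketch of that perturb/weight/fit loop in plain NumPy and scikit-learn. It is illustrative only and independent of the actual pyhealth implementation in this PR; all names in it are mine, not the pyhealth API:

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict_fn, x, num_samples=1000, kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate to predict_fn in the neighborhood of x.

    predict_fn: callable mapping an (n, d) batch to (n,) scalar predictions.
    x: the single (d,) input to explain.
    Returns the surrogate's per-feature coefficients (the explanation).
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]

    # 1) Perturb: draw random binary masks over the features.
    masks = rng.integers(0, 2, size=(num_samples, d))
    perturbed = masks * x  # masked-out features replaced by a 0 baseline

    # 2) Weight: samples closer to the original input count more.
    distances = (d - masks.sum(axis=1)) / d  # fraction of features masked out
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)

    # 3) Fit an interpretable surrogate on the binary representation;
    #    its coefficients give each feature's local contribution.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, predict_fn(perturbed), sample_weight=weights)
    return surrogate.coef_
```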

Files to Review

  1. pyhealth/interpret/methods/__init__.py - exposes the new LIME explainer class
  2. pyhealth/interpret/methods/lime.py - core LIME implementation; supports embedding-based attribution and continuous features
  3. examples/lime_stagenet_mimic4.py - example script showing how to use the LIME method (a hypothetical usage sketch follows this list)
  4. tests/core/test_lime.py - comprehensive test cases covering the main class, utility methods, and attribution methods
  5. docs/api/interpret.rst - added LIME to the interpretability API docs
  6. docs/api/interpret/pyhealth.interpret.methods.lime.rst - new toctree entry for LIME
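
For reviewers, a rough sketch of how the example script would be invoked. The class name and method signature below are assumptions inferred from this PR's file layout, not the confirmed API:

```python
# Hypothetical usage, mirroring the layout of examples/lime_stagenet_mimic4.py.
from pyhealth.interpret.methods.lime import LIME  # module path from this PR

# `model` is assumed to be a trained PyHealth StageNet model and `batch`
# one batch drawn from a MIMIC-IV dataloader, as in the example script.
explainer = LIME(model)                    # hypothetical constructor
attributions = explainer.attribute(batch)  # hypothetical attribution call
print(attributions)  # per-feature contribution scores for the prediction
```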

@jhnwu3 (Collaborator) left a comment

Looks like you accidentally duplicated the SHAP changes into the same PR. You'll need to resolve the merge conflict.

@naveenkcb force-pushed the feature/lime-interpret branch from 90c4424 to d24ecfa on December 29, 2025 at 22:23
@naveenkcb (Contributor, Author) commented:

@jhnwu3 - I fixed this PR's commits to keep only the LIME-related changes. It is ready for review now.
