Conversation

@rijobro rijobro commented Jan 11, 2021

Last update to occlusion sensitivity.

Description

@wyli this finally behaves as I expect and I'll upload the tutorials to match.

Summary of the changes (see the sketch after this list):

  • The output is left without any postprocessing. The absolute values are useful, as they indicate how much the inferred value changes when part of the image is occluded.
  • The output now has an extra dimension of size N added to the end, where N is the number of classes.
  • The most probable class is also returned, i.e., when part of the image is occluded, does the predicted class change, and if so, to what?
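
A minimal sketch of how the new outputs might be consumed, assuming monai.visualize.OcclusionSensitivity wrapping a small MONAI classifier; the variable names, the 3-class DenseNet121, and the exact return pair are illustrative assumptions rather than the final API:

import torch
from monai.networks.nets import DenseNet121
from monai.visualize import OcclusionSensitivity

# hypothetical 2D classifier with 3 output classes (illustrative only)
model = DenseNet121(spatial_dims=2, in_channels=1, out_channels=3).eval()
occ_sens = OcclusionSensitivity(nn_module=model)

x = torch.rand(1, 1, 64, 64)          # batch size is always 1
occ_map, most_probable = occ_sens(x)  # assumed to return both new outputs

# occ_map: raw (un-postprocessed) sensitivity values with an extra trailing
#          dimension of size N (here N = 3, the number of classes)
# most_probable: for each occluded region, the class the model now predicts
print(occ_map.shape, most_probable.shape)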

Status

Ready/Work in progress/Hold

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Integration tests passed locally by running ./runtests.sh --codeformat --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

@wyli wyli left a comment

thanks -- some minor comments in line


    def __call__(  # type: ignore
-       self, x: torch.Tensor, class_idx: Optional[Union[int, torch.Tensor]] = None, b_box: Optional[Sequence] = None
+       self,

@wyli (Contributor):
I thought with non-empty class_idx option it'll save some memory here?

@rijobro (Contributor, Author):

There's only a marginal increase in memory: the output image might be, e.g., (64, 64, 64, 3) instead of (64, 64, 64) for 3D, and even smaller for 2D. Since the batch size is always 1, the images will always be small, and it seems a shame to discard information that is generated anyway during the inference step at no extra cost.
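
To put rough numbers on that, a quick back-of-the-envelope comparison (the shapes, the 3 classes, and float32 storage are illustrative assumptions, not measurements):

import torch

n_classes = 3
per_class = torch.zeros(64, 64, 64, n_classes)  # sensitivity map kept for every class
single = torch.zeros(64, 64, 64)                # map for one chosen class only

# float32: 64 * 64 * 64 * 4 bytes = 1 MiB per class, so ~3 MiB vs ~1 MiB here
print(per_class.element_size() * per_class.nelement() / 2**20, "MiB")
print(single.element_size() * single.nelement() / 2**20, "MiB")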

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

@wyli wyli left a comment

thanks!

@wyli wyli merged commit 70483b6 into Project-MONAI:master Jan 12, 2021