
Conversation

@juampatronics
Contributor

Fixes #7697.

Description

A few sentences describing the changes proposed in this pull request.

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

self.assertEqual(output.shape, sample.shape)
self.assertTrue(any(not torch.allclose(sample, cutmix(sample)) for _ in range(10)))
# croppings are different on each application... most of the time!
checks = [torch.allclose(sample, cutmix(sample)) for _ in range(1000)]
Contributor

While this check might help mitigate the issue, it's not addressing the core problem. After set_determinism(seed=0), we should guarantee consistent transform results every time.

cc @ericspod
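
As a minimal sketch of the behaviour being asked for here, assuming the array-style CutMix transform from monai.transforms and an arbitrary batch shape (both illustrative, not taken from this PR), the expectation is roughly:

import torch

from monai.transforms import CutMix
from monai.utils import set_determinism

sample = torch.rand(4, 1, 32, 32)         # hypothetical batch of single-channel images
cutmix = CutMix(batch_size=4, alpha=1.0)

set_determinism(seed=0)
first = cutmix(sample)

set_determinism(seed=0)
second = cutmix(sample)

# With the seed actually reaching the transform's randomness, two runs under
# the same seed should produce identical outputs.
assert torch.allclose(first, second)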

Member

If we're setting a seed before the test then we should get the same results every time. Any behaviour dependent on a random variable should be controlled by this seed so that we can enforce determinism in cases like this. These transforms should inherit from RandomizableTransform but also set the random state of the internal delegate transform appropriately, like here. This ensures the seed value is propagated to the delegate.

Sorry this wasn't discussed in the initial PR for these transforms; it's something we missed.
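
As a rough illustration of the delegate pattern described above (a sketch under the assumption that the array transform exposes Randomizable.set_random_state, not the actual fix in this repository), a dictionary-style wrapper could forward its random state like this:

from monai.transforms import CutMix, RandomizableTransform


class CutMixWrapperd(RandomizableTransform):
    # Hypothetical wrapper name; illustrates forwarding the random state to the
    # internal delegate transform.

    def __init__(self, keys, batch_size, alpha=1.0):
        super().__init__()
        self.keys = keys
        self.mixer = CutMix(batch_size=batch_size, alpha=alpha)  # delegate transform

    def set_random_state(self, seed=None, state=None):
        super().set_random_state(seed, state)
        # Propagate the seed/state so set_determinism and set_random_state reach
        # the delegate as well, instead of leaving it with independent randomness.
        self.mixer.set_random_state(seed, state)
        return self

    def __call__(self, data):
        d = dict(data)
        for key in self.keys:
            d[key] = self.mixer(d[key])
        return d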

Contributor Author

Thank you @ericspod for the detailed explanation of how to implement it correctly.

Also, I still think the unit test was missing the case where no transform happens because the mixing only involves one of the samples. That case is now tested explicitly, and it is checked that it happens as often as expected, so I would leave it as it is now.

Best

Member

This does test that the mixing is a no-op an expected number of times, but I feel that's symptomatic of the tests not being deterministic. The issue is that for a given seed we should see the exact same behaviour, so we should be able to choose a seed that gives a no-op for a fixed input and test that this continues to happen (changing the seed as needed). This test may pass if the no-op occurs correctly a known number of times, but it could also pass if there's a bug causing no-ops at the wrong times while still staying within your threshold; we aren't directly testing that a no-op occurs. Tests should always be deterministic, and if we fix the MixUp/CutMix classes to be deterministic we can do that here as well.
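
A deterministic version of the test along these lines might look like the sketch below; the seed value, tensor shapes, and the claim that this particular seed yields a no-op are placeholders to be checked against the real transform, and it assumes CutMix exposes set_random_state once it inherits from RandomizableTransform:

import unittest

import torch
from monai.transforms import CutMix


class TestCutMixDeterministic(unittest.TestCase):
    def test_noop_for_known_seed(self):
        sample = torch.rand(2, 1, 16, 16)
        cutmix = CutMix(batch_size=2, alpha=1.0)

        # Hypothetical seed chosen so the sampled mixing leaves the batch
        # unchanged; with a deterministic transform this either always holds
        # or never does, so the assertion cannot pass by accident.
        cutmix.set_random_state(seed=42)
        self.assertTrue(torch.allclose(sample, cutmix(sample)))


if __name__ == "__main__":
    unittest.main()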

Signed-off-by: Juan Pablo de la Cruz Gutiérrez <juampatronics@gmail.com>

I, Juan Pablo de la Cruz Gutiérrez <juampatronics@gmail.com>, hereby add my Signed-off-by to this commit: bc6dea3

Signed-off-by: Juan Pablo de la Cruz Gutiérrez <juampatronics@gmail.com>
@KumoLiu
Contributor

KumoLiu commented Jul 30, 2024

Already fixed in #7813.

@KumoLiu KumoLiu closed this Jul 30, 2024


Development

Successfully merging this pull request may close these issues.

AssertionError in test_regularization
