
Add absolute tolerance to test_net.py to prevent random Travis fails #5973

Merged

Noiredd merged 1 commit into BVLC:master from Noiredd:pytest on Oct 19, 2017

Conversation

@Noiredd (Member) commented Oct 11, 2017

Follow-up on issue #5960 - fixed by adding an absolute tolerance of 1e-5 to `test_forward_start_end()` and `test_backward_start_end()` in test_net.py. I ran `python -m unittest discover -s caffe/test -p "test_net.py"` against the current master 500 times and no random failures occurred.

Additionally, I removed the semicolons that ended each line in those two functions - they looked out of place.
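For context, a minimal sketch of why an absolute tolerance matters here (the numbers below are illustrative, not taken from test_net.py): when a reference value lands near zero, a purely relative comparison can reject differences that are numerically meaningless, while a small `atol` absorbs them.

```python
import numpy as np

# Two values that differ only by float-level noise near zero.
a = np.float32(1e-7)
b = np.float32(1.1e-7)

# Relative-only check: |a - b| <= rtol * |b| fails, because
# rtol * |b| is far smaller than the absolute difference.
print(np.allclose(a, b, rtol=1e-4, atol=0))     # False

# Adding a small absolute tolerance, as this PR does with 1e-5,
# makes the comparison robust to near-zero outputs.
print(np.allclose(a, b, rtol=1e-4, atol=1e-5))  # True
```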

@shaibagon (Member) commented

LGTM

@Noiredd (Member, Author) commented Oct 11, 2017

> Why did you opt for adding atol instead of slightly increasing the random input numbers?

I explained that in the comment under the issue itself. This test concerns not only randomly generated data but also the weights of the InnerProduct layer in the test net. We can increase the input data slightly, but the weights (drawn from a normal distribution) can still turn out such that some outputs are very small.
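A quick illustration of that point (the shapes and seed are hypothetical, not the actual test net): even with inputs bounded well away from zero, an InnerProduct-style output is a dot product with Gaussian weights, so some outputs can still land arbitrarily close to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 1.5, size=10)         # inputs kept away from zero
W = rng.normal(0.0, 1.0, size=(1000, 10))  # Gaussian weights
y = W @ x                                  # InnerProduct-style outputs

# Some outputs still fall very close to zero, so a relative-only
# tolerance on them stays fragile no matter how the inputs are scaled.
print(np.abs(y).min())
```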

@shaibagon (Member) commented

@Noiredd what's holding you back from merging this PR?

Noiredd merged commit b4ffad8 into BVLC:master on Oct 19, 2017
Noiredd deleted the pytest branch on October 19, 2017 at 07:54
@shelhamer (Member) commented

Thanks for addressing these spurious flaky tests!
