This repository was archived by the owner on Feb 7, 2025. It is now read-only.
Thanks for this amazing work; it helps a lot in accelerating experiments!
I tried training an AE with a PatchDiscriminator and ran into the following issue when switching to DistributedDataParallel (DDP):
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [512]] is at version 4; expected version 3 instead.
Running it with torch.autograd.set_detect_anomaly(True) gives:
UserWarning: Error detected in CudnnBatchNormBackward0
After some troubleshooting, I found that the issue is the BatchNorm. So running
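For context, the version-counter error that the anomaly detector attributes to BatchNorm can be reproduced even without DDP. Below is a minimal CPU-only sketch (my own illustration, not the poster's code): BatchNorm saves its weight tensor for the backward pass, so any in-place mutation of it between forward() and backward() trips autograd's version check with exactly this kind of error.

```python
import torch
import torch.nn as nn


def reproduce_inplace_error():
    """Return the RuntimeError raised when a tensor saved for backward
    is mutated in place before backward() runs, or None if no error."""
    torch.manual_seed(0)
    bn = nn.BatchNorm1d(4)
    x = torch.randn(8, 4, requires_grad=True)
    # Forward pass: autograd saves bn.weight (among others) for backward.
    loss = bn(x).sum()
    # Mutate a saved tensor in place before backward() runs. This mimics
    # what can happen in DDP GAN training, e.g. an interleaved optimizer
    # step, or DDP's default broadcast_buffers=True rewriting BatchNorm
    # buffers in place on every forward.
    with torch.no_grad():
        bn.weight.add_(1.0)  # in-place update bumps the version counter
    try:
        loss.backward()
    except RuntimeError as err:
        # "one of the variables needed for gradient computation has been
        # modified by an inplace operation: ..."
        return err
    return None


if __name__ == "__main__":
    print(reproduce_inplace_error())
```

Under DDP, commonly suggested workarounds for this class of error are passing broadcast_buffers=False to DistributedDataParallel (so BatchNorm buffers are not rewritten in place on each forward) or replacing BatchNorm with a norm layer that keeps no running statistics; whether either applies here depends on how the discriminator and generator steps are interleaved.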