During training, the consistency loss keeps increasing while the other losses keep decreasing. Is this normal? Is it the expected behavior?
# Ramp up the consistency weight as training progresses (mean-teacher style)
consistency_weight = get_current_consistency_weight(iter_num // len(loader_train_s), max_epoch)
# Per-pixel distance between student predictions on the unlabeled part of the
# batch and the EMA teacher's output
consistency_dist = consistency_criterion(predout_t[train_params['labeled_bs']:], ema_output)  # (batch, 3, 256, 256)
consistency_dist = torch.mean(consistency_dist)
consistency_loss = consistency_dist * consistency_weight
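One thing worth checking: `consistency_loss` is the product of the distance and a ramped-up weight, so it can grow over training even when the raw distance is flat or shrinking. A typical mean-teacher schedule looks like the sketch below (this is an assumption about what `get_current_consistency_weight` does; the `consistency=0.1` scale is a hypothetical hyperparameter, not taken from this code):

```python
import numpy as np

def sigmoid_rampup(current, rampup_length):
    """Exponential sigmoid ramp-up from 0 to 1 over `rampup_length` epochs."""
    if rampup_length == 0:
        return 1.0
    current = np.clip(current, 0.0, rampup_length)
    phase = 1.0 - current / rampup_length
    return float(np.exp(-5.0 * phase * phase))

def get_current_consistency_weight(epoch, max_epoch, consistency=0.1):
    # Weight starts near 0 and ramps up to `consistency` by `max_epoch`
    return consistency * sigmoid_rampup(epoch, max_epoch)
```

Logging `consistency_dist` and `consistency_weight` separately would show whether the increase comes from the distance itself or just from the ramp-up schedule.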