Thanks for your impressive works.
I use your codes and run command
./run_train.sh
When it comes to training the model on FlyingThings, I find the performance on Sintel (train) is much better than the results you report in the paper's Table 1. The only difference is that I changed the batch size to 4 due to memory limits. The logs are shown below:
[Screenshots of the training-loss curves: blue = FlyingThings run, orange = FlyingChairs run]
The experiment name of the blue curve is: things/latentcostformer/cost_heads_num[1]vert_c_dim[64]cnet[twins]pretrain[True]add_flow_token[True]encoder_depth[3]gma[GMA]cost_encoder_res[True] (08_19_21_07), and the orange curve corresponds to the logs from training on FlyingChairs: chairs/latentcostformer/cost_heads_num[1]vert_c_dim[64]cnet[twins]pretrain[True]add_flow_token[True]encoder_depth[3]gma[True]cost_encoder_res[True]arxiv2 (08_17_16_47).
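As an aside, one common way to keep the effective batch size from the paper while fitting in less memory is gradient accumulation. The sketch below is not from this repo; it is a minimal numeric illustration (hypothetical names `mean_grad`, `accumulated_grad`, toy objective f(x) = x²) showing that averaging the gradients of several micro-batches reproduces the gradient of the full batch.

```python
# Hypothetical sketch (not part of this repo): gradient accumulation
# lets micro-batches of 4 emulate a larger batch by averaging their
# gradients before each optimizer step.

def mean_grad(xs):
    """Gradient of f(x) = x^2 averaged over a batch: df/dx = 2x."""
    return sum(2 * x for x in xs) / len(xs)

def accumulated_grad(batch, accum_steps):
    """Split the batch into accum_steps micro-batches and average
    their per-micro-batch gradients, as an accumulation loop would."""
    size = len(batch) // accum_steps
    grads = [mean_grad(batch[i * size:(i + 1) * size])
             for i in range(accum_steps)]
    return sum(grads) / accum_steps

batch = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
# One step on the full batch of 8 matches two accumulated steps of 4.
print(mean_grad(batch) == accumulated_grad(batch, 2))  # → True
```

In a real PyTorch loop this corresponds to calling `loss.backward()` on each micro-batch (which sums gradients in place) and calling `optimizer.step()` / `optimizer.zero_grad()` only every `accum_steps` iterations; note it does not change batch-norm statistics, so results may still differ slightly.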
So, I would like to know why the performance differs from the one reported in the paper. Is there any difference in the code?
Looking forward to your kind reply!