Description
Excuse me, when I train pvrcnn_st3d on nuScenes with a single GPU, everything works fine. However, when I launch multi-GPU training with the following command, the code fails to run and the GPUs are not utilized.
The command:

```shell
bash scripts/dist_train.sh 8 --cfg_file cfgs/da-waymo-nus_models/pvrcnn_st3d/pvrcnn_st3d.yaml --batch_size 16 --pretrained_model /data/zhaoshenao/code/ST3D/ckpt/pre-models/da-waymo-nus/st3d/st3d_pvrcnn.pth
```
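For reference, a sketch of the same launch with more verbose distributed logging enabled (assuming the NCCL backend; `NCCL_DEBUG` and `TORCH_DISTRIBUTED_DEBUG` are standard NCCL/PyTorch environment variables, and the paths are the ones from the command above):

```shell
# Re-run the same launch with NCCL and torch.distributed debug output,
# which often reveals where a multi-GPU job hangs (e.g. rendezvous or
# inter-GPU communication failures).
NCCL_DEBUG=INFO \
TORCH_DISTRIBUTED_DEBUG=DETAIL \
bash scripts/dist_train.sh 8 \
    --cfg_file cfgs/da-waymo-nus_models/pvrcnn_st3d/pvrcnn_st3d.yaml \
    --batch_size 16 \
    --pretrained_model /data/zhaoshenao/code/ST3D/ckpt/pre-models/da-waymo-nus/st3d/st3d_pvrcnn.pth
```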
The code gets stuck at this output:

```
FutureWarning,
WARNING:torch.distributed.run:
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
```
I can't figure out what the problem is. Could you give me some suggestions?