Description
Thanks for your excellent work! I am trying to reproduce the project on the IARPA dataset with an NVIDIA RTX 4090. The data loading phase completes successfully, but the training process then hangs and produces no further logs for several hours. I verified that the environment is functional, since run_JAX_RGB.sh (JAX_004) runs perfectly fine on the same machine.
Environment
- OS: Ubuntu 22.04
- GPU: NVIDIA RTX 4090
- Python: 3.9
- PyTorch: 2.3.1+cu121
- CUDA: 12.1
Steps to Reproduce
- Update data paths (a quick existence check for these directories is included after the terminal output below)
# input paths
datasetdir=/media/shiki/4TB_SSD/eonerf_code/Datasets
root_dir=$datasetdir"/SatNeRF_IARPA/root_dir/rpcs_ba/"$aoi_id
cache_dir=$datasetdir"/SatNeRF_IARPA/cache_dir_utm/rpcs_ba/"$aoi_id"_ds"$downsample_factor
img_dir=$datasetdir"/SatNeRF_IARPA/crops/"$aoi_id
gt_dir=$datasetdir"/SatNeRF_IARPA/Truth"
shadow_masks_dir=$datasetdir"/SatNeRF_IARPA/Shadows-pred_v2/"$aoi_id
# output paths
out_dir="/media/shiki/4TB_SSD/eonerf_code/eonerfacc_logs_latest"
logs_dir=$out_dir/logs
ckpts_dir=$out_dir/ckpts
errs_dir=$out_dir/errs
mkdir -p $errs_dir
errs="$aoi_id"_errors.txt
- Run the training script
bash run_IARPA.sh IARPA_001
- Terminal output:
Running 2025-11-26_15-43-31_IARPA_001_IARPA_eonerf - Using gpu 0
Image 30JUN15WV031000015JUN30135323-M1BS-500497282080_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 1 / 25 )
Image 04MAY15WV031000015MAY04135349-M1BS-500497283060_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 2 / 25 )
Image 15MAR15WV031000015MAR15140133-M1BS-500497284070_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 3 / 25 )
Image 18DEC15WV031000015DEC18140455-M1BS-500515572010_01_P001_________GA_E0AAAAAAKAAI0 loaded ( 4 / 25 )
Image 11JAN15WV031000015JAN11135414-M1BS-500497283010_01_P001_________GA_E0AAAAAAKAAL0 loaded ( 5 / 25 )
Image 18DEC15WV031000015DEC18140554-M1BS-500515572030_01_P001_________GA_E0AAAAAAKAAI0 loaded ( 6 / 25 )
Image 23OCT15WV031100015OCT23141928-M1BS-500497285030_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 7 / 25 )
Image 12JUN15WV031000015JUN12140737-M1BS-500497284090_01_P001_________GA_E0AAAAAAKAAJ0 loaded ( 8 / 25 )
Image 03APR15WV031000015APR03140238-M1BS-500497283030_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 9 / 25 )
Image 24JUN15WV031000015JUN24135730-M1BS-500497285080_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 10 / 25 )
Image 02APR15WV031000015APR02134802-M1BS-500276959010_02_P001_________GA_E0AAAAAAKAAH0 loaded ( 11 / 25 )
Image 18DEC15WV031000015DEC18140510-M1BS-500515572040_01_P001_________GA_E0AAAAAAKAAI0 loaded ( 12 / 25 )
Image 03OCT15WV031000015OCT03140452-M1BS-500497283050_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 13 / 25 )
Image 21MAR15WV031000015MAR21135704-M1BS-500497282060_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 14 / 25 )
Image 14SEP15WV031000015SEP14140305-M1BS-500497285020_01_P001_________GA_E0AAAAAAKAAL0 loaded ( 15 / 25 )
Image 22APR15WV031000015APR22140347-M1BS-500497282070_01_P001_________GA_E0AAAAAAKAAL0 loaded ( 16 / 25 )
Image 19JUL15WV031000015JUL19135438-M1BS-500497285010_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 17 / 25 )
Image 11FEB15WV031000015FEB11135123-M1BS-500497282030_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 18 / 25 )
Image 01SEP15WV031000015SEP01135603-M1BS-500497284040_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 19 / 25 )
Image 14JUL15WV031000015JUL14141509-M1BS-500497283020_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 20 / 25 )
Image 11DEC15WV030900015DEC11135506-M1BS-500510591010_01_P001_________GA_E0AAAAAAKAAI0 loaded ( 21 / 25 )
Image 18DEC15WV031000015DEC18140544-M1BS-500515572060_01_P001_________GA_E0AAAAAAKAAI0 loaded ( 22 / 25 )
Image 05JAN15WV031000015JAN05135727-M1BS-500497282040_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 23 / 25 )
Image 08MAR15WV031000015MAR08134953-M1BS-500497284060_01_P001_________GA_E0AAAAAAKAAK0 loaded ( 24 / 25 )
Image 02APR15WV031000015APR02134718-M1BS-500497282050_01_P001_________GA_E0AAAAAAKAAI0 loaded ( 25 / 25 )
datasets successfully loaded
occupancy grid is ready
tensorboard log is ready
/home/shiki/anaconda3/envs/eonerf/lib/python3.9/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
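For reference, data loading itself clearly works (all 25 images are found), so missing input directories seem unlikely to be the cause; still, this is the quick existence check mentioned in the steps above, in case it is useful (path values from my configuration, aoi_id=IARPA_001; cache_dir is omitted because $downsample_factor is set inside run_IARPA.sh):
# quick sanity check that the dataset directories referenced by run_IARPA.sh exist
datasetdir=/media/shiki/4TB_SSD/eonerf_code/Datasets
aoi_id=IARPA_001
for d in \
  "$datasetdir/SatNeRF_IARPA/root_dir/rpcs_ba/$aoi_id" \
  "$datasetdir/SatNeRF_IARPA/crops/$aoi_id" \
  "$datasetdir/SatNeRF_IARPA/Truth" \
  "$datasetdir/SatNeRF_IARPA/Shadows-pred_v2/$aoi_id"; do
  [ -d "$d" ] && echo "OK      $d" || echo "MISSING $d"
done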
Observations & Debugging Info
- No further logs were generated afterward; even after waiting several hours, training was still stuck at the first step.
- GPU utilization is high: nvidia-smi shows ~98% utilization and about 9 GB of VRAM in use.
Any suggestions or guidance on how to debug this would be greatly appreciated.
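If a stack trace of the hung process would help, I am happy to capture and attach one; this is a sketch of how I would grab it with py-spy while training is stuck (assuming py-spy can attach to the process; <PID> is the PID of the training process, e.g. from nvidia-smi or ps):
pip install py-spy
# Python-level stack of every thread in the hung process
py-spy dump --pid <PID>
# include native (C/CUDA extension) frames; may need sudo depending on ptrace permissions
sudo py-spy dump --native --pid <PID>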