@benoit-cty - Can we hold off on 2.0 release?
I'm now testing on the live multi-node setup and I'm getting bombarded by:
WARNING:apscheduler.scheduler:Execution of job "BaseEmissionsTracker._measure_power (trigger: interval[0:00:15], next run at: 2021-08-25 23:09:23 CEST)" skipped: maximum number of running instances reached (1)
WARNING:apscheduler.scheduler:Execution of job "BaseEmissionsTracker._measure_power (trigger: interval[0:00:15], next run at: 2021-08-25 23:09:38 CEST)" skipped: maximum number of running instances reached (1)
WARNING:apscheduler.scheduler:Execution of job "BaseEmissionsTracker._measure_power (trigger: interval[0:00:15], next run at: 2021-08-25 23:09:53 CEST)" skipped: maximum number of running instances reached (1)
WARNING:apscheduler.scheduler:Execution of job "BaseEmissionsTracker._measure_power (trigger: interval[0:00:15], next run at: 2021-08-25 23:10:08 CEST)" skipped: maximum number of running instances reached (1)
WARNING:apscheduler.scheduler:Execution of job "BaseEmissionsTracker._measure_power (trigger: interval[0:00:15], next run at: 2021-08-25 23:10:23 CEST)" skipped: maximum number of running instances reached (1)
WARNING:apscheduler.scheduler:Execution of job "BaseEmissionsTracker._measure_power (trigger: interval[0:00:15], next run at: 2021-08-25 23:10:38 CEST)" skipped: maximum number of running instances reached (1)
WARNING:apscheduler.scheduler:Execution of job "BaseEmissionsTracker._measure_power (trigger: interval[0:00:15], next run at: 2021-08-25 23:10:53 CEST)" skipped: maximum number of running instances reached (1)
WARNING:apscheduler.scheduler:Execution of job "BaseEmissionsTracker._measure_power (trigger: interval[0:00:15], next run at: 2021-08-25 23:11:08 CEST)" skipped: maximum number of running instances reached (1)
every few seconds.
What is wrong and how do I fix this?
This problem wasn't there when I last tested your PR on my machine (a single node), so perhaps it's something specific to the target HPC.
You can see how the tracker has been initialized here:
https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/350fb903935f900f400f140d157165d5fa4d7645/megatron/global_vars.py#L170-L175
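For context, my understanding (an assumption, not confirmed from the codecarbon code) is that APScheduler's default `max_instances=1` makes it skip a scheduled run while the previous run of the same job is still executing, so this warning would mean `_measure_power` is taking longer than the 15 s interval on the multi-node setup. A stdlib-only sketch of that skip behavior (the names `measure_power`, `run_interval`, and `skipped` are illustrative, not codecarbon or APScheduler API):

```python
import threading
import time

def measure_power():
    # Simulate a measurement that takes longer than the interval.
    time.sleep(0.3)

skipped = []

def run_interval(job, interval, ticks):
    # Mimic a scheduler with max_instances=1: a tick is skipped
    # if the previous run of the job has not finished yet.
    running = threading.Event()

    def wrapper():
        if running.is_set():
            skipped.append(time.monotonic())  # "skipped: maximum number of running instances reached"
            return
        running.set()
        try:
            job()
        finally:
            running.clear()

    for _ in range(ticks):
        threading.Thread(target=wrapper).start()
        time.sleep(interval)

run_interval(measure_power, 0.1, 5)
print(len(skipped))  # at least one tick gets skipped
```

If that reading is right, the fix would be either making the measurement cheaper/faster on this HPC or lengthening the measurement interval, rather than anything in the scheduler itself.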
Thank you!