```go
func (r *LeaderElection) CanBeLeader(cluster api.Cluster, pod *v1.Pod) bool {
	if pod.GetDeletionTimestamp() != nil {
		return false
	}

	if !utils.IsPodRunning(pod) {
		return false
	}

	if !cluster.IsBootstrapped() {
		return utils.IsPodDefaultContainerReady(pod)
	}

	return utils.IsPodReady(pod)
}
```
Expected behavior
When the cluster is already bootstrapped, any pod in which Cartridge is functioning and that has a non-empty config should be electable as leader.
The pod Ready condition may depend on a readiness probe, which can differ significantly between clusters or even between replicasets.
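To make the proposed criterion concrete, here is a minimal, self-contained sketch of an election predicate that, once the cluster is bootstrapped, relies on Cartridge health and a non-empty config instead of the probe-driven Ready condition. The `PodState` struct and the `CartridgeAlive`/`HasConfig` signals are hypothetical simplifications for illustration, not types or helpers from the operator:

```go
package main

import "fmt"

// PodState is a hypothetical, flattened view of the signals the operator
// could consult; it stands in for the real Pod/Cluster objects.
type PodState struct {
	Deleting       bool // deletion timestamp is set
	Running        bool // pod phase is Running
	Ready          bool // kubelet Ready condition (readiness probe)
	CartridgeAlive bool // cartridge process responds (e.g. via its HTTP API)
	HasConfig      bool // clusterwide config is non-empty
}

// canBeLeader mirrors the rule described above: for a bootstrapped cluster,
// prefer cartridge health plus a non-empty config over the probe-driven
// Ready condition, which varies between clusters and replicasets.
func canBeLeader(bootstrapped bool, p PodState) bool {
	if p.Deleting || !p.Running {
		return false
	}
	if !bootstrapped {
		// Before bootstrap no pod has a config yet; container
		// readiness is the only signal available.
		return p.Ready
	}
	return p.CartridgeAlive && p.HasConfig
}

func main() {
	// A pod that lost its config must not win the election,
	// even though its readiness probe passes.
	lostConfig := PodState{Running: true, Ready: true, CartridgeAlive: true}
	fmt.Println(canBeLeader(true, lostConfig)) // false

	// A pod with a working cartridge and a config can lead even if
	// its (cluster-specific) readiness probe has not passed yet.
	hasConfig := PodState{Running: true, CartridgeAlive: true, HasConfig: true}
	fmt.Println(canBeLeader(true, hasConfig)) // true
}
```

This avoids both failure modes below: election no longer waits on a probe that itself waits on the config, and a ready-but-config-less pod is never chosen.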
Actual behavior
Two scenarios are possible:
- Deadlock: the operator cannot choose a leader to apply an important config, while the pods cannot become ready without that config.
- The operator relies solely on the Ready status and elects as leader a pod that has lost its config data. The config may then fail to reach the entire cluster, or may partially overwrite the current one.
Environment
- Tarantool Operator 1.0.0-rc1
Related: #116
For some reason that ticket was closed, but the problem remains unfixed: nodes are still added to the topology one by one rather than all at once, as the cartridge CLI does.
(The snippet above is tarantool-operator/pkg/election/election.go, lines 158 to 172 at 9cb8b09.)