HDDS-5413. Limit num of containers to process per round for ReplicationManager. #2395
bshashikant left a comment:
The idea looks good to me, but I would like to have @nandakumar131's opinion on it.
I'm not sure how well this will work in practice. Let's say we have 1M containers and the default limit here is 10,000. The default RM sleep interval is 5 minutes, so it will take 100 iterations, i.e. 500 minutes, to process all the containers. If we lose one node on the cluster, some percentage of these containers will be under-replicated. Let's say we have a 100-node cluster: 1M containers * 3 replicas = 3M replicas / 100 DNs = 30k per DN. In theory we could process them all in 3 iterations, but this change will not do that.

We also have to consider decommission. In the example above, it might take up to 500 minutes for all the containers on the decommissioning node to get processed. Worse, for maintenance mode there may be only a handful of containers that need to be replicated before the node can go to maintenance, but that could be delayed by 500 minutes due to the batch size.

RM has problems for sure - I don't like the way it calculates all the replication work for all containers and then sends all of it down to the DNs. The DNs don't really give feedback; the attempts time out on the RM (by default after 10 minutes) and it schedules more. I feel we need some way to hold back the work in SCM and feed it to the DNs as they have capacity to accept it. That might mean the DNs reporting their pending replication tasks via their heartbeat, and the RM not scheduling more commands until a DN has free capacity.

Some DNs on the cluster may work faster than others for some reason (newer hardware, under less load, etc). Ideally they would clear their replication work faster and hence be able to receive more, but as things stand the work is all handed out randomly, so we cannot take advantage of that.

Another thing we might want to consider is triggering RM on a dead-node or decommission / maintenance event, rather than waiting for the thread sleep interval.
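The back-of-envelope numbers above can be checked with a small sketch. This is purely illustrative arithmetic; the class and method names are hypothetical and do not exist in ReplicationManager:

```java
// Hypothetical helper illustrating the batching math discussed above:
// with a fixed per-round limit and a fixed sleep interval, the time to
// visit every container grows linearly with the container count.
public class ReplicationRoundMath {

  // Rounds needed to visit all containers when at most `limitPerRound`
  // are processed per RM iteration (ceiling division).
  static long roundsNeeded(long totalContainers, long limitPerRound) {
    return (totalContainers + limitPerRound - 1) / limitPerRound;
  }

  // Wall-clock minutes to cover all containers, assuming one round per
  // sleep interval.
  static long minutesToCover(long totalContainers, long limitPerRound,
      long sleepIntervalMinutes) {
    return roundsNeeded(totalContainers, limitPerRound) * sleepIntervalMinutes;
  }

  public static void main(String[] args) {
    // 1M containers, 10,000 per round, 5-minute interval.
    System.out.println(roundsNeeded(1_000_000, 10_000));      // 100 rounds
    System.out.println(minutesToCover(1_000_000, 10_000, 5)); // 500 minutes
  }
}
```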
I believe we need to have a good think about how RM works, and what we could do to improve it generally.
Thanks @sodonnel @bshashikant for your comments. We are planning to enhance the ReplicationManager, and I have started to work on some metrics and basic throttling, but there are more problems to handle, as @sodonnel said.
Yes, the default value was borrowed from HDFS directly and could be tuned for large clusters based on some numbers (number of DNs, etc). But I agree that we could count differently, e.g. only counting non-healthy containers as processed.
Oh, this is really a problem for decommission, but we could assign different priorities to containers, and decommission/maintenance-related containers could be processed with higher priority.
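One way this prioritisation could look as a sketch. The class, enum, and record below are hypothetical illustrations, not existing Ozone code; the point is only that decommission/maintenance work jumps ahead of normal work within the per-round limit:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical sketch: containers related to decommission / maintenance are
// drained before normal ones, so a small per-round limit does not delay
// node decommissioning.
public class PrioritizedContainerQueue {

  enum Priority { DECOMMISSION, UNDER_REPLICATED, NORMAL } // ordinal = priority

  record Work(long containerId, Priority priority) {}

  private final PriorityQueue<Work> queue = new PriorityQueue<>(
      Comparator.comparingInt((Work w) -> w.priority().ordinal()));

  void add(Work w) {
    queue.add(w);
  }

  // Take at most `limit` items for this round, highest priority first.
  List<Work> nextRound(int limit) {
    List<Work> batch = new ArrayList<>();
    while (batch.size() < limit && !queue.isEmpty()) {
      batch.add(queue.poll());
    }
    return batch;
  }
}
```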
I like the idea of DN feedback about its replication state, such as the number of queued tasks, to show how busy the DN is.
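The backpressure idea could be sketched roughly as follows. None of these names exist in Ozone today; it is just an assumption of what "DNs report queued tasks via heartbeat, SCM holds back work" might look like:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: each heartbeat carries the DN's pending replication
// task count, and SCM only schedules a new replicate command to a DN that
// has free capacity in its queue.
public class ReplicationBackpressure {

  private final Map<String, Integer> queuedTasksPerDn = new HashMap<>();
  private final int maxQueuedPerDn;

  ReplicationBackpressure(int maxQueuedPerDn) {
    this.maxQueuedPerDn = maxQueuedPerDn;
  }

  // Called when a heartbeat arrives carrying the DN's pending task count.
  void onHeartbeat(String dnId, int queuedTasks) {
    queuedTasksPerDn.put(dnId, queuedTasks);
  }

  // SCM checks this before sending another replicate command to a DN.
  // A faster DN drains its queue sooner, so it naturally receives more work.
  boolean canSchedule(String dnId) {
    return queuedTasksPerDn.getOrDefault(dnId, 0) < maxQueuedPerDn;
  }
}
```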
Ah, this may imply a new ContainerPlacementPolicy that takes hardware into consideration. But in most of our deployments we have uniform hardware across all nodes, so we could live with the existing policies. For load differences, we could use feedback in node reports as discussed above.
Yes, it solves the problem you raised above, similar to the idea of giving higher priority to decommission / maintenance related containers.
Sure, I will put up a Google doc soon, thanks!
A simple doc has been added as discussed; you can find it above.
Thanks for adding the doc. Let's think about this more next week, consider how we would like to see Replication Manager change, and take it from there.
What changes were proposed in this pull request?
Limit num of containers to process per round for ReplicationManager.
A doc describing the problem:
https://docs.google.com/document/d/1Y-xrcmqySXoo4DCX8ESp322wxmNwjejtTqsGS_KWS7Q/edit?usp=sharing
What is the link to the Apache JIRA?
https://issues.apache.org/jira/browse/HDDS-5413
How was this patch tested?
A new unit test was added.