HDDS-5209. Datanode hasEnoughSpace check should apply on volume instead of global DN #2246
Conversation
Hi @ChenSammi please help review this one. I'll check the CI failures; they don't look related at first glance, thanks~
Hi @guihecheng , we can leverage the DatanodeInfo value. It seems there is no need to add a new API in NodeManager. You can refer to this piece of code: final DatanodeInfo datanodeInfo = nodeStateManager
@ChenSammi ah, then I shall put the logic directly into hasEnoughSpace and avoid adding a new API, thanks~
Force-pushed from a9fc6b3 to d9299c5
@ChenSammi updated, thanks~
Review comments on hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/SCMCommonPlacementPolicy.java (outdated, resolved)
+1. Thanks @guihecheng for the contribution.
What changes were proposed in this pull request?
When using a placement policy to choose datanodes, we should check whether a datanode has a single volume with enough space to hold the container, rather than checking the aggregate space across all volumes, because a container can only reside on a single volume and cannot be spread across volumes.
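The difference between the two checks can be sketched as follows. This is an illustrative example only, not the actual Ozone code: the `VolumeReport` class and method names here are hypothetical stand-ins for the datanode's per-volume storage reports.

```java
import java.util.List;

// Hypothetical stand-in for a datanode's per-volume storage report.
class VolumeReport {
    private final long remaining; // free bytes on this volume
    VolumeReport(long remaining) { this.remaining = remaining; }
    long getRemaining() { return remaining; }
}

class PlacementCheck {
    // Old behavior (incorrect): sum the remaining space across all volumes.
    static boolean hasEnoughSpaceGlobal(List<VolumeReport> volumes, long required) {
        long total = volumes.stream().mapToLong(VolumeReport::getRemaining).sum();
        return total >= required;
    }

    // New behavior: some single volume must be able to hold the whole container.
    static boolean hasEnoughSpacePerVolume(List<VolumeReport> volumes, long required) {
        return volumes.stream().anyMatch(v -> v.getRemaining() >= required);
    }
}
```

For example, a datanode with two volumes of 3 GB free each passes the global check for a 5 GB container (6 GB total) but is correctly rejected by the per-volume check, since neither volume alone can hold the container.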
What is the link to the Apache JIRA?
https://issues.apache.org/jira/browse/HDDS-5209
How was this patch tested?
Extended existing unit tests.