Sync admin passwords at cluster setup finish #1803
Conversation
Hi @jaydoane, this doesn't fix the problems in #1781. Please read the use cases in #1781 (comment) and comment (either here or there). I am also not sure you want to use the exact same salt for all newly created admins - that would reduce security, unless I'm misunderstanding the patch?
@wohali writes:
I think I would put it somewhat differently. To summarize (and ignoring single-node config), there are basically two scenarios in which one can currently use the cluster setup code. Both start with an "enable_cluster" request to the coordinating node, and both end with a "finish_cluster" request to the coordinating node. The scenarios only differ in the following ways:

1. In the "direct" scenario, the client itself makes setup requests against each of the other nodes.
2. In the "remote" scenario, the client makes all of its requests against the coordinator, which in turn makes requests to the other nodes on the client's behalf.

The key difference between the two is that in scenario 2, instead of the client directly making requests to the new nodes, the coordinator does so on its behalf. The main benefit of this approach is that the coordinator sends the admin password hash (rather than each node hashing the cleartext independently), so all nodes end up with identical admin hashes.

However, when this patch runs during the "finish_cluster" request, it also syncs the password hash from the coordinator to the other nodes in the cluster, so scenario 1 also enjoys the benefits of synced admin password hashes (and it is redundant with how scenario 2 already achieves them). Additionally, it creates a cluster-wide password salt (if one has not already been pre-defined in a config file) and syncs that, so that any subsequent admin user added to the cluster will also have the same hashed password.

It's that last bit that I thought was originally being requested, but it now sounds like there is a question about whether it's a good idea to salt all admin passwords with the same salt. If we decide that doing so is OK, then I think merging this patch will improve admin password administration overall. Otherwise, I'm fine to drop it and close this PR. I'm also happy to remove the "Fixes #1781" part, although, apart from not clarifying the documentation, I'm not sure how this doesn't fix inconsistent admin password hashes during cluster setup.
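For concreteness, a rough sketch of the documented /_cluster_setup actions in each flow (hosts, ports, and credentials are illustrative):

    # Scenario 1 ("direct"): the client enables clustering on each new node itself.
    POST http://adm:pass@node2:5984/_cluster_setup
        {"action": "enable_cluster", "username": "adm", "password": "pass",
         "bind_address": "0.0.0.0", "node_count": "3"}

    # Scenario 2 ("remote"): the same step goes through the coordinator, which
    # relays it to the new node and sends the admin hash rather than cleartext.
    POST http://adm:pass@node1:5984/_cluster_setup
        {"action": "enable_cluster", "remote_node": "node2", "port": 5984,
         "remote_current_user": "adm", "remote_current_password": "pass",
         "username": "adm", "password": "pass", "bind_address": "0.0.0.0",
         "node_count": "3"}

    # Both flows then add each node and finish on the coordinator:
    POST http://adm:pass@node1:5984/_cluster_setup
        {"action": "add_node", "host": "node2", "port": 5984,
         "username": "adm", "password": "pass"}
    POST http://adm:pass@node1:5984/_cluster_setup
        {"action": "finish_cluster"}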
@jaydoane OK, I agree with you that this is an improvement. We can merge this once the tests pass, but will you take the step to improve the documentation at the same time? I'd like to do both at once, so if you don't have time before the holidays start, we can leave this until 2019.
@wohali I would be down with working on the docs, but I'm not sure which route to take. Do we really want to document both scenarios when, especially after this patch, there's no obvious upside to using one over the other? It also occurs to me that, since one can now set up a cluster by making multiple requests to a single coordinator node, the whole process could in principle be made much simpler by changing the setup code to accept a single request to the coordinator carrying all the necessary information in one data structure. For example, we could in principle deprecate the entire series of error-prone and confusing requests with something like the sketch below, where "nodes" is an optional list of additional nodes in the cluster and "admins" is an optional (?) list of all admins to be configured. A "node" object could also include an optional "bind_address", and perhaps other useful fields. This would also allow for (possibly randomly generated) individual creds for each node prior to forming a cluster. The POST request itself implicitly requires the credentials "user1:pass1" to authenticate. I think I'll play around with prototyping this unless you think it's a terrible idea for some reason.
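A hypothetical shape for that single request, following the description above (the action name and exact field names are made up for illustration):

    POST http://user1:pass1@node1:5984/_cluster_setup
    {
        "action": "setup_cluster",
        "nodes": [
            {"host": "node2", "port": 5984, "bind_address": "0.0.0.0"},
            {"host": "node3", "port": 5984}
        ],
        "admins": [
            {"username": "user1", "password": "pass1"},
            {"username": "user2", "password": "pass2"}
        ]
    }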
@jaydoane OK, I give up, let's not let perfect be the enemy of the good. Can you please merge this, then get a docs PR up? Thanks!
Force-pushed from d33b792 to 6ed0b85
@rnewson I'm curious whether you think having the same salt configured on all cluster nodes is somehow a security problem? We already do something similar with the [couch_httpd_auth] secret, which is kept consistent among nodes.
@wohali I was assuming I needed either an explicit +1 or a formal review approval before merging?
You're right, you need a formal +1, and I'm opting out of the ability to review this patch because my Erlang-fu is presently too weak. I was only commenting on the salt bit :)
In couch_passwords.erl:

    hash_admin_password("simple", ClearPassword) -> % deprecated
    -    Salt = couch_uuids:random(),
    +    Salt = salt(),
This reduces our security. For PBKDF the salt should be (see https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet, and the sketch after this list):
- unique for each stored credential
- generated by a cryptographically-strong PRNG
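For illustration only, a salt meeting both requirements could be generated per credential from OTP's CSPRNG. This is a sketch, not the patch's code; couch_util:to_hex/1 is CouchDB's hex helper, and the 16-byte size is an arbitrary choice:

    %% One fresh, cryptographically strong salt per stored credential.
    %% crypto:strong_rand_bytes/1 draws from the OS-seeded CSPRNG.
    Salt = couch_util:to_hex(crypto:strong_rand_bytes(16)),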
@iilyak thanks for pointing out those requirements for the salt. However, those requirements make me wonder about the security implications of our current practice of keeping the [couch_httpd_auth] secret consistent among nodes, since that value is used in a similar way to generate cookie values.
Nevertheless, it's pretty clear that this approach will not fly. I think the general approach we would need to take is to obtain the hash prior to setting the admin creds, and then set (admin-username, password-hash), which won't be changed under the covers. This is sort of the approach taken here, although I believe there's a race, because that code doesn't explicitly wait until passwords are hashed; it just gets lucky that the hashing happens quickly enough.
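A minimal sketch of that "hash first, then set" idea, assuming couch_passwords:hash_admin_password/1 and config:set/4 (the function name is hypothetical; this is illustrative, not the code referenced above):

    %% Hash the cleartext once, locally, then store the resulting hash directly.
    %% Since the stored value is already a hash, nothing re-hashes it later, and
    %% the identical value can be pushed to every node without a race.
    set_admin_hash(UserName, ClearPassword) ->
        Hash = couch_passwords:hash_admin_password(ClearPassword),
        ok = config:set("admins", UserName, binary_to_list(Hash), true).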
@jaydoane Please update the testing recommendations.
iilyak left a comment:
We cannot replace a random salt with a static value from the config.
This code, even without the changes to salt in couch_passwords.erl, might still be useful in that it would sync the admin passwords in the "direct" setup scenario. And speaking of which, that is presumably the scenario fauxton is using, since if it were using the [documented](http://docs.couchdb.org/en/latest/setup/cluster.html#the-cluster-setup-api) "remote" scenario, those admin hashes should already be the same on all nodes. I looked through the fauxton source, but am not yet proficient enough with that code to determine which of the two setup scenarios it actually uses. Identifying these files as potentially relevant is about as far as I've delved.
@iilyak I don't see the security problem with using the same hash on every node. Fixing that hash to an always-the-same value, yes, I see that as a problem.
As you've said, the same hash for every node is OK. However, this PR is implemented in such a way that we use the same salt for every newly created admin, cluster-wide.
Force-pushed from 6ed0b85 to 01310a5
I removed all the salt-related code and kept the admin syncing code. I tested by changing this line to remove the hashify function, and then confirmed that all admin passwords are hashed the same after setup. Presumably, we could also remove the ...
From the admin syncing code:

    sync_admin(User, Pass) ->
        {Results, Errors} = rpc:multicall(nodes(), config, set, ...
Should we use nodes() -- [node()]?
The docs say "Returns a list of all visible nodes in the system, except the local node. Same as nodes(visible)." so nodes() -- [node()] == nodes().
While testing this PR I noticed a problem: one of the nodes was not getting the correct hash for the adm user.
I added a debug statement which logs the list of nodes returned by nodes(). It turns out that sometimes it doesn't contain all of the nodes in the cluster. Maybe we should use mem3:nodes() instead?
@iilyak I factored out a new function, other_nodes/0, which uses mem3:nodes/0. I subsequently re-ran the manual dev/run test half a dozen times, and it's working for me.
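The new function's body isn't quoted in the thread, but from that description it is presumably close to:

    %% Sketch (assumed, not the verbatim patch): mem3:nodes/0 returns the full
    %% cluster membership, including the local node, so subtract node() to get
    %% only the peers that need the admin hash pushed to them.
    other_nodes() ->
        mem3:nodes() -- [node()].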
Force-pushed from 01310a5 to 8991678
Testing recommendations updated
This ensures that admin password hashes are the same on all nodes when passwords are set directly on each node rather than through the coordinator node.
Force-pushed from 8991678 to a6c1988
Overview
Sync admin passwords at cluster setup finish
Testing recommendations
I tested by changing this line to remove the hashify function, and then started CouchDB normally:

    dev/run -a adm:pass

In a remsh I confirmed that all admin passwords are hashed the same after setup.
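One way to make that check from the remsh (illustrative; "adm" is the admin created by the dev/run flag above, and mem3:nodes/0 is used per the review discussion):

    %% Fetch the stored admin hash from every cluster node; all results should match.
    rpc:multicall(mem3:nodes(), config, get, ["admins", "adm"]).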
Related Issues or Pull Requests
#1781
Checklist