This repository was archived by the owner on Dec 19, 2024. It is now read-only.

Helm: Configurable OSD directory or devices #562

Closed

hunter wants to merge 1 commit into ceph:master from AcalephStorage:helm-devices

Conversation

@hunter (Contributor) commented Mar 21, 2017

I thought I'd raise this PR to get some discussion going on the best way to handle multiple OSDs in a cluster. There has been some discussion in various places about real disk support.

We've been using an approach with DaemonSets per device across a cluster. It gives more control over the OSD lifecycle with drive types, CRUSH location and rolling restarts. It's served us reasonably well for clusters with identical servers/disks, but it may be difficult to manage in a varied cluster.

It may be that the best solution is a ThirdPartyResource/Operator, but for now this does allow the use of real devices with Helm. Please jump in with thoughts, alternatives and ideas.

@leseb (Member) commented Mar 30, 2017

Do you mind rebasing this on top of master please?
I'm looking into it.

osd_directory:
  enabled: true
osd_devices:
  - device: sdb
@leseb (Member) commented on this snippet:
This is an issue for people using persistent paths such as /dev/disk/by-.
Can we ask for the full path instead?
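For illustration only, the reviewer's suggestion could be sketched as a values layout that takes the full device path rather than a bare name (this fragment is my assumption, not from the PR):

```
osd_directory:
  enabled: false
osd_devices:
  # Full paths survive reboots when they use persistent udev links.
  - device: /dev/disk/by-id/wwn-0x5000c5001234abcd
  - device: /dev/sdb
```

The trade-off discussed below is that a full path can no longer be embedded verbatim in Kubernetes object names.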

@hunter (Contributor, PR author) replied:

Yes, that is the downside of this approach and one of the reasons I opened the PR for discussion.

We use the device name for building the name and selectors in the DaemonSet (e.g. name: ceph-osd-{{ $value.device }}-{{ $value.type }}), so it won't work for the longer by-* paths. I haven't really found a decent alternative that works for all cases. Any thoughts?
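The naming constraint here comes from Kubernetes itself: object names must be valid DNS-1123 labels (lowercase alphanumerics and hyphens), so a raw path like /dev/disk/by-id/... cannot be substituted into a DaemonSet name directly. A minimal sketch of one possible workaround, sanitizing an arbitrary device path into a safe name fragment (device_to_name is my hypothetical helper, not part of this PR):

```python
import re

def device_to_name(device_path: str) -> str:
    """Turn a device path into a DNS-1123-safe name fragment.

    Hypothetical helper: lowercases the path, replaces every run of
    characters outside [a-z0-9] with a single hyphen, and truncates
    so a prefix like "ceph-osd-" still fits in the 63-char limit.
    """
    name = device_path.lower().lstrip("/")
    name = re.sub(r"[^a-z0-9]+", "-", name).strip("-")
    return name[:40]

print(device_to_name("sdb"))                           # -> sdb
print(device_to_name("/dev/disk/by-id/wwn-0x5000c5"))  # -> dev-disk-by-id-wwn-0x5000c5
```

The same transformation could be expressed inline in a Helm template with string functions, at the cost of readability; either way the sanitized fragment is no longer a usable device path, so the full path would still need to be carried separately in the values.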

@leseb (Member) replied:

Alright, I've been thinking about this and put my thoughts into PR #610. Would you mind having a look at the README and telling me what you think? I left some questions too. Thanks!

The PR is obviously incomplete at the moment, but the general idea is to keep one OSD per container and not have to declare a device for each OSD. In an ideal world, someone would ask Kubernetes: deploy me a Ceph cluster from a specific set of labeled machines (storage nodes), take all their disks and use them to build my Ceph cluster.

@leseb (Member) commented Jan 18, 2018

I'm closing this since this should probably go in https://github.com/ceph/ceph-helm/.
Thanks

@leseb leseb closed this Jan 18, 2018