Conversation
Force-pushed from c18d407 to 18c51e0
> source start_mgr.sh
> start_mgr
> ;;
> disk_introspection)
We need to make sure that `KV_TYPE` is `k8s|kubernetes` before executing this.
> @@ -0,0 +1,15 @@
> # Introspect disks available on the machines using a container
The `.md` filename extension is missing for this README file.
> # Introspect disks available on the machines using a container
>
> The idea is to widely deploy this container on the hosts that should become storage nodes.
> Do we need a label for that?
Labels would certainly help target which nodes should contain this operator container.
> The idea is to widely deploy this container on the hosts that should become storage nodes.
> Do we need a label for that?
>
> The container will need the following privileged otherwise will fail: '--privileged=true -v /dev/:/dev/'
s/privileged/privileges/
s/'/`/g
> * store the list in a text file
> * send that list to a configmaps
>
> Configmaps are sent using the following name so they should be easily recognizable by the container: $(hostname -f)-disks
This will use the hostname of the pod itself. What if we use the hostname of the node that the pod is running on? That should help resolve the relationship between the list and where the container will run.
Good point, how can we do this? Any command?
> Configmaps are sent using the following name so they should be easily recognizable by the container: $(hostname -f)-disks
>
> Later, we run the k8s template that should iterate through the list of devices of a given configmaps.
> The tricky part is to build the relationship between the list and where the container will run since the configmaps are named after the hostname...
See comment above. The pod can read the configmap using the name of the node it's running on. This should resolve any ambiguity.
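To make that concrete, here is a hypothetical consumer-side sketch (the function name and key layout are mine, not part of this PR): a pod that knows its node name can fetch that node's list directly.

```shell
# Hypothetical consumer side: fetch the disk list published for a given node.
# Assumes the configmaps are named "<node_name>-disks" as discussed above.
get_disks_for_node() {
  local node_name=$1
  # Print the configmap payload; keys follow `kubectl create configmap --from-file` semantics.
  kubectl get configmap "${node_name}-disks" -o jsonpath="{.data}"
}
```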
> set -e
>
> DISK_FILE=$(hostname -f)-disks
We should query the Kubernetes API to determine the hostname of the node we're running on.
> function get_all_disks_without_partitions {
>   for disk in $DISCOVERED_DEVICES; do
>     if [[ $(egrep -c $disk[0-9] /proc/partitions) == 0 ]]; then
Can we surround `disk` with curly braces, as in `${disk}[0-9]`, to be clearer?
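A minimal sketch of that check with the braces applied; `PARTITIONS_FILE` is a stand-in variable added here (not in the PR) so the logic can be exercised outside a real host:

```shell
# Sketch of the partition check; defaults to /proc/partitions,
# but PARTITIONS_FILE can point at a fixture file for testing.
get_all_disks_without_partitions() {
  local disk
  for disk in $DISCOVERED_DEVICES; do
    # ${disk}[0-9] makes it unambiguous that the digit class is not part of the variable name
    if [[ $(grep -Ec "${disk}[0-9]" "${PARTITIONS_FILE:-/proc/partitions}") -eq 0 ]]; then
      echo "$disk"
    fi
  done
}
```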
>   done
>   if [ ! -s $DISK_FILE_PATH ]; then
>     log "No disk detected."
>     log "Abord mission!"
> }
> function store_disk_list_configmaps {
>   log "Creating configmap $DISK_FILE on the 'ceph' namespace"
>   kubectl --namespace=ceph create configmap $DISK_FILE --from-file=$DISK_FILE_PATH
This is probably okay for now, but we should not assume the ceph namespace is being used. We could query for the namespace this pod is running in and use that instead.
> #!/bin/bash
> set -e
>
> DISK_FILE=$(uname -n)-disks
How about `DISK_FILE=$(kubectl get pods $(hostname) -o template --template="{{.spec.nodeName}}")-disks`?
> function store_disk_list_configmaps {
>   log "Creating configmap $DISK_FILE in the 'ceph' namespace"
>   kubectl --namespace=ceph create configmap "$DISK_FILE" --from-file="$DISK_FILE_PATH"
We can query for the namespace we're in using something like MY_NAMESPACE=$(kubectl get pods $(hostname) -o template --template="{{.metadata.namespace}}").
However, if you leave out the --namespace altogether from the kubectl create configmap command, I believe it will create the configmap in the namespace the pod is running in.
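If that holds, the create step reduces to something like this sketch (`log`, `DISK_FILE`, and `DISK_FILE_PATH` come from the script under review; that in-cluster `kubectl` defaults to the pod's own namespace is the assumption being made):

```shell
# Sketch: drop --namespace and rely on kubectl's in-cluster default,
# which follows the service account's (i.e. the pod's) namespace.
store_disk_list_configmaps() {
  log "Creating configmap ${DISK_FILE}"
  kubectl create configmap "${DISK_FILE}" --from-file="${DISK_FILE_PATH}"
}
```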
> set -e
>
> HOSTNAME=$(uname -n)
> MY_NAMESPACE=$(kubectl get pods "$HOSTNAME" -o template --template="{{.metadata.namespace}}")
This command only queries this pod's current namespace so it's sort of repetitive in that we're saving the current namespace only to specify it in subsequent commands.
I think if we want to query all namespaces to find this pod's namespace use this command instead:
`MY_NAMESPACE=$(kubectl get pods --all-namespaces -o jsonpath="{.items[?(@.metadata.name == \"${HOSTNAME}\")].metadata.namespace}")`
Otherwise, we can probably leave out the namespace.
> HOSTNAME=$(uname -n)
> MY_NAMESPACE=$(kubectl get pods "$HOSTNAME" -o template --template="{{.metadata.namespace}}")
> DISK_FILE=$(kubectl --namespace="$MY_NAMESPACE" get pods "$HOSTNAME" -o template --template="{{.spec.nodeName}}")-disks
If we make the change above, let's replace this with a jsonpath template for consistency:
`DISK_FILE=$(kubectl --namespace="${MY_NAMESPACE}" get pods "${HOSTNAME}" -o jsonpath="{.spec.nodeName}")-disks`
Alternatively, if we also want to query all namespaces for this pod's node name we can use this command:
`DISK_FILE=$(kubectl get pods --all-namespaces -o jsonpath="{.items[?(@.metadata.name == \"${HOSTNAME}\")].spec.nodeName}")-disks`
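Putting the suggested pieces together, here is a sketch of the lookup as one function (the function name is mine; `kubectl` is called lazily so the logic can be stubbed):

```shell
# Sketch combining the review suggestions: find this pod's namespace,
# then resolve the node it is scheduled on and derive the configmap name.
compute_disk_file() {
  local pod_name namespace node_name
  pod_name=$(uname -n)
  # Find the namespace of this pod by matching its name across all namespaces.
  namespace=$(kubectl get pods --all-namespaces \
    -o jsonpath="{.items[?(@.metadata.name == \"${pod_name}\")].metadata.namespace}")
  # Name the configmap after the *node*, not the pod, so consumers can find it.
  node_name=$(kubectl --namespace="${namespace}" get pods "${pod_name}" \
    -o jsonpath="{.spec.nodeName}")
  echo "${node_name}-disks"
}
```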
@font what do you think? Can we merge this as a first step?
> source disk_introspection.sh
> else
>   log "You can not use the disk introspection method outside a Kubernetes environment"
>   log "Make sure KV_TYPE equal either k8s or kubernetes"
> * store the list in a text file
> * send that list to a configmaps
>
> Configmaps are sent using the following name so they should be easily recognizable by the container: $(hostname -f)-disks
The command is now using `<node_name>-disks`.
The idea is to widely deploy this container on the hosts that should become storage nodes.
The container will need the following privileges otherwise will fail: '--privileged=true -v /dev/:/dev/'

It will:
* look for the devices without a partition available
* store the list in a text file
* send that list to a configmaps

Configmaps are sent using the following name so they should be easily recognizable by the container: `<node_name>-disks`.

Signed-off-by: Sébastien Han <seb@redhat.com>