This repository was archived by the owner on Dec 19, 2024. It is now read-only.

daemon: disk introspection method#610

Merged

leseb merged 1 commit into master from scan-dev on Jun 20, 2017

Conversation

@leseb (Member) commented Apr 21, 2017

Signed-off-by: Sébastien Han seb@redhat.com

@leseb leseb force-pushed the scan-dev branch 23 times, most recently from c18d407 to 18c51e0 on April 25, 2017 16:01
@leseb leseb requested a review from ErwanAliasr1 April 26, 2017 09:11
@guits guits self-assigned this Apr 26, 2017
source start_mgr.sh
start_mgr
;;
disk_introspection)
Member Author

need to make sure that KV_TYPE is k8s|kubernetes before executing this.

Member Author

Done.
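The guard discussed in this thread could look like the following minimal sketch. It is illustrative, not the PR's actual entrypoint code: `log()` and the real sourcing of disk_introspection.sh are stubbed with messages.

```shell
#!/bin/bash
# Hedged sketch of the KV_TYPE guard asked for in the review.
# log() and the sourcing step are stand-ins, not the real script.
log() { echo "$@"; }

run_disk_introspection() {
  if [[ "$KV_TYPE" == "k8s" || "$KV_TYPE" == "kubernetes" ]]; then
    # the real script would `source disk_introspection.sh` here
    log "running disk introspection"
  else
    log "You can not use the disk introspection method outside a Kubernetes environment"
    log "Make sure KV_TYPE equals either k8s or kubernetes"
  fi
}

KV_TYPE=kubernetes run_disk_introspection  # prints: running disk introspection
```

This mirrors the check that eventually landed in the entrypoint's case statement.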

@leseb leseb requested a review from font June 14, 2017 16:49
@@ -0,0 +1,15 @@
# Introspect disks available on the machines using a container
Collaborator

.md filename extension is missing for this README file

# Introspect disks available on the machines using a container

The idea is to widely deploy this container on the hosts that should become storage nodes.
Do we need a label for that?
Collaborator

Labels would certainly help target which nodes should contain this operator container.

The idea is to widely deploy this container on the hosts that should become storage nodes.
Do we need a label for that?

The container will need the following privileged otherwise will fail: '--privileged=true -v /dev/:/dev/'
Collaborator

s/privileged/privileges/
s/'/`/g

* store the list in a text file
* send that list to a configmaps

Configmaps are sent using the following name so they should be easily recognizable by the container: $(hostname -f)-disks
Collaborator

This will use the hostname of the pod itself. What if we use the hostname of the node that the pod is running on? That should help resolve the relationship between the list and where the container will run.

Member Author

Good point, how can we do this? Any command?

Configmaps are sent using the following name so they should be easily recognizable by the container: $(hostname -f)-disks

Later, we run the k8s template that should iterate through the list of devices of a given configmaps.
The tricky part is to build the relationship between the list and where the container will run since the configmaps are named after the hostname...
Collaborator

See comment above. The pod can read the configmap using the name of the node it's running on. This should resolve any ambiguity.
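As a sketch of that idea: Kubernetes' Downward API can expose the node's name to the pod directly, so the script could derive the configmap name without an extra query. A hypothetical pod-spec fragment, not part of this PR:

```yaml
# Hypothetical container env entry: NODE_NAME receives the name of the
# node the pod was scheduled on, via the Downward API.
env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```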


set -e

DISK_FILE=$(hostname -f)-disks
Collaborator

We should query the Kubernetes API to determine the hostname of the node we're running on.


function get_all_disks_without_partitions {
for disk in $DISCOVERED_DEVICES; do
if [[ $(egrep -c $disk[0-9] /proc/partitions) == 0 ]]; then
Collaborator

Can we surround disk with curly braces as in ${disk}[0-9] to be more clear?
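The braces suggestion, applied to the function above, could look like this self-contained sketch. The sample partitions table and the DISCOVERED_DEVICES value are invented for illustration; the real script reads /proc/partitions:

```shell
#!/bin/bash
# Invented sample standing in for /proc/partitions:
# sda has a partition (sda1), sdb does not.
PARTITIONS_FILE=$(mktemp)
cat > "$PARTITIONS_FILE" <<'EOF'
major minor  #blocks  name

   8        0  488386584 sda
   8        1     524288 sda1
   8       16  488386584 sdb
EOF
DISCOVERED_DEVICES="sda sdb"

get_all_disks_without_partitions() {
  for disk in $DISCOVERED_DEVICES; do
    # ${disk} makes the variable boundary before [0-9] unambiguous;
    # grep -E replaces the deprecated egrep
    if [[ $(grep -Ec "${disk}[0-9]" "$PARTITIONS_FILE") -eq 0 ]]; then
      echo "$disk"
    fi
  done
}

get_all_disks_without_partitions  # prints: sdb
```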

done
if [ ! -s $DISK_FILE_PATH ]; then
log "No disk detected."
log "Abord mission!"
Collaborator

s/Abord/Abort/

}

function store_disk_list_configmaps {
log "Creating configmap $DISK_FILE on the 'ceph' namespace"
Collaborator

s/on/in/


function store_disk_list_configmaps {
log "Creating configmap $DISK_FILE on the 'ceph' namespace"
kubectl --namespace=ceph create configmap $DISK_FILE --from-file=$DISK_FILE_PATH
Collaborator

This is probably okay for now, but we should not assume the ceph namespace is being used. We could query for the namespace this pod is running in and use that instead.

Member Author

How?

#!/bin/bash
set -e

DISK_FILE=$(uname -n)-disks
Collaborator

How about DISK_FILE=$(kubectl get pods $(hostname) -o template --template="{{.spec.nodeName}}")-disks?

Member Author

Done, thanks!


function store_disk_list_configmaps {
log "Creating configmap $DISK_FILE in the 'ceph' namespace"
kubectl --namespace=ceph create configmap "$DISK_FILE" --from-file="$DISK_FILE_PATH"
Collaborator

We can query for the namespace we're in using something like MY_NAMESPACE=$(kubectl get pods $(hostname) -o template --template="{{.metadata.namespace}}").

However, if you leave out the --namespace altogether from the kubectl create configmap command, I believe it will create the configmap in the namespace the pod is running in.
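An alternative to querying kubectl for the namespace at all is the Downward API; a hypothetical pod-spec fragment, not part of this PR, that exposes the pod's own namespace as an environment variable:

```yaml
# Hypothetical container env entry: MY_NAMESPACE receives the namespace
# the pod is running in, via the Downward API.
env:
  - name: MY_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```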

Member Author

Done.

set -e

HOSTNAME=$(uname -n)
MY_NAMESPACE=$(kubectl get pods "$HOSTNAME" -o template --template="{{.metadata.namespace}}")
Collaborator

This command only queries this pod's current namespace so it's sort of repetitive in that we're saving the current namespace only to specify it in subsequent commands.

I think if we want to query all namespaces to find this pod's namespace use this command instead:

MY_NAMESPACE=$(kubectl get pods --all-namespaces -o jsonpath="{.items[?(@.metadata.name == \"${HOSTNAME}\")].metadata.namespace}")

Otherwise, we can probably leave out the namespace.


HOSTNAME=$(uname -n)
MY_NAMESPACE=$(kubectl get pods "$HOSTNAME" -o template --template="{{.metadata.namespace}}")
DISK_FILE=$(kubectl --namespace="$MY_NAMESPACE" get pods "$HOSTNAME" -o template --template="{{.spec.nodeName}}")-disks
Collaborator

If we make the change above, let's replace this with a jsonpath template for consistency:

DISK_FILE=$(kubectl --namespace="${MY_NAMESPACE}" get pods "${HOSTNAME}" -o jsonpath="{.spec.nodeName}")-disks

Alternatively, if we also want to query all namespaces for this pod's node name we can use this command:

DISK_FILE=$(kubectl get pods --all-namespaces -o jsonpath="{.items[?(@.metadata.name == \"${HOSTNAME}\")].spec.nodeName}")-disks

@leseb (Member Author) commented Jun 20, 2017

@font what do you think? Can we merge this as a first step?

@font (Collaborator) left a comment

@leseb Yes, LGTM.

source disk_introspection.sh
else
log "You can not use the disk introspection method outside a Kubernetes environment"
log "Make sure KV_TYPE equal either k8s or kubernetes"
Collaborator

s/equal/equals/

* store the list in a text file
* send that list to a configmaps

Configmaps are sent using the following name so they should be easily recognizable by the container: $(hostname -f)-disks
@font (Collaborator) commented Jun 20, 2017

The command is now using <node_name>-disks.

@leseb leseb changed the title from "DNM: daemon: disk introspection method" to "daemon: disk introspection method" Jun 20, 2017
The idea is to widely deploy this container on the hosts that should
become storage nodes.

The container will need the following privileges otherwise will fail:
'--privileged=true -v /dev/:/dev/'
It will:
  * look for the devices without a partition available
  * store the list in a text file
  * send that list to a configmaps

Configmaps are sent using the following name so they should be easily
recognizable by the container: `<node_name>-disks`.

Signed-off-by: Sébastien Han <seb@redhat.com>
@leseb leseb merged commit 3788c6f into master Jun 20, 2017
@leseb leseb deleted the scan-dev branch June 20, 2017 15:28