install_config/aggregate_logging.adoc (79 additions, 9 deletions)

The deployer creates an ephemeral deployment in which all of a pod's data is
lost upon restart. For production usage, add a persistent storage volume to each
Elasticsearch deployment configuration.

The best-performing volumes are local disks, if it is possible to use
them. Doing so requires the following preparation.

. The relevant service account must be given the privilege to mount and edit a local volume, as follows:
+
====
----
$ oadm policy add-scc-to-user privileged \
      system:serviceaccount:logging:aggregated-logging-elasticsearch <1>
----
<1> Use the new project you created earlier (e.g., *logging*) when specifying
this service account.
====
Review discussion:

* Reviewer: Do you have to have access to the privileged SCC here, or will
hostmount-anyuid (which does not allow privileged) be enough?

* Author (@sosiouxme): @pweil- I tried hostmount-anyuid first and it did not
have access due to the SELinux context. I believe it's much the same problem
we had with fluentd - openshift/origin-aggregated-logging#89 (comment). It
seems like less-than-privileged may be possible, but I'm not quite sure how,
and it seems like it would be a PITA for a user to set up. What do you think?

* Reviewer: Hey, whaddya know... openshift/origin#8504

* Author (@sosiouxme, Aug 4, 2016): @pweil- I'm a little foggy on whether
exactly the same fix will apply. The problem with fluentd was that it was
trying to read and write in /var/log. Here we're trying to read and write in
an admin-supplied storage volume; I suppose we could have them chcon the
volume to whatever would be convenient? If so, what would that be - is there a
label that will allow read/write for any context the pod may be running in?

* Reviewer: The kubelet (when a pod is using host namespaces) or docker should
be performing a relabeling of the volume when it can. It uses the docker opts
to pass in the SELinux context that is being used. If that isn't working, or
this is a different use case, then we can figure out what is different.
cc @pmorie, who is very familiar with the SELinux code for volumes.

* Author (@sosiouxme): What the AVC denial looks like, FYI:

----
type=AVC msg=audit(1470323991.042:27487): avc:  denied  { write } for  pid=9883 comm="java" name="es-storage" dev="dm-0" ino=68862303 scontext=system_u:system_r:svirt_lxc_net_t:s0:c2,c8 tcontext=unconfined_u:object_r:usr_t:s0 tclass=dir
type=SYSCALL msg=audit(1470323991.042:27487): arch=c000003e syscall=83 success=no exit=-13 a0=7ff7d43d3780 a1=1ff a2=7ff7d43d3780 a3=7ff7c47bd728 items=0 ppid=15669 pid=9883 auid=4294967295 uid=1000 gid=0 euid=1000 suid=1000 fsuid=1000 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="java" exe="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.el7_2.x86_64/jre/bin/java" subj=system_u:system_r:svirt_lxc_net_t:s0:c2,c8 key=(null)
----
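The thread above leaves the SELinux question open. As a hedged sketch only,
the chcon approach the author floats might look like the following; it assumes
the volume is mounted at /usr/local/es-storage, and svirt_sandbox_file_t is
the standard container-writable type on RHEL/CentOS 7. Whether this removes
the need for the privileged SCC here is untested, not something this PR
confirms.

----
# Hedged sketch, not a confirmed fix: relabel the local volume so confined
# container processes can write to it. svirt_sandbox_file_t is the standard
# writable type for containers on RHEL/CentOS 7; the path is the mount point
# assumed elsewhere in this section.
$ chcon -R -t svirt_sandbox_file_t /usr/local/es-storage
----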

. Each Elasticsearch replica definition must be patched to claim that privilege, for example:
+
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc scale $dc --replicas=0
    oc patch $dc \
       -p '{"spec":{"template":{"spec":{"containers":[{"name":"elasticsearch","securityContext":{"privileged": true}}]}}}}'
  done
----

Review discussion:

* Reviewer: Should probably remind users to stop their cluster first.

* Author (@sosiouxme): I suppose so. I figured they were gonna lose any
ephemeral data anyway...

. The Elasticsearch pods must be located on the correct nodes to use
the local storage, and should not move around even if those nodes are
taken down for a period of time. This requires giving each Elasticsearch
replica a node selector that is unique to the node where an administrator
has allocated storage for it. xref:#logging-node-selector[See below
for directions on setting a node selector].

. Once these steps are taken, a local host mount can be applied to each replica
as in this example (where we assume storage is mounted at the same path on each node):
+
ifdef::openshift-origin[]
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc set volume $dc \
       --add --overwrite --name=elasticsearch-storage \
       --type=hostPath --path=/usr/local/es-storage
    oc deploy --latest $dc
    oc scale $dc --replicas=1
  done
----
endif::openshift-origin[]
ifdef::openshift-enterprise[]
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc set volume $dc \
       --add --overwrite --name=elasticsearch-storage \
       --type=hostPath --path=/usr/local/es-storage
    oc scale $dc --replicas=1
  done
----
endif::openshift-enterprise[]
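One way to confirm that each replica landed back on its intended node is to
list the pods with their node assignments. This assumes the pods carry the
same logging-infra=elasticsearch label that selects the deployment
configurations above; if they do not, substitute whatever label the pods do
carry.

----
# Show each Elasticsearch pod with the node it was scheduled on
$ oc get pods --selector logging-infra=elasticsearch -o wide
----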

If using host mounts is impractical or undesirable, it may be necessary to
attach block storage as a
xref:../architecture/additional_concepts/storage.adoc#persistent-volume-claims[PersistentVolumeClaim],
as in the following example:

----
$ oc set volume dc/logging-es-<unique> \
--add --overwrite --name=elasticsearch-storage \
--type=persistentVolumeClaim --claim-name=logging-es-1
----
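The claim named above must already exist before the volume is attached. A
minimal sketch of creating one follows; the size and access mode are
assumptions to adjust for the environment.

----
$ oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-es-1
spec:
  accessModes:
  - ReadWriteOnce      # single-node read-write, matching one ES replica
  resources:
    requests:
      storage: 10Gi    # assumed size; adjust for expected log volume
EOF
----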

[WARNING]
====
Using NFS storage directly or as a PersistentVolume (or via other NAS
such as Gluster) is not supported for Elasticsearch storage, as Lucene
relies on filesystem behavior that NFS does not supply. Data corruption
and other problems can occur. If NFS storage is a requirement, you can
allocate a large file on that storage to serve as a storage device and
treat it as a host mount on each host. For example:

----
$ truncate -s 1T /nfs/storage/elasticsearch-1
$ mkfs.xfs /nfs/storage/elasticsearch-1
$ mount -o loop /nfs/storage/elasticsearch-1 /usr/local/es-storage
$ chown 1000:1000 /usr/local/es-storage
----

Then, use *_/usr/local/es-storage_* as a host-mount as
described above. Performance under this solution is significantly
worse than using actual local drives.
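Note also that a loop mount configured this way does not survive a reboot. An
/etc/fstab entry along these lines (using the same assumed paths, and assuming
the underlying NFS mount is available at that point in boot) would restore it:

----
# /etc/fstab -- re-mount the backing file as a loop device at boot
/nfs/storage/elasticsearch-1  /usr/local/es-storage  xfs  loop  0 0
----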
====

ifdef::openshift-enterprise[]

[[logging-node-selector]]
*Node Selector*

...

====
----
$ oc patch dc/logging-es-<unique_name> \
-p '{"spec":{"template":{"spec":{"nodeSelector":{"nodeLabel":"logging-es-node-1"}}}}}'
----
====
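For reference, the label consumed by the nodeSelector patch above could be
applied with a command like the following; the node name is a placeholder
assumption.

----
# Label the node that holds this replica's storage (node name is assumed)
$ oc label node node1.example.com nodeLabel=logging-es-node-1
----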

[[scaling-elasticsearch]]
*Changing the Scale of Elasticsearch*