Merged
14 changes: 7 additions & 7 deletions architecture/additional_concepts/storage.adoc
@@ -301,9 +301,9 @@ ifdef::openshift-dedicated[]
====
* PVs are provisioned with either EBS volumes (AWS) or GCP storage (GCP), depending on where the cluster is provisioned.
* Only RWO access mode is applicable, since EBS volumes and GCE Persistent Disks cannot be mounted to multiple nodes.
* *EmptyDir* has the same lifecycle as the pod:
** *EmptyDir* volumes survive container crashes/restarts.
** *EmptyDir* volumes are deleted when the pod is deleted.
* *emptyDir* has the same lifecycle as the pod:
** *emptyDir* volumes survive container crashes/restarts.
** *emptyDir* volumes are deleted when the pod is deleted.
====
endif::[]

@@ -314,12 +314,12 @@ ifdef::openshift-online[]
* Only RWO access mode is applicable, since EBS volumes and GCE Persistent Disks cannot be mounted to multiple nodes.
* Docker volumes are disabled.
** A `VOLUME` directive without a mapped external volume fails to be instantiated.
* *EmptyDir* is restricted to 512 Mi per project (group) per node.
* *emptyDir* is restricted to 512 Mi per project (group) per node.
** If there is a single pod for a project on a particular node, then the pod can consume up to 512 Mi of *emptyDir* storage.
** If there are multiple pods for a project on a particular node, then those pods will share the 512 Mi of *emptyDir* storage.
* *EmptyDir* has the same lifecycle as the pod:
** *EmptyDir* volumes survive container crashes/restarts.
** *EmptyDir* volumes are deleted when the pod is deleted.
* *emptyDir* has the same lifecycle as the pod:
** *emptyDir* volumes survive container crashes/restarts.
** *emptyDir* volumes are deleted when the pod is deleted.
====
endif::[]

2 changes: 1 addition & 1 deletion creating_images/guidelines.adoc
@@ -243,7 +243,7 @@ ifdef::openshift-online[]

Docker images cannot be built using the `VOLUME` directive in the `Dockerfile`.
Images using a read/write file system need to use persistent volumes or
`emptydir` volumes instead of local storage. Instead of specifying a volume in
`emptyDir` volumes instead of local storage. Instead of specifying a volume in
the Dockerfile, specify a directory for local storage and mount either a
persistent volume or `emptyDir` volume to that directory when deploying the pod.
endif::[]
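The recommended pattern, a plain directory in the image with a volume mounted over it at deployment time, can be sketched as a pod definition. This is an illustrative sketch only; the `myapp` name, the image reference, and the mount path are hypothetical:

----
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:latest
    volumeMounts:
    - name: scratch               # mounted over the directory the image writes to,
      mountPath: /var/cache/data  # replacing a Dockerfile VOLUME instruction
  volumes:
  - name: scratch
    emptyDir: {}                  # ephemeral; use a persistentVolumeClaim instead to persist data
----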
6 changes: 3 additions & 3 deletions dev_guide/application_lifecycle/new_app.adoc
@@ -302,9 +302,9 @@ endif::[]
The second one represents the output image. If a container image was specified
as input to `new-app`, then an image stream is created for that image as well.

a|`*DeploymentConfig*`
a|A `*DeploymentConfig*` is created either to deploy the output of a build, or a
specified image. The `new-app` command creates xref:../volumes.adoc#dev-guide-volumes[*EmptyDir*
a|`DeploymentConfig`
a|A `DeploymentConfig` is created either to deploy the output of a build, or a
specified image. The `new-app` command creates xref:../volumes.adoc#dev-guide-volumes[*emptyDir*
volumes] for all Docker volumes that are specified in containers included in the
resulting `*DeploymentConfig*`.

4 changes: 2 additions & 2 deletions dev_guide/volumes.adoc
@@ -23,14 +23,14 @@ are present, to repair them when possible, {product-title} invokes the `fsck`
utility prior to the `mount` utility. This occurs when either adding a volume or
updating an existing volume.

The simplest volume type is `EmptyDir`, which is a temporary directory on a
The simplest volume type is `emptyDir`, which is a temporary directory on a
single machine. Administrators may also allow you to request a
xref:persistent_volumes.adoc#dev-guide-persistent-volumes[persistent volume] that is automatically attached
to your pods.

[NOTE]
====
`EmptyDir` volume storage may be restricted by a quota based on the pod's
`emptyDir` volume storage may be restricted by a quota based on the pod's
FSGroup, if the FSGroup parameter is enabled by your cluster administrator.
====
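Either volume type can be attached to an existing deployment configuration with `oc set volume`. A sketch, assuming a deployment configuration named `myapp` and hypothetical claim name and mount paths:

----
$ oc set volume dc/myapp --add --type=emptyDir --mount-path=/tmp/scratch
$ oc set volume dc/myapp --add --type=persistentVolumeClaim \
    --claim-name=mydata --mount-path=/data
----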

6 changes: 3 additions & 3 deletions getting_started/devpreview_faq.adoc
@@ -80,7 +80,7 @@ Yes, but with a few caveats. For
https://docs.docker.com/engine/security/security/[security reasons], no images
that run processes as root are allowed. Additionally, any Dockerfile `VOLUME`
instruction must be mounted with either a persistent volume claim (PVC) or an
EmptyDir at this time. See xref:devpreview-current-usage-considerations[more
emptyDir at this time. See xref:devpreview-current-usage-considerations[more
considerations].

CAN I RUN PRODUCTION SERVICES ON THE DEVELOPER PREVIEW?::
@@ -159,8 +159,8 @@ pods and containers.
secrets (though some amount of these secrets will be needed by the system's
build and deployer service accounts).
* Any Dockerfile `VOLUME` instruction must be mounted with either a persistent
volume claim (PVC) or an EmptyDir at this time.
* The project associated with a user can allocate up to two PVCs.
volume claim (PVC) or an emptyDir at this time.
* The project associated with a user can allocate up to two PVCs of up to 1 GiB each.
* No images that run as *root* are allowed.
* Only the Source-to-Image (S2I) build strategy is allowed for any build
configurations imported into your project.
119 changes: 116 additions & 3 deletions install_config/cluster_metrics.adoc
@@ -308,9 +308,9 @@ non-persistent data does come with the risk of permanent data loss. However,
metrics can still survive a container being restarted.

In order to use non-persistent storage, you must set the
`*USE_PERSISTENT_STORAGE*`
xref:modifying-the-deployer-template[template
option] to `false` for the Metrics Deployer.
`openshift_metrics_cassandra_storage_type`
xref:../install_config/cluster_metrics.adoc#metrics-ansible-variables[variable]
to `emptyDir` in the inventory file.
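As a sketch, the corresponding inventory fragment would look like the following:

----
[OSEv3:vars]
openshift_metrics_cassandra_storage_type=emptyDir
----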

[NOTE]
====
@@ -434,6 +434,119 @@ add `system:master-proxy` to the list in order to allow
xref:../dev_guide/pod_autoscaling.adoc#dev-guide-pod-autoscaling[horizontal pod autoscaling] to function
properly.

|`openshift_metrics_cassandra_storage_type`
|Use `emptyDir` for ephemeral storage (for testing); `pv` for persistent volumes,
which need to be created before the installation; or `dynamic` for dynamic
persistent volumes.

|`openshift_metrics_cassandra_replicas`
|The number of Cassandra nodes for the metrics stack. This value dictates the
number of Cassandra replication controllers.

|`openshift_metrics_cassandra_limits_memory`
|The memory limit for the Cassandra pod. For example, a value of `2Gi` would
limit Cassandra to 2 GB of memory. This value could be further adjusted by the
start script based on available memory of the node on which it is scheduled.

|`openshift_metrics_cassandra_limits_cpu`
|The CPU limit for the Cassandra pod. For example, a value of `4000m` (4000
millicores) would limit Cassandra to 4 CPUs.

|`openshift_metrics_cassandra_requests_memory`
|The amount of memory to request for the Cassandra pod. For example, a value of
`2Gi` would request 2 GB of memory.

|`openshift_metrics_cassandra_requests_cpu`
|The CPU request for the Cassandra pod. For example, a value of `4000m` (4000
millicores) would request 4 CPUs.

|`openshift_metrics_cassandra_storage_group`
|The supplemental storage group to use for Cassandra.

|`openshift_metrics_cassandra_nodeselector`
|Set to the desired, existing
xref:../admin_guide/scheduling/node_selector.adoc#admin-guide-sched-selector[node selector] to ensure that
pods are placed onto nodes with specific labels. For example,
`{"region":"infra"}`.

|`openshift_metrics_hawkular_ca`
|An optional certificate authority (CA) file used to sign the Hawkular certificate.

|`openshift_metrics_hawkular_cert`
|The certificate file used for re-encrypting the route to Hawkular metrics. The
certificate must contain the host name used by the route. If unspecified, the
default router certificate is used.

|`openshift_metrics_hawkular_key`
|The key file used with the Hawkular certificate.

|`openshift_metrics_hawkular_limits_memory`
|The memory limit for the Hawkular pod. For example, a value of `2Gi`
would limit the Hawkular pod to 2 GB of memory. This value could be further
adjusted by the start script based on available memory of the node on which it
is scheduled.

|`openshift_metrics_hawkular_limits_cpu`
|The CPU limit for the Hawkular pod. For example, a value of `4000m` (4000
millicores) would limit the Hawkular pod to 4 CPUs.

|`openshift_metrics_hawkular_replicas`
|The number of replicas for Hawkular metrics.

|`openshift_metrics_hawkular_requests_memory`
|The amount of memory to request for the Hawkular pod. For example, a value of
`2Gi` would request 2 GB of memory.

|`openshift_metrics_hawkular_requests_cpu`
|The CPU request for the Hawkular pod. For example, a value of `4000m` (4000
millicores) would request 4 CPUs.

|`openshift_metrics_hawkular_nodeselector`
|Set to the desired, existing
xref:../admin_guide/scheduling/node_selector.adoc#admin-guide-sched-selector[node selector] to ensure that
pods are placed onto nodes with specific labels. For example,
`{"region":"infra"}`.

|`openshift_metrics_heapster_allowed_users`
|A comma-separated list of CNs to accept. By default, this is set to allow the
OpenShift service proxy to connect. Add `system:master-proxy` to the list when
overriding in order to allow
xref:../dev_guide/pod_autoscaling.adoc#dev-guide-pod-autoscaling[horizontal pod
autoscaling] to function properly.

|`openshift_metrics_heapster_limits_memory`
|The memory limit for the Heapster pod. For example, a value of `2Gi`
would limit the Heapster pod to 2 GB of memory.

|`openshift_metrics_heapster_limits_cpu`
|The CPU limit for the Heapster pod. For example, a value of `4000m` (4000
millicores) would limit the Heapster pod to 4 CPUs.

|`openshift_metrics_heapster_requests_memory`
|The amount of memory to request for the Heapster pod. For example, a value of `2Gi`
would request 2 GB of memory.

|`openshift_metrics_heapster_requests_cpu`
|The CPU request for the Heapster pod. For example, a value of `4000m` (4000
millicores) would request 4 CPUs.

|`openshift_metrics_heapster_standalone`
|Deploy only Heapster, without the Hawkular Metrics and Cassandra components.

|`openshift_metrics_heapster_nodeselector`
|Set to the desired, existing
xref:../admin_guide/scheduling/node_selector.adoc#admin-guide-sched-selector[node selector] to ensure that
pods are placed onto nodes with specific labels. For example,
`{"region":"infra"}`.

|`openshift_metrics_install_hawkular_agent`
|Set to `true` to install the Hawkular OpenShift Agent (HOSA). Set to `false` to
remove the HOSA from an installation. HOSA can be used to collect custom
metrics from your pods. This component is currently in
Technology Preview and is not installed by default.
|===
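Several of these variables are typically combined in the inventory. The following fragment is an illustrative sketch, not a complete inventory; the values shown are examples only:

----
[OSEv3:vars]
openshift_metrics_cassandra_storage_type=pv
openshift_metrics_cassandra_replicas=2
openshift_metrics_cassandra_limits_memory=2Gi
openshift_metrics_cassandra_requests_memory=2Gi
openshift_metrics_hawkular_nodeselector={"region":"infra"}
----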

[[modifying-the-deployer-template]]
14 changes: 7 additions & 7 deletions install_config/install/advanced_install.adoc
@@ -1156,7 +1156,7 @@ If you alter this variable, ensure the host name is accessible via your router.
The `openshift_metrics_cassandra_storage_type` variable must be set in order to
use persistent storage for metrics. If
`openshift_metrics_cassandra_storage_type` is not set, then cluster metrics data
is stored in an `EmptyDir` volume, which will be deleted when the Cassandra pod
is stored in an `emptyDir` volume, which will be deleted when the Cassandra pod
terminates.

There are three options for enabling cluster metrics storage when using the
@@ -1232,9 +1232,9 @@ openshift_hosted_logging_deployer_version=v3.5
[[advanced-installation-logging-storage]]
==== Configuring Logging Storage

The `openshift_hosted_logging_storage_kind` variable must be set in order to use
persistent storage for logging. If `openshift_hosted_logging_storage_kind` is
not set, then cluster logging data is stored in an `EmptyDir` volume, which will
The `openshift_logging_storage_kind` variable must be set in order to use
persistent storage for logging. If `openshift_logging_storage_kind` is
not set, then cluster logging data is stored in an `emptyDir` volume, which will
be deleted when the Elasticsearch pod terminates.
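As a sketch, an NFS-backed logging configuration in the inventory might look like the following; the directory, volume name, and size values are hypothetical examples:

----
[OSEv3:vars]
openshift_logging_storage_kind=nfs
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=10Gi
----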

There are three options for enabling cluster logging storage when using the
@@ -1492,15 +1492,15 @@ balance the master API (port 8443) on all master hosts.

[NOTE]
====
This HAProxy load balancer is intended to demonstrate the API server's HA mode
and is not recommended for production environments. If you are deploying to a cloud provider, Red Hat recommends deploying
a cloud-native TCP-based load balancer or taking other steps to provide a highly available load balancer.
====

For an external load balancing solution, you must have:

* A pre-created load balancer VIP configured for SSL passthrough.
* A VIP listening on the port specified by the xref:advanced-master-ports[`openshift_master_api_port`] and xref:advanced-master-ports[`openshift_master_console_port`]
values (8443 by default) and proxying back to all master hosts on that port.
* A domain name for the VIP registered in DNS.
** The domain name will become the value of both
2 changes: 1 addition & 1 deletion release_notes/ose_3_1_release_notes.adoc
@@ -252,7 +252,7 @@ https://bugzilla.redhat.com/show_bug.cgi?id=1275388[BZ#1275388]:: Previously, so
https://bugzilla.redhat.com/show_bug.cgi?id=1265187[BZ#1265187]:: When persistent volume claims (PVC) were created from a template, sometimes the same volume would be mounted to multiple PVCs. At the same time, the volume would show that only one PVC was being used. The issue has been fixed.
https://bugzilla.redhat.com/show_bug.cgi?id=1279308[BZ#1279308]:: Previously, using an etcd storage location other than the default, as defined in the master configuration file, would result in an upgrade failure at the "generate etcd backup" stage. This issue has now been fixed.
https://bugzilla.redhat.com/show_bug.cgi?id=1276599[BZ#1276599]:: Basic authentication passwords can now contain colons.
https://bugzilla.redhat.com/show_bug.cgi?id=1279744[BZ#1279744]:: Previously, giving `*EmptyDir*` volumes a different default permission setting and group ownership could affect deploying the *postgresql-92-rhel7* image. The issue has been fixed.
https://bugzilla.redhat.com/show_bug.cgi?id=1279744[BZ#1279744]:: Previously, giving `*emptyDir*` volumes a different default permission setting and group ownership could affect deploying the *postgresql-92-rhel7* image. The issue has been fixed.
https://bugzilla.redhat.com/show_bug.cgi?id=1276395[BZ#1276395]:: Previously, an error could occur when trying to perform an HA install using Ansible, due to a problem with SRC files. The issue has been fixed.
https://bugzilla.redhat.com/show_bug.cgi?id=1267733[BZ#1267733]:: When installing an etcd cluster on hosts with different network interfaces, the install would fail. The issue has been fixed.
https://bugzilla.redhat.com/show_bug.cgi?id=1274239[BZ#1274239]:: Previously, when changing the default project region from *infra* to *primary*, old router and registry pods were stuck in the terminating stage and could not be deleted, meaning that new router and registry pods could not be deployed. The issue has been fixed.
2 changes: 1 addition & 1 deletion release_notes/ose_3_2_release_notes.adoc
@@ -130,7 +130,7 @@ Authentication] for details.
- The `SETUID` and `SETGID` capabilities have been added back to the *anyuid* SCC,
which ensures that programs that start as root and then drop to a lower
permission level will work by default.
- Quota support has been added for `*emptydir*`. When the quota is enabled on an
- Quota support has been added for `*emptyDir*`. When the quota is enabled on an
XFS system, nodes will limit the amount of space any given project can use on a
node to a fixed upper bound. The quota is tied to the `*FSGroup*` of the
project. Administrators can control this value by editing the project directly
2 changes: 1 addition & 1 deletion using_images/other_images/jenkins.adoc
@@ -335,7 +335,7 @@ are already installed.
$ oc new-app jenkins-persistent
----

.. Or an `EmptyDir` type volume (where configuration does not persist across pod restarts):
.. Or an `emptyDir` type volume (where configuration does not persist across pod restarts):
----
$ oc new-app jenkins-ephemeral
----