From e089e6c92bc6389d591bb9ed6b3e4530aec09b6e Mon Sep 17 00:00:00 2001
From: Gaurav Nelson
Date: Thu, 2 Nov 2017 15:49:11 +1000
Subject: [PATCH] [enterprise-3.6] changed all instances of emptydir and
 EmptyDir to emptyDir

(cherry picked from commit fdd8a09f70db154f3f883bf29c96a0e79d50d45e)

https://github.com/openshift/openshift-docs/pull/6084
---
 architecture/additional_concepts/storage.adoc |  14 +-
 creating_images/guidelines.adoc               |   2 +-
 dev_guide/application_lifecycle/new_app.adoc  |   2 +-
 dev_guide/volumes.adoc                        |   4 +-
 getting_started/devpreview_faq.adoc           |  50 ++--
 install_config/cluster_metrics.adoc           |   4 +-
 install_config/install/advanced_install.adoc  | 213 ++++++++++--------
 release_notes/ose_3_1_release_notes.adoc      |   2 +-
 release_notes/ose_3_2_release_notes.adoc      |   2 +-
 using_images/other_images/jenkins.adoc        |   2 +-
 10 files changed, 146 insertions(+), 149 deletions(-)

diff --git a/architecture/additional_concepts/storage.adoc b/architecture/additional_concepts/storage.adoc
index 43bf5b3570fd..a4369aecde07 100644
--- a/architecture/additional_concepts/storage.adoc
+++ b/architecture/additional_concepts/storage.adoc
@@ -354,9 +354,9 @@ ifdef::openshift-dedicated[]
====
* PVs are provisioned with either EBS volumes (AWS) or GCP storage (GCP),
depending on where the cluster is provisioned.
* Only RWO access mode is applicable, since EBS volumes and GCE Persistent Disks cannot be mounted to multiple nodes.
- * *EmptyDir* has the same lifecycle as the pod:
- ** *EmptyDir* volumes survive container crashes/restarts.
- ** *EmptyDir* volumes are deleted when the pod is deleted.
+ * *emptyDir* has the same lifecycle as the pod:
+ ** *emptyDir* volumes survive container crashes/restarts.
+ ** *emptyDir* volumes are deleted when the pod is deleted.
====
endif::[]

@@ -367,12 +367,12 @@ ifdef::openshift-online[]
* Only RWO access mode is applicable, since EBS volumes and GCE Persistent Disks cannot be mounted to multiple nodes.
* Docker volumes are disabled.
** VOLUME directive without a mapped external volume fails to be instantiated. - * *EmptyDir* is restricted to 512 Mi per project (group) per node. + * *emptyDir* is restricted to 512 Mi per project (group) per node. ** If there is a single pod for a project on a particular node, then the pod can consume up to 512 Mi of *emptyDir* storage. ** If there are multiple pods for a project on a particular node, then those pods will share the 512 Mi of *emptyDir* storage. - * *EmptyDir* has the same lifecycle as the pod: - ** *EmptyDir* volumes survive container crashes/restarts. - ** *EmptyDir* volumes are deleted when the pod is deleted. + * *emptyDir* has the same lifecycle as the pod: + ** *emptyDir* volumes survive container crashes/restarts. + ** *emptyDir* volumes are deleted when the pod is deleted. ==== endif::[] diff --git a/creating_images/guidelines.adoc b/creating_images/guidelines.adoc index 44af7ae5f7a1..5e760a433f14 100644 --- a/creating_images/guidelines.adoc +++ b/creating_images/guidelines.adoc @@ -243,7 +243,7 @@ ifdef::openshift-online[] Docker images cannot be built using the `VOLUME` directive in the `DOCKERFILE`. Images using a read/write file system need to use persistent volumes or -`emptydir` volumes instead of local storage. Instead of specifying a volume in +`emptyDir` volumes instead of local storage. Instead of specifying a volume in the Dockerfile, specify a directory for local storage and mount either a persistent volume or `emptyDir` volume to that directory when deploying the pod. endif::[] diff --git a/dev_guide/application_lifecycle/new_app.adoc b/dev_guide/application_lifecycle/new_app.adoc index af4b70313b36..93364360812e 100644 --- a/dev_guide/application_lifecycle/new_app.adoc +++ b/dev_guide/application_lifecycle/new_app.adoc @@ -319,7 +319,7 @@ as input to `new-app`, then an image stream is created for that image as well. 
a|`DeploymentConfig` a|A `DeploymentConfig` is created either to deploy the output of a build, or a -specified image. The `new-app` command creates xref:../volumes.adoc#dev-guide-volumes[*EmptyDir* +specified image. The `new-app` command creates xref:../volumes.adoc#dev-guide-volumes[*emptyDir* volumes] for all Docker volumes that are specified in containers included in the resulting `DeploymentConfig`. diff --git a/dev_guide/volumes.adoc b/dev_guide/volumes.adoc index ff40b04e65d4..f07661d4762b 100644 --- a/dev_guide/volumes.adoc +++ b/dev_guide/volumes.adoc @@ -22,14 +22,14 @@ are present, to repair them when possible, {product-title} invokes the `fsck` utility prior to the `mount` utility. This occurs when either adding a volume or updating an existing volume. -The simplest volume type is `EmptyDir`, which is a temporary directory on a +The simplest volume type is `emptyDir`, which is a temporary directory on a single machine. Administrators may also allow you to request a xref:persistent_volumes.adoc#dev-guide-persistent-volumes[persistent volume] that is automatically attached to your pods. [NOTE] ==== -`EmptyDir` volume storage may be restricted by a quota based on the pod's +`emptyDir` volume storage may be restricted by a quota based on the pod's FSGroup, if the FSGroup parameter is enabled by your cluster administrator. ==== diff --git a/getting_started/devpreview_faq.adoc b/getting_started/devpreview_faq.adoc index 167ef0e5b559..8d24b78780e5 100644 --- a/getting_started/devpreview_faq.adoc +++ b/getting_started/devpreview_faq.adoc @@ -13,7 +13,7 @@ toc::[] == Overview -During the {product-title} 3 Developer Preview, consult the following sections +During the {product-title} (Next Gen) Developer Preview, consult the following sections for frequently asked questions and xref:devpreview-current-usage-considerations[current usage considerations] during the preview period. 
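The camelCase `emptyDir` spelling that this patch standardizes on is the form the Kubernetes pod API expects. A minimal pod definition using one, as a sketch (the pod name and image are placeholders, not taken from the patched docs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod                # placeholder name for illustration
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    volumeMounts:
    - name: scratch
      mountPath: /scratch          # temporary working directory inside the container
  volumes:
  - name: scratch
    emptyDir: {}                   # must be camelCase; the volume survives container
                                   # restarts but is deleted with the pod
```

This matches the lifecycle described in the *storage.adoc* hunks above: the directory outlives container crashes but not the pod itself.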
@@ -34,12 +34,12 @@ plan for the current (v2) offering and provide you with adequate time to migrate applications to the new platform. WHAT ARE THE RESOURCE LIMITS DURING THE DEVELOPER PREVIEW?:: -Each user can create 1 project with up to 2 GiB memory, 4 CPU cores, and 2 x 1 +Each user can create a single project with up to 2 GiB memory, 4 CPU cores, and 2 x 1 GiB persistent volumes. For more detailed limits, see the *Settings* tab on your project's Overview page in the web console. HOW LONG WILL I HAVE ACCESS TO THE ENVIRONMENT?:: -You will have access to the {product-title} 3 Developer Preview environment for +You will have access to the {product-title} (Next Gen) Developer Preview environment for 30 days, at which point your account will expire. WHAT HAPPENS WHEN MY ACCOUNT EXPIRES?:: @@ -49,38 +49,21 @@ longer be able to log in to the web console, authenticate using the {product-tit CLI tools, or access your applications and related data. CAN I CREATE A NEW ACCOUNT AFTER MY ACCOUNT EXPIRES?:: -If you are interested in trying the {product-title} 3 Developer Preview again, +If you are interested in trying the {product-title} (Next Gen) Developer Preview again, just complete the registration form after your account expires and we will provision a fresh set of resources for you as soon as they become available. -WHAT LANGUAGES ARE SUPPORTED?:: -The {product-title} 3 Developer Preview currently supports: +WHAT LANGUAGES AND DATABASE SERVICES ARE SUPPORTED?:: +The {product-title} (Next Gen) Developer Preview currently supports a number of developer languages and database services, including JBoss Middleware services. 
-- Node.js (0.10) -- PHP (5.5, 5.6) -- Python (2.7, 3.3, 3.4) -- Ruby (2.0, 2.2) -- Perl (5.16, 5.20) -- Java (6, 7, 8, EE) is available via optional JBoss Middleware Services (JBoss -EAP and JBoss Web Server) - -WHAT DATABASE SERVICES ARE SUPPORTED?:: -The {product-title} 3 Developer Preview currently supports: - -- MongoDB (2.4, 2.6) -- MySQL (5.5, 5.6) -- PostgreSQL (9.2, 9.4) - -WHAT JBOSS MIDDLEWARE SERVICES ARE AVAILABLE IN THE DEVELOPER PREVIEW?:: -JBoss EAP and JBoss Web Server are available to try during the {product-title} -(Next Gen) Developer Preview. +See the link:https://www.openshift.com/features/cartridges.html#online3[OpenShift features page] for the list of available languages and services. CAN USERS RUN IMAGES FROM DOCKER HUB OR PUSH THEIR OWN IMAGES TO THE REGISTRY?:: Yes, but with a few caveats. For https://docs.docker.com/engine/security/security/[security reasons], no images that run processes as root are allowed. Additionally, any Dockerfile `VOLUME` instruction must be mounted with either a persistent volume claim (PVC) or an -EmptyDir at this time. See xref:devpreview-current-usage-considerations[more +emptyDir at this time. See xref:devpreview-current-usage-considerations[more considerations]. CAN I RUN PRODUCTION SERVICES ON THE DEVELOPER PREVIEW?:: @@ -93,9 +76,9 @@ to see how it performs in the environment. == Pricing HOW AM I BILLED?:: -During our Developer Preview period, {product-title} 3 is FREE! +During our Developer Preview period, {product-title} (Next Gen) is FREE! -ARE PAID PLANS AVAILABLE FOR {product-title} (NEXT GEN)?:: +ARE PAID PLANS AVAILABLE FOR OPENSHIFT (NEXT GEN)?:: Not at this time. {product-title} (Next Gen) will offer paid tiers when the offering becomes generally available. 
@@ -108,7 +91,7 @@ During our Developer Preview period, we do not offer a Service Level Agreement HOW CAN I FIND OUT ABOUT PRODUCT UPDATES AND SCHEDULED MAINTENANCE?:: Red Hat will provide updates via -http://status.openshift.com[status.openshift.com]. +http://status.preview.openshift.com[status.preview.openshift.com]. [[devpreview-faq-support]] == Support @@ -147,20 +130,21 @@ selected with the provided link). [[devpreview-current-usage-considerations]] == Current Usage Considerations -The {product-title} 3 Developer Preview offering scopes the inventory of images +The {product-title} (Next Gen) Developer Preview offering scopes the inventory of images it provides out of the box with a few considerations in mind, which also apply to any images you choose to import into your project. These conditions are enforced via the {product-title} xref:../dev_guide/compute_resources.adoc#dev-guide-compute-resources[quotas, limit ranges, and compute resources] systems. -* A memory limit of 2GiB is in place. The 2 GiB is spread out across the project's -pods and containers. +* A memory limit of 2 GiB is in place for a project. The 2 GiB is spread out +across the project's pods and containers. Individual pods and containers have a +limit of 1 GiB each. * Maximum counts are in place for pods, replication controllers, services, and secrets (though some amount of these secrets will be needed by the system's build and deployer service accounts). * Any Dockerfile `VOLUME` instruction must be mounted with either a persistent -volume claim (PVC) or an EmptyDir at this time. -* The project associated with a user can allocate up to two PVCs. +volume claim (PVC) or an emptyDir at this time. +* The project associated with a user can allocate up to two PVCs of up to 1 GiB each. * No images that run as *root* are allowed. * Only the Source-to-Image (S2I) build strategy is allowed for any build configurations imported into your project. 
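The `VOLUME`-instruction constraint listed above can be satisfied by mounting a PVC at the directory the Dockerfile declared. A sketch of the relevant pod fragment (the claim name and paths are hypothetical):

```yaml
# Pod spec fragment: backing a Dockerfile's "VOLUME /var/data" with a PVC
# instead of local storage (claim name and image are placeholders).
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/data                   # the directory the VOLUME instruction declared
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data                    # hypothetical PVC, within the per-project limits above
```

An `emptyDir` volume in place of the `persistentVolumeClaim` entry would also satisfy the constraint, at the cost of losing the data when the pod is deleted.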
diff --git a/install_config/cluster_metrics.adoc b/install_config/cluster_metrics.adoc index d0444afdf3f1..7dadd959adf6 100644 --- a/install_config/cluster_metrics.adoc +++ b/install_config/cluster_metrics.adoc @@ -306,7 +306,7 @@ metrics can still survive a container being restarted. In order to use non-persistent storage, you must set the `openshift_metrics_cassandra_storage_type` xref:../install_config/cluster_metrics.adoc#metrics-ansible-variables[variable] -to `emptydir` in the inventory file. +to `emptyDir` in the inventory file. [NOTE] ==== @@ -382,7 +382,7 @@ appended to the prefix starting from 1. |The persistent volume claim size for each of the Cassandra nodes. |`openshift_metrics_cassandra_storage_type` -|Use `emptydir` for ephemeral storage (for testing); `pv` for persistent volumes, +|Use `emptyDir` for ephemeral storage (for testing); `pv` for persistent volumes, which need to be created before the installation; or `dynamic` for dynamic persistent volumes. diff --git a/install_config/install/advanced_install.adoc b/install_config/install/advanced_install.adoc index 3e27d03d3066..5617008fd11e 100644 --- a/install_config/install/advanced_install.adoc +++ b/install_config/install/advanced_install.adoc @@ -267,6 +267,9 @@ master may require access to, or the installation will fail. Defaults to |This variable overrides the default subdomain to use for exposed xref:../../architecture/networking/routes.adoc#architecture-core-concepts-routes[routes]. +|`openshift_master_image_policy_config` +|Sets `imagePolicyConfig` in the master configuration. See xref:../../install_config/master_node_configuration.adoc#master-config-image-config[Image Configuration] for details. + |`openshift_node_proxy_mode` |This variable specifies the xref:../../architecture/core_concepts/pods_and_services.adoc#service-proxy-mode[service @@ -300,7 +303,7 @@ re-configured after deployment. 
|`openshift_use_flannel`
|This variable enables *flannel* as an alternative networking layer instead of
the default SDN. If enabling *flannel*, disable the default SDN with the
`openshift_use_openshift_sdn` variable. For more information, see
xref:../../install_config/configuring_sdn.adoc#using-flannel[Using Flannel].

|`openshift_docker_additional_registries`
|{product-title} adds the specified additional registry or registries to the
*docker* configuration. For any of these registries, secure sockets layer
*docker* configuration. Block the listed registries. Setting this to `all`
blocks everything not in the other variables.

-|`openshift_hosted_metrics_public_url`
+|`openshift_metrics_hawkular_hostname`
|This variable sets the host name for integration with the metrics console by
overriding `metricsPublicURL` in the master configuration for cluster metrics.
If you alter this variable, ensure the host name is accessible via your router.
+See xref:advanced-install-cluster-metrics[Configuring Cluster Metrics] for
+details.

|`openshift_template_service_broker_namespaces`
|This variable enables the template service broker by specifying one or more
namespaces whose templates will be served by the broker.

@@ -417,7 +422,7 @@ image garbage collection], and to
xref:../../admin_guide/manage_nodes.adoc#configuring-node-resources[specify
resources per node]. `kubeletArguments` are key-value pairs that are passed
directly to the Kubelet that match the
https://kubernetes.io/docs/admin/kubelet/[Kubelet's command line
arguments]. `kubeletArguments` are not migrated or validated and may become
invalid if used. These values override other settings in node configuration
which may cause invalid configurations.
Example usage: @@ -442,11 +447,6 @@ xref:advanced-install-docker-system-container[running `docker` as a system conta |This variable configures whether the host is marked as a schedulable node, meaning that it is available for placement of new pods. See xref:marking-masters-as-unschedulable-nodes[Configuring Schedulability on Masters]. - -|`*openshift_template_service_broker_namespaces*` -|This variable enables the template service broker by specifying one of more -namespaces whose templates will be served by the broker. -openshift_template_service_broker_namespaces configurable |=== [[configuring-host-port]] @@ -705,8 +705,9 @@ If you are using an image registry other than the default at *_/etc/ansible/hosts_* file. ---- -oreg_url=example.com/openshift3/ose-${component}:${version} +oreg_url={registry}/openshift3/ose-${component}:${version} openshift_examples_modify_imagestreams=true +openshift_docker_additional_registries={registry} ---- .Registry Variables @@ -719,8 +720,18 @@ openshift_examples_modify_imagestreams=true |`*openshift_examples_modify_imagestreams*` |Set to `true` if pointing to a registry other than the default. Modifies the image stream location to the value of `*oreg_url*`. + +|`*openshift_docker_additional_registries*` +|Specify the additional registry or registries. 
|===

+For example:
+----
+oreg_url=example.com/openshift3/ose-${component}:${version}
+openshift_examples_modify_imagestreams=true
+openshift_docker_additional_registries=example.com
+----
+
[[advanced-install-registry-storage]]
==== Configuring Registry Storage

@@ -813,7 +824,7 @@ openshift_hosted_registry_storage_s3_regionendpoint=https://myendpoint.example.c
GlusterFS can be configured to provide
xref:../../architecture/additional_concepts/storage.adoc#architecture-additional-concepts-storage[persistent storage] and
-xref:../../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[dynamic provisioning] for {product-title}. It can be used both containerized within
+xref:../../install_config/storage_examples/gluster_dynamic_example.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[dynamic provisioning] for {product-title}. It can be used both containerized within
{product-title} and non-containerized on its own nodes.

[[advanced-install-containerized-glusterfs-persistent-storage]]
@@ -926,7 +937,7 @@ Hello from Heketi.
After successful installation, see link:https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/container-native_storage_for_openshift_container_platform/chap-documentation-red_hat_gluster_storage_container_native_with_openshift_platform-gluster_pod_operations[Operations on a Red Hat Gluster Storage Pod in an OpenShift Environment] to check the status of the GlusterFS clusters.
-xref:../../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[Dynamic provisioning] of GlusterFS volumes can occur by
+xref:../../install_config/storage_examples/gluster_dynamic_example.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[Dynamic provisioning] of GlusterFS volumes can occur by
xref:../../install_config/storage_examples/gluster_dynamic_example.adoc#create-a-pvc-ro-request-storage-for-your-application[creating a PVC to request storage].

[[advanced-install-configuring-openshift-container-registry]]
@@ -1068,60 +1079,79 @@ environment is defined for builds.

|Variable |Purpose

-|`*openshift_http_proxy*`
-|This variable specifies the `*HTTP_PROXY*` environment variable for masters and
+|`openshift_http_proxy`
+|This variable specifies the `HTTP_PROXY` environment variable for masters and
the Docker daemon.

-|`*openshift_https_proxy*`
-|This variable specifices the `*HTTPS_PROXY*` environment variable for masters
+|`openshift_https_proxy`
+|This variable specifies the `HTTPS_PROXY` environment variable for masters
and the Docker daemon.

-|`*openshift_no_proxy*`
-|This variable is used to set the `*NO_PROXY*` environment variable for masters
+|`openshift_no_proxy`
+|This variable is used to set the `NO_PROXY` environment variable for masters
and the Docker daemon. This value should be set to a comma-separated list of
host names or wildcard host names that should not use the defined proxy. This
list will be augmented with the list of all defined {product-title} host names
by default.

-|`*openshift_generate_no_proxy_hosts*`
+|`openshift_generate_no_proxy_hosts`
|This boolean variable specifies whether or not the names of all defined
OpenShift hosts and `pass:[*.cluster.local]` should be automatically appended to
-the `*NO_PROXY*` list. Defaults to *true*; set it to *false* to override this
+the `NO_PROXY` list.
Defaults to `true`; set it to `false` to override this option. -|`*openshift_builddefaults_http_proxy*` -|This variable defines the `*HTTP_PROXY*` environment variable inserted into -builds using the `*BuildDefaults*` admission controller. If -`*openshift_http_proxy*` is set, this variable will inherit that value; you only +|`openshift_builddefaults_http_proxy` +|This variable defines the `HTTP_PROXY` environment variable inserted into +builds using the `BuildDefaults` admission controller. If +`openshift_http_proxy` is set, this variable will inherit that value; you only need to set this if you want your builds to use a different value. -|`*openshift_builddefaults_https_proxy*` +|`openshift_builddefaults_https_proxy` |This variable defines the `*HTTPS_PROXY*` environment variable inserted into builds using the `*BuildDefaults*` admission controller. If `*openshift_https_proxy*` is set, this variable will inherit that value; you only need to set this if you want your builds to use a different value. -|`*openshift_builddefaults_no_proxy*` -|This variable defines the `*NO_PROXY*` environment variable inserted into -builds using the `*BuildDefaults*` admission controller. If -`*openshift_no_proxy*` is set, this variable will inherit that value; you only +|`openshift_builddefaults_no_proxy` +|This variable defines the `NO_PROXY` environment variable inserted into +builds using the `BuildDefaults` admission controller. If +`openshift_no_proxy` is set, this variable will inherit that value; you only need to set this if you want your builds to use a different value. -|`*openshift_builddefaults_git_http_proxy*` +|`openshift_builddefaults_git_http_proxy` |This variable defines the HTTP proxy used by `git clone` operations during a -build, defined using the `*BuildDefaults*` admission controller. If -`*openshift_builddefaults_http_proxy*` is set, this variable will inherit that +build, defined using the `BuildDefaults` admission controller. 
If
+`openshift_builddefaults_http_proxy` is set, this variable will inherit that
value; you only need to set this if you want your `git clone` operations to use
a different value.

-|`*openshift_builddefaults_git_https_proxy*`
+|`openshift_builddefaults_git_https_proxy`
|This variable defines the HTTPS proxy used by `git clone` operations during a
-build, defined using the `*BuildDefaults*` admission controller. If
-`*openshift_builddefaults_https_proxy*` is set, this variable will inherit that
+build, defined using the `BuildDefaults` admission controller. If
+`openshift_builddefaults_https_proxy` is set, this variable will inherit that
value; you only need to set this if you want your `git clone` operations to use
a different value.
|===

+[[advanced-install-no-proxy-list]]
+If any of the following variables are set:
+
+- `openshift_no_proxy`
+- `openshift_https_proxy`
+- `openshift_http_proxy`
+
+then all cluster hosts will have an automatically generated `NO_PROXY`
+environment variable injected into several service configuration scripts. The
+default `.svc` domain and your cluster's `dns_domain` (typically
+`.cluster.local`) will also be added.
+
+[NOTE]
+====
+Setting `openshift_generate_no_proxy_hosts` to `false` in your inventory will
+not disable the automatic addition of the `.svc` domain and the cluster domain.
+These are required and added automatically if any of the proxy parameters
+listed above are set.
+====

ifdef::openshift-enterprise,openshift-origin[]
[[advanced-install-configuring-firewalls]]
@@ -1221,9 +1251,8 @@ pods. They are set to `region=infra` by default:

# openshift_registry_selector='region=infra'
----

-The default router and registry will be automatically deployed during
-installation if nodes exist in the `[nodes]` section that match the selector
-settings. For example:
+The registry and router are only able to run on node hosts with the `region=infra` label.
+Ensure that at least one node host in your {product-title} environment has the `region=infra` label.
For example:

----
[nodes]
infra-node1.example.com openshift_node_labels="{'region': 'infra','zone': 'defau

[IMPORTANT]
====
-The registry and router are only able to run on node hosts with the
-`region=infra` label. Ensure that at least one node host in your {product-title}
-environment has the `region=infra` label.
+If no node in the `[nodes]` section matches the selector settings, the default
+router and registry deployments will fail and remain in `Pending` status.
====

It is recommended for production environments that you maintain dedicated
@@ -1399,17 +1427,12 @@ following to enable cluster metrics when using the advanced install:

----
[OSEv3:vars]

-openshift_hosted_metrics_deploy=true <1>
-openshift_hosted_metrics_deployer_prefix=registry.example.com:8888/openshift3/ <2>
-openshift_hosted_metrics_deployer_version=v3.6 <3>
+openshift_metrics_install_metrics=true
----
-<1> Enables the metrics deployment.
-<2> Replace `registry.example.com:8888/openshift3/` with the prefix for the component images.
-<3> Replace with the desired image version.

The {product-title} web console uses the data coming from the Hawkular Metrics
service to display its graphs. The metrics public URL can be set during cluster
-installation using the `openshift_hosted_metrics_public_url` Ansible variable,
+installation using the `openshift_metrics_hawkular_hostname` Ansible variable,
which defaults to:
`\https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics`

@@ -1422,7 +1445,7 @@ If you alter this variable, ensure the host name is accessible via your router.

The `openshift_metrics_cassandra_storage_type` variable must be set in order to
use persistent storage for metrics. If
`openshift_metrics_cassandra_storage_type` is not set, then cluster metrics data
-is stored in an `EmptyDir` volume, which will be deleted when the Cassandra pod
+is stored in an `emptyDir` volume, which will be deleted when the Cassandra pod
terminates.
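For a test cluster, the non-persistent option described above comes down to a single inventory setting; a minimal sketch:

```ini
[OSEv3:vars]
# Ephemeral metrics storage for testing only: Cassandra data lives in an
# emptyDir volume and is lost when the Cassandra pod terminates.
openshift_metrics_cassandra_storage_type=emptyDir
```

For anything beyond testing, use `pv` or `dynamic` instead, as described in the Ansible variable table above.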
There are three options for enabling cluster metrics storage when using the @@ -1440,12 +1463,12 @@ be *_/exports/metrics_*: ---- [OSEv3:vars] -openshift_hosted_metrics_storage_kind=nfs -openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce'] -openshift_hosted_metrics_storage_nfs_directory=/exports -openshift_hosted_metrics_storage_nfs_options='*(rw,root_squash)' -openshift_hosted_metrics_storage_volume_name=metrics -openshift_hosted_metrics_storage_volume_size=10Gi +openshift_metrics_storage_kind=nfs +openshift_metrics_storage_access_modes=['ReadWriteOnce'] +openshift_metrics_storage_nfs_directory=/exports +openshift_metrics_storage_nfs_options='*(rw,root_squash)' +openshift_metrics_storage_volume_name=metrics +openshift_metrics_storage_volume_size=10Gi ---- [discrete] @@ -1458,12 +1481,12 @@ To use an external NFS volume, one must already exist with a path of ---- [OSEv3:vars] -openshift_hosted_metrics_storage_kind=nfs -openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce'] -openshift_hosted_metrics_storage_host=nfs.example.com -openshift_hosted_metrics_storage_nfs_directory=/exports -openshift_hosted_metrics_storage_volume_name=metrics -openshift_hosted_metrics_storage_volume_size=10Gi +openshift_metrics_storage_kind=nfs +openshift_metrics_storage_access_modes=['ReadWriteOnce'] +openshift_metrics_storage_host=nfs.example.com +openshift_metrics_storage_nfs_directory=/exports +openshift_metrics_storage_volume_name=metrics +openshift_metrics_storage_volume_size=10Gi ---- The remote volume path using the following options would be @@ -1491,20 +1514,15 @@ following to enable cluster logging when using the advanced installation method: ---- [OSEv3:vars] -openshift_hosted_logging_deploy=true <1> -openshift_hosted_logging_deployer_prefix=registry.example.com:8888/openshift3/ <2> -openshift_hosted_logging_deployer_version=v3.6 <3> +openshift_logging_install_logging=true ---- -<1> Enables the logging stack. 
-<2> Replace `registry.example.com:8888/openshift3/` with your desired prefix. -<3> Replace with the desired image version. [[advanced-installation-logging-storage]] ==== Configuring Logging Storage -The `openshift_hosted_logging_storage_kind` variable must be set in order to use -persistent storage for logging. If `openshift_hosted_logging_storage_kind` is -not set, then cluster logging data is stored in an `EmptyDir` volume, which will +The `openshift_logging_storage_kind` variable must be set in order to use +persistent storage for logging. If `openshift_logging_storage_kind` is +not set, then cluster logging data is stored in an `emptyDir` volume, which will be deleted when the Elasticsearch pod terminates. There are three options for enabling cluster logging storage when using the @@ -1522,12 +1540,12 @@ the `[nfs]` host group. For example, the volume path using these options would b ---- [OSEv3:vars] -openshift_hosted_logging_storage_kind=nfs -openshift_hosted_logging_storage_access_modes=['ReadWriteOnce'] -openshift_hosted_logging_storage_nfs_directory=/exports -openshift_hosted_logging_storage_nfs_options='*(rw,root_squash)' -openshift_hosted_logging_storage_volume_name=logging -openshift_hosted_logging_storage_volume_size=10Gi +openshift_logging_storage_kind=nfs +openshift_logging_storage_access_modes=['ReadWriteOnce'] +openshift_logging_storage_nfs_directory=/exports +openshift_logging_storage_nfs_options='*(rw,root_squash)' +openshift_logging_storage_volume_name=logging +openshift_logging_storage_volume_size=10Gi ---- [discrete] @@ -1540,12 +1558,12 @@ To use an external NFS volume, one must already exist with a path of ---- [OSEv3:vars] -openshift_hosted_logging_storage_kind=nfs -openshift_hosted_logging_storage_access_modes=['ReadWriteOnce'] -openshift_hosted_logging_storage_host=nfs.example.com -openshift_hosted_logging_storage_nfs_directory=/exports -openshift_hosted_logging_storage_volume_name=logging 
-openshift_hosted_logging_storage_volume_size=10Gi
+openshift_logging_storage_kind=nfs
+openshift_logging_storage_access_modes=['ReadWriteOnce']
+openshift_logging_storage_host=nfs.example.com
+openshift_logging_storage_nfs_directory=/exports
+openshift_logging_storage_volume_name=logging
+openshift_logging_storage_volume_size=10Gi
----

The remote volume path using the following options would be
@@ -1561,7 +1579,7 @@ xref:../../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#i

----
[OSEv3:vars]
-openshift_hosted_logging_storage_kind=dynamic
+openshift_logging_storage_kind=dynamic
----

[[enabling-service-catalog]]
@@ -1718,7 +1736,7 @@ endif::[]

If you have xref:enabling-service-catalog[enabled the service catalog], you can
also enable the
xref:../../architecture/service_catalog/template_service_broker.adoc#arch-template-service-broker[template service broker] (TSB).

To configure the TSB:

@@ -1780,7 +1798,7 @@ xref:../../install_config/web_console_customization.adoc#install-config-web-cons
=== Single Master Examples

You can configure an environment with a single master and multiple nodes, and
-either a single embedded *etcd* or multiple external *etcd* hosts.
+either a single external *etcd* host or multiple external *etcd* hosts.

[NOTE]
====
not supported.
==== Single Master and Multiple Nodes The following table describes an example environment for a single -xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[master] (with embedded *etcd*) +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[master] (with *etcd* on the same host) and two xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#node[nodes]: @@ -1805,6 +1823,9 @@ xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc |*master.example.com* |Master and node +|*master.example.com* +|etcd + |*node1.example.com* .2+.^|Node @@ -1843,6 +1864,10 @@ endif::[] [masters] master.example.com +# host group for etcd +[etcd] +master.example.com + # host group for nodes, includes region info [nodes] master.example.com @@ -2258,28 +2283,16 @@ and configuration files available on the local host. To run the installer, use the following command, specifying `-i` if your inventory file located somewhere other than *_/etc/ansible/hosts_*: -// tag::BZ1466783-workaround-install[] -If you are using a proxy, you must add the IP address of the etcd endpoints to -the `openshift_no_proxy` cluster variable in your inventory file. - -[NOTE] -==== -If you are not using a proxy, you can skip this step. 
-==== - -In {product-title}: -ifdef::openshift-enterprise[] ---- +ifdef::openshift-enterprise[] # ansible-playbook [-i /path/to/inventory] \ /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml ----- endif::[] ifdef::openshift-origin[] ----- # ansible-playbook [-i /path/to/inventory] \ ~/openshift-ansible/playbooks/byo/config.yml ----- endif::[] +---- If for any reason the installation fails, before re-running the installer, see xref:installer-known-issues[Known Issues] to check for any specific @@ -2493,9 +2506,9 @@ endif::[] [[advanced-verifying-the-installation]] == Verifying the Installation +// tag::verifying-the-installation[] After the installation completes: -// tag::verifying-the-installation[] . Verify that the master is started and nodes are registered and reporting in *Ready* status. _On the master host_, run the following as root: diff --git a/release_notes/ose_3_1_release_notes.adoc b/release_notes/ose_3_1_release_notes.adoc index 63f378a85ee2..1d91304b745b 100644 --- a/release_notes/ose_3_1_release_notes.adoc +++ b/release_notes/ose_3_1_release_notes.adoc @@ -252,7 +252,7 @@ https://bugzilla.redhat.com/show_bug.cgi?id=1275388[BZ#1275388]:: Previously, so https://bugzilla.redhat.com/show_bug.cgi?id=1265187[BZ#1265187]:: When persistent volume claims (PVC) were created from a template, sometimes the same volume would be mounted to multiple PVCs. At the same time, the volume would show that only one PVC was being used. The issue has been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1279308[BZ#1279308]:: Previously, using a etcd storage location other than the default, as defined in the master configuration file, would result in an upgrade fail at the "generate etcd backup" stage. This issue has now been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1276599[BZ#1276599]:: Basic authentication passwords can now contain colons. 
-https://bugzilla.redhat.com/show_bug.cgi?id=1279744[BZ#1279744]:: Previously, giving `*EmptyDir*` volumes a different default permission setting and group ownership could affect deploying the *postgresql-92-rhel7* image. The issue has been fixed. +https://bugzilla.redhat.com/show_bug.cgi?id=1279744[BZ#1279744]:: Previously, giving `*emptyDir*` volumes a different default permission setting and group ownership could affect deploying the *postgresql-92-rhel7* image. The issue has been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1276395[BZ#1276395]:: Previously, an error could occur when trying to perform an HA install using Ansible, due to a problem with SRC files. The issue has been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1267733[BZ#1267733]:: When installing a etcd cluster with hosts with different network interfaces, the install would fail. The issue has been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1274239[BZ#1274239]:: Previously, when changing the default project region from *infra* to *primary*, old route and registry pods are stuck in the terminating stage and could not be deleted, meaning that new route and registry pods could not be deployed. The issue has been fixed. diff --git a/release_notes/ose_3_2_release_notes.adoc b/release_notes/ose_3_2_release_notes.adoc index 0458856c1b0c..bf3fd483a1c2 100644 --- a/release_notes/ose_3_2_release_notes.adoc +++ b/release_notes/ose_3_2_release_notes.adoc @@ -130,7 +130,7 @@ Authentication] for details. - The `SETUID` and `SETGID` capabilities have been added back to the *anyuid* SCC, which ensures that programs that start as root and then drop to a lower permission level will work by default. -- Quota support has been added for `*emptydir*`. When the quota is enabled on an +- Quota support has been added for `*emptyDir*`. When the quota is enabled on an XFS system, nodes will limit the amount of space any given project can use on a node to a fixed upper bound. 
The quota is tied to the `*FSGroup*` of the project. Administrators can control this value by editing the project directly diff --git a/using_images/other_images/jenkins.adoc b/using_images/other_images/jenkins.adoc index 60a18cd55686..7feee512b8ba 100644 --- a/using_images/other_images/jenkins.adoc +++ b/using_images/other_images/jenkins.adoc @@ -321,7 +321,7 @@ are already installed. $ oc new-app jenkins-persistent ---- -.. Or an `EmptyDir` type volume (where configuration does not persist across pod restarts): +.. Or an `emptyDir` type volume (where configuration does not persist across pod restarts): ---- $ oc new-app jenkins-ephemeral ----