From 0ef50689b16859a5e3ecc1f8c06e599656be0893 Mon Sep 17 00:00:00 2001 From: Gaurav Nelson Date: Thu, 2 Nov 2017 15:49:11 +1000 Subject: [PATCH] [enterprise-3.5] changed all instances of emptydir and EmptyDir to emptyDir (cherry picked from commit fdd8a09f70db154f3f883bf29c96a0e79d50d45e) https://github.com/openshift/openshift-docs/pull/6084 --- architecture/additional_concepts/storage.adoc | 14 +- creating_images/guidelines.adoc | 2 +- dev_guide/application_lifecycle/new_app.adoc | 2 +- dev_guide/volumes.adoc | 4 +- getting_started/devpreview_faq.adoc | 50 +- install_config/cluster_metrics.adoc | 4 +- install_config/install/advanced_install.adoc | 928 +++++++++++++++--- release_notes/ose_3_1_release_notes.adoc | 2 +- release_notes/ose_3_2_release_notes.adoc | 2 +- using_images/other_images/jenkins.adoc | 2 +- 10 files changed, 812 insertions(+), 198 deletions(-) diff --git a/architecture/additional_concepts/storage.adoc b/architecture/additional_concepts/storage.adoc index 3254ef4f2336..68e83b2b546c 100644 --- a/architecture/additional_concepts/storage.adoc +++ b/architecture/additional_concepts/storage.adoc @@ -307,9 +307,9 @@ ifdef::openshift-dedicated[] ==== * PVs are provisioned with either EBS volumes (AWS) or GCP storage (GCP), depending on where the cluster is provisioned. * Only RWO access mode is applicable, since EBS volumes and GCE Persistent Disks cannot be mounted to multiple nodes. - * *EmptyDir* has the same lifecycle as the pod: - ** *EmptyDir* volumes survive container crashes/restarts. - ** *EmptyDir* volumes are deleted when the pod is deleted. + * *emptyDir* has the same lifecycle as the pod: + ** *emptyDir* volumes survive container crashes/restarts. + ** *emptyDir* volumes are deleted when the pod is deleted. ==== endif::[] @@ -320,12 +320,12 @@ ifdef::openshift-online[] * Only RWO access access mode is applicable, since EBS volumes and GCE Persistent Disks cannot be mounted to to multiple nodes. * Docker volumes are disabled. ** VOLUME directive without a mapped external volume fails to be instantiated. - * *EmptyDir* is restricted to 512 Mi per project (group) per node. + * *emptyDir* is restricted to 512 Mi per project (group) per node. ** If there is a single pod for a project on a particular node, then the pod can consume up to 512 Mi of *emptyDir* storage. ** If there are multiple pods for a project on a particular node, then those pods will share the 512 Mi of *emptyDir* storage. - * *EmptyDir* has the same lifecycle as the pod: - ** *EmptyDir* volumes survive container crashes/restarts. - ** *EmptyDir* volumes are deleted when the pod is deleted. + * *emptyDir* has the same lifecycle as the pod: + ** *emptyDir* volumes survive container crashes/restarts. + ** *emptyDir* volumes are deleted when the pod is deleted. ==== endif::[] diff --git a/creating_images/guidelines.adoc b/creating_images/guidelines.adoc index 44af7ae5f7a1..5e760a433f14 100644 --- a/creating_images/guidelines.adoc +++ b/creating_images/guidelines.adoc @@ -243,7 +243,7 @@ ifdef::openshift-online[] Docker images cannot be built using the `VOLUME` directive in the `DOCKERFILE`. Images using a read/write file system need to use persistent volumes or -`emptydir` volumes instead of local storage. Instead of specifying a volume in +`emptyDir` volumes instead of local storage. Instead of specifying a volume in the Dockerfile, specify a directory for local storage and mount either a persistent volume or `emptyDir` volume to that directory when deploying the pod. 
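For illustration, a minimal pod definition along these lines might mount an `emptyDir` volume at the directory the application writes to. This is only a sketch; the image name and mount path are placeholders rather than values taken from this guide:

----
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: example/app:latest            # hypothetical image; it must not run as root
    volumeMounts:
    - name: scratch
      mountPath: /opt/app-root/data      # writable directory used instead of a Dockerfile VOLUME
  volumes:
  - name: scratch
    emptyDir: {}                         # temporary storage with the same lifecycle as the pod
----

A persistent volume claim can be referenced under `volumes` instead of `emptyDir` when the data needs to outlive the pod.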
endif::[] diff --git a/dev_guide/application_lifecycle/new_app.adoc b/dev_guide/application_lifecycle/new_app.adoc index af4b70313b36..93364360812e 100644 --- a/dev_guide/application_lifecycle/new_app.adoc +++ b/dev_guide/application_lifecycle/new_app.adoc @@ -319,7 +319,7 @@ as input to `new-app`, then an image stream is created for that image as well. a|`DeploymentConfig` a|A `DeploymentConfig` is created either to deploy the output of a build, or a -specified image. The `new-app` command creates xref:../volumes.adoc#dev-guide-volumes[*EmptyDir* +specified image. The `new-app` command creates xref:../volumes.adoc#dev-guide-volumes[*emptyDir* volumes] for all Docker volumes that are specified in containers included in the resulting `DeploymentConfig`. diff --git a/dev_guide/volumes.adoc b/dev_guide/volumes.adoc index 3275ad6cdce1..8f7a6fcf5731 100644 --- a/dev_guide/volumes.adoc +++ b/dev_guide/volumes.adoc @@ -23,14 +23,14 @@ are present, to repair them when possible, {product-title} invokes the `fsck` utility prior to the `mount` utility. This occurs when either adding a volume or updating an existing volume. -The simplest volume type is `EmptyDir`, which is a temporary directory on a +The simplest volume type is `emptyDir`, which is a temporary directory on a single machine. Administrators may also allow you to request a xref:persistent_volumes.adoc#dev-guide-persistent-volumes[persistent volume] that is automatically attached to your pods. [NOTE] ==== -`EmptyDir` volume storage may be restricted by a quota based on the pod's +`emptyDir` volume storage may be restricted by a quota based on the pod's FSGroup, if the FSGroup parameter is enabled by your cluster administrator. ==== diff --git a/getting_started/devpreview_faq.adoc b/getting_started/devpreview_faq.adoc index 167ef0e5b559..8d24b78780e5 100644 --- a/getting_started/devpreview_faq.adoc +++ b/getting_started/devpreview_faq.adoc @@ -13,7 +13,7 @@ toc::[] == Overview -During the {product-title} 3 Developer Preview, consult the following sections +During the {product-title} (Next Gen) Developer Preview, consult the following sections for frequently asked questions and xref:devpreview-current-usage-considerations[current usage considerations] during the preview period. @@ -34,12 +34,12 @@ plan for the current (v2) offering and provide you with adequate time to migrate applications to the new platform. WHAT ARE THE RESOURCE LIMITS DURING THE DEVELOPER PREVIEW?:: -Each user can create 1 project with up to 2 GiB memory, 4 CPU cores, and 2 x 1 +Each user can create a single project with up to 2 GiB memory, 4 CPU cores, and 2 x 1 GiB persistent volumes. For more detailed limits, see the *Settings* tab on your project's Overview page in the web console. HOW LONG WILL I HAVE ACCESS TO THE ENVIRONMENT?:: -You will have access to the {product-title} 3 Developer Preview environment for +You will have access to the {product-title} (Next Gen) Developer Preview environment for 30 days, at which point your account will expire. WHAT HAPPENS WHEN MY ACCOUNT EXPIRES?:: @@ -49,38 +49,21 @@ longer be able to log in to the web console, authenticate using the {product-tit CLI tools, or access your applications and related data. 
CAN I CREATE A NEW ACCOUNT AFTER MY ACCOUNT EXPIRES?:: -If you are interested in trying the {product-title} 3 Developer Preview again, +If you are interested in trying the {product-title} (Next Gen) Developer Preview again, just complete the registration form after your account expires and we will provision a fresh set of resources for you as soon as they become available. -WHAT LANGUAGES ARE SUPPORTED?:: -The {product-title} 3 Developer Preview currently supports: +WHAT LANGUAGES AND DATABASE SERVICES ARE SUPPORTED?:: +The {product-title} (Next Gen) Developer Preview currently supports a number of developer languages and database services, including JBoss Middleware services. -- Node.js (0.10) -- PHP (5.5, 5.6) -- Python (2.7, 3.3, 3.4) -- Ruby (2.0, 2.2) -- Perl (5.16, 5.20) -- Java (6, 7, 8, EE) is available via optional JBoss Middleware Services (JBoss -EAP and JBoss Web Server) - -WHAT DATABASE SERVICES ARE SUPPORTED?:: -The {product-title} 3 Developer Preview currently supports: - -- MongoDB (2.4, 2.6) -- MySQL (5.5, 5.6) -- PostgreSQL (9.2, 9.4) - -WHAT JBOSS MIDDLEWARE SERVICES ARE AVAILABLE IN THE DEVELOPER PREVIEW?:: -JBoss EAP and JBoss Web Server are available to try during the {product-title} -(Next Gen) Developer Preview. +See the link:https://www.openshift.com/features/cartridges.html#online3[OpenShift features page] for the list of available languages and services. CAN USERS RUN IMAGES FROM DOCKER HUB OR PUSH THEIR OWN IMAGES TO THE REGISTRY?:: Yes, but with a few caveats. For https://docs.docker.com/engine/security/security/[security reasons], no images that run processes as root are allowed. Additionally, any Dockerfile `VOLUME` instruction must be mounted with either a persistent volume claim (PVC) or an -EmptyDir at this time. See xref:devpreview-current-usage-considerations[more +emptyDir at this time. See xref:devpreview-current-usage-considerations[more considerations]. CAN I RUN PRODUCTION SERVICES ON THE DEVELOPER PREVIEW?:: @@ -93,9 +76,9 @@ to see how it performs in the environment. == Pricing HOW AM I BILLED?:: -During our Developer Preview period, {product-title} 3 is FREE! +During our Developer Preview period, {product-title} (Next Gen) is FREE! -ARE PAID PLANS AVAILABLE FOR {product-title} (NEXT GEN)?:: +ARE PAID PLANS AVAILABLE FOR OPENSHIFT (NEXT GEN)?:: Not at this time. {product-title} (Next Gen) will offer paid tiers when the offering becomes generally available. @@ -108,7 +91,7 @@ During our Developer Preview period, we do not offer a Service Level Agreement HOW CAN I FIND OUT ABOUT PRODUCT UPDATES AND SCHEDULED MAINTENANCE?:: Red Hat will provide updates via -http://status.openshift.com[status.openshift.com]. +http://status.preview.openshift.com[status.preview.openshift.com]. [[devpreview-faq-support]] == Support @@ -147,20 +130,21 @@ selected with the provided link). [[devpreview-current-usage-considerations]] == Current Usage Considerations -The {product-title} 3 Developer Preview offering scopes the inventory of images +The {product-title} (Next Gen) Developer Preview offering scopes the inventory of images it provides out of the box with a few considerations in mind, which also apply to any images you choose to import into your project. These conditions are enforced via the {product-title} xref:../dev_guide/compute_resources.adoc#dev-guide-compute-resources[quotas, limit ranges, and compute resources] systems. -* A memory limit of 2GiB is in place. The 2 GiB is spread out across the project's -pods and containers. 
+* A memory limit of 2 GiB is in place for a project. The 2 GiB is spread out +across the project's pods and containers. Individual pods and containers have a +limit of 1 GiB each. * Maximum counts are in place for pods, replication controllers, services, and secrets (though some amount of these secrets will be needed by the system's build and deployer service accounts). * Any Dockerfile `VOLUME` instruction must be mounted with either a persistent -volume claim (PVC) or an EmptyDir at this time. -* The project associated with a user can allocate up to two PVCs. +volume claim (PVC) or an emptyDir at this time. +* The project associated with a user can allocate up to two PVCs of up to 1 GiB each. * No images that run as *root* are allowed. * Only the Source-to-Image (S2I) build strategy is allowed for any build configurations imported into your project. diff --git a/install_config/cluster_metrics.adoc b/install_config/cluster_metrics.adoc index 886d85d68b68..a523c3fd9c3f 100644 --- a/install_config/cluster_metrics.adoc +++ b/install_config/cluster_metrics.adoc @@ -306,7 +306,7 @@ metrics can still survive a container being restarted. In order to use non-persistent storage, you must set the `openshift_metrics_cassandra_storage_type` xref:../install_config/cluster_metrics.adoc#metrics-ansible-variables[variable] -to `emptydir` in the inventory file. +to `emptyDir` in the inventory file. [NOTE] ==== @@ -383,7 +383,7 @@ appended to the prefix starting from 1. |The persistent volume claim size for each of the Cassandra nodes. |`openshift_metrics_cassandra_storage_type` -|Use `emptydir` for ephemeral storage (for testing); `pv` for persistent volumes, +|Use `emptyDir` for ephemeral storage (for testing); `pv` for persistent volumes, which need to be created before the installation; or `dynamic` for dynamic persistent volumes. diff --git a/install_config/install/advanced_install.adoc b/install_config/install/advanced_install.adoc index 75320f0bee47..5617008fd11e 100644 --- a/install_config/install/advanced_install.adoc +++ b/install_config/install/advanced_install.adoc @@ -31,10 +31,14 @@ a supported version of Fedora, CentOS, or RHEL. endif::[] The host initiating the installation does not need to be intended for inclusion in the {product-title} cluster, but it can be. + +Alternatively, a +xref:running-the-advanced-installation-system-container[containerized version of the installer] is available as a system container, which is currently a +Technology Preview feature. ==== ifdef::openshift-enterprise[] -Alternatively, you can use the xref:quick_install.adoc#install-config-install-quick-install[quick installation] +Alternatively, you can use the xref:../../install_config/install/quick_install.adoc#install-config-install-quick-install[quick installation] method if you prefer an interactive installation experience. endif::[] @@ -136,7 +140,7 @@ can be assigned cluster-wide: |`ansible_ssh_user` |This variable sets the SSH user for the installer to use and defaults to `root`. This user should allow SSH-based authentication -xref:host_preparation.adoc#ensuring-host-access[without requiring a password]. If +xref:../../install_config/install/host_preparation.adoc#ensuring-host-access[without requiring a password]. If using SSH key-based authentication, then the key should be managed by an SSH agent. @@ -195,14 +199,14 @@ more information. 
|`openshift_rolling_restart_mode` |This variable enables rolling restarts of HA masters (i.e., masters are taken down one at a time) when -xref:../upgrading/automated_upgrades.adoc#running-the-upgrade-playbook-directly[running +xref:../../install_config/upgrading/automated_upgrades.adoc#running-the-upgrade-playbook-directly[running the upgrade playbook directly]. It defaults to `services`, which allows rolling restarts of services on the masters. It can instead be set to `system`, which enables rolling, full system restarts and also works for single master clusters. |`os_sdn_network_plugin_name` |This variable configures which -xref:../../architecture/additional_concepts/sdn.adoc#architecture-additional-concepts-sdn[OpenShift SDN plug-in] to +xref:../../architecture/networking/sdn.adoc#architecture-additional-concepts-sdn[OpenShift SDN plug-in] to use for the pod network, which defaults to `redhat/openshift-ovs-subnet` for the standard SDN plug-in. Set the variable to `redhat/openshift-ovs-multitenant` to use the multitenant plug-in. @@ -253,7 +257,7 @@ options] in the OAuth configuration. See xref:advanced-install-session-options[C |This variable configures the subnet in which xref:../../architecture/core_concepts/pods_and_services.adoc#services[services] will be created within the -xref:../../architecture/additional_concepts/sdn.adoc#architecture-additional-concepts-sdn[{product-title} +xref:../../architecture/networking/sdn.adoc#architecture-additional-concepts-sdn[{product-title} SDN]. This network block should be private and must not conflict with any existing network blocks in your infrastructure to which pods, nodes, or the master may require access to, or the installation will fail. Defaults to @@ -261,7 +265,10 @@ master may require access to, or the installation will fail. Defaults to |`openshift_master_default_subdomain` |This variable overrides the default subdomain to use for exposed -xref:../../architecture/core_concepts/routes.adoc#architecture-core-concepts-routes[routes]. +xref:../../architecture/networking/routes.adoc#architecture-core-concepts-routes[routes]. + +|`openshift_master_image_policy_config` +|Sets `imagePolicyConfig` in the master configuration. See xref:../../install_config/master_node_configuration.adoc#master-config-image-config[Image Configuration] for details. |`openshift_node_proxy_mode` |This variable specifies the @@ -275,19 +282,19 @@ when placing pods. |`osm_cluster_network_cidr` | This variable overrides the -xref:../../architecture/additional_concepts/sdn.adoc#sdn-design-on-masters[SDN +xref:../../architecture/networking/sdn.adoc#sdn-design-on-masters[SDN cluster network] CIDR block. This is the network from which pod IPs are assigned. This network block should be a private block and must not conflict with existing network blocks in your infrastructure to which pods, nodes, or the master may require access. Defaults to `10.128.0.0/14` and cannot be arbitrarily re-configured after deployment, although certain changes to it can be made in -the xref:../configuring_sdn.adoc#configuring-the-pod-network-on-masters[SDN +the xref:../../install_config/configuring_sdn.adoc#configuring-the-pod-network-on-masters[SDN master configuration]. |`osm_host_subnet_length` |This variable specifies the size of the per host subnet allocated for pod IPs by -xref:../../architecture/additional_concepts/sdn.adoc#sdn-design-on-masters[{product-title} +xref:../../architecture/networking/sdn.adoc#sdn-design-on-masters[{product-title} SDN]. 
Defaults to `9` which means that a subnet of size /23 is allocated to each host; for example, given the default 10.128.0.0/14 cluster network, this will allocate 10.128.0.0/23, 10.128.2.0/23, 10.128.4.0/23, and so on. This cannot be @@ -296,7 +303,7 @@ re-configured after deployment. |`openshift_use_flannel` |This variable enables *flannel* as an alternative networking layer instead of the default SDN. If enabling *flannel*, disable the default SDN with the -`openshift_use_openshift_sdn` variable. For more information, see xref:../configuring_sdn.adoc#using-flannel[Using Flannel]. +`openshift_use_openshift_sdn` variable. For more information, see xref:../install_config/configuring_sdn.adoc#using-flannel[Using Flannel]. |`openshift_docker_additional_registries` |{product-title} adds the specified additional registry or registries to the @@ -313,12 +320,16 @@ the *docker* configuration. For any of these registries, secure sockets layer *docker* configuration. Block the listed registries. Setting this to `all` blocks everything not in the other variables. -|`openshift_hosted_metrics_public_url` +|`openshift_metrics_hawkular_hostname` |This variable sets the host name for integration with the metrics console by overriding `metricsPublicURL` in the master configuration for cluster metrics. If you alter this variable, ensure the host name is accessible via your router. See xref:advanced-install-cluster-metrics[Configuring Cluster Metrics] for details. + +|`openshift_template_service_broker_namespaces` +|This variable enables the template service broker by specifying one of more +namespaces whose templates will be served by the broker. |=== [[advanced-install-deployment-types]] @@ -370,25 +381,25 @@ can be assigned to individual host entries: |Variable |Purpose -|`*openshift_hostname*` +|`openshift_hostname` |This variable overrides the internal cluster host name for the system. Use this when the system's default IP address does not resolve to the system host name. -|`*openshift_public_hostname*` +|`openshift_public_hostname` |This variable overrides the system's public host name. Use this for cloud installations, or for hosts on networks using a network address translation (NAT). -|`*openshift_ip*` +|`openshift_ip` |This variable overrides the cluster internal IP address for the system. Use this when using an interface that is not configured with the default route. -|`*openshift_public_ip*` +|`openshift_public_ip` |This variable overrides the system's public IP address. Use this for cloud installations, or for hosts on networks using a network address translation (NAT). -|`*containerized*` +|`containerized` |If set to *true*, containerized {product-title} services are run on the target master and node hosts instead of installed using RPM packages. If set to *false* or unset, the default RPM method is used. RHEL Atomic Host requires the containerized @@ -399,36 +410,38 @@ ifdef::openshift-enterprise[] Containerized installations are supported starting in {product-title} 3.1.1. endif::[] -|`*openshift_node_labels*` +|`openshift_node_labels` |This variable adds labels to nodes during installation. See xref:configuring-node-host-labels[Configuring Node Host Labels] for more details. 
-|`*openshift_node_kubelet_args*` +|`openshift_node_kubelet_args` |This variable is used to configure `kubeletArguments` on nodes, such as arguments used in xref:../../admin_guide/garbage_collection.adoc#admin-guide-garbage-collection[container and image garbage collection], and to xref:../../admin_guide/manage_nodes.adoc#configuring-node-resources[specify resources per node]. `kubeletArguments` are key value pairs that are passed directly to the Kubelet that match the -https://kubernetes.io/docs/admin/kubelet/[Kubelet's command line +http://kubernetes.io/v1.1/docs/admin/kubelet.html[Kubelet's command line arguments]. `kubeletArguments` are not migrated or validated and may become invalid if used. These values override other settings in node configuration which may cause invalid configurations. Example usage: *{'image-gc-high-threshold': ['90'],'image-gc-low-threshold': ['80']}*. -|`*openshift_hosted_router_selector*` +|`openshift_hosted_router_selector` |Default node selector for automatically deploying router pods. See xref:configuring-node-host-labels[Configuring Node Host Labels] for details. -|`*openshift_registry_selector*` +|`openshift_registry_selector` |Default node selector for automatically deploying registry pods. See xref:configuring-node-host-labels[Configuring Node Host Labels] for details. -|`*openshift_docker_options*` -|This variable configures additional Docker options within *_/etc/sysconfig/docker_*, such as -options used in xref:../../install_config/install/host_preparation.adoc#managing-docker-container-logs[Managing Container Logs]. -Example usage: *"--log-driver json-file --log-opt max-size=1M --log-opt max-file=3"*. +|`openshift_docker_options` +|This variable configures additional `docker` options within +*_/etc/sysconfig/docker_*, such as options used in +xref:../../install_config/install/host_preparation.adoc#managing-docker-container-logs[Managing Container Logs]. Example usage: *"--log-driver json-file --log-opt max-size=1M +--log-opt max-file=3"*. Do not use when +xref:advanced-install-docker-system-container[running `docker` as a system container]. |`openshift_schedulable` |This variable configures whether the host is marked as a schedulable node, @@ -546,11 +559,14 @@ inventory file. For example: openshift_disable_check=memory_availability,disk_availability ---- -A similar set of checks meant to run for diagnostic on existing clusters can be +[NOTE] +==== +A similar set of health checks meant to run for diagnostics on existing clusters +can be found in +xref:../../admin_guide/diagnostics_tool.adoc#admin-guide-health-checks-via-ansible-playbook[Ansible-based Health Checks]. Another set of checks for checking certificate expiration can be found in -xref:../../admin_guide/diagnostics_tool.adoc#admin-guide-diagnostics-tool[Additional Diagnostic Checks via Ansible]. Another set of checks for checking certificate -expiration can be found in xref:../../install_config/redeploying_certificates.adoc#install-config-redeploying-certificates[Redeploying Certificates]. +==== [[advanced-install-configuring-system-containers]] === Configuring System Containers @@ -689,8 +705,9 @@ If you are using an image registry other than the default at *_/etc/ansible/hosts_* file. 
---- -oreg_url=example.com/openshift3/ose-${component}:${version} +oreg_url={registry}/openshift3/ose-${component}:${version} openshift_examples_modify_imagestreams=true +openshift_docker_additional_registries={registry} ---- .Registry Variables @@ -703,8 +720,18 @@ openshift_examples_modify_imagestreams=true |`*openshift_examples_modify_imagestreams*` |Set to `true` if pointing to a registry other than the default. Modifies the image stream location to the value of `*oreg_url*`. + +|`*openshift_docker_additional_registries*` +|Specify the additional registry or registries. |=== +For example: +---- +oreg_url=example.com/openshift3/ose-${component}:${version} +openshift_examples_modify_imagestreams=true +openshift_docker_additional_registries=example.com +---- + [[advanced-install-registry-storage]] ==== Configuring Registry Storage @@ -791,6 +818,241 @@ region endpoint parameter: openshift_hosted_registry_storage_s3_regionendpoint=https://myendpoint.example.com/ ---- + +[[advanced-install-glusterfs-persistent-storage]] +=== Configuring GlusterFS Persistent Storage + +GlusterFS can be configured to provide +xref:../../architecture/additional_concepts/storage.adoc#architecture-additional-concepts-storage[peristent storage] and +xref:../../install_config/storage_examples/gluster_dynamic_example.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[dynamic provisioning] for {product-title}. It can be used both containerized within +{product-title} and non-containerized on its own nodes. + +[[advanced-install-containerized-glusterfs-persistent-storage]] +==== Configuring Containerized GlusterFS Persistent Storage + +ifdef::openshift-enterprise[] +This option utilizes +link:https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/container-native_storage_for_openshift_container_platform/[Red Hat Container Native Storage (CNS)] for configuring containerized GlusterFS persistent storage in {product-title}. +endif::[] +ifdef::openshift-origin[] +See link:https://github.com/gluster/gluster-kubernetes[Running Containerized GlusterFS in Kubernetes] for additional information on containerized storage +using GlusterFS. +endif::[] + +[IMPORTANT] +==== +See +xref:../../install_config/install/prerequisites.adoc#prereq-containerized-glusterfs-considerations[Containerized GlusterFS Considerations] for specific host preparations and prerequisites. +==== + +. In your inventory file, add `glusterfs` in the `[OSEv3:children]` section to +enable the `[glusterfs]` group: ++ +---- +[OSEv3:children] +masters +nodes +glusterfs +---- + +. (Optional) Include any of the following role variables in the `[OSEv3:vars]` +section you wish to change: ++ +---- +[OSEv3:vars] +openshift_storage_glusterfs_namespace=glusterfs <1> +openshift_storage_glusterfs_name=storage <2> +---- +<1> The project (namespace) to host the storage pods. Defaults to `glusterfs`. +<2> A name to identify the GlusterFS cluster, which will be used in resource names. +Defaults to `storage`. + +. Add a `[glusterfs]` section with entries for each storage node that will host +the GlusterFS storage and include the `glusterfs_ip` and +`glusterfs_devices` parameters in the form: ++ +---- + glusterfs_ip= glusterfs_devices='[ "", "", ... 
]' +---- ++ +For example: ++ +---- +[glusterfs] +192.168.10.11 glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]' +192.168.10.12 glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]' +192.168.10.13 glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]' +---- ++ +Set `glusterfs_devices` to a list of raw block devices that will be completely +managed as part of a GlusterFS cluster. There must be at least one device +listed. Each device must be bare, with no partitions or LVM PVs. Set +`glusterfs_ip` to the IP address that will be used by pods to communicate with +the GlusterFS node. + +. Add the hosts listed under `[glusterfs]` to the `[nodes]` group as well: ++ +---- +[nodes] +192.168.10.14 +192.168.10.15 +192.168.10.16 +---- + +. After completing the cluster installation per +xref:running-the-advanced-installation[Running the Advanced Installation], run +the following from a master to verify the necessary objects were successfully +created: + +.. Verfiy that the GlusterFS `StorageClass` was created: ++ +---- +# oc get storageclass +NAME TYPE +glusterfs-storage kubernetes.io/glusterfs +---- + +.. Verify that the route was created: ++ +---- +# oc get routes +NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD +heketi-glusterfs-route heketi-glusterfs-default.cloudapps.example.com heketi-glusterfs None +---- ++ +[NOTE] +==== +The name for the route will be `heketi-glusterfs-route` unless the default +`glusterfs` value was overridden using the `openshift_glusterfs_storage_name` +variable in the inventory file. +==== + +.. Use `curl` to verify the route works correctly: ++ +---- +# curl http://heketi-glusterfs-default.cloudapps.example.com/hello +Hello from Heketi. +---- + +After successful installation, see +link:https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/container-native_storage_for_openshift_container_platform/chap-documentation-red_hat_gluster_storage_container_native_with_openshift_platform-gluster_pod_operations[Operations on a Red Hat Gluster Storage Pod in an OpenShift Environment] to check the status of the GlusterFS clusters. + +xref:../../install_config/storage_examples/gluster_dynamic_example.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[Dynamic provisioning] of GlusterFS volumes can occur by +xref:../../install_config/storage_examples/gluster_dynamic_example.adoc#create-a-pvc-ro-request-storage-for-your-application[creating a PVC to request storage]. + +[[advanced-install-configuring-openshift-container-registry]] +=== Configuring the OpenShift Container Registry + +Additional configuration options are available at installation time for the +xref:../../architecture/infrastructure_components/image_registry.adoc#integrated-openshift-registry[OpenShift Container Registry]. + +If no registry storage options are used, the default {product-title} registry is +ephermal and all data will be lost if the pod no longer exists. {product-title} +also supports a single node NFS-backed registry, but this option lacks +redundancy and reliability compared with the GlusterFS-backed option. 
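For comparison, the simpler single-node NFS-backed registry mentioned above is requested through inventory variables. The following is only a sketch; it assumes the `openshift_hosted_registry_storage_*` variables follow the same pattern as the metrics and logging storage variables shown later in this topic, and the export path and size are example values:

----
[OSEv3:vars]
# Back the integrated registry with an NFS export created on a host in the [nfs] group.
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
----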
+ +[[advanced-install-containerized-glusterfs-backed-registry]] +==== Configuring a Containerized GlusterFS-Backed Registry + +Similar to +xref:advanced-install-containerized-glusterfs-persistent-storage[configuring containerized GlusterFS for persistent storage], GlusterFS storage can be +configured and deployed for an OpenShift Container Registry during the initial +installation of the cluster to offer redundant and more reliable storage for the +registry. + +[IMPORTANT] +==== +See +xref:../../install_config/install/prerequisites.adoc#prereq-containerized-glusterfs-considerations[Containerized +GlusterFS Considerations] for specific host preparations and prerequisites. +==== + +Configuration of storage for an OpenShift Container Registry is very similar to +configuration for GlusterFS persistent storage in that it can be either +containerized or non-containerized. For this containerized method, the following +exceptions and additions apply: + +. In your inventory file, add `glusterfs_registry` in the `[OSEv3:children]` section +to enable the `[glusterfs_registry]` group: ++ +---- +[OSEv3:children] +masters +nodes +glusterfs_registry +---- + +. Add the following role variable in the `[OSEv3:vars]` section to enable the +GlusterFS-backed registry, provided that the `glusterfs_registry` group name and +the `[glusterfs_registry]` group exist: ++ +---- +[OSEv3:vars] +openshift_hosted_registry_storage_kind=glusterfs +---- + +. It is recommended to have at least three registry pods, so set the following +role variable in the `[OSEv3:vars]` section: ++ +---- +openshift_hosted_registry_replicas=3 +---- + +. If you want to specify the volume size for the GlusterFS-backed registry, set +the following role variable in `[OSEv3:vars]` section: ++ +---- +openshift_hosted_registry_storage_volume_size=10Gi +---- ++ +If unspecified, the volume size defaults to `5Gi`. + +. The installer will deploy the OpenShift Container Registry pods and associated +routers on nodes containing the `region=infra` label. Add this label on at least +one node entry in the `[nodes]` section, otherwise the registry deployment will +fail. For example: ++ +---- +[nodes] +192.168.10.14 openshift_schedulable=True openshift_node_labels="{'region': 'infra'}" +---- + +. Add a `[glusterfs_registry]` section with entries for each storage node that +will host the GlusterFS-backed registry and include the `glusterfs_ip` and +`glusterfs_devices` parameters in the form: ++ +---- + glusterfs_ip= glusterfs_devices='[ "", "", ... ]' +---- ++ +For example: ++ +---- +[glusterfs_registry] +192.168.10.14 glusterfs_ip=192.168.10.14 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]' +192.168.10.15 glusterfs_ip=192.168.10.15 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]' +192.168.10.16 glusterfs_ip=192.168.10.16 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]' +---- ++ +Set `glusterfs_devices` to a list of raw block devices that will be completely +managed as part of a GlusterFS cluster. There must be at least one device +listed. Each device must be bare, with no partitions or LVM PVs. Set +`glusterfs_ip` to the IP address that will be used by pods to communicate with +the GlusterFS node. + +. 
Add the hosts listed under `[glusterfs_registry]` to the `[nodes]` group as well: ++ +---- +[nodes] +192.168.10.14 +192.168.10.15 +192.168.10.16 +---- + +After successful installation, see +link:https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/container-native_storage_for_openshift_container_platform/chap-documentation-red_hat_gluster_storage_container_native_with_openshift_platform-gluster_pod_operations[Operations on a Red Hat Gluster Storage Pod in an OpenShift Environment] to check the +status of the GlusterFS clusters. + [[advanced-install-configuring-global-proxy]] === Configuring Global Proxy Options @@ -817,60 +1079,79 @@ environment is defined for builds. |Variable |Purpose -|`*openshift_http_proxy*` -|This variable specifies the `*HTTP_PROXY*` environment variable for masters and +|`openshift_http_proxy` +|This variable specifies the `HTTP_PROXY` environment variable for masters and the Docker daemon. -|`*openshift_https_proxy*` -|This variable specifices the `*HTTPS_PROXY*` environment variable for masters +|`openshift_https_proxy` +|This variable specifices the `HTTPS_PROXY` environment variable for masters and the Docker daemon. -|`*openshift_no_proxy*` -|This variable is used to set the `*NO_PROXY*` environment variable for masters +|`openshift_no_proxy` +|This variable is used to set the `NO_PROXY` environment variable for masters and the Docker daemon. This value should be set to a comma separated list of host names or wildcard host names that should not use the defined proxy. This list will be augmented with the list of all defined {product-title} host names by default. -|`*openshift_generate_no_proxy_hosts*` +|`openshift_generate_no_proxy_hosts` |This boolean variable specifies whether or not the names of all defined OpenShift hosts and `pass:[*.cluster.local]` should be automatically appended to -the `*NO_PROXY*` list. Defaults to *true*; set it to *false* to override this +the `NO_PROXY` list. Defaults to `true`; set it to `false` to override this option. -|`*openshift_builddefaults_http_proxy*` -|This variable defines the `*HTTP_PROXY*` environment variable inserted into -builds using the `*BuildDefaults*` admission controller. If -`*openshift_http_proxy*` is set, this variable will inherit that value; you only +|`openshift_builddefaults_http_proxy` +|This variable defines the `HTTP_PROXY` environment variable inserted into +builds using the `BuildDefaults` admission controller. If +`openshift_http_proxy` is set, this variable will inherit that value; you only need to set this if you want your builds to use a different value. -|`*openshift_builddefaults_https_proxy*` +|`openshift_builddefaults_https_proxy` |This variable defines the `*HTTPS_PROXY*` environment variable inserted into builds using the `*BuildDefaults*` admission controller. If `*openshift_https_proxy*` is set, this variable will inherit that value; you only need to set this if you want your builds to use a different value. -|`*openshift_builddefaults_no_proxy*` -|This variable defines the `*NO_PROXY*` environment variable inserted into -builds using the `*BuildDefaults*` admission controller. If -`*openshift_no_proxy*` is set, this variable will inherit that value; you only +|`openshift_builddefaults_no_proxy` +|This variable defines the `NO_PROXY` environment variable inserted into +builds using the `BuildDefaults` admission controller. 
If +`openshift_no_proxy` is set, this variable will inherit that value; you only need to set this if you want your builds to use a different value. -|`*openshift_builddefaults_git_http_proxy*` +|`openshift_builddefaults_git_http_proxy` |This variable defines the HTTP proxy used by `git clone` operations during a -build, defined using the `*BuildDefaults*` admission controller. If -`*openshift_builddefaults_http_proxy*` is set, this variable will inherit that +build, defined using the `BuildDefaults` admission controller. If +`openshift_builddefaults_http_proxy` is set, this variable will inherit that value; you only need to set this if you want your `git clone` operations to use a different value. -|`*openshift_builddefaults_git_https_proxy*` +|`openshift_builddefaults_git_https_proxy` |This variable defines the HTTPS proxy used by `git clone` operations during a -build, defined using the `*BuildDefaults*` admission controller. If -`*openshift_builddefaults_https_proxy*` is set, this variable will inherit that +build, defined using the `BuildDefaults` admission controller. If +`openshift_builddefaults_https_proxy` is set, this variable will inherit that value; you only need to set this if you want your `git clone` operations to use a different value. |=== +[[advanced-install-no-proxy-list]] +If any of: + +- `openshift_no_proxy` +- `openshift_https_proxy` +- `openshift_http_proxy` + +are set, then all cluster hosts will have an automatically generated `NO_PROXY` +environment variable injected into several service configuration scripts. The +default `.svc` domain and your cluster's `dns_domain` (typically +`.cluster.local`) will also be added. + +[NOTE] +==== +Setting `openshift_generate_no_proxy_hosts` to `false` in your inventory will +not disable the automatic addition of the `.svc` domain and the cluster domain. +These are required and added automatically if any of the above listed proxy +parameters are set. +==== ifdef::openshift-enterprise,openshift-origin[] [[advanced-install-configuring-firewalls]] @@ -911,7 +1192,7 @@ endif::[] Any hosts you designate as masters during the installation process should also be configured as nodes so that the masters are configured as part of the -xref:../../architecture/additional_concepts/networking.adoc#openshift-sdn[OpenShift SDN]. You must do so by adding entries for these hosts to the `[nodes]` section: +xref:../../architecture/networking/network_plugins.adoc#openshift-sdn[OpenShift SDN]. You must do so by adding entries for these hosts to the `[nodes]` section: ---- [nodes] @@ -942,7 +1223,7 @@ You can assign xref:../../architecture/core_concepts/pods_and_services.adoc#labels[labels] to node hosts during the Ansible install by configuring the *_/etc/ansible/hosts_* file. Labels are useful for determining the placement of pods onto nodes using -the xref:../../admin_guide/scheduler.adoc#configurable-predicates[scheduler]. +the xref:../../admin_guide/scheduling/scheduler.adoc#configurable-predicates[scheduler]. Other than `region=infra` (discussed in xref:configuring-dedicated-infrastructure-nodes[Configuring Dedicated Infrastructure Nodes]), the actual label names and values are arbitrary and can be assigned however you see fit per your cluster's requirements. @@ -970,9 +1251,8 @@ pods. They are set to `region=infra` by default: # openshift_registry_selector='region=infra' ---- -The default router and registry will be automatically deployed during -installation if nodes exist in the `[nodes]` section that match the selector -settings. 
For example: +The registry and router are only able to run on node hosts with the `region=infra` label. +Ensure that at least one node host in your {product-title} environment has the `region=infra` label. For example: ---- [nodes] @@ -981,9 +1261,8 @@ infra-node1.example.com openshift_node_labels="{'region': 'infra','zone': 'defau [IMPORTANT] ==== -The registry and router are only able to run on node hosts with the -`region=infra` label. Ensure that at least one node host in your {product-title} -environment has the `region=infra` label. +If there is not a node in the [nodes] section that matches the selector settings, +the default router and registry will be deployed as failed with `Pending` status. ==== It is recommended for production environments that you maintain dedicated @@ -1148,17 +1427,12 @@ following to enable cluster metrics when using the advanced install: ---- [OSEv3:vars] -openshift_hosted_metrics_deploy=true <1> -openshift_hosted_metrics_deployer_prefix=registry.example.com:8888/openshift3/ <2> -openshift_hosted_metrics_deployer_version=v3.5 <3> +openshift_metrics_install_metrics=true ---- -<1> Enables the metrics deployment. -<2> Replace `registry.example.com:8888/openshift3/` with the prefix for the component images. -<3> Replace with the desired image version. The {product-title} web console uses the data coming from the Hawkular Metrics service to display its graphs. The metrics public URL can be set during cluster -installation using the `openshift_hosted_metrics_public_url` Ansible variable, +installation using the `openshift_metrics_hawkular_hostname` Ansible variable, which defaults to: `\https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics` @@ -1171,7 +1445,7 @@ If you alter this variable, ensure the host name is accessible via your router. The `openshift_metrics_cassandra_storage_type` variable must be set in order to use persistent storage for metrics. If `openshift_metrics_cassandra_storage_type` is not set, then cluster metrics data -is stored in an `EmptyDir` volume, which will be deleted when the Cassandra pod +is stored in an `emptyDir` volume, which will be deleted when the Cassandra pod terminates. 
There are three options for enabling cluster metrics storage when using the @@ -1189,12 +1463,12 @@ be *_/exports/metrics_*: ---- [OSEv3:vars] -openshift_hosted_metrics_storage_kind=nfs -openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce'] -openshift_hosted_metrics_storage_nfs_directory=/exports -openshift_hosted_metrics_storage_nfs_options='*(rw,root_squash)' -openshift_hosted_metrics_storage_volume_name=metrics -openshift_hosted_metrics_storage_volume_size=10Gi +openshift_metrics_storage_kind=nfs +openshift_metrics_storage_access_modes=['ReadWriteOnce'] +openshift_metrics_storage_nfs_directory=/exports +openshift_metrics_storage_nfs_options='*(rw,root_squash)' +openshift_metrics_storage_volume_name=metrics +openshift_metrics_storage_volume_size=10Gi ---- [discrete] @@ -1207,12 +1481,12 @@ To use an external NFS volume, one must already exist with a path of ---- [OSEv3:vars] -openshift_hosted_metrics_storage_kind=nfs -openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce'] -openshift_hosted_metrics_storage_host=nfs.example.com -openshift_hosted_metrics_storage_nfs_directory=/exports -openshift_hosted_metrics_storage_volume_name=metrics -openshift_hosted_metrics_storage_volume_size=10Gi +openshift_metrics_storage_kind=nfs +openshift_metrics_storage_access_modes=['ReadWriteOnce'] +openshift_metrics_storage_host=nfs.example.com +openshift_metrics_storage_nfs_directory=/exports +openshift_metrics_storage_volume_name=metrics +openshift_metrics_storage_volume_size=10Gi ---- The remote volume path using the following options would be @@ -1240,20 +1514,15 @@ following to enable cluster logging when using the advanced installation method: ---- [OSEv3:vars] -openshift_hosted_logging_deploy=true <1> -openshift_hosted_logging_deployer_prefix=registry.example.com:8888/openshift3/ <2> -openshift_hosted_logging_deployer_version=v3.5 <3> +openshift_logging_install_logging=true ---- -<1> Enables the logging stack. -<2> Replace `registry.example.com:8888/openshift3/` with your desired prefix. -<3> Replace with the desired image version. [[advanced-installation-logging-storage]] ==== Configuring Logging Storage -The `openshift_hosted_logging_storage_kind` variable must be set in order to use -persistent storage for logging. If `openshift_hosted_logging_storage_kind` is -not set, then cluster logging data is stored in an `EmptyDir` volume, which will +The `openshift_logging_storage_kind` variable must be set in order to use +persistent storage for logging. If `openshift_logging_storage_kind` is +not set, then cluster logging data is stored in an `emptyDir` volume, which will be deleted when the Elasticsearch pod terminates. There are three options for enabling cluster logging storage when using the @@ -1271,12 +1540,12 @@ the `[nfs]` host group. 
For example, the volume path using these options would b ---- [OSEv3:vars] -openshift_hosted_logging_storage_kind=nfs -openshift_hosted_logging_storage_access_modes=['ReadWriteOnce'] -openshift_hosted_logging_storage_nfs_directory=/exports -openshift_hosted_logging_storage_nfs_options='*(rw,root_squash)' -openshift_hosted_logging_storage_volume_name=logging -openshift_hosted_logging_storage_volume_size=10Gi +openshift_logging_storage_kind=nfs +openshift_logging_storage_access_modes=['ReadWriteOnce'] +openshift_logging_storage_nfs_directory=/exports +openshift_logging_storage_nfs_options='*(rw,root_squash)' +openshift_logging_storage_volume_name=logging +openshift_logging_storage_volume_size=10Gi ---- [discrete] @@ -1289,12 +1558,12 @@ To use an external NFS volume, one must already exist with a path of ---- [OSEv3:vars] -openshift_hosted_logging_storage_kind=nfs -openshift_hosted_logging_storage_access_modes=['ReadWriteOnce'] -openshift_hosted_logging_storage_host=nfs.example.com -openshift_hosted_logging_storage_nfs_directory=/exports -openshift_hosted_logging_storage_volume_name=logging -openshift_hosted_logging_storage_volume_size=10Gi +openshift_logging_storage_kind=nfs +openshift_logging_storage_access_modes=['ReadWriteOnce'] +openshift_logging_storage_host=nfs.example.com +openshift_logging_storage_nfs_directory=/exports +openshift_logging_storage_volume_name=logging +openshift_logging_storage_volume_size=10Gi ---- The remote volume path using the following options would be @@ -1310,9 +1579,218 @@ xref:../../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#i ---- [OSEv3:vars] -openshift_hosted_logging_storage_kind=dynamic +openshift_logging_storage_kind=dynamic +---- + +[[enabling-service-catalog]] +=== Enabling the Service Catalog + +[NOTE] +==== +Enabling the service catalog is a Technology Preview feature only. +ifdef::openshift-enterprise[] +Technology Preview features are not +supported with Red Hat production service level agreements (SLAs), might not be +functionally complete, and Red Hat does not recommend to use them for +production. These features provide early access to upcoming product features, +enabling customers to test functionality and provide feedback during the +development process. + +For more information on Red Hat Technology Preview features support scope, see +https://access.redhat.com/support/offerings/techpreview/. +endif::[] +==== + +Enabling the +xref:../../architecture/service_catalog/index.adoc#architecture-additional-concepts-service-catalog[service catalog] allows service brokers to be registered with the catalog. The web +console is also configured to enable an updated landing page for browsing the +catalog. + +To enable the service catalog, add the following in your inventory file's +`[OSEv3:vars]` section: + +---- +openshift_enable_service_catalog=true +ifdef::openshift-origin[] +openshift_service_catalog_image_prefix=openshift/origin- +openshift_service_catalog_image_version=latest +endif::[] ---- +When the service catalog is enabled, the web console shows the updated landing +page but still uses the normal image stream and template behavior. The Ansible +service broker is also enabled; see +xref:configuring-ansible-service-broker[Configuring the Ansible Service Broker] +for more details. The template service broker (TSB) is not deployed by default; +see xref:configuring-template-service-broker[Configuring the Template Service Broker] for more information. 
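Taken together, a minimal `[OSEv3:vars]` fragment that enables the catalog and names the projects the TSB will later serve might look like the following sketch; the project names are examples only:

----
[OSEv3:vars]
openshift_enable_service_catalog=true
# Source projects for the template service broker, which is deployed in a later manual step.
openshift_template_service_broker_namespaces=['openshift','myproject']
----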
+ +[[configuring-ansible-service-broker]] +=== Configuring the Ansible Service Broker + +[NOTE] +==== +Enabling the Ansible service broker is a Technology Preview feature only. +ifdef::openshift-enterprise[] +Technology Preview features are not +supported with Red Hat production service level agreements (SLAs), might not be +functionally complete, and Red Hat does not recommend to use them for +production. These features provide early access to upcoming product features, +enabling customers to test functionality and provide feedback during the +development process. + +For more information on Red Hat Technology Preview features support scope, see +https://access.redhat.com/support/offerings/techpreview/. +endif::[] +==== + +If you have xref:enabling-service-catalog[enabled the service catalog], the +xref:../../architecture/service_catalog/ansible_service_broker.adoc#arch-ansible-service-broker[Ansible service broker] (ASB) is also enabled. + +The ASB deploys its own etcd instance separate from the etcd used by the rest of +the {product-title} cluster. The ASB's etcd instance requires separate storage +using persistent volumes (PVs) to function. If no PV is available, etcd will +wait until the PV can be satisfied. The ASB application will enter a `CrashLoop` +state until its etcd instance is available. + +[NOTE] +==== +The following example shows usage of an NFS host to provide the required PVs, +but +xref:../../install_config/persistent_storage/index.adoc#install-config-persistent-storage-index[other persistent storage providers] can be used instead. +==== + +Some Ansible playbook bundles (APBs) may also require a PV for their own usage. +Two APBs are currently provided with {product-title} 3.6: MediaWiki and +PostgreSQL. Both of these require their own PV to deploy. + +To configure the ASB: + +. In your inventory file, add `nfs` to the `[OSEv3:children]` section to enable +the `[nfs]` group: ++ +---- +[OSEv3:children] +masters +nodes +nfs +---- + +. Add a `[nfs]` group section and add the host name for the system that will +be the NFS host: ++ +---- +[nfs] +master1.example.com +---- + +. In addition to the settings from xref:enabling-service-catalog[Enabling the +Service Catalog], add the following in the `[OSEv3:vars]` +section: ++ +---- +openshift_hosted_etcd_storage_kind=nfs +openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)" +openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd <1> +openshift_hosted_etcd_storage_volume_name=etcd-vol2 <1> +openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"] +openshift_hosted_etcd_storage_volume_size=1G +openshift_hosted_etcd_storage_labels={'storage': 'etcd'} + +ifdef::openshift-origin[] +ansible_service_broker_image_prefix=openshift/ +ansible_service_broker_registry_url="registry.access.redhat.com" +ansible_service_broker_registry_user= <2> +ansible_service_broker_registry_password= <2> +ansible_service_broker_registry_organization= <2> +endif::[] +---- +<1> An NFS volume will be created with path `/` on the +host within the `[nfs]` group. For example, the volume path using these options +would be *_/opt/osev3-etcd/etcd-vol2_*. +ifdef::openshift-origin[] +<2> Only required if `ansible_service_broker_registry_url` is set to a registry that +requires authentication for pulling APBs. +endif::[] ++ +These settings create a persistent volume that is attached to the ASB's etcd +instance during cluster installation. 
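Pulled together, the inventory pieces from the three steps above might look like the following sketch, reusing the same example host name, export path, and volume name (the service catalog settings from the previous section are assumed to be present as well):

----
[OSEv3:children]
masters
nodes
nfs

[nfs]
master1.example.com

[OSEv3:vars]
# NFS-backed persistent volume for the ASB's dedicated etcd instance.
openshift_hosted_etcd_storage_kind=nfs
openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd
openshift_hosted_etcd_storage_volume_name=etcd-vol2
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}
----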
+ +[[configuring-template-service-broker]] +=== Configuring the Template Service Broker + +[NOTE] +==== +Enabling the template service broker is a Technology Preview feature only. +ifdef::openshift-enterprise[] +Technology Preview features are not +supported with Red Hat production service level agreements (SLAs), might not be +functionally complete, and Red Hat does not recommend to use them for +production. These features provide early access to upcoming product features, +enabling customers to test functionality and provide feedback during the +development process. + +For more information on Red Hat Technology Preview features support scope, see +https://access.redhat.com/support/offerings/techpreview/. +endif::[] +==== + +If you have xref:enabling-service-catalog[enabled the service catalog], you can +also enable the +xref:../../architecture/service_catalog/template_service_broker.adoc#arch-template-service-broke[template service broker] (TSB). + +To configure the TSB: + +. One or more projects must be defined as the broker's source +namespace(s) for loading templates and image streams into the service catalog. +Set the desired projects by modifying the following in your inventory file's +`[OSEv3:vars]` section: ++ +---- +openshift_template_service_broker_namespaces=['openshift','myproject'] +---- + +. The installer currently does not automate installation of the TSB, so additional +steps must be run manually after the cluster installation has completed. +Continue with the rest of the preparation of your inventory file, then see +xref:running-the-advanced-installation[Running the Advanced Installation] for +the additional steps to deploy the TSB. + +[[configuring-web-console-customization]] +=== Configuring Web Console Customization + +The following Ansible variables set master configuration options for customizing +the web console. See +xref:../../install_config/web_console_customization.adoc#install-config-web-console-customization[Customizing the Web Console] for more details on these customization options. + +.Web Console Customization Variables +[options="header"] +|=== + +|Variable |Purpose + +|`openshift_master_logout_url` +|Sets `logoutURL` in the master configuration. See xref:../../install_config/web_console_customization.adoc#changing-the-logout-url[Changing the Logout URL] for details. Example value: `\http://example.com` + +|`openshift_master_extension_scripts` +|Sets `extensionScripts` in the master configuration. See xref:../../install_config/web_console_customization.adoc#loading-custom-scripts-and-stylesheets[Loading Extension Scripts and Stylesheets] for details. Example value: `['/path/to/script1.js','/path/to/script2.js']` + +|`openshift_master_extension_stylesheets` +|Sets `extensionStylesheets` in the master configuration. See xref:../../install_config/web_console_customization.adoc#loading-custom-scripts-and-stylesheets[Loading Extension Scripts and Stylesheets] for details. Example value: `['/path/to/stylesheet1.css','/path/to/stylesheet2.css']` + +|`openshift_master_extensions` +|Sets `extensions` in the master configuration. See xref:../../install_config/web_console_customization.adoc#serving-static-files[Serving Static Files] and xref:../../install_config/web_console_customization.adoc#customizing-the-about-page[Customizing the About Page] for details. Example value: `[{'name': 'images', 'sourceDirectory': '/path/to/my_images'}]` + +|`openshift_master_oauth_template` +|Sets the OAuth template in the master configuration. 
See xref:../../install_config/web_console_customization.adoc#customizing-the-login-page[Customizing the Login Page] for details. Example value: `['/path/to/login-template.html']` + +|`openshift_master_metrics_public_url` +|Sets `metricsPublicURL` in the master configuration. See xref:../../install_config/cluster_metrics.adoc#install-setting-the-metrics-public-url[Setting the Metrics Public URL] for details. Example value: `\https://hawkular-metrics.example.com/hawkular/metrics` + +|`openshift_master_logging_public_url` +|Sets `loggingPublicURL` in the master configuration. See xref:../../install_config/aggregate_logging.adoc#aggregate-logging-kibana[Kibana] for details. Example value: `\https://kibana.example.com` + +|=== + [[adv-install-example-inventory-files]] == Example Inventory Files @@ -1320,7 +1798,7 @@ openshift_hosted_logging_storage_kind=dynamic === Single Master Examples You can configure an environment with a single master and multiple nodes, and -either a single embedded *etcd* or multiple external *etcd* hosts. +either a single or multiple number of external *etcd* hosts. [NOTE] ==== @@ -1333,7 +1811,7 @@ not supported. ==== Single Master and Multiple Nodes The following table describes an example environment for a single -xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[master] (with embedded *etcd*) +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[master] (with *etcd* on the same host) and two xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#node[nodes]: @@ -1345,6 +1823,9 @@ xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc |*master.example.com* |Master and node +|*master.example.com* +|etcd + |*node1.example.com* .2+.^|Node @@ -1370,10 +1851,10 @@ ansible_ssh_user=root #ansible_become=true ifdef::openshift-enterprise[] -deployment_type=openshift-enterprise +openshift_deployment_type=openshift-enterprise endif::[] ifdef::openshift-origin[] -deployment_type=origin +openshift_deployment_type=origin endif::[] # uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider @@ -1383,6 +1864,10 @@ endif::[] [masters] master.example.com +# host group for etcd +[etcd] +master.example.com + # host group for nodes, includes region info [nodes] master.example.com @@ -1449,10 +1934,10 @@ etcd [OSEv3:vars] ansible_ssh_user=root ifdef::openshift-enterprise[] -deployment_type=openshift-enterprise +openshift_deployment_type=openshift-enterprise endif::[] ifdef::openshift-origin[] -deployment_type=origin +openshift_deployment_type=origin endif::[] # uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider @@ -1625,10 +2110,10 @@ lb [OSEv3:vars] ansible_ssh_user=root ifdef::openshift-enterprise[] -deployment_type=openshift-enterprise +openshift_deployment_type=openshift-enterprise endif::[] ifdef::openshift-origin[] -deployment_type=origin +openshift_deployment_type=origin endif::[] # Uncomment the following to enable htpasswd authentication; defaults to @@ -1731,7 +2216,7 @@ lb # Set variables common for all OSEv3 hosts [OSEv3:vars] ansible_ssh_user=root -deployment_type=openshift-enterprise +openshift_deployment_type=openshift-enterprise # Uncomment the following to enable htpasswd authentication; defaults to # DenyAllPasswordIdentityProvider. 
@@ -1820,57 +2305,210 @@ include::install_config/install/advanced_install.adoc[tag=syscontainers_techprev The ifdef::openshift-enterprise[] -3.5, +*openshift3/ose-ansible* endif::[] ifdef::openshift-origin[] -1.5, +*openshift/origin-ansible* endif::[] -the master now connects to etcd via IP address. -+ -When configuring a cluster to use proxy settings (see -xref:advanced-install-configuring-global-proxy[Configuring Global Proxy Options]), this change causes the master-to-etcd connection to be proxied as -well, rather than being excluded by host name in each host's `NO_PROXY` setting -(see -xref:../../install_config/http_proxies.adoc#install-config-http-proxies[Working with HTTP Proxies] for more about `NO_PROXY`). -+ -To workaround this issue, set the following: +image is a containerized version of the {product-title} installer that runs as a +link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_system_containers[system container]. System containers are stored and run outside of the traditional +*docker* service. Functionally, using the containerized installer is the same as +using the traditional RPM-based installer, except it is running in a +containerized environment instead of directly on the host. + +. Use the Docker CLI to pull the image locally: + ---- -openshift_no_proxy=https://: +ifdef::openshift-enterprise[] +$ docker pull registry.access.redhat.com/openshift3/ose-ansible:v3.6 +endif::[] +ifdef::openshift-origin[] +$ docker pull docker.io/openshift/origin-ansible:v3.6 +endif::[] ---- + +. The installer system container must be stored in +link:https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_ostree_content[OSTree] +instead of defaulting to *docker* daemon storage. Use the Atomic CLI to import +the installer image from the local *docker* engine to OSTree storage: + -Use the IP that the master will use to contact the etcd cluster as the -``. The `` should be `2379` if you are using standalone etcd -(clustered) or `4001` for embedded etcd (single master, non-clustered etcd). The -installer will be updated in a future release to handle this scenario -automatically during installation and upgrades -(link:https://bugzilla.redhat.com/show_bug.cgi?id=1466783[*BZ#1466783*]). -// end::BZ1466783-workaround-install[] - -. Run the advanced installation using the following playbook: +---- +$ atomic pull --storage ostree \ +ifdef::openshift-enterprise[] + docker:registry.access.redhat.com/openshift3/ose-ansible:v3.6 +endif::[] +ifdef::openshift-origin[] + docker:docker.io/openshift/origin-ansible:v3.6 +endif::[] +---- + +. Install the system container so it is set up as a systemd service: + ---- +$ atomic install --system \ + --storage=ostree \ + --name=openshift-installer \//<1> + --set INVENTORY_FILE=/path/to/inventory \//<2> ifdef::openshift-enterprise[] -# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml + docker:registry.access.redhat.com/openshift3/ose-ansible:v3.6 endif::[] ifdef::openshift-origin[] -# ansible-playbook ~/openshift-ansible/playbooks/byo/config.yml + docker:docker.io/openshift/origin-ansible:v3.6 endif::[] ---- +<1> Sets the name for the systemd service. +<2> Specify the location for your inventory file on your local workstation. + +. Use the `systemctl` command to start the installer service as you would any +other systemd service. 
This command initiates the cluster installation:
++
+----
+$ systemctl start openshift-installer
+----
+
 If for any reason the installation fails, before re-running the installer, see
 xref:installer-known-issues[Known Issues] to check for any specific
 instructions or workarounds.
 
-. After the installation succeeds, continue to
-xref:advanced-verifying-the-installation[Verifying the Installation].
+. After the installation completes, you can uninstall the system container. However, if you later need to run the installer again for any other playbooks, you must repeat this procedure.
++
+To uninstall the system container:
++
+----
+$ atomic uninstall openshift-installer
+----
+
+[[running-the-advanced-installation-system-container-other-playbooks]]
+==== Running Other Playbooks
+
+After the cluster installation completes, you can run other playbooks with the
+containerized installer (for example, cluster upgrade playbooks) by using the
+`PLAYBOOK_FILE` environment variable. The default value is
+`playbooks/byo/config.yml`, the main cluster installation playbook, but you can
+set it to the path of any other playbook inside the container.
+
+For example:
+
+----
+$ atomic install --system \
+    --storage=ostree \
+    --name=openshift-installer \
+    --set INVENTORY_FILE=/etc/ansible/hosts \
+    --set PLAYBOOK_FILE=playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.yml \//<1>
+ifdef::openshift-enterprise[]
+    docker:registry.access.redhat.com/openshift3/ose-ansible:v3.6
+endif::[]
+ifdef::openshift-origin[]
+    docker:docker.io/openshift/origin-ansible:v3.6
+endif::[]
+----
+<1> Set `PLAYBOOK_FILE` to the relative path of the playbook starting at the
+*_playbooks/_* directory. Playbooks mentioned elsewhere in {product-title}
+documentation assume use of the RPM-based installer, so use this relative path
+instead when using the containerized installer.
+
+[[running-the-advanced-installation-tsb]]
+=== Deploying the Template Service Broker
+
+If you have xref:enabling-service-catalog[enabled the service catalog] and want
+to deploy the xref:configuring-template-service-broker[template service broker]
+(TSB), perform the following manual steps after the cluster installation
+completes successfully:
+
+[NOTE]
+====
+The template service broker is a Technology Preview feature only.
+ifdef::openshift-enterprise[]
+Technology Preview features are not
+supported with Red Hat production service level agreements (SLAs), might not be
+functionally complete, and Red Hat does not recommend using them in
+production. These features provide early access to upcoming product features,
+enabling customers to test functionality and provide feedback during the
+development process.
+
+For more information on the support scope of Red Hat Technology Preview features, see
+https://access.redhat.com/support/offerings/techpreview/.
+endif::[]
+====
+
+[WARNING]
+====
+Enabling the TSB currently requires opening unauthenticated access to the
+cluster; this security issue will be resolved before exiting the Technology
+Preview phase.
+====
+
+. Ensure that one or more source projects for the TSB are defined via
+`openshift_template_service_broker_namespaces`, as described in
+xref:../../install_config/install/advanced_install.adoc#configuring-template-service-broker[Configuring the Template Service Broker].
+
+. 
Run the following command to enable unauthenticated access for the TSB: ++ +---- +$ oc adm policy add-cluster-role-to-group \ + system:openshift:templateservicebroker-client \ + system:unauthenticated system:authenticated +---- + +. Create a *_template-broker.yml_* file with the following contents: ++ +[source,yaml] +---- +apiVersion: servicecatalog.k8s.io/v1alpha1 +kind: Broker +metadata: + name: template-broker +spec: + url: https://kubernetes.default.svc:443/brokers/template.openshift.io +---- + +. Use the file to register the broker: ++ +---- +$ oc create -f template-broker.yml +---- + +. Enable the Technology Preview feature in the web console to use the TSB instead +of the standard `openshift` global library behavior. + +.. Save the following script to a file (for example, *_tech-preview.js_*): ++ +[source, javascript] +---- +window.OPENSHIFT_CONSTANTS.ENABLE_TECH_PREVIEW_FEATURE.template_service_broker = true; +---- + +.. Add the file to the master configuration file in +*_/etc/origin/master/master-config.yml_*: ++ +[source, yaml] +---- +assetConfig: + ... + extensionScripts: + - /path/to/tech-preview.js +---- + +.. Restart the master service: ++ +ifdef::openshift-origin[] +---- +# systemctl restart origin-master +---- +endif::[] +ifdef::openshift-enterprise[] +---- +# systemctl restart atomic-openshift-master +---- +endif::[] [[advanced-verifying-the-installation]] == Verifying the Installation +// tag::verifying-the-installation[] After the installation completes: -// tag::verifying-the-installation[] . Verify that the master is started and nodes are registered and reporting in *Ready* status. _On the master host_, run the following as root: @@ -1890,14 +2528,6 @@ and the web console port number to access the web console with a web browser. For example, for a master host with a host name of `master.openshift.com` and using the default port of `8443`, the web console would be found at `\https://master.openshift.com:8443/console`. -. Now that the install has been verified, run the following command on each master -and node host to add the *atomic-openshift* packages back to the list of yum -excludes on the host: -+ ----- -# atomic-openshift-excluder exclude ----- - // end::verifying-the-installation[] [NOTE] @@ -2013,10 +2643,10 @@ nodes <1> [OSEv3:vars] ansible_ssh_user=root ifdef::openshift-enterprise[] -deployment_type=openshift-enterprise +openshift_deployment_type=openshift-enterprise endif::[] ifdef::openshift-origin[] -deployment_type=origin +openshift_deployment_type=origin endif::[] [nodes] @@ -2071,8 +2701,8 @@ ifdef::openshift-origin[] xref:../../install_config/configuring_authentication.adoc#AllowAllPasswordIdentityProvider[Allow All]. endif::[] -- Deploy an xref:../registry/index.adoc#install-config-registry-overview[integrated Docker registry]. -- Deploy a xref:../router/index.adoc#install-config-router-overview[router]. +- Deploy an xref:../../install_config/registry/index.adoc#install-config-registry-overview[integrated Docker registry]. +- Deploy a xref:../../install_config/router/index.adoc#install-config-router-overview[router]. ifdef::openshift-origin[] - xref:../../install_config/imagestreams_templates.adoc#install-config-imagestreams-templates[Populate your {product-title} installation] with a useful set of Red Hat-provided image streams and templates. 
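+
+If you prefer a command-line sketch of the registry and router deployments
+(assuming you are logged in with cluster administrator privileges and accept
+the default options; the linked topics above describe the full, supported
+procedures):
+
+----
+$ oc adm registry
+$ oc adm router router --replicas=1 --service-account=router
+----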
diff --git a/release_notes/ose_3_1_release_notes.adoc b/release_notes/ose_3_1_release_notes.adoc index 63f378a85ee2..1d91304b745b 100644 --- a/release_notes/ose_3_1_release_notes.adoc +++ b/release_notes/ose_3_1_release_notes.adoc @@ -252,7 +252,7 @@ https://bugzilla.redhat.com/show_bug.cgi?id=1275388[BZ#1275388]:: Previously, so https://bugzilla.redhat.com/show_bug.cgi?id=1265187[BZ#1265187]:: When persistent volume claims (PVC) were created from a template, sometimes the same volume would be mounted to multiple PVCs. At the same time, the volume would show that only one PVC was being used. The issue has been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1279308[BZ#1279308]:: Previously, using a etcd storage location other than the default, as defined in the master configuration file, would result in an upgrade fail at the "generate etcd backup" stage. This issue has now been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1276599[BZ#1276599]:: Basic authentication passwords can now contain colons. -https://bugzilla.redhat.com/show_bug.cgi?id=1279744[BZ#1279744]:: Previously, giving `*EmptyDir*` volumes a different default permission setting and group ownership could affect deploying the *postgresql-92-rhel7* image. The issue has been fixed. +https://bugzilla.redhat.com/show_bug.cgi?id=1279744[BZ#1279744]:: Previously, giving `*emptyDir*` volumes a different default permission setting and group ownership could affect deploying the *postgresql-92-rhel7* image. The issue has been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1276395[BZ#1276395]:: Previously, an error could occur when trying to perform an HA install using Ansible, due to a problem with SRC files. The issue has been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1267733[BZ#1267733]:: When installing a etcd cluster with hosts with different network interfaces, the install would fail. The issue has been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1274239[BZ#1274239]:: Previously, when changing the default project region from *infra* to *primary*, old route and registry pods are stuck in the terminating stage and could not be deleted, meaning that new route and registry pods could not be deployed. The issue has been fixed. diff --git a/release_notes/ose_3_2_release_notes.adoc b/release_notes/ose_3_2_release_notes.adoc index 0458856c1b0c..bf3fd483a1c2 100644 --- a/release_notes/ose_3_2_release_notes.adoc +++ b/release_notes/ose_3_2_release_notes.adoc @@ -130,7 +130,7 @@ Authentication] for details. - The `SETUID` and `SETGID` capabilities have been added back to the *anyuid* SCC, which ensures that programs that start as root and then drop to a lower permission level will work by default. -- Quota support has been added for `*emptydir*`. When the quota is enabled on an +- Quota support has been added for `*emptyDir*`. When the quota is enabled on an XFS system, nodes will limit the amount of space any given project can use on a node to a fixed upper bound. The quota is tied to the `*FSGroup*` of the project. Administrators can control this value by editing the project directly diff --git a/using_images/other_images/jenkins.adoc b/using_images/other_images/jenkins.adoc index 1c67632214af..a771981e82ec 100644 --- a/using_images/other_images/jenkins.adoc +++ b/using_images/other_images/jenkins.adoc @@ -305,7 +305,7 @@ are already installed. $ oc new-app jenkins-persistent ---- -.. Or an `EmptyDir` type volume (where configuration does not persist across pod restarts): +.. 
Or an `emptyDir` type volume (where configuration does not persist across pod restarts): ---- $ oc new-app jenkins-ephemeral ----
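+
+For reference, an `emptyDir` volume is declared in the pod specification, not in
+the image. The following is a minimal, illustrative sketch only (the pod name,
+image, and mount path are assumptions, not values taken from the templates
+above):
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name: jenkins-scratch-example
+spec:
+  containers:
+  - name: jenkins
+    image: openshift3/jenkins-2-rhel7
+    volumeMounts:
+    - name: jenkins-data
+      mountPath: /var/lib/jenkins   # contents are lost when the pod is deleted
+  volumes:
+  - name: jenkins-data
+    emptyDir: {}                    # note the lowerCamelCase field name
+----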